UK Government Delays AI Regulation Bill by a Year, Plans Comprehensive Legislation to Address Safety and Copyright Concerns
Efforts to regulate artificial intelligence in the UK have been pushed back by at least a year, with ministers now preparing a sweeping legislative package to address safety concerns, copyright issues, and the rapid growth of AI technologies.
According to The Guardian, Technology Secretary Peter Kyle intends to introduce a comprehensive AI bill in the next parliamentary session. The legislation will tackle a wide range of concerns, from the testing of large language models like ChatGPT to the use of copyrighted material in AI training. However, the bill will not be ready before the next King’s Speech – anticipated as late as May 2026 – prompting concern over delays in regulating one of the world’s most transformative technologies.
Labour initially planned to introduce a tightly focused AI bill within months of taking office, targeting the oversight of advanced AI systems. This early legislation would have required companies to submit their models for safety testing by the UK’s AI Safety Institute, amid growing fears that unregulated AI systems could pose existential threats.

However, the plan was quietly shelved. Ministers are now opting to align with policy shifts in the United States, particularly under the returning Trump administration. There is concern that premature UK regulation might deter AI companies from investing or operating within the country.
Instead, ministers envision a single, expansive bill that also includes new provisions on copyright – a hotly contested area in the AI space. The move comes amid escalating tensions with the creative sector over the government’s Data (Use and Access) Bill.
That bill currently allows AI companies to scrape copyrighted material for training purposes unless the content owner explicitly opts out. The provision has sparked an uproar from artists and rights holders, with figures like Elton John, Paul McCartney, and Kate Bush backing campaigns to challenge the measure.
Earlier this week, the House of Lords backed an amendment requiring AI developers to disclose whether copyrighted content was used in their models – a move aimed at upholding existing copyright protections. Despite the backlash, ministers have refused to amend the bill. While Kyle has publicly acknowledged shortcomings in the government’s handling of the issue, he maintains that the data bill is not the appropriate vehicle for regulating AI copyright practices.
In a letter to MPs over the weekend, Kyle committed to forming a cross-party working group to advise on AI and copyright. He reiterated that these issues would be more effectively addressed through the planned AI-specific legislation.
Public sentiment strongly supports tighter government oversight. A March survey conducted by the Ada Lovelace Institute and the Alan Turing Institute found that 88% of respondents believed the government should have the authority to halt AI products that pose serious risks. More than three-quarters agreed that AI safety should be governed by public institutions rather than left solely to private firms.
As the UK continues to position itself as a leader in AI development, the delay raises critical questions about how swiftly and effectively policymakers can respond to both innovation and risk. The government’s broader AI strategy now hinges on the next parliamentary session – and whether its long-promised “comprehensive” approach will deliver both public trust and industry clarity.