
The UK government is reviving a controversial proposal to grant AI companies unrestricted access to copyrighted material for training their models. First introduced in 2022, the plan was paused in 2023 following backlash but has now resurfaced, with a public consultation concluding on February 25, 2025. The proposed change, which would effectively legalize the mass scraping of artistic works, has ignited fierce opposition from musicians, composers, and the broader creative industry. More than 1,000 artists, including Kate Bush, Annie Lennox, Damon Albarn, and Hans Zimmer, have released a silent album titled Is This What We Want? in protest at what they describe as the “legalization of music theft.”
Playlist of the silent album “Is This What We Want?”, released in protest against the UK copyright exception.
A Misleading Justification
The government insists that the current copyright framework is “holding back the creative industries, media, and AI sector from realizing their full potential,” claiming that the new approach will strike a balance between AI developers and rights holders. However, this justification rests on a false premise.
The AI industry thrives on access to copyrighted works for training, and major AI developers, including OpenAI CEO Sam Altman and other tech figures aligned with the Trump administration, like Elon Musk and Peter Thiel, have openly lobbied for looser regulations that would allow them to use protected content without proper licensing. This push aligns with broader lobbying efforts to reshape copyright law in a way that prioritizes AI companies over human creators.
In practice, this overwhelming dependence on copyrighted content used without consent constitutes a clear violation of intellectual property rights. Far from striking a balance, the proposed law is a blatant attempt to appease Big Tech firms at the expense of creative industries that contribute billions to the UK economy.
Adding to the controversy is the UK government’s misleading assertion that AI training on copyrighted content is already legally permitted in other countries, such as the US, Japan, and China. In reality, the legal status of AI training remains highly contested in the US, with more than 30 ongoing lawsuits challenging its legality, according to the Copyright Alliance.
Julia Garayo Willemyns, co-founder of UK Day One, a think tank positioning itself as a provider of ready-made policies to accelerate UK growth, claimed in an X thread that “US AI firms are already training on UK data under fair use (…). US courts are reviewing AI training under fair use. However, strong legal and strategic arguments suggest they will ultimately uphold it.” She also pointed to licensing deals between AI companies and rights holders in the US as evidence of a stable legal framework.
However, this assertion oversimplifies a complex and highly contextual legal principle. Fair use in the US is determined case by case against four statutory factors: the purpose and character of the use, the nature of the copyrighted work, the amount taken, and the effect on the market for the original. No single ruling can be universally applied, and when generative AI outputs directly compete with the copyrighted works they were trained on, American courts are far more likely to rule against fair use. The surge in AI training licensing deals in the US underscores this reality: these companies are not paying for copyrighted content out of goodwill; they are doing so because they recognize the significant legal risk of relying on a fair use defense in court.
Similarly, claims that artists can “opt out” of AI training ignore the impracticality of monitoring and enforcing such rights on a global scale. AI models scrape massive amounts of data from the internet, making it nearly impossible for individual creators to track where and how their work is being used. Unlike YouTube, which employs an automated Content ID system to detect copyright violations, AI training datasets have no comparable mechanism.
Even if an opt-out system were implemented, the burden would fall entirely on artists, requiring them to constantly monitor AI models and take legal action to have their works removed, an impossible task given the scale and secrecy of AI training datasets. AI companies do not disclose the content they use, meaning artists often have no way of knowing whether their work has already been ingested by a model. And once data has been used to train an AI system, it cannot be removed retroactively, effectively locking creators into a system in which their intellectual property is exploited without their consent. Ed Newton-Rex, the musician and artist advocate behind the silent album campaign, has put it bluntly: “Opt-out is an illusion.”
Following strong pushback from the creative industry, Technology Secretary Peter Kyle responded to the controversy, stating, “I will not have one side forcing me to make a choice.” Kyle emphasized his commitment to ensuring that those opposed to AI training have the ability to “opt out.” However, opt-out is not a safeguard; it is a forced compromise. It effectively compels creators to participate in AI training without their knowledge, allowing them to take action only after their work has already been used.
This flawed opt-out system is little more than a PR tactic, designed to give the illusion of choice while allowing AI companies to continue using copyrighted works without proper authorization. The only effective solution is a mandatory opt-in licensing system, ensuring that AI developers obtain explicit permission from creators before training on their works.
The Make It Fair campaign ran identical front pages across UK newspapers, alongside homepage takeovers.
Devastating Consequences for Artists
If passed, this exception would fundamentally undermine the economic viability of the creative industry. Generative AI tools would be free to ingest the works of musicians, writers, and visual artists, allowing companies to generate derivative content that competes with human-made art without any compensation to the original creators. The damage is twofold:
- Economic Harm: The UK music industry alone contributed a record £7.6 billion to the economy in 2023. This new policy threatens the financial sustainability of artists, particularly emerging musicians who rely on royalties to fund their careers. Without proper protections, AI-generated music could flood the market, devaluing human creativity and limiting revenue streams for real artists.
- Cultural Erosion: AI is incapable of truly understanding the depth of human creativity, emotion, and experience. By allowing AI models to replicate the works of living artists without their consent, the government is endorsing the dilution of authentic artistic expression. As composer Max Richter warned, these policies could “impoverish creators” across all fields, from literature to visual arts and beyond.
A Growing Backlash
The resistance from the artistic community has been swift and powerful. Alongside the silent album, the Make It Fair campaign has launched a nationwide movement, urging the public to write to their MPs and demand protection for artists’ intellectual property rights. The campaign has gained significant traction, with high-profile musicians speaking out and full-page ads in national newspapers emphasizing the threat to the UK’s creative industries. Even within political circles, opposition is mounting, with Conservative leader Kemi Badenoch openly criticizing the government’s plan and calling for a complete re-evaluation of the proposal.
Sir Paul McCartney, Ed Sheeran, Dua Lipa, Sting, and many others have signed open letters condemning the proposal. Simon Cowell called it “one of the biggest moments and decisions of our time,” warning that “AI shouldn’t be able to steal the talent of those humans who created the magic in the first place.”
The False Promises of AI
The push for this copyright exception is symptomatic of a broader trend: the overhyping of generative AI. Despite grand claims that AI will revolutionize every industry, the reality is far less impressive. As Edward Zitron has put it, “There is no AI revolution.” AI has not delivered major technological breakthroughs beyond statistical pattern replication, and its applications remain mostly superficial. Consumer adoption of generative AI tools outside of ChatGPT is minimal, and nearly all AI companies are struggling financially. Big Tech firms, including Microsoft, have already started pulling back AI investments because of the technology’s inability to generate sustainable revenue. OpenAI alone burned through $9 billion in 2024 while losing $5 billion, proof that AI is far from the profitable revolution it was hyped to be.
The reality is that AI does not “create” in the way humans do. It does not compose music with intent, paint with passion, or write literature with profound understanding. At best, AI is a glorified plagiarism machine, mimicking and remixing copyrighted works under the guise of “innovation.” The AI industry is not being held back by copyright laws; rather, it is being artificially sustained by misleading narratives and aggressive lobbying aimed at pressuring governments into granting AI companies access to human creativity for free.
As singer-songwriter Rebecca Ferguson put it: “The UK government doesn’t own musicians’ and writers’ intellectual property; it can’t just give it away. AI companies must get permission from rights holders, not rely on government policy to bypass copyright law. This is a private sector issue, not one for the government to dictate.” This is not a matter of stifling innovation; it is a matter of preserving the integrity of human artistry and ensuring that creators are fairly compensated for their work.
If this proposal passes, the UK will not become a leader in AI. Instead, it will become a cautionary tale of what happens when a government prioritizes corporate interests over its cultural heritage. The battle for copyright in the AI age is far from over, but one thing is clear: musicians, writers, and artists are ready to fight back, and they will not be silenced.