An Awkward Moment for the Anti-AI Lobby
The UK government’s announcement of formal partnerships with Meta and Anthropic this week represents far more than a routine technology procurement decision.
For the Publishers Association (PA), which has positioned itself at the forefront of opposition to meaningful AI development in the creative industries, this collaboration marks a significant strategic setback – one that fundamentally undermines the PA’s campaign to position the government against AI companies on copyright grounds.
The timing could scarcely be more uncomfortable. It is almost one year since PA CEO Dan Conway nailed his faded colours to the mast as a Govt. review closed, declaring in an impressive (credit where due!) soundbite that will long haunt him: “The great copyright heist cannot go unchallenged!”
Eleven months on, and the alleged great copyright heist remains decidedly unchallenged, and any hope that the UK Govt. might have sympathy for Conway’s position finally evaporated this week, leaving the PA’s campaign in tatters and Conway clutching desperately at soundbite straws.
Just as the PA has intensified its rhetoric portraying AI firms as digital pirates systematically plundering the intellectual property of authors and publishers, the Govt. has announced it will embed these very companies into the fabric of public service delivery.
The cognitive dissonance is palpable: how can AI companies simultaneously be characterised as existential threats to creative industries whilst being deemed suitable partners for delivering employment services, healthcare transformation, and other critical government functions?
The Delicious Irony: AI as Job Creator, Not Destroyer
Perhaps the most delicious aspect of this announcement – and the one that most thoroughly undermines years of apocalyptic warnings from the PA and its cohorts – is this specific detail: “The government is also collaborating with Anthropic to create AI assistants that support job seekers with career advice and finding employment.”
Let that sink in for a moment. The government is deploying AI specifically to help people find jobs. Not to eliminate employment. Not to render human workers obsolete. Not even to devour our fluffy kittens. But to assist those seeking work in navigating the modern labour market.
This represents such a comprehensive refutation of the “AI is stealing our jobs” narrative that one could almost suspect the Govt. communications team of deliberate and imaginative mischief, if “imaginative” can stand scrutiny in the same sentence as the usually beyond unimaginative Starmer Administration.
Here’s the thing: The PA has spent considerable time and resources stoking fears about AI-driven unemployment, particularly in creative professions. Yet here we have the Department for Work and Pensions – the very arm of government most attuned to employment challenges – choosing AI as a tool to enhance job-seeking outcomes. The implicit message is clear: the government’s expert analysis of AI’s impact on employment has led not to panic and prohibition, but to constructive deployment.
Had this been Meta alone, there might be room for concern. Meta’s track record on data privacy, content moderation, and corporate transparency leaves much to be desired.
So do its chatbot responses, as I explore in a recent TNPS op-ed.
But Anthropic? This is a company that has broadly built its brand on responsible AI development (caveats to follow), constitutional AI principles, and relatively transparent engagement with policymakers. The fact that Anthropic is the partner specifically chosen for employment services speaks volumes about the UK Govt.’s assessment of which AI companies can be trusted with sensitive public-facing applications.
The Strategic Implications: Partnership as Policy Signal
Governments do not casually enter into partnerships of this nature. The due diligence required for any technology provider working with public services is substantial; when that technology involves AI systems that will interact with vulnerable jobseekers, the scrutiny intensifies considerably. Legal teams will have examined contractual arrangements. Privacy officers will have assessed data protection implications. Policy advisers will have considered the political risks.
And yet, despite knowing full well that both Meta and Anthropic face copyright litigation in the United States, despite being fully aware of the PA’s campaign and its demands for strict regulatory intervention, despite understanding that this partnership would generate precisely the criticism that Dan Conway has duly delivered – the government proceeded.
This is not naivety. This is not oversight. This is policy signalling of the most deliberate kind.
The government is effectively communicating that it has evaluated the copyright concerns raised by publishers, weighed them against the transformational potential of AI for public services, and concluded that partnership with AI companies represents the appropriate path forward.
Reality check: One does not invest in training AI fellows through the Alan Turing Institute, or pilot AI-powered employment services, whilst simultaneously planning to adopt regulatory positions that would cripple these same companies’ ability to operate.
The Legal Landscape: Reality and the PA’s Version
The PA’s response attempts to characterise both companies as already-guilty parties: “Anthropic has agreed to pay out billions of dollars in the US in compensation to publishers and authors for its use of pirated work in the training of its AI models.”
Sorry, Dan, but this framing, whilst technically referring to a settlement, deliberately obscures the crucial legal context.
Settlement is not an admission of liability. More importantly, settlement in one jurisdiction does not establish precedent in another. The UK government’s legal advisers will be acutely aware, just as you are, Dan, that copyright law varies significantly between the United States and the United Kingdom, particularly regarding concepts like fair use (US) versus fair dealing (UK), and that the outcome of American litigation provides limited guidance for British legal outcomes.
Moreover, the Govt. will have access to legal analysis far more sophisticated than any lobbying organisation can provide. His Majesty’s Treasury Solicitor’s Department will have examined the copyright implications exhaustively. The Intellectual Property Office will have been consulted. Competition and Markets Authority perspectives will have been considered.
Particularly – and this is the crux of the matter – regarding any regulations that might disadvantage UK AI development relative to international competitors.
The fact that partnership proceeded makes clear these expert analyses concluded that the legal risks are manageable – or, more significantly, that the government does not anticipate regulatory or judicial outcomes that would fundamentally impede AI training practices. One does not build public services infrastructure on a legal foundation one expects to crumble.
The Political Calculation: Choosing Growth Over Protectionism
The UK Govt. faces a stark choice in its AI policy: it can attempt to protect incumbent industries through restrictive copyright interpretations, or it can position the UK as a global leader in AI development and deployment. It cannot do both.
The PA’s preferred approach – strict liability for any training on copyrighted works, mandatory licensing schemes, extensive restrictions on AI training datasets – would effectively kill the UK AI sector. The sound of champagne corks popping at the PA and SoA offices would not alter the reality that Anthropic and Meta would simply focus their innovation efforts on more welcoming jurisdictions. British AI startups would relocate to avoid legal uncertainty. The UK would become an AI importer rather than an AI developer, with all the economic and strategic implications that entails.
The Govt.’s partnership announcement is a clear two-fingered salute to the PA: We have chosen growth over protectionism.
This aligns with multiple policy signals from the current administration: the emphasis on the AI Opportunities Action Plan, the investment in AI research through UKRI, the commitment to positioning the UK as an “AI superpower” (admittedly aspirational rather than likely). These are not the actions of a government planning to adopt the PA’s preferred regulatory framework because it cares what Dan Conway thinks.
Furthermore, the political optics are significant. In an era where public services face substantial challenges – NHS waiting lists, criminal justice backlogs, benefit processing delays – the Govt. is under intense pressure to demonstrate that emerging technologies can deliver tangible improvements to citizens’ lives. Partnering with AI companies to address these challenges sends a powerful message: this government backs innovation, not preservation of legacy business models.
The PA’s Weakening Position: From Influence to Irritant
Sorry, Dan, but that sub-title, introduced after my final draft by Anthropic’s Claude, says it all. The PA’s response statement perfectly encapsulates an organisation that recognises its influence is waning but cannot quite adjust its rhetoric accordingly. The attempted pivot – “It is right that the UK Govt. seeks to utilise AI tools” – reads as grudging acceptance of an irreversible trend. The subsequent “but you have to hope” betrays an organisation reduced to hoping rather than shaping policy outcomes.
Dan, maybe it’s time to step down gracefully and let someone run the PA who can move with the times.
The characterisation of Anthropic and Meta as digital pirates grows less persuasive with each repetition. “Anthropic has agreed to pay out billions of dollars” distorts a settlement into an admission of wrongdoing. “Meta has publicly admitted to using pirate networks” takes statements about training data sources out of context – every major AI developer has used internet-sourced data, and characterising all publicly available internet archives as “pirate networks” reveals the rhetorical desperation underlying the PA’s position.
More fundamentally, the PA appears to wilfully misunderstand the nature of government decision-making. Dan Conway’s warning that “the closer our public services are entwined with AI companies, the stronger the case gets for our government to take a firm line on transparency and copyright” is plain stupid. It deliberately inverts the actual causation.
The closer public services become entwined with AI companies, the stronger the Govt.’s incentive to ensure these partnerships succeed – which means creating regulatory environments that enable AI development, not constrain it.
The Broader Context: Global AI Competition
The UK Govt.’s decision cannot be understood in isolation from international competitive dynamics. The United States has committed enormous resources to maintaining AI leadership. China views AI as a strategic priority essential to its economic and technological ambitions. The European Union, through the AI Act, has attempted to position itself as the global standard-setter for AI regulation – though many observers question whether this regulatory approach will foster or hinder European AI innovation.
Against this backdrop, the UK faces a choice: embrace AI development with policies that attract investment and talent, or adopt restrictive approaches that satisfy vocal domestic lobbies but leave the UK dependent on foreign AI capabilities. The Govt.’s partnership with Meta and Anthropic clearly signals which path it has chosen.
This has profound implications for the PA’s campaign. The organisation has premised its advocacy on the assumption that the UK would listen to the words of wisdom only Mr. Conway can deliver, and prioritise protection of short-term publishing industry interests over AI development. Oops! Dan, the Govt. ain’t listening!
The TDM Exception and Regulatory Trajectory
The UK’s text and data mining (TDM) exception for copyright, introduced in 2014, already provides a more permissive framework for AI training than many publishers would prefer. The exception allows copying of works for computational analysis, subject to certain conditions. Whilst the current exception contains restrictions that AI companies find limiting, the trajectory of UK policy has been towards enabling computational uses of copyrighted material, not constraining them.
The PA has lobbied extensively for the TDM exception to be narrowed or made subject to mandatory licensing. The Govt.’s AI partnerships show this lobbying has been unsuccessful.
Rather, there are strong arguments that the Govt.’s partnerships create additional pressure to clarify and potentially expand the TDM exception. If Anthropic’s AI assistants are to help jobseekers effectively, these systems need to be trained on comprehensive datasets that reflect the full landscape of employment opportunities, career guidance resources, and labour market information. Overly restrictive copyright interpretations would undermine the very public services the Govt. is attempting to deploy.
The Litigation Distraction: American Disputes and British Policy
The PA places considerable emphasis on ongoing US litigation, cherry-picking these legal disputes as evidence of AI companies’ fundamental illegitimacy. This strategy reveals a concerning misunderstanding of how policy-makers evaluate technology partnerships.
Litigation is a feature, not a bug, of disruptive technological change. When personal computers emerged, publishers sued over digital copying. When search engines developed, publishers sued over indexing and caching. When e-books arrived, publishers sued over digital lending. The existence of litigation does not indicate that a technology is unlawful or unsuitable for government use – it indicates that legacy industries are adapting to change through available legal mechanisms.
Moreover, US litigation outcomes provide limited guidance for UK policy. American copyright law differs substantially from British copyright law, particularly in the breadth of fair use defences and the nature of statutory damages. A settlement in California does not establish precedent in London, and government lawyers will be acutely aware of this distinction. Likewise the PA’s lawyers, but here Dan Conway cherry-picks what suits his needs.
The Innovation Imperative: Why the Govt. Cannot Side with Publishers
The most fundamental reason the Govt. undermines (and will continue to undermine) the PA’s campaign is economic and strategic necessity. The UK’s prosperity depends on its ability to participate in, and ideally lead (nothing wrong with dreaming!), the AI revolution. Publishing, whilst culturally important, represents a tiny fraction of UK economic output, no matter how much the PA wants us to believe otherwise. AI, conversely, is projected to contribute hundreds of billions to UK GDP over the coming decades.
No government can sacrifice this economic opportunity to protect the business models of a single industry, particularly when that industry’s demands would effectively exclude the UK from global AI leadership. The PA’s preferred regulatory framework – mandatory licensing for all training data, strict liability for any use of copyrighted works, extensive transparency requirements – would make the UK an inhospitable environment for AI development. Companies would relocate. Investment would flow elsewhere. The UK would become an AI consumer rather than an AI creator.
The Employment Services Masterstroke: Judo Politics in Action
Returning to that delicious detail about employment services, one can discern sophisticated political positioning. By choosing employment support as the initial public-facing deployment of AI in government services, the Starmer administration has performed a judo move on the PA’s “AI destroys jobs” narrative.
Critics of AI development face a rhetorical challenge: how does one argue that AI threatens employment whilst the Govt. uses AI specifically to help unemployed people find work? The contradiction is so stark that it inoculates the Govt. against broader job-displacement criticisms. If AI were truly an employment apocalypse, surely the Department for Work and Pensions would be resisting its deployment, not championing it?
This positioning is particularly effective because it’s difficult for AI critics to oppose without appearing callous. Arguing against AI-powered employment services means arguing against tools that might help vulnerable jobseekers navigate the labour market more effectively. For the PA to complain about Anthropic assisting unemployed people in finding work – because Anthropic trained its AI on copyrighted books – would reveal the organisation’s priorities with uncomfortable clarity.
The Transparency Trap: Conway’s Misguided Demand
Dan Conway’s call for the government to “take a firm line on transparency and copyright” attempts to bind together two separate issues in a manner that serves the PA’s interests whilst obscuring the actual policy trade-offs.
Transparency in AI systems is of course important, particularly when these systems make decisions affecting citizens’ access to public services. The Govt. should ensure that AI-powered employment services are auditable, that their recommendations can be explained, and that they do not perpetuate discrimination or bias.
But copyright transparency – the PA’s actual concern – is a separate matter. The PA wants AI companies to disclose their training datasets in granular detail, ostensibly for transparency but actually to enable copyright claims and to demonstrate that protected works were used in training. This demand conflates two distinct policy objectives: ensuring AI systems serve the public interest (genuine transparency concern) and enabling publishers to pursue copyright claims (commercial interest dressed as transparency concern).
The Path Not Taken: What a PA Victory Would Have Looked Like
To appreciate the significance of the Govt. partnerships as a defeat for the PA’s agenda, it’s worth considering what a PA victory would have looked like.
A government sympathetic to the PA’s position would have:
- Refused partnerships with AI companies facing copyright litigation
- Announced reviews of the TDM exception with consultation dominated by rights-holder concerns
- Signalled support for mandatory licensing schemes for AI training
- Emphasised the need to protect creative industries over AI development
- Commissioned reports emphasising AI’s threats to employment rather than deploying AI to address unemployment
None of this has occurred. Instead, the Govt. has embraced AI partnerships, positioned AI as a solution to public service challenges, and chosen companies that represent the cutting edge of AI development. This represents a comprehensive rejection of the PA’s preferred policy direction.
The International Dimension: Regulatory Arbitrage and the Race to the Top
The PA’s campaign implicitly assumes that the UK can adopt restrictive AI regulations without significant economic cost. This assumption ignores the reality of international competition and regulatory arbitrage.
Which is deeply worrying, given the PA is also fully aware that almost all of the so-called growth in British publishing is happening overseas, and that more than 60% of Britain’s publishing revenues come from exports, per the PA’s own statistics, which in different circumstances it proudly proclaims.
If the UK makes AI training legally hazardous through restrictive copyright interpretations, as Dan Conway champions, AI development will simply migrate to jurisdictions with clearer legal frameworks.
The United States, despite its litigation, maintains robust fair use doctrines that many legal scholars believe protect AI training. It also has AI deeply embedded in government interests, and should the US courts start turning on AI interests, an Executive Order is a pen-stroke away.
Singapore has explicitly adopted AI-friendly copyright exceptions. Even within Europe, individual member states interpret copyright directives differently, creating regulatory variation that AI companies can exploit.
The UK Govt. understands that it cannot afford to lose this regulatory competition. The partnerships with Meta and Anthropic are stakes in the ground: the UK is open for AI business, and our public services will demonstrate AI’s benefits to citizens. This is not a government planning to adopt PA-favoured regulations that would make these partnerships legally untenable.
The Evolving Legal Consensus: Courts Are Not Cooperating with Publishers
Whilst the PA emphasises ongoing litigation, it glosses over an inconvenient trend: courts in multiple jurisdictions are proving sceptical of copyright claims against AI companies. Early decisions in US cases have generally favoured AI defendants on key procedural and substantive questions. Copyright claims that initially appeared formidable have foundered on doctrinal problems.
This emerging legal consensus – that AI training likely falls within various copyright exceptions and limitations – has not escaped UK Govt. notice. Civil servants monitoring these cases will observe that publishers’ legal theories, whilst superficially appealing, face substantial doctrinal obstacles. The Govt. is not going to structure AI policy around legal theories that courts are declining to endorse.
The PA’s emphasis on settlements is telling: settlements avoid precedent-setting losses. A settlement is not a legal victory. It’s a compromise. If publishers possessed strong legal cases, they would pursue them to judgement and establish binding precedent. That they settle instead is a recognition that courtroom victories are uncertain – a recognition that Govt. lawyers undoubtedly share.
The Cultural Battle: Framing AI as Opportunity, Not Threat
Beyond the specific copyright disputes, the government’s partnerships represent a broader cultural positioning: AI as opportunity rather than threat. Much as it pains me to suggest the Starmer Govt. knows what it’s doing, this framing directly challenges the narrative the PA and its allies have promoted.
For two years, organisations like the PA (Society of Authors and fellows of the Luddite Fringe, take a bow) have amplified every concerning AI development: hallucinations, bias, misinformation, job displacement, copyright infringement, missing fluffy kittens… The cumulative message has been that AI represents a dangerous disruption requiring strong regulatory constraint.
The Govt.’s response – deploying AI to help jobseekers, training AI fellows in public service, partnering with leading AI companies – inverts this narrative. The message becomes: AI is a tool for improving citizens’ lives, enhancing public services, and addressing longstanding challenges. Sure, copyright concerns exist, but they are not the primary lens through which to evaluate AI’s societal role.
This reframing is absolutely devastating for the PA’s campaign. If government ministers are enthusiastically discussing AI’s potential to transform healthcare, education, and employment services, the PA’s insistence that AI training represents an existential crisis begins to sound parochial, self-interested, and disconnected from broader societal priorities.
The Voluntary Nature of Services: A Subtle Reassurance
An easily overlooked detail in the announcement deserves attention: “The technology, the use of which will be entirely optional.” This apparently minor caveat reveals sophisticated political management.
By emphasising that jobseekers can choose whether to use AI-powered services, the Govt. addresses privacy and autonomy concerns whilst simultaneously normalising AI deployment. Citizens who use these services and find them helpful become advocates for further AI integration. Those who remain sceptical can decline participation without harm. Over time, successful deployment creates political permission for expanded AI use.
For the PA, this gradualism is problematic. Each successful public service deployment strengthens the argument that AI benefits society, making copyright concerns appear increasingly like special pleading from an industry resisting inevitable change. The PA, like the SoA, desperately wants AI to be seen as dangerous and controversial; the Govt.’s approach methodically demonstrates the opposite.
The Alan Turing Institute: Institutional Endorsement
The involvement of the Alan Turing Institute – the UK’s national institute for data science and artificial intelligence – adds institutional credibility to the Govt.’s AI partnerships. The Institute, named after Britain’s most famous computer scientist, carries considerable prestige in technology and scientific circles.
By routing Meta’s investment through the Turing Institute to recruit AI fellows, the government associates AI development with scientific excellence and national heritage. This is not fly-by-night technology adoption; this is the UK building on its proud tradition of computational innovation.
For critics attempting to portray AI development as reckless or illegitimate, the Turing Institute’s involvement is problematic. One cannot easily characterise government-supported AI research as irresponsible when it flows through the nation’s premier data science institution. The PA’s warnings about “responsible AI deployment” ring hollow when deployment occurs through institutions specifically dedicated to responsible research.
The Luddite Fringe: Why This Label Fits The PA Perfectly
Few will be surprised to know I get objections from some quarters about my use of the term “Luddite Fringe” when referencing recent leadership of the PA and the SoA. But the PA’s stance perfectly encapsulates why the label is valid.
Historically, comparing technology sceptics to Luddites might have been seen as unfair. The original Luddites faced genuine economic displacement from mechanisation, and their concerns about industrial working conditions and wealth distribution were legitimate. Modern invocations of “Luddite” often caricature thoughtful critics as merely resistant to change.
However, the PA’s response to the Govt.’s partnerships earns this comparison in spades. The organisation acknowledges that AI tools can support public services, then immediately pivots to why this shouldn’t proceed due to copyright concerns. This is textbook Luddism: recognising that a technology offers benefits, but insisting it must not be deployed because it disrupts existing economic arrangements.
This position is untenable. It reveals an organisation that has lost perspective on how its interests relate to broader societal concerns, and on how AI can benefit the very industry it purports to want to protect.
The Momentum Question: Can the PA Recover from This Setback?
Strategic campaigns depend on momentum. For the PA, momentum has meant amplifying copyright concerns, securing sympathetic media coverage, building coalitions with other creative industries, and pressuring policy-makers to adopt restrictive AI regulations.
Sympathetic media coverage is easy. The “AI menace” narrative is clickbait gold, and Conway is the king of clickbait. Many industry journals depend on currying favour with organisations like the PA, which is why Dan Conway’s opinions are presented as news headlines while those of AI advocates are relegated to the opinion pages.
The Govt.’s newly announced partnerships disrupt this momentum decisively. The PA can continue arguing its positions, but it does so from a weaker position. The organisation is now opposing partnerships that the Govt. has publicly embraced, arguing against AI deployment that ministers are championing, and attempting to delegitimise companies that the state has selected as trusted partners.
This creates a political dynamic where the PA’s continued opposition begins to damage its credibility rather than advance its interests. If Anthropic’s employment services succeed, if Meta’s AI fellows generate valuable innovations, the PA will have been proven wrong in its warnings. Each public service improvement enabled by AI partnerships strengthens the government’s position and weakens the PA’s.
Recovery would require the PA to shift strategy dramatically: acknowledging AI’s benefits, proposing constructive frameworks for balancing innovation with legitimate rights-holder interests, and positioning publishers as partners in AI development rather than its opponents.
Whether the PA possesses the strategic flexibility for this pivot is doubtful. It certainly won’t happen while Dan Conway is chasing soundbites.
The Escalation Trap: Why Conway Should Stop the Soundbites And Try Thinking First
The PA faces an escalation trap: each aggressive statement criticising government AI partnerships generates media attention but weakens the organisation’s policy influence. Ministers and civil servants who read the PA’s responses see an organisation prioritising publishers’ commercial interests over public service improvements. This perception damages the PA’s credibility for future policy debates.
Strategic retreat would serve publishers better than continued confrontation. The PA could acknowledge AI’s benefits, express satisfaction with Anthropic’s publisher settlement, note the importance of responsible deployment, and offer to participate constructively in developing best practices. This repositioning would preserve influence for future debates.
Instead, by framing the partnerships as problematic and demanding firmer government action, the PA escalates conflict in a manner that highlights the organisation’s detachment from broader societal priorities. Each critical statement reinforces perceptions that publishers oppose technological progress benefiting citizens.
The Final Analysis: Defeat Disguised as Critique
Dan Conway’s statement, carefully parsed, reveals the PA’s strategic situation. The opening – “It is right that the UK government seeks to utilise AI tools to support better and more tailored public services” – constitutes de facto acceptance of AI deployment. The subsequent criticisms cannot overcome this concession. This is strategic defeat disguised as critical engagement.
For the Luddite Fringe, whose opposition to AI appears motivated more by protecting existing economic arrangements than by genuine concerns about technology’s societal impact, the path forward narrows considerably. The government has signalled clearly that it will not sacrifice AI opportunity to satisfy industries demanding protection from technological change.
The PA can (and no doubt will) continue its campaign, but from a dramatically weakened position.
The future of AI policy in the UK looks clear: partnership over prohibition, opportunity over protectionism, and innovation over the preservation of legacy business models.
The PA’s campaign sought to prevent this outcome. It has comprehensively failed.
This post first appeared in the TNPS LinkedIn newsletter.