Sam Altman has a confession to make, though he’s threading the needle carefully enough that it barely registers as one.
Speaking at the India AI Impact Summit, the OpenAI CEO acknowledged what anyone paying attention has suspected for some time: companies are using artificial intelligence as cover for job cuts they were planning anyway. “There’s some AI washing,” Altman said, “where people are blaming AI for layoffs that they would otherwise do.”
It takes a particular kind of audacity to admit your product is being used as a scapegoat whilst simultaneously insisting it will eventually replace most human workers. But Altman is, if nothing else, a man who understands the rhetorical demands of his moment. He needs corporations to believe AI can replace people – that is literally the pitch to shareholders – but he also needs workers not to riot. So he offers the soothing addendum: “We’ll find new kinds of jobs, as we do with every tech revolution.”
He’s probably right on both counts. The problem is that the space between those two truths has been colonised by bad faith actors on multiple sides – corporations keen to rebrand redundancies as innovation, advocacy organisations more interested in generating outrage than achieving reform (Society of Authors, take a bow), and a media ecosystem that runs on fear.
The result is a public conversation about AI and employment so distorted by vested interests that the actual picture – messy, complicated, genuinely concerning in places, and strikingly positive in others – has become almost impossible to see.
This matters enormously for publishing, where the noise-to-signal ratio on AI has long been catastrophic. But it matters just as much for the broader economy – and for those of us who care about evidence-based thinking in a world increasingly allergic to it.
At which point, the Luddite Fringe had best go and bury their heads in the sand again. Time to deal with inconvenient facts, not fantasy-fuelled self-righteous indignation.
The Numbers Nobody Mentions
Let’s begin with the raw data, because the gap between the popular AI-jobs narrative and the actual statistics is extraordinary.
According to the delightfully named outplacement firm Challenger, Gray & Christmas, approximately 55,000 job losses in 2025 were attributed directly to AI. That sounds alarming until you learn it represents less than 1% of all job losses that year. Let me add here that DOGE-related actions were actually the leading cause of job cut announcements in 2025, responsible for 293,753 planned layoffs. Yes, we can attribute that to Elon Musk, but not to his AI.
A paper from the National Bureau of Economic Research found that 90% of executives surveyed said AI had had no impact on workplace employment over the previous three years. J.P. Morgan’s research found “little association between various measures of AI intensity and job growth outside of selected tech industries,” concluding that so far AI “has not been a major driver of the composition of employment gains.”
A net creation ratio of nearly ten to one.
Meanwhile, on the creation side of the ledger: the Information Technology and Innovation Foundation calculated that AI directly created approximately 119,900 jobs in the United States in 2024 alone – against roughly 12,700 losses attributed to the technology. That is a net creation ratio of nearly ten to one.
Data centre construction alone accounted for over 110,000 new positions, each large-scale facility requiring around 1,500 on-site workers for up to three years of construction. AI-related job postings in the US surpassed 1.2 million in the first half of 2025, up from 980,000 in the equivalent period the previous year.
PwC’s 2025 Global AI Jobs Barometer, which analysed close to a billion job advertisements across six continents, found that workers with AI skills command a wage premium of 25–30% over colleagues in identical roles without those skills – and that premium is rising.
None of this appears in the rants of the Society of Authors or the Publishers Association.
The Amazon Confession
The Altman admission finds its most perfect corporate illustration in Amazon’s behaviour last spring. The company cut 14,000 jobs whilst telling employees that AI implementation meant it would “need fewer people doing some of the jobs that are being done today.” By October, when the reputational damage had been assessed, the company was claiming AI wasn’t actually the reason for the cuts at all.
What happened in between was not a change of facts but a change of audience. To investors and the business press, AI-driven efficiency is a growth story: the company is innovating, automating, becoming leaner and more competitive. To workers and politicians, AI-driven job losses are a PR crisis. The solution is to deploy both narratives simultaneously in different rooms, hoping they never quite meet.
Companies are learning to speak in stories. AI becomes a rhetorical instrument rather than an explanation – invoked when it serves the speaker, disavowed when it doesn’t.
This is AI washing in its purest form. Workers lose their jobs for reasons rooted in economic cycles, post-pandemic correction, interest rate environments, political tariffs and strategic pivots. But “we’re restructuring in response to macroeconomic headwinds” is a dull headline. “AI is transforming our operations” is a story.
The Publishing Industry’s Own Version
Those who read my takedown of the Society of Authors’ “Brave New World?” report will recognise this pattern immediately, because certain elements of British publishing have been performing their own version of the Amazon trick for several years now.
The report’s headline statistics – 86% of authors claiming AI has reduced their earnings, 72% citing cut job opportunities, 57% saying their career is no longer sustainable – would be genuinely shocking were they not describing conditions that have existed in British publishing since approximately 2006. UK author incomes fell 60% in real terms between 2006 and 2022, from a median of roughly £12,000 to £7,000. The percentage of full-time professional authors dropped from 40% to 19% over the same period. This happened entirely in the years before ChatGPT existed as a public product.
What was blamed in the 2018 ALCS report, published four years before generative AI entered public consciousness? Amazon’s market dominance. Publisher consolidation. The shift to digital formats. Declining advances. Exploitative buy-out contracts. “Take it or leave it” terms from publishers who knew authors had nowhere else to go.
What is blamed now? AI.
The structural problems haven’t changed. The contracts haven’t changed. The retail monopolies haven’t changed. The “winner takes all” economics that see the top 10% of authors earn 47% of total income – those haven’t changed either.
A new villain.
What’s changed is the availability of a new villain, one that is technically complex enough to seem frightening, new enough to pre-empt scepticism, and associated with tech companies that already serve as convenient antagonists in publishing narratives.
The SoA’s sleight of hand – surveying authors about whether AI has affected their income in an environment saturated with AI panic, then treating confirmation bias as research – is a smaller version of what Amazon did. Both are using AI as the story that obscures the real story.
The Chimney Sweep Problem
There is, embedded in all of this, a genuinely difficult moral question that rarely gets asked with any rigour: when did we last lose sleep over the jobs that our current jobs replaced?
An earlier essay of mine, written in January 2023 and updated a year later, posed this question in deliberately provocative terms. You don’t own a quill. You’ve never ridden in a horse-drawn carriage to deliver a manuscript. You wash your dishes in a machine and heat your home with central heating and communicate by email and buy your books through an algorithm. Not once, if you’re a writer, have you thought: perhaps I should go back to the typewriter, to keep the typewriter repair industry viable.
The jobs that technology eliminated over the past two centuries – stable hands, coal merchants, candlestick makers, typesetters, film developers, telephone operators – were all somebody’s livelihood. They were all accompanied, at the time, by genuine human distress. We remember them now, if at all, as “casualties of progress,” a phrase that manages to be both accurate and entirely callous.
Yes, technological displacement causes real suffering.
The honest version of the AI employment debate would acknowledge this history directly: yes, technological displacement causes real suffering, particularly for people mid-career in disrupted industries who lack the resources or flexibility to retrain. That suffering deserves serious policy responses – portable benefits, retraining support, stronger social safety nets. What it does not deserve is the pretence that it isn’t happening, or that it can be reversed by demanding that AI companies stop what they’re doing.
What it also doesn’t deserve is the opposite pretence – that AI displacement is uniquely catastrophic, that this time is different, that the whole edifice of employment is about to collapse. History shows that automation tends to reallocate work rather than eliminating it outright. That is not a reason for complacency; it is a reason for precision.
What AI Is Actually Doing to the Economy
Here is the picture that almost never features in the advocacy literature, the newspaper headlines, or the political speeches: AI investment is currently acting as a significant macroeconomic stabiliser for the global economy, at a moment when that economy needs all the stabilisation it can get.
Business investment in AI and data centres was responsible for an outsized 30% of US GDP growth in Q2 2025 and 20% of economic expansion in Q1. The Federal Reserve Bank of St. Louis has noted that AI-related investment contributions to real GDP have surpassed those of IT components during the dot-com boom, both in levels and as a share of GDP. Corporate AI investment reached $252.3 billion in 2024, with private investment climbing 44.5% year-on-year – more than thirteen times the total investment of a decade earlier.
Data-centre construction alone created more jobs in the United States in 2024 than the entire traditional publishing sector employs.
IDC research suggests that every dollar invested in AI solutions generates an additional $4.90 in the broader economy – a multiplier effect comparable to the most productive infrastructure investments in modern economic history. AI-related capital expenditure contributed up to 1.3 percentage points of US GDP growth in Q2 2025, according to Bank of America Global Research.
Data-centre construction alone created more jobs in the United States in 2024 than the entire traditional publishing sector employs – not more than it lost, more than it employs in total.
Goldman Sachs projects AI could boost US productivity by 1.5% annually over the next decade, with measurable GDP impact beginning around 2027. The UAE has set a target of deriving roughly 14% of GDP from AI by 2030. India’s AI Mission has created 620,000 jobs and trained over 3.8 million people since its 2024 launch. The EU has mobilised €200 billion in AI investment through its InvestAI programme.
The countries that will prosper in this transition are, by and large, those that approach it with clear eyes about both the opportunities and the costs. Publishing, as a microcosm, is no different.
The Skills Premium and the Augmentation Story
Perhaps the most underreported dimension of the AI employment picture is what PwC’s global analysis calls the “augmentation” story: the evidence that AI is making many workers substantially more valuable, not less.
Revenue growth in AI-exposed industries has accelerated sharply since 2022, and the wage premium for AI skills – comparing workers in identical roles with and without those skills – has risen to around 30%, up from 25% the previous year. In the Autodesk AI Jobs Report, which analysed nearly three million job listings across design, engineering, and creative industries, the most striking finding was not that AI was eliminating roles but that design had overtaken technical expertise as the most in-demand skill in AI-specific job postings – with communication, leadership, and collaboration skills also in the top ten.
Human creative and interpretive capacities are not being made redundant; they are being revalued, for workers willing to develop them in conjunction with the new tools. This has direct implications for authors, translators, editors, and the other creative professionals who populate publishing’s workforce. The question is not “will AI replace me?” but “am I willing to understand what AI can and cannot do, and position my skills accordingly?”
The SoA, by spending its institutional energy demanding that AI companies voluntarily disclose training datasets – an ask so legally, commercially and practically naive it doesn’t merit analysis, just ridicule – is not helping its members make this adaptation. It is, instead, offering them an external enemy as a substitute for strategic thinking.
The Scapegoat Cycle
What we are watching, across industries but with particular intensity in publishing, is a scapegoat cycle that follows a predictable grammar.
- Phase one: a sector is already experiencing structural decline, driven by consolidation, changing consumer behaviour, digital disruption, or some combination of the three.
- Phase two: a new technology arrives that is genuinely disruptive but whose actual impact is much smaller and more diffuse than feared.
- Phase three: the existing decline is retrospectively attributed to the new technology, which becomes the official explanation for problems that long predate it.
- Phase four: advocacy organisations, unions, and sympathetic journalists amplify the new narrative because it generates engagement, fundraising, and political relevance.
- Phase five: the actual structural problems remain entirely unaddressed.
Publishing has run this cycle with chain bookstores in the 1990s, Amazon in the 2000s, ebooks in the early 2010s, Kindle Unlimited mid-decade, and Spotify audiobooks more recently.
In each case the predicted catastrophe either didn’t materialise or affected the industry rather differently than forecast. In each case the real problems – publisher consolidation, exploitative contracts, retail monopoly, the winner-takes-all dynamics of cultural markets – continued uninterrupted.
The difference with AI is that it is genuinely more capable than those previous disruptions, and will eventually have more significant effects on creative labour markets.
But “eventually” is doing enormous work in that sentence. As of mid-2025, fewer than 10% of firms in the overall economy indicated they were using AI regularly, rising to just over 20% in professional, scientific, and technical industries. The apocalypse, as always in publishing, is perpetually three to five years away. The publishing sky isn’t falling per se. It’s in orbital freefall, like a satellite in space.
The Question Nobody Asks
The argument for a more honest reckoning with AI and employment is not that displacement doesn’t happen, or that it doesn’t matter. It is that the current conversation is so dominated by bad faith – corporate AI washing on one side, Oscar-winning performative technophobia on the other – that the people who most need clear thinking are getting the least of it.
Those people are, in publishing’s case, the mid-list author whose income was already precarious before ChatGPT launched. The literary translator working in endangered languages. The illustrator who has genuinely seen their commercial commissions decline as clients experiment with generative imagery. These are real people with real problems that deserve real solutions.
Real solutions would involve the SoA negotiating contract terms that give authors a share of AI licensing revenue – revenue that is already flowing, in the hundreds of millions, mostly to publishers who, with bitter irony, have no contractual obligation to pass any of it on. Real solutions would involve transparency requirements so authors know when their work is licensed for AI training. Real solutions would involve challenging the buy-out contracts that allow publishers to monetise backlist indefinitely with no additional author compensation.
None of that is as gratuitously satisfying as a photograph of a letter being pushed through Meta’s letterbox. None of it generates the same headlines. But it would actually help.
The Global Picture
It is worth zooming out one final time to the macro level, because the AI employment debate in Britain and America is taking place in a very specific context that rarely gets named explicitly.
Both economies are navigating the aftermath of pandemic disruption, elevated interest rates, and post-globalisation supply chain restructuring – while in the US unprecedented DOGE antics, ICE actions and political tariffs take their toll, and in the UK the economic consequences of Brexit grind on. These are massive forces, compressing corporate margins, triggering restructuring across virtually every sector, and generating substantial redundancies that have nothing to do with artificial intelligence.
Into this environment arrives a technology that is genuinely significant, genuinely disruptive in the medium term, and – crucially – provides corporations with a narrative of forward-looking innovation that obscures what would otherwise look like straightforward cost-cutting under economic pressure. AI washing, in this context, is not merely a PR tactic; it is a symptom of deeper economic anxiety being displaced onto a more tractable target.
The industry that will thrive is the one that stops pretending the pre-AI status quo was acceptable – it wasn’t, for most of the people working in it – and starts building new models that distribute value more fairly in a transformed landscape.
Coda: What Altman Got Right
Sam Altman, threading his needle at the India AI Impact Summit, was performing a careful balancing act between constituencies with irreconcilable interests. But in the gap between his two statements – “some companies are blaming AI for layoffs they’d have made anyway” and “the real impact of AI on jobs will soon become palpable” – lies something true.
Both things are happening. The AI washing is real and it matters. The displacement is also real and it will grow. The new jobs are being created faster, at present, than the old ones are being lost. The economic investment is genuinely enormous and genuinely consequential. The wage premium for augmented workers is real and rising. And none of it absolves the Society of Authors, or its equivalents in other creative industries, from doing the hard, unglamorous work of addressing the structural problems their members actually face.
The convenient villain is always easier than the complicated truth. But the history of publishing suggests that decades of chasing convenient villains is how you end up with a median author income of £7,000 in 2022. A reminder: ChatGPT launched in November of that year.
That was never AI’s fault. And the solution was never going to come from a letter through a letterbox.
This post first appeared in the TNPS LinkedIn newsletter.