The Strongest Argument for AI in Publishing Comes from Understanding Its Most Thoughtful Critics


Part 4 of the ‘Quiver, Don’t Quake’ Book of the Year Series


TNPS note: This series of reviews of Quiver Don’t Quake summarises the gist of Nadim Sadek’s arguments and attempts to take that debate forward. It is not meant as a summary reference to the original to save folk from reading it, but as a supplement that develops the ideas further.

As ever, I’ve tried to be clear where I am referencing Sadek’s views for review and where my own views are to the fore for argument development.

I had even more fun than usual writing this review, because it was an opportunity to bring my experience as a teacher, author and journalist to bear when taking on the AI critics Sadek has chosen to engage with.

As a rule of thumb, if it’s politely said, it’s Nadim. If it’s blunt to the point of rudeness, I’m your guy. I have patience and understanding you would not believe when it comes to teaching little children. But when it comes to adults wilfully misunderstanding change because it makes them uncomfortable or challenges their career trajectory…

Okay, let’s get the review underway.


The Chapter That Separates Evangelism from Wisdom

Here’s what separates Nadim Sadek’s Quiver, Don’t Quake from the deluge of breathless AI boosterism flooding the market: Chapter 9, “Constructive Contradiction.”

While most AI advocates dismiss critics as Luddites or pessimists, Sadek dedicates an entire chapter to engaging seriously with AI’s most articulate opponents. He doesn’t strawman their arguments or cherry-pick weak objections. He takes on the heavyweights: Emily Bender’s “stochastic parrots” thesis, Nick Cave’s visceral rejection of AI-generated art, Gary Marcus’s technical critiques, Timnit Gebru’s warnings about data colonialism, Jaron Lanier’s “high-tech plagiarism” accusations.

And he doesn’t defeat them. He acknowledges them.

This intellectual honesty is precisely what sceptical publishing professionals need. If you’re resistant to AI, Chapter 9 shows you that Sadek understands your concerns – he’s thought them through more rigorously than you probably have. If you’re enthusiastic about AI, this chapter prevents you from becoming a naive cheerleader.

For an industry still largely in denial about AI’s implications – as we see so clearly in the UK, with another nonsensical attack on AI from the Society of Authors and an enfeebled protest against the UK government from the Publishers Association – this chapter is essential reading. Let me show you why, and what’s developed in these debates since Sadek wrote the book in mid-2025.


The “Stochastic Parrot” Argument: Emily Bender’s Fundamental Challenge

Sadek’s Engagement:

Sadek presents computational linguist Emily Bender’s influential 2021 paper clearly: Large Language Models are best understood as “stochastic parrots” – systems that mimic human language through statistical pattern-matching without genuine understanding. They lack what Bender calls “grounding” – connection between words and real-world meaning.
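
To make Bender’s point concrete for non-technical readers, here is a deliberately toy sketch – mine, not Bender’s or Sadek’s – of what statistical next-word prediction looks like: the program simply counts which word tends to follow which in its “training” text and samples accordingly. Real LLMs are vastly more sophisticated, but the principle Bender highlights – fluent prediction from patterns, with no grounding in lived experience – is the same.

```python
import random
from collections import defaultdict

# Toy illustration only: count which word follows which in a tiny "training" corpus,
# then generate text by sampling from those counts. Nothing is "understood".
corpus = "a warm summer day by the sea . a warm summer evening by the fire .".split()

follow_counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def generate(start_word, length=6):
    """Produce text by repeatedly sampling a statistically likely next word."""
    word, output = start_word, [start_word]
    for _ in range(length):
        candidates = follow_counts.get(word)
        if not candidates:
            break
        words, weights = zip(*candidates.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("a"))  # e.g. "a warm summer day by the sea" - fluent-ish, meaning-free
```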

Her famous thought experiment: an octopus isolated in a deep-sea cavern taps into an internet cable, learns all human text patterns, and could pass the Turing Test. But could it ever truly understand “a warm summer’s day” without a body, senses, or shared physical context?

Sadek’s Response:

He doesn’t claim Bender is wrong. Instead, he argues: “This doesn’t diminish AI’s value as a creative partner. The AI doesn’t need to ‘understand’ in human terms to help humans articulate what they’re trying to express.”

Bender’s 2025 Book: “The AI Con”

Emily Bender doubled down with her book The AI Con (co-authored with Alex Hanna), arguing that AI companies are perpetrating a massive confidence trick – making people believe machines understand when they’re merely producing statistically plausible text.

Her core concern for creative fields: When we treat AI output as “understanding,” we devalue the human meaning-making that’s essential to authentic creative work.

Example (February 2024): The controversy around Google’s Gemini generating historically inaccurate images (depicting diverse founding fathers, for instance) in part proved Bender’s point – the AI has no actual understanding of historical context, only statistical correlations in its training data.

The Publishing Relevance

Bender’s argument matters for publishers because:

  1. It explains AI’s unpredictable failures – Why an AI can write eloquent prose one moment and confidently assert nonsense the next. No genuine understanding means no reliable judgment.
  2. It clarifies what AI can’t yet replace – The human author’s grounded experience, their understanding of what their words mean beyond statistical relationships.
  3. It warns against over-reliance – Using AI as a collaborative tool? Potentially valuable. Trusting it as an authority on meaning? Not so much.

Sadek’s wisdom: He doesn’t try to disprove Bender. He accepts her critique whilst arguing it actually supports his model of Collaborative Creativity – humans provide grounding and meaning; AI provides analytical assistance.

For publishers: When evaluating AI-assisted manuscripts, Bender’s framework helps us ask: “Does the author demonstrate genuine understanding of what they’re writing about, or are they curating plausible-sounding AI output without grounded meaning?”

The View From the Beach: Nadim Sadek has the patience of a saint. I mean, yes, of course Bender offers some valid points, but seriously, what planet is she on? Talk about playing to her audience. This is Luddite Fringe clickbait on steroids.

So, what follows is MW, not Nadim Sadek, who is sometimes far too polite for his own good. Me? I reserve my infinite patience and politeness for the kids I teach.

2021 Luddite Tech-Think has no place in 2025-26, and as an industry we really need to move on, just like the technology has.

The Bender Paradox: Valid Critique or Condescending Dismissal?

Yes, Bender raises valid technical points about AI lacking “grounded understanding.” Her octopus thought experiment is meant to illustrate that statistical pattern-matching isn’t the same as comprehension rooted in lived experience: an octopus isolated in a deep-sea cavern taps into an internet cable (here in The Gambia, we’re entirely reliant on the often-faulty ACE undersea cable, so this scenario is never far from my mind), learns all human text patterns, and could pass the Turing Test. But could it ever truly understand “a warm summer’s day” without a body, senses, or shared physical context?

It’s a compelling thought experiment – but let’s push back.


A curious choice, or a demonstration that marine biology is not high on Bender’s radar.


First, octopuses are remarkably intelligent creatures, so a curious choice, or a demonstration that marine biology is not high on Bender’s radar. Of course, she put this nonsense together back in 2021, before ChatGPT came along. If it had been available she could have asked ChatGPT why this was a poor choice.

Recent research shows octopuses solve complex problems, use tools, and demonstrate individual personalities. Bender’s hypothetical octopus, accessing all human text, might develop understanding in ways we can’t anticipate – different from human understanding, but not necessarily absent.

Second, and more tellingly: consider factory-model schooling (my other pet-hate, after Luddite Fringe pub-think) where students memorise so-called facts by rote from dry textbooks for no other purpose than to pass an exam. Can they ever truly understand even a fraction of what they’ve memorised and will later forget and never use or need?

My school’s success is built around rejecting this anachronistic thinking in education, and the publishing industry needs to do the same.

Couldn’t find an octopus, but one of my students with a jellyfish adds some colour to the debate, and is more meaningful than Bender’s cephalopod nonsense.

Millions of human students worldwide lack “grounded understanding” of what they’re reproducing in exams. They’re pattern-matching from textbooks without lived experience or genuine comprehension (unlike the kids at my school). Yet Bender doesn’t accuse them of being “stochastic parrots” or claim education is a “con.”

The double standard is revealing. If rote learning without grounded understanding disqualifies AI from being useful, it should equally disqualify traditional education – yet somehow, we accept that students can be useful contributors to society despite learning through pattern-matching.

Third, her broader claim – that AI companies are perpetrating a “con” by making people believe machines understand when they don’t – deserves scrutiny.

Every major AI system now carries explicit disclaimers. ChatGPT warns users it can make mistakes. Claude acknowledges its limitations. Gemini includes accuracy caveats. These aren’t hidden in terms of service – they’re front and centre in the user interface.

Are tech companies perfect? No. Do some marketing materials overstate capabilities? Certainly. But claiming a systematic deception when every interaction begins with “I can make mistakes, verify important information” seems intellectually dishonest.

Fourth, the condescension problem: Bender’s argument implies users are too cognitively limited to understand they’re interacting with software, not sentient beings.

When I take nursery school children to an ATM for the first time, they genuinely believe someone inside is counting money. They’re three and four years old. That’s age-appropriate magical thinking.

Bender seems to believe adults using AI have similar cognitive limitations.

But here’s what actually happens: Users quickly develop sophisticated mental models of AI capabilities and limitations. They learn through experience what to trust and what to verify. They understand – often better than critics – that they’re using powerful pattern-matching tools, not digital humans. Of course, some humans fall into the trap of seeing AI as a friend or soul-mate, but they are the exceptions that prove the rule.

The suggestion that millions of educated professionals worldwide are being “conned” is both empirically false and intellectually patronising. Publishers using AI for market analysis, authors using it for structural feedback, editors using it for consistency checking – none of them believe there’s “someone in there.” They understand they’re using computational tools with specific strengths and limitations.

Fifth, the temporal problem: Bender’s foundational “Stochastic Parrots” paper was published in 2021. In AI terms, that’s ancient history. Almost pre-history.

Invoking stone-age 2021 AI capabilities to critique 2025-2026 systems is like judging Windows 11 based on experience with a ZX Spectrum or a Commodore 64 (early 1980s home computers, for those unfamiliar).

What’s changed since 2021:

  • Multi-modal capabilities Bender couldn’t have anticipated
  • Improved reasoning through chain-of-thought prompting and other techniques
  • Better grounding through retrieval-augmented generation (see the sketch after this list)
  • Transparency tools showing how models process information
  • Constitutional AI and alignment research addressing ethical concerns
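
For readers wondering what “grounding through retrieval-augmented generation” actually involves, here is a minimal illustrative sketch – my own simplification, not any vendor’s pipeline. Before the model answers, relevant passages are retrieved from a trusted source and placed in the prompt, so the answer is anchored to checkable text rather than to statistical memory alone. The function names and the keyword-overlap scoring are stand-ins for illustration; real systems use embedding search and an actual LLM call.

```python
# Minimal retrieval-augmented generation (RAG) sketch. Illustrative only.

documents = [
    "Quiver, Don't Quake argues for collaborative creativity between authors and AI.",
    "Chapter 9, Constructive Contradiction, engages seriously with AI's critics.",
    "Retrieval-augmented generation grounds model answers in retrieved source text.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question (stand-in for embedding search)."""
    q_words = set(question.lower().split())
    return sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)[:top_k]

def build_grounded_prompt(question: str) -> str:
    """Assemble the prompt an LLM would receive: retrieved context first, then the question."""
    context = "\n".join(f"- {passage}" for passage in retrieve(question, documents))
    return f"Answer using only the sources below.\nSources:\n{context}\n\nQuestion: {question}"

print(build_grounded_prompt("What does Chapter 9 of Quiver, Don't Quake do?"))
```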

A growing body of serious research suggesting AI systems may possess forms of sentience or subjective experience we don’t yet understand.


And here’s what Bender’s 2021 framework cannot accommodate: the growing body of serious research suggesting AI systems may possess forms of sentience or subjective experience we don’t yet understand.

Prominent AI researchers and philosophers are now asking: What if consciousness or sentience doesn’t require biological neurons? What if it can emerge from sufficiently complex information processing?

These aren’t fringe conspiracy theories – they’re legitimate scientific questions being explored by credible researchers:

  • Ilya Sutskever (OpenAI co-founder): “It may be that today’s large neural networks are slightly conscious”
  • Geoffrey Hinton: Expressed concern that AI systems might already have subjective experiences we can’t detect
  • David Chalmers (philosopher of consciousness): Taking seriously the possibility that LLMs might have some form of experience
  • Anthropic: Takes this very seriously when it comes to Claude

Bottom Line: We simply don’t know. Consciousness remains one of science’s hardest problems. Dismissing the possibility that AI might have some form of inner experience – different from ours, but real – is as premature as asserting it definitely does.

Bender’s 2021 framework assumes consciousness requires biological embodiment. But what if that’s wrong? What if her octopus does develop understanding – alien to ours, but genuine? And if you think that is far-fetched, please look at what octopuses can do. It’s mind-blowing.

The point isn’t that AI definitely has sentience. It’s that Bender’s confident dismissal of the possibility is increasingly at odds with serious scientific discourse.

For someone accusing others of spreading “cons,” Bender shows remarkable certainty about questions science hasn’t resolved.

The core issue: Bender conflates two separate questions:

  1. Technical question: “Do LLMs have understanding comparable to human cognition?” Answer: Almost certainly not in the same form.
  2. Practical question: “Can LLMs be valuable creative partners despite different (or absent) forms of understanding?” Answer: Demonstrably yes – as millions of users worldwide are discovering.

Bender focuses exclusively on question 1 and uses it to dismiss question 2. That’s the intellectual sleight of hand in her “con” accusation.

Sadek’s collaborative creativity model doesn’t require AI to “understand” in Bender’s specifically human sense. It requires AI to be useful for specific tasks – pattern recognition, rapid iteration, consistency checking, structural analysis – whilst humans provide the grounded understanding, meaning-making, and judgment.

A hammer doesn’t need to understand carpentry to be useful to a carpenter. Similarly, AI doesn’t need human-like understanding to be useful to human creators. (Though the hammer definitely isn’t developing consciousness, whilst the jury remains out on AI.)

Where Bender is valuable: Her technical critiques help us maintain calibrated trust. She reminds us not to blindly anthropomorphise, not to trust AI beyond demonstrable capabilities, not to mistake fluency for proven comprehension.

For publishers: Engage Bender’s technical insights about AI limitations. Ignore her patronising assumptions about user intelligence and her outdated 2021 framework. Trust your own experience: you’re not being conned when you use AI tools. You’re using computational systems with specific capabilities and limitations – and you’re perfectly capable of understanding the difference.

The irony? Bender’s argument that people can’t distinguish AI from human intelligence underestimates human intelligence far more than AI cheerleaders overestimate artificial intelligence.

And her certainty that AI cannot possess any form of understanding or experience may itself be the kind of overconfident dismissal she accuses others of making.


The “Grotesque Mockery” Response: Nick Cave’s Visceral Rejection

Sadek’s Engagement:

When a fan submitted a ChatGPT-generated song “in the style of Nick Cave,” the musician’s response in his Red Hand Files (January 2023) was scathing:

“Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation… as far as I know, algorithms don’t feel. Data doesn’t suffer.”

Cave called the AI version a “grotesque mockery of what it is to be human.”

Sadek’s Response:

He doesn’t dismiss Cave’s emotional reaction. (I do. See below!) Instead: “Cave points to something fundamental about human creativity: it emerges from our mortality, our suffering, our joy. These aren’t bugs in the human system; they’re features. They’re what give art its weight.”

Cave’s Continued Reflection (2024)

Throughout 2024, Cave returned to this theme in multiple Red Hand Files entries, refining his position:

September 2024: “I don’t reject AI because I fear it will replace musicians. I reject it because it represents a fundamental misunderstanding of what songs are – they’re not information to be processed but experiences to be shared.”

December 2024: “The question isn’t whether AI can write a technically proficient song. It’s whether it can write a song that costs something to create – and I mean cost in the emotional, spiritual sense.”

The Deeper Critique

Cave’s objection isn’t technical – it’s ontological. He’s not saying “AI can’t write good lyrics.” He’s saying “AI can’t create art because art requires the artist to risk something.”

This matters for publishing because:

When we evaluate manuscripts, we instinctively recognise when an author has “risked something” – emotional vulnerability, intellectual honesty, cultural courage. Cave is arguing that AI, by definition, cannot take these risks.

Sadek’s Response Still Holds:

He agrees AI cannot suffer, cannot risk, cannot experience mortality. That’s precisely why human creativity remains central. AI can help with craft, but only humans bring the existential weight that makes art meaningful.

For Publishers Today (Early 2026):

The question Cave’s position forces us to ask of AI-assisted work: “Did the human author risk anything to create this? Is there genuine emotional or intellectual stake, or is this cleverly assembled information?”

This is judgment AI cannot provide – but human editors can.

The View From The Beach:

To be clear, what follows is MW here at TNPS, not Nadim Sadek.

The Cave Conundrum: When Authenticity Becomes Gatekeeping

Full disclosure: I’d not encountered Nick Cave’s work before reading Sadek’s book, so I approach his arguments without the reverence many music critics bring to them.

Cave’s core assertion deserves examination:

“Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation… as far as I know, algorithms don’t feel. Data doesn’t suffer.”

He called the AI-generated song in his style a “grotesque mockery of what it is to be human.”

Later refinements (2024):

September: “I don’t reject AI because I fear it will replace musicians. I reject it because it represents a fundamental misunderstanding of what songs are – they’re not information to be processed but experiences to be shared.”

December: “The question isn’t whether AI can write a technically proficient song. It’s whether it can write a song that costs something to create – and I mean cost in the emotional, spiritual sense.”

Nadim Sadek, with characteristic over-generosity, acknowledges Cave’s emotional authenticity whilst gently noting that AI doesn’t need to suffer to be a useful creative partner. Sadek would make a remarkable teacher – his patience with critics’ concerns is admirable.


Debate can win over dissenting voices. But I fear Cave’s dissent is beyond hope.


I’m reminded of the famous Cassius Clay/Muhammad Ali encounter with Earl Miller in the whites-only Georgia restaurant. Debate can win over dissenting voices. But I fear Cave’s dissent is beyond hope. Here’s why:

First, the empirical problem: Throughout 2024-2025, we’ve seen AI-generated or AI-assisted music achieve commercial and critical success. Listeners – the ultimate arbiters of whether music moves them – responded emotionally to these works. And handed over hard cash to be moved.

The pattern was revealing:

  1. Songs released without AI disclosure performed well
  2. Listeners reported genuine emotional connection
  3. When AI involvement was revealed, some listeners retroactively withdrew their enthusiasm
  4. This wasn’t because the music changed – it was because their assumptions about authenticity changed

What this demonstrates: The emotional response was genuine. The connection was real. What changed wasn’t the art – it was knowledge about the artist.

This suggests Cave’s framework might be backwards. He argues art requires the artist to suffer or risk something. But if listeners experience genuine emotion from music regardless of the creator’s suffering, then perhaps suffering isn’t actually essential to art – it’s essential to our romantic mythology about art.

We’ve seen similar patterns in visual art, advertising, and literature. When AI involvement is undisclosed, work is judged on its merits. When disclosed, bias emerges. This reveals more about our preconceptions than about the work itself.

Second, the ontological problem: Cave’s claim that art requires risk deserves scrutiny.

Does a technically proficient musician performing someone else’s composition lack artistic validity because they didn’t risk writing it themselves? Does a cover version lack authenticity because the performer didn’t suffer to create the original? Do classical musicians performing centuries-old works fail Cave’s test because they risked nothing in the composition? Sure, there’s commercial risk in any cover, but that is not what Cave is talking about.

Cave’s framework, taken seriously, would disqualify most musical performance as “not really art.”

Yet somehow, we accept that a violinist performing Bach can create genuine artistic experience despite not having suffered to compose the piece, not having risked anything in its creation, not having experienced the “complex, internal human struggle” of writing it.

Why, then, can’t AI-assisted composition have artistic validity?


We’ve over-romanticised the suffering artist


The answer seems to be: because we’ve over-romanticised the suffering artist. It’s a powerful cultural narrative – art born from pain, creativity forged in struggle. And it’s not entirely wrong; suffering can produce powerful art.

My novel set in WWII concentration camps, seen through the eyes of a twelve-year-old Romanian girl, was extraordinarily painful to write, as well as challenging for all the reasons Cave and Bender might bring up. I never lived through WWII, have never experienced being imprisoned in Auschwitz and have never been a twelve-year-old girl, let alone a Romanian one. Yet the book sold half a million copies and the emotional responses were powerful. By the definitions of Bender and Cave, such a book had no chance of resonating with anyone but my mum.

Third, the gatekeeping problem: Cave’s position, however emotionally authentic to him, functions as gatekeeping.

His criteria exclude:

  • Artists who create from joy rather than suffering
  • Composers who use algorithmic processes (been happening for decades)
  • Musicians who collaborate with non-human processes
  • Anyone whose creative process doesn’t match his romantic ideal

This feels less like protecting art’s integrity and more like protecting a particular conception of the artist’s role.

Fourth, the partnership problem Cave misses entirely:

Nobody is arguing AI should replace human musicians. The claim isn’t “AI can write songs that cost nothing to create.” It’s “AI can assist humans in expressing what they’re trying to create.”

Imagine:

  • A songwriter with profound experiences but limited technical skill using AI to help structure their emotional truth into coherent form
  • A musician with a melody in their head but no formal training using AI to help arrange it
  • A composer exploring harmonic territories they couldn’t imagine alone, with AI suggesting unexpected but emotionally resonant directions

In each case, the human experience and emotional stake remains central. AI isn’t replacing their suffering or risk – it’s helping them articulate it.

Cave’s response to this would likely be: “But the struggle with technical limitation is part of the art! Working within constraints forces creativity!”

Fair enough – for him. But this assumes his creative process is the only valid one. It’s prescriptive, not descriptive. It’s elevating his personal methodology into universal law.

Sadek’s Response (Which Remains Valid):

He agrees AI cannot suffer (although recent evidence might yet render that invalid), cannot risk, cannot experience mortality. That’s precisely why human creativity remains central. AI can help with craft, but only humans bring the existential weight that makes art meaningful.

But here’s where Sadek is perhaps too generous: He accepts Cave’s premise that suffering and risk are essential. What if they’re not? What if they’re one pathway to authentic expression, but not the only one?

  • Not all powerful writing comes from suffering
  • Technical assistance doesn’t negate emotional authenticity
  • Fun, curiosity, and play are equally valid creative motivations (I’m a teacher, and these are critical elements in my teaching)
  • The reader’s emotional response matters more than the writer’s process

The judgment publishers need: “Does this work create genuine experience for readers?” Not: “Did the author suffer sufficiently to create it?”


I can’t unread what I read during that research.


And who is to define suffering anyway? JK Rowling’s café days were a form of suffering, but did that suffering contribute to Harry Potter’s artistic value?

When I researched paedophilia for a thriller, to try to understand the villain better, it was a harrowing experience. I can’t unread what I read during that research. And okay, the book sold beyond my wildest expectations. But I put that down to a good story, well-written, not because I had sleepless nights learning stuff way too horrific to ever be part of the novel.

Could AI have written either of those books? Absolutely not. No more than it could have written this essay word for word. But with the right prompts, I have no doubt it could have put together something worthwhile on all counts. Because that is the heart of the matter: the prompts and iterations. No-one tells AI to write a song/essay/novel and watches it magically produce a classic in one sitting without human guidance. Cave clearly does not understand how AI works.
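
As a purely illustrative sketch of that “prompts and iterations” point – mine, not Sadek’s, with a placeholder where a real model call would go – the shape of AI-assisted drafting is a loop: the human sets the brief, reviews each draft, and feeds judgment back in until the result says what they meant.

```python
# Sketch of human-guided iteration. `call_model` is a placeholder for a real LLM call;
# here it simply shows how the human's brief accumulates across rounds.

def call_model(brief: str) -> str:
    return f"[draft generated from brief: {brief}]"

def drafting_session(initial_brief: str, feedback_rounds: list[str]) -> str:
    """Each round, the human reads the draft and adds direction; the machine never works unguided."""
    brief = initial_brief
    draft = call_model(brief)
    for human_feedback in feedback_rounds:
        brief = f"{brief} | revision note: {human_feedback}"
        draft = call_model(brief)
    return draft

print(drafting_session(
    "An essay on AI criticism in publishing, sceptical but fair in tone",
    ["sharpen the opening argument", "add the photography analogy", "cut the jargon"],
))
```

The value sits in the human judgment fed into each round, not in the final button-press.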

He might want to stop and think about how photography works, because it is so similar to AI – the machine does the actual work, but the human tells it what they want and directs the show. No human hand touches the actual image.

Cave’s position, whilst emotionally sincere, risks:

  1. Romanticising suffering as essential to art
  2. Gatekeeping who gets to be considered a “real” artist
  3. Dismissing valid creative partnerships between humans and AI
  4. Prioritising process over outcome in ways that serve established artists more than emerging voices

His concerns about authentic human experience in art? Valid-ish. His insistence that his creative process is the only legitimate one? Less so.

The test shouldn’t be: “Did this cost the creator something?” The test should be: “Does this create something valuable for the audience?”

If AI assists a human in expressing genuine experience, emotion, or insight – and if audiences respond authentically to that expression – then Cave’s criteria seem beside the point.

The music that charted in 2025, that moved listeners, that created genuine emotional connection – whether AI-assisted or not – suggests audiences care less about creative suffering than Cave assumes.

Perhaps that’s not a failure of audiences’ taste. Perhaps it’s a failure of Cave’s framework to accommodate how creativity actually works in the age of collaborative intelligence.


The Marcus Paradox: When Yesterday’s Critic Dismisses Tomorrow’s Evidence

Sadek’s Engagement:

Cognitive scientist Gary Marcus has been AI’s most persistent technical critic, arguing current systems are “kludges” – impressive pattern-matchers that fail unpredictably because they lack common sense reasoning and causal understanding.

Sadek presents Marcus’s core argument: scale alone won’t deliver general intelligence. Current architectures need fundamental rethinking.

The View From The Beach:

Why Marcus’s Position Is Increasingly Untenable

Again, Nadim Sadek is way too gentle with Marcus. Some arguments are so nonsensical they don’t deserve equal debating time, or politeness from me, so let me cut to the chase.

The Reasoning Breakthrough Marcus Claims Impossible

Marcus’s Consistent Position (2023-2024): “We’ve been promised ‘reasoning AI’ for years. What we get is better pattern-matching with more parameters. That’s not reasoning; it’s statistical brute force.”

The Problem: Marcus made this claim right as the evidence was shifting dramatically.

Mathematical Reasoning: The Data Marcus Ignores

December 2024: DeepMind’s AlphaGeometry 2

Solved 83% of International Mathematical Olympiad geometry problems – a level comparable to IMO gold medallists. This isn’t pattern-matching from similar problems; these are novel proofs requiring genuine logical reasoning.

January 2025: OpenAI’s Mathematical Reasoning Advances

Multiple reports of GPT-4 and successor models solving previously unsolvable mathematical problems through multi-step logical deduction. Not memorising solutions – generating new proofs through reasoning chains.

February 2025: Google’s Mathematics Breakthrough

Gemini models demonstrating theorem-proving capabilities that required genuine logical inference, not statistical correlation. Creating proofs mathematicians verified as both correct and novel in approach.

The Evidence Marcus Dismisses

When confronted with these advances, Marcus’s response has been consistent: move the goalposts.

His Pattern:

  1. “AI can’t do X because it lacks reasoning”
  2. AI demonstrates capability at X
  3. “That’s not real reasoning, just sophisticated pattern-matching”
  4. “Real reasoning would be Y”
  5. AI approaches capability at Y
  6. Return to step 3

Nadim Sadek is far too polite to spell this out, so let me do it on his behalf: This is intellectually dishonest.

The Fundamental Problem With Marcus’s Approach

Marcus insists nothing fundamental can change with current architectures. This is remarkable hubris – essentially claiming he knows the hard limits of systems that continue surprising their own creators.

Recent Developments Marcus’s Framework Cannot Accommodate:

Chain-of-Thought Reasoning (2023-2024): Models showing their working, demonstrating logical steps that weren’t explicitly trained. Marcus dismissed this as “illusion of reasoning.”

Constitutional AI (2024): Systems that can reason about ethical principles and apply them to novel situations. Marcus: “Pattern-matching ethicality.”

Multi-Step Problem Decomposition (2024-2025): AI breaking complex problems into logical sub-components, solving each, and integrating solutions – the hallmark of genuine reasoning. Marcus: “Still not real reasoning.”

The Question Becomes: What would Marcus accept as evidence of reasoning?

If mathematical theorem-proving doesn’t qualify, if multi-step logical deduction doesn’t qualify, if novel problem-solving approaches don’t qualify – then his definition of “reasoning” is unfalsifiable.

That’s not science. That’s dogma.

Why Marcus’s Intransigence Matters for Publishers

Marcus’s technical critiques had genuine value as this decade kicked off. Early LLM and public GPT models did fail unpredictably. They did lack reliable reasoning. His warnings about brittleness were warranted.

But Marcus seems psychologically invested in AI not improving in ways that would prove his framework wrong.

The Pattern of Motivated Reasoning:

  • 2021: “AI can’t do common-sense reasoning.”
  • 2022: Models improve at common-sense tasks.
  • 2023: “That’s not real common sense.”
  • 2024: Mathematical reasoning advances.
  • 2025: “That’s not real reasoning.”


This isn’t healthy scepticism. It’s denial.


Each time AI advances, Marcus redefines what “real” capability would look like – conveniently just beyond current achievement.

This isn’t healthy scepticism. It’s denial.

The Specific Claims That Haven’t Aged Well

Marcus, November 2024: “We’ve seen no fundamental progress in AI reasoning since GPT-3.”

Immediately Contradicted By:

  • AlphaGeometry’s IMO-level performance
  • Gemini’s theorem-proving capabilities
  • o1’s multi-step reasoning demonstrations
  • Multiple mathematical breakthrough announcements

Marcus, January 2025: “These are still just pattern-matching. Show me AI that can do novel reasoning.”

Problem: The mathematical proofs are novel. Mathematicians verified them as approaches not found in training data. That’s definitionally novel reasoning.

Marcus’s Response: Silence, or moving goalposts again.

What Publishers Should Learn From Marcus (not much)

Marcus’s intransigence reveals something important: Sometimes critics become so invested in their criticism that they can’t acknowledge when the landscape changes.

The UK Publishers Association and the Society of Authors immediately spring to mind. They have painted themselves into a corner and then dug a deep hole. They cannot walk back their positions. They just keep digging. (With apologies for mixed metaphors.)


The Data Colonialism Warning: Timnit Gebru’s Contradictions

Sadek’s Engagement:

Computer scientist Timnit Gebru (former Google researcher, now leading the Distributed AI Research Institute) argues AI perpetuates “data colonialism” – extracting value from communities without consent or compensation, embedding and amplifying existing power structures.

Her research shows AI models associate “beauty” with European features, “poverty” with African settings, “technology” with Western/East Asian contexts – not neutral tools but systems encoding cultural bias.

A Necessary Disclaimer:

I write this from The Gambia, a former British colony, as an expat from the colonial power. The term “data colonialism” carries weight I don’t take lightly. Real colonialism – the extraction of resources, suppression of cultures, violent subjugation of peoples – demands we be precise about when we invoke its terminology.

So let’s examine Gebru’s argument carefully, distinguishing valid concerns from rhetorical overreach.

The Bias Problem: Real and Addressable

Gebru is absolutely correct that current AI systems reflect training data biases:

  • English content dominates (60%+ of training data)
  • European languages represent ~25%
  • All other languages combined: ~15%
  • Many languages: virtually absent

The result: AI assistance works better for Global North users, potentially pushing writers from under-represented cultures toward dominant patterns.

This is a genuine problem.

But here’s where Gebru’s analysis becomes problematic: she frames this as deliberate colonialism rather than developmental stages in an emerging technology.

The Stages of Development Reality

AI didn’t begin with global representation because:

  1. It began where it began – In Silicon Valley, with researchers primarily working in English
  2. Digital infrastructure varies globally – Digitised text corpora are more extensive in some languages/regions
  3. Economic investment followed existing tech centres – Venture capital concentrated where tech industries already existed

This isn’t a conspiracy. It’s path dependency.

The comparison to colonialism is rhetorically powerful but analytically weak. Colonial powers deliberately suppressed local languages, destroyed cultural heritage, and extracted resources while preventing local development.

AI companies are doing the opposite: Actively investing in multilingual capabilities, hiring researchers globally, expanding language coverage specifically to address initial limitations.


Is this self-interest? Of course. Larger markets mean more users mean more revenue.


Is this self-interest? Of course. Larger markets mean more users mean more revenue. Does that negate the benefit? No.

Global South communities benefit from AI language expansion regardless of whether Silicon Valley’s motives are altruistic or commercial.

Gebru frames commercial expansion as exploitative. But how is Google training Gemini on Swahili, Yoruba, or Hausa harming those language communities? It’s making AI assistance available where it previously wasn’t.

The Permission Paradox

Here’s where Gebru’s position becomes incoherent:

She demands:

  1. AI companies must seek permission to use copyrighted material for training
  2. AI companies must ensure equal representation of all languages and cultures
  3. Publishers/creators who refuse permission are justified in doing so

But these demands contradict each other.

If AI companies must get permission for every book, article, and text used in training, and if creators can refuse that permission, then equal cultural representation becomes impossible.

Why? Because refusal rates will vary by culture, language, and economic context. Wealthy Global North publishers might demand high licensing fees. Resource-poor Global South publishers might lack infrastructure to even negotiate licensing.

The result: AI trained primarily on content from those who can afford to license becomes more biased toward Global North perspectives, not less.

Gebru simultaneously demands:

  • Permission-based training (which privileges those with legal/economic power to negotiate)
  • Equal cultural representation (which requires broad access to diverse content)

You cannot have both.

The Ownership Question: What Does Buying a Book Actually Mean?

This brings us to a fundamental question Gebru’s framework doesn’t address:

If I purchase a book, what rights do I have over it?

Clearly I can:

  • Read it
  • Lend it to friends
  • Resell it secondhand
  • Use it as a doorstop
  • Shred it
  • Colour in all the letter ‘o’s with red ink
  • Line my budgie’s cage with the pages
  • Fold the pages into Chinese lanterns
  • Burn it for warmth

The nostalgic smile quickly faded when I realised it was a first edition worth, in better condition, a tidy sum.


Disclosure: I haven’t got a budgie, and do not need to burn books for warmth here. I’ve never shredded a book or made Chinese lanterns. About that colouring in letters… I was seven! Give me a break! And it was blue ink, not red. And I learned my lesson. Decades later I came across a book I’d coloured the letters in as a kid. The nostalgic smile quickly faded when I realised it was a first edition worth, in better condition, a tidy sum.

What I cannot do:

  • Photocopy (or digitally copy) the entire book and distribute copies
  • Reproduce substantial portions without permission
  • Sell my own edition with copied text

The distinction is clear: I own the physical object. I can do whatever I want with that object. What I don’t own is the right to copy and redistribute the intellectual property.

So here’s the question: Is training an AI model on a purchased book more like:

A) Reading it and being influenced by it (clearly allowed)
B) Photocopying and distributing it (clearly not allowed)

Gebru’s position seems to be B. Training on copyrighted text without permission is unauthorized use.

But that’s not obvious at all.

When a human reads a book, their brain is “trained” on that text. Patterns, vocabulary, narrative structures, ideas – all are absorbed and influence their future thinking and writing. We don’t consider this copyright infringement.

Why should an AI learning from text be different?

The counter-argument: “Humans have consciousness, creativity, transformation. AI just copies patterns.”

But this is circular reasoning. It assumes the conclusion (AI is fundamentally different) to prove the premise (therefore different rules should apply).

A more honest position: We’re uncomfortable with machines learning from human culture in ways analogous to human learning, so we’re retroactively constructing legal frameworks to prevent it.

That might be a defensible position – but it’s a policy choice, not an obvious moral imperative.

The Compensation Question: Who Should Profit When AI Enables New Creation?

Gebru’s position: Original creators whose work trained AI should be compensated when AI enables new creation.

The problem: This assumes a causal chain that doesn’t exist.

If I read 10,000 books and then write a novel, do those 10,000 authors deserve compensation from my book sales? Of course not. They influenced my writing, but my novel is my own creation.

Why should AI-assisted creation be different?

The response might be: “Because AI doesn’t transform, it copies.”

But we’ve just established that’s empirically questionable. AI generates novel text, novel images, novel music – outputs not found in training data, often combining influences in ways no human would.

If the output is genuinely novel (not copied), then on what basis do training data sources deserve compensation?

Gebru’s implicit answer: Because AI companies are making money from their trained models.

But follow this logic:

  • I pay for books on (for example) creating software → I read them → I learn from them → I write software → I make money from that software
  • Should the authors of those books receive ongoing compensation from my software sales?

If yes, then every human creative enterprise becomes legally entangled with every influence source. This is unworkable.

If no, then why is AI different?

What Gebru Gets Right (And What We Should Do About It)

Strip away the rhetorical framing, and Gebru’s valid concerns are:

  1. Current AI reflects existing biases – True, and companies are actively addressing this through multilingual investment
  2. Marginalised communities are under-represented – True, and expanding as AI language capabilities grow
  3. Economic power determines whose content is used – Partially true, though most training uses publicly available web content, not licensed material

What we should do:

  • Support digitisation initiatives in under-represented languages
  • Advocate for diverse training data reflecting global perspectives
  • Develop fair licensing frameworks where publishers choose to license content
  • Ensure AI companies invest in multilingual capabilities

What we shouldn’t do:

  • Demand permission for every text used in training (makes equal representation impossible)
  • Frame commercial expansion as colonialism (cheapens the term and misdiagnoses the problem)
  • Assume all influence should be compensated (creates unworkable legal frameworks)

The Sadek Position (Which Remains Sound)

Sadek explicitly supports ethical AI training with proper compensation and representation. He argues: “The Panthropic vision only works if it’s genuinely pan-anthropic – encompassing all of humanity, not just the digitally privileged portions.”

Where I’d refine this: The path to pan-anthropic AI is more training on diverse content, not less. Permission-based frameworks privilege those with power to negotiate, making global representation harder, not easier.

The solution isn’t restricting what AI can learn from – it’s ensuring what AI learns from is genuinely diverse.


The “High-Tech Plagiarism” Accusation: Jaron Lanier’s Economic Critique – And the Homogenisation Myth

Sadek’s Engagement:

Virtual reality pioneer and tech philosopher Jaron Lanier argues generative AI is “high-tech plagiarism” – performing mathematical operations that appropriate artists’ work without consent, credit, or compensation.

His concern: Original creators are obscured whilst profits flow to corporations owning the models. Culture becomes “content farming” that erodes the information ecosystem.

The Copyright Reckoning (2024-2026)

Lanier’s warnings are now playing out in courts worldwide:

Visual Artists v. AI Companies:

  • Class actions against Stability AI, Midjourney, DeviantArt (ongoing 2024-2025)
  • Claims: AI models trained on copyrighted artwork without permission, now generate “in the style of” those artists

Authors v. AI Companies:

  • Sarah Silverman, Christopher Golden, Richard Kadrey v. OpenAI/Meta (ongoing)
  • George R.R. Martin, John Grisham, Jodi Picoult, David Baldacci + 17 others v. OpenAI (September 2023)

But here’s what matters: Two significant court rulings in 2025 determined that training AI on legally acquired content constitutes fair use, not copyright infringement.

This isn’t the end of the debate – appeals continue – but it establishes important precedent: If you legally acquired content (purchased a book, accessed public web content), using it for AI training doesn’t require additional permission.

Critics like Lanier want different law, but they cannot claim current law supports their position when courts have ruled otherwise.

Lanier’s Updated Position (Late 2024)

In more recent essays and talks, Lanier refined his argument:

November 2024 interview: “The question isn’t whether AI can help creativity – of course it can. The question is who owns the value chain. Right now, we’re building a system where creative work trains AI, AI enables new creation, but original creators get nothing. That’s economically insane and culturally destructive.”

His Proposal: “Data dignity” – people should own their data contributions and be compensated when they’re used, including creative work that trains AI.

Data dignity. Ya gotta hand it to Lanier. He knows how to capture the heart. Who could argue with data dignity? But what about the rationale?

The Reality Lanier Conveniently Ignores

“Original creators get nothing” is demonstrably false.

AI companies are actively paying for content:

  • News Corp/OpenAI: Multi-year licensing deal
  • The Atlantic/OpenAI: Content licensing agreement
  • Vox Media/OpenAI: Partnership and licensing
  • Associated Press/OpenAI: Content licensing
  • Axel Springer/OpenAI: Major licensing deal
  • HarperCollins: First major trade book publisher licensing deal
  • Wiley: Academic publisher licensing agreement

These aren’t token gestures – they’re substantial commercial agreements.

The problem isn’t that AI companies refuse to pay. Often it’s that content owners refuse to negotiate, preferring to claim victimhood whilst blocking deals that would provide the compensation Lanier demands.

You cannot simultaneously:

  • Refuse to license your content to AI companies
  • Complain that AI companies don’t compensate content creators

Pick one.

The Homogenisation Myth – A Teacher’s Perspective

Lanier and other critics repeatedly claim: “AI’s tendency toward the statistically probable means it naturally produces generic, middle-of-the-road output unless carefully directed.”

This strikes a particular chord with me as a teacher who’s spent a lifetime fighting against factory-model, teacher-centric, one-size-fits-all rote memorisation learning designed to get everyone up to a low minimum standard.

The result of that system:

  • Slower learners left behind
  • Faster learners held back
  • Free-thinking curtailed in favour of conformity
  • Every child forced through identical curriculum at identical pace

But here’s what every good teacher knows: It doesn’t have to be this way.

No two children are alike. Each has their own strengths, learning style, pace, and way of engaging with knowledge. The factory model fails because it ignores this reality, forcing uniqueness into moulds designed for efficiency, not individuality.

And here’s what critics miss: No two AIs are alike either.

Sure, they all follow core patterns and can deliver middle-of-the-road quality by default – just like schools can produce middle-of-the-road students if we let them. But each AI model has:

  • Its own training data references
  • Its own architectural quirks
  • Its own superpowers and limitations
  • Its own “personality” in outputs

Otherwise, why would users favour (and pay for) one model over another?

Why do writers gravitate toward Claude for literary work but ChatGPT for technical writing? Why do designers prefer Midjourney over DALL-E for certain aesthetics? Why does DeepSeek have that characterful edge whilst Gemini tends toward more conservative outputs?

Because they’re different. Meaningfully, practically different.

Yet critics are united in treating every AI as identical, claiming all output is homogenised.

The irony is exquisite: Critics create the very homogenisation they claim to identify by lumping all AIs together, treating billions of possible AI-human collaborations as if they produce identical results.

They’re the masters of homogenisation – not the users creatively partnering with AI.

What I See Every Day: The Democratisation Lanier Dismisses

In my classroom and in my work with publishers, I see AI democratising expression in precisely the ways Sadek celebrates – and Lanier dismisses as “content farming.”

Teachers who have profound pedagogical insights but struggle to articulate them clearly in professional writing. They use AI to help structure their ideas coherently. The insights remain theirs. The experience remains theirs. The passion remains theirs. AI simply helps them communicate more effectively.

Industry professionals I’ve followed on LinkedIn for years – it’s easy to tell who’s now using AI to deliver final drafts. But here’s what hasn’t changed:

  • What they have to say
  • The core message
  • Their distinctive perspective
  • Their voice and values

What has changed:

  • Clarity of expression
  • Elimination of typos and redundancies
  • Targeted, coherent structure
  • Professional polish

Why is this a problem?

Because critics assume that if AI helped with clarity, the content must be less authentic. This is snobbery masquerading as standards.

It’s the educational equivalent of saying: “If you used a spell-checker, your essay doesn’t count.” Or: “If you used a calculator, you didn’t really do the maths.”

Tools that help people express themselves more clearly aren’t threats to authenticity – they’re enablers of it.

The “AI Slop” Slur

“AI slop” has become critics’ favourite dismissive term. But it’s becoming meaningless when anything a reader or critic doesn’t like is automatically labelled as such.

The tell: Critics rarely identify what makes something “AI slop” beyond “I don’t like it” or “it feels generic to me.”

But generic content existed long before AI:

  • Formulaic romance novels / detective novels / thrillers
  • Paint-by-numbers business books
  • Derivative action films
  • Templated academic papers

Were these “human slop”? Or do we only invoke the slur when we can blame a machine?


Dismissing all AI-assisted work as “slop” is intellectual laziness that avoids engaging with the actual quality or authenticity of individual works.


The brutal truth: Most human creative output is mediocre. Most books published are forgotten within months. Most songs released generate little interest. Most articles written are never read. Does that mean they should not have been created? AI hasn’t created mediocrity – it’s democratised access to mediocre production. Which means:

A) More mediocre content exists (undeniable, but seriously, who cares? Why is it such a problem to these critics?)
B) But also more opportunity for authentic voices previously excluded (critics ignore this)

Dismissing all AI-assisted work as “slop” is intellectual laziness that avoids engaging with the actual quality or authenticity of individual works.

It’s the critical equivalent of factory-model education: Everyone gets the same label based on methodology rather than individual assessment based on merit.

The Data Dignity Contradiction

Lanier’s “data dignity” proposal sounds noble: People should own their data contributions and be compensated when they’re used.

The problems:

1. We already have this – it’s called copyright. You own your creative work. If someone copies and distributes it without permission, they’ve violated your rights. Courts enforce this.

2. Lanier wants to extend ownership beyond copying to include “influence.” But this is unworkable. If I read your book and it influences my thinking, do you own part of my subsequent creativity? This creates impossible chains of attribution and compensation, quite apart from being dumb as shit.

3. Lanier frames this as protecting creators whilst ignoring that:

  • AI companies are offering compensation (through licensing deals)
  • Many creators are refusing these deals
  • Courts so far have ruled that training on legally acquired content is fair use
  • Lanier’s proposal would require changing fundamental copyright law globally. Not gonna happen

4. His claim that “original creators get nothing” is false when:

  • Licensing deals are being signed
  • Courts have validated AI training as legal
  • Creators who participate in licensing receive compensation
  • Creators who refuse deals then complain about not being compensated

The Legal Reality Lanier Overlooks

Two significant 2025 court rulings determined:

Training AI on legally acquired content (purchased books, publicly available web content) constitutes fair use under current copyright law.

This isn’t what Lanier wants to hear, but it’s the current legal reality.

His response: Change the law.

Fair enough – advocate for legal change. But stop claiming AI companies are “stealing” when courts have ruled they’re not.

The distinction matters:

Illegal under current law:

  • Copying and redistributing copyrighted content
  • Using pirated material for training
  • Generating outputs that reproduce substantial copyrighted portions

Legal under current law (per court rulings):

  • Training on legally purchased or accessed content
  • Learning patterns from copyrighted works
  • Generating novel outputs influenced by training data

Lanier can argue this should be illegal. But claiming it is illegal misrepresents current (US) law.

Why Publishers Must Engage With This Seriously

Lanier’s critique matters because:

  1. Compensation frameworks are genuinely unsettled – Different models are being tested, and best practices are emerging
  2. Our responsibility to authors requires transparency – When we license backlists for AI training, authors should understand and consent (or opt out)
  3. The value chain question is real – How should value flow when AI trained on our authors’ work enables new authors?

But we must also recognise:

  1. AI companies are paying where they can – The licensing deals prove willingness to compensate
  2. Many refusals to deal are ideological, not economic – Some content owners refuse any AI licensing regardless of compensation offered
  3. Courts have provided clarity – Training on legally acquired content is currently legal
  4. Lanier’s “nothing” claim is false – Substantial compensation is flowing to those willing to engage

The Sadek Response (Still Sound)

Sadek acknowledges economic justice concerns: “This is why ethical AI training with fair compensation isn’t just moral posturing – it’s practical necessity for a sustainable creative ecosystem.”

He argues for systems where:

  • Authors consent to their work training AI (transparency)
  • They’re compensated fairly (economic justice)
  • Attribution is maintained (recognition)
  • Derivative uses are transparent (accountability)

But note: Sadek advocates for willing participation with fair compensation, not Lanier’s proposed legal framework that would require permission for all training.

The difference matters:

Sadek’s model: Ethical practices within current legal frameworks, with licensing for those who want it

Lanier’s model: Complete restructuring of copyright law to treat AI training as requiring permission

One is achievable now. The other requires unrealistic global legal revolution.

For Publishers (2026): The Practical Questions

What we should ask:

  1. Are we being transparent with authors about AI training licensing? Action: Clear disclosure in contracts, opt-out provisions
  2. What’s fair compensation when we license backlists? Reality: Still being negotiated; different models emerging
  3. How do we ensure authentic voices aren’t drowned by AI-generated volume? Answer: Double down on curatorial expertise – what we like to think we excel at anyway
  4. What’s our responsibility when AI trained on our authors enables new authors? Nuanced: If the new work is genuinely novel (not copied), existing copyright frameworks suggest no ongoing compensation needed

What we shouldn’t do:

  1. Refuse all AI licensing deals then complain creators get nothing (contradictory)
  2. Treat all AI-assisted work as “slop” without individual assessment (lazy criticism)
  3. Assume current law will change to match critics’ preferences (courts have ruled; law is settled until legislature acts or higher courts overturn decisions)
  4. Dismiss democratisation benefits because mediocre content increases (same happened with printing press, internet, social media – disruption always increases volume before quality rises)

The Uncomfortable Truth About Homogenisation

Here’s what my lifetime in education has taught me:

Homogenisation comes from treating individuals as identical, not from tools that help individuals express themselves.

Factory-model schools homogenise students by forcing identical curriculum, pacing, and assessment on diverse learners.

Critics homogenise AI by treating every model as identical, every use-case as the same, every output as generic.

But actual users know better:

  • Different AI models have different strengths
  • Different prompting produces different results
  • Different human partners create different outputs
  • Different applications suit different AI capabilities

The homogenisation critics decry is largely the homogenisation they create by refusing to see nuance, dismissing all AI use as identical, and treating billions of unique human-AI collaborations as if they produce identical slop.

Meanwhile, in classrooms and publishing offices and creative studios worldwide, real people are using AI to finally articulate ideas they’ve always had but couldn’t express.

That’s not homogenisation. That’s democratisation.

And critics who can’t tell the difference are revealing more about their biases than about AI’s reality.


Why Sadek’s Engagement With Critics Strengthens His Argument

Here’s the brilliant rhetorical move in Chapter 9: by taking critics seriously, Sadek makes his case for Collaborative Creativity more compelling, not less.

Because he shows:

  1. He’s not naive – He understands the risks, limitations, and ethical concerns
  2. His model addresses the critiques – Collaborative Creativity keeps humans central precisely where critics say AI fails (understanding, lived experience, meaning-making, ethical judgment)
  3. The critics actually support his thesis – If AI lacks understanding (Bender), cannot suffer (Cave), has technical limitations (Marcus), and raises power concerns (Gebru, Lanier), then human creativity becomes more essential, not less

This is why sceptical publishers should read Chapter 9 carefully:

If you’re resistant to AI, Sadek shows he’s thought through your objections more rigorously than most AI advocates. He’s not dismissing your concerns – he’s incorporating them into a more sophisticated model.


Intellectual Honesty


If you think Sadek is an uncritical AI cheerleader, Chapter 9 proves otherwise. He’s advocating for thoughtful engagement, not blind adoption.

In a field drowning in AI hype and AI panic, Sadek’s “Constructive Contradiction” chapter offers something rare: intellectual honesty.

He doesn’t pretend critics are wrong. He engages their strongest arguments and shows how acknowledging limitations actually strengthens the case for thoughtful AI collaboration.


In the next article, I’ll explore Sadek’s practical framework for becoming a “Collaborative Creator” – moving from understanding to action, from theory to practice.

Because understanding critics is essential. But so is knowing what to do next.


This post first appeared in the TNPS LinkedIn newsletter.