A single contract cancellation. A cascade of unanswered questions. And an industry that has spent years debating AI in the abstract, now confronted with the real thing.


What We Actually Know

Let’s begin with what the facts actually are, as opposed to what the social media hysteria and LinkedIn re-shares would have you believe.

Mia Ballard’s horror novel Shy Girl was self-published in 2025 to significant commercial success. Hachette acquired it for what was described as a major 2026 release. The contract was subsequently cancelled following what Hachette described as a thorough review, prompted by evidence brought to the publisher by the New York Times.

The allegation: undeclared AI use in the manuscript.

The author’s position: she did not write the book using AI. A freelance editor she engaged may have used AI tools – without her knowledge or consent. She is pursuing legal action and has said little more publicly for that reason.

That distinction matters enormously, and most of the coverage has glossed over it.

Here’s the thing: If an author wants to write something with AI, good luck to them. As an industry we have told ourselves every single day, five times before breakfast, that AI will never write as well as a human, no-one will ever buy anything written by AI, and nobody wants AI writing.

So what’s the problem? No-one buys the AI-written book. No publishers contract the book. And we certainly don’t need to label anything as AI-written because the quality is so bad it’s obvious from a single, fleeting glance.

And that of course is why we are all running about like headless chickens screaming that the sky is falling. Because we know our fantasy narrative of bumbling LLMs mechanically stringing words together is just that: fantasy.

Which brings us to today. A self-published book written with AI sold so many copies that a respected Big 5 publisher acquired the title, not knowing or realising it had, allegedly, been written with AI.

Hachette demands authors declare AI has not been engaged during the book’s writing. Pretty dumb, if you ask me, but it’s their company. They make the rules.

And an author who writes a manuscript using AI and declares otherwise has committed a straightforward contractual breach. End of story.

Except it’s not. It’s just where the story gets interesting. Because as above, the author denies the accusation. It wasn’t her. It was Colonel Mustard with the lead piping in the ballroom. Or rather, her editor, behind closed doors, without the author knowing.

The author is suing Hachette, arguing she has honoured her contractual agreement. And if what she says is true – that her editor did it – then she is right. The fault lies with Hachette. And Hachette’s contract.

An author whose third-party editor introduces AI into the process without disclosure is a victim of a different kind of failure entirely – and one that current publishing contracts are wholly unequipped to address.

Whether the allegation against Ballard is proved, disproved, or settled somewhere in between, these questions remain. There will be more cases like this. The industry is not ready.

The Question Nobody Is Asking About The Acquisition

Here is the detail that has attracted almost no commentary: readers, apparently, had been flagging concerns about the text – stilted phrasing, repetitive motifs – before Hachette acted. Online communities had been circulating their suspicions for some time before the NYT, clearly still smarting from WashPo getting that Watergate story, brought its evidence to the publisher. Pulitzer Prize material, for sure!

But forget the NYT a moment. The uncomfortable question here is about Hachette’s acquisition process. Did anyone there actually read the manuscript with critical attention? Or did the self-publishing sales figures do the reading for them?

I ask this not rhetorically but from experience. In 2011, one of my own self-published titles was making waves in the UK – sales that were, at the time, genuinely unusual for a Kindle title in a market where the store had only opened the previous year. I was cold-called by a major NY agency. They wanted the book and my next three. The terms: take the book down from sale immediately. Wait eighteen months for it to reappear. Accept a “substantial” advance in exchange for 15% royalties – against the 70% I was earning across Kindle, Apple and Kobo. Put my children’s books on hold; this was an adult crime thriller and that was what they wanted. And accept changes for the American market that would apply to the UK edition too.

Other agents had rejected the book countless times, so writing a rejection letter of my own was one of the more clarifying moments of my professional life.

The point is not self-congratulation. All a long time ago now. The point is that this is how acquisition culture works at the top end: commercial signal detected, terms dictated, the work itself secondary to the opportunity. And gatekeepers, don’t deny it: celebrity-written novels and royal memoirs are not bought on literary merit.

Whether Hachette’s acquisition of Shy Girl followed a similar logic, I cannot say. But if readers with no professional stake in the outcome were identifying textual problems that concerned them, the question of what Hachette’s editorial process actually looked like is something the brass at Hachette need to be addressing.

Traditional publishing’s central value proposition over self-publishing, or so it tells us, has always been editorial rigour. Gatekeeping, in the neutral sense: the assurance that a manuscript has been read, assessed, and shaped by professionals before it reaches the public. If that process failed here – if the acquisition was driven by numbers rather than reading – then the subsequent crisis is not Mia Ballard’s failure alone.

The Detection Chimera

The evidence that triggered the controversy apparently included AI detection tool outputs and reader-identified stylistic patterns – the kind of “tells” that have circulated on social media for the past couple of years. Certain word choices. Repetitive structures. Flatness of voice.

As I wrote for TNPS in April 2025, in the context of the em dash panic, this is a chimera.

AI detection tools produce false positives at a rate that should make any publisher deeply cautious about acting on them. Stylistic “tells” attributed to AI are in many cases simply the markers of inexperienced or unpolished human writing.

Or even polished and experienced authors. I hate the em dash with a vengeance, but I know authors like Anne R. Allen who not only deploy them with elegance and charm, but even know the secret of how to make an em dash in MS Word, something that has thankfully eluded me for decades.

The thing is, the continuum from Grammarly to the Hemingway App to full AI generation is not a series of clean steps – it is a blurred gradient, and nobody has a reliable instrument for measuring position on it.

The industry is reaching for a technological solution to what is fundamentally a problem of trust, disclosure and contractual clarity. Those are not problems that a detection algorithm can solve.

The Contract Gap

This raises the structural problem that the Ballard case makes impossible to ignore.

Publishing contracts have begun, belatedly, to address AI. Authors are asked to declare whether AI was used in the creation of the manuscript. Hachette, in its public statement, noted that it requires submissions to be original to their authors and asks for disclosure of AI involvement. As above, fair enough. They make the rules.

But what those contracts do not address is the editorial chain beyond the author.

Freelance editors, sensitivity readers, proof-readers, developmental editors – all of these are standard parts of the contemporary writing process, particularly for self-published authors building their own professional infrastructure. If any of those people use AI tools without disclosing it to the author, and the author signs a declaration of originality in good faith, who bears the liability?

This is not a hypothetical. If Ballard’s account is accurate, it is precisely what happened. And the contracts don’t cover it. Nobody’s contracts cover it. The industry has been so focussed on policing authors that it has not begun to think about the rest of the production chain.

A Word On The Pile-On

This piece is about AI and publishing’s structural unpreparedness. But it would be incomplete without noting that the online response to the Ballard case has not been a neutral exercise in quality control.

Credible voices have raised concerns that the speed and ferocity of the scrutiny directed at Ballard – and the apparently documented racial abuse she received during the controversy – reflects dynamics that go beyond literary anxiety.

I don’t know the details, but can confidently say that publishing’s diversity problem is well-documented. Whether the same volume and venom would have gathered around a white author in comparable circumstances is a question worth asking. I’m not here to attempt an answer, but it is something to keep in mind.

The AI question and the race question are separate. But the pile-on is part of the story, and pretending otherwise would be a failure of editorial honesty.

What Should Actually Happen Now

The Ballard case will be resolved – legally, contractually, or through the slow dissipation of public attention. But it leaves behind questions that will not resolve themselves.

  • Publishing contracts need to extend their AI disclosure requirements beyond the author to every professional in the editorial chain.
  • Acquisition processes need to treat commercial momentum as a signal worth investigating, not a substitute for editorial scrutiny.
  • AI detection tools need to be treated as preliminary indicators at best, not as evidence sufficient to cancel a contract and end a career.
  • And the standard of proof required before a publisher takes action of this magnitude needs to be much more clearly defined than it currently is.

Underneath all of this runs a question that the industry will find most disturbing and is reluctant to confront directly: if readers bought and enjoyed Shy Girl in its self-published form – voted with their money and their time, not just to read but to give positive reviews – what exactly is the gatekeeping function that traditional publishing is claiming to protect here? What failed, and at whose hands?

The Rorschach Test

This case has functioned, from the start, as a Rorschach test.

  • AI sceptics see confirmation of their fears about fraudulent authorship.
  • AI advocates see a witch hunt built on unreliable detection tools and social media hysteria.
  • Publishers see a liability problem they hadn’t anticipated.
  • Readers see a quality control failure – though the question of whose quality control failed remains open.

What the case actually is, stripped to its frame, is the first high-profile stress test of publishing’s institutional preparedness for a world in which AI is already present at every level of the writing and editorial process.

Not coming. Already here.

Publishing has spent several years arguing about AI in the abstract – at conferences, in position statements, in broadly-worded contract clauses. Shy Girl is the moment the abstract became specific: one author, one contract, one very public cancellation, and a set of systemic failures that neither the author nor the publisher appears to have seen coming.

There will be more. The question is whether the industry uses this one to get ready.

Let me wind up with these questions asked by Christopher Kenneally on LinkedIn.

  1. The novel sold well for a self-published title. So, do readers really care if AI had a hand (digitally speaking) in the writing?
  2. If readers don’t care, should a publisher?
  3. Is it perhaps only other authors who care?

That last one really strikes a chord. Ouch!


This post first appeared in the TNPS LinkedIn Analysis newsletter.