“Nowadays, of course, if someone tells us they write books on a typewriter, we help them into their straitjacket and take them back to the asylum.”


The Atlantic this weekend carries an intriguing post by David Autor and James Manyika that presents a compelling argument against the rush towards AI automation, using the metaphor of jumping across a canyon.

The authors contend that “imperfect automation is not a first step toward perfect automation, any more than jumping halfway across a canyon is a first step toward jumping the full distance.” Instead, they advocate for AI as a collaborative tool that amplifies human expertise rather than replacing it.

The article distinguishes between two types of tools: automation tools (dishwashers, ATMs, automatic transmissions) that function as closed systems without oversight, and collaboration tools (chainsaws, word processors, stethoscopes) that require human engagement and expertise to be valuable. Critically, they argue that AI can serve both functions but cannot do both simultaneously in the same application – and that premature automation often creates worse outcomes than thoughtful collaboration.

Through compelling examples (the failure of CheXpert in radiology and the tragic Air France 447 crash), the authors demonstrate how automation that isn’t quite ready can be more dangerous than no automation at all.

But crucially they also highlight successful collaborative AI tools like Google’s AMIE medical assistant, which achieved better diagnostic outcomes by keeping doctors in the problem-solving loop rather than replacing their judgment.


The Publishing Industry’s AI Divide

The book publishing industry finds itself deeply divided on AI, often defaulting to extreme positions: either AI will destroy creative authenticity and replace human writers, or it will revolutionise productivity and democratise publishing. What’s largely missing from this polarised debate is the nuanced middle ground that The Atlantic article advocates: the recognition that AI’s value lies not in replacing human creativity and expertise, but in amplifying it through thoughtful collaboration.

You all know my position by now. I love AI as a collaborative tool and see it as a threat only in the same way as I see anyone who can do my job better than me, or will do the same job for less reward. No job comes with a lifetime guarantee. Few jobs stay the same.

But many of us in the publishing industry seem not to see shades of grey, just black and white, and in doing so we miss the collaborative opportunities and, to come back to The Atlantic’s metaphor, the chance to safely cross the canyon and reach the greener grass on the other side.

Like the medical professionals who achieved 85% better outcomes when collaborating with AI versus working alone, publishing professionals may find that the real transformation comes not from AI taking over creative tasks, but from AI helping humans do those tasks more effectively.


Four Perspectives on AI Collaboration in Publishing

The Author’s Perspective: From Blank Page to Collaborative Canvas

For authors, AI collaboration tools represent the evolution of the word processor – that paradigmatic collaboration tool mentioned in The Atlantic piece.

At which point, let us remember that many in the publishing industry today will be word-processor natives who never knew, or at least never experienced in employment, the antediluvian world of typewriters: replacing ink ribbons, untangling jammed keys, trying to align a sheet of paper, and correcting typos with Tippex or Liquid Paper, as correction fluid was known in the UK and US respectively.

For them, it will be hard to imagine a world where there was no spell-check or grammar check. No coloured lines beneath typos and questionable grammar. No internet. No email. No online buying. No ebooks. No digital audio. No Netflix. No smartphones. Come to that, no dumbphones, just landlines connected to the wall.

How did we ever manage? How we all love progress! After the event, that is.

But how many at the time, if we are old enough, welcomed those changes? All that expense to replace a perfectly good typewriter with a desktop word processor the size of a car, and then you had to buy a dot-matrix printer and, worse still, you had to learn all this word processing nonsense about styles and layouts and work out what crazy new keys like ALT and CTRL and F1-12 might do, and you had to learn about Windows and folders and file names. And just when you thought it couldn’t get any worse, you had to learn to use a mouse and chase a cursor around a screen… No wonder home PCs and word processors never took off.

Nowadays, of course, if someone tells us they write books on a typewriter, we help them into their straitjacket and take them back to the asylum.

Here’s the thing: Just as word processors automated mundane tasks (spelling, formatting, basic grammar) while providing “a blank canvas for writers to express ideas,” and made the physical act of typing smoother and quieter, AI writing tools can handle research synthesis, structural suggestions, and iterative editing while leaving the creative vision, voice, and storytelling decisions firmly in human hands.

Successful AI Collaboration for Authors:

  • Research assistance: AI can rapidly synthesise background information, helping authors understand complex topics without replacing their interpretive analysis
  • Developmental editing: AI can identify plot inconsistencies, pacing issues, or character development gaps, but the author decides how to address them
  • Style refinement: AI can suggest alternative phrasings or highlight repetitive language patterns, while the author maintains their distinctive voice
  • Market research: AI can analyse genre conventions and reader preferences, informing but not dictating creative choices

The Automation Trap for Authors: The danger lies in treating AI as an oracle that can generate publishable prose wholesale. This represents the kind of premature automation The Atlantic warns against – asking AI to “jump the canyon” of creative writing when it’s only capable of assisting with specific elements. Authors who rely too heavily on AI-generated content risk producing work that lacks authentic voice, genuine insight, and the human experiences that make literature meaningful.

Like the radiologists who couldn’t tell when to trust CheXpert versus their own expertise, authors need clear frameworks for when to accept AI suggestions and when to rely on their own creative instincts. The key is maintaining what the article calls “human expertise” throughout the process – the author’s unique perspective, emotional intelligence, and understanding of their intended audience.


The Publisher’s Perspective: Editorial Judgement in the Age of AI

Publishers occupy a unique position in the AI debate because their core value proposition – editorial judgement – represents exactly the kind of “expert decision making” that The Atlantic identifies as AI’s collaborative sweet spot. Publishers don’t just produce books; they make high-stakes, one-off decisions about which manuscripts to acquire, how to position them in the market, and what level of investment each title deserves.

AI as Editorial Amplification:

  • Manuscript evaluation: AI can rapidly analyse submission quality, genre fit, and market potential, but human editors must still weigh cultural relevance, authentic voice, and long-term brand alignment
  • Market analysis: AI can process vast amounts of sales data and trend information, but publishers must interpret this data within the context of their specific catalogue and strategic vision
  • Content development: AI can identify structural issues or suggest improvements, but editors must balance these suggestions against the author’s artistic intent and the book’s overall vision
  • Risk assessment: AI can model various scenarios for print runs, marketing spend, and pricing, but publishers must factor in intangible elements like author platform, seasonality, and competitive landscape

The Collaboration Advantage: The most successful publishers will likely be those who use AI to enhance rather than replace editorial intuition. Like the medical professionals in the PNAS study who achieved better outcomes by reviewing AI suggestions before making final decisions, publishers can use AI insights to inform their decision-making while maintaining ultimate responsibility for those choices.

This approach addresses what The Atlantic calls the “expertise development” problem. If publishers simply automated manuscript selection or market analysis, they would lose the experiential learning that develops editorial judgement. But by using AI collaboratively, publishers can actually accelerate their learning: seeing patterns and connections they might otherwise miss while still exercising the critical thinking skills that define good publishing.


The Retailer’s Perspective: Recommendation Systems and Human Curation

Book retailers – whether physical bookshops or online platforms – face perhaps the clearest automation-versus-collaboration choice in the publishing ecosystem. Amazon’s algorithmic recommendations represent one of the most successful examples of AI automation in bookselling, yet independent bookshops continue to thrive by offering human curation and personal recommendation services.

Where Automation Works:

  • Inventory management: Automated systems can track stock levels, predict demand, and optimise ordering with minimal human oversight
  • Basic recommendations: “Customers who bought this also bought” algorithms effectively automate simple correlation-based suggestions
  • Price optimisation: Dynamic pricing algorithms can respond to market conditions faster than human decision-making
  • Fraud detection: Automated systems excel at identifying suspicious purchasing patterns or account behaviour. (Here, as we all know, platforms like Amazon are always behind the curve, so this ought to be welcomed, but probably won’t be if the letters AI appear in the debate.)

Where Collaboration Excels:

  • Complex recommendations: While AI can identify purchase patterns, human booksellers understand context – recommending books based on life circumstances, reading goals, or emotional needs that don’t appear in transaction data
  • Community building: AI can identify potential reading groups or author event audiences, but humans create the actual community connections
  • Discovery and serendipity: AI can suggest books similar to previous purchases, but human curators can introduce readers to entirely new genres or unexpected connections
  • Cultural interpretation: Human booksellers can contextualise books within current events, social movements, or local community interests in ways that pure data analysis cannot

Let me pause to introduce a personal note here. It’s been at least five years since I last visited a bookshop, on my last brief foray back to the First World, and fifteen years since I visited bookstores regularly. Mostly I prefer reading ebooks, but the bookstore experience is up there in the top five of things I miss from the UK, along with Costa coffee, cod and chips, monuments and the London museums.

The “bookstore experience”, note, not the books. The human element, when done well, is our industry’s trump card. But a human-run bookstore done badly, with staff who know nothing about the books they are selling, is going to be beaten by the algorithm. I’m reminded of going into WH Smith many years back, before ebooks and mobile reading, when books were made of paper, and, unable to find it on the shelves in the dedicated books section, asking for a copy of Mary Shelley’s Frankenstein. “What’s that?” asked the assistant. I thought I’d been misheard. “Frankenstein? Mary Shelley? A Gothic romance,” I explained patiently. “The music section’s upstairs,” said the assistant. I never shopped there again.

The Hybrid Model: The most successful retailers increasingly adopt hybrid approaches. Online platforms use AI to handle the massive scale of global recommendations while employing human curators for featured selections, seasonal lists, and thematic collections. Physical bookshops use AI-powered inventory systems to ensure they stock what customers want while relying on staff expertise for hand-selling and personalised recommendations.

This mirrors The Atlantic’s point about word processors: the automation handles routine functions (stock management, basic correlations) while human expertise focuses on the complex, contextual work that defines great bookselling.


The Consumer’s Perspective: Discovery, Trust, and Reading Experience

Consumers – readers and listeners – experience the automation-versus-collaboration tension most directly through book discovery and reading tools. Their perspective reveals why The Atlantic’s warning about “cognitive offloading” is particularly relevant to publishing.

AI Tools That Enhance Reading:

  • Personalised discovery: AI can help readers find books aligned with their interests, reading level, and available time
  • Reading assistance: AI can provide context for historical references, explain complex concepts, or translate foreign phrases without breaking narrative flow
  • Social reading: AI can facilitate book club discussions by suggesting questions or connecting readers with similar interests
  • Accessibility: AI can provide audio descriptions, adjust text size and contrast, or offer alternative formats for readers with different needs. I’m guessing AI can also do a lot more with the industry’s latest lovechild, audiobooks, but I have too little experience there to venture suggestions.

The Risk of Over-Automation: Readers face the same risks The Atlantic identifies in other domains. Just as novice professionals may be misled by AI tools they don’t have expertise to evaluate, readers might become overly dependent on AI-curated recommendations, losing the ability to discover books through browsing, word-of-mouth, or serendipitous encounters.

More concerning is the potential for AI to automate the reading experience itself. AI-generated book summaries, for instance, might seem like helpful time-savers but could undermine the very purpose of reading – the slow, contemplative engagement with another person’s ideas and experiences. This represents the kind of premature automation that The Atlantic warns against: replacing human capabilities (deep reading, critical thinking, imaginative engagement) before we fully understand their value.

Building Reader Agency: The most valuable AI tools for readers will likely be those that enhance rather than replace the reading experience. Like the heads-up display in aircraft – which provides crucial information without taking control from the pilot – effective AI reading tools should provide context, connections, and discovery opportunities while preserving the reader’s agency and engagement.

This might mean AI that helps readers find their next book but doesn’t tell them what to think about it, or AI that provides historical context for a novel but doesn’t summarise the plot. The goal is amplifying readers’ own capabilities rather than substituting algorithmic judgement for human interpretation.

A lot of parallels here with teaching, I might add. Schools already spend far too much time telling children what to think instead of showing them how to think. AI in teaching faces many of the same dangers as AI in publishing.


Choosing Our Routes Wisely

And to come full circle, sometimes I worry AI in teaching is racing towards the edge of the canyon, confident it can make the leap but not knowing exactly how wide the canyon is.

The Atlantic’s canyon metaphor is particularly apt for publishing because the industry faces genuine uncertainty about AI’s long-term capabilities. We don’t yet know whether AI will eventually be able to write compelling novels, make nuanced editorial decisions, or replace human reading experiences. But we don’t need to know the final destination to make good choices about the journey.

The evidence suggests that publishers, authors, retailers, and readers all benefit more from treating AI as a sophisticated collaboration tool rather than rushing toward full automation. This means:

  • Authors using AI to enhance their research, editing, and craft development while maintaining creative control and authentic voice
  • Publishers leveraging AI insights to inform editorial and commercial decisions while preserving the human judgement that defines good publishing
  • Retailers combining AI-powered efficiency with human curation and community building
  • Readers using AI tools that enhance discovery and comprehension while protecting the autonomy and depth of the reading experience

Perhaps most importantly, this collaborative approach preserves what The Atlantic calls “pathways for expertise development.” If we automate too quickly, we risk losing the very human capabilities that make books valuable in the first place – creativity, empathy, critical thinking, and the ability to find meaning in complex narratives.

The publishing industry’s AI future doesn’t have to be a binary choice between human tradition and algorithmic efficiency. Like the word processor, which automated mundane tasks while empowering human creativity, the best AI tools for publishing will likely be those that amplify distinctly human capabilities rather than replacing them.

As The Atlantic concludes, “We have a canyon to cross. We should choose our routes wisely.” In book publishing, the wisest route is surely building bridges rather than taking leaps – using AI to enhance human creativity, judgement, and connection rather than trying to automate them away.

Which is my way of mentioning the hitherto unmentioned Luddite Fringe before I finish.

The Luddite Fringe will walk tentatively up to the canyon, peer over the edge, and say “Oh dear, that’s a long way down. Let’s stay where we are. The green grass on the other side is a mirage.”


This post first appeared in the TNPS LinkedIn newsletter.