Moral indignation is cheap. The outrage at the use of AI to disadvantage Jane Friedman speaks volumes. That wicked AI! How dare it? Let’s ban this evil technology before it’s too late!


While many book publishers run about like headless chickens, fearful of confronting the challenges and potential of AI, news publishers continue to use AI to build out their businesses while confronting head-on the potential downsides of the paradigm shift under way.

Newscorp CEO Robert Thomson, speaking at a Q4 earnings call, said the company was actively talking with AI and tech companies “to establish a value for our unique content sets and IP that will play a crucial role in the future of AI.”

Per The Hollywood Reporter, Thomson allowed that AI offers “new stream of revenues” as well as the chance “to reduce costs across the business”, saying the “efficiencies will be exponential.”

But Thomson tempered the optimism with cold realities of how AI is currently developing:

“We have been characteristically candid about the AI challenge to publishers and to intellectual property. It is essentially a tech triptych. In the first instance, our content is being harvested and scraped and otherwise ingested to train AI engines. Ingestion should not lead to indigestion. Secondly, individual stories are being surfaced in specific searches. And, thirdly, original content can be synthesized and presented as distinct when it is actually an extracting of our editorial essence. These super snippets, distilling the effort and insight of great journalism, are potentially designed so the reader will never visit a news site, thus fatally undermining journalism and damaging our societies.”

Much of that applies to book publishing, but what separates Newscorp’s stance from that of most book publishers is that Newscorp is actively leveraging the manifold benefits of AI while taking care to stay in charge.

As reported previously, Newscorp is already using AI to produce 3,000 new stories a week for its many local news outlets in Australia, and most major newspapers use AI to produce and publish content that at best is overseen by humans before it goes live.

Thomson didn’t mention how Newscorp-owned HarperCollins is dealing with the AI issue, but what we do know is that HarperCollins is not hiding from the topic.

HarperCollins started using AI for book recommendations way back in the AI dark ages of 2017, so is no stranger to the benefits.

But speaking to Publishers Weekly’s Jim Milliot before the London Book Fair this year, HarperCollins CEO Brian Murray said:

“AI is both an opportunity and a risk. There are AI tools that may become major contributors to parts of book publishing. For example, audio production and language translation tools may enable far more books to be created in audio formats and in multiple languages than can be produced today due to higher costs (but) we draw a distinction between tools and storytelling. Creation and storytelling are going to remain human centered. But tools can help assist with efficiencies on certain types of products and processes.”

Murray added, “There is a risk already that human centered storytelling can be crowded out with low quality machine generated storytelling. With AI generated storytelling, we are already entering some uncharted waters.”

Uncharted, indeed.

This past week Jane Friedman has been making headlines after a flurry of imitative books, apparently AI-created and bearing her name, flooded Amazon.

Such is Friedman’s profile within the industry that Amazon swiftly dealt with the matter, but the dam has been broken. People – and let’s be clear, this is humans’ doing, not a 2023 version of HAL 9000 taking matters into its own hands – are abusing AI for their own benefit.

And therein lies the core conundrum no-one really wants to tackle when it comes to AI. Humans are the problem, not the technology.

Yes, there are issues that urgently need addressing around how AI bots are “trained”, and about remunerating IP holders if their works are being used in that way. But these are legal and ethical issues around IP.

Separately, there are potential challenges to people’s jobs if and when bots prove capable of doing the same job better.

But these are eternal socio-economic problems dating back to at least the Industrial Revolution. In other modern-day industries, robots routinely do jobs that were once the sole preserve of human hands. Or do we think our car was built by people? That our cakes and cookies from the supermarket were made by people? That the milk in our latte is courtesy of a smiling milkmaid who sat under a cow?

As and when AI can write, translate or narrate books consumers are willing to hand over hard cash for, then writers are in the same boat as the aforementioned milkmaids, craft bakers and automotive industry workers. Face reality, learn new skills, up our game, or move on.

Society does not owe us a living. Our industry does not owe us a living. No-one has a job for life guarantee. None of us live our lives without using and consuming products that are now made by machines that have cost workers their jobs.

And none of us lose any sleep over those workers, just as none of us are out on the picket lines alongside the striking Hollywood writers and actors. Sure, we sympathise, in between the next episode of whatever is our latest (probably part AI-generated) video fix on Netflix or Prime.

But moral indignation is cheap. The outrage at the use of AI to disadvantage Jane Friedman speaks volumes.

That wicked AI! How dare it? Let’s ban this evil technology before it’s too late!

But what’s happening is neither new nor unexpected, as Friedman herself readily acknowledges. And it is only because Friedman is “one of us”, and can articulate her experience in a way we can relate to, that we are so het up about this particular incident.

The New York Times recently ran a story about unrelated AI “fake” books on Amazon – and yes, we are right to be concerned if the quality is as shoddy as we are being told, and if the material is lifted from legitimate works. But what the NYT did not tell us is how much of the NYT is written by AI. And would we even know it if it were?

The NYT just last week updated its terms to prevent AI scraping its content, but that’s a protection against third parties, not a rejection of AI, which the NYT has been experimenting with since at least 2015.

The NYT also uses AI to moderate comments on its news site.

Book publisher attitudes to AI vary. Most publishers will be using AI at some level behind the scenes, but as Thad McIlroy recently observed, they are very reticent about admitting to it. But looking the other way is not an option.

Per McIlroy: “This is not a good time to be technology-averse. Artificial intelligence stands to drive significant productivity gains in book publishing processes, from editorial to marketing. The companies that embrace AI will almost certainly pull ahead of those that don’t.”


Add writers, narrators, translators and more to that list. Instead of seeing AI as a threat, we need to look at how we can leverage it to up our game.

And instead of blaming AI when it is abused in unethical and possibly illegal ways, as per the Jane Friedman instance above, let’s put the blame firmly where it belongs – with the humans who are using AI in this way, and with the humans at Amazon who choose not to police their site properly.

From there we can then start to meaningfully address copyright and related issues, and open up new prospects for creators and for the wider industry.

That’s beginning to happen – writers and publishers are already engaging with these tools.


And Jane Friedman is among them. In Friedman’s words: “Generative AI remains a controversial topic with many unanswered legal questions. Even so, writers are using these tools now and they will use them in the future. That you can be sure of.”

Joanna Penn is another respected industry innovator who is actively using AI to develop her writing business – including the actual writing itself.

But at the same time we have the Luddites at Pan Macmillan, who have issued an edict that they will not publish anything produced with AI, jockeying for position against the more enlightened souls at Bonnier UK who, like the IPG, have committed to exploring AI’s full potential.


So let me wind up this essay with further consideration of the Pan Macmillan position.

Per The Bookseller, Pan Macmillan CEO Joanna Prior said, “We have produced a set of people-centred principles to steer our engagement with these emerging technologies, including that we do not intend to publish AI-generated books.”

In other words, bona fide authors who legitimately explore AI’s vast potential to produce better-quality work faster than before will be denied publication by Pan Macmillan and like-minded Luddite publishers for no other reason than the tool the author used.

Only a human could make such a crass decision.