PW’s AI panel-debates have been happening this week alongside the headline news about the FTC and Amazon, and both topics are covered in the latest Velocity of Content podcast, which every Friday features PW’s Andrew Albanese offering an industry-focussed View from the Big Apple, a hebdomadal must-listen for everyone in publishing.

Albanese naturally kicked off with the FTC vs Amazon news, very much echoing the TNPS view that publishers created this Frankenstein monster and have only themselves to blame.

Albanese: “Publishers were not innocent bystanders in this. They all cashed the cheques, all signed the deals, all fed this monster that they knew one day was going to come back and eat them.”

Over at TNPS I had of course taken those sentiments a little further.

When it comes to playing the victim card and putting on raucous demonstrations of self-righteous indignation, the publishing industry is in a historic glass house and really cannot afford to throw too many stones.

“Playing the victim as this latest case heads to trial reeks of hypocrisy when so many in the industry rely daily on Amazon for a substantial part of their revenue and profits. And all the more so if we take a step back and ask ourselves at what point Jeff Bezos put a gun to our collective heads and forced us into this relationship?

“Oh yeah, that’s right. He didn’t. He just pursued the same unsavoury (and perhaps at times unethical, or even illegal) business tactics big publishers and big book retailers regularly resorted to.”

But the real meat of the Velocity of Content podcast lay in Albanese’s summary of the PW debate sessions about AI, the latest threat to publishing complacency that far too many in the industry are all too quick to cast as the enemy.

So much to discuss in this arena, and rest assured I’ll be returning to this fast-evolving topic time and again, but for this essay I’ll limit myself to a couple of contentious points Albanese raises in the short span of the linked podcast. (Just 15 tiny minutes! Christopher Kenneally, this is outrageously too short to cover a full week’s industry developments!)

The PW debates successfully separated the various key AI elements that pertain to the publishing industry: AI as a valuable tool; AI as a threat to authors’, artists’ and translators’ jobs; AI as the Moriarty of copyright abuse and IP theft; and AI as the industry’s elevator to new heights of achievement.

The issues of alleged or actual copyright and IP abuse are well-rehearsed, with strong arguments across the spectrum, and will ultimately be resolved in a court of law, at least in the USA. Fair use and plagiarism are legal definitions and need legal decisions, and in turn we will need law changes to encompass new ways of doing things not envisaged when the copyright laws were passed.

But the real legal issues yet to be decided are a) whether copyright can exist for something not created by a human, and b) whether the law’s job is to protect jobs.

Thus far, the pendulum swings towards the need for human involvement in order for copyright to be valid. A 2016 court ruling, upheld on appeal in 2018, noted that animals in general, and a snap-happy monkey in particular, could not own copyright. And this case is now being used to argue AI creations cannot be copyrighted. But the reality is, this is a nonsensical argument, and we only need to look at photography copyright law to see why.

When you take a photograph, at its simplest, a button is clicked and a photo appears by chemical or digital magic. A baby could do it, and under the law that baby would then own copyright of the image chemically or digitally created. Given a baby’s physical and mental development, it’s safe to say the monkey in the aforementioned 2016 case did more work than the baby. So the law actually has nothing to do with meaningful human input based on intelligence, and everything to do with the law being an ass.

Disclaimer here: Babies are not unintelligent. As an advocate of baby-learning and an if-I-ever-get-time-to-finish-it author of a book titled “Childhood: the Highest Stage of Evolution,” I constantly argue the distinction between intelligence and knowledge, and that babies are simply brains on legs, little bundles of super-intelligence (natural intelligence, not artificial!) which will mostly be knocked out of them by the factory-model education system they will inevitably be subjected to.

But to the point: even if we allow that copyright requires a human element, we still have the beyond-stupid legal ruling at the end of August 2023 asserting that a work created by artificial intelligence without any human input cannot be copyrighted under U.S. law.

“By denying registration, the Register concluded that no valid copyright had ever existed in a work generated absent human involvement, leaving nothing at all to register and thus no question as to whom that registration belonged.”

In this case, a piece of art designed by AI was denied copyright because it was “generated absent human involvement”, which goes to the heart of the generative-AI debate, because no AI has ever created anything “absent human involvement.”

There are currently any number of lawsuits alleging AI is using human-produced content (visual and text) to train on, and it is totally safe to say the AI in the above case did not suddenly magic itself out of the ether and create a piece of art with no prior training on the works of the human artists that preceded it. In the same manner, no text-based AI creation magics itself out of thin air without first having been trained on human-produced material and without first being instructed, or prompted as we must now say, to create something.

The prompter is no different from the photographer who positions the camera, arranges the lighting, and clicks the button.

The chemical/digital magic the camera then works is no different from the generative-AI magicking a new picture or arrangement of words to create a new content product. In both cases the results are based on the prompts and parameters set by the human prompter/photographer.

What is different is that the actual chemical/digital image produced in the photograph is, entirely, created by machine. The machine itself is not trained on past works of art or past photographs, or even the photographer’s knowledge and experience. The actual act of creation is purely that of the machine.

Yet the photograph is copyrightable and that copyright belongs to the prompter. An AI creation, which by definition requires far more human input to exist, is not. The prompter is denied copyright. As above, the law is an ass.

As I explored in a TNPS post back in June, “Learning and training by emulation is as old as the arts themselves.”

None of which should detract from the many other legal cases where actual non-permissioned infringement of copyright is alleged, and the many debates about how creatives might be compensated if and when AI uses their content for training purposes.

We should be clear that these are quite separate issues legally, but they go to the heart of the aforementioned judgement, because the very fact that there are so many allegations (as yet unproven in court) of IP theft and the like simply strengthens the case for AI content being copyrightable. Why? Because all these cases are saying, quite clearly, that all the AI content now being created is based on human input. The issues around permission and compensation are distinct legal matters that need to be dealt with, but the fact that humans were involved ought not to be in question, except in the mind of Judge Beryl Howell.

Okay, so let’s return to the Velocity of Content podcast and pick up where we left off with Andrew Albanese, because on two interconnected points I have to take issue.

Albanese says: “That still does not solve the real problem of AI, which is the spectre of these machines replacing human creators. Even if you could do it, a collective licensing system could not divvy up enough fractions of pennies from whatever use AI would be making to compensate any creator enough for losing their work to a machine.”

A little further on he says: “People will always want human excellence. They don’t want to read books written by machines. Nobody wants books to be divorced from human authors.”

Here’s the thing, Andrew: If that’s the case, where’s the problem? The “spectre of these machines replacing human creators” is just a spooky ghost story if, as you say, “People will always want human excellence. They don’t want to read books written by machines. Nobody wants books to be divorced from human authors.”

All we have to do is ensure AI-created content is labelled as such (Amazon has already made some moves in that direction) and then let the consumer decide. These nasty, second-rate AI-produced books will languish in the nobody-loves-me section of any bookstore that allows them in, and pretty soon the AI-prompters will get the message and find a new hobby. Problem solved!

Clearly Andrew spent too much time with Markus Dohle during the debates, and some of Markus’s affably amusing “we know best” qualities rubbed off.

Markus Dohle back in May: “Readers need guidance and orientation provided by publishers and retailers who help them to find their next best read.”

A warning to others in the vicinity. Gatekeeping is contagious!

Andrew Albanese, of all people, should (and does) know better. He’s witnessed any number of publishing professionals tell us what people don’t want, only to be proven wrong time after time. Nobody wants self-published books. Nobody wants ebooks. Nobody wants subscription. Back before Andrew’s time, nobody wanted paperbacks.

And I’m pretty certain that back in the 1440s, Markus Dohle’s great-great ancestor, a neighbour of the illustrious Herr Gutenberg, was peering over the yard fence saying, “Johannes, mate, don’t waste your time with that moveable type printing craziness! Readers need guidance and orientation provided by parchment and vellum scribes and illuminators with only the finest duck quill pens to help them to find their next best read.”

But let’s be fair to Markus. Christopher Kenneally ran with the headline, “Markus Dohle assured the industry that AI will not prove the death of publishing,” and nothing to disagree with there.

Quite apart from the fact that Dohle has already told us – and a judge – that subscription is what will bring the industry to its knees, the threat from AI to publishing is not an existential one. Quite the opposite. AI promises to take our industry to new heights of creativity and revenue potential, if only we will stop hiding behind knee-jerk reactivity, look at what’s on offer, and accept that, with change, comes casualties and adjustment.

Yes, jobs will be lost, and yes, some companies will not survive. That’s life. That’s business. And that’s publishing.

The reality is that publishing jobs do not come with lifetime guarantees. Never have, never will. And the sooner we stop treating our own industry as some special case and face the reality of technological progress, the better.

As I said in April, “Consumers buy into what they want and what they are happy with. Booklovers don’t buy audiobooks to keep narrators in a job. They buy audiobooks because they love the product. No-one buys the next JK Rowling book because they think she needs the money.

“In the same way, consumers don’t buy food to keep farmers employed, and they don’t buy cars to keep assembly-line workers employed.

“How many publishers’ offices are today lined with typewriters because publishers were concerned about the people who worked in the typewriter factories? How many publishers insist authors mail in their submissions to keep the postal workers in their jobs?”

The bottom line is, publishers will follow the money. Sentimental twaddle about how much they care for author careers and author compensation is easily dismissed when we look at author royalty rates and author contracts. Any author is only as good as their last book’s sales numbers.

Markus Dohle was telling us in 2021, as the pandemic publishing bonanza arrived, that there was never a better time to be in publishing. Well, which company CEO would not be saying that as profits rose and the bonuses stacked up?

Dohle is still saying it today, Andrew Albanese reminds us. “Dohle loves to say this is the best time for publishing since Gutenberg.” Yet curiously, PRH royalty rates didn’t change to reflect the enormous profits Dohle was then bringing in, and somehow I doubt the new CEO, Nihar Malaviya, will be springing any author-friendly surprises on us at Frankfurt.

The biggest obstacles to AI in publishing right now are a) the Luddites opposing AI because they don’t understand it and want to appear author-friendly, no matter that their royalty rates are identical to every other publisher and their contracts do the author no favours, and b) that infernal legal ruling courtesy of Judge Beryl the Peril that needs striking out so AI companies can, with due respect for and partnership with human authors, take generative AI to the next level, knowing their creations will enjoy the same legal protection now being demanded by human authors and artists.

As summarised by Kenneally, Albanese said: “If there was a single, top-level takeaway from the wide range of speakers, it is that AI can be good for the book business, despite serious questions and potential challenges.”

And that is spot on.