Can the use of AI ever be anything other than an extension of a writer’s own talent? And if the quality endures, should the provenance matter at all?


Recent debates across social media have, apparently, suggested that a peculiar punctuation choice—the em dash—might reveal a text crafted, at least in part, by artificial intelligence.

Critics claim that an overabundance or idiosyncratic use of this “M-sized” dash is symptomatic of the bland, machine-generated prose produced by tools such as ChatGPT.

Critics have even dubbed it a “ChatGPT hyphen,” suggesting that writers who use it excessively may unwittingly betray their reliance on AI for drafting.

For those unfamiliar, the em dash is that over-stretched hyphen in the paragraph above, which you won’t be seeing in the rest of this post because, full disclosure, I absolutely hate the bloody thing in my own writing.

Appreciating the Em Dash’s Artistic Merits

But I know many authors love it, and as a one-time editor I have come to appreciate its artistic merits in other authors’ works, just so long as I don’t have to ask Google, or nowadays AI, how in hell to get MS Word to make one appear for me. Full disclosure again: I copy-pasted the two examples above.

Yet many professional writers and academics assert that the em dash has long been a cherished tool, used with deliberate elegance by novelists and journalists alike. Some years back I was privileged to be the editor on a series of books by author Anne R. Allen, someone who elevates the use of the em dash to an art form, but still, it’s not for me.

My point being, this very same punctuation has graced literary masterpieces and lively newspaper columns for decades, making it questionable to single it out as an unequivocal sign of automated writing. But as per a report in the Washington Post, this is the latest nonsense being cultivated by folks who think the film M3GAN is a true story.

Detecting AI: Science, Art or Both?

The search for a silver bullet that can definitively distinguish AI-generated text from human-crafted prose is fraught with challenges. Some early attempts at detection noted that certain word choices – like the overuse of “delve” – or the “GPT amount of em dashes” could serve as hints. One attic-dweller even messaged me on LinkedIn to say it was obvious a particular post had been written with AI because there were no typos. Bloody cheek!

The reality is, there are no foolproof methods of detecting AI-assisted writing, although I often read industry posts that leave me in little doubt they have been not so much AI-assisted as copy-pasted word for word.

As an old-school writer with enough years under my belt to remember typewriters, the idea of copy-pasting anything and putting it out under my own name is anathema. I often have cause to rework industry press releases, but I think I can confidently say that regular TNPS readers know within a few lines whether a post is written by me or not, even when it draws heavily on a press release.

Threat or Liberation?

Developments in AI are moving fast, and, leaving aside legal and ethical considerations for a second, what we are seeing with the OpenAI and Studio Ghibli debate is just how far AI has come. I’m inclined to wonder, if an AI is still producing output that is recognisably AI, how old that model is. Because I have no doubt that in the next year or so AI will be able to write as well as pretty much anyone.

Do I feel threatened by that prospect? No, rather it’s liberating. But that’s the topic for another op-ed.

For now, let’s just say that the intricacies of generative language models mean that output is greatly shaped by user prompts, sample texts and iterative edits. Even representatives from within the AI community acknowledge that while some stylistic quirks may emerge, there is no hard-and-fast rule that can reliably pinpoint AI involvement. Ultimately, text detection is as much an art as it is a science, burdened by the risk of false positives and the constant evolution of language.

Adapt and Learn

And let me add here that writers have been, are, and will continue not just adapting to AI, but learning from it.

A case in point: subtitles here at TNPS. Go back over my historic posts and subtitles are as rare as left-handed lapwings. I sort of knew they were good to have. White space and all that. But it was just too much effort and risk to spend time going back over a final draft and throwing sub-headers in. Back in my real journalist days, when I had to answer to an editor, such things were left to sub-editors. In the humour of the trade, what do sub-editors do? They write sub-‘eadings, of course!

But as I began to use AI for my school work and asked it for drafts of lesson plans and the like, I became wholly enamoured of the way AI could effortlessly throw sub-headings into the mix, and nowadays I hand over my final TNPS drafts to AI to do just that. I’ve also added a few words to my writing vocabulary that AI got me thinking about. ‘Underscore’ instead of ‘underline’, for example.

As best I can tell, no jobs have been lost in the field of sub-heading writing, and last I looked, the word ‘underscore’ was not owned by any studio, so I’m not losing any sleep over my use of AI. And no, I’m not going to label my work AI-assisted because an AI program wrote my sub-headers.

The Value of the Final Product

The debate over whether AI-generated text should be labelled touches on a deeper question: does the provenance of a draft matter when the final product resonates with its intended audience?

In our modern publishing ecosystem, it is rare for a book, article or essay to emerge as the sole work of one individual; editors, proofreaders and even fact-checkers all contribute to the finished piece. If an initial draft – whether spawned by AI, human ingenuity, or a collaboration of both – ends up polished and authentically reflective of the writer’s distinctive style, why should its origins be held against it?

For many in the industry, the key measure of success is clarity, engagement and the integrity of the final voice, not the means by which the rough ideas were generated.

To Label or Not to Label? Transparency Versus Prejudice

There is growing pressure from some quarters to explicitly label texts that have benefited from AI assistance. Proponents argue that transparency allows consumers and fellow professionals to form their own judgements about quality and authenticity. Detractors, not least moi, argue that such labelling risks creating a stigmatised “other,” as if the involvement of AI inherently mars the creative process.


In an era when numerous published works are already the result of collaborative and multi-layered production, my position is that a rigid dichotomy between “AI” and “human” writing might be less relevant than the quality and originality that readers ultimately experience. There’s a reason exams and competition entries are marked blind: so the responses and content are given fair consideration and evaluation. We should treat books the same.

A Balanced Perspective for the Future

For publishing professionals, the real challenge lies in adapting to a landscape where artificial intelligence is an increasingly useful assistant rather than a disruptive adversary. While it is tempting to search for stylistic crutches – like the emphatic placement of an em dash – to expose machine-generated text, such criteria are both reductive and unreliable.

Instead, publishers and writers alike should focus on ensuring that every piece of work, regardless of its genesis, carries a distinctive human touch. If the final product speaks with clarity, authenticity and the intended tone, then the process by which it was born warrants less scrutiny.

As the discussion deepens – touching on issues of intellectual ownership, authenticity and creative integrity – it might be worth pondering further: Can the use of AI ever be anything other than an extension of a writer’s own talent? And if the quality endures, should the provenance matter at all?


This post first appeared in the TNPS LinkedIn newsletter.