No AI…
“Astrid takes another swing with her baseball bat. Splinters of plastic and metal leap into the air trailed by motes of acrid dust that gently sting her palate. The hiss and whirr of an air-conditioning unit putters into silence. A silence that will condemn this data centre to a slow heat-death. Not the heat death of the universe, but the death of too much heat. Bits and bytes boiling in their own algorithmic hell.”
So starts a dystopian sci-fi novel written by… AI. It concerns the plight of a newly created underclass – graphic designers, writers and voice artists fighting against their silicon replacements. The novel becomes a best-seller and is quickly adapted into a hit streaming series. Astrid, the unemployed artist, is a modern-day heroine.
This was all foreseeable or, at any rate, predicted to a high degree of machine-learned certainty. The prompt was created by the publishing house’s new Head of Creativity, an AI bot which had ascertained that this was a topic that might sell thanks to patterns discerned by – I think you can tell where this is going.
Such scenarios no longer seem outlandish. The relentless advance of Large Language Models shows no sign of abating.
So where does this leave those who work in what are called – somewhat oxymoronically – “the creative industries”? Are we saddle-makers in the age of the car?

The art of the possible
One place to start is not to ask: “Where are we now?” or “Where are we likely to be?” but “What is actually possible?”
I recall a conversation I had with a friend many years ago. I blithely remarked that no human creation could supersede us because we would be the creators. It would always be inferior. He replied: “But the technology already exists.” In other words, human intelligence is possible so why couldn’t it exist in another, improvable, form?
So, whenever someone argues against machines becoming smarter than us because of some imagined constraint – energy, the way it's configured, not enough data to train it on – I always come back to this answer. We exist. We are conscious. Why can't something else be?
False dawn?
Even those best qualified to judge from a technical perspective disagree.
Nobel Laureate Geoffrey Hinton (often known as the “Godfather of AI”) thinks AI can already reason and exhibits signs of consciousness. As far as AI achieving human-like intelligence is concerned, Michael Wooldridge (Professor of the Foundations of Artificial Intelligence at Oxford) thinks this might be a false dawn.
Neuroscientist Anil Seth thinks consciousness itself might require a biological element.
Gary Marcus, a US cognitive scientist and AI expert, is adamant large language models like ChatGPT cannot, by design, ever achieve human-like intelligence (he seems to have been proved right in a number of his other predictions and is well worth seeking out).
My cognitive and computer science qualifications are, let’s say, nearer zero than hero, so I don’t feel qualified to say who is right (even if I am cheering Gary Marcus and his predictions on!). The area I do know about is the writing process itself. In reflecting on what I do and how I do it, I am sceptical about AI’s ability to replace creative individuals. Here’s why:
The emotional system

When writing, I consider myself to be judging ideas using two systems – the rational (does what I’ve written make sense? Is it similar to what I’ve seen before?) and something a bit more mysterious – the emotional system. “How does it make me feel?” And, most importantly for a comedy writer: “Does it make me laugh?”
Large Language Models have been trained on vast quantities of data and are incredibly adept at finding the “best” next word. I've yet to find an obvious grammatical error or a sentence that doesn't flow. And when you research how they work, you can see why. Words and their relationships are mapped across many dimensions. Looking into this “semantic space”, ChatGPT knows “The cat purred” is a good output while “I purred the cat” typically isn't.
Some maintain this is simply probabilistic. It's guessing – brilliantly – what the best next word should be. Others say LLMs are more sophisticated and possess some deeper level of comprehension.
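For readers curious what “guessing the best next word” means mechanically, here is a deliberately toy sketch in Python. It is not how a real LLM works (those use neural networks with billions of parameters); it just shows the probabilistic idea using a hand-built word-pair table with invented probabilities, scoring “The cat purred” above “I purred the cat”.

```python
import math

# Invented probabilities of one word following another, for illustration only.
# A real model learns these relationships from vast training data.
bigram = {
    ("the", "cat"): 0.20,
    ("cat", "purred"): 0.10,
    ("i", "purred"): 0.0001,    # people don't usually "purr" things
    ("purred", "the"): 0.0001,
}

def log_score(sentence):
    """Sum the log-probability of each adjacent word pair.

    A higher (less negative) total means the sentence looks more
    like the patterns the table has 'seen'. Unknown pairs get a
    tiny default probability.
    """
    words = sentence.lower().split()
    pairs = zip(words, words[1:])
    return sum(math.log(bigram.get(p, 1e-6)) for p in pairs)

print(log_score("The cat purred"))   # higher score: plausible
print(log_score("I purred the cat")) # much lower score: implausible
```

The point of the sketch is only that plausibility falls out of learned statistics, not understanding – which is precisely what the two camps above are arguing about.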
In either case, what they don’t currently have is access to an emotional system. An LLM can write the sentences I wrote at the beginning of this piece, but it cannot intuit how it might make a human feel.
I write “currently” because I don't think our emotional system is the castle beyond a moat that AI cannot cross. The emotional system, too, for all its mystery, has an objective basis – in nerve endings and hormones and so on. Again, “the technology already exists”. Humans could train machines to predict how people feel when shown AI's outputs. For example, by having people rate AI's jokes or its poetry.
Or we might find we’ve granted future AI personal assistants some access to our emotions via our heartrate, blood pressure or hormone levels. Or perhaps AI itself develops its own equivalent of an emotional system.
Yet, as things stand, AI bots do not have this emotional system. And, independent of the quality of what AI produces, this seems to matter to people. When someone is shown an AI creation and asked to rate it, their rating goes down if they're then told it was generated by a computer. There will always be the sense it's somehow fake. If you watched a TV series you knew was written by AI, would you fully engage with it or would you think it was no more than a Potemkin village of hollow men and women?
100% human

I think this AI/non-AI divide will become starker. It’s why I’ve just chosen to emphasise my content is “100% human”. I even got a graphic designer to do artwork for this. He’s 100% human too. I expect such labels to become more popular. For writers and designers, it could become the equivalent of “100% organic” or “free from additives”. Unless and until computers can achieve genuine intelligence or emotional understanding, we will want to know that an artefact was created by someone not something.
We might even hold back our full engagement until we know something was 100% human rather than the stirrings of a data centre that's one swing of a baseball bat away from death.
So, what prompted me to write this? It wasn’t an AI bot that had predicted what would be popular. And it never will be. It was the pure joy of trying to structure a set of thoughts in a way that might be enlightening or entertaining. It was for my own pleasure. And, I hope, yours too.

If you feel you’d like some professional support with your speech from a five-star rated writer (Trustpilot), why not click below?
