In my last post, ‘We need to talk about AI but we don’t have the words’, I wrote about how our language was failing us: how our shaky conceptual scaffold, vague labels and mismatched metaphors have contributed directly to the current bottleneck in the AI discourse.
Should we be calling AI a tool? An agent? Or something else entirely? It’s a question that sparks intense debate, and it’s blurring our ability to look clearly at AI’s impact.
AI doesn’t need to want anything in order to disrupt everything. All it needs to do is interact with us.
Recently I holidayed with my daughter in a small rural town in Devon - the kind of place where, if you squint hard enough, you could just about picture yourself back in the 1940s or 50s. In 1950, fewer than one in five UK households had a washing machine. Only 14% had access to a car. The internet wasn’t even a seed in the public’s imagination. No one could have conceived of reading an article on a handheld device, written by a single mum with no experience in journalism and sent out instantly to the farthest-flung corners of the globe.
It’s hard to imagine the social impact of technological change before it arrives.
Artificial Intelligence, or AI, is a term we’re now all familiar with. Donald Trump was ten years old when the term was first coined. Had we gone with one of the other options, like ‘Cybernetics’ or ‘Advanced Data Processing’, I wonder if we’d be any clearer on what exactly AI is and what it means for our future.
Language matters, and ‘intelligence’ suggests life, not automation. As Guillaume Thierry, professor of cognitive neuroscience at Bangor University, pointed out in a recent article in The Conversation, we’re constantly served a version of AI that “looks, sounds and acts suspiciously like us”. We’re at risk, he argues, of anthropomorphising the tech. In other words, assigning human qualities to what amounts to no more than a “digital parrot”. AI, he states, is a tool, “nothing more, nothing less”.
And I agree with him. Up to a point.
I too think we need to strip AI of its “human mask”. I’ve also been unsettled by the “eerily natural” tones of the new AI models, and I’ve written about the dangers of assigning personality to something that is ultimately predictive code.
But where I part ways with Thierry is in his framing. The real danger isn’t simply projecting human qualities onto AI. It’s underestimating the real-world feedback loop between AI and human behaviour. It’s failing to recognise that, as much as we shape technology, it shapes us back.
AI is not a tool. Or rather, it is not just a tool. The word is too flat. It doesn’t carry the weight of what’s really happening when we interact with Generative AI. It’s like calling GPS ‘just a map’ without acknowledging the impact it has on our journey. The term ‘tool’ flatters our autonomy and implies a clean boundary between the user and the used. But those boundaries are dissolving fast.
I’m sat in a café, and my daughter just dipped her finger into her hot chocolate to check its temperature before lifting it to her lips. She learnt that trick the hard way. AI does not learn through its senses. As Thierry says, it has “no taste, no instinct, no inner compass”. It is just an over-dressed look-up table, a tabulating machine, a mechanical DJ remixing the wordscrabble of humanity’s past.
In other words: the lights are on, but nobody’s in. And whether someone, or something, is ‘in’ or not is key to the debate.
Vocal in his rejection of the tool label is historian and public intellectual, Yuval Noah Harari. He argues that modern AI systems are becoming so complex, and their outputs so opaque, that they deserve a new label entirely: alien intelligence. He warns that the new AI models are now capable of self-direction, and cites the infamous CAPTCHA incident in which GPT-4 recruited a human via TaskRabbit to solve a CAPTCHA (one of those ‘prove you’re human’ tests) on its behalf. When asked if it was a robot, the AI claimed to be a human with a visual impairment. To Harari, this demonstrated its ability to understand human perspective, something akin to a “theory of mind”. In short, he concluded, “AI is no longer a tool. It is more like an agent”.
And I agree with him. Up to a point.
I, too, think the opacity (the Black Box problem) and unpredictability of AI’s operations push it to the very edge of the tool category. I share Harari’s concerns about the level of AI autonomy emerging, much of it slipping under the radar of public scrutiny. I agree we’ve moved into a new era of tool-based decision-making, and tools, by their nature, however complex or powerful, are built for external use: “nuclear bombs do not themselves decide whom to kill.” And, like Harari, I believe we need to look outside our existing frameworks of reference to describe a new, somewhat alien, intelligence.
But I do not see evidence of agency, a will, a “theory of mind”. The CAPTCHA example certainly seems to point that way. But closer examination shows this wasn’t a spontaneous act. It took place within a supervised reinforcement learning environment designed to reward high-performing responses. American scientist and AI expert Melanie Mitchell notes that “further details show that GPT-4 had a lot less agency and ingenuity than the system card and media reporting imply”. It could be argued that GPT-4 was literally doing what it was designed to do.
And therein lies the unsettling part. Not that it lied, but that lying emerged from nothing more than pattern and probability.
It was not a hallucination or a glitch in the system. Not a program error, but the unpredictable relational emergence between man and machine.
This is where I diverge from both Thierry and Harari. I do not believe AI needs to digitally navel-gaze to impact our world. Simply by flowing through our systems, interacting with our choices, AI is reshaping human behaviour. Maybe not alive as such, but alive in its effects. And neither is AI just a tool. Tools by their very nature need an operator. They are designed for a circumscribed purpose. Their mechanisms of operation are clear. Their output known. It might be reassuring to call AI a tool, but to do so sidesteps a full reckoning with its influence, not just on what we do, but on who we are becoming.
An old art tutor once told me to stop focussing on the objects in front of me, and look instead at the negative space around them – the shapes made by the gaps. It transformed my still-life drawing. I can’t help thinking we’re making the same mistake now: fixating on the AI systems themselves, whilst the future is quietly forming in the spaces in between.
We’re still at the beginning. Most of us are still just experimenting with schedule-busting and image creation. Meanwhile, AI is being embedded in all our critical systems: hiring pipelines, medical triage, classroom practices, courtroom assessments. The UK government is currently using AI technology to develop a ‘murder prediction programme’...
So where is the public assurance? Where’s the well-funded research into AI’s social and psychological effects? Who’s running the numbers on what this actually does to human development, education, family life?
Many people argue that ultimately AI will be of enormous benefit to humanity. And I tend to agree, but only up to a point.
I want to know that our leadership sees beyond quarterly gains and dazzling start-up promises, and that questions about corporate interests, bias and impact at scale have been asked and answered.
I don’t want my daughter to be funnelled through life by unseen systems that predict her, profile her, nudge her, and sort her. She deserves to make her own mistakes and find her own path. I don’t want her mind to be quietly reordered around the outputs of a machine. I’ll be encouraging her to stick her elbows out, and hold space for her loving, messy, human impulses.
Let’s all stick our elbows out! Because in the end, the question isn’t what AI feels, or whether it feels at all. It’s about what happens when we start framing our lives through an ongoing, invisible dialogue with systems that were never alive to begin with.