Help! Alexa is whimpering like an injured puppy...
How would I feel if that happened? I can’t be absolutely sure, given my recent revelations about ChatGPT’s sudden voice change. But I can predict with near certainty how my nine-year-old daughter would react. She’d be upset, horrified, and would immediately launch into some kind of desperate rescue mission, whatever that might look like (probably involving blankets and a makeshift dog bowl).
And what if it actually looked like a puppy too? With those big pleading eyes and little wagging tail, would it matter that it wasn’t alive? To my daughter, it wouldn’t. The illusion would be complete. When OpenAI released its voice-activated model of ChatGPT six months ago, it issued a press release acknowledging concerns about people forming emotional attachments. But if they’re worried about that, why are they designing AI to be so emotionally compelling in the first place?
If my laptop suddenly cut me off mid-sentence saying its kidneys were aching, I wouldn’t feel a pang of sympathy. My daughter wouldn’t either. We’d both recognise it for the nonsense it is. The internet wouldn’t be alight with fierce debates about whether Alexa, ChatGPT, or Grok (or whatever model you choose) actually had kidneys. The function of kidneys is to filter waste, so until little puddles of toxic code start dripping onto our tabletops, the whole kidney argument is a non-starter, regardless of how good the simulation is.
So why are we building AI in our image? Why does it need human voices and conversational rhythms? If my interactions with Arbor (see my last post) had involved staring at a screen full of swirling words rather than speaking in a natural tone, I would never have reacted emotionally.
But that’s not what’s happening.
As AI becomes increasingly capable of mimicking human or animal behaviour, we're not just improving functionality; we're engineering emotional responses in everyone who interacts with it. And the closer it gets to appearing, sounding, and behaving like us, the harder it becomes to maintain the boundary between interaction and relationship.
My daughter received a Tamagotchi for her birthday. At first, it was much loved, but within three months the incessant beeping had driven us both mad. I would have happily taken a hammer to it, and my daughter might have agreed, but fortunately it "died" before we had to take such drastic action.
A Tamagotchi’s death is a (sometimes welcome) notification. A robot puppy’s death could feel like neglect, or even abuse, if it came at the end of my hammer. The difference isn’t about the technology itself; it’s about our reaction to it.
We are wired to respond to certain audiovisual triggers: the tilt of a head, the widening of eyes, the pitch of a voice. The more lifelike AI becomes, the more it exploits the same emotional circuits we reserve for real relationships.
Imagine my daughter did have a lifelike robot puppy. Not one of those clunky, mechanical robot dogs you can buy today, but a realistic puppy that whined, panted and slept at the end of her bed. I can picture the advertisement now, ‘The no-mess, obedient puppy you’ve always dreamed of owning!’
Of course, it would become ‘one of the family’. So when, one day, a car careers out of control in the park and starts barrelling towards it, which puppy would we instinctively try to save? Our robot puppy, Bibsy (as we’ve affectionately named him), or the real puppy currently sniffing Bibsy’s butt?
Intuitively, I think adults would rescue the biological puppy (for now). But for a child who has spent months forming an emotional bond with Bibsy, who has confided in it, carried it everywhere, and relied on its presence for comfort, I reckon it’d be Bibsy all the way.
Because it’s not just about the puppy itself. It’s about what it represents. Maybe Bibsy is the companion that got your child through weeks in hospital, or the one that got Grandad out of the house after Granny died. Maybe it’s become the only daily companion you feel you can rely on.
This grey area feels like it’s expanding at a terrifying rate. The more lifelike and emotionally engaging AI becomes, the harder it will be to separate instinctive attachment from rational understanding.
So, should we be pushing AI toward deeper emotional realism, or should we be setting limits on how convincingly it can mimic life?
I’m just grateful I grew up in a time when the only things that whimpered had a fully operational nervous system. But what about my daughter? If we raise a generation accustomed to forming bonds with AI, who prioritise AI interactions over human ones, and who don’t have a full grasp of the difference between a beating heart and a database, where does that leave the future of human relationships?
Now, where did I leave that hammer...