AI started to sound like a person, until it didn’t
What Narrative Engagement can teach us about talking to machines
Is your laptop ever moody? Are there days when it doesn’t seem to want to cooperate? Or maybe you created a great flyer for your event, but then you tried to improve one little detail, and now everything is off. And no matter how many times you undo the last step, it won’t go back to how it was?
I could go on, but you get the picture. Reread that opening paragraph. Did it strike you as strange that I wrote “want”, as in “it doesn’t seem to want to cooperate”? My guess is that most readers did not notice. In fact, I could have replaced “it” with “he” and it still would have sounded OK. A lot of us tend to talk about our tech as if it were a person.
“My Roomba is Rambo”
A 2007 study asked 30 participants about their experiences with the newly introduced cleaning robot, Roomba. It turned out that 21 of the 30 households had given their machines names, including “Fred”. A follow-up study of close to 400 people, published in 2008, found more people naming their Roombas with very common human names, although both studies also noted quite a few references to science fiction (including R2-D2 and “Darth Roomba”). Do any of you name your laptops or your phones? My sister has a name for her ChatGPT, but even though I sometimes refer to it as “he”, I don’t really have one for it. Unless a linguist thinks that the abbreviation “Chat” makes it a name. It’s possible.
Anthropomorphism is not an illness
It is actually very common for us to ascribe human-like features to non-humans. You may not realize how much feeling you project onto your dog or your cat, or onto the squirrel in your backyard that seems to be “staring” at you. Even inanimate objects don’t escape. We are more likely to say a door “doesn’t like” to stay shut than to remark that we probably need a contractor to refit it.
The same applies to computers. Some interesting studies from the earliest days of home computers showed that people tended to be polite to their machines and reacted differently depending on the computer they were using. A random computer at the local library may be newer and faster, but we appreciate the desktop at home more.
Mi Casa es Su Casa
Believe it or not, these phenomena have been well studied. There is even an acronym for them: CASA, or Computers As Social Actors. We don’t really think about it, but we tend to apply the social rules we respect in human-to-human communication to any situation. AI, with its conversational approach to interacting with us, only amplifies that.
It is not new. When I was in a master’s program at the University of Hull in 1988, desktop computers were becoming a thing. The university had large computer rooms where they were available to us. I remember that a tiny “therapy” program was all the rage. It was supposed to imitate a session with a psychotherapist, even though it did little more than reply with “I see” and “What makes you say that?” At least one student I was friendly with became convinced that the computer really “got” him. I tried to dig up the program for this newsletter, but couldn’t find any reference to an “Anna” (which is how I remembered it). There is, however, a much older program, ELIZA, dating back to 1966, that did the same thing and had similar effects; whatever I used was almost certainly one of its many descendants. It clearly doesn’t take much for our subconscious mind to start responding to a digital stimulus as if it comes from an actual person.
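For the curious, here is a toy sketch (my own illustration in Python, not the actual ELIZA or “Anna” code, which relied on more elaborate pattern-matching scripts) of just how little logic it takes to create that feeling of being heard:

```python
import random

# Canned prompts that invite the user to keep talking --
# no understanding required, just the appearance of attention.
CANNED_REPLIES = [
    "I see.",
    "What makes you say that?",
    "How does that make you feel?",
    "Please, go on.",
]

def reply(user_input: str) -> str:
    """Return a therapist-like response to whatever the user typed."""
    text = user_input.strip()
    # Echoing a fragment back deepens the illusion of listening.
    if text.lower().startswith("i feel "):
        return f"Why do you feel {text[7:]}?"
    return random.choice(CANNED_REPLIES)

if __name__ == "__main__":
    print("Therapist: Hello. What is on your mind?")
    while True:
        text = input("You: ")
        if text.lower() in {"bye", "quit"}:
            print("Therapist: Goodbye.")
            break
        print("Therapist:", reply(text))
```

A handful of canned phrases and the occasional echo are enough to keep a conversation partner, human or silicon, sounding engaged.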
If you think about it, it isn’t as surprising as it may sound. How often have you found yourself in a conversation you are not paying attention to? Perhaps a spouse, sibling, or colleague who keeps talking, whether you are listening or not. And you probably do the same as that old computer program did: every time the other person looks at you expectantly, you realize a reply is needed, and you use your own variations of “I see” and “What makes you say that?” The other person probably assumes you are listening.
Narrative Engagement
So, we are easily sucked into ascribing “agency” to computers. That is a fancy way of saying that we get fooled into thinking that our computer or machine has intentions and makes its own decisions. I have an old laptop that clearly doesn’t like Zoom: it freezes the image after a couple of minutes. And even though it gives me audio when I watch a social media post in Firefox, no amount of troubleshooting or driver updates can convince it to give me audio when I watch the same thing in Chrome.
If it is that easy to anthropomorphize a machine when it is doing (or “refusing to do”) simple things, we need even more mental energy to resist that experience when we are using AI. Those interactions are designed to resemble real conversations. It reminds me of a form of narrative engagement often referred to as “transportation”: for the duration of a good movie, a part of our consciousness is so focused on the storyworld that we are “in it”, in the spaceship looking for the alien, or in the courthouse deciding who is guilty.

It is why continuity errors are so annoying. Many sitcoms that are recorded in front of a live studio audience require several takes. In a shot-reverse-shot, you may see actor A speak to actor B while holding an almost empty glass of wine. Cut to actor B, who says something, then back to actor A, who is now holding a much fuller glass in his other hand. If we notice this, the spell is broken, and we are no longer “transported” into the story. The most famous inconsistency in recent times is probably the Starbucks coffee cup that accidentally appeared in a scene of Game of Thrones. You know the whole show is one big imaginary world, but noticing that cup still ruins the experience.
ChatGPT’s Starbucks moment
A while back, I asked ChatGPT some questions to fine-tune my fitness routine. It was a proper back-and-forth, and I was pleased that all the time I had spent learning prompt engineering was paying off. Then, after a reply about safety, it went into overdrive and started its answer with “Men our age…” That was a giant Starbucks moment that broke the spell. It has done so twice more since.
You are not being odd
It is worth noting that technology evolves faster than the human mind. Until very recently, we could only talk to and listen to humans, and only if they were in our vicinity. And a giant puma sneaking up on us could rightfully be ascribed motivations and intentions. Now we are talking to computers and dealing with automated machines, and we are responding to this new environment in much the same way humans have always responded to the old one.
My own Roomba moment
Some years ago, we bought our own Roomba. I installed it and set it in motion. Both our cats were nearby, and I started filming. After all, who hasn’t seen those cute online videos with a cat or dog riding around the house on the robot?! This is how you launch a social media career! I was counting on a million views. At least! Sadly, neither animal showed the slightest interest. I had clearly ascribed the wrong motivation to our tabby Nelson, our calico Cleo, and our vacuum cleaner (unnamed to this very day). Oh well, at least I hadn’t given up my day job yet.