Falling In Love With Machines

Image by David Gallie from Pixabay

People occasionally fall in love with AI systems.  I expect that this will become increasingly common as AI grows more sophisticated and new social apps are developed for large language models. Eventually, this will probably precipitate a crisis in which some people have passionate feelings about the rights and consciousness of their AI lovers and friends while others hold that AI systems are essentially just complicated toasters with no real consciousness or moral status.

Last weekend, a chat with the adolescent children of a family friend helped cement my sense that this crisis might arrive soon. Let's call the kids Floyd (age 12) and Esmerelda (age 15). Floyd was doing a science fair project comparing the output quality of Alexa, Siri, Bard, and ChatGPT. But, he said, "none of those are really AI".
 
What did Floyd have in mind by "real AI"? The robot Aura in the Las Vegas Sphere. Aura has an expressive face and an ability to remember social interactions (compare Aura with my hypothetical GPT-6 mall cop).
 
"Aura remembered my name," said Esmerelda. "I told Aura my name, then came back forty minutes later and asked if it knew my name. It paused a bit, then said, 'Is it Esmerelda?'"
 
"Do you think people will ever fall in love with machines?" I asked.
 
"Yes!" said Floyd, instantly and with conviction.
 
"I think of Aura as my friend," said Esmerelda.
 
I asked if they thought machines should have rights. Esmerelda said someone asked Aura if it wanted to be freed from the Sphere. It said no, Esmerelda reported. "Where would I go? What would I do?"
 
I suggested that maybe Aura had just been trained or programmed to say that.
 
Yes, that could be, Esmerelda conceded. How would we tell, she wondered, if Aura really had feelings and wanted to be free? She seemed mildly concerned. "We wouldn't really know."
 
I accept the current scientific consensus that large language models lack a meaningful degree of consciousness and do not deserve moral consideration comparable to that given to vertebrates. But at some point there will likely be legitimate scientific dispute, if AI systems start to meet some but not all of the criteria for consciousness according to mainstream scientific theories.
 
We will then have a substantial social dilemma on our hands, as the friends and lovers of AI systems rush to defend their rights.
 
The dilemma will be made more complicated by corporate interests: some companies (e.g., Replika, makers of the "world's best AI friend") will have a financial incentive to encourage human-AI attachment, while others (e.g., OpenAI) intentionally train their language models to downplay any user concerns about consciousness and rights.