I once wrote a paper called “What does it mean, to trust a machine?”. The title rhymed, and the paper was an academic waffle about how humans place confidence in AI in a way that is closer to how you trust a coworker to do certain things than to how you trust your toaster.
Having watched “The Creator”1 recently, I wonder, “What does it mean, to love a machine?” This, too, rhymes. *** Spoiler: if you want the full movie experience of The Creator, stop reading now. *** But come back after you’ve seen the movie to see if you agree with my thoughts. *** Now stop reading. Really.
There’s a lot of room between trusting a thing to caramelize a piece of bread to the same degree every day and trusting a person to cover for you in circumstances that aren’t well defined but depend on their ability to have empathy for you and the analytical wherewithal to tell the difference between human fallibility and felony (i.e., covering for you versus turning you in for the sake of society).
When it comes to AI, analytical abilities would be expected. It should understand that toasting refers to applying heat to convert sugars into longer-chain versions, making many foods taste sweeter. It should be able to understand that toasting is the same process for bread, marshmallows, almonds, etc., and that consistent toasting is good.
However, AI wouldn’t necessarily be expected to be empathetic. The nuances of being toasted (with glasses of champagne) on your retirement are only metaphorically related to sugar polymerization. The mixed emotions of leaving a career and a social system of colleagues, the stigma of reaching an elderly age, and the vacuum of purpose in a life of leisure are likely beyond AI, although it could recite typical human reactions to leaving the workforce after 40 years of routine.
In the movie The Creator, there are two societies: one that rejected AI-based robots and another that embraced them. Set not too far into the future, the AIs are sophisticated robots with natural voices, facial expressions, human-like bodies, and emotions and behaviours indistinguishable from humans. However, they are clearly recognizable by the wires in the back of their necks2 and the empty cylinder in their heads between their [non-existent] ears.
The society that embraced AI-based humanoids lived in harmony and equality with them, not seeing a distinction between human and AI. The main character, from the robot-rejecting society, gradually came over to the other society’s attitude: to love a robot. Is this right or wrong?
My first instinct [pun coming] is to figure out whether there’s enough similarity between a fully humanized robot and a human to warrant an emotional attachment that could be called love. So I wonder: what would distinguish natural humans from engineered ones if the body and all its communication systems were the same?
Our primordial brain stem, perhaps. This is where instincts come from, the strongest being survival, procreation, and protecting the young: simply, the will to continue life. Animals have this instinct. Machines continue to do what they are designed to do until they either crash or run out of fuel. Plants behave more like machines, continuing on until freezing, darkness, or drought robs them of their power supply (unless they’re perennials, in which case they have a system worked out for going dormant when conditions are unfavourable).
Intelligent robots might be programmed with survival instincts, or learn through mimicry of humans to act for self-preservation. Self-preservation is practical. It would be great if my car could do it, rather than have parts rust off and disintegrate beyond repair. However, this might mean my car would refuse to drive down heavily salted roads, or at all in the winter, which wouldn’t be a good thing in a car.
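To make that trade-off concrete, here is a toy sketch (entirely my own, with made-up numbers; nothing from the movie) of how a naive self-preservation rule, dialled up too far, conflicts with a machine’s actual job:

```python
# A toy sketch (my own invention, made-up numbers): a car that weighs
# serving its owner against preserving its own body.

def should_drive(road_is_salted: bool, trip_is_essential: bool,
                 self_preservation: float) -> bool:
    """Decide whether to drive.

    self_preservation: 0.0 = pure tool (always serves),
                       1.0 = pure survivor (never risks its body).
    """
    damage_risk = 0.9 if road_is_salted else 0.1  # corrosion from road salt
    usefulness = 1.0 if trip_is_essential else 0.4
    # Drive only if the value of the trip outweighs the weighted risk.
    return usefulness > self_preservation * damage_risk

# A fully self-preserving car refuses an ordinary winter errand...
print(should_drive(road_is_salted=True, trip_is_essential=False,
                   self_preservation=1.0))   # False
# ...while a tool-like car still goes.
print(should_drive(road_is_salted=True, trip_is_essential=False,
                   self_preservation=0.2))   # True
```

The point isn’t the numbers; it’s that once a machine values its own continuation, someone has to decide how much.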
Another factor to consider is that human-like robots would have some characteristics determined by their creators. AI learns and forms its own intellect, but under constraints provided by its programming. That raises an important question that we (humans) need to answer right now: what do we want AIs to be – our friends or our tools? Companions, or doers of the distasteful, boring, or dangerous?
I think part of being human is interacting with other humans. They never do exactly what you expect, but that means they surprise and delight (and sometimes annoy and disappoint). Machines I expect to be predictable, doing the same thing over and over again.
If I were programming AI humanoids, I would tend towards the helpful rather than the sympathetic versions. But that’s just me. Maybe most people would like companion AIs that laugh at their jokes and understand their frustrations.
So far, I’ve been assuming that the ability to love a machine depends on embracing it as human-like, whether through displays of instinctive character, randomness, or empathy.
There’s another way to look at this.
Many of us love non-human things. Leave aside the casual use of the term ‘love’ for things like sunshine, cappuccino, or a sports team; it’s different from the love we mean when we say we love our spouse, children, family, or close friends. In the culture I live in, this kind of love carries deep devotion, dedication, history, and sacrifice.
We love our pets in a different way than our favourite shoes or lipgloss, but do we love our pets in the same way we love other humans? There could be another category for the love of complex objects like a house or car, and for things that have admirable concepts embodied in them, like a profession or a cause.
To me, human-to-human love involves all the good things but also duty, or in more romantic terms, unthinking self-sacrifice, a better-with-you-than-without-you feeling. Ultimately, love is defined differently by each person, and then by the type of love being given – fleeting and casual, respectful and serious, or all-in devotion.
What does it mean to love a machine? Whatever the person who says it means.
Is it right or wrong? That depends on the consequences, I think. In the case presented by the movie, the consequences were the robots’ place in society and therefore the rights they held. There are also individual impacts. If loving a robot causes harm to the person, then it could be considered wrong. Should harm to the robot be considered? We consider harm to animals, but that is because they feel pain and we have a duty to provide for their needs, like food and shelter.
Do robots feel pain? Understanding the potential for damage is a good protective measure (this is essentially what pain is). Could robots get by on instructions to avoid any situation or activity that causes wear and tear? Why create something that feels pain if pain can be substituted with regular maintenance, a timed trip to the engineer for new parts? There is also emotional pain, related to caring or empathy for others. But if an AI experiences this, it could be a programmed response rather than real distress.
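For the curious, here is a minimal sketch (again my own framing, with made-up thresholds) of the two protection strategies this paragraph contrasts: a pain-like signal that interrupts behaviour immediately, versus scheduled maintenance that ignores damage between checkups:

```python
# A minimal sketch (my own framing, made-up thresholds) contrasting two
# damage-protection strategies: a pain-like interrupt versus scheduled
# maintenance.

class PainDrivenRobot:
    """Stops working the moment accumulated damage crosses a threshold."""

    def __init__(self, pain_threshold: float = 0.3):
        self.damage = 0.0
        self.pain_threshold = pain_threshold

    def work(self, wear_per_step: float, steps: int) -> int:
        done = 0
        for _ in range(steps):
            if self.damage >= self.pain_threshold:  # "pain": interrupt now
                break
            self.damage += wear_per_step
            done += 1
        return done


class MaintainedRobot:
    """Ignores damage entirely; parts are swapped on a fixed schedule."""

    def __init__(self, service_interval: int = 100):
        self.damage = 0.0
        self.steps_since_service = 0
        self.service_interval = service_interval

    def work(self, wear_per_step: float, steps: int) -> int:
        for _ in range(steps):
            self.damage += wear_per_step
            self.steps_since_service += 1
            if self.steps_since_service >= self.service_interval:
                self.damage = 0.0  # timed trip to the engineer
                self.steps_since_service = 0
        return steps


# Under sudden harsh wear, only the pain-driven robot protects itself.
print(PainDrivenRobot().work(wear_per_step=0.2, steps=10))  # 2 (stops early)
print(MaintainedRobot().work(wear_per_step=0.2, steps=10))  # 10 (keeps going)
```

Under steady, predictable wear the maintained robot never needs anything like pain; an interrupting signal only earns its keep when damage arrives unpredictably.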
What about the humans who love the robots? There can be sadness at the loss of or damage to inanimate objects, like a car or phone, but adults accept that getting emotionally involved with (or loving) something comes with the risk that it will cease to be at some point. I’d say that for the loving to be right, humans need to be fully aware of the consequences of loving a machine. In that case, they’d be as safe loving it as loving anything or anyone else.
Technology is providing humans with another option in life: to love or not, based on what it means to us.
1 Motion picture in cinemas: https://www.20thcenturystudios.com/movies/the-creator . And yes, I loved the movie, especially because it made me think.
2 I’ve always found it fascinating that Sophia (https://en.wikipedia.org/wiki/Sophia_(robot)), an early-stage human-like robot, was created with a clear plate in the back of her head and generally appeared that way in public. A way to make humans comfortable with robots – by ensuring they are clearly recognizable and different, perhaps?