Arnold Trehub states ‘Machines cannot think because they have no point of view’¹. Trehub cleverly links opinion and point of view. I now intuitively see how point of view, or a unique perspective, is necessary for opinion.
I’ve thrashed around on my keyboard for weeks, trying to articulate how human opinion differs from information provided by AI. I have no justification for how I know they’re different, but I do. Because I’m human. Humans have a natural tendency to draw conclusions, to have a point of view, based on whatever amount of information we have. AIs do not.
Does having an opinion make us human? No, it’s the other way around. Because we are human, we have opinions, derived from the way we process information and draw conclusions from what we’ve collected. For the most part, humans work by adding each new bit of information on top of whatever they’ve already picked up, while an AI has the capacity to catalogue each fragment of data until the entire story emerges. Thus, for people, how we incorporate each new experience depends on our previous experiences.
We’ve evolved the capacity to learn on the background of animal survival instincts. Are big dogs to be feared or petted? – depends on your past experience. Was your childhood best friend an Irish Setter, or was the first horror movie you watched Cujo, a story of a rabid St. Bernard terrorizing a family? Each of us has decades of history – song lyrics, movies, people, places, things, weather – but our memories work in mysterious ways: smashing things together, processing them through the filters of human optimism, reprocessing until we’re convinced things were wonderful back then, and serving it all up by random recall.
No AI would proudly claim it recalls some things and not others, glorifies the past, or has random memories pop into its processor to distract it.
Makes it sound like fun to be a human, doesn’t it?
I took a stab at calculating how different each person’s life experience is from the next person’s and got to infinity before I could write anything down².
Clearly we have our own unique set of experiences. One AI would be expected to come to the same conclusion as another if they were given the same set of experiences, even if it was in a different order. Consider how different the opinions of two 35-year-old coworkers might be about the first snow of the year if one lived in a tropical climate for the first 34 years of their life and the other had shovelled lengthy driveways from the age of 7.
In addition to the historical context, humans interpret each event by how it will affect us. If the temperature goes down – does that mean you’ll budget more for heating, blanket the garden, or start a promotion on skis in your store? Do changes in the GDP of a neighbouring country make you plan a vacation, watch the stock market, or pull up cat videos?
We form our conclusions on the basis of what evidence we have. If it’s hot today, was hot yesterday and was hot when you were waiting in line to buy gas a few days ago, it’s been a hot summer. An AI would collect data from the past month or months, calculate means and variances, and then compare to the past year, decade or century before deciding if it’s been a hot summer.
Humans process information as though they’re building a pyramid. Each new experience is interpreted on the background of all the previous ones (or the ones we remember). AIs process information like Tetris. A new piece of information is allocated to a column of relevance, and a conclusion is only drawn when the column is full (i.e. there is sufficient data to make a statistically valid conclusion).
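The pyramid-versus-Tetris contrast can be sketched in a few lines of toy code. This is only an illustration of the two styles described above – the function names, the 0.3 weight, and the 30-sample threshold are all my own assumptions, not anything from the essay:

```python
import statistics

def human_style(prior, observation, weight=0.3):
    """Pyramid: fold each new observation into whatever belief we already hold."""
    return (1 - weight) * prior + weight * observation

def ai_style(samples, min_samples=30):
    """Tetris: withhold judgement until the column is full, then conclude."""
    if len(samples) < min_samples:
        return None  # not enough data yet: no conclusion
    return statistics.mean(samples)

temps = [31, 33, 30, 34, 32]  # five hot days
belief = 25.0                 # what we already expected summer to feel like
for t in temps:
    belief = human_style(belief, t)

print(round(belief, 1))  # the human already has a running opinion
print(ai_style(temps))   # None: the AI still refuses to decide
```

The point of the toy: the human has a usable (if biased) opinion after five observations, while the statistical approach returns nothing at all until its evidence threshold is met.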
Why do we constantly form opinions, when we know we don’t know everything about the topic? Because we have to. We don’t have the luxury of waiting until we’re certain what the weather or traffic is going to be like before we go to work. We put on a summer dress and take the highway because it’s June and the city streets tend to be under construction in the summer. We have to give a presentation to important clients.
We don’t seek out all possible information before we decide. We get on with our life, form an opinion, and change our mind later if need be. This sounds like jumping to conclusions or being a bigot, but I’m talking about the human propensity to form a working hypothesis. If we eat a turnip and then projectile vomit, we avoid turnips. Sure, we have only one observation that said food disagrees with us, but we won’t risk it happening again. We don’t need statistical significance to decide the possible outcome is unpleasant and avoid turnips. And we can live without turnips, because our grandfather, who never ate them, lived to be 95.
Can the same be said for an AI? It experiences a sequence of events and learns from each, like us. I expect an AI to be objective, less invested, willing to change its mind with the addition of new data. It would refrain from drawing conclusions with insufficient information. It would seek information on turnips and other factors that correlate with projectile vomiting and longevity before deciding what to eat.
The AI may be more objective, but humans have opinions quicker. Does that make us smarter, cooler, or more adaptable? Humans will have no problem answering that question. AIs might.
¹ ‘What to Think about Machines that Think’ (2015), Brockman, J. (ed.), Harper Perennial, NY, p. 71.
‘What to Think about Machines that Think’ is gobsmackingly good. It’s making me think, ask questions, and learn things I thought I knew about what it is to be one of my kind. And I’m not even a sentient machine. Who knew the place to find out about being a human was a book about artificial intelligence? Although many contributors, such as George Church and Sean Carroll, describe humans as thinking machines.
² I geeked out on semi-math. Here’s what I’m thinking: every human is in a different place – the living room, Antarctica, or primary school – where the lighting may be bright or dim; the weather rainy, foggy, or blowing gale-force winds; we may be alone, with our Mum, or at a football stadium full of Argos fans; we could be a teenager, senior, or babe-in-arms, observing a coronation, an action-thriller movie, a domestic dispute, or a bird building a nest. And so on. Then, the next second, something could change: someone walks into the room, the car stalls, the cat meows, you throw up because you’re pregnant, or there’s an earthquake.
We’ve done two seconds of the calculation. By the time we’re 35, we’ve lived a little over 1.1 billion seconds, so our experiences differ from the next person’s by (however many parameters you’d like to include, but even with just two I can make my point) to the power of 1.1 billion. For fun, I input this into my calculator. The answer is ‘Infinity’. Even if we say it takes an hour for a person to have a different experience, a 15-year-old has lived over 130,000 hours, which is still an ‘Infinity’ of potential combinations different from her BFF who wears the same style of clothes, has the same hairdo and piercings, and speaks in the same idioms.
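The back-of-the-envelope arithmetic above can be checked in a few lines. This is a sketch under the footnote’s own assumption (only two possible parameters per second); the 365.25-day year is my approximation:

```python
import math

# Seconds lived by age 35 (approximate: 365.25-day years).
seconds = 35 * 365.25 * 24 * 3600
print(f"{seconds:,.0f}")  # a little over 1.1 billion

# Even with only two possibilities per second, the number of distinct
# experience-histories is 2 ** seconds -- far beyond what a float
# (or a pocket calculator) can represent, hence "Infinity".
try:
    combos = 2.0 ** seconds
except OverflowError:
    combos = math.inf
print(combos)  # inf

# How big is it really? Count its decimal digits instead of computing it.
digits = int(seconds * math.log10(2))
print(f"{digits:,} digits")
```

The calculator isn’t lying so much as giving up: the true number has over 300 million decimal digits, so overflowing to ‘Infinity’ is the honest summary.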
³ This is the mathematical equivalent of ‘I told you so’. In high school, there was a rumour that QED stood for ‘quite easily done’, although it’s actually Latin – ‘quod erat demonstrandum’ – which would be a good name for a metal band.