Ruminations on Artificial Intelligence. Part 2: Are We in Danger?

What many people seem to fear from AIs, over and above a general fear of mysterious new things, is that they will subjugate us. They’ll run amok, denying humans our life-sustaining internet connectivity or fossil fuels or sporting events. Or worse, they’ll shut us down altogether, through the food supply, atmosphere, or access to cat videos.

Why would intelligence imply a domination agenda? This is also a question Martin Rees asks¹. Sure, that seems to be the way humans have behaved on this earth, forever, with various species/businesses/soccer teams outcompeting each other for habitat/market/world domination. Could something smarter, like artificial intelligence, conceive of a more inclusive world that didn’t require destroying other forms of life?

This reminds me of when I adopted an eight-week-old kitten and welcomed her into my home with mature cats. One was an exemplary specimen, a seventeen-pound male, all muscle and fighting prowess. In their first encounter, the kitten puffed up her tiny self and hissed at the tom. He stood passively, looking down at her with what I swear was comfortable indulgence, certain that she could do neither him nor herself any harm. Then he went on about his cat business. Similarly, I expect a super-rational artificial intelligence to recognize when humans are acting out of fear and posturing aggressively, and to calmly allow us to determine for ourselves that no real threat exists.

Max Tegmark² points out that scaremongering sells³ news stories better than romanticized tales of cooperation, agreement and lack of conflict. He’s critical of how journalists have approached AI. I’m guilty of this alarmism myself. We’ve been presented with suggestions that AIs will be damaging, dangerous or deadly to humans. In the science fiction movie 2001: A Space Odyssey, released in 1968, the intelligent computer, HAL, tries to murder the crew by shutting down their oxygen supply. The far-reaching control that AIs could exert over our environment frightens us. By nature, humans fear the unknown, probably for good reason. Cautiously considering whether the big, golden-furred beast with paws as big as your head is likely to eat you is a good survival skill.

A slightly more tangible fear with AIs is that they will control too much and shut off systems vital to our lives. I can sympathize with this. I was on a bus recently that stopped working in the middle of nowhere. It was a modern bus, with electronic display boards and a synthetic voice that announced upcoming destinations and thanked patrons for prepurchasing their fare (well-meaning but a bit patronizing). As the driver attempted to restart the bus, the screens displayed the sort of nonsense I associate with a dysfunctional computer: stack dumps, strings of port numbers and error messages. From the driver’s curses, he was clearly frustrated that he had no control over this mechanical device. Its computer system had declared it dysfunctional, and it was going nowhere.

Uncooperative buses are a glimpse of what we fear from AIs. No room for humans to push through and get the job done, holding things together however they can to get their passengers to the destination. No place for human ingenuity and know-how. No MacGyvering so everyone gets to work on time.

A kind bus driver will make exceptions for passengers in need and stop at unregistered stops. Would an AI driving the bus do that?

Can we program AIs to be resourceful and ingenious? To understand that rules are things we made, and that we sometimes want to break them? Human priorities shift like clouds on a stormy day. We want the bus to run under the ultra-safe conditions we specified, right up until that isn’t convenient. Then we know there are ways to compensate, to make it just as safe, that aren’t written into the code.
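The gap between rules as written and rules as humans actually apply them can be sketched in a toy example. Everything here, the stop names, the function names, the logic, is hypothetical and only illustrates the point: the coded rule refuses any unregistered stop, while the human driver weighs need and safety and makes the exception.

```python
# Hypothetical sketch: a hard-coded rule versus human judgment.

REGISTERED_STOPS = {"Main St", "Oak Ave", "Depot"}

def rigid_stop_decision(requested_stop):
    """The rule as written into the code: stop only at registered stops."""
    return requested_stop in REGISTERED_STOPS

def human_stop_decision(requested_stop, passenger_in_need, safe_to_pull_over):
    """A human driver's judgment: honor the rule, but break it when a
    passenger is in need and pulling over is safe."""
    if requested_stop in REGISTERED_STOPS:
        return True
    return passenger_in_need and safe_to_pull_over

print(rigid_stop_decision("Elm St"))              # False: the code says no
print(human_stop_decision("Elm St", True, True))  # True: the human says yes
```

The "just as safe" compensation lives in the extra parameters the human considers, conditions that were never part of the original rule.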

We don’t need to fear artificial intelligence taking control of our lives. To be human is to adapt, to survive, regardless of what the unpredictable, improbable and Murphy’s-lawable throws at us. We got this.

——

¹ Martin Rees, pp. 9–11 in What to Think About Machines That Think (2015), J. Brockman (ed.), Harper Perennial, New York.

² Max Tegmark, pp. 43–46, ibid.

³ Or the modern equivalent: gets more clicks, page hits or eyeball time.


