How Smart Do You Want Your Things to Be?

In the not too distant future, all things will be smart. All inanimate objects will have sensors that collect information, and artificial intelligence to analyze and react to the information.

A first-generation example is the motion sensor that has the sense to turn on the lights when people, or your rottweiler, enter the room. Getting a little more sophisticated, there are devices like the Nest that learn from your habits and adjust the heating or a/c to create the most comfort in the most economical way. In the future (like next week), you may have a fridge that records the comings and goings of food over time and, after it’s learned enough, places an order with the local food deliverer [probably Amazon] to restock all your staple foods and suggests a few new offerings it has calculated you might like based on your love of strawberries, mustard and corn chips. [shudder]

Everything, absolutely everything, will be smart. And helpful, in an artificially intelligent kind of way. In the spirit of embracing the future and getting the most out of it, I have a question for you:

What things do you most want to be smart and which do you least want to be smart?

Sure, I want world peace, cures for all hideous diseases, healthy, cheap food for everyone, but even AI is unlikely to make those things something Amazon can deliver. And on the big-picture downside, I don’t want my vacuum cleaner to take over ventilators at the hospital, the power grid to decide what temperature it should be in my home, or robots to terminate all human life because there isn’t enough calcium in the compost.

Smart stuff is likely to be more mundane in the near future (like next week and the one after that), so my expectations are lower.

Here are my smart wants:

I most want my electronic devices (which will be all things, including those we don’t consider to be electronic devices right now, like a hairbrush) to be smart enough to recognize me so I don’t have to remember 1,750 different user names and passwords. Of course, I expect perfect accuracy (the hairbrush can tell the difference between me and my daughter, who likes to use it on the dog) and that they won’t let someone who has replicated my fingerprints, retinal pattern, voice or heartbeat have access to my apps (especially my daughter, who is a likely candidate for trying to hack my stuff).

I want smart things to take away some of the pain I now have obtaining secure access to everything.

I least want something interfering in my learning process. I fiddle. I explore. I figure things out by taking a few steps and then pondering how it might work. After several rounds of this, if I complete a task I haven’t done before, I’m proud. This is how life happens for me, whether it’s setting up a website, doing home repairs or starting a company. The last thing I want is an AI chiming in to tell me how to do it at the first sign I’m stumped, or worse, 5 minutes before I realize I’m stumped.

To the average smart thing of the future, my message is: ‘if I want your help, I’ll ask for it’. What I don’t want is a smart-ass thing. A know-it-all thing. Any number of things could provide too much information. A hammer could disapprove of your choice of nail. A pot could have opinions on the temperature applied or the nutritional content of the food it’s required to cook. Your CRM might remind you to call a particular client, or scold you for calling one excessively, ignoring your personal ‘feel’ for how to deal with people.

Overall, I’d like smart things that can tell the difference between challenges that are annoying because they demand unnecessary attention (like where you put down your phone) and challenges that are good things deserving of some thought (like understanding how to manage privacy settings).

Of course, what annoys me may be a learning experience you want. Maybe you want instructions the second you pick up something because you’d rather spend your time watching videos. Or you’re a security buff and don’t want an AI to remember your passwords.

We’re all different, so it will take very smart things to please us all.


The New Trust.

Could it be that digital solutions to everyday activities are making us a more trusting society? In this era of paranoia about big business and big government, rampant nonsense news, and the usurping of reality by the ebb and flow of opinion, is there goodness? Wholesome warmth towards our fellow strangers?

I believe so.

Let’s do a few flashbacks to see how easily we accept today what we strictly controlled in the past. Consider:

Purchasing stuff from a store
Now: Self checkout. No one pays attention as I scan 45 items and place them in the bags I brought into the store myself and have pushed around in my cart for the last 45 minutes.
Then: The cashier can’t remember what the green leafy stuff is and calls for a price check. People in the line behind you glare. Bags brought into the store are subject to search, or an elaborate tagging and stapling routine to ensure nothing could be added to said bags.

Purchasing stuff online
Now: We do it. We deal with vendors we’ve never heard of before, put our credit card numbers into sites that weren’t there yesterday, and correspond with anonymous posters of used items or go meet them in vacant apartments.
Then: Pundits pooh-poohed the idea that anyone would buy goods from an organization they’d never heard of. Amazon would never sell more than books because people wanted to see what they were buying. Early eBay was intimidating to many.

Cashing cheques / transferring funds
Now: If someone emails us money, we decide where and when it goes. We choose the bank account where it’s deposited.
Then: Scrutiny. Showing of ID, comparing of signatures, spelling of the name. Cheques rejected for date infractions or use of coloured ink. Banks closed at 3 pm, funds held, frowns shared.

Paying bills
Now: At some time in the distant past, you knew your account number. So now you can pay the bill. Any amount you want.
Then: Paper bill required. Bottom half confiscated by the bank. Top half stamped, initialed and annotated.

Transit fares
Now: Scan your pass, or buy your ticket at a kiosk or online. Prepare for spot checks.
Then: Buy a ticket at the wicket. Show the attendant when boarding the bus/train. Lose your transfer and pay double fare. Show the attendant on leaving the transit system or pay double fare.

Health benefit claims
Now: Go to the dentist, pharmacist or physiotherapist. Submit online for reimbursement of costs. Have the reimbursement immediately deposited to your bank account. Click ‘Agree’ to the terms and conditions to produce proof of payment if requested.
Then: Fill out forms. Attach receipts. Put them in the paper mail. Wait weeks. Wait months. Get a response indicating you failed to sign the form. Start all over again. Get denied reimbursement because the time limit has expired.

Some of you have never experienced what’s in the ‘Then’ column. Lucky you.

While much of what’s in the ‘Now’ column adds efficiency and convenience, it also suggests a level of trust that wasn’t there then. People don’t have to prove who they are, or that they’ve bought tickets, been to the dentist, have an account with the gas company, or purchased one bunch of broccoli rather than two. Reciprocally, we’ve learned that most vendors are honest, want to deliver goods to us and really own the electronics they’re selling.

Am I being naive? Or does new technology merely replace all the previous checks and balances provided by the seemingly draconian humans at the bank, the insurance company or the checkout counter? Perhaps emailing money is so fool-proof no one ever makes an error or commits fraud.

Someone’s probably done the math: the added efficiency of not collecting everyone’s proof of payment outweighs the losses from the people who cheat the system. That’s kinda cool in itself. Gives me a warm fuzzy about humans – for the most part, we’re okay.

I’d like to believe we’re evolving into a more trusting society at an individual level. It feels good to be trusted and included, even if it’s by an algorithm or encryption key.


Haptics. Or, Things We Don’t Know We Need Yet.

I just got an iPhone 7. Pretty impressed. It’s like a purring kitten in the palm of my hand. From what I read, I’m not the only one liking the haptics, or physical buzzing and shaking it performs with routine actions, like scrolling through menus or changing settings. A pleasant surprise.

 

Got me thinking about all the technology that sounds crazy when it’s introduced and is broadly embraced some time later. Here are a few examples that popped into my head:

  • an electronic replacement for the yellow pages. And encyclopedias. And maps. Used to be if you needed to find something you got a big book and looked it up. Now we google.
  • one device that holds a thousand or more books in less space than a paperback.
  • the ability to instantly share everything we are doing with literally everyone in the world, via text, photo, or video. (I suspect hologram and virtual attendance are coming.)
  • the ability to pass judgement on all that everyone in the world shares. (Can you imagine a world without ‘like’ buttons?)
  • getting your DNA sequenced, because you can.

I tell my students that a successful business model satisfies an unmet need. Therefore, every new thing that’s adopted into everyday lives satisfies a need in a novel way. Often it’s convenience (electronic yellow pages and electronic reader), or being social (social media), or a quest for knowledge (DNA sequencing).

Another perspective: this article suggests that technology developers, in particular phone, app and social media developers, introduce new features designed to take advantage of users. By manipulating basic emotions such as anxiety and loneliness, these features can leave people addicted to their communication devices. We feed on the responses to our posts, mainlining the likes, shares, and other general good vibes. There’s even science suggesting that notification systems elicit a basic stress response when they go unanswered.

I get it. The tone of a text message literally makes me jump. I have to assume it’s a survival reaction to new information in my environment: my instincts insist I assess it for fear it will devour me if I don’t. (Because text messages have been known to do that.) Funny thing is, I don’t recall such an urgent reaction to a ringing phone back in the day of landlines. Without caller I.D. or voicemail, if a call was missed, you had no idea who was calling about what. We managed to continue living, assuming that if it was important, they’d call back.

Have we been conditioned to over-react to our electronic notifications?

A stated business goal for many device or app developers is improved user experience. This could be code for holding your attention longer. Considering that many social media sites make money from advertising, and that the fee to advertisers rises with the number of eyeballs on the site, there is some logic to why companies want users to spend more time on their sites. But how does increased convenience, which presumably means less time spent doing something, fit into this scheme?

Back to my iPhone. Some of the rationale for the haptics is to simulate the feel of pushing a button (electronic buttons1 have reacted that way since forever, so pushing buttons must be very important to people2). Eliminating push buttons is good for manufacturers because it makes for fewer moving parts, allowing a more reliable device, since moving parts are harder to fix with a software update. The haptics also made the earphone jack go away (a space thing), which has the benefit of allowing less dust into the guts of the phone. The haptic functionality is open to third-party developers, who can dream up as many different uses for a wiggly phone as their imaginations will allow – so there are all kinds of new needs we don’t yet know we have, waiting to be satisfied.

When my shiny new phone first purred in my hand, I thought of artificial intelligence. Could this be some far-sighted approach to prepare us to accept machines as interacting members of the household, or society?

In this description of Apple’s haptic technology we learn that the devices are engineered to deliver signals to our fingers – misleading messages that mimic the push of a button. The technology uses knowledge of how our brains interpret forces directed onto our fingertips to simulate the button-pushing feel. This is revolutionary. The visual and the tactile come together, as they do in the real world, which makes what happens on my phone more real than ever before. Another reason not to put it down.

Haptics are currently used to give a more life-like experience in video games and in training simulations where touching the real thing is less than desirable, such as medical procedures and the handling of dangerous substances3. I’m curious to find out in what far-flung way haptics will be a vital part of everyday life 10 years from now.

I’m so excited about this new technology that I’ve glossed over where I started with this post. Why do developers introduce new products and features that we don’t know we need or want, but can’t live without a little while later? A sinister plot to take advantage of our primordial urges and get us to buy more stuff? Or visionary anticipation of the benefits of new technology?

Maybe I’ll ask Siri.

———-

1 Tapping a button, whether on a touch screen or through a cursor-delivered click, makes the button do a flashy thing, which simulates a push of the button. Other surrogate reactions are noises.

2 The desire to push buttons stumps me from an evolutionary perspective. I’ve seen explanations related to curiosity, being in control, and testing rules, but am not satisfied.

3 For example, when learning to handle radioactive waste or do open heart surgery.


Software Updates: A User’s Perspective

It’s time for another humorous, if somewhat pointed, look at modern technology, specifically software updates and the (mixed) messages that come with them.

Take for example:

This software update will fix a few security issues.

What it seems to mean:

  • The update will break all my preset passwords, requiring re-input into ‘settings’.

But I don’t know all my preset passwords. Either I dig into that secure location where I keep the paper record of the passwords (although I don’t have such a thing, because it’s a giant security risk), or I request password resets, which requires changing the same password on six other devices. And I can’t remember how to find the settings because…

We’ve changed our look.

What it seems to mean:

  • Everything on the website/app looks completely unfamiliar. I’m disoriented.
  • If the background was white, now it’s navy blue. The rounded font is now square. The logo is different, so I’m not even sure I have an account, which doesn’t matter because the last security patch erased my password.
  • I can’t find things by their location on the screen, because that’s changed too. The menu has moved from the right sidebar to three lines disguised as a decorative doodad at the top of the page.
  • The marketing team must have decided to rename all the critical functions, so looking for functions by name is pointless.
  • Shutting down is impossible; the capability has been removed. Who’d want to stop using this brilliant software, anyway?
  • There’s new functionality, preset to the most intrusive level, so that I suddenly have strange icons clogging my screen when I’m trying to call a critically important client with information they wanted five minutes ago.

All this because…

The software (operating system, word processing, presentation software) is licensed to you free of charge.

What it seems to mean:

  • The software developer is in command but assures me I’m a valuable customer.
  • I’m inundated with ‘update your software’ messages. A screen pops up while I’m in the middle of doing something I’ve chosen to do, like emailing my ailing mother, texting my member of parliament about internet privacy concerns, or reading my son’s report card.

All I have to do to use my free software, on my device, is continually dismiss messages from the software developer. I don’t update because I’m afraid I’ll need a bunch of time to reload my passwords and figure out where all the options are …see above.

Don’t get me wrong, I’m glad software is updated all the time, otherwise we might be stuck with that annoying paperclip of advice, have our identities stolen, or be able to brew a pot of tea between page loads. Ever advancing software functionality has changed everything over the years in wonderful ways. Embedded video. Autofilled fields. Hyperlinks to automatically put events in your calendar or phone the new restaurant that delivers to your house at the touch of a screen.

Why am I complaining? Humans hate change. C’mon, even those of us who are addicted to change actually hate change if it messes with our routines. Routines make life simple. I don’t want to have to think about where to find the menu on my favourite website because I have better things to do. Like vote on a new logo for my favourite coffee shop.

It’s like old slippers: comfy, cosy, threadbare, faded, with a sole that flops around half unglued. If anyone has the nerve to replace them with a sleek new pair, complete with ultra-comfortable memory foam insoles, I’m not happy. Not because the new slippers aren’t nice (they have additional features, and the old ones were about to disintegrate), but because my brain has to adjust.

Perhaps the answer is software updates so frequent and subtle that we never consciously notice the continuous, small changes. On that point, what did I notice, just yesterday, but a certain browser advertising its features, including continuous updates. If the approach to updates is a marketing point, I’m not the only one who finds the current, prevalent process aggravating.

That’s the miracle of software: if you don’t like the way something works, give it a few months and it will likely change. The update is coming.


Stop Helping, It’s not Helpful.

There is a fine art to understanding how, if, and when a customer wants to be helped. We’ve all experienced it: the difference between the poorly timed, inane, nagging questions and a salesperson who comes to your side just as a question about a product forms in your mind, adds insight to your shopping quest, and has you smiling at the check-out desk. Or the professional who distinguishes between when you’re in a hurry to find one, specific thing, vs. a leisurely browse that might see you buy an entire cartload of items.

The internet has taken the challenge of good customer service to a whole new level. It’s making me crazy. Why? Because pop-ups. There are many fine examples of using the internet to deliver better information about products, and ways to make products more accessible, both financially and physically. However, more thought could go into the implementation of some web browser pop-ups.

Here’s a list of various pop-ups that miss the mark, at least for me:

  • Offering your newsletter before the site has even fully loaded. I don’t know who you are, what you do, or if I’ve clicked on a link by accident. So no, I don’t want your newsletter. Ditto alerts, updates and notifications. If you waited a bit, I’d be more likely to say yes. So wait a bit. In person, this would equate to a person with a clipboard standing at the store entrance, demanding ‘Do you like our store?’
  • Trigger-happy sidebar ads, especially ones that scroll down the page with you. If I’m interested in what you are selling, I’ll click on it. If I click by accident because of poor page design, I will hate your company for the rest of my life. It’s like a salesperson holding up jackets when you’re browsing shoes and repeating: “How about this?” “How ’bout this?” “How bout this?” “How ’bout tis?”
  • Chat with an associate before I’ve even read a sentence. Put the dialog box away until an appropriate time to suggest it. Yes, it’s great you have people or bots to answer questions, but why do you have a website in the first place? So people can read about your company/product. This is especially true for logging into email and being offered chat with my friends. If I wanted to chat, I’d open a chat app. I’ve opened an email app, so guess what I want to do?
  • Why are the only two choices for getting rid of an ad I don’t want to see ‘it covers the page’1 and ‘it’s offensive’? I have reasons for not wanting to see the ad. Maybe it reminds me of my ex-husband or a dearly departed pet. Do you really want to push that negative association on me, so I can forever be repulsed by whatever is being promoted?
  • I don’t want extra windows to pop open with suggestions for helpful things like saving my passwords, adding people to my contacts, creating events in my calendar, or downloading an app to make what I’m doing easier2. It would be easier if I wasn’t constantly interrupted with popups trying to do things other than the one I’m trying to do. This is like trying to buy milk and bread while an over-zealous salesperson offers to determine my shoe size, the colour of my aura, or what my family history reveals about the perfect pet for me.
  • Requiring sign-in three screens into a site. There should be a flag (maybe like the toxic waste symbol) for sites that require creating an account to access the info they’re offering. Spending time on a landing page to get excited enough about the content to ‘click here to download’, only to find out that you need to surrender enough personal information for military clearance, is poor communication. Facebook and LinkedIn landing pages make it very clear that you are going nowhere without an account. It’s like getting to the check out at a store with some fabulous finds and discovering that the marked prices are only available to members. Who have signed up. With their personal information.
  • There should be a special place in hell for ads with a hard to find, or absent, close window ‘X’. This is the equivalent of a salesperson who doesn’t understand ‘I’m just looking’ as the signal to GO AWAY but instead follows you around the store, quipping useless information with each item you look at, oblivious to each new sneer.

Maybe everyone but me loves pop-ups because they provide useful information. Most of us, though, have things to do and don’t want extraneous pop-ups filling our lives with the need to hack a swath through screens, like an explorer with a scythe in the jungle, to see what we came to see.

I might like pop-ups better if they added value. I am curious to know what conclusions fancy algorithms draw from my various searches and posts, akin to the fascination my rational self has with having my fortune read. A clever observer of people can conjure an accurate reading by observing and responding to their subject’s cues.

Know your client. In the modern era, that has to be done without invading privacy, which is how any good human salesperson has always done it – respecting the client’s preferences. The challenge is doing the same online. I’m sure someone or something will figure it out. Soon. Please.

——

1 I’m probably being too honest, because I won’t click on ‘it covers the page’ unless it actually covers the page. If it only covers a third or a quarter of the page, I don’t click.

2I realize this may not be the fault of the designer of the website I’m perusing. It’s the helpful operating system on my device. Still, back off.


Privacy. Nothing to Fear but Fear Itself. And Third Party Use of Data.

‘What do we fear could happen if we put our personal information online?’ A question I came across while researching internet privacy. Simple but brilliant, because the answer didn’t come easily to me. Like washing my hands before meals, I do it, but why?

Should I be concerned that I’ve made numerous posts on Facebook about my love of beer, my Twitter account reflects an interest in rock music, and how these leisure activities align with the professional profile I try to maintain on LinkedIn?

If I ran for Prime Minister, which I wouldn’t, what might someone turn up to incriminate me? Not that I’ve done anything terrible. That’s the thing. Fear may be rooted in how some mundane piece of information could be spun. With a little information, say that I’m an avid poker player, what horrible portrait could be drawn of me – the gambling addiction? Or my fascination with guns. (I played paint-ball war games once in 1986.)

We all have our hobbies. Many people fear that their, ahem, socially-shared social interactions (i.e. partying) will be frowned upon by future employers. Stories of job interviews ending in a request for Facebook passwords still float around, despite the clear invasion of privacy. Snapchat, with posts that disappear without a trace unless someone downloads them, may resolve the drunken photo-share problem. Social media is worrisome because of the foreverness of it. Can something we did years ago, that everyone’s forgotten about because it isn’t a habitual activity, come back to haunt us?

Not only can we fear the past being exhumed, there’s little to protect us from the practice of tracing our day to day web browsing activity. On average, I go to 20 different sites in a day. What does my cumulative surfing activity tell a keen marketing algorithm? The practice of tracking user activities (searches and website visits) may provide smarter observations about our tendencies than we can come up with ourselves. Is this a valuable service or an annoyance of spam and suggestive selling?
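To see how little machinery such a marketing algorithm actually needs, here is a toy sketch in Python. Everything in it is invented for illustration: the domains, the category map, and the day of browsing are all hypothetical, but the counting logic is the whole trick.

```python
from collections import Counter

# Hypothetical mapping of visited domains to interest categories.
CATEGORIES = {
    "shoestore.example": "shopping",
    "catclips.example": "pets",
    "recipes.example": "cooking",
    "news.example": "news",
}

def profile(visits):
    """Collapse a day's browsing into a crude interest profile,
    most prominent interest first."""
    counts = Counter(CATEGORIES.get(domain, "other") for domain in visits)
    return counts.most_common()

# One invented day of surfing: twenty sites would work the same way.
day = ["catclips.example", "shoestore.example", "catclips.example",
       "news.example", "catclips.example"]
print(profile(day))  # pets come out on top: [('pets', 3), ...]
```

A real tracker adds time-of-day, dwell time and cross-site identifiers, but the principle is the same: tally what you do, and the tallies speak for you.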

Some fears are rooted in reality. Identity theft. Credit card fraud. Or being sold something you don’t need because you’re vulnerable, like forest fire damage insurance. And don’t you feel bad for people who make a silly mistake and get caught on social media, like calling in sick to work when they aren’t sick, or ruining a surprise proposal or party? We all have lapses in judgement occasionally.

Privacy is a fundamental right. If I don’t want you to know ‘that’, then it’s my right to keep ‘that’ private. But on web forms, it often isn’t. How many have you filled out where a phone number is a required field, even though you can’t see the need for one, yet can’t place your order without it? More annoying is the site that insists you create an account, or ‘sign up’, with the requisite disclosure of personal information. I say NO to those sites because I’m convinced they get more out of me becoming a member than I do.

Most of us know it’s possible to track the websites we visit, and our location through the GPS on a mobile phone. However, in one study, while 90% of a group of experienced internet users said they knew what a cookie is, only 15% could correctly answer questions demonstrating they really know what cookies are1. We may be vaguely aware that online actions are traceable, but not know what that really means or what someone could do with the information. Facebook reportedly2 looks into browser history to target ads to users. If an organization is profiting by selling information about me, without my knowledge, that does not sound right.
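For the 85% (me included, until recently): a cookie is nothing more exotic than a small named value a site asks your browser to store and send back on every later request to that site. Here is a minimal sketch using Python’s standard library; the identifier and the ad-network domain are invented for illustration.

```python
from http.cookies import SimpleCookie

# Build the kind of Set-Cookie header a third-party ad server might send.
cookie = SimpleCookie()
cookie["visitor_id"] = "a1b2c3"                        # invented tracking ID
cookie["visitor_id"]["domain"] = ".ads.example"        # invented ad-network domain
cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365   # persist for a year

# The browser stores this and echoes "visitor_id=a1b2c3" back with every
# request to that domain, which is all the linkage a tracker needs.
print(cookie.output())
```

Any page that embeds content from that domain (an ad, a like button, a pixel) triggers a request carrying the identifier, so the network can stitch your visits together across otherwise unrelated sites.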

Back to the original question – how much harm can be done if a company knows I’ve researched hemorrhoids, looked up recipes for grasshoppers, visited six shoe shopping sites, and watched way too many cat videos? It might be embarrassing, but it won’t ruin my love life, empty my bank accounts, or set fire to my car. Still, I’m uneasy about what’s being done with my personal information, because I don’t know what’s being done with it. I’m not alone. This study3 suggests only 28% of people in a group of about 1500 agreed with the following statement: ‘what companies know about me from my behavior online cannot hurt’.

I don’t have the answer to ‘what do I fear will happen if my personal information is online’. I don’t need to. I wash my hands without knowing whether a bacterium, virus or fungus is lurking, waiting to infect me, or how serious an infection it might cause. Similarly, I’m concerned that something sick and disabling might be done with my online personal information, so I’m cautious about what I share.

1 Luzak, J. (2014). Privacy Notice for Dummies? Towards European Guidelines on How to Give “Clear and Comprehensive Information” on the Cookies’ Use in Order to Protect the Internet Users’ Right to Online Privacy. J Consum Policy 37:547–559.


Artificial Intelligence Part 3. Randomness: A Human Advantage.

Arnold Trehub states ‘Machines cannot think because they have no point of view’¹. Trehub cleverly links opinion and point of view. I now intuitively see how point of view, or a unique perspective, is necessary for opinion.

I’ve thrashed around on my keyboard for weeks, trying to articulate how human opinion differs from information provided by an AI. I have no justification for how I know they’re different, but I do. Because I’m human. Humans have a natural tendency to draw conclusions, to have a point of view, based on whatever amount of information we have. AIs do not.

Does having an opinion make us human? No, it’s the other way around. Because we are human, we have opinions, derived from the way we process information and draw conclusions from what we’ve collected. For the most part, humans work by adding each new bit of information on top of whatever they’ve already picked up, while an AI has the capacity to catalogue each fragment of data until the entire story emerges. Thus, for people, how we incorporate each new experience depends on our previous experiences.

We’ve evolved the capacity to learn on a background of animal survival instincts. Are big dogs to be feared or petted? It depends on your past experience. Was your childhood best friend an Irish Setter, or was the first horror movie you watched Cujo, the story of a rabid St. Bernard terrorizing a family? Each of us has decades of history – song lyrics, movies, people, places, things, weather – but our memories work in mysterious ways: smashing things together, processing them through the filters of human optimism, reprocessing them until we’re convinced things were wonderful back then, and serving them up by random recall.

No AI would proudly claim it recalls some things and not others, glorifies the past, or has random memories pop into its processor to distract it.

Makes it sound like fun to be a human, doesn’t it?

I took a stab at calculating how different each person’s life experience is from the next person’s, and got to infinite before I could write anything down².

Clearly, we each have our own unique set of experiences. One AI would be expected to come to the same conclusion as another if both were given the same set of experiences, even in a different order. Now consider how different the reactions of two 35-year-old coworkers might be to the first snow of the year if one lived in a tropical climate for the first 34 years of their life and the other has shovelled lengthy driveways from the age of 7.

In addition to the historical context, humans interpret each event by how it will affect us. If the temperature goes down, does that mean you’ll budget more for heating, blanket the garden, or start a promotion on skis in your store? Does a change in the GDP of a neighbouring country make you plan a vacation, watch the stock market, or pull up cat videos?

We form our conclusions on the basis of whatever evidence we have. If it’s hot today, was hot yesterday, and was hot when you were waiting in line to buy gas a few days ago, it’s been a hot summer. An AI would collect data from the past month or months, calculate means and variances, and then compare them to the past year, decade or century before deciding if it’s been a hot summer.

Humans process information as though they’re building a pyramid. Each new experience is interpreted on the background of all the previous ones (or the ones we remember). AIs process information like Tetris. A new piece of information is allocated to a column of relevance, and a conclusion is drawn only when the column is full (i.e. there is sufficient data to make a statistically valid conclusion).
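The hot-summer contrast above can be sketched in a few lines of Python. The temperatures, threshold and sample size are all invented for illustration: the ‘pyramid’ side opines from whatever it just experienced, while the ‘Tetris’ side returns nothing until its column is full.

```python
from statistics import mean

HISTORICAL_MEAN = 22.0  # invented long-run summer average, in Celsius

def human_opinion(recent_temps):
    """Pyramid style: form an opinion from the last few days, whatever we have."""
    return "hot summer" if mean(recent_temps[-3:]) > HISTORICAL_MEAN else "not so hot"

def ai_conclusion(temps, min_samples=30):
    """Tetris style: refuse to conclude until the column of data is full,
    then compare the whole sample against the historical baseline."""
    if len(temps) < min_samples:
        return "insufficient data"
    return "hot summer" if mean(temps) > HISTORICAL_MEAN + 1.0 else "not so hot"

july = [23.0] * 5  # five warm days so far
print(human_opinion(july))  # the human already has an opinion
print(ai_conclusion(july))  # the AI is still waiting for data
```

Five warm days is plenty for the human and nowhere near enough for the AI, which is exactly the difference the analogy is after.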

Why do we constantly form opinions when we know we don’t know everything about the topic? Because we have to. We don’t have the luxury of waiting until we’re certain what the weather or traffic is going to be like before we go to work. We put on a summer dress and take the highway because it’s June and the city streets tend to be under construction in the summer, and we have a presentation to give to important clients.

We don’t seek out all possible information before we decide. We get on with our life, form an opinion, and change our mind later if need be. This sounds like jumping to conclusions or being a bigot, but I’m talking about the human propensity to form a working hypothesis. If we eat a turnip and then projectile vomit, we avoid turnips. Sure, we only have one observation that said food disagrees with us, but we won’t risk it happening again. We don’t need statistical significance to decide the possible outcome is unpleasant and avoid turnips. And we can live without turnips, because our grandfather, who never ate them, lived to be 95.

Can the same be said for an AI? It experiences a sequence of events and learns from each, like us. But I expect an AI to be objective, less invested in its conclusions, willing to change its mind with the addition of new data. It would refrain from drawing conclusions with insufficient information. It would seek information on turnips and other factors that correlate with projectile vomiting and longevity before deciding what to eat.

The AI may be more objective, but humans have opinions, and have them quicker. Does that make us smarter, cooler, or more adaptable? Humans will have no problem answering that question. AIs might.

QED³.


¹ ‘What to Think about Machines that Think’ (2015) Brockman, J. (ed) Harper Perennial NY pg. 71.
‘What to Think about Machines that Think’ is gobsmackingly good. It’s making me think and ask questions and learn things I thought I knew about what it is to be one of my kind. And I’m not even a sentient machine. Who knew the place to find out about being a human was a book about artificial intelligence? Although many contributors, such as George Church and Sean Carroll, do describe humans as thinking machines.

² I geeked out on semi-math. Here’s what I’m thinking: Every human is in a different place – the living room, Antarctica, or primary school – where the lighting may be bright or dim and the weather rainy or foggy, or gale-force winds may blow. We may be alone, with our Mum, or at a football stadium full of Argos fans; we could be a teenager, senior, or babe-in-arms, observing a coronation, action-thriller movie, domestic dispute or bird building a nest. And so on. Then, the next second, something could change: someone walks in the room, the car stalls, the cat meows, you throw up because you are pregnant, or there’s an earthquake.

We’ve done two seconds of the calculation. By the time we’re 35, we’ve lived a little over 1.1 billion seconds, so our experiences differ from the next person’s by (however many parameters you would like to include, but even if you just have two I can make my point) to the power of 1.1 billion. For fun, I input this into my calculator. The answer is ‘Infinity’. Even if we say that it takes an hour for a person to have a different experience, a 15-year-old has lived over 130,000 hours, which is still an ‘Infinity’ of potential combinations different from her BFF who wears the same style of clothes, has the same hairdo and piercings, and speaks in the same idioms.
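For anyone who wants to reproduce the geekery, here’s the back-of-envelope math in Python instead of on a calculator. The specifics (two possible experiences per time step) are my simplifying assumption from above:

```python
# Recreating the footnote's back-of-envelope math.
import math

seconds_at_35 = 35 * 365 * 24 * 60 * 60  # a little over 1.1 billion seconds
hours_at_15 = 15 * 365 * 24              # over 131,000 hours

# A calculator works in floating point, so 2 ** 1.1 billion overflows:
try:
    math.pow(2, seconds_at_35)
    calculator_says = "a number"
except OverflowError:
    calculator_says = "Infinity"
print(calculator_says)

# Python's integers are unbounded, so we can at least count the digits
# in the 15-year-old's 2 ** 131,400 possible experience histories.
digits = len(str(2 ** hours_at_15))
print(digits, "digits")
```

Tens of thousands of digits just for the hourly, two-option version; the per-second count is hopeless, which is the point.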

³ This is the mathematical equivalent of ‘I told you so’. In high school, there was a rumour that it stood for ‘quite easily done’, although it’s actually Latin for ‘Quod Erat Demonstrandum’, which could be a good name for a metal band.

 


Look at Tech, It’s Growing Up.

I don’t like being called a geek, but being thrilled to attend the Toronto Tech Summit, where I was titillated by the frontiers of new technology, is pretty geeky, isn’t it?

Friday’s event (April 8, 2016) was a well-organized and well-thought-out conference, with high-quality speakers and good breadth to the program. The event claims a focus on customer experience, or ‘crafting experiences through technology’¹. Not too long into the first session, it hit me:

Tech² is growing up. It’s leaving that awkward teenage phase of ‘no, I’m totally different’ to resemble a young adult who wants to make good in the world but has their own ideas about how to achieve it.

One of the speakers³ asked ‘how do we business in Canada?’ Business as a verb. Yes. The conference was about the business made possible by technology, not how to turn technology into business.

Maybe everything I see looks like a business strategy lesson right now, but I was blown away by how each talk could illustrate a concept from a quintessential strategic management textbook.

Tech has grown into an enabling component of every product and service, so it’s not surprising that I imagined writing a different chapter of a strategy manual with each presentation at the Tech Summit. All the better because every story was about a cutting edge business. How exciting… tech is no longer separate, it’s integral. Not renegade and unruly, but maverick and enlightened. Less Sex Pistols and more U2.

Here are the business lessons I took from some of the presentations at the Toronto Tech Summit:

No business conference is complete without a presentation about the Internet of Things. Sachin Mahajan from Telus eloquently laid out evidence that this industry is entering the growth phase following its introduction. Large companies like Google, IBM and Apple are investing heavily in the area, as are venture capitalists. The business is nascent, so there are few industry standards – another hallmark of an early growth-stage industry – as is how little is known about the verticals that will serve the industry.

FreshBooks – cue a general audience pause while we all roll our eyes because it’s accounting software – is in a more mature industry: enterprise software. Avrum Laurie described their process for agile design. Process innovation, the textbook says, is a hallmark of a maturing industry. Yup, integrating real-time design innovation and customer feedback may be new tech, but process innovation, to decrease waste and remain competitive, is old-school, cost-focused strategy.

Classic diversification strategy was presented by Bowie Cheung of UberEats. There were lots of great strategic moves here. Uber’s mission is to deliver everything to people – I’m paraphrasing and may not have the words exactly right, but clearly she was talking about new business units. What does a company do to grow? Build on its existing knowledge base. Use what it knows in new ways. In Uber’s case, deliver food to people instead of giving them a ride, using essentially the same driver and car base they’ve established. Makes sense, so far. But delivering food from restaurants isn’t a new thing. Can Uber make it better? The roster of restaurants is UberEats’ differentiating factor, allowing restaurants to realize economies of scale by making their dishes for a wider customer base, with distribution enabled by the Uber app and quick delivery. I particularly liked the idea of being able to track delivery through the app. How many times have you paced, ravenous, wondering where the heck your pizza was? Uber answers. This could be a key success factor.

The customer experience/care panel asked traditional questions about client demographics. I had to wonder when the talk turned to the use of chatbots in retail. The essence of the concept was that instead of lifting a finger to click on options or pull-down menus, the AI would ask which option the customer preferred. Could we all become so lazy? But I can see it becoming the new normal or industry standard.

Other delicious morsels of business strategy I heard:

  • The requirement for organizational structure, especially as a startup grows, was attested to by Paul Grey of KiK. KiK is a social media platform used by a particular demographic.
  • Differences in new-entry costs between hardware and software were a theme from Wesley Yun from GroPro.
  • Diversity in all businesses has value for the organization and is not just good corporate social responsibility, said Nada Basir from the U of Waterloo business school.
  • Another example of cost-focus strategy came from the mature business of online auctions: methods to reduce costs by changing the currency offering.
  • And the importance of corporate culture for delivering anything in business.

As an old person, always excited about new technology, I felt right at home with the new generation. Because they’re practicing business just the way we did when I was young.

——

¹ http://www.torontotechsummit.com

² I’ve always defined technology as inclusively as possible, encompassing software, hardware and the combination, and newly engineered physical and biological things. I was glad to hear one of the speakers say the same.

³ Here http://www.torontotechsummit.com is the list of speakers. Apologies in advance if I don’t attribute every phrase I heard correctly.

 


Ruminations on Artificial Intelligence. Part 2: Are We in Danger?

What many people seem to fear from AIs, over and above a general fear of mysterious new things, is that they will subjugate us. They’ll run amok, denying humans our life-sustaining internet connectivity or fossil fuels or sporting events. Or worse, they’ll shut us down altogether, through the food supply, atmosphere, or access to cat videos.

Why would intelligence imply a domination agenda? This is also a question Martin Rees asks¹. Sure, that seems to be the way humans have behaved on this earth, forever, with various species/businesses/soccer teams outcompeting each other for habitat/market/world domination. Could something smarter, like artificial intelligence, conceive of a more inclusive world that didn’t require destroying other forms of life?

This reminds me of when I adopted an eight-week-old kitten and welcomed her into my home with mature cats. One was an exemplary specimen: a seventeen-pound male, all muscle and fighting prowess. In their first encounter, the kitten puffed up her tiny self and hissed at the tom. He stood passively, looking down at her with what I swear was comfortable indulgence, certain that she could do neither him nor herself any harm. Then he went on about his cat business. Similarly, I expect super-rational artificial intelligence to recognize when humans are acting out of fear and displaying unnecessarily aggressive tactics, and to calmly allow us to determine for ourselves that no real threat exists.

Max Tegmark² points out that scaremongering sells³ news stories better than romanticized tales of cooperation, agreement and lack of conflict. He’s critical of how journalists have approached AI. I’m guilty of this myself – the alarmism. We’ve been presented with suggestions that AIs will be damaging, dangerous or deadly to humans. In the science fiction movie 2001: A Space Odyssey, released in 1968, the intelligent computer, HAL, tries to murder people by shutting down their oxygen supply. The far-reaching control that AIs could exert over our environment frightens us. By nature, humans fear the unknown, probably for good reason. Cautiously considering whether the big, golden-furred beast with paws as big as your head is likely to eat you is a good survival skill.

A slightly more tangible fear with AIs is that they will control too much and shut off systems vital to our life. I can sympathize with this. I was on a bus recently that stopped working in the middle of nowhere. It was a modern bus, with electronic display boards and a synthetic voice that announced upcoming destinations and thanked patrons for prepurchasing their fare (well-meaning but a bit patronizing). As the driver attempted to restart the bus, the screens displayed the sort of nonsense I associate with a dysfunctional computer: stack dumps, strings of port numbers and error messages. From the driver’s curses, it was clear he was frustrated because he had no control over the function of this mechanical device. Its computer system had declared it dysfunctional, and it was going nowhere.

Uncooperative buses are a glimpse of what we fear from AIs. No room for humans to push to get the job done, doing the best they can to hold things together and get their passengers to the destination. No place for human ingenuity and know-how. No MacGyvering so everyone gets to work on time.

A kind bus driver will make exceptions for passengers in need and stop at unregistered stops. Would an AI driving the bus do that?

Can we program AIs to be resourceful and ingenious? To understand that rules are things we made, and that therefore we sometimes want to break them? Human priorities shift like clouds on a stormy day. We want the bus to run under the ultra-safe conditions we specified until that isn’t convenient. Then we know there are ways we can compensate to make it just as safe that aren’t written into the code.

We don’t need to fear artificial intelligence taking control over our lives. Being human is to adapt, to survive, regardless of what the unpredictable, improbable and Murphy’s-lawable throws at us. We got this.

——

¹ Martin Rees pg. 9-11 in ‘What to Think about Machines that Think’ (2015) Brockman, J. (ed) Harper Perennial NY

² Max Tegmark pg. 43-46 ibid

³ or the modern equivalent, gets more clicks, page hits or eyeball time.


Ruminations about Artificial Intelligence. Part 1: Humans are Smarter because We’re More Primitive

I liked the book ‘What to Think about Machines that Think’ immediately. Along with the jaunty title, it has a snappy structure: approximately 185 mini-essays – brain bytes – by sage people about AIs (artificial intelligences). Each contribution is 3 or 4 pages long, which is apparently how long a thought is when written down.

The essayists responded to ‘What do you think about machines that think?’ I’m making my way through and have read mostly entries from engineers and physicists. This book is the most fertile source of thought stimulation I’ve encountered in a long time. Each contribution is wonderful and I’m riffing off of most of them.

‘Contemplation of artificial intelligence makes us ask who we humans are’¹, Murray Shanahan writes. One of the book’s themes is ‘who are we’, although in this case it’s a desire to set ourselves apart from AIs that triggers the existential question.

How are we different from thinking machines? Steven Pinker suggests the way that AIs think is nothing special²; it’s a series of logical conclusions. A simple example is the hierarchy of suggestions you get when you start to enter a URL into your search engine. It may seem like the interface ‘knows you’ and can anticipate your interests, but really, the suggested sites are based on simple statistics about your previous behaviour. Similarly, your wise grandmother might have seemed to know things about you when you were a child that you didn’t know yourself. And she’s smarter than a rudimentary AI. She watched your reactions in a number of situations and recognized the trends, like the search engine, but unlike the software, she understands human nature and what motivates you. When it comes to human nature, we’re often very predictable. Shakespeare provides good evidence to support this. Although he wrote centuries ago, his portrayals of young lovers (Romeo and Juliet), corrupt yet ambitious leaders (Macbeth), and crafty business people (The Merchant of Venice) ring as true today as they did when the plays debuted.

Emotion could be our defining feature. An interesting observation by Steven Pinker, ‘Being smart is not the same as wanting something’², could suggest our primal ancestry will set us apart. Was this the author’s intent? The idea of motivation – of driving force, ambition, compulsion – fills my heart with pride for humankind. Machines don’t strive to excel or make heroic efforts to do things. They do what they’re programmed to. They achieve goals. If the goal is to maintain a temperature of 22 degrees in a room, they induce the heating elements and cooling vents of the HVAC system to warm or chill the air when a deviation from the desired temperature occurs. Machines don’t care that the three-year-old twins have a fever and are malnourished because their father is unemployed. The AI still keeps the temperature at 22 degrees. A human superintendent knows the fragility of toddlers and the added stresses of poverty and secretly tweaks the heating system to divert more heat to protect the young, even if their mother can’t afford it.

Humans have survival instincts, very strong ones, which may set them apart from AIs. Does an AI even care if it’ll be turned off tomorrow? I suspect that depends on what it believes it needs to do the next day, but I’m sure it wouldn’t fight to the death to protect itself, unlike most people, who would sacrifice everything to be sure they get out of bed tomorrow, even if it’s to face the same old dripping tap, sour milk, and demonically possessed boss.

Is it instincts that set us apart from AIs? We still have a primitive area in our brain responsible for instinctive or involuntary actions. My own opinion, based on observing people, is that this primitive brain controls more of our behaviour than we are aware of. If that’s the case, it could distinguish us from AIs.

We honour and hold in high esteem leaders who are intuitive – those who make logical leaps most of us are afraid to pursue. Are these intuitive leaps instances of higher thought – processing so fast that only the outcome is important? That would be AI-ish.

I consider instincts and intuition closely related, although many would not³. Instincts are subconscious, leading us to perform acts without deciding to do so. We act instinctively to pull our hand out of a flame or veer the car clear of an oncoming truck on the highway. When the adrenaline wears off, we’re proud of our quick thinking. Intuition is generally considered more conscious, related to thought. However, an intuitive action or decision is one that ‘comes from the gut’ or ‘feels right’. Whether it’s taking a different route home or hiring the kid with no experience, when we realize the benefits of the choice, we learn to ‘trust our intuition’. So, is intuition higher thinking than instinct? Some explain intuition as a subconscious compilation of knowledge gathered in the brain. Could it be that intuition is the instinct of thought?

This is my premise: Humans are different from AIs because we evolved from a less evolved species, and we do things that don’t reduce to a series of logic equations. AIs are cool. We made them, so they have the potential to be ok. Or at least as ok as runaway trucks, fires, demonically possessed bosses and new hires from hell. But don’t worry. We know how to disconnect their power supply – at least for the AIs.

—-

¹ Murray Shanahan in Brockman, J. (ed) (2015) What to Think about Machines that Think Harper Perennial NY pg. 1-4

² Steven Pinker in Brockman, J. (ed) (2015) What to Think about Machines that Think Harper Perennial NY pg. 5-8

³ I have to giggle. One site I found that explained the difference between instinct and intuition used human mate choice as an example of something decided intuitively, because it was the culmination of too many thought processes to be reduced to explanation. If ever there was a decision that biologists could explain at an instinctive level, it’s mate selection. Ha-ha. Geek moment.
