How Smart Do You Want Your Things to Be?

In the not-too-distant future, all things will be smart. All inanimate objects will have sensors that collect information, and artificial intelligence to analyze and react to it.

A first generation example is motion sensors that have the sense to turn on lights when people, or your rottweiler, enter the room. Getting a little more sophisticated, there are devices like the Nest that learn from your habits and adjust the heating or A/C to create the most comfort in the most economical way. In the future (like next week), you may have a fridge that records the comings and goings of food over time and, after it's learned enough, places an order with the local food deliverer [probably Amazon] to restock all your staple foods, and suggests a few new offerings that it has calculated you might like based on your love of strawberries, mustard and corn chips. [shudder]
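
If you like to think in code, the fridge's reasoning could be as humble as the sketch below: watch how fast each staple disappears and order more before it runs out. Everything here – item names, quantities, the two-day delivery window – is invented for illustration, not anything a real fridge (or Amazon) runs.

```python
from datetime import date

# Toy sketch of a smart fridge's restocking logic (all names and numbers invented):
# estimate how fast each staple is consumed, and reorder before it runs out.

stock_log = {
    # item -> observed (date, units on hand), oldest first
    "strawberries": [(date(2017, 5, 1), 2.0), (date(2017, 5, 8), 0.25)],
    "mustard":      [(date(2017, 5, 1), 1.0), (date(2017, 5, 8), 0.9)],
}

def daily_usage(history):
    """Average units consumed per day between first and last observation."""
    (d0, q0), (d1, q1) = history[0], history[-1]
    days = max((d1 - d0).days, 1)
    return max(q0 - q1, 0.0) / days

def restock_list(log, lead_time_days=2):
    """Items that will run out before a delivery could arrive."""
    orders = []
    for item, history in log.items():
        rate = daily_usage(history)
        _, on_hand = history[-1]
        days_left = on_hand / rate if rate > 0 else float("inf")
        if days_left <= lead_time_days:
            orders.append(item)
    return orders

print(restock_list(stock_log))  # -> ['strawberries']
```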

Everything, absolutely everything, will be smart. And helpful, in an artificially intelligent kind of way. In the spirit of embracing the future and getting the most out of it, I have a question for you:

What things do you most want to be smart and which do you least want to be smart?

Sure, I want world peace, cures for all hideous diseases, and healthy, cheap food for everyone, but even AI is unlikely to make those things something Amazon can deliver. And on the big-picture downside, I don't want my vacuum cleaner to take over ventilators at the hospital, the power grid to decide what temperature it should be in my home, or robots to terminate all human life because there isn't enough calcium in the compost.

Smart stuff is likely to be more mundane in the near future (like next week and the one after that), so my expectations are lower.

Here are my smart wants:

I most want my electronic devices (which will be all things, including those we don't consider to be electronic devices right now, like a hairbrush) to be smart enough to recognize me so I don't have to remember 1,750 different user names and passwords. Of course, I expect perfect accuracy (the hairbrush can tell the difference between me and my daughter, who likes to use it on the dog) and that they won't let someone who has replicated my fingerprints, retinal pattern, voice or heartbeat have access to my apps (especially my daughter, who is a likely candidate for trying to hack my stuff).
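
Part of why 'perfect accuracy' is a tall order: under the hood, biometric recognition usually compares a similarity score against a threshold, and wherever the threshold sits trades falsely rejecting me against falsely accepting my daughter. A toy sketch, with made-up scores:

```python
# Toy illustration (hypothetical numbers) of why biometric recognition is a
# trade-off, not perfection: devices compare a similarity score to a threshold.

def matches(score, threshold=0.90):
    """Accept the user if the biometric similarity score clears the threshold."""
    return score >= threshold

# A high threshold keeps the daughter (and the dog's hair) out, but will also
# reject the real owner on a bad scan; a low one does the reverse.
scans = [
    ("owner (clean scan)",   0.97),
    ("owner (smudged scan)", 0.86),  # falsely rejected
    ("daughter",             0.81),
]

for label, score in scans:
    print(label, "->", "unlock" if matches(score) else "deny")
```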

I want smart things to take away some of the pain I now have obtaining secure access to everything.

I least want something interfering in my learning process. I fiddle. I explore. I figure things out by taking a few steps and then pondering how it might work. After several rounds of this, if I complete a task I haven’t done before, I’m proud. This is how life happens for me, whether it’s setting up a website, doing home repairs or starting a company. The last thing I want is an AI chiming in to tell me how to do it at the first sign I’m stumped, or worse, 5 minutes before I realize I’m stumped.

To the average smart thing of the future, my message is: 'if I want your help, I'll ask for it'. What I don't want is a smart-ass thing. A know-it-all thing. Any number of things could provide too much information. A hammer could disapprove of your choice of nail. A pot could have opinions on the temperature applied or the nutritional content of the food it's required to cook. Your CRM might remind you to call a particular client, or that you've called one excessively, ignoring your personal 'feel' for how to deal with people.

Overall, I'd like smart things that can tell the difference between challenges that are annoying because they demand unnecessary attention (like remembering where you put down your phone) and challenges that deserve some thought (like understanding how to manage privacy settings).

Of course, what annoys me may be a learning experience you want. Maybe you want instructions the second you pick up something because you’d rather spend your time watching videos. Or you’re a security buff and don’t want an AI to remember your passwords.

We’re all different, so it will take very smart things to please us all.


The Evolution of the Evil Scientist. Part 1: The Money

When I was a little girl, the Professor from Gilligan's Island was my hero. He was smart, unassuming, and solved a lot of problems. I deduced that scientists were incredible. As clever as physicians, with the power to save lives, but much cooler, as they shunned the limelight. Later in TV history came MacGyver, who fixed an awful lot of problems with duct tape and scientific knowledge.

Nowadays, scientists1 often seem to be on the evil side of the human equation. 'They' have conflicts of interest, because their primary activity – research – could be supported by a commercial interest, either a large corporation or their own startup company. Everything they do is questioned for a hidden agenda. This post considers the impact of a scientist's source of funding, and a second one will examine the public disclosures that scientists are now almost required to make.

A major shift has occurred over the past 60 years in the way scientific research is supported. This page shows the switch in dominance of research paid for by the government versus private industry in the US. Until the late 1970s, most research was supported by the government. In 2012, only 30% was government funded while more than 60% was industry sponsored. The pressure on academic researchers to commercialize the findings of their research has made for more industry ties – either by licensing to an existing business or by encouraging researchers to create their own startups.

While this may cause some to throw up their arms in alarm and shout invectives about the corruption of research, polarized agendas, and corporations paying for the results they want, for most of history private individuals were the benefactors of scientists. In other words, someone rich, on whose favour the scientist depended, doled out the money to feed and house the scientist, while they grew hundreds of bean plants, or gazed through a telescope at the stars and then disappeared into a dank library to do endless pages of mathematics. We've built our understanding of countless things, like the structure of the galaxy, human anatomy, and the theory of evolution, on the basis of privately funded research. Did any of the individuals who supported the scientists try to influence the conclusions of their research? Probably, but the passage of time, and the work of other scientists, differentiate influence from true findings.

I think part of what keeps science unbiased is that there are people, scientists, who live to investigate, to answer unanswerable questions. They don't care who makes money out of it. They want to toil in obscurity, reasoning and experimenting their way to answers. Problem is, we all have to eat. So what's a scientist to do? Most are not in it for the money.

I’ve lived through several rounds of precipitously declining government funding for research, where university administrators warn the faculty (where a good percentage of scientists reside) of declining grants and encourage them to make friends with business people, as a means to survive in the research style to which they have become accustomed. Heck, I’ve even put on events myself to encourage industry sponsors and researchers to chat and form alliances.

To make this discussion more complicated, there is basic scientific research, of the type that asks and answers abstract questions, say about gravitational waves or the behaviour of silicon in the solid state, and may one day allow better satellite tracking or microelectronics. Then there is applied science, such as new drug testing. Although basic research, or what's called curiosity-based research2, is generally of less interest to industry representatives3, a significant portion of development in the pharmaceutical industry has to be done in collaboration with medical researchers. Physicians who do research as well as care for patients are the ones with the training and opportunity to work with the relevant patients.

As a recent example of how pervasive this sort of support of medical research is, this paper discusses the number of US physicians who report some kind of payment from an industry source, including research support, consulting fees, or just sandwiches at a conference. The study found that almost half of the physicians studied received some kind of payment, with the overall average being a bit over $5,000 a year.

It's a tricky relationship, between physician researcher and pharmaceutical company. Doctors are the best ones to know which patients need new solutions for their medical conditions. Pharmaceutical companies understand how their new drugs work. They need each other, the doctors and the drug makers, and we need them to need each other, so we can benefit from the new drugs. I can't imagine a physician who would knowingly harm a patient, particularly to get research money, as the point of research is to discover useful new ways to make people better. It would be like a chef accepting sponsorship to make foul-tasting food. On the other side of the equation, there's so much potential for conflict of interest, both perceived and real, and some history of abuse of the system.

For both basic and medical scientists, often the choice is to get involved with a big business and accept their backing, or stop doing research. Research on zero dollars a day doesn’t work.

Why has this reality led many to decide scientists are evil? The scientists I know are noble people who prefer to devote themselves to finding the truth, often the truth of discovering better medicines, over capitalistic gains. Am I naive?

I'm a scientist, and I am defending my colleagues, my tribe. But I have no agenda – except the one I'm suggesting most scientists have: truth.

——-

1Using a very broad, inclusive definition of scientist here, one that includes natural, social and applied scientists.

2This seems like a bit of a cruel joke, because 'curiosity-based' research is far from a frivolous, random activity carried out by people skipping through meadows, chasing shiny butterflies.

3A couple of upcoming technologies, artificial intelligence and the Internet of Things, contradict this statement, since we appear to have ideas for implementation of the technology as fast as we can understand it.

The New Trust.

Could it be that digital solutions to everyday activities are making us a more trusting society? In this era of paranoia about big business and big government, rampant nonsense news, and the usurping of reality by the ebb and flow of opinion, is there goodness? Wholesome warmth towards our fellow strangers?

I believe so.

Let’s do a few flashbacks to see how easily we accept today what we strictly controlled in the past. Consider:

Purchasing stuff from a store
Now: Self Check Out. No one pays attention as I scan 45 items and place them in the bags I brought into the store myself and have pushed around in my cart for the last 45 minutes.
Then: Cashier can't remember what green leafy stuff is. Calls for price check. People in the line behind you glare. Bags brought into the store subject to search, or an elaborate tagging and stapling routine to ensure nothing could be added to said bags.

Purchasing stuff online
Now: We do it. We deal with vendors we've never heard of before. Put our credit card numbers into sites that weren't there yesterday. Correspond with anonymous posters of used items or go meet them in vacant apartments.
Then: Pundits poo-pooed the idea that anyone would buy goods from an organization they'd never heard of. Amazon would never sell more than books because people wanted to see what they were buying. Early eBay was intimidating to many.

Cashing cheques/transferring funds
Now: If someone emails us money, we decide where and when it goes. We choose the bank account where it's deposited.
Then: Scrutiny. Showing of ID, comparing of signatures, spelling of the name. Cheques rejected for date infractions, use of coloured ink. Banks closed at 3pm, funds held, frowns shared.

Paying bills
Now: At some time in the distant past, you knew your account number. So now you can pay the bill. Any amount you want.
Then: Paper bill required. Bottom half confiscated by bank. Top half stamped, initialed and annotated.

Transit fares
Now: Scan your pass, or buy your ticket at a kiosk or online. Prepare for spot checks.
Then: Buy ticket at wicket. Show attendant when boarding bus/train. Lose transfer and have to pay double fare. Show attendant on leaving transit system or pay double fare.

Health Benefit Claims
Now: Go to the dentist, pharmacist, physiotherapist. Submit online for reimbursement of costs. Have reimbursement immediately deposited to bank account. Click 'Agree' to terms and conditions to produce proof of payment if requested.
Then: Fill out forms. Attach receipts. Put in paper mail. Wait weeks. Wait months. Get response that indicates you failed to sign form. Start all over again. Get denied reimbursement because time limit has expired.

Some of you have never experienced what’s in the ‘Then’ column. Lucky you.

While much of what’s in the ‘Now’ column adds efficiency and convenience, it also suggests a level of trust that wasn’t there then. People don’t have to prove who they are, or that they’ve bought tickets, been to the dentist, have an account with the gas company, or purchased one bunch of broccoli rather than two. Reciprocally, we’ve learned that most vendors are honest, want to deliver goods to us and really own the electronics they’re selling.

Am I being naive? Or does new technology merely replace all the previous checks and balances provided by the seemingly draconian humans at the bank, insurance company or checkout counter? Perhaps emailing money is so foolproof no one ever makes an error or commits fraud.

Someone's probably done the math. The added efficiency of not collecting everyone's proof of payment outweighs the losses from the people who cheat the system. That's kinda cool in itself. Gives me a warm fuzzy about humans – for the most part, we're okay.
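
The back-of-the-envelope version of that math might look like the sketch below. Every number is invented for illustration; the point is only that the comparison is simple expected-cost arithmetic.

```python
# Hypothetical numbers: checking everyone costs a little per transaction;
# trusting everyone costs the full price of each transaction that's fraud.

transactions = 1_000_000
check_cost   = 0.50      # staff time per verification, in dollars
fraud_rate   = 0.005     # fraction of people who cheat when unchecked
avg_loss     = 60.00     # average loss per fraudulent transaction

cost_of_checking_everyone = transactions * check_cost
cost_of_trusting_everyone = transactions * fraud_rate * avg_loss

print(f"verify all: ${cost_of_checking_everyone:,.0f}")  # verify all: $500,000
print(f"trust all:  ${cost_of_trusting_everyone:,.0f}")  # trust all:  $300,000
```

With these made-up numbers, trusting everyone is cheaper, which is roughly the bet the 'Now' column is making.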

I’d like to believe we’re evolving into a more trusting society at an individual level. It feels good to be trusted and included, even if it’s by an algorithm or encryption key.


Legal Persons.

Sounds awkward.
Is awkward, potentially.
The European Parliament is apparently considering declaring robots with AI legal persons1.
Thinking my way around this.

Corporations were declared legal persons generations ago.
According to Wikipedia, in 1886.
Yet we still have problems with the concept.
Making the engines of capitalism into legal persons may be the root of the ‘them’ in us.

The documentary, The Corporation, suggests that the difference between corporate 'legal persons' and the rest of us is that corporations have no moral or ethical conscience. They have rights but fewer obligations. To me, the two ought to be yin and yang. If you have the right to something (like, say, a driver's licence), you have the obligation to control it (and drive carefully).

As most things do, the state of being a corporation started innocently enough. A bunch of individuals got together and directed their interests (social, financial and/or personal) to a joint cause. When the cause got big enough, it took on a life (not a casual use of the word) of its own. It needed to be a separate entity, legally as well as autonomously. Since a collective decided what it would do, no one person was responsible, but the entity needed to be liable for its actions.

Controversy has arisen with modern corporations. Some have polluted our environment. Others have taken advantage of people in developing nations for cheap labour. More recently, corporate support of political agendas2 calls into question the justice of a powerful but unemotional entity influencing human activities. The profit agenda of corporations is seen as their overriding motivation, devoid of compassion.

Starting to sound scarily like artificial intelligence? Powerful. Devoid of human emotion. Mission driven.

Before we go there, consider another aspect of legal personhood. Various governments have recently declared various animals as legal persons3. Making animals legal persons protects them from acts of violence and neglect. Previously, because animals were considered possessions, they could be treated in any way their owner saw fit. Our modern sensibilities want more humane treatment, so animals have become legal persons in some countries. This allows third parties to step in and defend them, if necessary.

And let's not forget that there were times in history when various people didn't have the same status as others. Not so long ago, women were not 'allowed' to own property or vote (rights of 'personhood'). Throughout human history, various groups have been 'freed' from slavery by other groups (from Roman times to more recently), granting the freed the rights of personhood.

Granting basic rights like protection from harm and freedom to choose to those who are deserving is a good thing. But dodging responsibility in the name of adhering to a mission like maximizing profits doesn't seem right.

What of the personhood of AIs? Will it protect them from harm or allow them to game the system?

The arguments before the European Parliament to declare AIs legal persons are motivated by giving them responsibility. No one holds the wind responsible for felling a tree and crushing a car. We call that a natural disaster. However, if an AI miscalculates GPS coordinates and sends a lifesaving package to the wrong province, we do want to hold someone responsible, whether it's the programmers, owners or contractors of the AI. This was part of the original philosophy of making corporations legal persons. If the outcome of their actions required someone taking responsibility, it should be the collective that directs the thing.

Back to AI. Yes, we want some kind of accountability for what AIs can do. After all, we don't expect a herd of random robots running around without reason. Someone will deploy them and give them an assignment. And if that assignment runs amok and does some kind of damage, whoever sent the AIs should be responsible, even if it's a corporate legal person.

Do AIs need protection? I can imagine a day, a few decades from now, when people will feel protective about some AIs and concerned when other AIs are not treated well. Maybe the AIs are left out in the rain, or aren't consulted about best practices in machine lubrication.

Many people fear AIs. They see a day when the power of the AI could subvert us or turn off our life support for good, because we are purposeless. They might even have the right to do so if they are legal persons and decide we are less than legal persons.

I have great faith in humans to manage our creations and ensure our survival, but also to treat those around us properly. Sometimes, it takes a while to figure out what that is. Ask a woman or a minority group. We can be slow arriving at justice. Maybe an AI could help us with that.

1 Reported in this CBC article.

2 http://www.npr.org/2014/07/28/335288388/when-did-companies-become-people-excavating-the-legal-evolution

3 For a completely less than comprehensive look, here's a blog post I did.


Do I Know What’s Good for my Cat?

Recent moves by various governments have declared dolphins, dogs, cats, chimpanzees, and even animals in general, sentient beings1.

What does this mean? The definition of sentience is consciousness of sensory perceptions, but that doesn't specify how animals should be treated. A declaration that animals are sentient, like humans, provokes visions of trying to get dolphins to vote or providing chickens with flying lessons if they want. We'd never force them, of course.

Most of us want to do the right thing by animals. Many scientists study sentience, consciousness, sapience, and/or intelligence in humans and various animals. If a crow recognizes a man who feeds him, is the bird self-aware or intelligent? Has it learned to associate the smell of the man's cologne with tasty treats (sentience), or does it contemplate whether it is taking advantage of the man as it picks fruit from his hand (sapience)? I would need to read a stack of books and papers at least three feet tall to understand how these terms differ from one another when applied to animal behaviour. I respect the experts, but would like to understand this at a non-expert level.

I did some research on the emerging laws related to animal rights and the answers surprised me. Generally, the idea is to give animals more rights than inanimate objects, and to stop them from being used solely for human entertainment. These new laws and declarations are one part getting the laws out of the dark ages and one part enlightenment.

Why was I surprised? There's some hype attached to the announcements about the new laws. Somehow2 the concept that animals can sense and are aware of their surroundings, which is a definition of sentience, transmuted into animals having emotions. Certainly animals can feel things. However, that isn't necessarily the same as a cat feeling sad because it's raining and there are no birds to watch, or a horse being anxious that its rider has had a few martinis, again, and might need counselling for addiction. If we accept the definition of emotion as an instinctive or intuitive, rather than reasoned, interpretation of a situation, I'd point to the keen instincts animals have. And that the sad cat knows hunting (food) is less effective in the rain, and the horse instinctively fears a reckless rider for its own safety.

Do animals love and hate? A dog gets excited when it sees its human, either because of love or the expectation of treats, and a dog growls when a stranger skulks around the yard, but does it hate the intruder or is it defending its territory?

The new laws are meant to stop animals from being handled in ways that injure them. Previously, animals could be treated in the same way as all other types of human possessions (this is the dark ages stuff). No one cares if you hit your table. A lot of people care if you hit your dog. The change in laws makes it easier for officials to intervene if someone is doing something harmful to their sentient possessions (animals). Changing the laws so we cause no physical pain to animals seems like the right thing to do.

More interesting are the decisions to stop using animals for our entertainment3 – which I call the enlightened part. In the absence of direct pain and suffering, how do we tell if the animals are being treated well? We associate a cat purring with contentment, but cats also purr when they are in pain. So if the cat purrs when stuffed into a silly outfit, is it thrilled or distressed? Is there something wrong with training an elephant to sit on a tiny stool while wearing a flowery hat? Or a walrus to clap its flippers and bark for fish? Taking a slightly riskier stance, why shouldn't we drug tigers into passivity and make them jump through hoops, if the fringe benefits that come with that are a rich diet, medical care, and comfy accommodations? None of these actions are natural, but putting on a suit and going to a job interview where we try to say all the right things, regardless of what we really think, is unnatural too.

Wearing a suit is voluntary. The confinement of animals in the circus and other entertainment domains is not. How do we get informed consent from a killer whale? Many people might claim they are trapped in jobs, unable to escape the drudgery or demeaning tasks because they need to make a living.

How do we tell when we've gone too far with animals? With sea animals, there are scientists who study the social structures the animals live in and compare them to what we provide. And yes, there is a difference between sea worlds and the wild. One point of evidence that animals are being treated unfairly is a decreased life expectancy. This seems like a reasonable metric, but when I look at my domestic cats, who are kept indoors because this is verified to increase their life expectancy, I wonder if it's right. My cat is convinced he should be outside, yet I imprison him in the house, based on the assumption that it's better for him. I have some guilt, because I reap advantages, with lower vet bills. No worms, fleas, or stitches to repair battle wounds from the neighbouring tom, raccoons, dogs or cars. Domestic cats live longer inside, but are they living a fulfilling life? They don't breed or do other natural things like hunt, nor are they allowed the full range of a territory or their natural habitat.

Questions plague me:

  • How do we know what animals really want?
  • What’s best for animals, which may not be what they want?
  • How human-like are they, anyway?

We should only impose our values on the animals when we know they want the same things we do. We could defend all we do as sapience, or wisdom, a quality that presumably sets us apart from other species (hence the name: Homo sapiens). We understand the consequences of our actions, and therefore agree to get vaccinated, even if it is transiently unpleasant.

Humans don't do whatever we want; we often do what's good for us, like go to the dentist, eat vegetables and learn mathematical formulas. My cat hates going to the vet and howls like a wild animal when I put him in the car. I don't think this qualifies as mistreatment, even though he clearly thinks it does. On the contrary, many decisions to give animals more rights insist on good veterinary care, although the animals don't seem to like it.

I haven't got a witty conclusion to this post. The new laws to treat animals as sentient are meant to prevent cruelty. We're striving for enlightenment in our interactions with animals, but I think we have a long way to go to figure out what that means. I'm looking forward to the day when I can communicate with my cat (maybe through artificial intelligence), and let him make the decision to get in the carrier and endure the vet's prodding, as an alternative to dying prematurely of a preventable disease.

——

1Here are a few examples:

This blog post http://barkpost.com/good/oregon-court-finds-dogs-are-sentient-beings/ discusses a recent decision by an Oregon court to treat dogs as sentient. It’s a great post for making sense of the law.

A similar story comes from New Zealand http://www.animalequality.net/node/703 , where the law was amended in 2013; this is interpreted as recognizing all animals as sentient, like humans, which gives officials greater power to protect animals in situations of abuse.

And then there was a declaration by a group of scientists in 2012 http://www.earthintransition.org/2012/07/scientists-declare-nonhuman-animals-are-conscious/ that non-human animals display consciousness.

2Not too difficult to imagine, if you consider how easily knowledge is perturbed from the truth and circulated as un-facts on the internets.


What if Your Doctor Were an AI?

This year's MedEdge Summit, York Region's MedTech conference, was affirming. Inspiring in a creative kind of way, rather than the eye-opening, mind-numbing advances in technology that make you think you've been hiding in a foxhole with a metal bowl on your head for the past 70 years. We're on the right track to doing something useful with our crazy, cutting-edge technology. Things are falling into place, in a crooked, bottom-of-a-kaleidoscope pattern.

OK, there was still some wild stuff discussed, like patients being in charge of their own medical data and sequencing an entire human genome in a few days. Those are the beginnings of approaches that I think will turn out well in the long term.

Today, even if we're presented with our entire genomic sequence on a silver tray, no one knows what most of it means. But we will some day. Until then, we'll keep doing those studies that give us a bit more evidence of what the sequence of 11q21.3 means if you have green eyes and are good at cricket.

Right now, having people control their own medical records is like giving a four-year-old the keys to the car. Most of us don't have the training to understand the data we're presented with. As a small segue: when I was 15, I found my medical file open on a desk. A nurse chided me for reading it. I looked at her in wonder, 'but, it's about me.' She snapped the file shut. In retrospect, we were both right. She took it away because it was written in a language that would confuse or mislead most people, so it wouldn't be to the patient's benefit to see it. But information about me should be my property. That seems to be how modern privacy laws are playing out.

Let's get to the exciting stuff: artificial intelligence. I can see it being useful in medicine because AI could provide the kind of assistance only AI is capable of. An enormous number of researchers are learning new things about human health all the time. Expecting your friendly family doctor to read hundreds of papers a day, while he or she works full time meeting with patients, assessing their conditions and suggesting a growing number of preventative approaches, is just crazy. They are only human.

Enter Artificial Intelligence. It's particularly good at assimilating vast quantities of information that arrive over long periods of time. It doesn't need to rack its brain to put together one study published in South Africa in 2009 with another from Sweden in 2016 to collect information about a rare disease. That's easy-peasy for AI, and it's the basis of how we learn about human health. Dozens of separate studies, done in different ways, by different people, come together to lead us to new knowledge. Rarely does one report change medical practice. AI can also provide us with the benefits of analyzing the activities of billions of people. Rumour1 has it that Microsoft was able to find common symptoms that people searched on before they were diagnosed with pancreatic cancer. AI can provide an up-to-the-microsecond summation of all that's relevant to a patient's condition.
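
To make that 2009-plus-2016 example concrete, here is a minimal sketch of one standard way separate studies get combined: a fixed-effect meta-analysis, which weights each study's estimate by the inverse of its variance so that precise studies count for more. The studies and all the numbers below are invented for illustration.

```python
# Fixed-effect meta-analysis sketch: pool study estimates, weighting each
# by the inverse of its variance (1 / standard_error**2). Numbers invented.

studies = [
    # (label, effect estimate, standard error)
    ("South Africa 2009", 0.42, 0.20),
    ("Sweden 2016",       0.30, 0.10),
]

weights = [1 / se ** 2 for _, _, se in studies]
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.2f} +/- {pooled_se:.2f}")
# -> pooled effect: 0.32 +/- 0.09 (closer to the more precise Swedish study)
```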

Great, as far as it goes. But it stops at the sum of all human knowledge and behaviour. Could AI possibly deal with uncertainty and lack of answers better than the current, malpractice-avoidance approaches? AI probably isn't capable of caring or being sympathetic. In my experience, this has been all but beaten out of the current medical system, with quotas to deliver, expectations to manage, and routinized care. I long for the time when the doctor put the chart down, smiled and said, 'you'll be fine. It's just a bug/growing pains/aging/over exertion/gas/random. Come back and see me in a week if it isn't better.'

How is AI going to provide us with common sense, perspective, or talk us down from the fear we are dying of an incurable but totally improbable disease? Maybe it can. To my way of thinking, many of the situations where patients need to be told things are ok are based on the natural variability of the human body. Guidelines usually have a range for things like blood pressure, heart rate, levels of cholesterol and more. What does it mean when someone is outside the normal range? More tests can be done for explanations that might be pathological. When those turn up negative, the physician is left with no explanation and the possibility of natural variation. The doctor may have a hard time saying so, just in case there's something going on. AI could at least quantify the answer with something like 'there's only a one in 500 chance of this', or 'a one in 248 chance of that'.
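
That kind of number is easy to produce if you assume the measurement follows a normal distribution in healthy people (reference ranges are typically built to cover roughly 95% of them). A hedged sketch, with illustrative numbers only:

```python
from statistics import NormalDist

# Hypothetical example: turn a lab reading into plain odds by asking how
# often a healthy person lands at least this far from the mean.
healthy = NormalDist(mu=120, sigma=10)   # e.g. an invented systolic BP range
reading = 148

# Two-sided tail probability for a reading this extreme in either direction.
z = abs(reading - healthy.mean) / healthy.stdev
p = 2 * (1 - healthy.cdf(healthy.mean + z * healthy.stdev))

print(f"seen in about 1 in {round(1 / p)} healthy people")  # about 1 in 196
```

Unusual, the AI could say, but not unheard of, and that is exactly the perspective a worried patient needs.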

How will AI deal with situations when patients need to be consoled? We all die eventually and at some point many of us will need to be told we have a terminal or very serious condition. Will AI develop algorithms to read a person’s expressions and body language so it can tailor its delivery to each patient, or will it defer to its human equivalent? Let the doctor do what may have attracted them to medicine in the first place – care for their patients.
