This year’s MedEdge Summit, York Region’s MedTech conference, was affirming. Inspiring in a creative kind of way, rather than the eye-opening, mind-numbing advances in technology that make you think you’ve been hiding in a foxhole with a metal bowl on your head for the past 70 years. We’re on the right track with our crazy, cutting-edge technology toward doing something useful. Things are falling into place, in a crooked, bottom-of-a-kaleidoscope pattern.
Ok, there was still some wild stuff discussed, like patients being in charge of their own medical data and sequencing an entire human genome in a few days. Those are the beginnings of approaches that I think will turn out well in the long term.
Today, even if we’re presented with our entire genomic sequence on a silver tray, no one knows what most of it means. But we will someday. Until then, we’ll keep doing those studies that give us a bit more evidence of what the sequence of 11q21.3 means if you have green eyes and are good at cricket.
Right now, having people control their own medical records is like giving a four-year-old the keys to the car. Most of us don’t have the training to understand the data we’re presented with. As a small segue, when I was 15, I found my medical file open on a desk. A nurse chided me for reading it. I looked at her in wonder, ‘but, it’s about me.’ She snapped the file shut. In retrospect, we were both right. She took it away because it was written in a language that would confuse or misguide most people, so it wouldn’t be to the patient’s benefit to see it. But information about me should be my property. That seems to be how modern privacy laws are playing out.
Let’s get to the exciting stuff: artificial intelligence. I can see it being useful in medicine because AI could provide the kind of assistance only AI is capable of. An enormous number of researchers are learning new things about human health all the time. Expecting your friendly family doctor to read hundreds of papers a day, while he or she works full time meeting with patients, assessing their conditions and suggesting a growing number of preventative approaches, is just crazy. They are only human.
Enter Artificial Intelligence. It’s particularly good at assimilating vast quantities of information that arrive over long periods of time. It doesn’t need to wrack its brain to put together one study published in South Africa in 2009 with another one from Sweden in 2016 to collect information about a rare disease. That’s easy-peasy for AI, and it’s the basis of how we learn about human health. Dozens of separate studies, done in different ways, by different people, come together to lead us to new knowledge. Rarely does one report change medical practice. AI can also provide us with the benefits of analyzing the activities of billions of people. Rumour1 has it that Microsoft was able to find common symptoms that people searched on before they were diagnosed with pancreatic cancer. AI can provide an up-to-the-microsecond summation of all that’s relevant to a patient’s condition.
Great, as far as it goes. But it stops at the sum of all human knowledge and behaviour. Could AI possibly deal with uncertainty and lack of answers better than the current, malpractice-avoidance approaches? AI probably isn’t capable of caring or being sympathetic. In my experience, this has been all but beaten out of the current medical system, with quotas to deliver, expectations to manage, and routinized care. I long for the time when the doctor put the chart down, smiled and said, ‘you’ll be fine. It’s just a bug/growing pains/aging/over-exertion/gas/random. Come back and see me in a week if it isn’t better.’
How is AI going to provide us with common sense, perspective, or talk us down from the fear that we are dying of an incurable but totally improbable disease? Maybe it can. To my way of thinking, many of the situations where patients need to be told things are ok are based on the natural variability of the human body. Guidelines usually have a range for things like blood pressure, heart rate, levels of cholesterol and more. What does it mean when someone is outside the normal range? More tests can be done for explanations that might be pathological. When those turn up negative, the physician is left with no explanation and the possibility of natural variation. The doctor may have a hard time saying so, just in case there’s something going on. AI could at least quantify the answer with something like ‘there’s only a one in 500 chance of this’, or ‘a one in 248 chance of that’.
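To make that ‘one in 500’ idea concrete, here’s a toy sketch of how such odds could be computed. It assumes the measurement naturally follows a bell curve (a normal distribution) with a known average and spread; the heart-rate numbers below are made up for illustration, not clinical values.

```python
import math

def outside_range_odds(value, mean, sd):
    """Two-tailed probability of a healthy person showing a value at
    least this far from the average, assuming a normal distribution."""
    z = abs(value - mean) / sd              # how many standard deviations away
    p = math.erfc(z / math.sqrt(2))         # chance of being at least that far out
    return p, round(1 / p)                  # probability, and 'one in N' odds

# Hypothetical example: resting heart rate, assumed mean 70 bpm, sd 10 bpm
p, odds = outside_range_odds(90, 70, 10)
print(f"Roughly a 1 in {odds} chance from natural variation alone")
```

In real medicine the distributions are rarely this tidy, but the point stands: a number like ‘1 in 22’ gives the patient something to weigh, where ‘it’s probably nothing’ does not.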
How will AI deal with situations when patients need to be consoled? We all die eventually and at some point many of us will need to be told we have a terminal or very serious condition. Will AI develop algorithms to read a person’s expressions and body language so it can tailor its delivery to each patient, or will it defer to its human equivalent? Let the doctor do what may have attracted them to medicine in the first place – care for their patients.
1 Maybe not rumour; here’s the New York Times article about the paper: http://www.nytimes.com/2016/06/08/technology/online-searches-can-identify-cancer-victims-study-finds.html?_r=0