AI, Common Sense, and the Art of Medicine
Note: I am RuralDocAlan’s wife and we write cooperatively. We talk about the art of medicine often and many of his posts here on Substack touch on the subject. But since this article on how AI misses real expertise is my story, we have posted this article under my name.
I’ve been writing for a number of years about what has happened to the art of medicine as corporate entities take over and destroy the relationship between a physician and a patient.
When I was a graduate student at Carnegie Mellon, we students took part in John Hayes and Linda Flower's experiments with what they called protocol analysis. In this approach, subjects are given a writing task and told to say aloud what is going through their minds as they perform it. At the time there was a lot of blowback about protocol analysis changing the thought processes of those doing the writing. In any case, I have a very vivid memory of being given a memo to edit, with instructions to verbalize what I was thinking as I revised the text.
As if it were yesterday, I remember talking as I revised the memo and in one fell swoop deleting two or three paragraphs in the memo, commenting “This is garbage.” One of my fellow students wanted to know how I came to the conclusion the paragraphs were garbage. Well, I really couldn’t tell you. It was true, the paragraphs were garbage, but I couldn’t tell you where that thought came from. I did not have a conscious give-and-take mental discussion about why these paragraphs were garbage. I just knew it and could only verbalize my conclusion.
This kind of knowing has been considered the black box of thinking. On some level, in the non-verbal part of our minds, we can assess a situation without any conscious give-and-take about how we came to the conclusion.
While I was at Carnegie Mellon, Herbert Simon was busy building expert systems from the top down, asking experts questions and plugging their answers into a computer. Hubert Dreyfus, on the other hand, was arguing that real expertise is not rule-based.
“The capacity of experts to store in memory tens of thousands of typical situations and rapidly and effortlessly to see the present situation as similar to one of these, apparently without resorting to time-consuming feature detection and matching, suggests that the brain does not work like a heuristically programmed digital computer applying rules to bits of information.”
For a more detailed explanation of the notion that expertise resides outside the rule-based checkbox, you might look at Bent Flyvbjerg's interview with the Dreyfus brothers on why real expertise is not rule-based.
Karl Pribram attributed this ability to recall information without thinking about it to having a holographic brain.
The art of medicine lies in the expertise of experienced physicians who can recall that one incident 40 years ago that appears to have similarities to the patient before them.
As I’ve watched the development of AI, I have been waiting for someone to notice that computer-generated answers are not the whole story. And yes, this includes evidence-based medicine.
Lo and behold, along comes Yejin Choi with a TED Talk about how AI, for all its amazing accomplishments, does not always exhibit common sense. She has several concerns about AI, including that the systems are so large that only a few companies can afford to develop them. This puts control of AI in the hands of a few people, which she regards as a problem. She admits that AI is incredibly impressive in what it can do, but it can also suggest incredibly stupid answers to requests about how to do something, because AI has no common sense.
AI, she says, has no understanding of human values. For example, given the information that 5 items of clothing take 5 hours to dry completely in the sun, and then asked how long it would take 30 items to dry, AI might answer 30 hours. The common-sense answer is still 5 hours, because the clothes all dry at the same time. Ms. Choi suggests that making bigger and bigger AI systems will not solve the problem of AI's lack of human understanding.
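Choi's drying example hinges on a bit of proportional arithmetic applied where it doesn't belong. A minimal sketch of the two lines of reasoning (the function names are mine, purely for illustration):

```python
def naive_linear_estimate(items, known_items=5, known_hours=5):
    """Naive proportional reasoning: scale drying time with item count."""
    return items * known_hours / known_items

def common_sense_estimate(items, known_hours=5):
    """Clothes hung in the sun dry in parallel, so (space permitting)
    thirty items take the same five hours that five items do."""
    return known_hours

print(naive_linear_estimate(30))   # 30.0 -- the "stupid" answer
print(common_sense_estimate(30))   # 5 -- what a person would say
```

The point is that both calculations are trivially easy; what is hard is knowing which one applies, and that knowledge is exactly what the text of the problem never states.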
My point? I do not believe AI has solved its black box problem any more than we have solved the problem in humans. Indeed, Yejin Choi seems to have some of the same concerns I have always had about what expertise really is, especially in medicine. A physician's art extends far beyond evidence-based medicine because an experienced primary care physician, for example, has what amounts to years of black box knowledge to draw on when working with patients.
Common sense doesn’t magically arise out of computer algorithms. Nor does expertise, according to the Dreyfus brothers. Any physician with any experience can tell you this, but the current tidal wave of unshackled adulation for AI drowns out all—do I dare say it—common sense.
Some researchers will admit there is bias even in scientific research. There’s the obvious bias of omitting various subjects from a study. It’s well known that for years heart study subjects consisted mostly of men. This is an overt bias.
The Attorney General of Texas recently filed a suit against Pfizer alleging fraud in its use of relative risk instead of absolute risk when promoting its vaccine. Pfizer's use of relative risk in 2020 promotional materials hugely exaggerated the benefit of the vaccine.
But there's a more subtle bias no one talks about. We know that every single patient is an individual, and some patients may not fit the profile of the subjects recruited for a research study. This wouldn't matter if the medical community hadn't become so enamored of rules (unlike Dreyfus) and didn't apply those rules to patients whether they fit the patient's situation or not.
Before the corporate medical fixation on numbers and rules, physicians established a personal relationship with their patients and worked with each patient to see whether the rules applied. Today, if a patient doesn't fit the checkbox of rules, the patient is declared well, when often they are not, as any experienced primary care physician could tell you.
AI is not equipped to trace the single occurrence of an event 30 years ago that never became a language statement to be collected and turned into an algorithm. As Ms. Choi says, building bigger and bigger AI machines will not bring common sense to AI.
It's time to recognize that expertise in the art of medicine does not reside in collected data and the rules derived from it. The real source of expertise in medicine goes unrecognized: the ability to recall a single incident from 30 years ago that bears similarities to the situation of the patient sitting in front of the physician today.
There's the old saying about computers: garbage in, garbage out. I am not suggesting that what is fed into AI is garbage, or that what comes out is necessarily garbage, apart from AI's lack of common sense. However, I strongly believe that much of human expertise, medical or otherwise, resides in our black box memory and, because of this, never becomes data that can be fed into AI. Think about it. My real knowledge of why those paragraphs were garbage was never available to me, yet I knew they were garbage. The only data that could go into AI would be my statement that the paragraphs were garbage. The assessment of the paragraphs, where the real expertise resides, was unavailable for feeding into an AI algorithm.
As Ms. Choi says, we don't know what's under the hood of AI, just as we don't know how physicians instantly pull up the recognition of a past experience with a patient. And this information cannot be transformed into data, at least for now.
Until we understand black boxes, we don’t know the real source of the art of medicine, but that’s no reason to pretend data driven AI contains any notion of the art of medicine.