
Thanks C.C. At the risk of repeating myself, it's NOT the answers that require thought and nuance (which, by the way, AI is NOT very good at). It is the questions. I recently read that AI could diagnose Rolling eye disease, which is apparently rare.

Let me give you some examples of tough questions. I've had a few Down syndrome patients. They often have low white counts, so when they came in with "normal" white counts they were sick. I'm not sure whether AI could manage that, UNLESS the person asking the question framed it within that context. So, if the people making those decisions and asking those questions actually decided to input that information, I'm not sure what AI would do.

Another example: a young woman on OCs (hint) comes in with a backache. Will AI ask her about shortness of breath, and if she says "No," will AI rephrase and ask her again in a different way? By the way, when I rephrased the question as "Are you short of breath when you go up the stairs?" she said "YES," and that led to the chest CT which diagnosed her almost-lethal pulmonary embolus.

So, in a system that struggles with context and nuance, will either of these people get diagnosed? Will that matter to anyone? We are soooo hung up on "evidence-based medicine," which will be programmed into AI. Will these patients be missed by AI, and if they are, will that matter to AI or anyone? By the way, look at the NEW evidence for bacterial vaginosis: a 180-degree spin. So much for algorithm-based medicine.


In this case I would translate "AI" as "Algorithm Installed." The diagnostic tree of allopathic care already lends itself to deciding which drug to prescribe. It must be a tempting step to fill gaps in personnel with AI, but it's already clear that the best doctors are the ones who take the time to think more deeply in their intakes, relying less on ordering tests and more on careful listening and observation. I can't see AI replacing humans, at least not the insightful kind.
