Throughout the United States, healthcare systems are faced with widening gaps in patient care. Patients and providers endure long wait times, existing care is remarkably fragmented, costs keep climbing, and many services are simply inaccessible. We have a glut of healthcare machinery, but we don't get good outcomes for the amount of money we spend each year on healthcare. Increasingly, AI is being promoted as a healthcare problem solver. But AI isn't going to solve the huge underlying problems in our dysfunctional healthcare system, problems only our legislators can solve.
Take Big Pharma CEO salaries. Obviously, there is a lot of money to be made in certain sectors of healthcare. As I've written before, Mr. Bancel of Moderna made $398 million in profit in the first year of the COVID pandemic, and Mr. Andrew Witty, CEO of UnitedHealth, made $23 million from the same pandemic. Hundreds of healthcare "leaders" are making millions of dollars each year. Furthermore, the ranks of healthcare CEOs have proliferated by 3,200 percent, while the physician arm of healthcare has not grown to match.
These continual consolidations of healthcare organizations undermine efforts to provide seamless, patient-centered care, threaten the maintenance of accurate, comprehensive medical records, and drain enormous amounts of money from a healthcare system already floundering under huge costs that have absolutely nothing to do with necessary medical care. Inevitably, healthcare will face a two-headed crisis, with patients increasingly mistrusting medical organizations, as we have seen over the last four years, and practitioners losing (and with good reason) even more confidence in the systems meant to support them. Can AI solve this problem?
According to many analyses, "Big Pharma" CEO salaries are considered to be very high, often significantly exceeding the average employee pay at their companies, with some critics arguing that these salaries are excessively large and raise concerns about the affordability of medications for patients; this is especially evident when comparing CEO pay to the median employee income within the industry.
Google AI response to a search for "Are Big Pharma CEO salaries too high?"
But does this solve the problem?
Rural U.S. healthcare has suffered greatly from these healthcare "leaders." Obstetrics and mental healthcare have increasingly disappeared from the rural scene. In North Dakota, we have tried for the last five decades to stave off the rural healthcare crisis, especially in obstetrics. For 15 years, I directed the southeast quarter (the most populated quarter) of the University of North Dakota (UND) family practice residency program, preparing family practice doctors to practice obstetrics well in rural North Dakota.
The plan was well-intentioned, one of the few plans in one of the few states that actually foresaw the impending rural healthcare crisis and actively tried to prevent it. We were under the impression that all we needed to do was train the doctors well. Later, when I moved to a small community to practice what I had been preaching and teaching, I realized that training the doctors was only about 50 percent of the problem associated with rural obstetrics.
On the plus side, the people in the communities we served welcomed the care and were happy to use our services. On the negative side, we had remarkable problems with the medical malpractice carriers, some of the nurses, the hospital owners, CEOs, some health insurers, and the American College of Obstetricians and Gynecologists (ACOG). By the time we had hit all these brick walls, it was nearly impossible to provide the obstetric care we had hoped to provide. I gave up.
AI-assisted telemedicine has been put in place to serve rural areas. The promotion for the service claims it offers the opportunity to develop a strong patient relationship. Does it? I think not. There's a difference between seeing a doctor via telemedicine and sitting across from the doctor in the exam room, looking the doctor in the eye. That kind of patient/physician relationship, we know, is a key factor in a patient's developing the trust in a doctor that leads to good patient care.
Mental healthcare in rural areas is another large problem; it is almost impossible to provide, especially on an emergency basis. Typically, the worst problems present on Saturday night or Sunday morning. In our ER, we had a list of 10 places we could call to refer patients needing mental health interventions. These patients can become very difficult to manage, as they can be violent, suicidal, and dangerous, especially during meth withdrawal.
Often they were hallucinating, and many times they were suicidal, which adds yet another layer of problems. I would spend four hours between midnight and 4:00 a.m. calling these ten places, and many of the state institutions did not answer the phone at night. Sometimes jail was the only place the patients could be sent. That was actually the last place I wanted to send them, because the sheriff and deputies aren't trained to manage hallucinating patients.
These patients are often violent and dangerous, but they still have real healthcare needs. Although I have been lucky never to be involved with a patient death in these situations, deaths can occur even in places designed to cope with these problems. These patients present layers and layers of risk to numerous people during the four hours I'm on the phone trying to find a placement for them. Much mental healthcare, especially in rural areas, is currently offered by means of telemedicine. Nonetheless, AI has little to offer here unless it could be trained to do the four hours of calling numbers on the list to arrange a transfer. Even so, AI can't make someone answer the phone in the middle of the night, or manage the patient and the nurses in the ER while I am on the phone.
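To be fair about what automation could and couldn't do here, consider a deliberately simple, hypothetical sketch of that overnight phone marathon. The facility names, the answer rates, and the call function below are all invented for illustration; no real telephony or AI service is assumed. The point is that the calling loop itself is trivial to automate, while the actual bottleneck, somebody answering at 3:00 a.m., sits entirely outside the loop.

```python
# A deliberately simple, hypothetical sketch of the overnight referral calls.
# Facility names, answer rates, and the call function are invented for
# illustration; no real telephony or AI service is assumed.
import random

FACILITIES = [
    "State Hospital",
    "Regional Psych Unit",
    "County Crisis Center",
    # ... the other seven numbers on the ER's call list
]

def call_facility(name: str, hour: int) -> bool:
    """Stand-in for an AI voice agent placing one call.
    Models the real bottleneck: at 3 a.m., most places don't pick up."""
    answer_rate = 0.05 if hour < 7 else 0.5   # invented numbers
    return random.random() < answer_rate

def find_placement(hour: int = 3) -> str | None:
    """Work down the list until someone answers and accepts the transfer."""
    for name in FACILITIES:
        if call_facility(name, hour):   # automation can dial tirelessly...
            return name                 # ...but only a human can pick up
    return None                         # nobody answered; patient stays in the ER

if __name__ == "__main__":
    placement = find_placement()
    print(placement or "No placement found; the 4 a.m. problem remains")
```

Even granting the sketch everything it assumes, the loop still ends the same way my nights often did: with no answer, and with nobody watching the patient and the nurses while the calls are being made.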
The typical government response to these healthcare problems, which have been around for years, is fear-mongering, lying, and threatening more mandates. Belligerence is not the answer to our healthcare system's problems.
Rather than blaming RFK Jr. for promoting vaccination choice during measles outbreaks, AI simply provides the CDC recommendations when asked about the measles vaccine for the elderly:
The CDC recommends that older adults who were born after 1957 get the MMR vaccine if they don't have proof of immunity. This includes people who were vaccinated before 1968, when the vaccine was less effective.
Who should get the vaccine
- People who were born after 1957 and don't have proof of immunity
- People who were vaccinated before 1968, when the vaccine was less effective
- People who aren't sure if they've been vaccinated

How many doses
- People who are low risk may need one dose of the MMR vaccine
- People who are high risk may need two doses of the MMR vaccine at least 28 days apart

Other ways to confirm immunity
- Check your vaccination records
- Get a blood test to measure antibodies to the measles virus

Safety of the vaccine
- It's not harmful to get an additional dose of the measles vaccine, even if you've been vaccinated before.

Protect yourself and others
- Vaccinations are a proven and safe way to prevent disease. They protect you and your loved ones by decreasing transmission to those with weak immune systems.
Google AI in response to “measles vaccination of the elderly” search
In this case, AI has demonstrated the ability to generate a lot of "answers," some of which are questionable. The older generation of patients who had measles and didn't die from it are now actually dying of old age. Not only have we had lifetime immunity, but we also don't transmit measles, as those who are vaccinated can.
Recently, Dr. Edwin Leap, an ER doctor, wrote an article about what is called ER boarding. Dr. Leap's ER has 31 full beds staffed by one MD and one NP, with 17 more patients in the waiting room. What's boarding, you ask? When patients who should be transferred out of the ER can't be, because there is no place for them to go, something must be done with them. Patients who should be moved to acute care beds can't be moved because, while there are vacant beds, there are no nurses and staff available to care for patients in those beds. The acute hospital beds are termed functionally full. Incidentally, when the state officials came around to help with the boarding problem, their advice was to admit the boarders to the hallways of acute care.
In my book, Modern Medicine: What You're Dying to Know, I wrote about the folly of the nursing quotas and ratios passed by our legislators. Back then, the notion of a primary care nurse had become popular. Hospital administrators were letting orderlies and CNAs go to save money, and nurses were then required to take over the lesser-skilled jobs of the orderlies and CNAs. The nurses with this new, expanded job list were euphemistically termed primary care nurses. Interestingly, we had thought our Modern Medicine book would offend attorneys the most. In focus groups, we found the angriest group was the nurses: they had wholeheartedly accepted the new job description and duties as a plus.
To my way of thinking, AI is highly overrated as a solution to the longstanding dysfunctions in the U.S. healthcare system, at least for the time being. Is AI going to help me deal with a violent, dangerous, suicidal patient threatening the nurses and his or her own life at 3 o'clock on a Sunday morning? Maybe AI can make the 10 phone calls and find a place for this person to go, and maybe not. In any case, I doubt AI will be able to protect the patient and the nurses from harm while I'm on the phone.
It's been reported that AI can read an x-ray better than a radiologist, but this is misleading. What if the patient's symptoms do not fit the evidence-based pattern? Dr. Hans Duvefelt writes often about finding the causes of mysterious symptoms that don't fit the recognized patterns. AI may read the x-ray, but AI can't interpret the reading when the patient's symptoms don't fit the pattern. Physicians are the ones who make the diagnosis from the symptoms.
After having survived four years of medical school, four years of residency, and four decades of practice, I know that the difficult part of solving a problem is not the answer; it is the question. AI cannot formulate questions; it merely gives answers to them.
The notion that doctors can be replaced by AI at this time seems absurd. It is my understanding that the model at present would be for nurses to ask the right questions, pump those questions into AI, and have AI deliver an answer. Meanwhile, some insurance companies admit they reject claims without reading them; there's no point in using AI to review claims if the claims aren't read anyway.
I read an article a few days ago, written in support of AI, about a patient who had come in with shortness of breath. The EKG was normal. The chest CT was normal. The labs were normal. The patient died a few hours later. This patient was apparently seen by a real provider, and the diagnosis was missed.
After the dust settled, the real diagnosis was a ruptured aortic aneurysm. What's missing in this history is pain. I have treated several patients with a ruptured aortic aneurysm, and I can tell you that 100 percent of the time they presented with remarkable pain.
As a matter of fact, the pain is so remarkable that you can almost make the diagnosis just by walking into the room and seeing the patient writhing around in the bed the way people do when they have unbearable pain. So unless AI can actually ask the questions, it will have to rely on somebody else asking the right questions of patients and providing AI with the right input. If the knowledge of the physician isn't in the electronic medical record, AI isn't going to be able to find it and craft a response. The expert knowledge of a physician is not based on rules, but rather on the ability to recall a single instance of a diagnosis made 30 years ago. AI has no access to that information, so AI results are biased by the lack of exactly the information that makes the physician an expert at diagnosis.
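The limitation is easy to see even in a toy version. The sketch below is hypothetical from top to bottom (the pattern table, the findings, and the function names are all invented); it stands in for any system, AI or otherwise, that can only rank diagnoses against whatever actually made it into its input. If the writhing-in-pain observation never reaches the record, no amount of computation will surface the aneurysm.

```python
# A deliberately simple, hypothetical sketch: a "diagnostic AI" that can only
# work with what is in the record. Patterns and diagnoses are invented.

KNOWN_PATTERNS: dict[frozenset[str], list[str]] = {
    frozenset({"shortness of breath", "normal EKG", "normal CT", "normal labs"}):
        ["anxiety", "deconditioning"],
    frozenset({"shortness of breath", "severe abdominal pain"}):
        ["ruptured aortic aneurysm"],  # reachable only if the pain is recorded
}

def ai_differential(recorded_findings: set[str]) -> list[str]:
    """Return candidate diagnoses for the findings actually in the record."""
    return KNOWN_PATTERNS.get(frozenset(recorded_findings), ["no match"])

# The chart from the article: nobody recorded the pain.
print(ai_differential({"shortness of breath", "normal EKG",
                       "normal CT", "normal labs"}))   # ['anxiety', 'deconditioning']

# The bedside observation that never made it into the input:
print(ai_differential({"shortness of breath",
                       "severe abdominal pain"}))      # ['ruptured aortic aneurysm']
```

The sketch is trivial by design, but the asymmetry it shows is not: the system's output is only ever as good as the observations somebody chose to enter.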
In the meantime, my advice to the state and federal governments regarding healthcare is to stop being belligerent about the longstanding problems in our dysfunctional healthcare system and to stop depending on AI to solve them. AI cannot solve these longstanding problems. Only our legislators have the ability to correct some of our healthcare financing and delivery problems. As Elisabeth Rosenthal said in her book An American Sickness, consumers need to demand change from their legislators; she believes the only way to get corporate profiteering out of the healthcare system is for individuals to demand far-reaching reform.
It's time to pressure our legislators to solve these longstanding healthcare system problems and put healthcare decisions back in the hands of physicians.
Thanks, C.C. At the risk of repeating myself, it's NOT the answers that require thought and nuance (which, by the way, AI is NOT very good at); it is the questions. I recently read that AI could diagnose Rolling eye disease, which is apparently rare.
Let me give you some examples of tough questions. I've had a few Down syndrome patients. Often they have low white counts, so when they came in with "normal" white counts they were sick. I'm not sure whether AI could manage that UNLESS the person asking the question asked it within that context. So, if the people making those decisions and asking those questions actually decided to input that information, I'm not sure what AI would do.

Another example: a young woman on OCs (hint) comes in with a backache. Will AI ask her about shortness of breath, and if she says "No," will AI rephrase and ask her again in a different way? By the way, when I rephrased the question as "Are you short of breath when you go up the stairs?" she said "YES," and that led to the chest CT which diagnosed her almost lethal pulmonary embolus.

So, in a system that struggles with context and nuance, will either of these patients get diagnosed? Will that matter to anyone? We are soooo hung up on "evidence-based medicine," which will be programmed into AI. Will these people be missed by AI, and if they are, will that matter to AI or anyone? By the way, look at the NEW evidence for bacterial vaginosis, a 180-degree spin. So much for algorithm-based medicine.
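To put the white-count example in code terms, here is a minimal, hypothetical sketch. The reference range, the baseline rule, and the numbers are all invented for illustration; it simply contrasts a context-free check with the question a physician actually asks:

```python
# Hypothetical sketch of the context problem described above.
# Reference range, baseline rule, and patient values are invented.

NORMAL_WBC = (4.5, 11.0)  # a typical adult reference range, 10^3 cells/uL

def algorithm_installed(wbc: float) -> str:
    """Context-free, 'evidence-based' check: flag only out-of-range values."""
    low, high = NORMAL_WBC
    return "normal" if low <= wbc <= high else "abnormal"

def physician(wbc: float, baseline: float) -> str:
    """The context-aware question: is this normal *for this patient*?"""
    return "concerning" if wbc > 1.5 * baseline else "expected"

# A Down syndrome patient who habitually runs a low white count:
baseline, today = 3.2, 7.8
print(algorithm_installed(today))   # -> "normal"      (the miss)
print(physician(today, baseline))   # -> "concerning"  (more than double baseline)
```

The fixed range calls the sick value "normal" because it has no idea what this particular patient's baseline is. Unless somebody thinks to put that context into the input, the algorithm never gets to ask the right question.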
In this case I would translate "AI" as Algorithm Installed. The diagnostic tree of allopathic care already lends itself to deciding which drug to prescribe. It must be a tempting step to fill the gaps in personnel with AI, but it's already clear that the best doctors are the ones who take the time to think more deeply in their intakes, rely less on ordering tests and more on careful listening and observation. I can't see AI replacing humans--the insightful kind, anyway.