Thursday, June 1, 2023

Too much AI has big drawbacks for doctors — and their patients

Artificial intelligence in medical care is here to stay — but it can do more harm than good, especially if those implementing it lose sight of the essential importance of a doctor’s clinical judgment.

As a primary-care physician, my job is to evaluate and re-evaluate a patient in an ongoing, personalized way that even the best AI could never attain.

Here’s an example: An 80-year-old patient of mine with chronic heart failure drank and ate too much on a recent Caribbean cruise and ended up in a hospital, his lungs filled with fluid.

A cardiac echo revealed an ejection fraction (how well the heart is pumping) of only 15%.

In fact, a recent study concluded AI might have assessed that ejection fraction more accurately than the cardiologist did, and such assessment is clearly going to be an important role for AI.

But the actual management of the patient went well beyond a simple number.

And repeated reassessment was required to initiate the correct therapeutic response each time his blood pressure dropped or he gained a few pounds or became slightly short of breath.

No AI could have managed this patient.

This particular patient didn’t like to complain, and years of experience guided me in how to factor in his personality in a way no AI could have considered.

If my patient had consulted the popular AI app ChatGPT for quick answers in real time, many of those answers wouldn't have had the nimbleness to help him. (Yes, some are pushing such uses of ChatGPT.)

Remember, AI is limited by the amount of info you put into it.

Some in the medical field are pushing for using AI tools like ChatGPT to diagnose patients.

Instead, I practiced the art of medicine.

I kept adjusting his blood-pressure medicines and his diuretics.

With less resistance to pump against, his heart function improved to an ejection fraction of more than 30%, and prolonged, costly hospitalizations were avoided.

AI could be there at the back end to accurately reassess heart function but could never have managed the patient along the way as I could.

There are other anticipated roles for AI too.

Insurance companies and healthcare systems can save money in the short run by implementing AI to replace traditional functions.

One of these is pre-certifications or pre-authorizations, where a doctor must get special permission from a patient's insurer to perform a specific test or treatment that may go beyond standard protocol.

I may want to order an MRI, for example, because of the slightest tingling or weakness in an extremity that could be indicative of a much larger problem.

This might not reach AI’s criteria, but how am I going to argue effectively with a computer rather than an insurance company’s medical director? 

Or slight shortness of breath and fatigue might not reach an insurance company’s AI criteria for approval for a stress echocardiogram even though I feel it’s indicated.

Or a calcium-scoring CT scan to look for coronary artery disease may be turned down because an algorithm determines the patient is too young.

All these issues are dependent on the particular patient, and it is my role to advocate for them with the insurer and its medical director.

The rise of AI in medicine could lead to reduction in quality of care.

But there is growing pressure for cost-saving AI to take over approvals and rejections, which adds another thick level of bureaucracy to an already-arduous process.

Indeed, increasing use of AI in medical practice threatens to superimpose a one-size-fits-all model that’s been growing since the day the Affordable Care Act passed.

True, there are estimates AI will lead to $1.3 billion in savings to health-insurance companies this year. 

But at what cost to quality of care?

We must consider that short-term savings can cause longer-term losses as diagnoses are missed or treatments are delayed.

Cigna is already using algorithms to mass-reject health claims (without even actually reading them), and few are appealed, ProPublica just found.

How can any doctor find the time or wherewithal to appeal?

And then there’s malpractice. Practicing physicians like me are concerned we will be held to a standard set by artificial intelligence.

What if I disagree with a computer analysis but am later proven to be wrong?

This will leave me and doctors like me open to liability and push us further in the direction of rigid robotic care to avoid being sued.

And conversely, increased use of AI to diagnose or decide on clinical care exposes a hospital or other health system to liability when the AI isn't up to the task or gets it wrong.

“With increasing integration of artificial intelligence and machine learning in medicine, there are concerns that algorithm inaccuracy could lead to patient injury and medical liability,” a recent Milbank Quarterly article noted.

But the problem with the authors’ solution of expanding a “liability framework” to cover the addition of AI is that it means yet another layer of bureaucracy between the patient and the actual care he or she requires.

AI padding the interface is a bad solution to the current health-care bloat.

Artificial intelligence, when used properly, is going to be a useful healthcare tool.

But doctors must control it rather than the other way around to safeguard the essential doctor-patient relationship.

Marc Siegel, MD, is a clinical professor of medicine and medical director of Doctor Radio at NYU Langone Health and a Fox News medical analyst.



