
Can AI Enhance Physician Performance?

Zhang Mengran Mon, Apr 08 2024 11:09 AM EST
[Image] Source: National Academy of Medicine, United States

One of the most hyped promises of medical artificial intelligence (AI) is its ability to assist human clinicians in interpreting images such as X-rays and CT scans more accurately, thus improving diagnostic reports and enhancing the performance of radiologists.

But does it really live up to the hype?

A collaborative study from Harvard Medical School, Massachusetts Institute of Technology (MIT), and Stanford University suggests that the effectiveness of using AI tools for image interpretation may vary among clinical physicians.

In other words, whether AI helps or hinders still depends on the human at this stage. The research indicates that individual differences among clinicians significantly shape human-machine interaction, in ways that AI experts do not yet fully understand. The analysis was recently published in the journal Nature Medicine.

Considering Physician Factors

The research indicates that in certain circumstances, the use of AI may interfere with the performance of radiologists and affect the accuracy of their interpretations.

While previous studies suggested that AI assistants indeed improve physicians' diagnostic performance, they tended to treat physicians as a homogeneous group without considering differences among them. In clinical practice, each physician's judgment is crucial to the patient.

In contrast, this new study focuses on individual factors of clinical physicians—specialty areas, years of practice, previous experience with AI tools—and analyzes how these factors play out in human-machine collaboration.

Researchers analyzed how AI affected the performance of 140 radiologists in 15 X-ray diagnostic tasks, where physicians needed to reliably identify notable features in images and make accurate diagnoses. The analysis involved 324 patient cases with 15 different conditions.

To determine how AI affected physicians' ability to detect abnormalities and diagnose them correctly, the researchers used advanced computational methods to measure changes in each radiologist's performance with and without AI.
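The core comparison can be illustrated with a minimal sketch. The data, reader names, and scoring below are hypothetical, not the study's actual dataset or statistical methods; the point is simply that each radiologist's accuracy is measured twice, with and without AI assistance, and the per-reader delta is what varies.

```python
# Hypothetical illustration: per-radiologist diagnostic accuracy
# with vs. without AI assistance. Each case is scored 1 (correct)
# or 0 (incorrect) under both conditions.

# reader_id -> list of (correct_without_ai, correct_with_ai) per case
readings = {
    "reader_A": [(1, 1), (0, 1), (1, 1), (0, 1)],  # improves with AI
    "reader_B": [(1, 0), (1, 1), (1, 0), (0, 0)],  # deteriorates with AI
}

def accuracy_delta(cases):
    """Return (accuracy without AI, accuracy with AI, delta)."""
    without_ai = sum(c[0] for c in cases) / len(cases)
    with_ai = sum(c[1] for c in cases) / len(cases)
    return without_ai, with_ai, with_ai - without_ai

for reader, cases in readings.items():
    base, assisted, delta = accuracy_delta(cases)
    print(f"{reader}: baseline={base:.2f}, with AI={assisted:.2f}, delta={delta:+.2f}")
```

Averaging the two deltas above would hide the fact that one reader improved while the other got worse, which is exactly the heterogeneity the study highlights.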

Results showed that the effects of AI assistance varied inconsistently among radiologists, with some showing improved performance due to AI, while others experienced "deterioration."

Assistant Professor Pranav Rajpurkar of the Department of Biomedical Informatics at the Blavatnik Institute, Harvard Medical School, summarized the team's findings: "We shouldn't view physicians as a uniform group and only consider the 'average' impact of AI on their performance."

However, this finding doesn't imply that doctors and clinics should refrain from adopting AI. Instead, the results indicate a need for a better understanding of how humans and AI interact and the design of carefully calibrated methods to enhance rather than impair human performance.

AI Assistants Still Hard to Predict

Given that imaging is considered one of the clinical areas where AI can provide the most assistance, the findings of this study carry significant weight.

A notable result is that, in radiology, AI affects the performance of human doctors in ways that are hard to predict.

For instance, factors such as years of experience in radiology, specialization in chest radiology, and previous use of AI devices didn't reliably predict the impact of AI tools on their work performance, contrary to the researchers' expectations.

Another finding challenging common perceptions is that clinically poor-performing doctors didn't consistently receive help from AI. Overall, regardless of AI's presence, radiologists with lower baseline performance still performed poorly. The same held true for those with higher baseline performance—they consistently performed well regardless of AI.

One thing, however, was clear: more accurate AI improves radiologists' performance, while mediocre AI can decrease the accuracy of human clinical diagnoses.

The significance of this finding lies in the necessity of testing and validating the performance of AI tools before clinical deployment to ensure that inferior AI doesn't interfere with human clinical judgments, potentially delaying patient diagnoses.

Impacting the Future of Clinical Medicine

Clinicians differ in expertise, experience, and decision-making style, so ensuring that AI accounts for this diversity is crucial to delivering targeted care. Individual variation should be a driver of AI progress, not a confounder that disrupts and ultimately undermines diagnoses.

Notably, the study does not explain why AI affects different clinicians' performance differently. But as AI's influence on clinical medicine deepens, understanding those reasons becomes crucial, and AI experts are still working on it.

The research team suggests that the next step should be testing the interaction between radiologists and AI in experimental environments that simulate real-world conditions, with results that reflect actual patient populations. Beyond improving the accuracy of AI tools themselves, it is also crucial to train radiologists to promptly detect inaccurate AI output and to review and question the tools' diagnoses.

In other words, before AI assists you, you need to improve yourself.