Diagnostic Errors, Second Opinions, and Efficacy in Medicine and Radiology

Last week, I had my regular medical check-up. After running some tests, the doctor concluded that everything seemed fine, but that she would nevertheless send all the samples and data to a specialized clinic, just to be sure. I must admit I wasn’t exactly thrilled to pay two medical bills, but I had to change my mind after reading an article published in the Journal of Evaluation in Clinical Practice.

The new study from Mayo Clinic shows how common diagnostic errors are in primary care practices, and it reaffirms the importance of second opinions in medicine. The researchers found that only 12 out of 100 initial diagnoses from primary care physicians coincided with the second opinions from specialty consultations, while another 21 were in strong disagreement. Among the patient cases analyzed, for example, acute myelogenous leukemia and malignant lymphoma had initially been dismissed as body aches and weight loss, respectively. A patient initially diagnosed with simple fatigue was later found to suffer from heart failure.

These are mistakes that can be fatal, and the most common way to minimize them is to ask for a second opinion. This practice is already well established in radiology and pathology, e.g. through the ACR’s RADPEER™ program. Such peer review helps to minimize misdiagnoses, whether false positives or false negatives. In a perfect world there would be no false positives and no false negatives, only true positives (and true negatives); in terms of percentages, that means 100% true positives and 0% false positives. However, this is not possible in the real world: errors can be reduced, but they can’t be fully eliminated. What we can do is estimate the quality of clinical performance. To do that, we plot the rate of true positives (the sensitivity) against the rate of false positives (1-specificity). The result is called a “receiver operating characteristic” (ROC) curve, which looks like this:

[Figure: example ROC curves, series 1–4]

In series 1, the physician is not able to diagnose cases better than pure chance, i.e. flipping a coin to decide whether the patient has the condition or not. Series 2 shows a medium performance, series 3 a better one, and series 4 an even better one. However, if we want to detect all (100%) true positives, we have to accept 25% false positives. In an ideal world, the curve rises immediately from 0 (0% false positives) to 1 (100% true positives). How can we measure how “good” a ROC curve is, i.e. quantify its diagnostic performance? We can measure the “area under the (ROC) curve”, the AUC. The closer the AUC value is to 1, the better the diagnostic performance.
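To make this concrete, here is a minimal sketch in Python of how a ROC curve and its AUC can be computed. The reader scores and ground-truth labels below are invented purely for illustration, and the `roc_curve` and `auc` helpers from scikit-learn are just one common way to do the computation, not necessarily what the cited studies used.

```python
# Minimal illustration with made-up data: "scores" could be a reader's
# confidence (0 to 1) that a finding is present, "labels" the ground truth.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)

# Hypothetical example: 100 negative and 100 positive cases, where the
# positive cases receive somewhat higher confidence scores on average.
labels = np.concatenate([np.zeros(100), np.ones(100)])
scores = np.concatenate([rng.normal(0.40, 0.15, 100),   # negatives
                         rng.normal(0.65, 0.15, 100)])  # positives

# Sweeping the decision threshold yields one point per threshold:
# (false-positive rate, true-positive rate) -- the ROC curve.
fpr, tpr, thresholds = roc_curve(labels, scores)

# The area under that curve summarizes performance in a single number:
# 0.5 is coin flipping (series 1 above), 1.0 is perfect separation.
print(f"AUC = {auc(fpr, tpr):.2f}")
```

Plotting `fpr` against `tpr` (for example with matplotlib) reproduces curves like the ones in the figure above; the better the separation between the two score distributions, the closer the curve hugs the top-left corner.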

For example, the AUC values for the detection of medial meniscus tears, posterior cruciate ligament injuries, and chondral injuries in knee MRI are 0.75, 0.98, and 0.57, respectively, as calculated in a study from 2008. In this case, radiologists perform poorly for chondral injuries, while they are close to perfection in the diagnosis of posterior cruciate ligament injuries.
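Published reader studies often report only a few operating points (pairs of sensitivity and specificity) rather than a full curve. Under that assumption, a rough AUC can still be estimated with the trapezoidal rule; the operating points below are made up solely to show the arithmetic and are not taken from the 2008 study.

```python
# Hypothetical operating points, each given as
# (false-positive rate = 1 - specificity, sensitivity).
points = [(0.00, 0.00), (0.05, 0.55), (0.15, 0.80), (0.30, 0.92), (1.00, 1.00)]

# Trapezoidal rule: add up the area of the trapezoid between each pair
# of consecutive points along the false-positive-rate axis.
auc_estimate = 0.0
for (x0, y0), (x1, y1) in zip(points, points[1:]):
    auc_estimate += (x1 - x0) * (y0 + y1) / 2.0

print(f"Estimated AUC = {auc_estimate:.2f}")  # about 0.88 for these made-up points
```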

What about my clinical results? This time, my case fell into the 12% of confirmed diagnoses. However, some years ago, in another Mayo clinic, thousands of miles away from Minnesota, I was misdiagnosed. If you want to know more, stay tuned!
