Friday, November 24, 2006

Expertise, Again

Over at Knowledge and Experience there's a post about fetal monitoring, a practice that is nearly universal even though its value has never been demonstrated. This is in the news because The New England Journal of Medicine just published a study showing that fetal oxygen monitoring makes no difference to the health of the baby. That study will probably be the end of fetal oxygen monitoring, one of the few times a technology actually hits the dust bin.

But as the post points out, fetal heart monitoring is nearly omnipresent even though its value is equally dubious. In the case of fetal heart monitoring, its use became so widespread so quickly that everyone was doing it before its value could be studied. And now it is too late to turn back.

The post links to an editorial noting the limits of fetal heart monitoring. The limits are well known: it's no better than listening to the fetus with a stethoscope. So why is it still done?

The editorial gives a few reasons. One is that physicians simply aren't trained to use stethoscopes to listen to babies, so they have to rely on the fetal monitor instead. Another is fear of lawsuits, though as the editorial points out, doctors who rejected fetal heart monitors in favor of more old-fashioned methods could still argue that they were meeting current recommendations.

A final reason, the one that interested me the most, is that some physicians claim that the fetal heart monitors do provide valuable information - it just takes an expert to properly interpret the results.

This interests me because of the book I recently finished teaching: Bishop and Trout's Epistemology and the Psychology of Human Judgment. Bishop and Trout argue in favor of algorithms that can do a better job than experts in diagnosing disease, determining which parolees are likely to commit more crimes, and so on. It turns out that experts aren't nearly as good as they think they are. Worse, when this is pointed out to them and they are supplied with an algorithm that will improve their results, the experts still find reasons for deviating from what the algorithm recommends.

On a psychological level this makes perfect sense. We hate to think that a simple algorithm can do better than we can, and we also believe we can recognize the special circumstances where the algorithm will fail. Unfortunately, both beliefs are false. The algorithm does better than we do, and, on the whole, when we deviate from it we do worse than if we hadn't.
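That last point, that selective overrides of a decent rule tend to hurt on net, can be illustrated with a toy simulation. This is my own hypothetical sketch, not an example from Bishop and Trout: a fixed, even-weighted linear rule makes predictions on synthetic cases, while a simulated "expert" starts from the rule's answers but overrides a fraction of them for reasons unrelated to the outcome.

```python
import random

# Hypothetical illustration with synthetic data. The true outcome depends on
# two observable scores (a, b) plus noise; nobody in the simulation knows the
# true weights.
random.seed(0)

def true_risk(a, b):
    # The (hidden) process generating outcomes.
    return 0.6 * a + 0.4 * b

cases = [(random.random(), random.random()) for _ in range(10_000)]
outcomes = [1 if true_risk(a, b) + random.gauss(0, 0.15) > 0.5 else 0
            for a, b in cases]

# The "algorithm": a crude, even-weighted linear rule. Note its weights are
# wrong (0.5/0.5 instead of the true 0.6/0.4) -- it is merely consistent.
algo_preds = [1 if 0.5 * a + 0.5 * b > 0.5 else 0 for a, b in cases]

# The "expert": starts from the algorithm's answer but overrides it on 30%
# of cases, guided by a hunch that is uncorrelated with the outcome.
expert_preds = [1 - p if random.random() < 0.3 else p for p in algo_preds]

def accuracy(preds):
    return sum(p == o for p, o in zip(preds, outcomes)) / len(outcomes)

print(f"algorithm accuracy: {accuracy(algo_preds):.3f}")
print(f"expert accuracy:    {accuracy(expert_preds):.3f}")
```

Because the overrides carry no information about the outcome, each one converts a (mostly correct) prediction into a coin flip, so the expert's overall accuracy lands below the rule's. The simulation is deliberately unfair to the expert in one respect, of course: real overrides are hunches, not literal coin flips. But the empirical literature Bishop and Trout survey suggests the coin flip is closer to the truth than experts would like.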

The case of fetal heart rate monitoring seems to be a perfect example. Some doctors continue using the monitors, knowing full well that studies have shown their limitations, on the grounds that they have special expertise in reading the results. As Bishop and Trout argue, we should be especially skeptical of such claims.


Blogger M. A. Gaddis said...

"the experts still find reasons for deviating from what the algorithm recommends"... years ago, an algorithm was ignored to build a "town" instead of a prison. One that encouraged education instead of punishment. Someone eventually listened, check this blog:

9:42 PM  
