
Wednesday, June 15, 2022

Diagnostic time out

What is a diagnostic time out? Succinctly defined, it’s a deliberate exercise in differential diagnosis and systematic clinical reasoning in the care of an individual patient. But wait, I hear someone say… isn’t that what we do already? Well, no. We’re all familiar with the traditional model of clinical reasoning taught in medical school, but those of us in the real world of practice, if we’re honest, realize that it seldom happens. There’s just not enough time when you’re forced to see too many patients each day. And hospitalist incentives, with their emphasis on speed and quick adoption of specific diagnostic labels, run in opposition. What do we as hospitalists do instead? Aside from all the care pathways and metric incentives that tell us what to do, we rely on clinical instincts and rules of thumb. Because they bypass formal analysis, they save time; they serve as cognitive shortcuts. We call these heuristics. This mode of thinking (fast, instinctive, intuitive) is sometimes known as system 1 thinking. It is efficient and, in critical situations, sometimes life saving. But it comes at the cost of a certain error rate. To better understand the process of system 1 thinking we have given the various heuristics names and categories. I recently listed some of those in this post.


If system 1 is our usual mode of processing, adopted to get around time constraints, the alternative is system 2: formal clinical reasoning. System 2 thinking was the topic of a recent paper in Critical Care Clinics. Although based on a survey of people working in a NICU, the article has general applicability. The authors contrast system 1 and system 2 thinking in this manner:


Dual process theory holds that individuals engaging in medical decision-making use one of 2 distinct cognitive processes: a system 1 process based on heuristics – the use of rapid pattern recognition and rules of thumb – or a system 2 process, based on deliberate analytical modeling and hypothesis generation. While invoking system 1 processes individuals can think fast and reflexively and can even operate at a subconscious level, using pattern recognition to sort vast amounts of clinical information quickly before arriving at an illness script that allows for the rapid elaboration of a differential diagnosis. In contrast system 2 processes require focused attention and are purposefully analytical, relying on deliberate counter-factual reasoning to generate hypotheses regarding the pathophysiologic mechanisms by which a patient’s symptoms are produced.


The authors introduced the concept of the diagnostic time out to describe this shift in thinking, which requires deliberate effort; it’s not going to arise spontaneously in the natural course of the ward routine. (The authors were not the first to use the term.) The diagnostic time out can be considered the cognitive equivalent of the better known procedural time out.


Why is a diagnostic time out needed? Research on diagnostic error has indicated that while some instances are due to system problems (such as failure to communicate test results) most are cognitive errors. These can be linked to the heuristics of system 1 thinking. The diagnostic time out, or the deliberate exercise of system 2 thinking, is a way to complement these cognitive shortcuts with a more analytical process.


Some opinion leaders in the field of diagnostic error have suggested universal adoption of system 2 thinking. This is problematic due to time constraints. Besides, there are some essential benefits of system 1 thinking, particularly in acute life-threatening situations. The real trick is how best to selectively employ system 2 thinking. In other words what are the situations in which system 2 thinking should be used? The authors suggest handoff situations in complex patients including ER to hospitalist, off service/on service and ICU to ward transfers.


How does it work? The authors propose a template but it’s really just the traditional clinical reasoning process. One of their points really got my attention: during the time out diagnostic labels should be removed and replaced by signs, symptoms, manifestations and clinical concerns. This of course is the opposite of what your coders and hospitalist leaders want you to do.


What are some of the barriers to implementation? In addition to time constraints, fear of ambiguity is an important factor. We are afraid to admit what we don’t know. One thing you will never hear a hospitalist say out loud is “I’ll have to think about that.”


Saturday, June 11, 2022

A little more on metacognition

This article from Academic Emergency Medicine, published in 2002, remains applicable today. It makes the point that heuristics in medicine are valuable even though they can lead to error. The article also makes the statement:


The increasing use of clinical decision rules, as well as other aids that reduce uncertainty and cognitive load, e.g., computerized clinical decision support, will improve certain aspects of clinical decision making, but much flesh-and-blood clinical decision making will remain and there will always be a place for intuition and clinical acumen.


It presents an exhaustive list with detailed descriptions of the various cognitive shortcuts.


Indulge me in a little metacognition

I found an interesting post about cognitive shortcuts in medicine. I have a minor objection to the title of the post, which is Cognitive Errors. Cognitive shortcuts, known as heuristics, are examples of fast, instinctive thinking (system 1) and often lead to error. In some cases, however, they are useful because they are efficient and time saving. There is an upside as well as a downside to system 1 thinking in medicine.


Let’s go down the list. I’ve skipped some of them.


The first example given is affective error. This refers to an emotional response overriding objectivity.


Next is aggregate bias. I struggle with this one. The author says that aggregate bias is the belief that data in the aggregate don’t apply to the patient in front of you. My understanding (maybe I’m wrong) is that aggregate bias, otherwise known as the ecological fallacy, is the opposite: it refers to inappropriate application of population data to an individual, and it has more to do with treatment decisions than diagnostic error. Remember, one of the first principles of evidence-based medicine is that clinical decision making starts with the unique aspects of the individual patient. After looking at a variety of references, it would appear that both definitions have been used. Most medical references define aggregate bias the way the blog author does. Those outside of medicine define it as inappropriate extrapolation.
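The ecological-fallacy reading of aggregate bias can be shown with a toy numeric sketch (the numbers below are invented for illustration, not drawn from any study): a relationship that holds between group averages can run in the opposite direction for every individual within those groups.

```python
# Hypothetical data: two clinics, each patient recorded as an
# (exposure, outcome) pair. Within each clinic the individual
# trend is negative: higher exposure, lower outcome.
clinic_a = [(1, 5), (2, 4), (3, 3)]
clinic_b = [(6, 10), (7, 9), (8, 8)]

def mean(xs):
    return sum(xs) / len(xs)

# Aggregate (clinic-level) view: one point per clinic.
agg = [(mean([x for x, _ in c]), mean([y for _, y in c]))
       for c in (clinic_a, clinic_b)]

print(agg)  # [(2.0, 4.0), (7.0, 9.0)]
# Across clinics the association looks positive (higher average
# exposure goes with higher average outcome), even though for
# every individual patient the trend runs the other way.
```

Applying the clinic-level association to the patient in front of you would get the direction of effect exactly backwards; that inappropriate extrapolation is the ecological fallacy.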


The ambiguity effect is really a bias against ambiguity. So we tend to stick with things we are more familiar with. That may cause us to ignore other possibilities and take too narrow a view of things. As originally conceived it had to do with probability. That is, people have a tendency to gravitate toward choices in which the probability is known or explicitly stated. Of note, the ambiguity effect was first described by Daniel Ellsberg.


The anchoring heuristic is one of the better known cognitive biases. This refers to the tendency to stick with one’s initial hunch despite new evidence to the contrary. You may be so proud of your initial hunch that you ignore new information. Confirmation bias and diagnostic momentum are related concepts.


Ascertainment bias, as the author points out, is an umbrella category. It encompasses a lot of stereotypes and biases. In essence it’s just, well, bias. It’s not very useful as a unique category in discussions of cognitive error.


Availability bias is one of the better known cognitive shortcuts. This refers to the influence of prior experience. This causes bias toward the first thing that comes to your mind. For example, if you’ve been burned by having missed a case of aortic dissection you may tend to be over concerned about aortic dissection in every future case of chest pain. The flip side is you may fail to consider things you haven’t seen in a long time.


Base rate neglect is a cognitive shortcut that may be considered harmful and wasteful in ambulatory medicine but may be your friend in the arena of hospital and emergency medicine. It’s a failure to consider the true prevalence of diseases in clinical reasoning, ignoring the old aphorism that common things happen most often. In the high acuity world of the hospital, where you really need to be risk-averse, base rate neglect may be beneficial. Put another way, you and your patient may be better off if you consider the worst case scenario.
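Why the base rate matters so much can be made concrete with Bayes’ theorem. The sketch below uses invented numbers (a hypothetical finding with 90% sensitivity and 90% specificity); the point is that the same finding carries very different weight in a low-prevalence clinic than in a high-acuity ED population.

```python
def posterior(prevalence, sensitivity, specificity):
    """Bayes' theorem: P(disease | positive finding)."""
    p_positive = (sensitivity * prevalence
                  + (1 - specificity) * (1 - prevalence))
    return sensitivity * prevalence / p_positive

# Same hypothetical finding, two settings:
# low-prevalence outpatient clinic (1 in 1000)...
print(round(posterior(0.001, 0.9, 0.9), 3))  # 0.009
# ...versus a sicker ED population (1 in 10).
print(round(posterior(0.10, 0.9, 0.9), 3))   # 0.5
```

Neglecting the base rate means treating both positives as equally convincing; whether that neglect hurts you (outpatient overtesting) or helps you (worst-case vigilance in the hospital) depends on the setting, which is the post’s point.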


Then there’s belief bias. I’m not sure this belongs in a discussion of diagnostic shortcuts as it has more to do with treatment recommendations. I cringe when I hear somebody say they “believe” in a particular treatment, implying that belief surpasses reasoning from evidence.


Blind spot bias is similar to the Dunning-Kruger effect, in which we think we're smarter than we really are. Humility is the remedy here. Does this lead to a form of cognitive shortcut? Maybe, in that we fail to pause and consider carefully that we might be wrong.


Confirmation bias is akin to anchoring. This is the tendency to be selective in what type of accumulating evidence you consider. That is, you consider mainly evidence that supports your original hunch.


The framing heuristic is another well known shortcut. We are biased toward diagnostic possibilities in accordance with the way the initial presentation is framed. Though it can be useful, it restricts our differential diagnosis in a way that excludes a wide range of possibilities. Not every returning traveler with fever has a parasite, for example.


The gambler’s fallacy, according to the blog author, is “the erroneous belief that chance is self-correcting.” This is a cognitive error that tends in the opposite direction from the availability heuristic.
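A quick simulation makes the fallacy concrete (a sketch with arbitrary choices of streak length and trial count, not anything from the post): even immediately after a long run of tails, a fair coin is no likelier to come up heads.

```python
import random

def heads_after_tail_streak(trials=200_000, streak=5, seed=42):
    """Empirical P(heads) on the flip immediately after `streak` tails."""
    rng = random.Random(seed)
    tail_run = 0
    flips_after_streak = []
    for _ in range(trials):
        heads = rng.random() < 0.5
        if tail_run >= streak:
            flips_after_streak.append(heads)
        tail_run = 0 if heads else tail_run + 1
    return sum(flips_after_streak) / len(flips_after_streak)

print(heads_after_tail_streak())  # close to 0.5, not "due" for heads
```

Chance does not self-correct; each flip is independent, which is exactly what the gambler's fallacy denies.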


The order effect is something I was vaguely aware of but had not considered as a cognitive error category. It refers to the tendency to focus on information that is proximate in time and to do so at the expense of the totality of events over time. This typically occurs at the point of hand off in a patient who has had a very long hospital course.


Premature closure is just what it says. It’s a tendency for thinking to stop once a tentative diagnosis has been made. It overlaps with other categories such as anchoring. There is probably a subtle difference between premature closure and anchoring. Anchoring implies an emotional attachment to a diagnosis whereas premature closure implies diagnostic laziness.


Representativeness restraint is better known as the representativeness heuristic. It is a cognitive shortcut characterized by focusing too much on the prototypical manifestations of a disease, which may cause the clinician to miss atypical presentations.


Search satisfaction is another example of laziness in clinical reasoning. It’s a tendency to stop searching once an answer has been found. The author gives the example of missing a second fracture on an x-ray once the first one is identified.


Sunk cost fallacy is a type of emotional heuristic as well as diagnostic laziness. It is the tendency to ignore new information and fail to consider alternative diagnoses once the original diagnosis has been arrived at after a great deal of time, effort, and expense (the sunk cost).


Sutton’s slip might be the dark side of Sutton’s law (going where the money is). Pursuing the obvious might lead to error because of other possibilities being ignored.


Zebra retreat is the avoidance of rare diagnoses to a fault. It’s an opposite of base rate neglect.



Monday, December 17, 2018

The importance of serial physical exams in sepsis



Purpose of review: Monitoring of mental status and peripheral circulatory changes can be accomplished noninvasively in patients in the ICU. Emphasis on physical examination in conditions such as sepsis has gained increased attention as these evaluations can often serve as a surrogate marker for short-term treatment efficacy of therapeutic interventions. Sepsis associated encephalopathy and mental status changes correlate with worse prognosis in patients. Evaluation of peripheral circulation has been shown to be a convenient, easily accessible, and accurate marker for prognosis in patients with septic shock. The purpose of this article is to emphasize the main findings according to recent literature into the monitoring of physical examination changes in patients with sepsis.

Recent findings: Several recent studies have expanded our knowledge about the pathophysiology of mental status changes and the clinical assessment of peripheral circulation in patients with sepsis. Sepsis-associated encephalopathy is associated with an increased rate of morbidity and mortality in an intensive care setting. Increased capillary refill time (CRT) and persistent skin mottling are strongly predictive of mortality, whereas temperature gradients can reveal vasoconstriction and more severe organ dysfunction.

Summary: Monitoring of physical examination changes is a significant and critical intervention in patients with sepsis. Utilizing repeated neurologic evaluations, and assessing CRT, mottling score, and skin temperature gradients should be emphasized as important noninvasive diagnostic tools. The significance of these methods can be incorporated during the utilization of therapeutic strategies in resuscitation protocols in patients with sepsis.

Thursday, March 15, 2018

The master clinician’s approach


Wednesday, November 08, 2017

How to take a high yield history


Tuesday, October 03, 2017

First glance heuristics


Friday, February 24, 2017

Point of care echo and diastolic dysfunction


In this study emergency physicians were competent in the identification of diastolic dysfunction after limited training but not in the classification of DD.

Thursday, February 23, 2017

Point of care chest ultrasound


Here is a review of its usefulness as a clinical tool, excluding cardiac applications.

Saturday, August 27, 2016

Unexplained symptoms


Sometimes it is better for the patient and more intellectually honest to not force a diagnostic label and just acknowledge unexplained symptoms.

Tuesday, May 03, 2016

Will the computer someday replace the physician as diagnostician?


With the growing enthusiasm over Watson and other forms of high technology decision support has come the nutty idea that computers may eventually surpass clinicians in the diagnostic process. Taking that idea to its full extent, in such a world the role of doctors would be restricted. The need for clinicians would be gone though we would still need providers to navigate the EMR and coordinate care (essentially secretarial duties), do procedures and maintain a “human touch” in healthcare through education, counselling and other types of social interaction. Could this ever come to pass?



It has already been the subject of an experiment, the conditions of which gave the idea the best possible chance to work in two ways. First, the experiment was conducted in what is arguably one of the most mechanistic and formulaic areas of diagnostic medicine. Second, it's been going on, repeated time and time again with generation after generation of software “improvement,” for decades. I am referring, of course, to computerized interpretation of electrocardiograms. Despite being given every conceivable chance it has failed. From a recent review on the topic:



The use of digital computers for ECG processing was pioneered in the early 1960s by two immigrants to the US, Hubert Pipberger, who initiated a collaborative VA project to collect an ECG-independent Frank lead data base, and Cesar Caceres at NIH who selected for his ECAN program standard 12-lead ECGs processed as single leads. Ray Bonner in the early 1970s placed his IBM 5880 program in a cart to print ECGs with interpretation, and computer-ECG programs were developed by Telemed, Marquette, HP-Philips and Mortara. The “Common Standards for quantitative Electrocardiography (CSE)” directed by Jos Willems evaluated nine ECG programs and eight cardiologists in clinically-defined categories. The total accuracy by a representative “average” cardiologist (75.5%) was 5.8% higher than that of the average program (69.7%, p less than 0.001).



Those results don't say much for the cardiologists either but that's a topic for another discussion.  In a green journal editorial in 2012 Dr. Joseph Alpert cited additional research from the 1970s:



In 1976, I was involved in one of the earliest evaluations of 5 competing computer programs that interpreted electrocardiograms (ECGs).1 At that time, computer interpretation of ECGs was just beginning to make its way into hospitals in the United States and abroad. Dr Arthur Hagan and I evaluated the accuracy of the different computer interpretations compared with our own experienced analysis of more than 100 ECGs with various well defined abnormalities.



The results were illuminating. The computer interpretations were often wrong, particularly with respect to arrhythmia identification. Furthermore, the different computer ECG readings from the 5 programs often were surprisingly different. The conclusion of this early study was that computers were not as accurate in reading ECGs when compared with experienced cardiologists. We suggested that all computer-read ECGs should be over-read by an experienced physician. In the end, this study showed that the overall accuracy score for the computer ECG programs was approximately 80%, and as already noted, the computer was particularly poor on arrhythmia interpretation.



Of note, Alpert cites no improvement in over 30 years. Again from the editorial:



This is still the situation today with all ECGs with computer diagnoses over-read by an experienced physician, usually a cardiologist. Of note, when I am the over-reading cardiologist in our hospital, I still find that the computer reading of the ECG is incorrect approximately 20% of the time.


Because we often rely on the ECG to supply the critical data to guide decision making in very ill patients, this is unacceptable. And it hasn't improved in decades. These numbers were derived using artificial conventions. The results would certainly be even worse against more nuanced standards based on subtle ECG patterns.



Alpert suggests the reason for such poor results:



What is the reason that the most sophisticated computer ECG interpreting software makes so many mistakes? I think the answer lies in the remarkable and extensive capacity of the human brain to recognize visual patterns. This capacity is the reason that a person with minimal prior instruction can recognize a van Gogh painting without looking at the accompanying label. The distinctive style of van Gogh is easily recognized by the highly complex visual pattern recognition system of our central nervous system... Today, we apply this ability in a variety of areas, including athletic endeavors, police investigations, aesthetics, and many other venues, including the interpretation of ECGs.



Based on this explanation and the lack of progress over time it would appear unlikely that the computer will supplant the clinician in ECG interpretation let alone in other areas of diagnostic evaluation that are far more complex and less mechanistic.




Tuesday, March 24, 2015

From experienced clinician to master clinician

Dr. Gurpreet Dhaliwal, known by his colleagues as Goop, is regarded as one of the master clinicians in the department of Internal Medicine at UCSF. If you've attended very many SHM conferences you've probably been bedazzled watching him discuss a mystery case in CPC fashion.

How do you get to be a master clinician? Are some people just born that way? Goop has pondered this question and decided it's a matter of attitude and motivation as much as anything else. It's the subject of a talk he gave, which I was fortunate enough to attend, at the Society of Hospital Medicine national meeting last spring. That same talk, given as a guest medical grand rounds speaker at the University of Washington, is available for viewing here.

Goop tries to be evidence based in his talk but encounters a problem: there has been next to no research on this question in clinical medicine. In attempting to work around the problem Goop has to look to non medical fields, in which there is a fair body of research on what makes an expert. But such research tends to be unconvincing, as comparison of the art and science of medicine with the mechanics of industry falls short time after time. Fortunately though Goop sprinkles in plenty of personal insights he has gained on his journey to becoming a master clinician. I'll unpack a few things here that rang true to me although I recommend everyone watch the video in its entirety at the link above.

It's a lot about attitude.
Complacency is the enemy. The slide appearing about six minutes into the talk reflects the typical career learning curve. Early on the curve is steep. Everything is new and it's a struggle. After a while, though, things get easier. As experience accumulates we become comfortable and the curve flattens. This, according to Goop, is a zone of complacency where professional stagnation and eventual decline may ensue. The key to staying out of this rut is to keep the curve steep but it takes deliberate effort. If you're comfortable in a particular content area make it harder by inventing new challenges and go after them. Curiosity and humility, the realization of how little you know, are important drivers.

Practice must be deliberate.
Passive practice, the kind we get from seeing a lot of patients, is an inefficient learning method. Deliberate practice might mean, for example, making it a point to carefully review as many electrocardiograms (or rashes or images, etc) as possible during a given month along with related material in textbooks or review articles.

Make the most of case reports.
Though relegated to “low impact” status in medical journals, case reports can be powerful learning tools when read with deliberate learning objectives (not just casually). Case records and clinical problem solving exercises in the New England Journal of Medicine are but two examples.

Is this the next version of MOC? It's a lot of work but there is a key difference. Unlike MOC this is self motivated and self directed. And it's a much more robust form of learning than that which is imposed by some outsider who knows nothing of your educational needs.


Tuesday, July 15, 2014

Wednesday, April 09, 2014

Orthostatic vital signs: evidence based or not?

An evidence rundown is presented in the video below. The test characteristics are poor.


 

HT to LITFL

Thursday, March 13, 2014

Overdiagnosis of pneumonia

The overdiagnosis of pneumonia, which is becoming increasingly recognized, was the topic of a recent article in the Cleveland Clinic Journal of Medicine. As is apparent from the review, the reasons for this trend in overdiagnosis are multiple and complex. One factor, I believe, is time pressure. This pressure comes in many forms including throughput initiatives to reduce ER crowding, time based performance measures, pressure to identify a “principal problem” on admission and incentives to place hospitalized patients on care pathways. Vague problem statements like “pulmonary opacity” and “breathing difficulty”, while often more honestly and accurately reflective of problem resolution at a given time, are frowned upon in today's regulatory and performance environment. “Systems improvements” thus lead to diagnostic inaccuracy.

Another problem pointed out in the article is the disconnect between application of a simple diagnostic label and discrimination between patients who will and will not benefit from antibiotics. As the article points out:

The central problem with pneumonia, as with many long-recognized clinical conditions, is that the diagnosis is separated from the treatment. In other words, although physicians are confident that antibiotics benefit patients who have what Sir William Osler would have called pneumonia (elevated white blood cell count, fever, cough, dyspnea, pleurisy, egophony, lobar infiltrate), we don’t know whether the treatment benefits patients whose pneumonia would have been unrecognizable decades ago (with cough, low-grade fever, and infiltrate on CT alone). Improvements in imaging may exacerbate the problem. In this sense, pneumonia exists on a spectrum, as do many medical diagnoses. Not all cases are equally severe, and some may not deserve to be labeled as pneumonia.

It goes on to say that there is equipoise for the performance of clinical trials to determine whether antibiotics can be withheld in dubious cases.

Friday, February 07, 2014

On the proper use of the stethoscope

This essay on auscultation opens with:

Years ago I heard a story about a sage practitioner who was making teaching rounds. The house staff watched as he listened to the patient's heart with his stethoscope for perhaps 5 minutes. He then stood up to stretch his back and a resident asked him what he thought about the heart murmur. The practitioner responded, “What murmur? I'm still listening to the first heart sound.”

From back in the day when the stethoscope was a clinical tool instead of a coding tool.