CME-MOC

Time Stamps

  • 02:45 Case Introduction 
  • 05:10 Yes/No questions with our first two discussants
  • 18:46 The diagnosis is revealed 
  • 21:14 Why did I miss the diagnosis?
  • 26:18 Using diagnostic problem solving to break down the case and understand our errors  
  • 30:17 The importance of data gathering  
  • 30:51 Introducing our third clinician  
  • 34:17 Hypothesis generation
  • 37:16 The interplay between hypothesis generation and evaluation
  • 38:44 The win-stay, lose-shift heuristic 
  • 41:34 Cognitive forcing strategies 
  • 49:55 Knowledge vs. experience

Show Notes

  1. Our tendency as clinicians to judge decision-making based on outcomes rather than on soundness of principle poses a serious obstacle when we attempt to “learn from our mistakes” (and conversely when we celebrate our successes).
  2. Several authors have studied cases of diagnostic error and created inventories of deeper causes or contributing factors to these errors. 
    • The diagnostic process can be idealized as a sequence of steps: data gathering, progressing through the interpretation and synthesis of findings, the generation and evaluation of hypotheses, the result of which is one (or a few) working diagnoses that can be further evaluated or treated.
    • In reality, these processes are probably interdependent, and depending on the case and context, the diagnostic process is likely iterative and recursive, rather than strictly sequential in execution.
    • Nevertheless, this schema can serve as a framework for clinicians reflecting on cases of diagnostic error in which a cognitive failure is suspected.
  3. Exhorting clinicians to “be more thorough”, in a general sense, is unlikely to be an effective remedy to diagnostic error. Thoroughness for thoroughness’ sake and the gathering of excessive amounts of data are markers of sub-expert rather than expert problem-solving, in that experts know what to look for and where to find it. 
  4. The win-stay, lose-shift heuristic describes the tendency of human beings to stick with a hypothesis as long as it is consistent with the available data. 
    • Heuristics such as these are responsible for a great deal of our success as diagnosticians; they are not a deficiency. 
    • However, heuristics can 1) be augmented with the use of cognitive forcing strategies, and/or 2) have their effectiveness optimized by the acquisition of a more sophisticated and “compiled” base of knowledge.
  5. Primary varicella in an immunocompetent adult is unusual but not rare, and tends to run a more severe course than in pediatric cases, with a greater incidence of varicella pneumonia and varicella encephalitis.
  6. Acute generalized exanthematous pustulosis (AGEP) is an uncommon adverse drug reaction characterized by a diffuse pustular eruption concentrated on the face and intertriginous areas, associated with fever and leukocytosis, typically occurring within a few days of exposure to one of several culprit antimicrobial classes.

Transcript

JOHN: We had our latest morbidity and mortality conference a couple of weeks ago, and since then I’ve been thinking. We’re supposed to learn from our mistakes, that’s the assumption these conferences are built on. But how? How do we go about doing that, actually? What do I do to uncover the truth of what went wrong with my thinking, especially when I barely understand how my own brain works, and when I suspect I can’t trust it to investigate itself objectively, because of the fragility of memory — and of my pride.

I think back to my training and I remember blocks on kidney disease, conferences about epidemiology, getting my cases precepted in clinic. But when it comes to how we as doctors should be analyzing our diagnostic failures, I guess the unspoken assumption I worked with was that I’d just figure it out along the way. But at this point in my career, I’m not so sure anymore.

A couple of years ago I had a case that’s stuck with me, a case in which I missed the diagnosis. Fortunately, no harm befell the patient — he was treated appropriately, he recovered, and his diagnosis wasn’t delayed, because another clinician immediately recognized what he had. But ever since, that case has bothered me. Because to this day, I don’t think I understand why I missed that diagnosis.

Now that we’ve been doing Hoofbeats, I’ve been thinking: what might I learn if I saw another clinician, perhaps one with greater experience or a different perspective, go through that case? And what can we learn about this process in general, about how we “ought” to learn from our mistakes? Well, on this week’s episode, we’re going to do just that. I’ll present this case to you to solve. We’ll use a 20 Questions format: I’ll start by giving you just some basic information, then ask you to think about what questions you’d want to ask next, and in what order. We’ll follow along with no fewer than three different discussants whom we challenged the same way — each of whom successfully diagnosed this case (to both my wonderment and chagrin). And hopefully by the end, I’ll have the answer to my question: How did they succeed? And why did I fail? My null hypothesis: That I’m just an idiot.

With Core IM, I’m John Hwang.

CINDY: And I’m Cindy Fang.

JOHN: We are general internists, faculty at the NYU School of Medicine, and this is Hoofbeats. Stay with us for the case.

“A 39 year-old man carrying a diagnosis of schizophrenia currently hospitalized on the inpatient psychiatric unit is transferred to the medical service after developing a fever to 102. On interview, he is reticent and minimally verbal, which is consistent with the behavior the unit staff have been observing since his arrival. He acknowledges that he feels feverish and that his appetite is poor, but he denies all other symptoms, including headache, rhinorrhea, sore throat, cough, nausea/vomiting, abdominal pain, diarrhea, or dysuria. He volunteers no medical history, but doesn’t really get any routine medical care. The repeat temperature is 103.6, and he is mildly tachycardic to 108, but his blood pressure, respirations, and oxygen saturation are normal. He is quiet and withdrawn, but alert and does not appear toxic in any sense.”

To diagnose this patient, what would you focus on next? What additional information would you like to have, whether it’s more history, exam, or tests? Take a moment to think about it, and we’ll hear from our first two discussants after the break.

STERN: The one question that for me would unfold so much more of the case is: What meds is he on? Because yes, there’s this whole thing about, yes, it’s a fever and so there’s, you know, various categories of infection and different kinds of infection, acute and chronic infection, and blah, blah, blah, blah, blah, blah, blah, blah, blah. But I want to know what his meds are.

JOHN: We started off by inviting back two discussants who’ve previously been on Hoofbeats. The voice you hear jumping straight into the med list is Dr. David Stern; he is joined by Dr. Patrick Cocks — both are general internists and senior faculty at NYU.

COCKS: When you mentioned a psychiatric illness and a medical condition, that being fevers, it has already provided some organizational framework with which to think about this.

JOHN: Regular Hoofbeats listeners may remember a few episodes back how Drs. Cocks and Stern handled the case we gave them with ease — a young man with metastatic gastric cancer. Well, after that, we obviously had to up the ante. So not only did we make them ask for whatever data they wanted, we made them ask only in yes-or-no questions. You know, so that they wouldn’t get any free information from our answers.

STERN: Let’s talk about our top six questions.

COCKS: You can say the gauntlet has been dropped.

STERN: And it may be that we have a branch point in there but that’s okay. So I’ve thrown out my, my first question which is really meds.

COCKS: Yeah. Immunocompromised or not, that’s what I was thinking.

STERN: Which we get from the meds.

COCKS: That’s true. But depends on how many meds he has. It may take up all six of our questions.

STERN: No, I thought I was going to ask for a med list and I was going to get a med list and it’s not going to be a yes or no. Like how can you ask that? He can’t do that. He’s got to actually give us the meds.

COCKS: So the two branch points in my mind are immunocompromised or not.

STERN: Yeah, absolutely.

COCKS: And then whether — and I have to think about how to present this in the form of yes-no — but was his presenting symptom to the psychiatry unit mania and sort of hyperadrenergic? Was it delusional/psychotic? Or was it a hypomanic plus flat affect? Those were my two sort of questions.

STERN: The only other thing I was thinking about is the tox screen sort of stuff. I mean which gets into the same thing that you’re getting at. Like if he comes in a hyperadrenergic state, then is he really psychotic, or does he actually have some toxicity that we’re looking at?

COCKS: Either taking something or withdrawing from something. SSRI. How many questions do we have?

STERN: Meds. Immunocompromised state. Drug screen. But your other question was sort of….

COCKS: Well it’s more like… because if he presents delusional/psychotic and then develops a fever, thinking more about an organic brain process like encephalitis or meningoencephalitis. If he’s hyperadrenergic I’m thinking about toxidromes. And if he is flat, Wilson’s or some, you know… The fever we’d have to deal with.

STERN: This is a minor question which I would put in the lesser category B (these are things that would be interesting, sort of depends on how things play out): Time of year came to mind. [COCKS: Yes.] Just because of like babesia or influenza or… He could be giving us a case of a schizophrenic who developed influenza in the hospital. Sorry, did I get it already? You have to go on to the next case, oh well…

JOHN: So let’s answer some of these questions for you folks.

CINDY: So their first question: they asked what medications the patient is currently taking. Those are: clozapine, haloperidol, and lithium. He’s also prescribed diphenhydramine as needed for sleep. Nothing else.

JOHN: Next, they asked whether the patient was immunocompromised in any way. Kind of a broad question, since there are many different forms of immunocompromise. 

CINDY: Maybe you should have made them itemize.

JOHN: Maybe. But you know what? You don’t say no to your former program director. Or the vice chair for faculty affairs. And regardless, the answer is the same across the board: No. Specifically, a 4th-generation HIV screening test on admission was negative, a hemoglobin A1c measurement confirmed that he was not diabetic, and he didn’t appear severely malnourished. And he wasn’t on any immunosuppressants, or for that matter, any other medications, aside from what we just said. 

CINDY: Third question, they wanted to know how the patient initially presented to the psychiatric unit. As it turns out, the patient lived in a homeless shelter, and was sent in by the staff there, who reported he was behaving psychotically: responding to internal stimuli, guarded, and neglecting his self-care. However, he was not violent or agitated, nor did he appear dysthymic.

JOHN: And in terms of the time of year, this case occurred in early July. Uh, in the northern hemisphere. So no, this wasn’t the flu. Though this show is called Hoofbeats, so we appreciate the thought. 

COCKS: I’m deviating a little off our list. Was the patient in the psych ward for more than three days?

JOHN: Yes. And I can give you more details about that. The patient was admitted… There’s no way for you to ask a yes or no as to what time of year. So I’ll give it to you. The patient was admitted on the 22nd of June and the fever developed on the 9th of July.

STERN: It’s interesting only because most psych admissions tend to be 14 days from the insurance perspective. 

JOHN: You haven’t been at Bellevue for a while…

STERN: No, I haven’t. … I’m curious as to why you wanted to know how long — just to see if it was something they brought in from the outside?

COCKS: Yeah, exactly. So sort of this idea of how he presented, if he was coming in with a toxidrome, if he was coming in with a viral encephalitis, we would anticipate there to be a lot of overlap, and it wouldn’t have taken 14 days for him to present with this reportedly high fever.

STERN: Which allows us to get rid of the tox screen with the exception of the idea that somebody’s bringing in something. 17 days is too long. It’s too long for a lot of things. I’m very fixated on the meds, brings me back to his medicines again. Are there any medications that were added or changed once he came in?

JOHN: Uh, yes and no. Let me explain. The patient was supposed to be taking two medications as an outpatient: lithium and clozapine. Those medications were resumed during the hospitalization. But it’s unclear whether he was actually taking them as an outpatient.

STERN: And when he got into the hospital, those were reinitiated?

JOHN: That’s right.

STERN: And anything else change from hospital day 2 to the ninth? In terms of his meds.

JOHN: Haloperidol was added in the hospital on hospital day 4, two milligrams twice a day.

COCKS: His med list suggests that he has an underlying psychiatric illness to begin with. And so it’s reframed the way I’m thinking about this now. The way I first heard the chief complaint and thought of the context of this conversation, I thought it would be a primary psychiatric presentation of a disease process that would unify his neuropsychiatric symptoms and this high fever. The fact that he comes to his initial presentation on these medicines suggests that this is a chronic process and I’m less inclined to tie them together.

JOHN: Okay, so just to review, so we have a presumably immunocompetent, 39 year-old man with schizophrenia in the psychiatric unit; he was started on clozapine and lithium followed by haloperidol, and then a little over two weeks into his hospitalization he develops this fever. 

CINDY: At this point, they requested the complete blood count. On the day of the first fever, the total white count is normal, 5.2. 47% neutrophils, 28% lymphs, 16% monos, 7% eosinophils. In relative terms, the monos and eos are modestly elevated, but to save you the trouble of doing the math, the absolute cell counts are all within norms. The hemoglobin and platelets are normal.

JOHN: Our discussants then asked for a hepatic panel. The transaminases were slightly elevated, AST 83, ALT 84 IU/L, a little over twice the upper limit of normal. These had been normal when last checked on the first day of his hospitalization. The alkaline phosphatase and bilirubin measurements, meanwhile, were both within norms.

CINDY: At this point, they shifted to the physical exam. They asked whether clonus, hyperreflexia or diaphoresis were present, findings that would be consistent with serotonin syndrome. They also asked whether there was any rigidity. These were all absent.

JOHN: They wanted to know more about the pattern of the fever. How high did it reach, and how many days did it last? The temperature reached a maximum of 103.6. He remained persistently febrile over the next 72 hours — every temperature measured was elevated, in spite of repeated doses of acetaminophen.

COCKS: I would just think that his degree of pyrexia is unique, right? 103.6 is unique.

STERN: And the acuity of this also drives me in a particular way. I mean, again, like you said, it’s against a pyogenic infection; it’s also against any one of the sort of more chronic low grade… I mean TB for example, like for somebody to suddenly show up with a fever of 103, and you know, I have one of these other infections that does, that is an infection but does not cause an elevated white count. And just that’s not there.

COCKS: I mean, that’s what made me think of serotonin syndrome. But the lack of clonus, it makes it less likely in my mind.

JOHN: We’re at ten questions. At this point, Dr. Cocks asked whether there was a rash. And yes, there was — a diffuse eruption of pustules and papules. By asking follow-up questions, our discussants established that these lesions were scattered across the face, chest, trunk, back, and arms, while the legs seemed relatively spared. There were a few lesions on the palms, but none on the soles, and nothing in the mouth or on the genitals. When asked, the patient denied that these lesions were particularly painful or pruritic; in fact, he barely seemed to have registered the presence of the rash at all, as he could not recall exactly when or where it had started. Notes written by the staff psychiatrists earlier that week hadn’t mentioned any rash.

CINDY: Aside from the rash, the rest of his physical exam was relatively unremarkable. He had a soft systolic murmur in the pulmonic area that did not radiate. The lungs were clear. There was no appreciable hepatosplenomegaly or tenderness. There were no joint effusions. The prostate was nontender on rectal exam.

JOHN: And it’s at this point, after asking roughly fifteen questions, having heard the story, the exam, and some very basic labs, that our discussants proposed the correct diagnosis. I have to say, I was surprised they got it so quickly. I had a long list of answers prepared to questions I thought they were going to ask.

CINDY: What did you think they were going to ask? 

JOHN: Well, just getting back to something you talked about in our last episode, about how expert clinicians often focus on who the patient is and what they’re at risk for — what Feltovich and Barrows called “enabling conditions.” So I expected they’d want to know more about who he was. 

CINDY: We could run down the list, in case any of our listeners would want some of this information.

JOHN: Sure Cindy. So more about this patient. He was born in West Africa, had immigrated to the United States in his early 20s, and had been living in New York City for the past five years. He said he had not traveled outside of NYC within the past six months. He was unemployed, and as you said Cindy, he was living in a shelter. He had used marijuana before, but otherwise denied any substance use. He had not been sexually active in the past year. There was no significant family history; he had a father and brothers in Africa who as far as he knew were healthy.

I can also share some other test results we had that you may or may not be interested in. The basic metabolic panel was normal. The urinalysis was normal, and cultures of urine and blood showed no growth through 72 hours. His chest X-ray was clear. The creatine kinase on the day of his fever was slightly elevated to 283 U/L; this rose over the next 48 hours to 2,000, before downtrending spontaneously thereafter. Clozapine and lithium levels were checked on the day of his fever; these were both within target range.  The HIV screening test, as I said, was negative; an HIV PCR was also sent — and this too was negative. The RPR and TPPA assays were both negative. Urine PCR for gonococcus was also negative. And I think that’s all the data we had.

CINDY: So John, what did you think this patient had?

JOHN: I have the admission note I wrote here, which helps keep me honest about my thought process. For my problem representation, I wrote “high, persistent fever and diffuse papulopustular rash in a young man three weeks into inpatient psychiatric hospitalization and resumption of clozapine, haloperidol and lithium.” And I thought that the patient was having some sort of pustular febrile drug reaction. Acute generalized exanthematous pustulosis (AGEP), an idiosyncratic clozapine reaction, an atypical form of DRESS, maybe even Sweet syndrome. 

CINDY: How confident were you?

JOHN: Actually I wasn’t particularly confident in any of those diagnoses. I simply haven’t seen enough examples of any of those entities to be entirely confident when diagnosing them. But I was confident in my problem representation — I thought I was thinking about the case in the right way. We weren’t in a rush; we hadn’t gotten that many admissions that morning, and I had taken enough time to create a differential that covered other categories. So for example in my note I also suggested syphilis, acute HIV (this was before the PCR came back). I even tried thinking laterally a bit: Maybe the rash had been chronic and overlooked, related to lithium, for example — which would make the fever a non-focal finding. So I felt confident I was in the ballpark… And I wasn’t. Half an hour after I saw the patient with my team and left that note, the consulting dermatologist came by and immediately recognized the correct diagnosis.

And dropped the mic, so to speak, by leaving a brief and incisive note. Which I think is timestamped like ten minutes after my bumbling note, with my differential that somehow managed to be both overbaked and undercooked.

CINDY: They were certain?

JOHN: I mean, they put in a differential… but I think they were just being polite. 

CINDY: And what was the correct diagnosis?

JOHN: [groaning] …Chickenpox. This was… really embarrassing.

CINDY: Why? I mean, you said the diagnosis was barely delayed. The patient wasn’t affected.

JOHN: Well, even though the patient did fine… this disease could’ve really hurt him. Primary varicella tends to be more severe in adults. A significant minority develop pulmonary disease. And early diagnosis permits early airborne isolation. He might have been fine, but can you imagine if he had exposed an immunocompromised patient in the neighboring bed?

CINDY: Fair, I guess. But to be honest, I don’t think I would have been able to identify varicella either. Plus, primary varicella is just so rare.

JOHN: Rare, but it’s not like it’s some exotic disease. I mean, it would have been one thing if I had considered varicella but dismissed it as too unlikely to test for or treat. That’s a problem of misjudging probability, and we’ve talked about mistakes like that on Hoofbeats before. I could comfort myself with the knowledge that humans generally suck at judging probabilities. But that wasn’t the error I committed. The problem was, I never thought of varicella in the first place. And that feels embarrassing, to be blind-sided. 

In 1998, Dr. Georges Bordage wrote an influential paper titled “Why Did I Miss the Diagnosis?” In it, he describes asking ten general internists to reflect on the reasons why they committed a diagnostic error over the past year. Their answers included things all of us can relate to: “I didn’t pay enough attention to a finding,” “I didn’t know enough about the disease,” “I let the consultant convince me.” But number one on the list, the most common error they reported, was: “The diagnosis never crossed my mind.” 

CINDY: I think I can relate to that sentiment, more often than I am willing to admit. But I think the problem is, simply recognizing that doesn’t stop it from happening again.

JOHN: Yes. It’s not like I chose to not think of varicella.

CINDY: At least you’ll never forget to think of varicella the next time you see fever and a rash, after a case like this!

JOHN: And I think that’s exactly what’s bothering me: The prospect that, thanks to my brain’s reliance on the availability heuristic, this diagnostic error might end up having two victims: This patient, who had chickenpox that I didn’t see, and my next patient, in whom I’ll see chickenpox that isn’t there. There has to be a better takeaway from this case… I’ve just struggled with what that’s supposed to be.

CINDY: So, listeners, here’s your second diagnostic challenge of the episode. If you didn’t think of primary varicella, why do you think you missed the diagnosis? And if you did get the diagnosis — good for you — now diagnose my co-host. Why didn’t he get the diagnosis? What should his takeaway lesson be? Take a moment to come up with some ideas, and we’ll compare notes after the break.

JOHN: The way in which I reflect on and talk about the mistakes I’ve made as a doctor has been heavily influenced by vocabulary I first learned in residency, a vocabulary with words like “anchoring”, and “availability bias”, and “premature closure.” This language was supposed to facilitate self-reflection, and also to enable us to communicate with each other about “what went wrong” in the diagnostic process, a common tongue for self-improvement. Yet when I reflect on this case I find myself at a loss for words. Like, that morning when my resident first presented this patient to me, my mind jumped straight to an adverse drug reaction, and stayed there — so is the mistake I committed “premature closure”? Does the fact that I spent an hour reading and creating my differential for the patient’s rash change that? Would I have thought of varicella had I just thought harder or longer — and how much? How much more time would I have to have spent before my “closure” is no longer “premature”? When I opened the chart I read the note written the night before by the medical consultant, in which he wrote he suspected a clozapine reaction — does this mean I fell prey to “diagnostic momentum”? After interviewing the patient and examining the rash myself, I was still convinced he was having a drug reaction — was that an example of “confirmation bias”? Or “overconfidence bias?”

Dr. Gurpreet Dhaliwal, the distinguished diagnostician from UCSF, writes: “In sports, the exact same decision-making process can conjure different adjectives depending on the outcome. If the coach or athlete undertakes a high-risk play and the team wins, he or she is hailed as confident, strategic, experienced or gutsy. If the play results in a loss, the same decision is called short-sighted, foolish, overconfident or reckless […] When a physician makes a challenging diagnosis with just a few pieces of information, she is called a brilliant diagnostician. If her diagnosis is wrong, it is called premature closure. It is human nature to judge the quality of the decision-making process by the result rather than the logic—we cannot help it.”

CINDY: Concepts like this are useful — in learning them, I now understand that human reasoning malfunctions in predictable ways. But there is ambiguity in these terms, and that makes them prone to misuse.

JOHN: Exactly. A laxness in their use that can actually distort or obscure the deeper meaning of our mistakes, rather than illuminating them.

CINDY: Bordage writes in his review that not only do internists make mistakes, but they commonly misunderstand why they do so. And I think part of the problem is that these terms do not, by themselves, identify the root cause of our failures. Let’s say I committed premature closure – why did I prematurely close? Was it because I had insufficient data? Then why did I not gather the data? Was it simply because I was a lazy doctor, or is there a larger factor at play that affected my data gathering process? If I keep asking these questions, ideally I can precisely locate the lesion within my diagnostic process, so that I can come up with an actionable strategy to prevent that from happening again.

JOHN: What would be helpful is a way to search for answers to these questions systematically. Fortunately for us, this has been the object of considerable research. Beginning in the late 1980s there have been a number of different authors (including Chimowitz, Kassirer and Kopelman, Bordage, and Graber, to name a few) who, by examining case records and interviewing physicians, sought to systematically compile different types of cognitive errors that lead to misdiagnosis. There’s considerable variation in the terminology these authors use, but their studies all had something in common: They took as a starting premise that diagnostic problem-solving can be represented as a sequence of steps. First, the clinician starts by gathering and interpreting data; next, that data is synthesized, and hypotheses generated, or “triggered”; and third, those hypotheses are subjected to a process of validation. These authors categorize and associate the various cognitive errors they identified based on “where” in this process they seemed to be occurring. Right now, we’re going to repurpose this framework to explore our case, to try to narrow down one or more deeper reasons why I never thought of the diagnosis of varicella.  

CINDY:  It would make sense to start at the beginning of the process: the first step, data gathering. 

JOHN: Right. Those studies we referred to earlier attributed a number of mistakes to this stage. Failure to gather important information while taking a history is an obvious example. But so is gathering excessive amounts of unhelpful information. Relying on history or exam findings elicited by other clinicians is another. Deficiencies in physical exam technique, along with performing an incomplete exam — or failing to examine the patient at all — also fall under this category. 

And actually, there is something I overlooked in the patient’s exam. And I’ll be honest, if you’re listening and you missed the diagnosis, you might be mad at me for this. I told you that the patient’s rash consisted of papules and pustules, and it did. But the dermatologist who saw the patient wrote that the patient also had some vesicles. And sure enough, when I went back to the bedside, I found that although the majority of lesions were papular or pustular, there were a few scattered vesicles. One on the hand. A couple on the chest. Pinpoint blisters of clear fluid on an erythematous base. I had completely overlooked those.

CINDY: And even though you initially told Dr. Cocks and Stern the rash was papulopustular, later, when we showed them a picture of the rash, they immediately recognized that a couple of the lesions were actually vesicles. That seems to be what led them to varicella.

COCKS: Almost pustular, actually. Oh no, those are vesicles. That’s vesicular.

STERN: That’s vesicular, I think. There’s one that’s circular and crusted over a bit.

COCKS: Does he have an eschar anywhere?

JOHN: No.

CINDY: What were you thinking about?

COCKS: Rickettsialpox.

(JOHN: Rickettsialpox. Babesiosis. IgG4 disease. The Patrick Cocks differential.)

COCKS: Febrile diagnosis. Diffuse vesicular rash. On the palms, right? Yeah. I mean, right. Rickettsial disease. Syphilis. Right. But the degree of pyrexia is surprising to me.

STERN: Chicken pox?

COCKS: Primary varicella?

STERN: Primary varicella… Strange on the hands, but on the chest.

COCKS: But in a 39 year old? I mean, I saw it once in my life, in an adult with primary varicella. The vaccine is not 100%.

JOHN: So maybe one obvious takeaway from this case for me is a simple rule: When a patient has a rash, look at every single lesion. And the next time I see papules or pustules, I need to search for vesicles. After all, data gathering is upstream of data integration. We’ve said this before, but if the quality of data gathering is poor, there’s no way to “reason” your way to the correct diagnosis. This is why you can’t rely on a “clinical reasoning” podcast to become a better diagnostician…

CINDY: Actually John, I have an alternative interpretation of what happened here. But to explain, I think we need to introduce our third discussant for this episode.

PORTER: So I immediately and only thought neuroleptic malignant syndrome when I, when I heard your one-liner. So I guess the next question I have is, um, it’s really about what medications he is taking.

I think I would have two main things that I would be thinking about. One is, is this a psychiatric emergency, a psychiatric pharmacologic emergency, like neuroleptic malignant syndrome. And then I’m like, well, he’s a 37 year old or a young guy with a fever. So what do we know about a young man with a fever who also has some psych history? What does he look like? What are his other vital signs look like?

CINDY: CoreIM listeners, meet Dr. Barbara Porter. Dr. Porter is also a general internist and senior faculty at NYU. 

JOHN: Dr. Porter suggested the possibility of primary varicella very early on, without most of the case information you’ve heard. As you heard, she started the same way as our first two discussants, asking about the medications. But then she deviated by going straight to the physical exam, which led her to the rash.

PORTER: And does he have it, uh, what surface of his arms and legs?

JOHN: They seem to be on both the dorsal and the ventral surfaces, if that’s what you’re asking.

PORTER: Yeah. And how about his digits?

JOHN: There aren’t any on his digits. Maybe a couple on his palms. Okay.

PORTER: On his soles too?

JOHN: None on his soles.

PORTER: How about in his mouth does he have any lesions inside of his mouth?

JOHN: Nothing inside the mouth.

PORTER: And his conjunctiva?

JOHN: Conjunctiva are clear.

PORTER: And you called it… “pustular papules”? And the “pustules”, do they have, are they clear, fluid filled or…

JOHN: Not really. 

PORTER: They don’t look like dew drops on a rose petal, for example. Not really. That’s not what you thought when you saw them.

All right. I mean, I’m wondering if I’m like, well, is it varicella? And then, of course I’m thinking it’s syphilis, because what’s not syphilis? Yeah.

JOHN: What triggered syphilis and what triggered varicella?

PORTER: Well, so what triggered syphilis was when you said his palms, so, and that was like, you know, if this then that, you know, if palmar, then syphilis, um. And I don’t know what made me think Varicella, but um, but something about fever and a diffuse rash with pustules made me think, oh, is this something that we just don’t see a lot of? And a guy who is right on the cusp of, um, the age where we started, uh, immunizing people against varicella, although I don’t know where he would have picked that up. So, and I, and I don’t remember when fever is part of varicella, but that, but a viral illness like that crossed my mind.

JOHN: Now to clarify, I was describing the patient’s papulopustular rash to her, but I had not shown her the picture at this point. 

CINDY: And that’s my point. You hear how she thought of varicella — something about the patient’s young age reminded her of the diagnosis — and at that point she openly questioned whether the rash you saw was in fact vesicular. John, five minutes ago you said you didn’t think of varicella because you missed the vesicles. Could it in fact be the other way around?

JOHN: I see what you’re saying. Dr. Porter didn’t need the photo to think of varicella; she looked for vesicles because she was thinking of varicella. So, did I not think of varicella because I didn’t notice the vesicles — or did I overlook the vesicles because I wasn’t thinking of varicella?

CINDY: Yes. Humans see what they expect to see. There was an influential study by Geoffrey Norman in which doctors were given a series of photos, each one showing a classic physical exam finding straight out of a textbook, like a malar rash, moon facies, or jaundice. The authors found that the subjects were able, for the most part, to identify the images correctly. However, if they were given a case history along with the photo, they tended to interpret the images differently, in a way that supported a diagnosis that fit the history better.

JOHN: Yeah, I remember that study. So for example, subjects could easily identify a textbook image of moon facies in a patient with Cushing’s disease, but if you told them beforehand the patient was a heavy smoker, they’d mistake it for facial plethora, you know, thinking about SVC syndrome. Maybe it’s not surprising that the brain works this way — our observations usually have context — but the degree to which expectations can distort perception should still be concerning. Like for example, Norman’s group noted that subjects who were told that a patient had liver disease saw jaundice in a photo of a man who, in reality, just had a healthy tan — and they made that mistake even though the sclerae were clearly white.

CINDY: Right. So to me it seems an oversimplification, saying the root cause was a failure in data gathering. Because even though data gathering informs hypothesis generation, hypothesis generation also influences data gathering. 

JOHN: I got you Cindy. I mean, obviously, the point isn’t that I get a pass for overlooking the vesicles on exam. The takeaway here is that when I realize I’ve missed a finding on exam, I can’t simply dismiss this as a lapse in attentiveness, something that can be avoided simply if next time I “look harder, be more thorough.” It’s also potentially a cognitive error — looking for the wrong things, or even worse, not thinking about what to look for at all. As Georges Bordage writes, “If you don’t know what you’re looking for, there is a good chance you won’t see it no matter how thoroughly you look. When in doubt, step back and view the problem from a different angle; gather and interpret data with another diagnosis or differential diagnosis in mind.”

So Cindy, we’ve just been talking about how, in the model of the diagnostic process we’ve adopted from earlier authors, some errors seem to fall under the category of data gathering and others into hypothesis generation, but that these two subprocesses are closely linked and interdependent, almost recursive. In fact the authors themselves highlighted this interdependency, and the iterative nature of the entire process. I think something similar emerges when we move further along the diagnostic process, when we consider the relationship between hypothesis generation and hypothesis evaluation.

CINDY: What do you mean?

JOHN: In order to better explain this I think we can actually recreate a little experiment for our listeners. Folks, here’s a challenge for you: I’m going to list six industries associated with one particular US state. As I give you each item, try to guess what state I’m thinking of. Are you ready?

1. Beef cattle, cows. 

2. Fish. 

3. The aerospace industry.

4. Citrus fruits. 

5. Tourism. 

6. Forestry. 

…What did you come up with?

CINDY: I’m lost. 

JOHN: So in 1979, Gettys and Fisher published a study in which they gave subjects a bunch of tasks just like this, asking them to record their hypotheses after hearing each piece of data, and also how subjectively plausible they thought each hypothesis was. They deliberately constructed the lists so that the first three items would favor one hypothesis, but the fourth, fifth and sixth items would be inconsistent with that hypothesis. So for example, the list you just heard was designed such that a typical American subject (in 1979 at least), upon hearing cattle, fish, and aerospace, would typically think of Texas — only to then hear citrus fruits, tourism and forestry, none of which are consistent with that state. (The correct answer here is Florida; those are six products of Florida.)

The authors’ main finding was that even though the subjects steadily received more data, they didn’t steadily generate new hypotheses. Subjects were three times more likely to introduce new hypotheses when confronted with new data that decreased the subjective plausibility of their currently held hypotheses. So like, thinking “Texas” and hearing “aerospace industry” wouldn’t move the needle, but then “citrus fruit” would prompt a flurry of new hypothesis generation. In other words, the study suggested that our existing hypotheses have to be contradicted before we can be bothered to come up with new ones.

CINDY: That’s supposed to happen right? Come up with new ideas when the old ones don’t seem to explain things anymore?

JOHN: Yes, but if you really think about it, that doesn’t seem optimal. To explain this, let’s imagine you suspect that a patient with dyspnea is having a COPD exacerbation. If you then hear wheezing on exam, sure, that strengthens the plausibility of your presumed diagnosis of COPD, but logically, normatively, hearing wheezing should also trigger other diagnoses associated with that cue — pulmonary edema from CHF could cause wheezing; vocal cord dysfunction could cause wheezing; for that matter, other things besides COPD can cause bronchospasm and thus wheezing. But Gettys and Fisher’s work suggested the brain cannot be trusted to work like this. We are prone to ignoring alternative hypotheses suggested by the data, as long as the available data is still consistent with our favored hypothesis. Only when we receive and recognize data that contradicts our current hypothesis — like if we got a chest X-ray showing a wedge-shaped consolidation in the lung — only then do we search for an alternative explanation in earnest. This kind of behavior has been observed in a wide variety of contexts and is referred to as the win-stay, lose-shift heuristic.

CINDY: Knowing this is how my brain works, though, are there tools I can use to “counter” the natural tendency?

JOHN: I don’t think you can counter the tendency itself. Nor would you want to, because win-stay lose-shift, like other heuristics we exhibit, is often successful. If we accept that our brain’s use of heuristics is inevitable, I think there are basically two things we can do. The first is to try to augment the heuristic with a different way of conscious thinking, so that the diagnoses we come up with will not solely be the product of the heuristic. 

CINDY: Right, “what else could it be”. This is a “cognitive forcing strategy.”

JOHN: And there are many ways to facilitate a search for additional diagnoses. As one example, a friend of the show, Dr. Andrew Parsons, suggests reframing the case by pairing aspects of the broader problem representation, like “fever and rash”, “fever and antipsychotic use”, “antipsychotic use and rash”, and subjecting these “mini-representations” to standardized schema like “What are the most common causes of this? What’s the worst-case explanation for this? Going down a list of organs, what can cause this?” Had I adopted a strategy like this, maybe varicella would’ve been triggered in my head. To be honest, cognitive forcing strategies probably merit their own episode.

But heuristics are tools, not handicaps. So in addition to augmenting our heuristics with cognitive forcing strategies, I think we can also optimize the performance of these heuristics themselves: We can try to understand under what conditions the heuristic is most likely to be successful. In the case of the win-stay, lose-shift heuristic, I need to be certain that I will recognize the data that is contradictory to my working hypothesis. In our case, that would be equivalent to asking myself, “Do I know what to look for that would be inconsistent with my diagnosis of acute generalized exanthematous pustulosis? Am I knowledgeable enough about the way this disease should, and shouldn’t, behave?” In retrospect, I wasn’t, and I think this was a major weakness in my diagnostic process. Earlier we heard how all three of our discussants started with the same initial suspicion that I had, that this was some sort of adverse reaction to one of his medications. They had to have rejected this incorrect hypothesis in order to have worked their way to the correct one. What did they recognize in this case that I didn’t, that enabled them to do this?

PORTER: But then I go back to this crazy pustular rash. What medicine causes that diffusely? What?

JOHN: It bothers you that the rash is pustular? It bothers you because…?

PORTER: Yeah, it makes it seem less drug related to me. Less uh, less of an allergic type rash.

STERN: So I’m like squarely in the, in the viral rickettsial mode. That’s the only place that I am right now. It’s not those meds, it’s not a bacterial infection. I don’t know where you are.

COCKS: I mean, when I first saw it, I thought it was rickettsialpox, but I got snide remarks from our lay presenter. You could have pustular drug reactions that are acute, but the fever to me argues against that.

STERN: And the drugs that he’s on, I don’t see a drug fever after that period of time. I just feel like that’s a little odd. And the rash, I mean, again, I’m not going to see a rash. This does not to me look like that. Measles is going around. Well the drug reaction timing here is just, it’s not great for the lithium and clozapine… that’s a little late as well. And so the timing isn’t great for the drug reaction. It’s not impossible.

CINDY: So a couple of things they mention there. Dr. Porter mentions that pustular rashes are generally speaking an uncommon manifestation of drug reactions.

JOHN: At the time that didn’t bother me — I was aware of acute generalized exanthematous pustulosis as an entity, and I had seen a pustular form of DRESS a few years earlier. But it turns out these diseases are much rarer than I thought them to be. So my knowledge of base rates for the diagnoses relevant to this case was inadequate.

CINDY: Dr. Stern mentions the time course, which I think is critical. AGEP typically starts within a week, and often within just a day or two of the medication. This patient’s rash started 14 days after his medications were started.

JOHN: And with DRESS, that reaction typically occurs 2-6 weeks after initiating the medication. So this patient’s symptoms are kind of in this awkward in-between zone. Again, not impossible. But inconsistent.

CINDY: And he also talks about how these are the wrong medications. AGEP is almost always caused by antibiotics, antimalarials, and diltiazem. That includes penicillins, cephalosporins, quinolones, sulfonamides, etc. Antipsychotics and lithium are not on that list.

JOHN: The highest-risk medications for DRESS, meanwhile, are antiepileptics, like phenytoin, carbamazepine, and lamotrigine. Allopurinol is a common culprit, as are some antibiotics, particularly sulfonamides. (If you’re wondering, most of these medications have something in common: An aromatic ring.) Now, antipsychotic-related DRESS has definitely been reported. In fact, I’ve seen two cases of clozapine-related DRESS here at Bellevue, which might skew the enabling conditions I’ve encoded in my illness script for DRESS. But again, it’s not classic. 

There’s a great deal of variation in the way diseases behave. We expect pneumonia to cause fever and cough — but every now and then, it doesn’t. This is a central challenge of diagnosis (and arguably part of what makes it so rewarding, and fun). With this challenge we stumble in two ways. Sometimes, our expectations are too lenient — we overestimate how much variation a disease can have. That’s the kind of error I made here, thinking AGEP could plausibly present after two weeks, or as a result of psychotropic medications. The consequence is that we keep a hypothesis we ought to have rejected. In retrospect it’s obvious I was standing on treacherous ground: I was invoking a rare type of drug reaction, attributing it to an atypical culprit, and positing an uncharacteristic period of latency.

Now, other times, our expectations are too restrictive — we’re too specific in how we expect a disease to behave, and the consequence is that we reject a hypothesis we ought to have kept. For example, I think of varicella as a disease of young children and profoundly immunocompromised adults, not healthy 39 year-old immigrants. In other words, compared to an expert, our knowledge of a disease as novices is imprecise; our mental representation of a disease isn’t “tuned”, it poorly approximates the way that disease behaves in reality. 

I’m not just bringing this up because it gives me another tangible thing I can work on. Cindy, back when you and I first got interested in going deeper into clinical reasoning, I had this unspoken assumption that expert clinicians have certain strategies or problem-solving methods they use that enable them to succeed. But look what enabled our discussants to succeed here. They didn’t use a cognitive forcing strategy. It was knowledge. Specific, precise knowledge about a given disease, organized and accessible for their use when needed.

CINDY: In other words, clinical reasoning does not occur in a vacuum.

JOHN: Which was basically one of the major turning points in the history of research into clinical reasoning. That central to the development of expertise isn’t the acquisition of some general problem-solving skill, applicable across all kinds of cases, but knowledge. As Gurpreet Dhaliwal writes, “Knowledge is king.”

CINDY: Can we just take a step back for a moment here John? Are we making this more complicated than it has to be? 

JOHN: How do you mean?

CINDY: Could it simply be that you didn’t get the diagnosis because you hadn’t seen it before? I mean, primary varicella in an immunocompetent adult? You’re inexperienced, and it’s rare. Even Dr. Cocks suggested this.

COCKS: Knowledge base though. I think that we don’t think about it because, what is the primary manifestation of varicella zoster virus that we see? It’s shingles. It is a dermatomal illness. How many primary varicella have you seen in your life? [JOHN: Never.] Right. I’ve seen it once (actually in the same room that my patient’s in now). So I don’t think we see it, right? We think of it as a disease in a dermatomal distribution yet, I would say 30 years ago this patient… the psychiatrist would know, this is primary varicella. 

JOHN: I know what you’re asking, Cindy. If knowledge is so important, isn’t gaining experience worth more than thinking about our reasoning? Why have a Hoofbeats podcast at all? Lol.

Well, I don’t doubt that experience would have increased the likelihood of making the diagnosis. But there are a couple of things that make that conclusion unsatisfying. One, I don’t think we can attribute our discussants’ success in this case to their being experienced with this diagnosis. Our discussants are experienced, yes, but we directly asked them how many cases of varicella they had seen before in their careers. You just heard Dr. Cocks himself say he’s only seen a single case. As for the others:

STERN: I don’t know that I’ve ever seen primary varicella in an adult. I don’t think I’ve ever seen it in adults.

JOHN: And how many cases of Varicella have you seen?

PORTER: Um, well, seven if I count my six brothers and sisters and myself. But, um, no, so, you know, I saw it during training. Um, I haven’t seen, I haven’t seen a case of Varicella in a very long time, so it’s, but I, I, I, I recollect seeing it in a, um, in a grown up during my training — a young adult, I feel like they were immunocompromised. So it was an, an, you know, like an unusual case.

CINDY: So they’re not varicella experts either.

JOHN: No. The other problem with this explanation is, it implies I won’t diagnose anything I haven’t yet seen myself. I don’t want to be that kind of a doctor! Someone who has to screw up in order to get better. And then says at his retirement party, “If I have seen far, it is only because I have stood on the shoulders of my patients’… corpses.”

Last but maybe most importantly. I know “experience is the best teacher” is an age-old proverb. But I think I’ve been a doctor long enough to say this, and I’m just going to say it straight: Experience isn’t a great teacher. In fact, it kind of sucks. Experience does the kinds of things we criticize mediocre attendings for doing, you know, probably the kind of stuff my students write in my monthly evaluations. For one, experience doesn’t always give feedback, and when it does it’s rarely in a timely fashion (because consequences aren’t always immediate). Experience’s teaching is inconsistent (because not every bad decision leads to a bad consequence, and not every sound decision leads to a good one). Experience doesn’t always hold you accountable for your failures, or recognize you for your successes (because we work as part of teams within large systems).

CINDY: So there’s a difference between experience and expertise. More experience makes you by definition a more experienced doctor — but it does not necessarily make you an expert. 

JOHN: Right. Experience without reflection turns us into what people call the “experienced non-expert”. Actually, as a young attending, I can’t even say I’m that. But after our discussion I’m more convinced than ever that we’ve got to make space, find time, and put effort into learning the right lessons from our mistakes. And for this case, at least, I’ve finally got some ideas.

Alright listeners, that should do it for this episode. As always, let us know what you think as our case formats continue to permutate.

CINDY: And remember, if you have a case you’d like to submit for discussion, or someone you’d like to come on and hear as a discussant, or if you’re interested in developing and hosting an episode, please, get in touch with us! Visit our website at www.coreimpodcast.com or send us an email at hello@coreimpodcast.com. We are also on facebook and twitter, at @CoreIMpodcast.

JOHN: Thank you to Drs. Barbara Porter, Patrick Cocks and David Stern as our discussants, as well as Amy Ou, Shreya Trivedi, and Marty Fried. Special thanks to our audio editor for this episode Harit Shah, along with our other CoreIM colleagues.

CINDY: … And an honorable mention, as always, to Dr. Steven Liu.

JOHN: Opinions expressed in this podcast are our own, and do not represent the opinions of other affiliated institutions, nor should they be construed as medical advice.

CINDY: Thank you for joining us. With CoreIM, I’m Cindy Fang.

JOHN: And I’m John Hwang. See you next time.
