Time Stamps

  • 01:00 System 1 vs. System 2
  • 07:46 Cognitive Biases
  • 13:46 Case for Knowledge
  • 18:29 Other Practices to Decrease Error
  • 28:26 Conclusion

CME-MOC

Show Notes

Deconstructing the process of thinking – system 1 vs system 2

  • Traditional teaching: 
    • System 1: 
      • Fast
      • Intuitive
      • Often “incorrect” 
    • System 2:
      • Slow
      • Deliberate 
      • Leads to “accurate” conclusions 
  • The data behind system 1 vs system 2 thinking comes from research with undergraduate psychology students
  • Some research on system 1 vs system 2 in medicine
  • System 1 vs. System 2 creates a false dichotomy; both have their place
    • Slowing down is especially important in absorbing and gathering information
      • Experts may have to slow down less and less as they have learned which info is most relevant to gather

Cognitive biases

  • Terms used to describe deviations from normative, rational thinking
    • Distinctions between different types of biases are arbitrary 
  • Experiment: What happened when experts from the Society to Improve Diagnosis in Medicine reviewed resident cases?
    • In cases where the resident was wrong (compared to when the resident was right): 
      • Identified twice as many cognitive biases
      • Takeaway: Identifying biases is not useful in itself 
        • May be a tool to open conversation/reflection about mistakes!
        • Language of cognitive biases helps to discuss our errors 
        • This language is better used to describe what happened than to identify why it did 

Case for knowledge

  • If you do not have the knowledge needed to come to the diagnosis, cognitive “de-biasing” is futile. 
    • It’s impossible to organize and consider knowledge that you don’t have 
    • Gurpreet Dhaliwal: “Knowledge is king” 
  • Experts are able to use more experiential knowledge + pattern recognition 
    • Advanced physicians can come to a correct diagnosis with minimal info provided 
  • Experiment: What happens when medical students and cardiologists are given extraneous clinical information along with ECGs? 
  • ChatGPT/AI is a valuable tool that relies solely on pattern recognition
  • Experts spend more time in system 1 thinking than system 2

Other everyday practices to decrease errors

Check out ACP’s Diagnostic Excellence curriculum

Further Reading

Transcript

Dr. Geoff Norman: There were times when I thought, you’re going to pay me to do this. Goodness. Well, for the last 10 years they haven’t been paying me. I’ve been doing it because I enjoy it. So there you go.

Dr. Shreya Trivedi: That was Dr Geoff Norman, Professor of Clinical Epidemiology at McMaster University. And we were so fortunate to sit down with him and hear about his career’s work in the realm of diagnosis. He is the author of 10 books and over 300 journal articles that he still happily works on today during his retirement. 

Dr. Shreya Trivedi: Welcome to a special episode on diagnostic excellence and mitigating diagnostic errors. I am Dr. Shreya Trivedi. Today we will start with rethinking some of our traditional teaching around clinical reasoning – first with system 1 vs. system 2 processes, and then with cognitive biases. Then we will ask ourselves hard questions about what we can do to mitigate diagnostic error, and end on a creative note thinking through the everyday practices and instructional strategies we can use to achieve that diagnostic excellence!

Dr. Shreya Trivedi: Typically when I’m thinking about mitigating diagnostic errors, I think back to talks I have gotten on the dual-process theory, right? System 1 is fast, implicit, unconscious and system 2 is slow, deliberate, and analytic. Traditionally, in training we have a tendency to prioritize system 2 and slowing down, while going fast with system 1 gets a bad rap for being error-prone. But in talking to Dr. Norman I quickly learned that the dichotomy between system 1 and system 2 may not be all that it’s hyped up to be.

Dr. Geoff Norman: System one is a pejorative term. Nobody says, isn’t that great what he did with system one thinking. No, no, no. System one thinking is what people do when they’re sloppy. Yeah, they’re not taking their time, they’re not being careful, they’re not being systematic, they’re not being thorough, blah, blah, blah, blah, blah, terrible. Let’s switch over to system two because that’s rational and logical and it’s cognitive thinking. It’s good stuff. The world talks about it the other way because we always prize that man sitting on a stool thinking deep thoughts.

Dr. Shreya Trivedi: So I got curious: where did this idea come from that errors in reasoning arise from system 1 processes? And where did the idea come from that system 2 processes can correct the biases that lead to error? Turns out a lot of it comes from first-year undergraduate psychology students and how they reason through problems they’ve never seen before. So system 1 and system 2 may be a good description of how freshman psych majors reason, but how well does that translate to building expertise in medicine? Dr. Norman looked into just this with resident physicians. He and his team asked: how effective is it, in reducing diagnostic error, when residents switch over from system 1, going as fast as possible, to going slow?

Dr. Geoff Norman: And so what we did in this study is essentially a two-part study where part one, go as fast as you can, come up with a diagnosis. Part two, remember the 45 year old guy with chest pain, would you like to take another look at it? And so people would then take another look at it. Even though their overall accuracy at the first pass was only 50%, when they were offered the cases a second time, they only looked at about 20% of the cases. But they only changed their mind 8% of the time. And of course sometimes they changed it in the wrong direction. So left to your own devices, you’re really a very poor judge of character and of your own abilities.

Dr. Shreya Trivedi: And so, when given time to reflect more on the cases, in the end the overall study accuracy increased only about 2%. And this experiment was not an anomaly. He rattled off study after study, which we will link to in the show notes, showing that instructing clinicians to just proceed slowly and cautiously did not improve diagnostic accuracy. 

Dr. Geoff Norman: Another study that I came across, it’s about crowdsourcing. And the idea is, of course, we’re going to give the case to a bunch of doctors, and what we’re going to do is, in some sense, average the collective opinions of the doctors. We’re going to decide what the right diagnosis is. Turns out the worst strategy to decide what the answer is, is to pick the person who took the longest. Going slow hurts you. The best strategy: the guy who gets there first is the guy most likely to be right. Pick the fast strategy, rely on system one. It’s fast and it’s effective. It’s not just efficient, it’s effective. What generally happens is that when you do these studies, you tend to put one against the other. So this group is going to be encouraged to take all the time they wanted to debias their way, and this group is going to be encouraged to go as fast as they can. We show very nicely that there’s no difference in accuracy, except the first group took twice as long, but there was no gain in accuracy from taking twice as long. Full stop.

Dr. Shreya Trivedi: You know, I’ll be honest, I was pretty surprised to hear just how many studies showed that simply asking people to slow down does not help. But like most things in medicine, the body of literature is mixed. For example, if you look at Dr. Silvia Mamede’s work on reflective practices: in one study, she instructed residents to do either fast pattern recognition or “reflective practice,” which she defined as actually writing out the pros and cons for each diagnosis on their differential and then coming to a conclusion. For simple clinical cases, pattern recognition versus reflective practice didn’t make a difference, but for complex, unusual cases, reflective practice did have a positive effect. And thinking about this reminds me of a challenge that comes with these experimental studies. The cases contain all the information necessary to reach the correct diagnosis, which does often favor system 1 processing. But in the real world, you have to gather the information without explicit cues from case-writing. And that data-gathering is probably the place to be more methodical, thinking through things like medication changes or chart review of prior discharge summaries. The more I think about this, the more I think the system 1 versus system 2 debate creates a false dichotomy. And Dr. Norman actually agrees; we need both.

Dr. Geoff Norman: The challenge of course is to slow down at the right point. The first study I did, we watched people working up standardized patients, and watched them do an infinite functional inquiry. Why do that? To give yourself thinking time. You’re not listening to the responses unless they say something: have you had any problems with headaches, with eye blurriness, with dizziness, with earaches, with throat, blah blah, blah, blah, blah. You’ve got a few things that you’re looking for and your ears will perk up at the right moment, but that gives you time to think through what’s going on here. Physical exam, a lot of that is giving you thinking time, too. I mean, there are obviously cases, situations in every day of your working life where you have to think carefully through something. Fair enough, I give you that. But it’s also an observation that the more expert you become, the less you have to do that. So it’s a balancing act. And obviously my concern is there’s far too much emphasis on getting away from system one and far too little on system one.

Dr. Shreya Trivedi: And maybe the reason there is not more emphasis on system 1 is that the clinical environment is getting increasingly demanding. The cognitive load of the day already demands fast thinking before you get to your next page or the next thing that pulls you in a different direction. So maybe all the emphasis on system 2 in training and in faculty development is a chance to remedy some of that. If I were to summarize so far: a better way to think about system 1 and 2 may be that each serves its purpose at different times. In diagnostic decisions, system 1 is not bad (maybe that is why we also say, “trust your gut”), and simply slowing down and cautiously reflecting without explicit guidance is not necessarily going to correct errors.

Cognitive Biases

Dr. Shreya Trivedi: Another piece of traditional clinical reasoning teaching: I remember that for every M&M (Morbidity & Mortality) conference I prepped in residency, on the last slide, after disclosing a diagnostic error, I would have to try to name the cognitive bias at play. Oh, this was availability bias, or anchoring, or something else. But it got me thinking, does having a better sense of cognitive biases actually help us improve our diagnostic accuracy? So I also turned to Dr. Hwang, a long-time friend of Core IM, especially of the beloved Hoofbeats segment. I got him pretty fired up when I asked him, hey, would it be helpful to learn about the biases that are most prevalent? And he very kindly, as a good friend, redirected me!

Dr. John Hwang: You can divide the electromagnetic spectrum into an arbitrary number of colors. And in the same way, people have identified literally hundreds of cognitive biases, and they overlap horribly. And many of them are not well defined, and they’re all trying to get at the same thing, which is the definition of a cognitive bias: a deviation from what is normative, what is normal, or what is rational. Right? So people have come up with this menagerie, hundreds of cognitive biases, to describe all the ways that we deviate from that behavior, but they’re purely descriptive terms, just calling something red as opposed to light red or dark red or maroon or crimson or whatever.

Dr. Shreya Trivedi: Okay, so if cognitive biases are descriptive terms along a spectrum, that might explain why even experts cannot agree on which cognitive bias is responsible for an error.

Dr. Geoff Norman: We then got a bunch of people from the Society to Improve Diagnosis in Medicine, and these are the experts on cognitive biases, and said, okay guys, please go through these protocols describing residents working up cases and identify the cognitive biases for us. 

Dr. Shreya Trivedi: They gave the faculty in the study the exact same resident cases, but changed just the last line. So, for example, the d-dimer was either negative or positive, and thus the diagnosis the resident made was either right or wrong.

Dr. Geoff Norman: The bottom line was that when the resident was right, they identified 1.8 biases; when the resident was wrong, they identified precisely twice that, 3.6 biases. Even though all that changed was the last line. They were reviewing the same protocol except for the last line, the d-dimer was positive or negative. All the rest of the protocol was exactly the same, yet they saw twice as many biases when the last line was negative as when the last line was positive.

Dr. Shreya Trivedi: They repeated this, but this time gave definitions of 6 biases to make sure, hey, everyone was on the same page. But even then, there was no agreement whatsoever. People could talk themselves into a case being availability bias, or base rate fallacy, or representativeness bias. And the fact that there is so much overlap among the biases is just the surface of the issue. Both Dr. Norman and Dr. Hwang said the bigger problem with cognitive biases is that people often stop at just identifying the bias and don’t go deeper.

Dr. John Hwang: Because if you learn this menagerie, if you learn what anchoring is, you can say to somebody, I really anchored on this diagnosis of acute generalized exanthematous pustulosis. And you’ll be able to start a conversation where that other person knows exactly what you did wrong. And I think that, again, the mistake here is to say that that’s the end of the process, that you say, okay, well then don’t anchor next time. That is not the issue. The key is to recognize that the anchoring is the phenomenon. It’s almost like the symptom, and you’re trying to diagnose what things may have given rise to it. And just like diagnosis, there could be multiple things, and you’re never going to figure out exactly what it was that was the prime mover. Was it a knowledge gap? Overconfidence? Was it circumstantial factors? Emotional things? Tiredness?

Dr. Shreya Trivedi: Dr. Hwang gave another example from one of the Core IM Hoofbeats episodes, #39 to be exact, where he did not diagnose chickenpox when he was consulted on the psych floor for a patient who developed a rash and a fever.

Dr. John Hwang: Now is that because of base rate fallacy? I didn’t consider the fact that chickenpox is actually a fairly common disease worldwide. Or was it just availability, the fact that the drug-induced diagnoses were just close at hand? You know, you could list a million things. And I don’t know exactly, at the end of the day, why it was that I didn’t think of the diagnosis. All I know is that I didn’t think of the diagnosis. So the certain thing, the thing I’m fairly certain of, is that I erred. But in terms of why I deviated from what would’ve been considered to be rational behavior, it’s just people talking out loud about what they think happened. 

Dr. Shreya Trivedi: Okay, so we may not gain much by trying to categorize errors we’ve made as driven by this bias or that bias. But then it begs the question: how does knowing the different cognitive biases help us? Is there any benefit?

Dr. John Hwang: I think that talking about clinical reasoning is most valuable in the fact that it provides a shared language for us to communicate with each other about diagnostic mistakes and also diagnostic successes. And I would just add to that, it allows people to communicate with themselves.

Dr. Shreya Trivedi: Yeah! I really do get that, the language of cognitive bias can help us engage in reflection. And in terms of shared language, it does allow me to be a bit more open about errors, since someone else can relate to a time that they also anchored. And so my takeaway here is that this shared language of cognitive biases can help describe what we did, but awareness of the type of bias is less important than figuring out why you deviated from the norm in the first place. Was it a knowledge gap? Was it overconfidence? Was it the cognitive load of the environment? Or was it multiple different factors? 

Case for Knowledge

Dr. Shreya Trivedi: Okay, so now we know: simply taking an ‘inventory’ of cognitive biases, by itself, doesn’t reduce diagnostic error. Nor does just slowing down, in and of itself. But then what does help? And this, for me, was where the biggest plot twist came, if I’m being honest. Dr. Norman quotes Dr. Gurpreet Dhaliwal in his 2017 editorial in BMJ on the topic.

Dr. Geoff Norman: And it’s such a beautiful quote, I’ve got to tell you, too. This is Gurpreet Dhaliwal, who’s at UCSF: “if you’ve not heard about myasthenia gravis, you cannot cognitively de-bias your way into that diagnosis. You can spend all day in system two and collect more and more information. But if you do not have a well-developed illness script that contains atypical manifestations of heart failure, you’ll never recognize it. In the realm of expert performance, knowledge is king.” He said it beautifully, he said it best.

Dr. Shreya Trivedi: Knowledge is king. You know, initially I was surprised to hear that, but after sitting on it some more, I thought, wow, that actually makes a lot of sense. If I don’t have a good illness script for porphyria, no matter how much humming and hawing I do on rounds about someone’s unexplained abdominal pain, I won’t think to send urine porphyrins and get the diagnosis. And if we look at “experts,” it might not be their reasoning that distinguishes their acumen, but their ability to draw upon an expansive knowledge base and access the right information.

Dr. John Hwang: You can’t organize knowledge you don’t have, and it’s only by gaining knowledge that you’re forced to organize it in the first place. 

Dr. Geoff Norman: And there is good evidence that as you become expert, you move away from formal knowledge towards experiential knowledge. I like that better than saying you move away from system two to system one. So expertise is a matter of getting people to pattern recognition. Experience will buy you an awful lot of accuracy; expert physicians are really, really well calibrated with a minimal amount of information.

Dr. Shreya Trivedi: So if experts have more experiential knowledge, that is, knowledge developed from actually taking care of an illness rather than just being taught about it by others or reading about it, it got me thinking: are experts, drawing from all their experiential knowledge, less vulnerable to cognitive biases?

Dr. Geoff Norman: Everybody assumes that everybody, all the time, is vulnerable to cognitive biases. Very few people have studied how vulnerability to biases evolves with expertise. And so this was a very simple study where we had a bunch of ECGs, and they either had a confirming history, a negative history, a disconfirming history, or no history. And surprise, surprise, medical students are incredibly vulnerable to the effect of history. Cardiologists virtually not at all. You can move them up or down by about three or four or five percent, that’s all. Whereas with medical students, you’re getting swings of plus or minus 30% in diagnostic accuracy.

Dr. Shreya Trivedi: Very interesting. So attending cardiologists are much less swayed in their diagnosis. Again, this was done in controlled environments, but at least here, when researchers threw curveball histories at objective ECG findings, the attendings, who have seen those ECGs many, many more times than medical students, were able to engage in that pattern recognition. And speaking of pattern recognition, Dr. Norman brought up AI, artificial intelligence, and the reason ChatGPT may be promising for diagnosis.

Dr. Geoff Norman: ChatGPT. It doesn’t understand anything. It just pattern recognizes. And the reason it mimics humans so well is because pattern recognition is almost the quintessential human skill. But that’s not the bad guy. That’s the good guy. And the notion that somehow, if we can rise above that and become more rational, the world will be a better place? Not really. In fact, the more expert you are, the more you stay in system one, the more you don’t have to rely on rational deduction and all that stuff. And so we’re spending a lot of time on this cognitive de-biasing, essentially picking on the wrong target. We should be encouraging system one, devising educational strategies to improve it, rather than saying don’t do it. 

Dr. Shreya Trivedi: And before we get into just how we go about this, I don’t want the idea of encouraging system 1 to be misinterpreted. There is certainly value in slowing down at the right moments: stopping to look at prior imaging to make sure nothing is missed, looking carefully at the medications and seeing if there’s any correlation with symptoms, and even when you do get the initial diagnosis of, say, pneumothorax, slowing down yet again to ask WHY that pneumothorax is there, to help uncover the actual underlying diagnosis. And then, yes, after you have gathered all the data, lean on your knowledge base and the system 1 that helps us most.

Other everyday practices to decrease errors

Dr. Shreya Trivedi: So we talked about the importance of building up that knowledge base and really solidifying those illness scripts. But I don’t want the takeaway to be “read more and see more patients.” How can we do this in an intentional way? This is where the part of me that loves thinking about systems and habits gets really excited. What can we do every day in our practice to decrease errors?

Dr. Geoff Norman: There are different kinds of checklists, content-specific checklists. Let’s look at all 12 leads, and let’s see about the PR interval and the ST segment, and on and on. As near as I can tell, content-specific checklists give you a small advantage in terms of diagnostic accuracy. 

Dr. Shreya Trivedi: This does remind me of how some of the most diligent internal medicine attendings and residents spend a lot of upfront energy creating their own dot phrases for common presenting symptoms, which walk through a checklist of all the pertinent positives and negatives so they don’t miss anything on their differential. And as one of our peer reviewers, Dr. Cindy Fang, pointed out, this can be a big deal for those cannot-miss diagnoses. Think about the time you’re called into a PEA arrest and you go through the 5 H’s & T’s. So checklists can definitely help us be proactive about our blind spots and make sure we’re not missing anything. Dr. Hwang also takes a proactive approach with a concept called a pre-mortem, a well-established practice, even in the business world, that he occasionally adapts on rounds with his teams.

Dr. John Hwang: So rather than doing an M&M, do like a pre-M&M: get a case where there is a working diagnosis but diagnostic uncertainty, or where the diagnosis is clear but there are management decisions that are thorny, and have people work on imagining that they’re going to present this case in M&M two weeks from now. What went wrong? So that is a practice that’s done in, I think, business and finance and stuff all the time. And it’s a good way to force people to think outside of, oh, it’s just going to work this way, or, this person clearly has this diagnosis. So I do this a lot, because most diagnoses in medicine are pretty boring. Someone comes in with a cellulitis. So to make it interesting on attending rounds, I say, okay, this person’s going to be presented at M&M in two weeks. And it forces them to think, oh, it turned out to be nec fasc, or it’s not cellulitis, it’s a DVT or something. And that framework works well for management too, just so that they can understand that they can make good decisions and it can still lead to bad outcomes. 

Dr. Shreya Trivedi: What I really appreciate about Dr. Hwang’s strategies to decrease error is that they are centered around building awareness, because, you know, “out of sight is certainly out of mind.” He has another strategy, a dream strategy at the moment, also built around increasing awareness and transparency, but this time around diagnostic uncertainty, especially during handoffs.

Dr. John Hwang: One of my pipe dreams was, in this handoff, in addition to the summary and the to-dos, there is a field called uncertainties, where anybody on the team, whether it’s an attending or a student or the day team or the night intern, can write a note and just be like, the eosinophil count is going up and I’m not sure why, or, we’re saying this tachycardia is from volume depletion, but we’re not totally sure. So that not only is there a shared model for what we think the patient has, but there’s also a shared model for what we’re not certain of about the patient.

Dr. Shreya Trivedi: Can you imagine what would happen if we openly shared with each other, when giving handoffs, what we are uncertain about? And if any makers of EPIC are listening and want to create an explicit field for uncertainties, that would be so great: it would create an expectation for people to speak up, right? Versus, if we don’t have a field to state our uncertainties explicitly, then what’s written in the chart gets taken as certainty or dogma, and then there’s a lot of inertia to challenge it. And I like where we are going with this, thinking of systems solutions to mitigate errors. I also put on my medical education hat and asked Dr. Norman what we can do at a system level with regard to curriculum and instructional design.

Dr. Geoff Norman: If you’ve ever done case-based learning, you say, well, let’s see, the patient has chest pain and he’s 45 years old, what could that be? And then you create a differential diagnosis, and you turn the page and you say, his father died of a heart attack at age 32, and his mother is still alive with Alzheimer’s. What does that tell us now? And you work through the thing again and again, it takes you three-quarters of an hour a case, and you feel like you’ve learned how to do clinical reasoning and clinical history, data gathering and all that stuff. And Henk Schmidt has shown, in a systematic review, delightfully, that you’re far better off to have a one-pager, just read the case and come up with something, and then go on to the next one. You can do one of those every five or ten minutes, and you get more information out of it. 

Dr. Shreya Trivedi: I love this, since I can think of so many conferences where we spend the whole hour dissecting one diagnostic challenge. Which is great, BUT what’s more helpful is exposing learners to a variety of situations where they can really learn the discriminating features of how this versus that presents, or the different ways the same underlying illness can present. And you may think, so what if our formal education doesn’t have that variety of practice, we see tons of cases in real life. Like every patient is another pop quiz or another case. Dr. Norman points out that even some of our everyday clinical practice settings are NOT set up to give us the best mixed batch of cases either.

Dr. Geoff Norman: The difficulty is that it’s not, again, it’s not just multiple cases. If you’re in family practice, you’re going to see far too many cases of otitis media. Far too much essential hypertension, and far too little neurology.

Dr. Shreya Trivedi: A similar problem rears its head if you practice only in subspecialty clinics.

Dr. Geoff Norman: And also the system is fighting against you, because what you have is specialty and subspecialty clinics where every patient you see today has multiple sclerosis. And so good teachers try to compensate for that. So as an educational curriculum developer, I would go out of my way to create mixed practice cases, interleaved practice cases, so that you don’t have to count on the really good teacher reminding you of something you saw two weeks ago, which is idiosyncratic to you and that teacher. You can engineer that from the outset.

Dr. Shreya Trivedi: This reminds me of all those who loved Anki cards in med school; maybe there is a way to operationalize interleaving, mixing up the types of cases, and adding some spaced repetition to actual clinical practice. Maybe instead of Anki cards with teaching points, we have decks grouped by the cases of chest pain I’ve taken care of, or the fatigue I’ve taken care of. And you go through them when you have downtime, so you are reminded of the discriminating factors between each.

Dr. Geoff Norman: What we’ve found in the education literature is you get pretty well as much learning out of a piece of paper describing the case as you do in seeing the patient. That’s Steve Durning at the Uniformed Services University of the Health Sciences. They’ve done studies comparing learning from written paper cases to computer simulations to standardized patients, looking down the road at OSCEs and stuff. No difference. 

Dr. Shreya Trivedi: This was an interesting point, because I think we often “ooh” or “ah!” that a curriculum has “high fidelity” simulation. Oh, they have a Harvey to hear heart sounds. But maybe what we should focus on instead is making experiential learning as accessible as possible. 

Dr. Geoff Norman: I did a review article on high fidelity versus low fidelity simulation in three different domains: heart sounds, critical care, and a cystoscopy simulator. Where you had your choice of the $3,000 cystoscopy simulator with the plastic urethra and the plastic bladder and blah, blah, blah, versus the one donated by McDonald’s, which consisted of a coffee cup and two straws. You got as much learning off the coffee cup and the two straws as you did off the simulator, because it was all about holding the cystoscope and maneuvering. And it really doesn’t matter whether those things are the color of a urethra or not, doesn’t matter. And basically, in all those domains, we showed that whatever you get with a high fidelity simulator, you get 95% of it with a low fidelity simulator. But you can use it for 10 hours at a time, instead of the 20 minutes allocated to you before the next student has to take over.

Dr. Shreya Trivedi: Yes, that point is well taken. Some of our current resources are costly, and it may be just as effective to go down the hallway of the cardiology unit and ask to listen to everyone’s heart sounds, comparing this heart sound versus that one. And the most important thing Dr. Norman points out is that, when you engage with all these practices or habits, do it with some type of reflection or feedback.

Dr. Geoff Norman: Casey Steinle said that the manager has not had 17 years of experience; he’s had one year of experience repeated 17 times. It’s a point. The point is it’s not simply a question of seeing more cases. That’s part of it. But seeing more cases with feedback, seeing more cases with some reflection about it, which is difficult in the environment you live in, particularly if you’re in the emergency department or in primary care.

Dr. Shreya Trivedi: That’s the difficulty in many clinical environments where there is a high cognitive load. And it makes me think: how can we get creative about getting that feedback without a lot of resources? Maybe it comes down to holding ourselves accountable and playing games or quizzing ourselves. Say we don’t look at the diagnosed murmur in the chart, and then after we hear it, we go back to the chart and reflect on why we got it right or wrong. And maybe it’s as simple as keeping a case log of all the presenting complaints you’ve seen, like weight loss or diarrhea without a clear cause, and then you get to reflect on what the pivot points were when you made a diagnosis with that presenting complaint.

Conclusion 

Dr. Shreya Trivedi: Okay! I am smiling from ear to ear thinking about how we can better ourselves in the diagnosis of patients. I am so grateful to Dr. Norman for helping us stay close to what the evidence does and does not tell us about diagnosis in clinical medicine. One of my biggest takeaways about mitigating diagnostic errors is that after all the data is gathered, which does require us to slow down and be thoughtful, it is really our knowledge base and our pattern recognition from strong illness scripts that we tap into. And to really build up those strong illness scripts, the money is going to be in practicing cases with feedback and reflection, however that may look with whatever mentors or resources you have.

Dr. Shreya Trivedi: I’d love to continue this discussion offline. What do you do in your practice to mitigate errors? Let’s continue the conversation and all help each other try to be the best we can for our patients. Leave a comment on the website in the show notes, tweet at us on X/Twitter, or reach out on Instagram or Facebook, or whatever platform you use. And that is a wrap for this episode! Thank you to our 5 peer reviewers for this episode: Dr. Andrew Parsons, Dr. Cindy Fang, Dr. Gurpreet Dhaliwal, Dr. Justin Choi, and Dr. Varun Kishor Phadke. Thank you to Dr. Alice Kennedy for helping produce this episode with me, what a pleasure that was! Thank you to Dr. Caroline Coleman for the accompanying graphics, and thanks to YOU for taking time for yourself to learn! This episode is funded by the Gordon and Betty Moore Foundation through a grant program administered by the Council of Medical Specialty Societies. If you found this episode helpful, please share it with your team and colleagues and give it a rating on Apple Podcasts or whatever podcast app you use! It really does help people find us! As always, we love hearing feedback; email us at hello@coreimpodcast.com. Opinions expressed are our own and do not represent the opinions of any affiliated institutions.

References