From the Editor
“How could an idea that worked so effectively in so many situations fail to work in this one? The most likely answer is the simplest: Human behavior changed, but it didn’t change enough.”
Readings of the Week generally focus on psychiatric topics. But here’s a task for all of us in health care: improving the quality of care. This week, we look at a new essay by oncologist Siddhartha Mukherjee, the Pulitzer Prize-winning writer. In it, he discusses the success of checklists in reducing surgical complications in some places – but not in others. The above quotation comes from this provocative essay.
Why do checklists work some of the time? In this Reading, we consider the essay, and the larger questions it raises.
Complications and Checklists
“Surgical Checklists Save Lives — but Once in a While, They Don’t. Why?”
The New York Times Magazine, 9 May 2018
Late last year, I witnessed an extraordinary surgical procedure at the Cleveland Clinic in Ohio. The patient was a middle-aged man who was born with a leaky valve at the root of his aorta, the wide-bored blood vessel that arcs out of the human heart and carries blood to the upper and lower reaches of the body. That faulty valve had been replaced several years ago but wasn’t working properly and was leaking again. To fix the valve, the cardiac surgeon intended to remove the old tissue, resecting the ring-shaped wall of the aorta around it. He would then build a new vessel wall, crafted from the heart-lining of a cow, and stitch a new valve into that freshly built ring of aorta. It was the most exquisite form of human tailoring that I had ever seen.
The surgical suite ran with unobstructed, preternatural smoothness. Minutes before the incision was made, the charge nurse called a ‘time out.’ The patient’s identity was confirmed by the name tag on his wrist. The surgeon reviewed the anatomy, while the nurses — six in all — took their positions around the bed and identified themselves by name. A large steel tray, with needles, sponges, gauze and scalpels, was placed in front of the head nurse. Each time a scalpel or sponge was removed from the tray, as I recall, the nurse checked off a box on a list; when it was returned, the box was checked off again. The old tray was not exchanged for a new one, I noted, until every item had been ticked off twice. It was a simple, effective method to stave off a devastating but avoidable human error: leaving a needle or sponge inside a patient’s body.
In 2007, the surgeon and writer Atul Gawande began a study to determine whether a 19-item ‘checklist’ might reduce human errors during surgery. The items on the list included many of the checks that I had seen in action in the operating room: the verification of a patient’s name and the surgical site before incision; documentation of any previous allergic reactions; confirmation that blood and fluids would be at hand if needed; and, of course, a protocol to account for every needle and tool before and after a surgical procedure.
So begins a short essay by Dr. Mukherjee.
In it, he considers the success of checklists at reducing surgical complications, noting: “The mortality rate fell to 0.8% from 1.5%, and surgical complications declined to 7% from 11%.” The approach succeeded in diverse settings:
- In South Carolina, the 30-day mortality for certain surgical procedures fell to 2.8%, down from 3.4%.
- In the Netherlands, checklists (used throughout the surgical admission) resulted in “a striking decrease in complications and mortality.”
But when researchers tried to use this approach in the developing world – applied to birth practices – the results weren’t impressive. In an effort in the Uttar Pradesh state of India, researchers developed a 28-item checklist with an eight-month peer-coaching program. The result? “There was no discernible impact…”
“How could an idea that worked so effectively in so many situations fail to work in this one? The most likely answer is the simplest: Human behavior changed, but it didn’t change enough.” To that point, he notes that the intervention did change behavior to a degree: coached attendants washed their hands more often (35% vs. 0.6% in the uncoached group) and measured the newborn’s temperature more often (43% vs. 0.1%). Yet there was no real difference in outcomes.
He then makes a couple of points:
On too much knowledge. “Every intervention cannot be tested in every context — that strategy would bust the bank — and so we use our best judgment to extend the data from one study to another.” He notes that a similar study in Namibia had worked. “What if, rather than an absence of knowledge, there was the perception of too much local knowledge: What if birth attendants did not bother using the checklists because they thought that they already knew what to do? Were there other habitual practices in Uttar Pradesh that made ‘checklisting’ ineffective? And how do we learn to account for such local effects when shifting a medical intervention from one context to another?”
On human behavior. He goes on to argue that: “human behavior remains an uncharted frontier for medicine.” “In recent times, the imagination of experimental medicine has been dominated by mechanisms to alter human physiology. But these new drugs and treatments won’t work if we don’t simultaneously target human behavior: Our latest cancer immunotherapies or the newest cardiac drugs would be rendered useless if the patients don’t turn up for their infusions on time or if the nurses administer them to the wrong patients or doctors fail to note allergic complications.” He notes that 35% of the attendants did handwashing in the early months, but that proportion dropped to 12% after the supervision and coaching ended.
He continues: “We might describe this situation as a ‘behavioral relapse,’ akin to the physiological relapse of cancer or of an immunological illness. Unlike cancer, though, behavioral relapse has no measure: no marker, no biopsy, no powerful predictive test; it remains undetectable by most methods. As much as we need experimental tools to survey human physiology, doctors need experimental tools to understand, survey and change medicine’s least familiar frontier: human behavior.”
A few thoughts:
- This is a great essay.
- The data is clearly mixed.
On the one hand, he notes the success of checklists in surgical settings in western countries. On the other hand, he describes the failure of checklists in obstetrical settings in India. It would be easy to attribute this outcome difference to an issue of the developed world versus the developing world. But it’s more complicated: the Namibia data was good, after all.
- Surgical checklists have a mixed record themselves.
Yes, there was robust data in the original study – published in The New England Journal of Medicine, no less. But an attempt to introduce such measures into Ontario hospitals was a dud.
In July 2010, the Ontario Ministry of Health mandated public reporting of adherence to surgical checklists. David R. Urbach and his colleagues compared the outcomes of surgical procedures in the three months before and after checklist adoption, drawing data from 101 hospitals.
“In contrast to other studies, our population-based study of surgical safety checklists in Ontario hospitals showed no significant reduction in operative mortality after checklist implementation. Adjusted operative mortality was 0.71% before and 0.65% after checklist introduction. Checklist use did not result in reductions in risks of surgical complications, emergency department visits, or hospital readmissions within 30 days after discharge.” This study was also published in The New England Journal of Medicine.
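The contrast between the two studies is easier to see when the quoted mortality rates are converted into absolute and relative risk reductions. The framing below is our own back-of-the-envelope illustration; the rates themselves come directly from the figures quoted above.

```python
# Compare the headline mortality effects quoted in the two NEJM studies.
# Rates are percentages taken from the text; absolute reduction is in
# percentage points, relative reduction is a percent of baseline risk.

def risk_reduction(before: float, after: float) -> tuple[float, float]:
    """Return (absolute, relative) risk reduction for two rates (%)."""
    absolute = before - after            # percentage points
    relative = absolute / before * 100   # percent of the baseline rate
    return absolute, relative

gawande = risk_reduction(1.5, 0.8)    # original checklist study
ontario = risk_reduction(0.71, 0.65)  # Urbach et al., Ontario

print(f"Original study: {gawande[0]:.2f} pp absolute, {gawande[1]:.0f}% relative")
print(f"Ontario study:  {ontario[0]:.2f} pp absolute, {ontario[1]:.0f}% relative")
```

The original study’s roughly 47% relative reduction dwarfs the Ontario change of under 10%, which helps explain why the latter did not reach significance.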
Why did checklists work so well in the initial study but less well in Ontario? The Urbach et al. paper ran with an editorial written by Harvard University’s Dr. Lucian L. Leape. In “The Checklist Conundrum,” he offers a few explanations. He starts with this one: “it is important to state the obvious: it is not the act of ticking off a checklist that reduces complications, but performance of the actions it calls for.”
As well, he notes that hospitals require help in implementing checklists and that such processes take time. He also questions the compliance rates themselves. “The likely reason for the failure of the surgical checklist in Ontario is that it was not actually used. Compliance was undoubtedly much lower than the reported 98%.” He doesn’t mince his words: “If a checklist is required, the person responsible for documentation will ensure that all boxes are ticked.”
Needless to say, the paper and its editorial have sparked an ongoing debate. The New England Journal ran some thoughtful letters to the editor.
Here are the links –
For the study:
For the editorial:
For the letters:
- So, to summarize the results of the different experiments: checklists work and save lives, except when they don’t.
- Are there any larger lessons to draw? Dr. Benoit Mulsant, the Chair of the University of Toronto’s Department of Psychiatry, makes the following comment:
I wish we would do a better job teaching psychiatry residents the modern understanding of the determinants of human behavior. Currently, this expertise is in business schools (marketing, behavioral economics), engineering (human factors engineering), and psychology (cognitive psychology). Hopefully, our QI track will attract some of our graduates who are interested in implementation science.
- Do you have thoughts on this week’s selection? The Reading of the Week occasionally publishes short letters to the editor.
Reading of the Week. Every week I pick articles and papers from the world of psychiatry.