Perspective: Patient Safety—Beyond the "Easy" Phase

By Paul M. Schyve, JCAHO

No one disputes that far too much preventable harm comes to patients in our health care system. In fact, many nations worldwide have reluctantly agreed that preventable harm occurs in their systems also. Patient safety is now a universally recognized goal—and challenge—that has led to national and international efforts to advance the science of safety and to identify safe practices for implementation.

In the United States, federal and state government agencies (such as the Agency for Healthcare Research and Quality), private-sector accrediting bodies (such as the Joint Commission on Accreditation of Healthcare Organizations), and other quality improvement organizations (such as the Institute for Healthcare Improvement) have instituted safety programs. Other countries have begun their own programs (such as the National Patient Safety Agency in England and Wales), and worldwide programs have been initiated through the World Health Organization's World Alliance for Patient Safety.

Meanwhile, individual health care organizations have instituted their own learning and implementation programs in patient safety, often building on national programs in their countries. McCarthy and Blumenthal recently reviewed six case studies of such organizations. Selected from 10 more detailed case studies published by The Commonwealth Fund this month, the stories are inspiring. They demonstrate that, first, there is a science of patient safety; second, there is an array of tools that can be used to understand and improve safety (e.g., root cause analysis and failure mode and effects analysis); and, third, the science and tools have been translated into successful practices that have resulted in improvements in patient safety.

This is good news: the science, patient safety tools, and our knowledge of safe practices have advanced rapidly since the 1999 Institute of Medicine (IOM) report on error in health care. Unfortunately, as difficult as this process of change has been, it is the "easy" phase. McCarthy and Blumenthal's case studies demonstrate the critical importance of a safety culture in making patients safer. It turns out that developing and maintaining a safety culture is the "hard" phase—the real, underlying challenge to successfully applying safety science and safe practices throughout health care.

Why is this the "hard" phase? It relates to the nature of culture in general, and of a safety culture in particular. A culture is defined by the customary beliefs, values, and behaviors—including traditions—shared by members of a group. These beliefs, values, and behaviors are intertwined, often serving to justify and reinforce each other. It is difficult, therefore, to change one (e.g., behaviors) without making corresponding changes in the others. And, tradition in health care is strong: we pride ourselves on participating in the traditions, for example, of pharmacy, nursing, and medicine. These traditions are established within our professional education programs and transferred from generation to generation. In fact, these traditions—beliefs, values, and behaviors—literally become part of our personal identities as pharmacists, nurses, and physicians.

No wonder changing the existing culture is hard: we are asking health care professionals to change not only their traditional ways of thinking and doing but their images of themselves. That is why many health care organizations, after translating some of the science and tools into safe practices and implementing them, have begun to feel they have "hit the wall" of culture change. Further changes to advance patient safety seem increasingly difficult to make and sustain.

Why must the culture of health care change if we are to advance through the "hard" phase of patient safety? With regard to health care professionals' values, there is one that need not be changed, but must be fully understood: first, do no (preventable) harm. The word "first" is a substantive value judgment. Safety is a prerequisite to patients'—and society's—belief that the health care system will help, not harm, them. It is, therefore, the "first among equals" of the IOM's six aims for the health system: safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity.

But the traditional health care culture assumes that safety is an emergent property of a system composed of committed, competent individuals. There is no need to focus on safety, only on the commitment and competence of health care professionals; safety takes care of itself. In this tradition, when a patient suffers harm, it is attributed to one or more people who were not committed or competent, to the limits of science, or to an idiosyncrasy of the patient beyond the professional's control.

In the new culture of safety, safety is not automatic, nor does it result solely from the commitment and competence of health care professionals. Rather, it is a goal that requires constant, conscious attention: systems thinking, measurement, intensive study, and action, both prospectively before things go wrong and retrospectively after things go wrong. It is easy, and pleasant, to think about the ways in which we are able to help and care for those who are ill and suffering; it is hard, and disturbing, to think about how we may unintentionally harm them. A culture of safety requires us to continuously focus on the hard thoughts.

Values are tied to beliefs, and the belief that a committed, competent individual plays the critical role in providing safe and high-quality care is a premise of professionalism. Coupled with that belief, especially for physicians, has been the belief that this committed, competent individual's independence ensures the right thing is done for his or her patient.

However, these beliefs contrast with those that derive from the science of safety. First, cognitive science teaches that the phrase "to err is human" reflects reality. Therefore, depending entirely on committed, competent individuals—whether they are acting "independently" or not—will virtually ensure that health care errors occur. Second, the systems in which the individual works—the processes, people, machines, and information—can force, enable, permit, or prevent human errors, or they can magnify or mitigate the effect of errors that occur. To protect our patients from errors, we must redesign the systems in which we work. These system redesigns will contribute more to patient safety than each professional striving to be more committed and competent. Third, systems affect every patient, not just the patients of one professional. So, to "first, do no harm" to one's own patients depends upon being personally involved in redesigning the systems that will protect everyone's patients.

That is, to fulfill my ethical obligation to my patient requires me to improve safety for all patients. This is a new belief, based on new science, that results in a shift of values from an almost exclusive focus on one's own patients to a shared focus on the system in which all patients are placed, from a fierce protection of independent decision-making to contributing one's knowledge, skills, and judgment as part of a team that cares for the patient and redesigns systems to enhance safety.

These new beliefs, based on the science of safety, provide rationale for new behaviors: structuring communication protocols (e.g., read back of oral orders) to enhance teamwork; always following certain processes that are critical to safety (e.g., marking the site and a time-out before surgery) even if the prevented error is rare; following (rather than developing workarounds for) forcing functions in systems designed to prevent human error; learning and using tools for systems analysis and redesign; and reporting errors that occur so that they can be the source of learning.

Some who read this may be wondering whether the health care professional—the human in the system—has a central role in patient safety. That is, does the new safety culture devalue professionals' roles? In fact, while the redesigned systems can build safety into the "blunt end" of the system, the physician, nurse, pharmacist, and other health care professionals have a new role. It is to create safety at the "sharp end"—in their interactions with individual patients. This is where individual commitment and competence most count.

Cognitive science is teaching us that humans have skills that cannot be replaced by today's systems, including information technology. Humans have the ability to rapidly process large amounts of (often incomplete) data in the nonconscious mind and reach conclusions that, more often than not, are correct. Because this processing is nonconscious, a person may, at least initially, be unable to even fully explain their conclusion. This is what happens when a nurse, going beyond predetermined criteria, senses that a patient is at risk and calls for a rapid response team, or when a physician recognizes that a patient's response to an evidence-based medication protocol is not as expected and intervenes to change it. Gladwell and others have described this phenomenon and what psychologists and cognitive scientists have learned about it, including the fact that practice improves the accuracy of these "blink" responses. [1]

In the new safety culture, health care professionals' obligations are to:

  • be committed and competent, but recognize that they will still make mistakes;
  • be active participants in reporting and studying errors and in redesigning systems to prevent them;
  • commit to improve the safety of all patients, not just their own;
  • train, through practice (e.g., through simulation), to make better judgments at the "sharp end" of patient care—the "blink" response and nonconscious processing; and
  • be vigilant.

Why vigilance? First, as discussed above, safety must be a continuous, conscious focus in a safety culture; it cannot be assumed. Second, the risks from latent system failures often are difficult to recognize until the failures align and a patient accident occurs; even rigorous prospective risk analysis and reduction will not find them all. [2] And third, whenever systems are changed—whether through conscious redesign or unplanned workarounds—there will be unexpected consequences. The more complex the system (and health care systems are very complex) and the more multiple systems interact (and health care systems are open systems), the less we are able to predict all the consequences of planned change—both in the system we are changing and in the systems with which it interacts. Unfortunately, vigilance is neither easy nor pleasant, and itself creates fatigue.

A culture of safety demands much of us: changes in our beliefs, values, and behaviors. But we will be unable to fulfill our obligation to "first, do no harm" to our patients unless we meet the challenge. This is the "hard" phase of patient safety.

Paul M. Schyve, M.D., is senior vice president of the Joint Commission on Accreditation of Healthcare Organizations.

References
[1] M. Gladwell (2005). Blink: The Power of Thinking Without Thinking. New York: Little, Brown and Company; A. Dijksterhuis et al. (2006). "On Making the Right Choice: The Deliberation-Without-Attention Effect." Science 311, 1005–1007.
[2] J. Reason (1990). Human Error. Cambridge: Cambridge University Press.
