January 2026

Thinking and organising in systems

Subtitle: Reframing the long problem of learning from incidents

Author: Carl Macrae

Reference: Macrae, C. (2025). Thinking and organising in systems: reframing the long problem of learning from incidents. BMJ Quality & Safety.

Our opinion

Rating: 5/5

BMJ Quality & Safety has just published an editorial by Carl Macrae, of the University of Nottingham, on the strikingly poor quality of incident and accident analyses in healthcare compared with what is done (better) in (many) other industries. This is a key topic from both a theoretical and a practical standpoint, and the example of healthcare allows it to be re-examined for the benefit of all high-risk activities.


About the author, Carl Macrae

For the record, Carl Macrae, an organizational psychologist by training, is one of the rising global stars of safety thinking, with experience built across multiple sectors (aviation, maritime, industry, healthcare). Initially close to James Reason, Charles Vincent, and the resilience theorists, he completed his PhD ten years ago on the analysis of near-miss events in aviation and, after several influential books and publications, has become a global leader in systemic and organizational thinking applied to the analysis of incidents and accidents. He has also become a regular contributor at Foncsi and within the NeTWork community supported by the foundation, where he is frequently invited and listened to during in-depth analyses.


 

Our Summary

Learning from the analysis of incidents and accidents, or how to truly adopt a systemic perspective, based on the cautionary counter-example of healthcare

Learning lessons from safety-related incidents is one of the most common and widespread improvement strategies in industry, in services, and of course in healthcare.
It is also one of the most problematic. Health systems around the world devote enormous amounts of time and effort to investigating a large number of incidents, writing reports, and issuing recommendations. A wide range of policies, frameworks, tools, and methods surround and support these efforts. 
The remarkable scale of these activities comes with growing frustration about the limited return on these investigation investments: patients continue to be harmed by the same types of incidents, in the same ways, while investigations reveal the same issues and often repeat the same recommendations over and over again.
 

Although the situation is not of the same magnitude as in healthcare – where even major accidents are poorly analyzed – many industries observe similar patterns at the level of “everyday” incidents and accidents. The proposed approach therefore offers valuable lessons for all, based on an extreme case of a large system that is immature in terms of safety. 

 

Why do all these efforts lead to so little improvement? Why is it so difficult to learn from incidents?

Carl Macrae reports that these challenges in healthcare motivated a recent study (Bowditch et al., 2025) examining whether incident investigations effectively address the complex systemic issues and sociotechnical sources of risk that regularly threaten patient safety.

The short answer is: overall, they do not.

More specifically, this study covers a large sample of 300 investigations into the most serious incidents that occurred across 56 Australian healthcare services, with the aim of determining to what extent the contributing factors and resulting recommendations are aligned with the principles of systemic thinking: going beyond the last person involved with the patient, considering root causes and contributing factors, available resources, contexts, sociotechnical interactions, and the complex interdependencies between people and management.

The results of this study confirm fears that analyses remain limited to surface-level interpretations, focused on the first and most obvious cause.

Nearly one quarter of investigations (23%) did not identify any contributing factors. About 14% did not formulate any improvement recommendations. And even though most investigations attempted to go further, there is very little evidence of consistently sophisticated analysis or consistently robust improvement recommendations.

Among all contributing factors identified in the investigations, about half (47%) focused on the people closest to the event: their communication, their knowledge, their decision-making, etc. About 1 in 10 investigations (11%) considered only these elements.

Among the recommendations formulated, only 6% were deemed “strong,” meaning they involved durable changes in equipment or a redesign of care processes.
 

Worse still, these investigations all dealt with the most severe and dramatic incidents occurring in these healthcare systems – cases in which individuals died or suffered serious harm due to failures in the care provided to them. In other words, these represent the highest level of investigative activity: situations where systems, processes, and practices must be critically examined and subjected to in-depth reflection to identify every possible way of preventing such incidents from happening again. These findings echo many other publications showing that incident investigations do not reliably generate insights, engage relevant stakeholders, or improve patient safety.

 

What should be done?

How can we reconfigure the considerable investments made by the healthcare sector in investigation infrastructures? At this point, to echo what Robert Wears and Kathleen Sutcliffe – two other major thinkers in the field of safety – argued in their book published in 2019, it must be acknowledged that, after turning aviation into a “big brother” to emulate, the healthcare sector has often gone astray and imported only the chapter titles without the actual content.

Five recommendations

Bowditch et al. (cited above) offer five thoughtful recommendations for genuinely adopting industry practices in this area:

  1. Reliably use systemic sociotechnical analysis methods.
  2. Pay close attention to the concrete, practical work performed in the healthcare sector.
  3. Professionalize investigators by providing them with more advanced training and skills.
  4. Ensure the independence and impartiality of investigations.
  5. Aggregate safety data at the system level. 

These are commendable and urgent proposals, and different healthcare systems are at various stages of this gradual evolution. Yet while these changes are necessary, they may not be sufficient and may not solve some of the most fundamental problems and misunderstandings that can undermine systemic learning. To achieve meaningful progress, it will likely be necessary to rethink some of the foundational premises and assumptions that shape how we design, approach, and learn from safety-related events (Mesinioti et al., 2025).

 

An urgent need to reframe the logics and assumptions of investigation and of what is meant by systemic learning

There are at least three ways to fundamentally reframe the issue of learning from incidents, and these shifts in the underlying logic indicate how health systems could be reconfigured to learn more rigorously and systemically from past experiences.

1. Investigate systems, not incidents

First, and perhaps most fundamentally, the main focus of investigation must be reframed: the emphasis should not be on the incident itself, but on the underlying systems and interactions through which certain types of healthcare are delivered, organized, and made possible – or are disrupted and fail.
The specifics of an individual event are always important, especially for those directly involved and for the legal compensation they may be entitled to.
But for the purpose of learning and improvement, incidents are only a starting point from which one must rigorously investigate and critically examine how work is done and how it should be improved in a given area of healthcare. In other words, incidents are not only a window into the system but an entry point into the system and its risks.

Effective investigation and learning require diving – both analytically and literally – into the systems in which people work and in which care is delivered. This means that, instead of organizing the analytical and investigative infrastructure around incidents, attention and activities must be centered on the risk, reliability, and resilience of organizational systems.
 

Independent investigation bodies

An indicator of such a shift would be for healthcare to undergo a change in professional identity and skills: from healthcare professionals whose main concern is to manage and respond to incidents, toward healthcare professionals whose primary responsibility is to analyze and manage system safety and reliability – a highly valued professional identity in other high-hazard sectors. Examples include aviation and rail, which rely on independent investigation bodies (such as the BEA and BEA-TT), or the many industries that use independent inspectors to analyze the most serious cases (EDF, for instance).
None of this exists in healthcare, and there are almost no established independent investigation bodies supported at the sociopolitical level. Yet such “offices” would enable consistent systemic inquiry, providing both a mandate to adopt a systemic perspective and a protected space for learning.
To be precise, such bodies have begun to emerge recently in some healthcare systems, and it is striking that the two most important ones – in England and Norway – have quickly become the target of sustained attacks from political and institutional forces that seem to completely misunderstand their purpose. In a recent article (Macrae, 2025), Carl Macrae describes in particular how the newly created independent health accident investigation body in England (Health Services Safety Investigation Body – HSSIB), launched in 2023 as a major global innovation, was administratively reintegrated in 2025 under the control of national health authorities (the NHS).
These are only two examples (England and Norway), but they illustrate the political challenges and deep misunderstandings that can derail – or even destroy – the most important efforts to establish systemic learning approaches in healthcare.

These challenges also arise within healthcare organizations themselves. In other sectors, quasi-independent oversight and investigation functions sit at the core of each organization’s safety infrastructure, typically under the responsibility of a Safety Director who is a member of the executive board. These units, generally well-resourced and highly respected, have authority across the entire organization. This allows safety analysis and investigation to remain closely connected to organizational practices, while protecting these analytical and investigative activities – and the safety data collected – from operational and other pressures inherent in organizational life. Such protected institutional spaces dedicated to safety remain relatively rare in healthcare.

2. Systems are not concepts… they are realities to be analyzed with the right tools and at every level

Secondly, the way systemic factors and systemic risks are analyzed and understood needs to be reconsidered: systemic risks are not abstract, distant, or general concepts, but tangible, concrete, and practical mechanisms that materially shape local work and its context.
Systems are created and implemented by people – whether those who design, govern, and manage them, or those who carry out, implement, and perform a particular set of valued activities.
Thinking about risks in systems, and analyzing and studying systemic risks, always involves examining the practical sociotechnical work of people: how they perform the work of a system; the conditions under which they work within a system; and the technologies and materials that surround, support, and enable a system to function.

Taking this perspective seriously in the field has important consequences.
One of these is that the work of people who are removed from the “sharp end” should receive as much analytical and investigative attention as that of those closest to patients. The way leaders allocate resources, how standard-setting bodies define specific policies, and how managers or regulators choose to focus on certain issues while neglecting others are all nuanced and important aspects of practical work carried out in complex sociotechnical environments. This work deserves careful study, explanation, and often improvement – rather than being simply abstracted away as some distant, higher-order “systemic factor.”
Conversely, the practical work of people closest to the “sharp end” can itself be systemic: the way a particular task is carried out or how a specific device is used may form a stable, system-wide pattern – and therefore a systemic one – that represents a systemic risk. Thus, “systemic” problems do not necessarily exist at a “higher level” of the system: systemic risks are persistent patterns of practical sociotechnical activity that can be observed and managed in the real world, and they may occur at any scale within a healthcare system.
 

Confusing the analysis of systemic risk patterns with moving to higher levels of analytical abstraction can complicate efforts to identify and address persistent sources of risk within systems.

3. Rethinking how systems are designed, implemented, and maintained

Third, we must fundamentally rethink what it means to design, implement, maintain, and analyze systems.
One of the many differences between healthcare and high-hazard industries lies in the relatively limited attention that healthcare pays to the design and organization of work systems.
The tightly connected set of practices and technologies – carefully designed, rigorously specified, strongly supported, and continuously stabilized – that often forms the foundation of high reliability in other sectors is the exception rather than the rule in many areas of healthcare, where work can take the form of a loosely coordinated bricolage of improvised, situation-dependent performances. Clinical processes barely reach 80% reliability.
Given this variability, what can reasonably be learned from any attempt at systematic study of a phenomenon likely to change with each observation?
In practice, one of the first and most important questions facing any systemic analysis or investigation concerns the stability and generality of the system itself. More than any other sector, healthcare is an industry built piecemeal, sector by sector, and in constant evolution – and it ultimately uses this low level of systematization as a justification for not engaging in serious systemic work. But this should not prevent us from reaffirming the importance of designing reliable systems and adaptive processes intelligently from the outset.

 

Rethinking the practices and institutions of investigation and systemic learning

Rethinking these fundamental assumptions – following a logic centered on the analysis and improvement of organizational systems, rather than on the management of and reaction to individual incidents – opens the way to several avenues for reconfiguring practices and institutions in order to draw lessons from experience more effectively.
One of the most immediate reconfigurations concerns the way in which investigation activities are – or could be – used to initiate and coordinate change.
Incidents and other disruptive events provide a continuous stream of opportunities to explore potential gaps between plans and practices, as well as between expectations and reality. This is how enlightened and flexible learning cultures are created and maintained.
Similarly, safety recommendations can be used by investigators in a systemic and integrated manner: by developing sets of interdependent recommendations that address the interdependent aspects of a systemic risk, so that each recommendation complements and reinforces the others.

This approach is particularly evident in the work of established investigation bodies in other sectors, where integrated sets of recommendations may target, for example:
 

  • policymakers to develop a robust practical standard,
  • regulatory bodies to develop reliable methods for assessing that standard,
  • service providers to reorganize their practices in line with the standard,
  • and training organizations to improve training related to that standard.
     

Evaluating the effectiveness of systemic sets of recommendations requires a more systemic research approach than has been deployed so far, going beyond assessing the relevance of each recommendation taken individually.

 

Another area for reorganization concerns the coordination of efforts: monitoring, analysis, investigation, safety management, and safety governance must be organized around safety risks, not around safety incidents. Incidents provide a source of data that helps reveal the nature and origin of underlying safety risks within healthcare systems. But there are many other sources, ranging from patient feedback to clinical audits and clinical simulations.

The development of more integrated safety management systems is gaining momentum in the healthcare sector: it is based on a commitment to organize safety management efforts around strategically defined and systematically identified risks, and must not allow the organization’s resources and attention to be driven by a random stream of incidents tied to whatever went wrong during the week.

 

Finally, system reconfiguration becomes necessary whenever new services or technologies are implemented. These activities provide new opportunities to describe, design, articulate, and specify the systems and the sociotechnical work that underpin the delivery of healthcare. While artificial intelligence (AI) generates enormous enthusiasm due to its transformative potential in the health sector, one of the most promising secondary transformations may be sociotechnical in nature: to maximize the benefits and ensure the safety of AI, organizations and regulatory bodies will need to carefully analyze and design the complex sociotechnical systems that AI technologies will both disrupt and become integrated into. For example, the historical introduction of technologies such as CT scanners (computed tomography) led to reconfigurations of roles and structures in certain areas of care.

Likewise, the widespread adoption of new AI technologies could create countless opportunities to reexamine, rethink, and precisely define the work systems that ensure safe and reliable healthcare, while also shedding light on the complex interactions between people, technologies, data, organizational structures, and regulatory requirements.
Effective learning from experience and system analysis rely heavily on the prior work carried out to design and implement safe care systems.
It is essential to recognize, reframe, and reconfigure this foundational work across the entire healthcare sector if we truly want to think and organize in terms of systems, turn incidents into improvements, and transform moments of risk into sources of resilience.
 

 


Comments by Hervé Laroche from the Foncsi team

One limitation of this reflection is that it focuses more on the objects of learning than on learning itself. The text concentrates on the opportunities to understand, on what must be understood, and on what must then be implemented and done. But the learning process, which links these different elements, is not truly analyzed at the level of concrete actors: individuals, teams, healthcare organizations, etc. Which actors are expected to learn, and how do they learn? How are new content and new practices integrated, and by whom?
The only point mentioned concerns norms and rules coming from above, from a regulator. But in a complex field such as healthcare, with highly autonomous practitioners, that cannot be enough… It would be necessary to supplement this by specifying, on the one hand, the vectors of learning and, on the other hand, the mechanisms for retaining knowledge and practices.


 


To explore further

  • Bowditch, L., Molloy, C., King, B., Abedi, M., Jackson, S., Bierbaum, M., ... & Hibbert, P. (2025). Do patient safety incident investigations align with systems thinking? An analysis of contributing factors and recommendations. BMJ Quality & Safety
  • Card, A. J., Ward, J., & Clarkson, P. J. (2012). Successful risk assessment may not always lead to successful risk control: a systematic literature review of risk control after root cause analysis. Journal of Healthcare Risk Management, 31(3), 6-12
  • Macrae, C. (2025). Failing to learn? The NHS is losing its capacity for system-wide safety investigation. Journal of the Royal Society of Medicine, 118(10), 317-319
  • Mesinioti, P., Macrae, C., Sheard, L., Hampton, S., Louch, G., & O’Hara, J. (2025). Closing investigations: The role of national policy in shaping structural, organisational and relational constraints on learning from patient safety incidents. Safety Science, 192, 106999
  • Wears, R., & Sutcliffe, K. (2019). Still Not Safe: Patient Safety and the Middle-Managing of American Medicine. Oxford University Press