Principle 9. Emergence

Summary

System behaviour in complex systems is often emergent; it cannot be reduced to the behaviour of components and is often not as expected

Consider how systems operate and interact in ways that were not expected or planned for during design and implementation

"As systems become more complex, we must remain alert to the positive and negative emergent properties of systems and system changes."

(Image: Rafael Matsunaga, CC BY 2.0)

In the traditional approach to safety management (which may be characterised as Safety-I), the common understanding and theoretical foundations follow a mechanical worldview: a linear model in which cause and effect are visible and the system can be decomposed into its parts and reassembled into a whole. This model is the basis for the ways that most organisations understand and assess safety.

Almost all analysis is done by decomposing the whole system into parts and identifying causes by tracing chains of events. For simple and complicated (e.g. mechanical) systems, this approach is reasonable because outcomes are usually resultant and can be deduced from component-level behaviour.

As systems have become increasingly complex, we have tended to extrapolate our understanding (and our methods) from simple and complicated mechanical systems. We assume that complex system behaviour and outcomes can be modelled using increasingly complicated methods.

However, in complex sociotechnical systems, outcomes increasingly become emergent. Woods et al. (2010) describe emergence as follows: “Emergence means that simple entities, because of their interaction, cross adaptation and cumulative change, can produce far more complex behaviours as a collective and produce effects across scale.” System behaviour therefore cannot be deduced from component-level behaviour and is often not as expected.
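The idea that trivial local rules can produce collective behaviour that is not stated anywhere in those rules is often illustrated with cellular automata. The sketch below (an illustration chosen by the editor, not an example from the white paper) uses Conway's Game of Life: each cell follows the same simple neighbour-counting rule, yet the "glider" pattern travels across the grid as a whole, a property that cannot be read off from the rule for any single cell.

```python
# Minimal sketch of emergence via Conway's Game of Life.
# The rules are standard; representing the grid as a set of live
# (row, col) cells is an implementation choice for brevity.
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (row, col) cells."""
    # Count how many live neighbours each cell (live or dead) has.
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Survival: a live cell with 2 or 3 neighbours; birth: exactly 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider: after 4 generations the whole pattern has moved
# one cell diagonally -- a collective behaviour nowhere stated in the
# purely local rule above.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(r + 1, c + 1) for r, c in glider})  # True
```

The point of the analogy is only that component-level rules do not predict system-level behaviour; real sociotechnical systems add adaptation and cross-scale effects that no such toy model captures.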

From this point of view, organisations are more akin to societies than complicated machines. As in societies, adaptations are necessary to survive. Small changes and variations in conditions can have disproportionately large effects. Cause-effect relations are complex and non-linear, and the system is more than just the sum of its parts. Considering the system as a whole, success and failure are increasingly understood as emergent rather than resultant. Because variability and adaptation are necessary and there are interactions between parts of the system, variability can cascade through the system and combine in unexpected ways. Parts of the system that were not thought to be connected can interact, and catch us by surprise.

These emergent phenomena can be seen in the 1999 Mars Polar Lander crash, or in the 2002 Überlingen mid-air collision. In both examples, there were cross-adaptations and interactions between system functions, and major consequences. These effects cannot be captured by simple linear or sequential models, nor by the search for broken components. Further examples can be seen in stock market and crowd behaviour.

Emergence is especially evident following the implementation of technical systems, where there are often surprises, unexpected adaptations and unintended consequences. These force a rethink of the system implementation and operation. The original design becomes less relevant as it is seen that the system-as-found is not as imagined (see Bainbridge, 1983).

Emergence is reflected in systems theory, but less so in safety management practice, or management generally. As systems become more complex, we must remain alert to the adaptive and maladaptive patterns and trends that emerge from the interactions and flows, and ensure a capacity to respond.

Systems thinking and resilience engineering provide approaches to help anticipate and understand system behaviour, to help ensure that things go right. They have in common a requirement to go ‘up and out’ instead of going ‘down and in’, understanding the system-as-found (structure, boundaries, interactions) and work-as-done (adaptations, adjustments) before trying to understand any specific event, occurrence, or risk.

Practical advice

  • Go ‘up and out’ instead of going ‘down and in’. Instead of first digging deep into a problem or occurrence to try to identify the ‘cause’, look at the system more widely to consider the system conditions and interactions.
  • Understand necessary variability. Try to understand why and where people need to adjust their performance to achieve the goals of the organisation. Instead of searching for where people went wrong, understand the constraints, pressures, flows and adjustments. Integrate field experts into the analysis.
  • Make patterns visible. Look for ways to probe and make visible the patterns of system behaviour over time, which emerge from the various flows of work.
  • Consider cascades and surprises. Examine how disturbances cascade through the system. Look for influences and interactions between sub-systems that may not have been thought to be connected, or were not expected or planned for during design and implementation.

View from the field

Alfred Vlasek, Safety Manager & Head of Occurrence Investigation, Austro Control GmbH, Austria

“The modern ATM system is a highly complex environment. To assess any impact on safety in such systems, you have to understand – more or less – not only the components, but how they interact. Unfortunately, system interactions and outcomes are not always linear. Outcomes are often ‘emergent’ rather than ‘resultant’, and so they take us by surprise. For this reason, we need to address safety not only systematically but also in a systemic way – looking for desirable and undesirable emergent properties of the changing system. So we must adapt our safety processes to address this complexity. This does not mean that we stop using common methods (investigations, surveys, audits, assessments, etc.) but it does mean that we need to combine our safety data sources and supplement them with more systemic approaches that allow us – together with the field experts – to ‘see’ this emergence.”

Source: Systems Thinking for Safety: Ten Principles. A White Paper. Moving towards Safety-II, EUROCONTROL, 2014.

The following Systems Thinking Learning Cards: Moving towards Safety-II can be used in workshops, to discuss the principles and interactions between them for specific systems, situations or cases.
