Human Factors Strategy (OGHFA BN)
1 Background and Introduction
Air transport is the safest way to travel. Global safety standards and a harmonised approach formulated cooperatively by governments, regulators, manufacturers, industry associations and operators have been successful in reducing the rate of incidents and accidents. These efforts, however, cannot stop. In a constantly changing world, aviation operations must continue to adapt so that they adequately address emerging issues and apply lessons learned to strengthen the industry’s defences against accidents.
Since human error is a major contributor to aviation incidents and accidents, human factors must be an important focus of any aviation safety strategy. Whether for off-line safety analysis or within real-time operations, there is always a need to improve understanding of human performance in an operational context. Human factors provides a universal basis to tie all the ingredients of risk management together into a meaningful whole.
This briefing note explains the strategic importance of including human factors in aviation safety programs. It covers the concept of resilience to errors and the need to maintain a current view of safety by challenging both operational assumptions and legacy safety principles.
Professor René Amalberti offered the following three answers to the question of why human factors must be a focus if safety programs are to be effective:
- Reducing human performance variability by standardising behaviors and thereby increasing overall system predictability is the main goal of aviation human factors strategies. This must be accomplished in an aviation environment that is characterised by:
- High human variability. There is a large number of pilots with low behavioral predictability and a wide range of experience, cultures and error propensity.
- High organisational variability due to differences among airlines in terms of such factors as size, maturity, operational styles and subcontracting strategies.
- Low aircraft variability. As a result of advances in materials, components and design, equipment failure is relatively rare and failure modes are highly predictable.
- The basic field of human factors has matured over time, and the human factors focus on aviation has compiled a long list of achievements. The cumulative effect of the application of these human factors successes to aviation has been well-documented. There is also a continuing human factors research effort that has created a strong inventory of new potential countermeasures. Aviation human factors has advanced through the following phases:
- Beginning in the 1920s and continuing to the present in modified form, human factors has focused on pilot selection as a means to control variability and increase safety.
- The advent of World War II added a basic ergonomics focus on the design of the human interface with equipment that continues to this day.
- The introduction of the first big jets in the early 1960s was accompanied by the emergence of high-fidelity, motion-based simulator training that supported practice and proficiency on manoeuvres and recovery procedures that are too dangerous to train in aircraft during flight.
- A focus on non-technical skills and crew resource management (CRM) training emerged in the early 1980s.
- Highly automated flight decks and flight envelope protection also were introduced in the late 1980s.
- Human factors regulations (e.g., flight crew licensing, operations, maintenance, design) received added impetus in the early 1990s.
- Beginning in the mid-1990s, enforced supervision, flight data recorder analysis, line oriented safety audits (LOSA) and similar programs became commonplace.
- Safety levels increased but with apparent diminishing returns, suggesting the need for a new coupling paradigm to emerge from a new industrial cycle. The progression of safety efforts within an industrial cycle can be characterised by time period as follows:
- “Heroic times” have no distinct focus on safety except that public order cannot be disrupted.
- A “time of hope” develops with the advent of quality management techniques. Accountability, however, is still low, and risk awareness in operations is only gradually emerging.
- A “time of justification” cycle boosts risk management prompted both by safety concerns and the growing scrutiny of aviation safety by the news media and legal authorities.
- “Times of diminishing returns” tend to signal the end of one industrial cycle and a readiness for the emergence of the next.
1.1 Human factors priorities evolved over time
With respect to aviation human factors, there has been a distinct shift in focus and priorities over time:
- From the 1940s through the 1970s, reducing workload was a priority.
- From the 1970s through the 1990s, increasing situational awareness was a driver.
- From the 1990s to the present, there has been a continuous promotion of organisational safety, as well as the unabated supervision and auditing of front-line human errors through dedicated programs such as LOSA.
- The future appears ready for a focus on improving corporate resilience by removing weaknesses through process-oriented change management.
Human error has remained at the forefront of aviation psychology studies through the following progression:
- Before the 1990s, the focus was on certification requirements and on the first development of international voluntary systems.
- The early 1990s saw requirements for CRM and basic human factors focusing on crew synergy and safety culture.
- In the late 1990s, new standards in Joint Aviation Requirements — Operations (JAR-OPS) led to mandatory categories of CRM training for flight and cabin crew, mechanics and air traffic controllers.
- The late 1990s also saw the need for dedicated human factors evaluations — the non-technical skills evaluation (NOTECHS) method.
- Starting in the 21st century, more demand has been expressed for controlling and mitigating human error beginning in the aircraft design phase, as well as for operationally auditing human errors in flight through LOSA and other formal, systematic flight analyses.
1.2 Views of human error
The ways in which error is characterised, examined and avoided have changed dramatically, based on the work of Dr. James Reason (1997). Reason postulates that there are two typical ways of looking at human error. One is the “person” approach, which focuses on the errors and violations of individuals with remedial efforts directed at the human operators themselves. The other is the “systems” approach, which traces causal factors back into the system as a whole with remedial efforts directed at situations, error defenses (or their absence) and organisations.
Recently, two additional lines of examination have emerged: looking at system dynamics and looking at the root-cause factors of aviation accidents and incidents. In this context, Amalberti and Aslanides observed that the better the safety, the more the violations. When comparing errors vs. violations in the French air force, they noted fewer errors and more violations after the introduction of human factors (1998-2002) compared with before (1992-1993). There seems to be a systematic migration toward the boundaries of the acceptable work space as working envelopes are defined and protected in stratified areas of operations, with well-defined and clearly delineated graceful degradations.
Where are we headed? No system stands on its own. Any system is a mosaic of points of view seemingly acting together but with the potential to hinder each other when they are exceedingly one-sided in their optimization. It is best to assume that there is no such thing as a stable system. All companies migrate from one state to another.
Prevention requires monitoring actual practice and the way it deviates from intended practices before the first incidents or accidents occur. The culture shared orally by all operators must be examined to identify migrations from prescribed procedures. Care must be exercised to avoid unnecessary complexity in safety procedures because complexity is typically the enemy of reduced risk. However, ongoing safety improvements must sometimes challenge our references. It is essential to take the real world into account, rather than focusing on a theoretically safe model that is likely unattainable.
New safety problems are likely to be complex and have root causes buried deep in the system. In spite of being in an unprecedented era of aviation safety, there may be an accelerating safety fragility arising from unusual threats, such as terrorism, as well as factors such as a growing insurance crisis, trends toward low-cost operations with economic challenges, looming bankruptcies, salary cuts, labor concerns, pilot heterogeneity, reduced personnel training, increased training pressures, new international organizations with their own agendas, regions with explosive traffic growth, and increasing numbers of small, marginal airlines.
Ultimately, safety is a chain of people, procedures and technology. This requires a safety management system (SMS) based on an effective safety culture that is mature and well established within each organisation and the industry as a whole. This is why the “person” approach is of limited benefit. Progress from linear expansion of current strategies cannot address all safety needs. Hence, short-term priority should be given to:
- Dynamic system adaptation;
- Specifying sources of violations and control margins;
- Going beyond the cockpit to encompass maintenance, cabin crew and dispatchers; and,
- Recognising the growing need for dynamic work organisations and the need to share feedback from experience at collegial or horizontal levels rather than systematically at supervisory or vertical levels.
Likewise, long-term priority should be given to systematic approaches that are global in nature and take into consideration the likely next major technological advances such as air traffic management (ATM) and the electronic flight bag (EFB).
Organisational resilience is a new challenge with pre-accident activities focused on anticipating problems, accepting a wide range of variability, adapting to unstable and surprising environments, and designing error-tolerant human/technical systems. Even with the best prevention efforts, accidents and incidents will still occur. It is therefore also important to focus after an accident not only on coping with the accident as an organisation but also on redesigning the system and learning from the experience.
2 The Limits of Expertise
Operations, incident and accident investigation reports identify errors made by the crew and may even discuss human performance issues that could have contributed; but, typically, they stop short of explaining what made skilled pilots vulnerable to those breakdowns in human performance.
Woods and Cook refer to the concept of limited rationality, whereby experts typically do what seems reasonable to them at the time, given their understanding of the circumstances. According to Dismukes, Berman and Loukopoulos (2007), the vulnerabilities of human cognitive processes — such as attention, vigilance, memory and decision making — must be seen in the appropriate context. One reason the vulnerability of experts such as pilots to error is misunderstood is that the presence and interaction of the factors contributing to error are probabilistic rather than deterministic. To a large degree, the errors made by experts are driven by four factors that shape their probability of occurrence (see the sketch following this list):
- Characteristics and limitations of human cognitive and perceptual processes.
- Events in the environment in which tasks are performed.
- Demands placed on human cognitive processes by task characteristics and environmental events.
- Social and organisational factors that influence how a representative sample of experts would typically operate in particular situations.
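To make the probabilistic character of this confluence concrete, the following minimal Python sketch simulates how independently varying factors occasionally align into an error-prone situation. The factor names, probabilities and alignment threshold are illustrative assumptions for this briefing note, not values from Dismukes, Berman and Loukopoulos.

```python
import random

# Hypothetical per-flight probabilities that each factor category is adverse.
# These numbers are illustrative assumptions, not measured rates.
FACTORS = {
    "cognitive_limitation": 0.05,     # e.g., attention or memory lapse
    "environmental_event": 0.10,      # e.g., weather, traffic, distraction
    "task_demand_spike": 0.08,        # e.g., high workload, time pressure
    "organisational_pressure": 0.04,  # e.g., schedule or competing goals
}

def simulate_flights(n_flights, seed=42):
    """Return, for each simulated flight in which three or more adverse
    factors aligned, the specific combination that occurred."""
    rng = random.Random(seed)
    confluences = []
    for _ in range(n_flights):
        active = frozenset(f for f, p in FACTORS.items() if rng.random() < p)
        if len(active) >= 3:  # errors become likely only when factors align
            confluences.append(active)
    return confluences

events = simulate_flights(1_000_000)
print(f"{len(events)} confluences in 1,000,000 flights")
# Each event arises from a different, essentially random combination of
# factors, which is why each accident is unique and hard to anticipate.
for combo in sorted(set(events), key=sorted):
    print(sorted(combo))
```

Running the sketch yields a small number of confluences per million flights, each produced by a different combination of factors, which is consistent with the observation below that each accident is unique and difficult to anticipate.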
Similar to the situational examples provided in this Operator’s Guide to Human Factors in Aviation (OGHFA), Dismukes, Berman and Loukopoulos (2007) developed an approach that makes pilot error in their 19 accident reviews far less mysterious. Even though error can never be completely eliminated, understanding the interactions among these four factors provides resilient ways to prevent its recurrence.
Throughout their book, Dismukes, Berman and Loukopoulos repeatedly refer to the limits of expertise. For example, they say that crew performance, even by highly experienced pilots, cannot be expected to be always reliable “under conditions of high workload, time pressure, stress, inadequate or confusing information, perceptual and cognitive limitations, inadequate training and competing organisational goals.”
In each OGHFA situational example, imperfect conditions and attitudes combine to create problem scenarios that are recovered from with varying degrees of success. Elimination of the underlying situational causes of human factors problems might have prevented accidents or incidents. But with their accident reviews, Dismukes, Berman and Loukopoulos assert that “the largely random and complex confluence among situational factors, organizational factors and crew errors is one of the reasons each accident is unique and difficult to anticipate.” Performance reliability is an important characteristic summarised by Amalberti: all pilots will commit errors, ranging from small mistakes to serious errors, under circumstances that are often seemingly benign. These situations must be promptly detected and corrected to avoid further aggravation, and they must be fully understood to avoid repetition. Remedial actions often involve improving procedures, training or design.
It may be necessary to improve the approach to error control by understanding how people create safety and how safety efforts can, at times, be under pressure in resource-limited systems that pursue multiple competing goals. This is the new view of human error in which systems are at fault, not necessarily the humans who actually commit the error. Errors can be seen as symptoms of factors such as legal constraints, regulations, economic constraints, culture and time pressures. In the new view of human error, Dekker (2006) suggests that many compensatory mechanisms have been used successfully to enhance safety, including:
- Developing memory aids.
- “Buying time” to relieve pressure.
- Employing buffers, routines, heuristics, tricks, double-checks and shortcuts to simplify a task.
- Looking for feedback loops and feed-forward mechanisms to increase situational awareness.
- Developing fall-back procedures so that contingencies involve less uncertainty.
- Anticipating forms and pathways that lead to failure.
- Tailoring tasks more closely to capabilities.
The accidents reviewed by Dismukes, Berman and Loukopoulos had the following six “commonality themes,” defined in terms of both the crews’ actions and failures to act, and the situations that confronted them:
- Inadvertent slips and oversights while performing highly practiced tasks under normal conditions.
- Inadequate execution of highly practiced normal procedures under challenging conditions.
- Inadequate execution of abnormal procedures under challenging conditions.
- Inadequate response to rare situations.
- Judgments in ambiguous situations that hindsight proved wrong.
- Deviation from explicit guidance or standard operating procedures (SOPs).
Human cognitive variability, task demands, environmental events and social, cultural and organisational factors interacted in many of the accidents reviewed by Dismukes, Berman and Loukopoulos. Several specific patterns of interaction were highlighted as being strongly associated with the commission of pilot errors:
- Issues dealing with concurrent task management and workload.
- Situations requiring very rapid response.
- Plan continuation bias — the tendency to continue with the original plan even as conditions change.
- Equipment failures or design flaws.
- Misleading or absent cues that were needed to trigger correct responses.
- Inadequate knowledge or experience provided by training and guidance.
- Hidden weaknesses in defences against error.
Because of the difficulty in implicating some of these underlying factors with absolute certainty, they often do not appear as causal or contributing factors in official accident reports. This, together with the fact that human behavior is unpredictable and probabilistic, contributes to a prevailing belief that human factors is a “soft science.” Human factors will never be as deterministic as many of the physical sciences, but research clearly shows the benefits of viewing systems and organisations, as well as people, according to a human factors model.
Although crews will always make errors, there is clear evidence from accident statistics, line observations and research studies of the benefit of the human factors-based measures that have been instituted. Aircraft, systems and equipment, documentation, procedures and training that are designed with human factors inputs have been successful in limiting the number and effects of errors, thereby making the whole system more resistant to failures, errors and unexpected events. Current flight procedures have been enhanced to provide better responses to a growing variety of occurrences and to promote more effective systems monitoring. Aircraft systems (e.g., airborne collision avoidance, ground-proximity warning, controller/pilot data link, fly-by-wire, full authority digital engine control, engine indicating and crew alerting, automated flight control, autobrakes) have evolved to avoid and tolerate error, and thereby protect against undesirable outcomes. Recent years have also seen the emergence of methods and tools to improve crew resource management (CRM) and to provide feedback from operational experience (e.g., threat and error management, flight data monitoring, aviation safety action programs, operations safety audits, SMS, fatigue and alertness management systems). All of these approaches attempt to capture the context in which events occur in order to understand the confluence of causal factors, to detect accident precursor conditions in advance, and to understand why and how vulnerabilities emerge, develop and grow.
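As a concrete illustration of precursor detection, the sketch below flags unstabilised approaches from simplified flight-data snapshots. The ApproachSnapshot fields and the numeric thresholds are assumptions made for illustration; actual flight data monitoring programs use operator-defined stabilised-approach criteria and far richer data.

```python
from dataclasses import dataclass

@dataclass
class ApproachSnapshot:
    """Simplified flight-data sample taken crossing 1,000 ft above the runway."""
    airspeed_dev_kt: float   # deviation from target approach speed, knots
    sink_rate_fpm: float     # rate of descent, feet per minute
    gear_down: bool
    landing_flap_set: bool

def is_unstabilised(s):
    """Flag a stabilised-approach exceedance. The thresholds here are
    assumptions; each operator defines its own limits."""
    return (abs(s.airspeed_dev_kt) > 10
            or s.sink_rate_fpm > 1000
            or not s.gear_down
            or not s.landing_flap_set)

# A monitoring program counts such precursors across many flights so that
# drift becomes visible long before it produces a reportable event.
flights = [
    ApproachSnapshot(3.0, 750, True, True),     # stabilised
    ApproachSnapshot(14.0, 1150, True, False),  # fast, steep, flap not set
]
rate = sum(is_unstabilised(f) for f in flights) / len(flights)
print(f"Unstabilised-approach rate: {rate:.0%}")
```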
All operators in the aviation system — including pilots, operations engineers, flight instructors and safety personnel — can benefit from an improved understanding of human factors principles. The maximum benefit will come not only from an understanding of fundamentals but also from the ability to communicate experiences clearly in a common human factors language. Using these human factors tools, pilots can anticipate their own vulnerabilities in potentially hazardous situations and develop countermeasures by thinking proactively and strategically about human factors challenges that could overwhelm them. The same approach helps operations engineers and managers strengthen their procedures to reduce vulnerability to misunderstandings, errors and distractions.
3 Organisational/Institutional Resilience
It has been demonstrated repeatedly that organisations involved in accidents are not sufficiently mindful of the identified hazards their people faced in the past and therefore not careful enough about predicting and countering those hazards in the future. Reason (1997) observed that organisations move within a “safety space,” alternating between states of increasing resistance and vulnerability to accidents. He suggested the need for “navigation aids” and the use of “regular health checks” to determine an organisation’s position in safety space. The OGHFA Checklist for Assessing Institutional Resilience (CAIR) is an example of such a navigation aid.
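As a notional illustration of such a health check, the sketch below scores a handful of checklist-style questions into a single resistance index. The item wording, the score weights and the HealthCheck structure are assumptions made for this example; they are not the actual content of the CAIR.

```python
from dataclasses import dataclass

# Hypothetical CAIR-style items; the real OGHFA checklist wording differs.
ITEMS = [
    "Management remains mindful of past hazards when planning operations",
    "Safety data are analysed for precursors, not just reported events",
    "Frontline reports of procedural drift receive a documented response",
    "Defences are reviewed after change, not only after failure",
]

# Assumed score weights; an unknown answer is treated as no resistance.
SCORES = {"yes": 1.0, "partially": 0.5, "no": 0.0, "don't know": 0.0}

@dataclass
class HealthCheck:
    answers: dict  # item -> "yes" / "partially" / "no" / "don't know"

    def resistance(self):
        """Fraction of attainable resistance: higher values place the
        organisation toward the resistant end of Reason's safety space."""
        total = sum(SCORES[self.answers.get(item, "don't know")]
                    for item in ITEMS)
        return total / len(ITEMS)

check = HealthCheck(answers={ITEMS[0]: "yes", ITEMS[1]: "partially",
                             ITEMS[2]: "no", ITEMS[3]: "partially"})
print(f"Resistance index: {check.resistance():.2f}")  # 0.50 in this example
```

Repeating such a check at intervals, as Reason’s “regular health checks” suggest, would show whether the organisation is drifting toward the vulnerable end of the safety space.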
4 Challenging Operational Assumptions and Safety Principles
Human factors is inherently involved in all incidents and accidents. Whether related to crews, air traffic control, maintenance, organization or design, each link in the safety chain involves human beings and therefore the potential for fallible human decisions and human errors. The analysis of operational events must therefore include a method to identify the possible contribution to the event of each operational aspect and human factor. This analysis will help reveal lessons learned in terms of design, flight operations/procedures and training.
The OGHFA flight operations briefing notes discuss lessons learned from applying current human factors theory and practice along with insights from industry studies and incident and accident investigation reports. Significant human factors issues are covered in many of the briefing notes and situational examples, both with respect to their role as causal agents and as effective countermeasures. These include:
- SOPs;
- Use of automation;
- Briefings;
- Pilot-controller communications;
- Pilot flying and pilot not flying communications;
- Altimeter setting and altitude deviation issues;
- Rushed and unstabilised approaches;
- Runway excursions and overruns; and,
- Adverse winds and crosswind landings.
One of the significant benefits of applying a structured approach to operational and human factors analysis is that it helps revisit well-established operational assumptions and safety principles by identifying and challenging them, particularly when the analysis shows that they no longer hold.
- By assessing the robustness of operational assumptions, we are led to challenge our operational model of human performance; and,
- By assessing the robustness of training assumptions, we are also led to challenge our training model of human performance.
The following are a few examples of some of these widely held beliefs or assumptions that may not always be valid and therefore may need to be revisited:
- Weather information:
- Weather forecast techniques allow the effective prediction and avoidance of adverse weather.
- Flight crews have received updated and accurate information regarding wind direction, speed and gusts.
- Flight crews are aware of the runway condition (e.g., nature and depth of contamination).
- Training and airmanship:
- Awareness of “pitch/power/performance” is an established flying skill among pilots.
- Pilot actions always follow the “plan/execute/verify” rule.
- Cabin crews are aware of circumstances that warrant breaking the sterile cockpit rule.
- Operations:
- “Operations golden rules” are well-integrated and applied.
- Training items (e.g., briefings, simulator drills, initial operating experience) are implemented “as taught.”
- Pilots are effectively flying on line “as trained.”
- Air traffic control:
- Controllers are aware of the deceleration characteristics of aircraft (e.g., the tradeoff between going down and slowing down).
- Controllers are aware of the implications associated with the reconfiguration of flight management systems (e.g., in case of a last-minute clearance change).
- Flight crews are assigned and use the most favorable runway for the prevailing conditions.
- Normal procedures:
- SOPs are strictly adhered to, including standard calls, normal checklists and deviation callouts.
- Primary flight displays and navigation displays (e.g., active and armed modes, guidance targets) are monitored at all times.
- Approaches are stabilised at the applicable stabilisation height and, if required, the go-around policy is followed.
- Abnormal and emergency procedures:
- Warnings and cockpit effects always allow the detection, assessment and diagnosis of the prevailing condition.
- Electronic centralized aircraft monitor (ECAM) and quick reference handbook (QRH) procedures are accomplished as published, completely and in the intended sequence.
- Circuit breakers are not pulled or cycled for reset purposes unless specified by the ECAM or QRH.
- Effective communication, mutual cross-check and backup allow the timely detection and recovery of working errors and monitoring errors.
- Threat-related prevention strategies are genuinely effective and productive against rejected takeoffs, wind shear, controlled flight into terrain, approach and landing accidents, turbulence, wake turbulence, volcanic ash, loss of control, runway incursions, near midair collisions, altitude deviations/flight level busts, and other weather threats and environmental hazards.
- Strict adherence to SOPs is a sufficient countermeasure to mitigate the effects of workload and fatigue on the crew’s ability to:
- Recognise (i.e., detect, assess, diagnose); and,
- Decide (i.e., make decisions, take action, monitor and manage).
5 Key Points
Safety improvement requires an earnest examination beyond reported events in order to be proactive, more robust and more resilient to the ever-changing threats in the aviation environment. It is essential to:
- Engage in cross-program reviews to assess exposure.
- Document operations and training procedures through tests and reassessments.
- Define requirements for enhancing existing procedures, developing new recommendations or procedures, or for feedback to design, maintenance and training.
The key to resilience both at the individual and at the organisational level is to be as prepared as possible for unexpected surprises by anticipating problems and developing countermeasures to deal with them. Aviation safety cannot continue to improve if the focus is only on “preventing the last accident.”
Accepting, designing and adapting for human variability using a systematic process are essential, as is an organisational focus that truly strives to uncover the root causes of accidents and incidents. Safety must be viewed in the context of the entire aviation system, including management.
6 References
- Dekker, S. (2006). The Field Guide to Understanding Human Error. Ashgate; Aldershot, England.
- Dismukes, R.K.; Berman, B.A.; Loukopoulos, L.D. (2007). The Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents. Ashgate Studies in Human Factors for Flight Operations. Ashgate; Aldershot, England.
- Reason, J. (1997). Managing the Risks of Organisational Accidents. Ashgate; Aldershot, England.