The Tempe Gembus Model for Human Factor Risk Assessment: Learning from Recent Transportation Accidents

sendy ardiansyah

Tauhid Nur Azhar

After transportation accidents such as major railway incidents (PLH, from the Indonesian peristiwa luar biasa hebat) like the Argo Semeru derailment and the collision between trains KA 65 and KA 350, or the runway collision and fire involving the Airbus A350-XWB operating Japan Airlines flight 516 at Haneda Airport, it is necessary to study and design appropriate methods for identifying root causes as part of implementing an integrated safety program.

To achieve this, analytic tools are needed that can map and mitigate potential system safety failures, in particular by supporting periodic assessment and evaluation of the operational systems under observation.

In the safety context, many components are involved, and their combined contributions can culminate in system failure: undesirable outcomes such as accidents or major extraordinary events (PLH) with systemic impact.

A safety assessment system can be constructed around the HEST aspects (human, environment, system, and technology). Each element can serve as a basis for testing. For the human factor, for example, a Human Factor Risk Assessment (HFRA) can be developed: a digital-platform-based testing tool built together with a cross-disciplinary human factors team drawn from occupational medicine, psychiatry, psychology, ergonomics, transportation safety, communication, industrial engineering, mechanical engineering, and related fields.

Data acquisition for HFRA can draw on diagnostic and assessment methods such as Human Reliability Assessment (HRA), a structured and systematic way to predict the likelihood of human error in performing a specific task (HSE UK, 1999).
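As a rough illustration of the kind of quantification HRA methods perform, the sketch below scales a nominal human error probability by performance-shaping-factor multipliers, loosely in the spirit of methods like SPAR-H. All names and numbers here are hypothetical placeholders, not published HRA values.

```python
# Minimal sketch of a human-error-probability (HEP) calculation in the
# spirit of HRA methods such as SPAR-H: a nominal error probability is
# scaled by performance-shaping-factor (PSF) multipliers.
# All values are hypothetical, not published HRA figures.

NOMINAL_HEP = 0.001  # assumed baseline error probability for a routine task

# Hypothetical PSF multipliers (>1 degrades performance, <1 improves it)
psf_multipliers = {
    "time_pressure": 2.0,   # barely adequate time available
    "fatigue": 5.0,         # operator at the end of a long shift
    "training": 0.5,        # recent, task-specific refresher training
}

def adjusted_hep(nominal: float, psfs: dict[str, float]) -> float:
    """Scale the nominal HEP by all PSF multipliers, capped at 1.0."""
    hep = nominal
    for multiplier in psfs.values():
        hep *= multiplier
    return min(hep, 1.0)

if __name__ == "__main__":
    print(f"Adjusted HEP: {adjusted_hep(NOMINAL_HEP, psf_multipliers):.4f}")
```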

In this process, the Human Factors Analysis and Classification System (HFACS) can be used. HFACS is a framework used in the analysis of accidents and safety-related incidents to understand the contribution of human factors.

HFACS presents a systematic approach to categorizing and understanding the human factors that can contribute to unwanted incidents. The model was developed by Dr. Scott Shappell and Dr. Douglas Wiegmann and classifies causal factors at four levels, illustrated by the examples below (a small tagging sketch in code follows the list):

1. Level 1: Unsafe Acts
* Skill-based errors: errors that occur during the execution of routine, well-practiced tasks.
* Decision errors: errors in decision making, including those caused by incomplete information or understanding.
2. Level 2: Preconditions for Unsafe Acts
* Adverse mental states: psychological or emotional factors that degrade performance.
* Adverse physiological states: physical or health conditions that degrade performance.
3. Level 3: Unsafe Supervision
* Inadequate supervision and planning: shortcomings in how work is planned, assigned, and overseen.
* Failure to correct known problems: supervisory failure to address identified deficiencies such as gaps in training or poor communication.
4. Level 4: Organizational Influences
* Resource management: resource limitations that affect performance.
* Organizational climate: organizational culture and structural factors that shape behavior and performance.
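To make the taxonomy concrete, here is a minimal Python sketch of tagging contributing factors with HFACS levels and counting where an investigation's findings concentrate. The factor descriptions are invented examples, not findings from any real incident.

```python
# Minimal sketch: encode the four HFACS levels as a data structure so
# contributing factors from an investigation can be tagged and counted.
# The factor descriptions below are invented examples for illustration.

from collections import Counter
from dataclasses import dataclass

HFACS_LEVELS = {
    1: "Unsafe Acts",
    2: "Preconditions for Unsafe Acts",
    3: "Unsafe Supervision",
    4: "Organizational Influences",
}

@dataclass
class ContributingFactor:
    description: str
    level: int      # 1..4, keyed to HFACS_LEVELS
    category: str   # sub-category within the level

# Hypothetical factors from a fictitious incident review
factors = [
    ContributingFactor("Signal misread during routine pass", 1, "Skill-based error"),
    ContributingFactor("Crew fatigued after extended duty", 2, "Adverse mental state"),
    ContributingFactor("Roster changes not reviewed", 3, "Inadequate supervision"),
    ContributingFactor("Refresher training budget cut", 4, "Resource management"),
]

# Count factors per level to see where the analysis concentrates
per_level = Counter(HFACS_LEVELS[f.level] for f in factors)
for level_name, count in per_level.items():
    print(f"{level_name}: {count}")
```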

Besides HFACS, another human factors analysis model is the Swiss Cheese Model, a concept in risk management and safety developed by James Reason. The model illustrates how safety systems can fail when holes in a series of defensive layers line up, like the holes in slices of Swiss cheese.

The main concepts of the Swiss Cheese Model are as follows (a short probability sketch in code follows the list):

1. Defense layers:
* Every safety system has several defense layers or safeguards designed to prevent unwanted incidents, such as operational procedures, training, and safety equipment.
2. Holes:
* Each defense layer can contain "holes", i.e. weaknesses or failures: human errors, technical faults, or other factors that let a hazard pass through that layer.
3. Chains of holes:
* A hole in one defense layer does not always cause an incident, because other layers can still block the risk. However, if holes in several layers occur simultaneously or are related, an opening is created through which unwanted incidents can occur.
4. System failure:
* If the holes in all defense layers line up, the system can fail, and unwanted incidents can occur.
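The model's core arithmetic is easy to sketch: assuming independent layers, the probability that a hazard penetrates every layer is the product of the per-layer failure probabilities. The layer names and numbers below are hypothetical.

```python
# Minimal sketch of the Swiss Cheese Model's arithmetic: if each defense
# layer independently fails with some probability (a "hole" lining up),
# the chance that a hazard penetrates every layer is the product of
# those probabilities. Layer names and numbers are hypothetical.

from math import prod

# Hypothetical per-layer failure probabilities
layers = {
    "operational_procedures": 0.05,
    "crew_training": 0.02,
    "automated_safety_system": 0.01,
    "supervisory_check": 0.10,
}

# Probability that all holes line up (assuming independent layers)
p_system_failure = prod(layers.values())
print(f"P(all layers penetrated) = {p_system_failure:.2e}")

# Adding one more independent layer multiplies the risk down further,
# which is why defense in depth works.
```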

Technically, to map in a methodological and systematic way how each element contributes to the final outcome of an accident or unexpected event, Fault Tree Analysis (FTA) can be used.

FTA was initially developed in 1962 at Bell Laboratories by H. A. Watson, under contract to the US Air Force Ballistic Systems Division, to evaluate the Minuteman I intercontinental ballistic missile (ICBM) launch control system.

FTA has since been used to analyze safety and security systems, especially to evaluate and identify potential causes of system failure. It is applied primarily in industry (including transportation services) to improve safety and to identify and mitigate potential risks.

The general steps in FTA are as follows (a small fault-tree evaluation sketch in code follows the list):

1. Identify analysis goals: define the top event, i.e. the incident to be avoided, and the goals of the FTA.
2. Identify critical events: identify the critical events that can lead to failure or accidents.
3. Build the fault tree: construct the tree using logic gates such as "AND," "OR," and "NOT." The fault tree reflects the causal relationships between the events that can produce the critical event.
4. Determine probabilities: assign probabilities to the basic events at the bottom of the tree, based on historical data or expert estimates.
5. Analyze and evaluate: compute the probability of the top event from the tree. This analysis identifies the main factors driving failure.
6. Recommend improvements: based on the results, recommend measures to reduce risk and prevent the critical event, such as changes to the design, operational procedures, or additional safety devices.
7. Verify and validate: verify the analysis with relevant experts and validate it against empirical data or test results.
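A minimal sketch of steps 3 to 5, assuming independent basic events: OR gates combine probabilities as 1 − ∏(1 − p), AND gates as ∏p. The event names and probabilities below are invented for illustration.

```python
# Minimal sketch of a fault tree with AND/OR gates, evaluated under the
# usual independence assumption. Event names and probabilities are
# hypothetical, not data from any real railway.

from math import prod

def gate_and(probabilities):
    """All inputs must occur (independent events)."""
    return prod(probabilities)

def gate_or(probabilities):
    """At least one input occurs (independent events)."""
    return 1.0 - prod(1.0 - p for p in probabilities)

# Basic events (hypothetical probabilities per operation)
p_signal_fault = 1e-4
p_driver_error = 1e-3
p_dispatcher_error = 5e-4
p_interlock_failure = 1e-5

# Top event: train enters an occupied block.
# It requires a routing error AND failure of the interlock defense.
p_routing_error = gate_or([p_signal_fault, p_driver_error, p_dispatcher_error])
p_top_event = gate_and([p_routing_error, p_interlock_failure])

print(f"P(routing error) = {p_routing_error:.2e}")
print(f"P(top event)     = {p_top_event:.2e}")
```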

Integrating several methods in layers to sift through and identify the error factors behind an accident is expected to improve the accuracy of root-cause analysis. One complementary risk management method is the Critical Decision Method, particularly for evaluating decision-making patterns and distraction factors in the critical conditions that certain professions face in carrying out their tasks.

The Critical Decision Method (CDM) is an approach in risk management and decision-making that focuses on identifying and analyzing critical decisions that can affect the safety and performance of a system or organization. It is designed to help experts understand and improve decisions made in complex situations.

Common CDM techniques include (a small data-structure sketch in code follows the list):

1. CDM interviews: in-depth interviews with individuals or teams to understand critical decision-making, highlighting the factors that influenced decisions, the problem-solving strategies used, and the cognitive processes involved.
2. Knowledge audit: evaluation of individual or team knowledge relevant to a specific situation or task, identifying gaps in understanding or knowledge that may affect decisions.
3. Scenario-based training: involving individuals or teams in decision-making simulations that closely resemble real situations in order to sharpen decision-making skills.
4. Sensemaking tools: tools or techniques, such as diagrams, concept maps, or other visualizations, that help individuals or teams understand and organize complex information.
5. Cognitive task analysis (CTA): identifying the thinking strategies and cognitive processes involved in decision-making, focusing on the mental steps individuals take to complete a task or reach a decision.
6. Decision support systems: software or systems that provide information or advice, such as data analysis, simulations, or scenario exploration, to support the decision-making process.
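As a sketch of how CDM interview output might be captured for later aggregation, the structure below records a decision point with its cues, options, and pressures. The field names and sample record are illustrative, not a standard CDM schema.

```python
# Minimal sketch of structuring a CDM interview as data: a timeline of
# decision points, each annotated with the cues noticed, options
# considered, and pressures present. Fields and the sample record are
# invented for illustration.

from dataclasses import dataclass, field

@dataclass
class DecisionPoint:
    timestamp: str                                        # when the decision occurred
    decision: str                                         # what was decided
    cues: list[str] = field(default_factory=list)         # information noticed
    options: list[str] = field(default_factory=list)      # alternatives considered
    pressures: list[str] = field(default_factory=list)    # time/stress/distraction factors

# Hypothetical decision point from a fictitious interview
dp = DecisionPoint(
    timestamp="T+02:15",
    decision="Continued at line speed past a restrictive signal aspect",
    cues=["signal partially obscured", "radio traffic ongoing"],
    options=["brake to restricted speed", "request confirmation from dispatcher"],
    pressures=["schedule delay", "end-of-shift fatigue"],
)

# Aggregating such records across interviews highlights recurring
# cue/pressure combinations that precede critical decisions.
print(dp.decision, "| pressures:", ", ".join(dp.pressures))
```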

All of these methods can be integrated into a dedicated assessment model and module for a specific profession, such as train drivers, pilots, train dispatchers, air traffic controllers, harbormasters, ships' captains and officers, or professions facing comparable critical conditions, such as those in healthcare or other public service management.

Proposed Innovation:
In the era of advanced artificial intelligence, the proposed human factor assessment process (HFRA) can be managed using various subsets of machine learning.
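As one hedged example of such a subset, the sketch below trains a logistic-regression classifier on synthetic daily-check features (sleep, reaction time, self-reported stress) to flag elevated risk. Everything here, including the features and labels, is invented for illustration; a real HFRA system would need validated instruments and properly collected data.

```python
# Minimal sketch of one ML subset applied to HFRA: logistic regression
# mapping daily-check features to a fitness-for-duty risk flag.
# All data here are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic features: [sleep_hours, reaction_time_ms, stress_score_1to5]
n = 200
X = np.column_stack([
    rng.normal(6.5, 1.2, n),   # sleep hours
    rng.normal(320, 40, n),    # reaction time (ms)
    rng.integers(1, 6, n),     # self-reported stress
])
# Synthetic label: "elevated risk" when sleep is short and reactions slow
y = ((X[:, 0] < 5.5) & (X[:, 1] > 340)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score one hypothetical crew member's daily check
today = np.array([[5.0, 360.0, 4.0]])
risk = model.predict_proba(today)[0, 1]
print(f"Estimated risk probability: {risk:.2f}")
```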

The assessment can be run periodically and also considered for daily use as part of a daily competency check, such as the checks currently conducted for KA (train) crews, which combine point-and-call (yubisashi kanko) with health checks.

Routine evaluations to identify professional deviations in the human factor domain can be supported by HFRA applications and gamification. Gamification can encourage honesty in the assessment process and make it more enjoyable, which matters for routine evaluations, where boredom can bias the results.

A further step is a self-repairing human factor model: the Tempe Gembus Model. Drawing on advanced-material concepts such as carbon nanocomposites that adapt to environmental changes and adjust themselves, the Tempe Gembus Model extends the Swiss Cheese Model so that the system performs automatic correction whenever a "hole", a leak with the potential to trigger system failure, is identified.

The key is AI direction as part of the decision support system: early signs of system failure are detected quickly, and corrective measures are suggested as concrete, multi-level steps appropriate to the organizational control structure and the authority of each person, according to their official capacity under the operational regulations.
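A minimal sketch of such an auto-correction loop follows, with hypothetical layer names, thresholds, and actions: monitored layer scores are checked against a threshold, and each developing "hole" is routed to the lowest authority level empowered to fix it.

```python
# Minimal sketch of the proposed auto-correction loop: layer health is
# monitored, and when a "hole" (a degraded layer score) is detected, a
# corrective action is suggested at the authority level that owns the
# fix. Layer names, thresholds, and actions are all hypothetical.

HOLE_THRESHOLD = 0.6  # assumed: scores below this indicate a developing hole

# Hypothetical mapping from layer to the authority level that owns the fix
CORRECTIVE_ACTIONS = {
    "crew_alertness": ("crew",       "apply point-and-call, take scheduled rest"),
    "supervision":    ("supervisor", "re-brief crew, tighten check frequency"),
    "procedures":     ("management", "revise SOP and schedule retraining"),
}

def auto_correct(layer_scores: dict[str, float]) -> list[str]:
    """Return escalation suggestions for every layer scoring below threshold."""
    suggestions = []
    for layer, score in layer_scores.items():
        if score < HOLE_THRESHOLD:
            who, action = CORRECTIVE_ACTIONS[layer]
            suggestions.append(f"[{who}] {layer} at {score:.2f}: {action}")
    return suggestions

# Hypothetical daily snapshot from monitoring
snapshot = {"crew_alertness": 0.45, "supervision": 0.72, "procedures": 0.58}
for suggestion in auto_correct(snapshot):
    print(suggestion)
```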

These are some suggestions for system development that are expected to help reduce the risk of system failures going undetected and, as a consequence, unanticipated.
