Incorporating Epistemic Uncertainty into the Safety Assurance of Socio-Technical Systems
📝 Abstract
In system development, epistemic uncertainty is an ever-present possibility when reasoning about the causal factors during hazard analysis. Such uncertainty is common when complicated systems interact with one another, and it is dangerous because it impairs hazard analysis and thus increases the chance of overlooking unsafe situations. Uncertainty around causation thus needs to be managed well. Unfortunately, existing hazard analysis techniques tend to ignore unknown uncertainties, and system stakeholders rarely track known uncertainties well through the system lifecycle. In this paper, we outline an approach to managing epistemic uncertainty in existing hazard analysis techniques by focusing on known and unknown uncertainty. We have created a reference populated with a wide range of safety-critical causal relationships to recognise unknown uncertainty, and we have developed a model to systematically capture and track known uncertainty around such factors. We have also defined a process for using the reference and model to assess possible causal factors that are suspected during hazard analysis. To assess the applicability of our approach, we have analysed the widely-used MoDAF architectural model and determined that there is potential for our approach to identify additional causal factors that are not apparent from individual MoDAF views. We have also reviewed an existing safety assessment example (the ARP4761 Aircraft System analysis) and determined that our approach could indeed be incorporated into that process. We have also integrated our approach into the STPA hazard analysis technique to demonstrate the feasibility of incorporating it into existing techniques. It is therefore plausible that our approach can increase the safety assurance provided by hazard analysis in the face of epistemic uncertainty.
📄 Content
A. Groce and S. Leue (Eds.): 2nd International Workshop on Causal Reasoning for Embedded and safety-critical Systems Technologies (CREST'17), EPTCS 259, 2017, pp. 56-71, doi:10.4204/EPTCS.259.7
© C. Leong, T. Kelly & R. Alexander
Chris Leong, Tim Kelly, Rob Alexander
Computer Science Department, University of York, York, United Kingdom
{cwkl500, tim.kelly, rob.alexander}@york.ac.uk
Keywords: Safety assurance, causal factors, epistemic uncertainty, socio-technical systems, hazard analysis
1 Introduction
Imagine a safety meeting among safety engineers, project managers and operators to evaluate the hazards affecting a system prior to a flight trial. The operators raised a concern as to whether equipment item X could operate in a certain flight profile. Unfortunately, the information was not available: the equipment working procedures, which were provided during the design phase, did not include any operating specifications. Although the project managers knew that the operating specifications were missing, they did not anticipate that this absence would require further attention after the design phase, and so they did not follow up on the uncertainty. Separately, a junior engineer at the end of the table was concerned about possible distraction during the flight trial, since the pilot would need to carry out multiple tasks in flight; this issue had not been considered in the meeting. Being inexperienced, he was unsure whether such distraction could be safety-critical, so he decided to remain quiet and not raise it.
To perform comprehensive safety analysis, we must be able to make timely and accurate predictions about potential hazards. Such prediction is based upon the collective wisdom and experience of the people involved, as well as the best information available at the time of the assessment. In a meeting like the one above, plausible-but-uncertain predictions or concerns may end up being discarded and ignored rather than captured and tracked. The aim of our work is to investigate whether more can be done to track such uncertainty and so provide better predictions of potential hazards during system development.
As part of the safety assurance of a complicated socio-technical system (STS) [1], system stakeholders (including multiple parties such as safety engineers, project managers, system managers and operators) capture safety-critical causal relationships so as to derive the causes of hazards. Hazards can be identified from causal relationships among entities, states, behaviours and events that are related to the system, to its surroundings, and to other systems in the STS. In this paper, we will refer to all such things as "objects". Examples of such hazards include component failure, unsafe human behaviour, unexpected software interaction, incorrect or insufficient safety practice and unde
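As a minimal illustration (a sketch, not part of our formal model), such objects and the causal relationships among them could be recorded in a structure that also tags each relationship's epistemic status, so that uncertain relationships are tracked rather than discarded. All class and field names below are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class ObjectKind(Enum):
    """The kinds of 'object' discussed above."""
    ENTITY = "entity"
    STATE = "state"
    BEHAVIOUR = "behaviour"
    EVENT = "event"

@dataclass(frozen=True)
class SystemObject:
    name: str
    kind: ObjectKind

@dataclass
class CausalRelationship:
    cause: SystemObject
    effect: SystemObject
    epistemic_status: str = "known"  # "known" or "uncertain"

@dataclass
class HazardLog:
    relationships: list = field(default_factory=list)

    def record(self, cause, effect, status="known"):
        self.relationships.append(CausalRelationship(cause, effect, status))

    def open_uncertainties(self):
        # Relationships still flagged uncertain: these need stakeholder follow-up
        # through the lifecycle rather than silent disposal.
        return [r for r in self.relationships if r.epistemic_status == "uncertain"]

# Demo: the junior engineer's unraised concern from the scenario above,
# captured instead of discarded.
log = HazardLog()
multitasking = SystemObject("pilot carries out multiple in-flight tasks",
                            ObjectKind.BEHAVIOUR)
distraction = SystemObject("pilot distracted from primary task",
                           ObjectKind.STATE)
log.record(multitasking, distraction, status="uncertain")
```

The point of such a structure is that a query like `open_uncertainties()` keeps suspected-but-unconfirmed causal links visible to all stakeholders, rather than leaving them implicit in meeting minutes.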