Quantitative risk assessment software

Quantitative risk assessment (QRA) software and methodologies give quantitative estimates of risks, given the parameters defining them. They are used in the financial sector, the chemical process industry, and other areas.

In financial terms, quantitative risk assessments include a calculation of the single loss expectancy (SLE) of the monetary value of an asset.
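As an illustration (with hypothetical figures), the SLE is conventionally computed as asset value multiplied by an exposure factor, and can be scaled by an annualized rate of occurrence to give an annualized loss expectancy:

```python
# Single loss expectancy (SLE) and annualized loss expectancy (ALE),
# using the standard formulas SLE = AV * EF and ALE = SLE * ARO.
# All figures below are hypothetical.

asset_value = 500_000.0      # AV: monetary value of the asset
exposure_factor = 0.25       # EF: fraction of value lost in one incident
annual_rate = 0.1            # ARO: expected incidents per year

sle = asset_value * exposure_factor   # loss from a single incident
ale = sle * annual_rate               # expected annual loss

print(sle)  # 125000.0
print(ale)  # 12500.0
```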

In the chemical process and petrochemical industries a QRA is primarily concerned with determining the potential loss of life (PLL) caused by undesired events. Specialist software can be used to model the effects of such an event, and to help calculate the potential loss of life. Some organisations use the risk outputs to assess the implied cost to avert a fatality (ICAF) which can be used to set quantified criteria for what is an unacceptable risk and what is tolerable.
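A minimal sketch of the ICAF calculation, using hypothetical figures: the cost of a safety measure is divided by the number of statistical fatalities it averts (the reduction in PLL accumulated over the measure's effective lifetime).

```python
# Implied cost to avert a fatality (ICAF): cost of a safety measure
# divided by the reduction in potential loss of life (PLL) it achieves.
# All figures below are hypothetical.

measure_cost = 2_000_000.0   # capital plus lifetime cost of the measure
pll_before = 0.05            # statistical fatalities per year, before
pll_after = 0.03             # statistical fatalities per year, after
lifetime_years = 20          # period over which the measure is effective

fatalities_averted = (pll_before - pll_after) * lifetime_years
icaf = measure_cost / fatalities_averted
print(round(icaf, 2))  # 5000000.0
```

A measure whose ICAF falls below the organisation's criterion would normally be implemented; one far above it may be judged grossly disproportionate.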

For the explosives industry, QRA can be used for many explosive risk applications. It is especially useful for site risk analysis when reliance on quantity distance (QD) tables is not feasible.

Limitations

Some QRA software models must be used in isolation: for example, the results from a consequence model cannot be used directly in a risk model. Other QRA software programs link the different calculation modules together automatically to streamline the process. Some of the software is proprietary and can only be used within certain organisations.

Due to the large amount of data processing required by QRA calculations, the usual approach has been to represent hazard zones as two-dimensional ellipses, such as the area around an explosion within which there is a 10% chance of fatality. A similarly pragmatic approach is used to simplify dispersion results: typically, flat, unobstructed terrain is assumed when determining the behaviour of a dispersing cloud or a vaporizing pool. This presents problems where non-flat terrain or the complex geometry of a process plant would significantly affect the cloud's behaviour. Despite these limitations, the 2D hazard zone and the simplified approach to 3D dispersion modelling allow large volumes of risk results to be handled under known assumptions to assist decision-making. The trade-off shifts as computer processing power increases.
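As a sketch of how such hazard zones are derived: fatality probability is commonly obtained from a probit relation of the form Pr = a + b·ln(dose), converted to a probability via the standard normal distribution. The coefficients below are illustrative only, not taken from any specific standard; a 10% fatality contour is then the locus of points where this probability equals 0.1.

```python
import math

# Fatality probability from a probit relation Pr = a + b * ln(dose),
# with P = Phi(Pr - 5), where Phi is the standard normal CDF.
# The coefficients a and b below are illustrative placeholders.

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def fatality_probability(dose, a=-14.9, b=2.56):
    """Map a physical dose (e.g. a thermal or toxic load) to a
    probability of fatality via the probit relation."""
    pr = a + b * math.log(dose)
    return normal_cdf(pr - 5.0)
```

A 2D risk model approximates the set of locations where `fatality_probability` of the local dose reaches a given level (e.g. 10%) as an ellipse around the event.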

The modeling of the consequences of hazardous events in a true 3D manner may require a different approach, for example using a computational fluid dynamics method to study cloud dispersion over hilly terrain. The creation of CFD models requires significantly more investment of time on the part of the modeling analyst (because of the increased complexity of the modeling), which may not be justified in all cases.

One major limitation of QRA in the safety field is that it focuses primarily on the loss of containment of hazardous fluids and on what happens when they are released. This makes QRA poorly suited to hazardous industries that do not centre on fluid containment yet are still subject to catastrophic events (e.g. aviation, pharmaceuticals, mining, water treatment). This has led to the development of a risk process that draws on the experience of organisations and their employees to generate risk assessments with potential loss of life (PLL) outputs without fault and event tree modelling. The process is probably best known as SQRA, the first such methodology to enter the marketplace in the late 1990s, but is perhaps more accurately described by the term experience-based quantification (EBQ). Today there is a choice of software with which to undertake the methodology, and it has been used extensively in the mining industry worldwide.

In an effort to be fairer and to avoid adding to already high imprisonment rates in the US, courts across America have started using quantitative risk assessment software to inform decisions about bail and sentencing, based on defendants' histories and other attributes. [1] A 2016 ProPublica investigation analyzed recidivism risk scores calculated by one of the most commonly used tools, the Northpointe COMPAS system, and tracked outcomes over two years. It found that only 61% of those deemed high risk actually committed additional crimes during that period, and that African-American defendants were far more likely than white defendants to be given high scores. [1] These results are part of larger questions being raised in the field of machine ethics about the risk of perpetuating patterns of discrimination through the use of big data and machine learning across many fields. [2] [3]

Related Research Articles

Risk management

Risk management is the identification, evaluation, and prioritization of risks followed by coordinated and economical application of resources to minimize, monitor, and control the probability or impact of unfortunate events or to maximize the realization of opportunities.

Safety engineering

Safety engineering is an engineering discipline which assures that engineered systems provide acceptable levels of safety. It is strongly related to industrial engineering/systems engineering, and the subset system safety engineering. Safety engineering assures that a life-critical system behaves as needed, even when components fail.

Broadly speaking, a risk assessment is the combined effort of:

  1. identifying and analyzing potential (future) events that may negatively impact individuals, assets, and/or the environment; and
  2. making judgments "on the tolerability of the risk on the basis of a risk analysis" while considering influencing factors.

Operational risk is "the risk of a change in value caused by the fact that actual losses, incurred for inadequate or failed internal processes, people and systems, or from external events, differ from the expected losses". This definition, adopted by the European Solvency II Directive for insurers, is a variation of that adopted in the Basel II regulations for banks. The scope of operational risk is thus broad, and can also include other classes of risk, such as fraud, security, privacy protection, legal risks, and physical or environmental risks. Operational risks can likewise have broad impact, affecting client satisfaction, reputation and shareholder value while increasing business volatility.

Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
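For example, steady-state availability is often estimated from mean time between failures (MTBF) and mean time to repair (MTTR); the figures below are hypothetical:

```python
# Steady-state availability from mean time between failures (MTBF)
# and mean time to repair (MTTR): A = MTBF / (MTBF + MTTR).
# The values below are hypothetical.

mtbf_hours = 900.0   # mean time between failures
mttr_hours = 100.0   # mean time to repair

availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(availability)  # 0.9
```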

Probabilistic risk assessment (PRA) is a systematic and comprehensive methodology to evaluate risks associated with a complex engineered technological entity or the effects of stressors on the environment.

A hazard analysis is the first step in a process used to assess risk. The result of a hazard analysis is the identification of different types of hazards. A hazard is a potential condition, which may or may not exist; singly, or in combination with other hazards and conditions, it may become an actual functional failure or accident (mishap). The way this happens in one particular sequence is called a scenario, and each scenario has a probability of occurrence. A system often has many potential failure scenarios. Each scenario is also assigned a classification based on the worst-case severity of its end condition. Risk is the combination of probability and severity. Preliminary risk levels can be provided in the hazard analysis; the validation, more precise prediction (verification) and acceptance of risk are determined in the risk assessment (analysis). The main goal of both is to provide the best selection of means for controlling or eliminating the risk. The term is used in several engineering specialties, including avionics, chemical process safety, safety engineering, reliability engineering and food safety.
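As a sketch of how probability and severity combine into a preliminary risk level, a qualitative risk matrix can be expressed as a simple lookup; the categories and levels below are illustrative, not drawn from any particular standard:

```python
# A simple qualitative risk matrix: the risk level is looked up from a
# scenario's probability class and worst-case severity class.
# The categories and matrix entries below are illustrative.

SEVERITY = ["negligible", "marginal", "critical", "catastrophic"]
PROBABILITY = ["improbable", "remote", "occasional", "probable", "frequent"]

# Rows: probability (low -> high); columns: severity (low -> high).
MATRIX = [
    ["low",    "low",    "medium",  "medium"],
    ["low",    "medium", "medium",  "high"],
    ["low",    "medium", "high",    "high"],
    ["medium", "medium", "high",    "extreme"],
    ["medium", "high",   "extreme", "extreme"],
]

def risk_level(probability, severity):
    """Return the qualitative risk level for a scenario."""
    return MATRIX[PROBABILITY.index(probability)][SEVERITY.index(severity)]

print(risk_level("remote", "catastrophic"))  # high
```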

ARP4761

ARP4761, Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems and Equipment is an Aerospace Recommended Practice from SAE International. In conjunction with ARP4754, ARP4761 is used to demonstrate compliance with 14 CFR 25.1309 in the U.S. Federal Aviation Administration (FAA) airworthiness regulations for transport category aircraft, and also harmonized international airworthiness regulations such as European Aviation Safety Agency (EASA) CS–25.1309.

Atmospheric dispersion modeling

Atmospheric dispersion modeling is the mathematical simulation of how air pollutants disperse in the ambient atmosphere. It is performed with computer programs that include algorithms to solve the mathematical equations that govern the pollutant dispersion. The dispersion models are used to estimate the downwind ambient concentration of air pollutants or toxins emitted from sources such as industrial plants, vehicular traffic or accidental chemical releases. They can also be used to predict future concentrations under specific scenarios. Therefore, they are the dominant type of model used in air quality policy making. They are most useful for pollutants that are dispersed over large distances and that may react in the atmosphere. For pollutants that have a very high spatio-temporal variability and for epidemiological studies statistical land-use regression models are also used.
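A minimal sketch of the core calculation, assuming a continuous point source and directly supplied dispersion parameters (real models derive the sigma values from atmospheric stability class and downwind distance):

```python
import math

# Concentration downwind of a continuous point source using the
# standard Gaussian plume equation, with a ground-reflection term.

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """q: emission rate (g/s); u: wind speed (m/s); y: crosswind
    distance (m); z: height (m); h: effective stack height (m);
    sigma_y, sigma_z: dispersion parameters (m).
    Returns concentration in g/m^3."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

The concentration is highest on the plume centreline (y = 0) and falls off as a Gaussian in both the crosswind and vertical directions.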

The technique for human error-rate prediction (THERP) is a technique used in the field of human reliability assessment (HRA) to evaluate the probability of a human error occurring throughout the completion of a specific task. From such analyses, measures can then be taken to reduce the likelihood of errors occurring within a system and therefore improve the overall level of safety. There are three primary reasons for conducting an HRA: error identification, error quantification and error reduction. The various techniques used for these purposes can be split into two classifications: first-generation techniques and second-generation techniques. First-generation techniques work on the basis of a simple 'fits/doesn't fit' dichotomy in matching an error situation in context with related error identification and quantification. Second-generation techniques are more theory-based in their assessment and quantification of errors. HRA techniques have been utilised for various applications in a range of disciplines and industries, including healthcare, engineering, nuclear, transportation and business.

A Technique for Human Event Analysis (ATHEANA) is a technique used in the field of human reliability assessment (HRA). The purpose of ATHEANA is to evaluate the probability of human error while performing a specific task. From such analyses, preventative measures can then be taken to reduce human errors within a system and therefore lead to improvements in the overall level of safety.

Wind resource assessment is the process by which wind power developers estimate the future energy production of a wind farm. Accurate wind resource assessments are crucial to the successful development of wind farms.

In simple terms, risk is the possibility of something bad happening. Risk involves uncertainty about the effects/implications of an activity with respect to something that humans value, often focusing on negative, undesirable consequences. Many different definitions have been proposed. The international standard definition of risk for common understanding in different applications is “effect of uncertainty on objectives”.

IT risk management

IT risk management is the application of risk management methods to information technology in order to manage IT risk.

FLACS is a commercial Computational Fluid Dynamics (CFD) software used extensively for explosion modeling and atmospheric dispersion modeling within the field of industrial safety and risk assessment. Main application areas of FLACS are in petrochemical, process manufacturing, food processing, wood processing, metallurgical, and nuclear safety industries.

Event tree analysis (ETA) is a forward, top-down, logical modeling technique for both success and failure that explores responses to a single initiating event and lays out a path for assessing the probabilities of the outcomes and for overall system analysis. The technique is used to analyze the effects of functioning or failed systems given that an event has occurred. ETA identifies all consequences of a system that have a probability of occurring after an initiating event, and can be applied to a wide range of systems, including nuclear power plants, spacecraft, and chemical plants. It may be applied early in the design process to identify potential issues before they arise, rather than correcting them after they occur. With this forward-logic process, ETA can help prevent negative outcomes by providing the risk assessor with the probability of occurrence. ETA uses a modeling technique called an event tree, which branches events from a single initiating event using Boolean logic.
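As a sketch, outcome frequencies in an event tree are the initiating event frequency multiplied along each branch; the tree structure and branch probabilities below are illustrative:

```python
# A minimal event tree for a hazardous release: two branch points
# (immediate ignition? delayed ignition?). Each outcome frequency is the
# initiating frequency times the product of its branch probabilities.
# All probabilities below are illustrative.

initiating_frequency = 1e-3   # releases per year

p_immediate_ignition = 0.1
p_delayed_ignition = 0.3      # given no immediate ignition

outcomes = {
    "jet fire":       initiating_frequency * p_immediate_ignition,
    "flash fire/VCE": initiating_frequency * (1 - p_immediate_ignition)
                      * p_delayed_ignition,
    "safe dispersal": initiating_frequency * (1 - p_immediate_ignition)
                      * (1 - p_delayed_ignition),
}

for name, freq in outcomes.items():
    print(f"{name}: {freq:.2e} per year")
```

Because the branch probabilities at each node sum to one, the outcome frequencies always sum back to the initiating event frequency, which is a useful consistency check.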

The Natural Forest Standard (NFS) is a voluntary carbon standard designed specifically for medium- to large-scale REDD+ projects. The standard places equal emphasis on the combined carbon, social and biodiversity benefits of a project and requires a holistic approach to ensure compliance with the standard's requirements and to achieve certification. The NFS applies a standardised risk-based approach to carbon quantification for consistent and comparable baseline calculations and aims to link local actions into national frameworks for reducing the loss of natural forests.

WindStation is a wind energy software package which uses computational fluid dynamics (CFD) to conduct wind resource assessments in complex terrain. The physical background and its numerical implementation are described in the official manual of the software.

Domino effect accident

A domino effect accident is an accident in which a primary undesired event in an installation sequentially or simultaneously triggers one or more secondary undesired events in nearby installations, leading to secondary and even higher-order accidents whose overall consequences are more severe than those of the primary event. Because of this escalation, a domino effect accident is in effect a chain of accidents. The escalation process resembles the mechanical effect of a falling row of dominoes, hence the name domino effect (or knock-on) accident. Domino effect accidents are an important safety issue in the process industry, where large quantities of hazardous materials are stored, transported, and processed via storage tanks, pipes and process facilities. These hazardous materials may cause poisoning, fire, and explosion when a loss of containment occurs, and a fire or explosion within one installation may escalate to other installations through hazardous physical effects such as heat radiation, overpressure, and fragments.

References

  1. Julia Angwin; Surya Mattu; Jeff Larson; Lauren Kirchner (23 May 2016). "Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And it's Biased Against Blacks". ProPublica.
  2. Crawford, Kate (25 June 2016). "Artificial Intelligence's White Guy Problem". The New York Times.
  3. Thomas, C.; Nunez, A. (2022). "Automating Judicial Discretion: How Algorithmic Risk Assessments in Pretrial Adjudications Violate Equal Protection Rights on the Basis of Race". Law & Inequality. 40 (2): 371–407. doi:10.24926/25730037.649.