Risk assessments are subjective and arbitrary

Much research has been done on risk assessment practices and the use of risk matrices. Results from these studies are unambiguous. Risk assessments, as typically performed within organizations, give arbitrary and subjective results.

In his 2008 study, “What’s Wrong with Risk Matrices?”, Cox notes:

Second, use of a risk matrix to categorize risks is not always better than—or even as good as—purely random decision making. Thus, the common assumption that risk matrices, although imprecise, do some good in helping to focus attention on the most serious problems and in screening out less serious problems is not necessarily justified.
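
To make these failure modes concrete, here is a minimal sketch with a hypothetical 3x3 matrix. The band boundaries, the scoring rule, and all the numbers are assumptions chosen for illustration, not values from Cox’s paper; they simply reproduce the two problems he describes: range compression (quantitatively very different risks landing in the same cell) and rank reversal when probability and consequence are negatively correlated.

```python
# Illustrative sketch only: a hypothetical 3x3 risk matrix with made-up numbers.
# It is not taken from Cox (2008); it just reproduces the failure modes he
# describes: range compression and rank reversal.

def band(value, cutoffs=(1 / 3, 2 / 3)):
    """Map a value in [0, 1] to an ordinal band: 0 = low, 1 = medium, 2 = high."""
    if value < cutoffs[0]:
        return 0
    if value < cutoffs[1]:
        return 1
    return 2

def matrix_rating(probability, consequence):
    """One common cell-scoring convention: take the worse of the two bands."""
    return ("low", "medium", "high")[max(band(probability), band(consequence))]

def expected_loss(probability, consequence):
    """The quantitative risk the matrix is standing in for."""
    return probability * consequence

# Range compression: quantitatively very different risks, identical rating.
small = (0.35, 0.35)   # expected loss ~0.12
large = (0.65, 0.65)   # expected loss ~0.42, about 3.4x larger
print(matrix_rating(*small), matrix_rating(*large))    # medium medium

# Rank reversal with negatively correlated probability and consequence:
# the risk with the smaller expected loss gets the higher rating.
risk_a = (0.90, 0.10)  # expected loss 0.09 -> rated high
risk_b = (0.50, 0.50)  # expected loss 0.25 -> rated medium
print(matrix_rating(*risk_a), expected_loss(*risk_a))
print(matrix_rating(*risk_b), expected_loss(*risk_b))
```

Real matrices differ in their cutoffs and coloring rules, but the underlying issue is the same: compressing two continuous quantities into a handful of cells discards information, and, as Cox shows for unfavorable joint distributions, the ordering that comes out can even invert the correct one.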

The 2019 study “Are We Objective? A Study into the Effectiveness of Risk Measurement in the Water Industry” shows that, given identical information and the same risk assessment process, different people still arrive at widely different assessments. The paper concludes:

Considering the range and wide distribution of risk assessment scores within this study, one cannot explicitly state that the risk matrix assessment process is objective. The risk rating is; thus, dependent upon the person who undertakes the assessment, despite the risk assessor being provided with identical information to other assessors and using the same organizational risk assessment process.

An earlier study from 2013, “Further Thoughts on the Utility of Risk Matrices”, reported findings consistent with these. For example, the paper states:

Two main surveys were undertaken drawing from a cohort of international postgraduate and undergraduate students. The students involved were studying either risk management or occupational health and safety. [..] The second stage of the study gave students much longer to reflect.

By the time of the study, they had the opportunity to become more familiar with the use of risk matrices, were aware of the output from the first study, and had as much time to think about their ratings as they wished.

One might have hypothesized that this would lead to results with greater consistency and less scatter [..] However, our primary interest is in the scatter and whether it was reduced in this second stage. As Fig. 4 shows, it was not.

In their conclusion, the authors referred back to Cox’s 2008 study as well, agreeing with some of the points made in that paper:

Here, we also agree with Cox that there simply may be no right way to fill in these matrices. While additional factors, such as context, values, and risk philosophy, all impinge on the matrix, there is simply no way of incorporating them in this two-dimensional schema.

We also concur with Cox that one of the leading arguments in support of risk matrices, which is that they are simple to use and transparent, is false.

Each of these papers (and more) has also been covered on the Safety of Work podcast, where the topic is discussed quite extensively (see episodes 8, 40, and 63). Here are some quotes from the show’s hosts, Drew Rae and David Provan:

So, what we see is, if you stick to a single organization and eliminate the outliers, you’ve still got a wide spread of scores on every project.

What you’re representing on the matrix is less information than you started with.

What this research shows is that risk assessment doesn’t add transparency. It takes it away because we’re turning these differences between human judgments and trying to force them into risk categories. We’re saying we’ve made the decision because the risk is low or because the risk is high. This study just shows that those are purely arbitrary categories. They’re not giving us more information about how and why the decision was made like it was made.

The second one, which I think is a little bit new, was that this scatter isn’t due to lack of information. The scatter didn’t change between when they had a few minutes to when they have the opportunity to get expert information.

When you’ve got uncertainty of information, when you’re trying to do a risk assessment, then you have to make some assumptions. In my opinion, the most important column in your risk register is the one that lists all of the assumptions that have been made in the risk assessment. If you don’t have an assumptions column in your register template, then I’d probably say practically insert one and teach your organization how to use it.

But for a practical takeaway, the important thing is that once you accept that risk assessment is a bit arbitrary, once you accept that even the methodical things are just random preferences, then the thing to do is focus on risk reduction, don’t focus on risk. You should really only care about decision making tools if the end result of using those tools is that you are better at risk reduction.

I think the follow-on from that practical takeaway is about using the risk matrix simply to know what you need to do something about, and not getting too hung up over exactly which cell it’s sitting in on the matrix.
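
Two of those takeaways are concrete enough to sketch: record the assumptions behind every assessment, and let the register drive risk reduction rather than debates about the rating. Below is a minimal illustration of what such a register entry might look like; the field names and example content are illustrative assumptions, not a template from the podcast or the papers.

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegisterEntry:
    """Hypothetical register entry: the rating stays, but the recorded
    assumptions and the planned risk reduction carry the real information."""
    hazard: str
    rating: str                                                  # the (arbitrary) matrix cell, e.g. "medium"
    assumptions: list[str] = field(default_factory=list)         # what the assessor assumed to fill gaps in the information
    reduction_actions: list[str] = field(default_factory=list)   # what will actually be done about the risk

entry = RiskRegisterEntry(
    hazard="Contractor working at height during roof maintenance",
    rating="high",
    assumptions=[
        "Work happens in dry weather only",
        "Certified anchor points are available on the roof",
    ],
    reduction_actions=[
        "Use a mobile elevated work platform instead of ladders",
        "Independent check of fall protection before work starts",
    ],
)
print(entry.rating, entry.assumptions)
```

Whether this lives in a spreadsheet column or a dedicated tool matters less than the shift in emphasis: without the recorded assumptions, a “high” or “low” label says very little about how or why the decision was made.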

References

  • Safety of Work Podcast, Episode 63: “How Subjective Is Technical Risk Assessment?” January 24, 2021.

    • “So, what we see is, if you stick to a single organization and eliminate the outliers, you’ve still got a wide spread of scores on every project.”
    • “What this research shows is that risk assessment doesn’t add transparency. It takes it away because we’re turning these differences between human judgments and trying to force them into risk categories. We’re saying we’ve made the decision because the risk is low or because the risk is high. This study just shows that those are purely arbitrary categories. They’re not giving us more information about how and why the decision was made like it was made.”
    • “When you’ve got uncertainty of information, when you’re trying to do a risk assessment, then you have to make some assumptions. In my opinion, the most important column in your risk register is the one that lists all of the assumptions that have been made in the risk assessment. If you don’t have an assumptions column in your register template, then I’d probably say practically insert one and teach your organization how to use it.”
    • “The only way to know why a person assessed something as high or low is to know what they were thinking. You need to know what they were thinking when they filled in the gaps around the information that they didn’t have.”
  • Safety of Work Podcast, Episode 8: “Do Risk Matrices Help Us Make Better Decisions?” January 5, 2020.

    • “What you’re representing on the matrix is less information than you started with.”
    • “We can basically say if you’ve got a 5x5 or less matrix and you’ve got 4 colors, then you’re breaking the mathematical rules”
    • “The authors drew three main findings from these experiments. The first one was that assessors assign very different ratings to the same hazard. That was true in this experiment. That’s consistent across all risk assessment research. Even when we have very expert assessors.”
    • “The second one, which I think is a little bit new, was that this scatter isn’t due to lack of information. The scatter didn’t change between when they had a few minutes to when they have the opportunity to get expert information. The third one comes from the essays that they wrote, that the scatter isn’t just a difference in estimation. It reflects deep underlying differences not just about how they understood the hazards, but what they included as in scope and out of scope for risk assessment, what mattered and didn’t matter, and how they conceived and understood the very notion of risk. All of that varied between the people during this assessment.”
    • “But for a practical takeaway, the important thing is that once you accept that risk assessment is a bit arbitrary, once you accept that even the methodical things are just random preferences, then the thing to do is focus on risk reduction, don’t focus on risk. You should really only care about decision making tools if the end result of using those tools is that you are better at risk reduction. If they’re taking up time and attention, and they’re taking away focus from what can we do to reduce this risk, then skip the risk assessment part of it, skip using the risk matrix and jump straight to: we’ve got five hazards here, forget about what order they’re in, what’s the best thing we can do for each one.”
  • Ball, David J., and John Watt. “Further Thoughts on the Utility of Risk Matrices.” Risk Analysis 33, no. 11 (November 2013): 2068–78.

    • “A growing number of authors, highly experienced in risk assessment, have questioned or had cause to investigate alleged shortcomings of risk matrices, mainly on technical grounds. In addition, standards-setting institutions have warned of the potential for subjectivity and inconsistency, as have researchers in occupational safety.”
    • “Two main surveys were undertaken drawing from a cohort of international postgraduate and undergraduate students. The students involved were studying either risk management or occupational health and safety. [..] The second stage of the study gave students much longer to reflect. By the time of the study, they had the opportunity to become more familiar with the use of risk matrices, were aware of the output from the first study, and had as much time to think about their ratings as they wished. One might have hypothesized that this would lead to results with greater consistency and less scatter [..] However, our primary interest is in the scatter and whether it was reduced in this second stage. As Fig. 4 shows, it was not.”
    • “Here, we also agree with Cox that there simply may be no right way to fill in these matrices. While additional factors, such as context, values, and risk philosophy, all impinge on the matrix, there is simply no way of incorporating them in this two-dimensional schema.”
    • “We also concur with Cox that one of the leading arguments in support of risk matrices, which is that they are simple to use and transparent, is false.”
  • Cox, Louis Anthony. “What’s Wrong with Risk Matrices?” Risk Analysis 28, no. 2 (April 2008): 497–512.

    • “This article examines some mathematical properties of risk matrices and shows that they have the following limitations. (a) Poor Resolution. Typical risk matrices can correctly and unambiguously compare only a small fraction (e.g., less than 10%) of randomly selected pairs of hazards. They can assign identical ratings to quantitatively very different risks (“range compression”). (b) Errors. Risk matrices can mistakenly assign higher qualitative ratings to quantitatively smaller risks. For risks with negatively correlated frequencies and severities, they can be “worse than useless,” leading to worse-than-random decisions. [..]”
    • “For this unfavorable joint distribution of (Probability, Consequence) pairs, the information provided by the risk matrix is worse than useless (Cox & Popken, 2007) in the sense that, whenever it discriminates between two risks (by labeling one medium and the other low), it reverses the correct (quantitative) risk ranking by assigning the higher qualitative risk category to the quantitatively smaller risk. Thus, a decisionmaker who uses the risk matrix to make decisions would have a lower expected utility in this case than one who ignores the risk matrix information and makes decisions randomly, for example, by tossing a fair coin”
    • “Second, use of a risk matrix to categorize risks is not always better than—or even as good as—purely random decision making. Thus, the common assumption that risk matrices, although imprecise, do some good in helping to focus attention on the most serious problems and in screening out less serious problems is not necessarily justified.”
    • “In summary, the results and examples in this article suggest a need for caution in using risk matrices. Risk matrices do not necessarily support good (e.g., better-than-random) risk management decisions and effective allocations of limited management attention and resources.”
  • Kosovac, Anna. “Risk Perceptions & Decision-Making in the Water Industry.” PhD research paper, 2019.

  • Kosovac, Anna, Brian Davidson, and Hector Malano. “Are We Objective? A Study into the Effectiveness of Risk Measurement in the Water Industry.” Sustainability 11, no. 5 (February 28, 2019): 1279.

    • “Considering the range and wide distribution of risk assessment scores within this study, one cannot explicitly state that the risk matrix assessment process is objective. The risk rating is; thus, dependent upon the person who undertakes the assessment, despite the risk assessor being provided with identical information to other assessors and using the same organizational risk assessment process.”
  • Hubbard, D., and D. Evans. “Problems with Scoring Methods and Ordinal Scales in Risk Assessment.” IBM Journal of Research and Development 54, no. 3 (May 2010): 2:1–2:10.

    • “Risk assessment methods based on scoring methods that rate the severity of each risk factor on an ordinal scale are widely used and frequently perceived by users to have value. We argue that this perceived benefit is probably illusory in most cases.”
    • “When these diverse kinds of evidence are combined, the case against scoring methods is difficult to deny. In addition to the evidence against the value of scoring methods, there is also a lack of good evidence in their favor”
  • Talbot, Julian. “What’s Right with Risk Matrices?” July 31, 2018.