If your car dashboard were run by a safety performance system


I recently pondered a question: what would be the result if your car dashboard's performance indicators (fuel level, speed, and oil pressure) operated under the mechanisms of a safety performance system?

Over the past two decades, safety resourcing, focus, and intervention have been subjugated by a performance management dogma contending that “safety” can only be present as an expression of its measurement in a zero-harm paradigm. Within this paradigm, and from my observations across numerous organisations and industries, I can only conclude that the amount of effort invested in the measurement of safety data is held to correlate directly with the amount of “safety” an organisation possesses. I have observed organisations that produce dashboard data in excess of 70 pages for their leadership teams. I am not sure how anyone meaningfully interprets that volume of data.

When reviewing safety performance metrics, I am struck by the absence of poor performance data: compliance audits routinely return 95-100% compliance. Similarly, culture surveys reflect high levels of engagement, particularly following the latest initiative to engage the workforce. LTIFR and TRIFR metrics celebrate the virtual extinction of incidents, and operational risk assessments imply that operational risk has been treated and mitigated.

It is an interesting feature of these safety performance metrics that there is an overwhelming presence of dashboard indicators reinforcing the great state of the respective organisations’ safety performance. In such conditions, significant consequence events arrive as a “surprise” aberration and are then attributed to an unauthorised deviation by a non-conforming individual or team. Such a performance system fails to acknowledge that these variations (deviations) are a constant and accepted norm in the delivery of work, where there are always varying conditions and constraints.

Other performance metrics used by organisations, such as those for production or financial measurement, have characteristics fundamentally different from those of safety metrics. Financial performance data will contain a variety of measures producing consistent output in which both good and poor performance is illustrated and expected. This data accurately conveys actual performance, allowing the management team to adjust strategy, resourcing, and intervention, avoiding unexpected “surprise” conditions and events.

In large and complex organisations, there is a consistent trend associated with fatal and catastrophic events. These events are often experienced at facilities delivering industry-best safety performance for incident rates. Notable examples include Macondo, Texas City, and Esso Longford. Dekker and Pitzer (1) propose that the more attention that flows to keeping TRIFR down, the more a climate and culture of risk secrecy may be created. This, in turn, puts downward pressure on honesty, openness, and sharing, and erodes a culture of trust and learning, opening an organisation up to the risk of a safety disaster.

The current safety performance paradigm consequently appeases the safety anxiety of the leadership we have imbued with due diligence responsibility. In the phraseology of Sidney Dekker, we produce LGIs (Looking Good Indexes). These dashboard safety indicators confer the desired safety result rather than reflecting the true operational condition. We design and transact metrics to produce the desired result.

The misdirection of safety performance measurement can be illustrated from research in healthcare. In our hospital system, hand hygiene is a primary strategy to prevent hospital-acquired infection. In most cases, when a patient is admitted to hospital, the greatest risk to the patient is not the disease or condition they arrive with, but potential infection from those that treat them. A recent study in the MJA (2) reported that there are 165,000 cases of hospital-acquired infection in Australia each year and that there are 6,000 deaths from sepsis in Australian and New Zealand ICUs each year; hence the interest of clinical governance in compliance rates for hand hygiene. To assess compliance, hospitals routinely have infection control staff perform observational compliance assessments. These assessments routinely report compliance rates in the high 90s. This compliance rate features in clinical governance dashboards to provide assurance about the state of hand hygiene.

A recent study in BMJ Quality & Safety on hand hygiene compliance (3) examined the accuracy of this compliance assessment method. In this study, the authors used a real-time location system (RTLS) to record all uses of alcohol-based hand rub and soap for 8 months in two units of an academic acute care hospital. The RTLS also tracked the movement of hospital hand hygiene auditors. Rates of hand hygiene events per dispenser per hour as measured by the RTLS were compared for dispensers within sight of auditors and those not exposed to auditors.

The study found that hand hygiene event rates were approximately threefold higher in hallways within eyesight of an auditor than when no auditor was visible, and that the increase occurred after the auditors’ arrival. The system behind the dashboard was delivering the desired result, not the actual condition.

If we were to challenge safety compliance metrics, I suggest that we would find a similar result with much of the audit, compliance, and competence testing we do. I know of organisations that require compulsory online induction training to be completed before a worker can enter the worksite. Ingenious workplaces have assigned the job of undertaking the induction on behalf of other workers to a specialist who can complete the test quickly and efficiently. The resulting dashboard subsequently reports 100% compliance for all approved workers.

I propose that if our cars’ dashboards were run under the current paradigm for safety performance dashboards, the result would be the following.

  • Your fuel gauge would always read full, regardless of how much fuel was actually in the tank.
  • The oil pressure would always be optimum.
  • Your speed would always indicate the correct speed for the zone you were driving in.

As the driver of this vehicle, you would inevitably be shocked when the fuel tank ran dry and the engine sputtered to a halt without warning. You would be flummoxed when your licence was unexpectedly revoked for multiple speeding infractions in one day. You would be panic-stricken when the engine unexpectedly seized on a rail crossing after the car had lost its engine oil through a leak. I can only hope that safety specialists do not build my car dashboard.
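The dashboard described above can be caricatured in a few lines of code. The sketch below is purely illustrative; every class, method, and value in it is invented for the analogy, showing a display layer that reports the desired condition regardless of what the sensors say.

```python
# A playful, hypothetical sketch of a dashboard built under the
# "Looking Good Index" paradigm: whatever the sensors actually report,
# the display shows the desired value.

class LookingGoodDashboard:
    """Displays the desired condition, not the actual one."""

    def __init__(self, speed_limit_kmh: int):
        self.speed_limit_kmh = speed_limit_kmh

    def fuel_gauge(self, actual_litres: float, tank_capacity: float) -> str:
        # The actual fuel level is ignored entirely.
        return "FULL"

    def oil_pressure(self, actual_kpa: float) -> str:
        # Any reading, including zero, is reported as optimal.
        return "OPTIMAL"

    def speedometer(self, actual_kmh: float) -> int:
        # Always indicates the posted limit for the current zone.
        return self.speed_limit_kmh


dash = LookingGoodDashboard(speed_limit_kmh=60)
print(dash.fuel_gauge(actual_litres=0.5, tank_capacity=50.0))  # FULL
print(dash.oil_pressure(actual_kpa=0.0))                       # OPTIMAL
print(dash.speedometer(actual_kmh=112.0))                      # 60
```

The point of the caricature is that the driver receives no signal that diverges from the target state, so every failure arrives as a "surprise".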

The question that I pose to our safety profession is this: Is the purpose of measurement to prove the good safety work of leadership and safety teams, or to engage management with the real operational conditional knowledge that truly reflects how work is done?


  1. Dekker, S. W. A., & Pitzer, C. (2016). Examining the asymptote in safety progress: A literature review. International Journal of Occupational Safety and Ergonomics, 22(1), 57-65.
  2. Heldens, M., Schout, M., Hammond, N. E., Bass, F., Delaney, A., & Finfer, S. R. (2018). Sepsis incidence and mortality are underestimated in Australian intensive care unit administrative data. Medical Journal of Australia, 209(6), 255-260.
  3. Srigley, J. A., et al. (2014). Quantification of the Hawthorne effect in hand hygiene compliance monitoring using an electronic monitoring system: A retrospective cohort study. BMJ Quality & Safety, 23, 974-980.


  1. James Norman

    Bob, great link! Kelvin, I would be interested to hear more about your thoughts on overcoming this situation. Dashboards, graphs, metrics, KSI/KPIs are widely accepted as standard safety management across many industries. This is largely derived from regulatory necessity. In a perfect future world, how would safety be managed given our data rich / information poor reality?

  2. carrilloconsultants

    Good article and well written, Kelvin. I have seen many of the unintended results from focusing on managing the system. I think “spinning” is another word for it. One additional observation would be that some leaders do facilitate good results using the same metrics, etc. Of course, they seem to use a “secret sauce” as well. What could that secret sauce be?

        1. Kelvin Genn Post author

          Rosa, I look forward to the read! It has become clearer to me that relationships are central to successful work and therefore successful safety. I think as safety practitioners we have been subjugated by safety and leadership culture dogma, and lost the connection to people and the fostering of prospering relationships. Congratulations on the book.

  3. Jeff Dalto

    Interesting article and argument here. I’ve never seen a safety management system or its dashboard, but it’s easy enough to get your point that the dashboard is set up to make the safety program look superior instead of to show valuable information.

    The signal and the noise. Even if it’s good-looking noise in a nice dashboard.

    Good article–thanks.

    1. Kelvin Genn Post author

      Thanks Jeff. I am just hoping to create engagement and critical thinking about what we measure and its intended and unintended consequences.

  4. teamsworking

    Good read and a nice analogy Kelvin. The LGI has been my experience in a range of organisational disciplines. Polanyi’s wisdom on explicit and tacit knowledge is as relevant as ever on this. This is the opportunity for work in the risk space. The creation of methods, processes and tools that:
    – uncover the real risks and challenges present
    – create the psychologically safe organisational conditions where this surfacing can occur

    1. carrilloconsultants

      Just watching some of the videos analyzing the Deepwater Horizon disaster, I was reminded that the information was there. Employees did feel safe to speak up, but once more, power did not listen. So we need to add a third step to your list; I am still pondering what to call it.

      1. Craig Marriott

        This is a good insight – it may simply be confirmation bias, corporate inertia/overload, or something else. I’ve been thinking recently about our tendency to not take action even when we know it is required. This can range from significant issues such as Macondo, to our inability to routinely put into practice even basic, well-understood principles. I read recently that a senior officer in the navy wanted torpedo nets around Pearl Harbor, having identified the vulnerability to attack, but it never got implemented – so these things are not necessarily about hierarchy or authority. If we could crack that third step, it would be a major improvement opportunity.

        1. carrilloconsultants

          @Craig Marriott, Mike Williams, the last one to jump off Deepwater Horizon, was quoted: “Most of these accidents are not accidents,” he said. “They are decision problems. Someone made an incorrect decision or incorrect assumption or incorrect data analysis and believed what they were doing. We need to train people to make better decisions.” He also said that all the decisions were made with the objective of speed, not safety. So I’m leaning towards making step three, “Enforce a safety component in decision making protocol.” What do you think?

          1. Craig Marriott

            I fully agree with the idea of decision making problems. The difficulty, of course, is knowing in advance whether the decision is a good one or a bad one. I’m not sure that a safety specific step in decision making would necessarily change a decision made without the benefit of hindsight to guide it. Although routine sensitivity analysis looking for cliff-edge effects might give some greater insight into the potential for disaster.
            As he suggested, it is perhaps more about improving the decision making process across the board – understanding local rationality, biases etc – or perhaps including more ‘devil’s advocate’ processes, what-if reviews and so on. Although this is less a third step and more an enabling approach. But many decisions, particularly field-based ones, are taken outside of any formal decision making process so maybe an enabling approach is the way to go, with a more formal step in earlier phase decision-making (e.g. during design) where decisions are taken with more time and space for reflection and analysis.
            It remains a very tricky balance to give unexpected or unlikely outcomes due consideration without at the same time catastrophizing every possible decision in a way that would hamstring routine operations.
            I wish I had an answer! But interesting food for thought.

            1. Jim Whiting

              Improving the decision making process can be achieved by including a requirement for a recorded risk assessment by at least two knowledge experts and one “intelligent” ignorant to ask the apparently “stupid” questions that make the experts question their assumptions and biases. Another requirement would be that the risk assessors write down their assessments independently first, then vigorously discuss any wide discrepancies between those first estimates.

              1. carrilloconsultants

                Jim, how do you assess when to use the process or how long it should go? The big disasters like Fukushima tsunami wall, the Challenger decision and so forth were guided by political and financial pressures. So that has to be acknowledged as well in the process.

  5. Jim Whiting

    Good article, but surely the answer to your final question has to be “BOTH”. Not an either/or. Just because many traditional performance measures have problems, they – or their future validated versions – will always need to be a part of safety risk management.

    1. Kelvin Genn Post author

      I agree that measurement is important to what we do; however, I am challenging the performance of measurement as a perfunctory exercise, which is the overwhelming current state. For the executives in the organisations we serve, we need to cease providing data that we know is constrained by the disincentive to report bad news, with what remains massaged and interpreted to meet the expected state. I will know that we are in a better space when the measures we use fluctuate constantly, and poor performance values are celebrated as good system performance: opportunities to learn, understand and engage.

  6. Kelvin Genn Post author

    Thanks. I agree that psychological safety is central to a robust, well-functioning safety state for an organisation.

  7. Goran

    The author raises great points. Firstly, to measure anything related to safety and risk, the right things need to be measured, such as the proactive business, systems and leadership inputs which create safety and reliability, rather than becoming captivated by cosmetics, outcomes and the consequences of failures.
    Secondly, to measure anything useful, organisations need to treat safety-related data with the same importance, seriousness and attention as any other critical business data, rather than as an ‘easily tweaked’ and ‘looking good’ factor, as the author rightfully points out. This in itself paints a picture of the real organisational commitment to safety and leadership.

    Many organisations are obsessed with measuring, especially numerical measurement, which stems from various STEM disciplines. However, one of the main issues with this thinking is that not everything in safety and risk can be measured in numbers, not even close. This very notion seems to be paralysing for many people today and will be a source of frustration for a long time yet, as we continue to battle reliance on consequence-based, number-driven safety.

  8. Tony Cartwright

    My observation is that it is not the “safety profession” that creates and drives these metrics but boards, CEOs, general managers, etc. Safety departments are then charged with the responsibility of gathering and reporting on the data. I’ve been in the safety game for 20-odd years, and performance measurement has been a consistent bugbear of many safety practitioners. Until boards and the C-suite start to understand these concepts, we will continue to chase our tails.

    1. Kelvin Genn Post author

      Tony, I can’t say I disagree. It may be a chicken-and-egg issue. I have been working in safety for 35 years. Safety professionals worked hard to create measures that could get board traction, and over the years that effort succeeded. The measures have now taken on a life of their own with boards and the C-suite. We now need to re-imagine and present narrative and analytics that are meaningful to them, whilst being real and connected to how work is done. This is the challenge before us.

  9. John Evans

    A well-written piece, and I agree with the content. As some other commenters have said, as much as we as safety professionals may dislike metrics, we are never going to get away from them, because management and clients will always want to measure our work and ‘know’ the status of safety in the project and/or organization. So what metrics should we use in a Safety Differently/Safety-II environment?

    1. Gary Wong

      A different approach is to stop measuring outcomes and focus on monitoring impact, just like a car dashboard does. It monitors the present by providing strong signals (you’re running low on gas) or weak signals (something doesn’t feel right, so I’m flashing a warning light).

      In the workplace, our signals can be stories collected in real-time (https://www.youtube.com/watch?v=ugtCr81C8H4). We can also collect Safety-II stories which describe a key decision made to successfully adjust performance due to unexpected varying conditions. Stories provide context or what Tricia Wang calls “thick” data.

      However, it doesn’t matter how great the insight system is if Management chooses to do nothing or remains blinded to old measurement paradigms.

  10. Ed

    When I started my current role in 2016, the monthly report going to our senior leadership (for a company operating in 60 countries) was a 54-page PowerPoint deck. Within a year I cut it to 9 slides, and now it’s 7. The only feedback I’ve gotten through that entire process was from one of our company presidents, who did a reply-all saying “love the new format”. I’m fortunate to have a CEO who actively supports simplification, and my board of directors reports now have only 2 pages of simple charts and tables, with any other slides focused on initiatives we’ve undertaken (such as Learning Teams).

    My point is that sometimes you just have to drop stuff and see who notices. You might be surprised.

    I also eliminated Incident Rate reduction targets from my global team’s bonus scheme, and although not everyone in the global leadership followed suit, it’s a step that I had the authority to make (and yes, my team struggled with the change at first).

    1. Ryan

      I think you are absolutely correct. We should always be asking ourselves what would happen if we stopped doing this, not just in relation to metrics but in all aspects of our profession. Three simple questions can aid our approach:
      1. What
      2. So what
      3. Now what
      This approach helps to weed out the myriad activities that don’t add any real value, and helps us to focus on what is important.

  11. Andrew Hughes

    Kelvin, spot on as always. I believe that measurement that focuses on negative outcomes, and particularly separating these outcomes into green and red, creates an operational paralysis. A lack of negative outcomes over time creates supervisors and managers who become actively delusional about the state of safety, and they lose their situational awareness. It also creates an atmosphere of fear among workers, where stopping the line is tantamount to treason. I have seen that intervention in this paralysis has always been a welcome relief to both parties, as if there is some invisible barrier that prevents each of them from taking action. Removing the paralysis by measuring positive outcomes, and giving people the tools to communicate more openly without the threat of penalty or loss of incentive, is the key.
