“Go up there and spend some time.” The rather underspecified work assignment was exactly what I had been hoping for. For me, it was a chance to do safety differently.

A month earlier I had carried out an incident investigation on the same site. I had used ICAM (Incident Cause Analysis Method), a commonly used investigation methodology in the Australian construction industry. But using a tool designed to find holes in barriers had left me troubled. The findings had been interesting and potentially far-reaching, but in hindsight I started to question whether it was a meaningful way to improve safety. Sure, if the recommendations were addressed, the same event was unlikely to recur. But the result of the ICAM was problematic for a number of reasons:

  • The report had focused on negatives. Seeking to explain failures, we had found errors, breaches and flawed designs. As such, we had based our improvement efforts on past failings.
  • If plugging holes is our best chance to change things for the better, the path ahead will be painfully slow.
  • Unearthing organisational flaws and substandard behaviours had generated defensiveness. I have no doubt that whatever energy the report had induced would, at best, evaporate as soon as the recommended actions had been closed out.

Struggling with these thoughts, an idea of my former supervisor Erik Hollnagel came to mind: WYLFIWYF. What You Look For Is What You Find. Looking for negatives we are bound to find negatives. And the negatives had triggered negative reactions.

What are the consequences of a constant focus on negatives? For every accident we discover more holes to be plugged (how many can there be?). With every investigation we create more and more focus on negatives. If we constantly ask questions about the negative, isn’t it likely that that is the direction the system will grow? Isn’t it likely that we’ll see more defensive posturing, people covering their asses, less organisational learning, more workarounds, a growing gap between work as imagined and work as actually carried out, more rules and procedures, and stricter policing of these rules? Is it possible that current safety improvement efforts contribute to creating a negative, tedious culture around safety, in which engagement is increasingly difficult?

It was when the same project had yet another incident a couple of weeks later that my boss sent me away with the instruction to have a broad look at safety in the project. The nasty aftertaste of the previous investigation fuelled my ambitions to put a rather different set of ideas into practice.

I wanted to see if it was possible to assess safety and explain incidents based on a more ‘ecological view’ of organisations (instead of organisations as problems to be solved or perfected):

  • Was it at all possible to understand normality as the cause, instead of errors or deviations?
  • What would happen if the organisation was looked at with the presumption that it was a living dynamic system having constructive capacity?
  • How could a safety intervention be designed that created an upward spiral of performance, energy, and growth for the project?

So it was with a sense of adventure that I began to ‘spend some time on site’. I participated in pre-start meetings, various administrative get-togethers, and some safety assessment events. But more importantly, I was fed a constant stream of project members from whom I could learn what it was like to work in the project. I asked them to describe a normal day and give examples of when work was difficult. And to tell me about days when work was a delight.

I learnt that people relied heavily on their colleagues. Access to information was a critical factor in whether people succeeded or not. I learnt about things, small and large, that put stress on people and made it difficult to cope. Most importantly, however, everyone I met was passionate about doing a good job, yet the system sometimes made success (too) difficult.

Gradually I began to see that some of the people’s difficulties in carrying out work were connected to the basic setup of the project. For example, since the site was relatively remote it had been necessary to organise work around a fly-in fly-out roster. This in turn put strain on communication, handovers, and the necessity of covering for colleagues on leave. Some of the plant and equipment used in the project were new and unique to this part of the business. Consequently, there were a lot of ambiguities and ad-hoc learning. When I started to look at the incidents that had occurred as events that could have gone right, it became apparent that there were many constraints and conditions the safety systems were not set up to capture.

Here was an area for the organisation to grow, to become better, and around which it could start constructing its future. The challenge became how to relay this view to the project management group. After cutting through the usual ‘more compliance’ arguments, we managed to convey that nothing was missing, but that there was an opportunity for the organisation to set itself up to make it easier for people to achieve their goals. My boss and I suggested that some sort of mechanism for looking for improvement opportunities would help improve not only safety but also productivity and efficiency. (Who can say no to that?)

The practical consequences? Well, the guys on site started running with the notion, and some 20 ideas came out of the discussion. Some of them have already been put into practice. For example, the following day/week the project made a promise to its workforce that any improvement suggestion raised would be answered within 24 hours, and suggestions would be actively solicited before and after every shift. I have no idea where it will go from here, or whether it will prevent further incidents. But if WYLFIWYF works, then I’m convinced that this particular project is likely to spiral into success.


  1. Ben Kirkbride

    Hi Daniel,

    I am currently researching a university assignment re: accident investigation, and your post activated my Reticular Activating System (RAS). Interestingly, I had made a list of pros and cons of the ICAM process to assist in building my case, and, as you have also highlighted, the cons outweighed the pros.

    I would also like to praise you on the ‘different’ approach you took the second time around. This is exactly what it’s going to take to change safety: the time to ‘pause’, reflect on your approach and challenge the norm in a positive, innovative way of thinking.

    Awesome post, cheers.


  2. mikebehm

    I really like your approach Daniel, but I do need some help in understanding.

    Your approach to the investigation was one of an open mind, collaboratively working with your colleagues, in an environment of high trust and mutual respect. This is wonderful and is the epitome of where safety can evolve. However, I need help…

    You say, “using a tool designed to find holes in barriers had left me troubled” and “For every accident we discover more holes to be plugged (how many can there be?).”

    You then say “Gradually I began to see that some of the people’s difficulties…”, and list some examples, like the site was relatively remote causing strain, etc., new equipment, and ambiguities in learning.

    Aren’t those negatives…holes in barriers to be filled? Couldn’t you learn the same things from a well-conducted, open-minded incident investigation, collaboratively working with your colleagues, in an environment of high trust and mutual respect?

    My point here is not to disrespect your great example of organizational learning, but my confusion stems from trying to understand how an incident model like ICAM squashes the investigator’s ability to think differently about safety. Not setting out to look for negatives but yet finding “holes” is just too subtle a difference for me to discount the value of good incident investigation models used in an environment of high trust and respect.

    In my mind, you engaged the organization in learning and they are improving. If ICAM isn’t the right tool, then fine. Aren’t there tools and incident investigation models available that allow for more evolved thinking as you described here? I think there are. I also think that people’s perceptions, their misperceptions, and the organizational culture heavily influence the WYLFIWYF, which is why I’m a bit dismayed that you would not have found the same organizational learning using the ICAM or any tool for that matter…you Daniel, as the lead investigator, have a huge impact on the process…driving the questioning, the thinking, the relationships, the respect, all of which are more powerful than any tool…

    I recently published an article entitled, “Application of the Loughborough Construction Accident Causation model: a framework for organizational learning” in Construction Management & Economics so the topic is very recent to me. I believe organizational learning is key to safety, and our approach to building trust and respect within organizations is key to that learning…incident investigation being one small piece of the bigger picture.

    Sorry for this long-winded post, and I hope I do not come off as being negative…I’m just trying to understand…thank you.

  3. Daniel Hummerdal (Post author)


    Your comments and reflections are greatly appreciated as they allow me to calibrate/improve the argument.

    The very first sentence of your paper says it ‘is essential to learn from previous accidents and near misses’. While I agree that there is value in such a focus on things gone wrong, it is also the first ‘step’ into ‘traditional’ ‘safety’ =)

    Relying on past (negative) events to prevent incidents means using only a minuscule portion of what happens (in organisations with safety professionals, fewer than 1 in 10,000 events). Can we assume, just because 99.999% of events go right, that they have different causes than accidents/that we should learn only from negatives? Does success happen because of a lack of holes? How do we know? The point is, using negatives to inform prevention rests on an assumption that accidents have separate causes from success/things that go well. But using a safety vocabulary based on negatives creates many problems (as outlined in the post) and may be a dying strategy/not able to yield more benefits. But I’m happy to debate whether accidents and successes have separate causes!

    Nor is it meaningful to talk about holes and imperfections as the cause of success (that’d be semantically confusing), which is why I want to leave those categories behind and start using words/labels that reflect variability, continua, and more dynamic and changing conditions. Perhaps it is not very clear in the post (thank you) that if we step away from looking at negatives, holes and imperfections, we can start looking at more holistic reasons for why things succeed, and at how this capacity sometimes is stressed/not enough. Furthermore, a focus on making things better (not just less bad) has a different set of psychological ramifications.

    Perhaps holes and imperfections can be meaningful categories in worlds where things are static, and we have the opportunity to perfect. I’m yet to see such an environment, but some factories with linear processes perhaps come close. Construction is too open/dynamic to fit that description.

    When situations are dynamic/open, when organisations use cutting-edge technology or undertake pioneering work, a perfection lens will be too reactive (we can’t wait for accidents to occur). So to improve safety we must look at other things than negatives and holes – but what do we look at when things are going well? I’m convinced that we must look at those situations that put stress on people, situations in which the functions and strategies people rely on for success are threatened. We can look for ‘shearing points’, as Eder Henriqson says.

    So perhaps it’s not so much the ICAM, but rather the act of investigating a negative that biases the investigator!? As to whether there are other tools out there with a more holistic/dynamic approach: there are STAMP and FRAM, but I see little opportunity for my organisation to start using those (the tools are still far too advanced). I’m keen to learn where I can find good alternatives. But as far as I know there’s an opportunity to start building a ‘success based’ accident investigation tool (we’d probably need to define success first, but it could be many things: resilience, adaptiveness, motivation, structure). You game?


  4. Zinta Satins

    I believe that we should accept that there are holes in our ‘barriers’. Instead of trying to identify them and fill them we should be helping our people to recognise them when they appear and to respond. By responding I mean making an informed decision as to what course of action to take. To me, adaptive safety is about supporting our people to make informed decisions. It follows that to make an informed decision you need access to information (be it through knowledge & experience, seeking it out or gained through observation). What if we viewed an incident as evidence that people were not supported in being adaptive, in making an informed decision? What if, when investigating incidents, instead of trying to identify holes we try to identify why people did not recognise the holes or were unable to make an informed decision as to how to respond when they did? Maybe instead of changing the language we use in applying traditional safety processes (incident investigation, risk management etc) we should break the mold and come up with different processes.

  5. mikebehm

    I agree with everything in your second post…that’s why I follow this blog…

    I’m confused because your initial blog started out with an “incident” and then a second “incident”.

    You used that second negative to learn, gain trust, show mutual respect, but also to find holes, deficiencies, and learning opportunities…and this is great and I applaud that…it’s part of thinking differently. But why would two incidents (reminder, you started with a negative in both cases) in the same organization with the same investigator turn out so differently…that is puzzling…an incident investigation isn’t plugging holes…it’s organizational learning…and yes, of course, I totally agree with your post #2, but post #2 doesn’t relate (in my mind) to the first blog b/c you start with the negative…and that’s fine.

    1. Daniel Hummerdal (Post author)

      I did not investigate the second incident. It was, however, the trigger for ‘organising learning’ about safety in the project, in which a more ‘positive’ focus was used. I think it is difficult, or at least unusual, to apply such a focus when the task is to explain incidents (who wants to be responsible for negatives?)

      Is there a difference in asking “why did the accident occur?”, and “how could events have gone right?”

      1. Zinta Satins

        To me it is the difference between “how could events have gone right” and “how could we have supported our people in being successful”. More about understanding why people were unable to make an informed decision. Less about what decisions they could have made.

  6. Andrew Townsend

    Mike – Are you talking from practical experience or from academic theory/research? My experience, from having carried out many injury/incident investigations and having studied UK prosecutions, is that objective ‘prevention of re-occurrence’ does not happen. Instead there is allocation of blame/punishment. This produces habitual blame-avoidance as the response. It does not even “plug the holes”.

    Trust, mutual respect and investigation in the real world are mutually exclusive!

    1. Rodney Currey

      Well, all good perspectives, but what about this? My experience relates to this: if a knife is used to cut things, then why are we surprised that it cut someone?
      A knife cuts all things, so shouldn’t we be surprised that it didn’t cut someone? Why isn’t it designed not to cut humans?
      We create and do things knowing they’re not 100% fail-safe, like a knife that, used in a particular way, results in injury. Then we try to make sure we use it in a manner that doesn’t cause us harm. But all humans make mistakes, and we forget or don’t concentrate or whatever. We know all this, and yet we “hope” someone won’t get hurt using a knife.

      Then when someone gets hurt we think we can prevent it happening again by conducting an investigation to find out what? Don’t we already know? The holes are always there! So do we roll the dice and hope the holes don’t line up, ever? I reckon eventually they have to!
