Is ‘human error’ the handicap of human factors? A discussion among human factors specialists.

Following most major accidents, one phrase is almost guaranteed to headline in the popular press: ‘human error’. The concept is also popular in the ergonomics and human factors (EHF) discipline and profession; it is probably among the most profitable in terms of research and consultancy dollars. While seductively simple to the layperson, it comes with a variety of meanings and interpretations with respect to causation and culpability. With its evocative associations, synonyms, and position in our own models, ‘human error’ is a key anchoring object in everyday and specialist narratives. Despite our efforts to look at what lies beneath, explanations (in industry, the media and the judiciary) often float back to what people can see – with hindsight – at the surface. Our scientists’, designers’ and analysts’ perspectives are overpowered by ‘common sense’, populist, management and legal perspectives. And the pervasive nature of the term clouds other ways of seeing that recognise the need for adjustments, variability and trade-offs in performance.

Might ‘human error’ be the handicap of human factors? (My reason for asking this question is detailed here.) I wondered if we might be unwittingly contributing to an unhelpful populist explanation of accident causation, by reason of our use (and possible over-use) of the concept as a fundamental explanatory device or as a key part of a causal narrative. At Ergonomics and Human Factors 2014, over 70 people from various countries and industries attended a workshop discussion with this as the main question. The discussion was recorded with the agreement of the participants, and some of their insights are given in this post. The quotes are grouped depending on whether they relate to industry or EHF perspectives (not in the order in which they were made during the discussion).

Reflections on industry perspectives

The discussion on ‘human error’ started by reflecting on the limited view in industry, and attempts by human factors specialists to expand this view. EHF specialists usually promote the narrative of ‘human error’ as a symptom (e.g. of ‘deeper trouble’), or as something with systemic ‘causes’, for instance: “My experience in major hazard sites in <country>, on the mainland – oil refineries, chemical plants, chemical processors – is that they have a very basic understanding of human error … and generally speaking what we see in safety reports is statements like ‘we have a competent and motivated workforce’ – full stop … So they assume that the human beings that they have on their site won’t make an error… I think industry is quite immature in its understanding of error and perhaps the reason why we focus on error is to try and get them to start seeing error as the beginning of the story rather than the end!” When asked whether this works, the participant responded: “That’s a very good question. Perhaps what I can’t see is any other alternative.” Whether there is any other alternative, or whether one is needed, is a moot point. While there are many synonyms and subcategories (e.g. inattentiveness), they can end up reinforcing the same sorts of connotations (e.g. ‘could do better’) and solutions (e.g. reminders or more training). I’ll come back to this later.

The narrative of human error as ‘a symptom of deeper trouble’ (a phrase often used by Sidney Dekker) raises questions about definition. The ‘deeper trouble’ view makes sense when we are referring to ‘human error’ with respect to hazards, adverse consequences, system defences, and related causes or system conditions. This notion can be helpful, though on the surface it sounds little different to James Reason’s Swiss cheese model. It makes less sense from an everyday perspective where ‘human error’ is used in a more mundane way (like putting the milk back in the cupboard, rather than the fridge), or with reference to psychological constructs: expectations, memory, habits, and so on. This psychological view of ‘human error’ is not so much a symptom of ‘deeper trouble’ as a by-product of normal human experience. Hearing or seeing what we expect, forgetting, and doing something out of habit are completely normal. And normally, there are no adverse consequences. Mostly our performance is aided. If we didn’t forget, we would be unable to process information and make decisions. If we didn’t have habits and expectations that influence our performance, we could not work so efficiently. It really depends on the context, and the context changes. Where such normal human experiences are implicated in disasters, ‘human error’ takes on a very different tone with very different implications. This is a key source of tension in the human factors and safety community concerning the use and abuse of ‘human error’. We are often talking about different things in different contexts without making these differences clear.

Another participant noted the difficulty of encouraging even the ‘deeper trouble’ view, with reference to an industry occupational health research project: “In <country’s> industry occupational health, we have a research project on human error in conventional workplaces – not safety-critical places, but for example maintenance trucks and stock logistics. What we found there is that … when you define that something happened because of ‘human error’ then there is nothing you can do, so that is the atmosphere in those companies … nothing can be done because it was the ‘human error’ and they don’t understand or try to find an explanation of what is behind that human error … so in that sense it is a very dangerous term to use in those conventional workplaces. It stops everything. It stops the discussion of what might be wrong in the system.” This hints at a defensive reaction. While we are happy to discuss ‘human errors’ in our everyday, rather uneventful lives, the discussion is very different when we are part of a complex and hazardous system, when lives and livelihoods are at stake.

Where an industry response was cited, it was limited according to another: “…in the railway if there was a human error, the only answer was more training. And the dangerous thing about that is that when you have found a solution you send the driver for more training and you think you’ve solved the problem…” Many of us are familiar with this sort of response. It seems better to do something, even if it is ineffective, than to do nothing.

The possible over-use of the notion of ‘human error’ was attributed partly to regulatory pressure, “what it does is make the priority only, only human error and if it isn’t about human error then they don’t do anything about it, and what we should be doing is going back to where we started which is fixing the system.” A reasonable portion of the work of EHF specialists is influenced by regulation as this will drive industry demand – first things first.

It was also suggested that ‘human error’ can be a political device and serves a business need: “I get the feeling it might be politically expedient to use ‘human error’ and I think maybe some of these helicopter crashes. I mean, if it was a mechanical error, before anything has been proved or the investigation has been done, all these vehicles would have to be grounded and people would be stuck in places. So at least there is maybe an interim judgement that human error may not be necessarily correct but an important step in the process of moving forward.”

Similarly, a participant suggested that ‘human error’ is a convenient, if flawed, shorthand: “Ultimately society doesn’t want an answer it can’t understand. So maybe it’s what society needs at the moment. It’s a shorthand version of what we all know is inadequate.” This appears to reflect the use of ‘human error’ by the media and politicians, for whom it is a simple and convenient pseudo-explanation that can be included in headlines and short accounts.

The foreseeability of ‘human error’ came up in the discussion with regards to accident analysis and risk analysis: “… if you are reporting human error you should be able to qualify whether it was a foreseeable human error or an entirely unforeseeable human error. And that would probably lead to a different perspective. Because we spend an awful lot of time trying to quantify and qualify human error.” I wondered whether we might be caught in a hindsight trap, because human error is by its nature often definable only in hindsight. The participant responded, “But there is usually, in high hazard industries, a risk assessment, so if it is documented somewhere in that risk assessment that there is a probability that human error could lead to an incident, the reporting of that incident should reflect on that risk analysis.” This might assume you can define all eventualities, but might at least make a link between accident investigation and risk assessment; a link that is currently often missing.

The discussion also addressed emotive connotations and defensive reactions: “The FAA has a snitch patch in their system which reports infringements of separation … the pilot in question will then be questioned and asked what happened. I came across a report in a private pilots’ journal in America, which said if you are rung up by the FAA, hang up your telephone and call your lawyer. This is the direct result of an assumption that it must be somebody’s fault. It also makes it damn hard to investigate incidents.” This comment again illustrates how populist notions of ‘human error’ seem to be wedded to notions of responsibility and blame, and so trigger defensive reactions that have systemic effects (e.g. on reporting). For several decades, human factors and safety specialists have tried to crowbar these ideas apart.

A pilot participant went on to mention a sociological aspect: “There is a sociological aspect to this with regards to your profession because if you are a pilot, you hold pilots in high esteem, you put them on pedestals, we put our lives in their hands, and the same with surgeons. But if you talk about a pilot as being ‘dangerous’ or making mistakes, that is the end. And you rarely hear us in a forum talking about mistakes we make … I listened to a lecture in a forum similar to this and at the end of it the chaps were saying they’d spent 40 years each in flying and the message is if your pilots are making mistakes, get rid of them. There is this standard that we have, if you are a ‘dangerous’ surgeon or a ‘dangerous’ pilot for whatever reason, that’s the end of your career. So we are actually our own worst enemies because they’ve never made a mistake in their lives and they’ll tell you where you went wrong and there are some professional pilots in the network who are very quick to point out where people went wrong. I don’t think we do ourselves any favours. But then, no one wants to fly with a pilot who makes mistakes. And in my years of flying, I’ve made lots of mistakes.”

I thought this was a really interesting reflection. Culture seems to influence our willingness to talk about adverse events, which can mean covering up our activity when things don’t work out as we expect. Again, notions of blame (and personal attributes – fecklessness, inattentiveness, poor judgement, riskiness, and so on) seem tied to actions or decisions not-as-planned.

The notion of ‘just culture’ – an industry response to reduce blame associated with so-called honest mistakes – was also mentioned. One participant quipped: “And the other thing is just culture. We don’t blame anyone around here but the MD does like to know who we are not blaming.”

At a societal level, a participant reflected that, “Are all domains subject to this? I was thinking for instance of the financial crisis. We don’t tend to use or have seen the word human error associated with that … We have a long history of seeing humans as being not in a state of grace. And we need to find ways to express that as a society. Have other domains achieved this diffusing, and not stuck on this particular phrase?”

It seems to me that few have. The concept (and its label) is so embedded that there is little room for any other, except in some cases where the situation is too complicated, with no one person at the front line (finance being an example). I tend to think that it is our human instinct to want to find someone to blame, or at least hold responsible. In the case of the Spanish train disaster, I mentioned the argument here that perhaps we are trying as a society to absolve ourselves of the demands that we are making. We demand that somebody drives a train at very high speed. We demand that that person slows down at a critical point in time otherwise there will be disastrous consequences. We don’t want to spend the time or money to ensure that this cannot happen by design. It is predictable that it will happen, if we were to do some experimental studies or enough simulations, or to consult expert opinion (the driver himself predicted the accident: “I told those guys at safety that it was dangerous and one day we would lose concentration and pay for it. I already told those guys at safety that it was very dangerous! We are human and this can happen to us. This curve is inhuman!”). And then when it does go wrong we are not going to put the blame on ourselves. The person who will take the blame is the one to whom we can most easily attach the label of ‘human error’: the driver. If there were no ‘driver’ (as in the case of finance), then we would need to look at the situation very differently.

But some organisations are thinking carefully about language. One forward-thinking organisation was cited as implementing a just culture initiative, and in doing so changed the language: “They latched onto this idea that language is important, that words matter. ‘Investigation’, they said, is a very criminal type of word to use so we are going to move to ‘review’, and would you please come and talk to our incident reviewers about the bit that still remains that is human. Because we know we shouldn’t blame people but they are still part of the system. What should we look for? … So I think there is a broad spectrum of people understanding some of this and moving to it but not being able to articulate exactly what that means or apply it to themselves.” It is also my observation that there is a body of receptive people, especially front-line staff (e.g. pilots and air traffic controllers), who have no trouble understanding alternative concepts such as performance variability – a very straightforward and obvious notion that people have to adjust and vary their performance in order to meet variable demand in variable conditions. These adjustments are normally wanted, but sometimes are not, often depending on the consequences.

Reflections on EHF perspectives

Whether simplistic notions of ‘human error’ are actually an issue for EHF was subject to some discussion. One view was that, since EHF takes a systems perspective, EHF practitioners do not contribute to the misuse of ‘human error’: “Chapanis’s original work said pilot error is normally designer error. Reason said what we call human error is the product of a system that has permitted the continuation of certain practices that seem to lead to what we call human error. That was repeated by Dekker about a decade later. So I don’t think anybody in the discipline really believes that … Sure, the other disciplines do, and people like the FAA and manufacturers are only too happy to blame pilots because then they don’t have to do anything … If you take that sociotechnical systems view then you see failure of systems as an emergent property of that system, which is due to complex interactions … I thought the profession had moved on and we were talking more of a system view nowadays and that the individual decomposition view is the ergonomics of the 1980s. The 2010 view is very much a systems perspective.” I agree with this view, and I am sure most others would too, but the point is not that EHF specialists have a simplistic view of ‘human error’, but that we perhaps do not understand its complex set of connotations, especially in the context of accidents.

Another interesting perspective was that perhaps even a systems focus could have unintended consequences: “… people in the media complain that everything is too clean and our children are always sick because they are never exposed to the germs. Are we, with our role in the narrative, by making the system seem so infallible, actually promoting the pigeonhole of human error? It encourages the belief that ‘this system is totally safe’ in the sales pitch, and actually it turns out that it is not, and then you are looking for someone to blame; rather than blaming the system, you’re blaming the user.”

But for some, the connotations of the term were seen as problematic and hard to undo: “‘Human error’ – it is automatically a negative. It’s like ‘near miss’, ‘whistleblowing’, anything like that, it’s the attachment that anybody puts to that phrase or word… most of us are probably converted to where we need to be going, however we are dealing with society and we are also dealing with the safety world … Attach a label to it and it becomes an end state.” Here we started to address the connotations of the phrase itself.

Noting the issue of language, one participant actively tried to find an alternative causal narrative: “What I did was, don’t use the word ‘human error’ … this is simply a social evaluation of the behaviour after the fact. It has nothing to do with the action as it was. ‘Human error’ is a normative evaluation of behaviour; it is not a description of what happened. And it doesn’t have any causal validity at all. So how can we promote a different narrative? … [it] is very difficult but it relies on communication ability … We need to describe what happened and why it happened. That is something different from saying that it is an error. And finding a good way to do this is very difficult.” This comment neatly summarised the ‘new view’ of ‘human error’, and recognised that we don’t always have to use this device to anchor our explanations. We can just talk about what happened.

And finally, it appears that it is not just ‘human error’ that is problematic for us EHF practitioners. “I find that ‘human factors’ itself is a really difficult term. In the healthcare industry and when I talk about human factors I try to explain that human factors is the system and about the tools and technology and the environment. But that doesn’t get across straight away because human factors are the factors of the human. And I think our dialogue around that is really challenging…”

It was fascinating for me to hear this range of views on ‘human error’ from practitioners. A key source of our trouble seems to be that we are still without a definition, and we are talking about different things. Normally, we in EHF think of ‘human error’ quite innocently. This is the classic psychologist’s perspective. But in other quarters, this view is not shared. We would rather like the media, industry, and the judiciary to share our enlightened view, and we keep on pushing that agenda. But, with some exceptions in the most developed industries and regulators, I am afraid that the HAL 9000 explanation of human error dominates the common narrative (and those of us situated in more enlightened contexts sometimes assume the rest of the world is similar). What we can do is make it clearer what we mean when we are talking about ‘human error’, or perhaps just describe a situation instead of by-passing explanation via an ambiguous label. This is not a ‘new view’; it is, in fact, a rather old one, stemming from Erik Hollnagel’s ‘No view’ (see his 1983 paper here), and much earlier in philosophy.

Perhaps, by introducing other non-binary vocabulary and ways of thinking – and focusing more on performance adjustments, performance variability, trade-offs and compromises – we can help to shape the popular narrative, moving ‘the human’ from a hazard to a resource necessary for system flexibility.

Further information

Shorrock, S. (2013). ‘Human error’: the handicap of human factors, safety and justice. Hindsight magazine, Winter, 32-37.

Shorrock, S. (2014). Life after ‘human error’. Keynote address, Velocity Europe 2014, Barcelona.

Shorrock, S., Leonhardt, J., Licu, T., & Peters, C. (2015). Systems thinking for safety: Ten principles (A White Paper). EUROCONTROL.

This is an expanded version of an article in The Ergonomist, April 2015, originally blogged at

#EHF2015 in April includes interactive workshops on Is safety culture still a thing?, Safety-I, Safety-II and human factors, and many others.

  1. Rodney Currey

    “What seems to lie at the heart of this issue is the institutional dilemma of blame”. As Douglas (1992) reminds us, danger and blame have been ubiquitous features of societies over the years as one means of defending favoured institutional arrangements. Pidgeon and O’Leary (2000).

    Does the organisation trust or allow each individual operator (fitter, surgeon, pilot, machinist and so on) to take the specified (or agreed) performance goals and translate them appropriately into process and action rules for every eventuality they meet; or does the management (or even the regulator) centrally make the translation at a design stage and hand down to the operators binding specific action rules for all situations they can envisage? Hale, Borys and Else (2012, p.16).

  2. Ron Butcher

    Thanks Steven for an outstanding discussion. I also very much enjoyed the linked discussion on human error and psychology. I’m sure at some level we ascribe that label as a statement of denial that we could possibly be as fragile or subject to irrationality as the next person is.

    These discussions are certainly important as we continue to facilitate the evolution of our profession into the 21st Century. I think we need to begin considering the lessons we’re learning from the behavioral economists that describe many of our ‘predictably irrational’ behaviors as Dan Ariely describes them. Human Error and loss of Situational Awareness are handy buckets to point to when we don’t want to explore any deeper or consider our shortfall at putting some time, space or barricade between the loss of situational awareness (whatever that is) or the commitment of a perceived error that resulted, at least in this instance, in an adverse outcome.

    We’re all likely more comfortable in the illusion of control than we are in accepting the general frailty of our species against self-induced hazards. In the end, I think we’re getting closer to understanding the limitations of our understanding and that, I think, will be a good starting point.

    Thanks again for some great posts Steven. I very much enjoy the discourse.

    Best regards,

  3. Rodney Currey

    On the issue of language as you describe, I think this is an important aspect that is often overlooked and linked to preconceived perceptions, especially those attached to human variability. To describe a conclusion as a ‘human error’ conjures up accountability and responsibility. To describe it as a ‘human variability’ reduces the perception of blame and retribution to a socially accepted conclusion that better describes what it really is.

    One other such word that gets my attention is ‘violate’ or ‘violation’. This is often attached to someone breaking a safety rule or committing a ‘human variability’. This language again just sets up a blame culture in which punishment quickly follows. The word violate better describes a deliberate or intentional act, not an aspect of human variability.

    1. Ron Butcher

      Thanks for your comments Rodney. Excellent points very well taken. Our goal is discovery and we need to remain mindful (mindfulness) so that our language doesn’t create barriers to that process.

  4. Rodney Currey

    I need to add that when we use a term like ‘human error’ we tend to stop our investigation or thoughts there. And when we use another terminology we continue to pursue the issue further to another resolve that would serve us in a more effective way.

  5. Rob Long

    I never use the language of ‘human factors’ or ‘human error’, it is about as meaningful as ‘be careful’ or ‘common sense’. Safety has a long way to go before it gets the sense of semiotics.

    1. Ron Butcher

      Excellent point Dr. Long. These are functional elements of the operational process and we very much need to remain focused on that to minimize the influences of bias, in all its forms. Thanks for all your work on Risk and for your feedback in this discussion.
