In 1960, shortly after his election, President Kennedy asked Robert McNamara to become secretary of defense in his new cabinet. McNamara, known as a star and a whiz-kid, had been president of the Ford Motor Company for all of five weeks, so it took a bit of cajoling. But he eventually joined the administration in 1961, taking with him the modernism of Ford’s production lines. A few years into his tenure, with Vietnam taking up ever more resources and political airtime, McNamara wanted to know from his top generals how to measure progress in the war. He told General Westmoreland that he wanted to see a graph that would tell the defense secretary whether they were winning or losing (McMaster, 1997). Westmoreland did as he was asked, although he produced two graphs:
One graph showed the enemy body count. Under pressure to show progress (and knowing that the political fortunes of their masters, promotions for themselves and their comrades, decorations, rest-and-recreation decisions, and resourcing all depended on it), those who did the accounting made sure that not a single dead enemy body was missed. Soon, the line between soldiers and civilians had blurred completely: all dead bodies became enemy personnel. Implausibly, the total number of enemy dead soon exceeded the known strength of the Viet Cong and the North Vietnamese Army combined. Civilian casualties mounted, with the frustrations and incentives even leading to massacres. In the field, of course, the ‘enemy’ was nowhere near all dead, and certainly not defeated.
The other graph showed a measure of civilian sympathies for the US and against communism. It tracked the effects of the so-called Winning Hearts And Minds campaign (or WHAM), which had divvied Vietnam up into 12,000 hamlets, each of which was categorized as ‘pacified,’ ‘contested,’ or ‘hostile.’ Pressure to show McNamara progress here was relentless too. Militias on the side of the Americans were invented on paper. Incidents of insurgent activity or hostile takeovers of hamlets were ignored. In an ambiguous, messy and protracted war, it wasn’t difficult to skew numbers in favor of making the graph look good. It soon seemed that the entire countryside had become pacified.
The progress charts demanded by McNamara had produced a monstrous auditing system (Scott, 2012). It erased all meaningful difference and distinction: a dead body was a dead body. It could be counted, and that was all that counted. And a pacified hamlet was a pacified hamlet—with all the cross-currents, fluidities and complexities of a shredded social order collapsed into a single number on a chart. McNamara’s system may well have played its own small part in prolonging the war and stifling meaningful, rational discourse about its merits and demerits. The political backdrop, as painted by McMaster in Dereliction of Duty (1997), was one of civilian leaders obsessed with their reputations, who had leaned further into military operational matters than turned out to be healthy.
We can safely say that what gets measured, gets manipulated. Do company leaders and board members—concerned with worker safety but also their own reputation and liability—who promote a hearts and minds campaign for safety, even know this ugly history? Today, ‘hearts and minds’ is an often-used cover for behavioral safety interventions. Behavioral safety programs tend to point the finger away from leadership. Behavior-based safety, after all, allows leaders to say that safety problems are created by all those other people; that those other people, the workers, are the problem. Even when the workers have been given everything they need to do the right thing, their behaviors still get the organization in trouble. Targeting the worker conveniently means that the manager, director or board is not the target.
Without any serious thought, questioning or critique, those promoting behavioral safety use a version of Heinrich’s highly dubious ‘finding’ that 88% of occurrences are caused by ‘man failure’ (Heinrich, Petersen, & Roos, 1980) or human error:
The popularity of this approach stems in part from the widely held view that ‘human factors’ are the cause of the great majority of accidents … As the general manager of Dupont Australia once said, ‘In our experience, 95 per cent of accidents occur because of the acts of people. They do something they’re not supposed to do and are trained not to do, but they do it anyway’ (Hopkins, 2006, p. 585).
Behavior-based safety interventions typically center around observation of behaviors, and feedback to those performing them. The four basic steps of a typical behavior-based program are:
- Define the correct behaviors that eliminate unsafe acts and injuries;
- Train all personnel in these behaviors;
- Measure that personnel are indeed behaving correctly;
- Reward worker compliance with these correct behaviors.
Hearts and minds programs offer a variety of ways to achieve these steps. Some involve penalties, demerits or disincentives for undesirable behaviors. Many involve surveillance of worker behavior, either by people or by technology (e.g. cameras but also computerized monitoring systems installed on equipment, like vehicles). Others involve coaching and guidance. All assume that there is one best way to do a job, and that straying from that one best way can be bad for safety. Today, many programs are also supportive and training-oriented. Some deliberately pursue worker commitment to injury- and incident-free behavior, akin to achieving a religious conversion. Yet overall, “its emphasis is undeniably on behavior modification and that is how it is understood by many of its advocates as well as its critics” (Hopkins, 2006, p. 585). Indeed, a focus on behavior modification and conversion of the heart and mind are obvious from the way behavior-based safety programs are promoted:
Reinforcement occurs when a consequence that follows a behaviour makes it more likely that the behaviour will occur again in the future … For example, a toolbox talk addressing correct manual handling techniques might result in correct techniques on the day of the talk; however, over time employees will revert to old practices. This is because nothing has occurred after their correct behaviour to indicate that it is correct, or that it has benefitted the individual or the organisation to be so safety-conscious (HSA, 2013, p. 7).
Does any of it actually work? Without offering any studies or results, the U.S. Department of Energy boasted a few years ago that “Intensifying the Behavior Based Safety (BBS) observation cycle will often prevent an injury or accident” (DOE, 2002, p. 7). It continued that:
BBS is a process that provides organizations the opportunity to move to a higher level of safety excellence by promoting proactive responding to leading indicators that are statistically valid, building ownership, trust, and unity across the team, and developing empowerment opportunities which relate to employee safety. Secondly, but equally as important to the organizational culture, BBS provides line management the opportunity to prove and demonstrate their core values on the production floor (p. 9).
Where does the strident language come from? Achieving “higher levels of safety excellence” would need to be demonstrated if the DOE really wanted its people (who include many nuclear scientists and physicists) to be convinced. But it offers no such demonstration anywhere. It doesn’t even bother to define a “higher level of safety excellence.” Just saying it doesn’t make it so. Even if we could agree on what a higher level of safety excellence actually means (other than tedious, tiresome management-speak), asserting it is not evidence.
Studies that seem to show the success of behavioral interventions are exclusively published by those who have an active economic stake in promoting the practice (Geller, 2001; Krause & Seymour, 1999). And none of these studies meet even the most rudimentary criteria of scientific quality: that of a control condition so that the results of the intervention can be contrasted against a part of the organization where the intervention wasn’t done, or done differently.
The safety-scientific literature still offers no objective, reliable, valid empirical study or data to prove the efficacy of moral-behavioral safety interventions. Put simply: there’s no evidence that they work. And this should not be surprising. Even Frank Bird and Heinrich remind us that the conditions under which worker behaviors emerge are to be traced back to the organization. Fatigue, pressure and insufficient knowledge, for example, come from somewhere. They are consequences of organizational trade-offs and decisions, rather than causes of trouble brought to the workplace by frontline operators. Organizational conditions set the stage for these behaviors, enabling and even inviting them. Think of Bird’s ‘pre-contact control’ (Bird & Germain, 1985). Is the organization doing enough of that? Think of, for example:
- Adequately resourcing people’s work with appropriate knowledge and tools;
- Recognizing and reducing goal conflicts;
- Instituting safety-by-design.
And most hideously, behavior-based safety has done nothing to reduce the risk of fatalities. Workers at a Texas chemical plant had to strictly follow driving and walking regulations on site, but then four of them—two of them brothers—died in a toxic gas release in a building on that same site (Hlavaty, Hassan, & Norris, 2014). Their deaths had nothing to do with any of the purported ‘safety behaviors’ on the site’s signs and posters. And in a supreme and tragic irony, workers at a copper mine in Indonesia were attending a compulsory behavioral safety course in an underground training facility when the roof of the tunnel in which they were gathered collapsed, killing 28 miners and injuring 10 (Santhebennur, 2013).
Behavioral safety practices can even contribute to a climate where the safety conversation dries up, and thus allows fatality and other risks to build up in a way that is unrecognized by the organization:
A behavior-based approach blames workers themselves for job injuries and illnesses, and drives both injury reporting and hazard reporting underground. If injuries aren’t reported, the hazards contributing to those injuries go unidentified and unaddressed. Injured workers may not get the care they need, and medical costs get shifted from workers compensation (paid for by employers) to workers’ health insurance (where workers can get saddled with increased costs). In addition, if a worker is trained to observe and identify fellow workers’ ‘unsafe acts,’ he or she will report ‘you’re not lifting properly’ rather than ‘the job needs to be redesigned.’ (Frederick & Lessin, 2000, p. 5)
If you want to change behavior, don’t target behavior. Target the conditions under which it takes place. Those conditions are not likely the worker’s responsibility. Or only in part. Think for a moment about whose responsibility the creation of those conditions is in your organization. And then take your aim there.
In the end, no indictment of ‘hearts and minds’ is perhaps as powerful as the title of McMaster’s book: “Dereliction of Duty.” Leaders—in any organization—who surrender to a worldview where the little guy is the problem, and where the little guy needs a change of heart and mind, are derelict in their duty.
Bird, R. E., & Germain, G. L. (1985). Practical loss control leadership. Loganville, GA: International Loss Control Institute.
DOE. (2002). The Department of Energy Behavior Based Safety Process: Volume 1, Summary of Behavior Based Safety (DOE Handbook 11/05/02). Washington, DC: Department of Energy.
Frederick, J., & Lessin, N. (2000). The rise of behavioural-based safety programmes. Multinational Monitor, 21, 11-17.
Geller, E. S. (2001). Working safe: How to help people actively care for health and safety. Boca Raton, FL: CRC Press.
Heinrich, H. W., Petersen, D., & Roos, N. (1980). Industrial accident prevention (5th edition). New York: McGraw-Hill Book Company.
Hlavaty, C., Hassan, A., & Norris, M. (2014, November 16). Investigation begins into 4 workers’ deaths at La Porte plant. Houston Chronicle, 1-4.
Hopkins, A. (2006). What are we to make of safe behaviour programs? Safety Science, 44, 583-597.
HSA. (2013). Behavior Based Safety Guide. Dublin: Health and Safety Authority.
Krause, T. R., & Seymour, K. J. (1999). Long-term evaluation of a behavior based method for improving safety performance: A meta-analysis of 73 interrupted time-series replications. Safety Science, 32, 1-18.
McMaster, H. R. (1997). Dereliction of duty: Lyndon Johnson, Robert McNamara, the Joint Chiefs of Staff, and the lies that led to Vietnam. New York: Harper Perennial.
Santhebennur, M. (2013). Picking up the pieces: Indonesian mine collapse. Australian Mining, 25, 4-5.
Nice read. Thanks for telling the flip side of BBS programs.
Some great examples. One of the challenges with BBS is that, because it involves people and directly engages the workforce, it comes with a misunderstanding that they are already treating people as the solution. But without the associated paradigm shift away from ‘the system is right’, organisations are limiting themselves by a superficial understanding that convinces them they’re doing the right thing.
Sometimes we forget that man failure or human error are also applicable to equipment or systems. They are designed and deployed by “humans” (men and women). But as you said, it looks easier to blame the weakest link of the chain.
Great piece Mr Dekker. Thank you on behalf of countless safety professionals who have been saying this for years as we lived and fought the very fallacies you described in this masterpiece. This article summarises BBS extremely accurately and it deserves to be posted far and wide.
Even Tom Krause published a LinkedIn article saying that BBS is dead. I would like Sidney to comment on Dominic Cooper’s 47,000 construction BBS study that did show tremendous safety improvements using the observation methodology. http://www.behavioural-safety.com/articles/Safety_Leadership_in_Construction.pdf
I can offer two comments re the Cooper paper:
1. Confirmation bias. As Mr Dekker points out in his article “Studies that seem to show the success of behavioral interventions are exclusively published by those who have an active economic stake in promoting the practice”.
2. TRIR is a bogus metric on the output side, since there are multiple layers of probability (luck) sandwiched between an at-risk act and an injury receiving the threshold level of medical intervention required for it to become a data point.
Very informative article, and very true.
Thank you, I needed this right now! I wonder… what do we call the kind of hybrid behaviour/self awareness programs these days that talk about the brain, and how attitudes impact safety and behaviour, and how we can encourage the little guy to be more conscious of this (so he doesn’t inadvertently cause/fall victim to a problem). Are they the same BS as BBS, or can they be something more constructive if positioned correctly and supported the right way?
In Canada, psychological safety has a published standard that explicitly lists the psycho-social factors in the workplace that negatively impact mental health. I am suspicious of organizations that, instead of getting a handle on these known workplace stressors, push out “mindfulness” training to employees. As mental health standards gain momentum, I see this trend as the same one I have been involved with for the last 21 years. As an ergonomist, I have more often than not been asked to provide “ergonomics training” to employees, versus improving the design of the work, workstation, or workplace. There is nothing wrong with mindfulness training, but it is a guise just like BBS that redirects responsibility for workplace health and safety to the lowly worker.
Also, be wary of human performance programs that go on and on and on about human error. If a program presents the human as the fallible, unreliable component in the work because of human susceptibility to bias, confusion, error, etc. – but then goes on to say “don’t blame the poor human” – it is just another BBS dressed up in “victim” language.
BBS can be a “Big Brother Survey” and can be misused as an “explanation” by management to distance themselves from errors in culture, systems and of course: human behaviour.
It can also be a slightly overeager attempt to “understand” by management who need to satisfy Board needs to improve.
It can also be a useful tool, a means, to enlighten groups of staff including managers in order to improve circumstances.
Do individuals feel under pressure – quite possibly – especially when it is obvious who the anonymous survey is referring to! Likewise some groups will feel the scrutiny as well.
So where do “management” fit into this? Are they hapless/witless/insecure/ignorant/desperate/ambitious/understanding/concerned/communicative/cowardly/inspiring?
Well lunchtime over so till next time.
Flavor of the Month
It is an inescapable fact that many of today’s investigators will interpret their current project in terms of today’s popular concept (flavor of the month) without considering previous popular concepts (flavors of the month) and without considering many of the inescapables that should apply. Closely associated with flavor of the month is a tendency to select something to criticize, perhaps a “devil of the month”?
Observation: Most, if not all of today’s popular concepts (flavors of the month) as well as previous popular concepts (flavors of the month) have something of value that is lost when the popular concept is abandoned.
Observation: Most, if not all of today’s popular concepts (flavors of the month) as well as previous popular concepts (flavors of the month) have been used to justify or obscure incompetence, lack of integrity, noncompliance, and/or lack of transparency.
Observation: Those who criticize today’s popular concepts (flavors of the month) are often shunned, ostracized, blacklisted, or the like. “The Emperor’s New Clothes” is still a relevant story.
Quotation: “Anything that works will be used in more and more challenging situations until it results in a fiasco.” –Bill Corcoran’s rendering of the Generalized Peter Principle
Observation: The Capability Maturity Model (CMM) is a popular concept (flavor of the month) being used at least in the software field, but can be expected to be applied widely in other fields.
Quotation: “There is a Gresham’s Law of safety and quality culture. The bad drives out the good.” –Bill Corcoran
Observation: The U.S. Navy’s misuse of “situational awareness” to obscure the causation of a mid-air collision is an example of flavor-of-the-month errors. The 2015 death of Lieutenant Nathan Poloski, USN, in an over-water mid-air collision was not explained by “situational awareness.” Was “situational awareness” used to justify or obscure incompetence, lack of integrity, noncompliance, and/or lack of transparency?
Observation: Non-rigorous investigation approaches, such as “Critiques”, “Fact Finding Meetings”, “Apparent Cause Analysis”, and the like persist and are seemingly impervious to criticism based on their failures.
Quotation: “When you notice that the horse you are riding is dead, dismount.” Plains Indian Native American wisdom
Observation: “Normalization of Deviance” is a formerly popular concept (flavor of the month) that was probably allowed to fade too soon; perhaps it created discomfort in organizations that allowed normalization of deviance.
Observation: “Theory of Constraints” is a formerly popular concept (flavor of the month) that seemed to compete with those of Deming and the Japanese schools. It implies the very useful concept of the limiting weakness.
Observation: Resilience is a popular concept. “Resilience” is used both as a physical phenomenon and as a metaphor.
Observation: “Just Culture” is still a popular concept (flavor of the month) in some communities.
Observation: “Bowtie Risk Management” is still a popular concept (flavor of the month) in some communities.
Observation: “Behavior Based Safety (BBS)” is still a popular concept (flavor of the month), possibly because it sounds plausible and appeals to managers who want to shift the burden of safety.
Observation: When previous popular concepts (flavors of the month) are abandoned at least some of their proceduralized requirements are often left in place even when not adding value, thus resulting in bloated, difficult to follow, and/or self-contradictory instructions, procedures, and drawings.
Observation: It has been fashionable to give popular concepts (flavors of the month) Japanese names such as Kaizen, Poka Yoke (aka Mistake Proofing), and the like.
Quotation: “Change is the mother of trouble.” –Unknown for now
Quotation: “Trouble is the mother of change.” –Unknown for now
Quotation: “There is no such thing as a free lunch.” –Milton Friedman and others
Observation: “Reduction of Cumulative Impact” as promoted in the U.S. nuclear power community may become a popular concept (flavor of the month).
Observation: “Tough Love”, including “Boot Camps”, was a popular concept (flavor of the month) for managing aberrant behavior in several contexts.
God save us from MBAs.
Well, perhaps that’s not fair. Some of my best friends may possibly have friends who are MBAs. And anyway the problem isn’t MBAs per se but rather the fetishistic fondness that bureaucrats have for ‘metrics’ and (shudder) ‘dashboards.’ When talking about individual and organizational behavior, I like to point out that there is no metric that people won’t try to game. Dr. Dekker says it better – what gets measured gets manipulated. Besides being more concise than mine, Dekker’s maxim has the advantage of taking a slap at Deming’s observation that “if you can’t measure it, you can’t manage it.” Except that Deming never said that. What he did say was this:
“It is wrong to suppose that if you can’t measure it, you can’t manage it – a costly myth.”
The insidious effects of an infatuation with ‘metrics’ during the Vietnam War underscores just how costly myths can be. Nicely done, Dr. Dekker.
Very good paper. Spot on in so many ways. The problem is that BBS has been debunked since the early 1990s, and indisputably since the late 1990s. I wrote a Professional Safety article, “Safety’s Silver Bullet,” specifically on this point in 1997; then, in an effort to improve the measurement of safety, Dr. Brooks Carder and I wrote a book, “Measurement Matters,” suggesting alternate measures, in the late 1990s. My point is not that you should read our work, only that with plenty of supportive data against BBS and only anecdotal or non-repeatable stories supporting it, there are still a lot of people buying the easy way out in business.
Your best point was that BBS not only does nothing to prevent high-consequence, low-probability events, it actually misdirects efforts to find and address the system and organizational issues that might have a chance of preventing them.
Keep up the fight. It will likely take another 20 years to get rid of this bogus process. Skinner did a lot to sell this idea. His ideas are still around and should have been relegated to mythology.
Don’t give up the fight.
Skinner’s ideas and BBS are amazingly appealing to so many senior decision makers, as they fundamentally support deeply entrenched ‘people at fault’ beliefs, which rest on a variety of factors such as the illusion of free will, the fundamental attribution error and the just-world hypothesis. As a commercial product, they have been incredibly successful by hitting the right tunes and professing to be able to ‘fix’ people. It is fascinating to observe the lengths to which some organizations have gone to manufacture evidence and in turn justify what is often a huge expenditure for little, if any, benefit. In many ways, this is just as important a driver of skewed injury data as the flawed zero-harm philosophy.
We are still seeing variants of BBS floating around and being sold and I am afraid this will continue for as long as senior people are harboring the same old ideas of people being the main causes of accidents and in need of ‘fixing’.
An excellent article on this subject …
This is another article that falls under Mr Dekker’s statement in his article “Studies that seem to show the success of behavioral interventions are exclusively published by those who have an active economic stake in promoting the practice”.
I want to add to the discussion forum that behaviour is not just impacted by the physical conditions of work, but is also impacted, maybe more so, by the psycho-social conditions. We all know that behaviour persists if it is being perpetuated by a positive reinforcer – so let’s not forget the importance of managers’/leaders’ actions. Employees take their cues from leaders and act accordingly; leaders have a profound effect on behaviour (social learning theory). No matter what the “physical” conditions, people will work safely if the leader acknowledges it and does what he/she can do to manage the risk. Workers know work is not 100% risk-free, and workers know when the manager is really trying to manage those conditions versus when the manager is not.
I have seen the difference and maybe you have too. Every work site has a different set of physical conditions, all have dynamic risks, but the “safe” work sites had an awesome manager – one who is able to manage the goal conflicts that often occur, one who acts to espouse the value of safety. The safety practitioner seemed irrelevant to the psycho-social conditions. It was all about management.
So, let’s take BBS and turn it onto management. Let’s have workers observe and measure management behaviour (oh, in consultation with management of course, lol). I think that might be a worthwhile endeavour for a BBS program.
Great article, Mr Dekker, thank you.
The current flavor of the month, Human Factors, is doing a fair job of displacing BBS and is an improvement in that it at least accepts that people are people, and are thus fallible. HF also deems causality to reside in large part in conditions set up by the organization rather than the individual somehow being “at fault”, except in a very limited number of cases. HF >> BBS but is by no means the silver bullet.
I can’t help wondering how ‘human factors’ could be seen as a candidate to become the next safety management fad. Human factors, as I understand it, is a body of knowledge and a collection of techniques that have been applied to good effect for over 50 years in a variety of contexts, including military and commercial aviation, nuclear power, process control, personal computing, and more recently (belatedly) medical technology (and practice) – to name only a few. I’d hate to think there was a cottage industry growing up out there where people with questionable qualifications are hawking human factors ‘programs’ to managers looking to check a box. (But, if in fact there is – how do I get in on it?)
Sloppy writing on my part, I should have referred specifically to the human failure facet of Human Factors. And yes, in my world there is a more than cottage industry around this.
Very good article.
To err is to be human…..
Thank you for this thought provoking piece with its powerful examples. There are some lessons to learn here in the process safety management field where Tier 3 and 4 process safety performance measures start to look at the underlying behaviours or attempt to measure “cultural” indicators.
A very thought-provoking article, and about time this was shared with all industries. The marine industry has toyed with this for some time, and now some industry bodies make it “mandatory” for companies to have some form of BBS. I really wish all were well informed about the true nature of BBS and that it is a myth that the worker is the error in the system.
As Dr Todd Conklin says, it is easier for regulators to make a change that has true impact rather than change the worker who is at the pointy end of the stick.
As Mr Dekker says, “Studies that seem to show the success of behavioral interventions are exclusively published by those who have an active economic stake in promoting the practice.”
The problem is that those who debunk BBS are sometimes drowned by the loud noise of those who promote it.