The discussion was about pre-start meetings. A couple of workers had spoken up, describing the meetings as ineffective and confusing: after the pre-starts, people could spend considerable time trying to figure out what they were supposed to do, and where, that particular day. What could be done to improve the effectiveness of these gatherings?
As I listened to a group of 5–6 employees, I learned about the limited time available, the quantity of information to be transmitted to a large number of recipients, the media used to facilitate the message, and the differences in effectiveness from day to day. But I also heard about simultaneous activities sometimes taking place in the interest of efficiency (e.g. breathalyser tests and crib arrangements). Furthermore, the room where the pre-starts took place was dirty, with rather dim lights and noisy air-conditioning.
The discussion was beautifully divergent and gradually expanding as people made use of each other’s comments to enrich the common understanding. People had the opportunity to create their own interpretation of what went on. Their normal organisational roles were temporarily suspended and the group made good use of the available perspectives. This polyvocality gave a rich but indeterminate description of interconnected conditions that coincidentally gave birth to the difficulties at hand (and probably much more). The very messiness of the speculative, reflective and bar-like conversation fuelled the discussion, if anything. We were engaging the ‘wisdom of the crowd’ to understand work difficulties. I thought it was pretty cool.
Then, a supervisor voiced his perspective: “The cause of this is that people are lazy. They simply don’t listen. They sit there half asleep, hard hats over their eyes. Chatting to each other. They don’t pay attention. Anyone can see that. We need to make sure that there are some consequences from not listening.”
There it was – a quick and sudden zooming in on the behavioural component. A crude, clumsy and antiquated kind of psychology. One which takes human agents as the starting point of trouble – as if behaviours, thoughts, or attitudes, somehow originated in some human core, a nucleus completely disconnected from the complex entanglements of the outer world.
The group fell silent. It was as if someone had water-bombed a fire just as it was beginning to crackle and glow. The search was over. The initial discussion, which had gone up and out, gradually charting the messy difficulty, had been overtaken by a simple search down and in. As such, the seed of trouble had been found in the murky and shoddy morals of those lowest in the hierarchy: the lazy human operators.
As I travelled back to the office I thought about what happens when we try to make sense of things that are messy, diffuse and disordered. From early on, we are taught that causes bring about effects. First the cause, then the effect. But in most cases we first see an effect that creates a need for explanation and a search for the cause. So it is, indeed, an effect-cause relationship. First we notice that information is not conveyed effectively during pre-start meetings (the effect), then we look for a cause (lazy workers). And when we find it, the search is over. (As such, the search for root causes is both naïve and misdirected, away from the observer who ‘finds’ such things.)
However, singling out causal nodes (lazy workers), or locating the sources of difficulties within an interconnected and interdependent web, is not only tricky but ultimately an arbitrary task. As historian Peter Galison (2000) points out in his take on accident investigations, there is something unstable about causes: a failure to pay attention (by operators) can equally be understood as a failure to make noticeable (by supervisors). And a failure to perform a given task can be considered an organisational failure to provide supervision, training, or instructional clarity. Put differently, there is nothing in an issue itself that decides where the roots are to be found. A failure can thus be described as an operator failure, organisational failure, oversight failure, and so on. ‘Causes’ are not found, but constructed.
And people prefer to explain events as something they have control over. Whatever explanation is formed to account for something will, despite pledges to rationality and methodological objectivity, be an expression of different interest groups’ capacity to extend their influence by suppressing, or overwhelming, competing accounts (Brown, 2000). ‘Causes’ are thus an invention by us, for us, that allows us to project our will to power (Nietzsche, 1967).
As such, any account of accidents, failure, difficulty or indeed anything, is inherently political. Whatever truth we believe in feeds a particular social order, and simultaneously marginalizes another one. If the ‘cause’ is a workforce that’s lazy, then there is no need to change anything about what the supervisors do. Or, if dumb people can be made the problem, then there is little need to invest in design solutions.
But back to the pre-start discussion. How could an innovative, collaborative, exploratory, and open discussion, ripe with potential for contextual improvement, collapse so easily when confronted with a behavioural discourse? How can we stay away from singular causal constructions of the human as the problem? How do we empower people to bring forth and improve their local context at work, instead of constantly being reconfigured in a behavioural discourse that points to the limits of human performance and motivation?
Stephen Healy suggests that causalism can be countered by other ways of knowing. He argues that we should favour ways of knowing in which issues of context, process and procedure take precedence over metaphysical, or for example inner, abstractions. Emphasising what is between people, rather than what is within people, offers a ‘wider knowledge politics’ and opens up the many futures that are available to us (Healy, 2003, p. 689).
I’m increasingly keen on using a ‘deliberate imprecision’ (Law, 2004) in describing work conditions and processes. For example, the role of the safety professional could be about highlighting when work is difficult and, instead of providing a solution, calling for ideas on how to understand the issue and make things better. The uncertainty and ambiguity provide space for imagination. By fostering this polyvocality, more people can participate with their experiences and perspectives. This way we move away from the singularity of human causalism, toward increasingly rich descriptions of the nature of situations. This way, we have a much better opportunity to explore the complexities of work, resulting in a more integrated understanding, and new ways to embrace the future.
References:
Brown, A. D. (2000). Making sense of inquiry sensemaking. Journal of Management Studies, 37, 45–75.
Healy, S. (2003). Epistemological pluralism and the ‘politics of choice’. Futures, 35, 689–701.
Galison, P. (2000). An accident of history. In P. Galison & A. Roland (Eds.), Atmospheric flight in the twentieth century (pp. 3–44). Dordrecht, NL: Kluwer Academic.
Law, J. (2004). After method: Mess in social science research. London: Routledge.
Nietzsche, F. W. (1967). The will to power (W. Kaufmann & R. J. Hollingdale, Trans.). New York: Vintage Books.
Are we kidding ourselves trying to find a true root cause of these effects? I often wonder about practices for establishing causes. In science, establishing a cause is extremely difficult. Even in controlled studies there are typically numerous limitations, and years later there can be fundamental shifts in paradigms. So, in a workplace, the comment ‘workers are just lazy’ is not only a weak explanation, it’s probably completely useless. Actually, not useless. It’s perfectly useful in lowering morale and nullifying the progress you’re making.
A few thoughts that I had while reading:
– The role of leadership and accountability is not given the prominence it requires in understanding what drives performance of any kind. In the example given, the behaviour was allowed to occur and continue.
– The discussion about ‘deliberate imprecision’ reminds me of the process of developing a JSA or work procedure. There has been much discussion around approaching this development differently, but that is actually how it was always intended to be done: through wide consultation, collaboration and coordination, with the risk owners taking control and ownership of their risk. The role of ‘safety’ in many organisations has moved away from this ‘true’ path to a place where ‘safety’ sits in a box and works in relative isolation. From what I have observed in my own experience, this occurs as a result of organisational and individual factors (line management as well as the safety professional): a lack of commitment by the leaders involved (and risk owners), a lack of time available for the people who should own it and drive it, and a seemingly increasing misunderstanding of what ‘safety’s’ role actually is and should be. This failure, for me, in a number of ways comes back to a discussion of leadership and accountability.
Daniel, I think part of this is also driven by the myth of optimisation. People actually believe they make decisions on a full cup of knowledge, rather than an approximate level of knowledge. By maintaining a simplistic sense of ‘facts’, rather than a subjective sense of problems as ‘wicked’, someone is able to read and attribute cause as both objectively knowable and aligned with their perception of reality. The last thing people want to hear is that risk is so complex that it cannot be fully known. The truth is that the way we make decisions is astoundingly subjective and based on very little knowledge. Most of our decision making is unconscious, and we rationalise our decisions after the event. The univocal sense of knowing helps people objectify risk as a mechanistic process, and then easily brand those who fail as lazy fools and idiots. The nature of human fallibility is not a weakness but a strength, as we step beyond the illusion of certainty in systems toward the adaptability of decisions through heuristics and intuition. Law’s ‘deliberate imprecision’ echoes what Simon called ‘bounded rationality’; Gigerenzer’s work on this is excellent, especially for understanding the mythology associated with statistical data. Of course, what is ‘between’ people is culturally determined, something the safety community has yet to grapple with. Moreover, the safety community has yet to develop a sense of the importance of followership, still fixated on the leader-as-hero myth.
I read with interest the perceived problems with the safety community grappling with causality, complexity theory and perhaps even human behaviour theories, which saddens me as a health and safety professional. Safety professionals, like any other professionals in the changing political landscape, need to be open to new ways of thinking, maintaining competency and fostering an environment that values democracy and innovation, in a language that is understood top-down, bottom-up, and globally. Growing research on emotional intelligence has implications for managers and how they use their capabilities to get the best out of their people on various levels. Knowledge is not always about elitism; rather, it is about professionalism and opening up one’s worldview. Perhaps we need to broaden the perspective from safety to governance and recognise a collective mindset. I think we all agree on the need to dispel myths and create pathways for new ways of thinking about safety/governance.