
When General Stanley McChrystal took over the U.S. Joint Special Operations Command[1] (JSOC) in Iraq during the mid-2000s, he inherited an organisation struggling to overcome the Al Qaeda insurgency plaguing the country. After a few weeks in the job, he realised his new team had been viewing their enemy through the wrong lens, and therefore had been using the wrong strategies to defeat them. Ultimately, this insight led him to revolutionise the Command’s structure, and challenge its very core beliefs about how it could win the war.
At the heart of McChrystal’s revolutionary strategy was an appreciation for the difference between systems that are complicated and those that are complex.[2]
When people describe something as complex, what they usually mean is that it’s really complicated. This suggests a continuum of ‘complicatedness’, with the difference between a complicated and a complex system being one of degree rather than of kind. In reality, a complex system is fundamentally different from a complicated one. It’s critical that we understand how they differ, and why this knowledge matters if our goal is to manage the safety of a system.[3]
What is a system?

A system can be defined as anything that involves ‘a set of things working together as parts of a mechanism or an interconnecting network’.[4] Examples of systems include an analogue watch, an underground rail network, an air conditioner, a business, a car, a skeleton, an aeroplane, a person, or a government. The way we think about the systems around us influences the methods we choose to solve the problems they pose.
It’s complicated
Traditional thinking tends to lead us to see all systems around us as complicated. A complicated system is usually something technical or mechanical and has many interacting parts.

Think of a jet engine. It contains thousands of mechanical parts, and to understand how it works you can read a manual that will tell you everything you need to know. If it stops working, you can take it apart, locate the broken component, replace it, and return the engine to service. This type of problem-solving works well with complicated systems because they work in a linear way, and are fully knowable (with enough study). The whole is equal to the sum of its parts.[5]
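This decompose-and-test logic can be sketched in a few lines of Python. It’s a toy illustration, not real engine data – the component names and contribution values are invented:

```python
# A toy 'complicated' system: behaviour is linear and fully decomposable.
# Component names and values are illustrative, not real engine figures.
components = {
    "compressor": 40,   # each part's contribution to total output
    "combustor": 35,
    "turbine": 25,
}

def system_output(parts):
    # In a linear system, the whole is exactly the sum of its parts.
    return sum(parts.values())

def find_broken_parts(parts, expected):
    # Reductionist troubleshooting: test each part in isolation.
    return [name for name, output in parts.items()
            if output != expected[name]]

expected = dict(components)
faulty = dict(components, turbine=0)           # one component fails
print(system_output(faulty))                   # 75: output drops by exactly 25
print(find_broken_parts(faulty, expected))     # ['turbine']
```

Because the system is linear, a drop in total output points directly at the failed part, and testing each component in isolation is guaranteed to find it. That guarantee is precisely what disappears in a complex system.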
In complicated systems, unwanted events and outcomes (e.g. an oil leak) are usually the direct result of component failures. The possible range of outcomes is finite because the system has been carefully designed for a specific purpose.
Unfortunately, problems start to arise when we treat complex systems as if they were complicated. This is exactly where the JSOC found themselves in the fight against Al Qaeda when McChrystal took over. They had been imagining their adversary as a traditional, hierarchical army with clear lines of vertical command and control; a complicated system. In reality, Al Qaeda was a complex web of cells interacting and operating in unpredictable ways, for which traditional battle tactics were useless.
No, it’s complex!
Complex systems are fundamentally different from their complicated cousins. They contain the same technical components (e.g. physical equipment and computers) but also consist of human elements and vast social networks.
A prime example of a complex ‘socio-technical’ system is an organisation, such as an airline. Airlines consist of many technical elements, like the aircraft and the IT, but also many forms of social systems, like management teams, frontline workforces, and customers.

Systems typically become complex by default when individuals or groups of people are added to them. Returning to the example of the jet engine – which we recognise as a complicated technical system – as soon as we decide to perform some maintenance on that engine, the new system we’ve created, ‘jet engine maintenance’, automatically becomes complex. This new system contains human, social and organisational elements (policies, procedures, culture etc.), as well as technical parts.
In complex systems, unwanted outcomes do not occur solely due to individual component failure, but most often they emerge from the unpredictable interactions between the components. For example, the way an engineer interacts with company policies, procedures, goal conflicts, organisational culture, their team, the environment, etc. when maintaining an engine.
If we think of the total aviation system, acknowledging that it is complex, then we recognise that the millions of sub-systems within it (ATC, airports, airlines, manufacturers, maintainers, etc.) will all interact with each other in complex and unpredictable ways. That means any attempt to assert complete control over the system will ultimately fail, because complex systems cannot be controlled in the way that complicated ones can.

Critically, in a complex system, the whole is greater than the sum of its parts because outcomes emerge in ways that cannot be totally controlled or predicted.
McChrystal helped his team to see Al Qaeda as a complex web of unpredictable and adaptive elements. This meant employing fundamentally different strategies of battle, which ultimately led to significantly greater success in their fight.
What does this mean for how we manage safety and risk?
It’s natural and normal for us to treat complex problems as if they were complicated: to reduce them to their parts and change out the troublesome component. This is, after all, how most formal education teaches us to solve problems – through ‘analytical reductionism’.
But in 21st-century airline safety, most of the time we’re dealing with human work performed by pilots, engineers, ground staff, and cabin crew. We can’t truly understand human work – how it normally goes right and sometimes goes wrong – using the same methods we use to understand technical objects.
This means that when we’re looking for strategies to solve human-centred safety problems, we need to apply complex systems thinking to the task, resisting the temptation to disassemble the problem to find the broken ‘component’ (the human).

As safety leaders and practitioners we should use the thinking, methods, and tools that help us to understand the complex and dynamic nature of the systems we operate. We need to study the interactions, patterns and feedback loops in our systems and identify how small changes can lead to disproportionately large and unintended consequences.
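The sensitivity to small changes described here can be illustrated with the logistic map, a standard toy model from complexity science. This is a minimal sketch of nonlinear divergence, not anything from the article itself:

```python
# Logistic map: x -> r*x*(1-x). At r=4 the system is chaotic, so a
# tiny difference in starting conditions grows exponentially.
def trajectory(x, r=4.0, steps=60):
    xs = []
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-7)   # a one-in-ten-million perturbation

# Early on the two paths are practically indistinguishable...
print(abs(a[4] - b[4]))      # still microscopic
# ...but later they bear no resemblance to each other.
print(max(abs(x - y) for x, y in zip(a[40:], b[40:])))  # order 0.1 or more
```

In the chaotic regime the perturbation roughly doubles at each step, so a one-in-ten-million difference in starting conditions produces completely different trajectories within a few dozen iterations – the toy equivalent of a small change having a disproportionately large consequence.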
Whether we’re designing new policies and procedures, investigating a maintenance error, or risk assessing a new piece of equipment, we instead need to consider safety in the context of the overall system – to think holistically and embrace complexity.
When writing about systems, one can’t finish a piece without quoting the great Russell Ackoff. A little Ackoff wisdom goes a long way:
“To manage a system effectively, you might focus on the interaction of the parts rather than their behavior taken separately.”
References
[1] https://en.wikipedia.org/wiki/Joint_Special_Operations_Command
[2] https://tinyurl.com/y4vcogp9
[3] https://tinyurl.com/y2ty5wkw
Good article, Adam. Nice that you referenced Sonja Blignaut’s article on complexity implications. I penned a similar article for the 7 Implications of Complexity for Safety. http://www.safetydifferently.com/7-implications-of-complexity-for-safety/
In Shawn Callaghan’s review of the Team of Teams book, he mentions Dave Snowden’s Cynefin Framework. https://www.anecdote.com/2017/07/role-of-storytelling-in-team-of-teams/ The McChrystal Group use Cynefin in their consultancy. Prez Chris Fussell writes about the Cynefin Framework in his follow-on book One Mission.
Stories provide the right level of granularity to make sense of patterns and complex adaptive system constraints.
Thanks for the comment, Gary. Sonja’s writing was one of the foundations for the article. Thanks for the link to Shawn Callaghan’s review of Team of Teams. Storytelling is so powerful and is under-utilised in technical domains when talking about safety.
Very good article, well explained and broken down into its constituent parts. Not complex or complicated to understand. A genuine good read. Thanks.
Thanks for your comment, Sean. Much appreciated.
Thank you for this paper. Collective Intelligence also takes this important distinction between complex and complicated into account, as well as the necessary switch between order and chaos when it comes to collective decision-making.
Thanks for your comment.
Excellent article that mirrors much of my experience in healthcare. Ironically, in a field centred entirely on nurturing biological, complex living systems of systems, the institutions are obsessed with reductionism, machine metaphors and linear oversimplifications. Thanks.
Thanks for your comment. Personally I have no experience of the healthcare sector, but do follow what’s going on there. When I learned that complexity science originated from the study of biological and ecological systems, it certainly did bring out the irony of the lack of awareness and appreciation for it in human healthcare.
A great article. Practitioners of safety must remember that when the human dimension is added to a system, it becomes a sociotechnical system. Its properties are complex because humans are governed by the science of physiology, while hardware is governed by the science of physics. The strategies to manage risks, generally emergent risks, are different from those of linear engineered systems. Emergent risks are managed by building organizational resilience to succeed under varying conditions. Organizations looking to perform resiliently need to adopt a systems model of accident prevention.
A great comment, thanks Captain Varma.
Excellent article. Refreshingly clearly written.
Unfortunately the regulators of many complex industries are staffed with people with a background in complicated (engineering) risk management.
I hope your article reaches a wide and receptive audience in aviation regulation – but don’t hold your breath!
Thanks for the kind comment, Richard. Many areas of industry need to understand complexity, regulators included. I’m confident that the more people talk about complexity in the industry, the more we will start to see a shift in macro thinking. I’m already seeing a shift towards Safety-II in many conversations in the industry, especially airlines. Forward-thinking regulators won’t be too far behind. Since the science that underpins Safety-II is Resilience Engineering, which sits largely on Systems Theory and an appreciation of complexity, I don’t see how a move in the Safety-II direction could happen without greater complexity knowledge too.
Excellent article. Thank you. I think a challenge in making any change seems to be overcoming the “not-needed” inertia that exists in all organizations. I think McChrystal’s experience and insight are invaluable to those who are engaged in making their case. In his book, Team of Teams: New Rules for Engagement for a Complex World, he provides real insight and perspective into how this change feels from a leader’s viewpoint. I have used this book to make the case for change to our leadership. https://www.mcchrystalgroup.com/insights-2/teamofteams/
Thanks for sharing the link, Dan. I hadn’t seen that page before writing the article, but it articulates what McChrystal did much better than I have! Personally I think the book should be required reading for anyone in a leadership role in any kind of organisation. I truly believe that the more people understand complexity and adjust their thinking and actions accordingly, the better organisations and societies will become.
Many thanks for this simple and clear explanation. A great read for all interested in safety but especially students.
Thanks for your kind comment, Pam. Hopefully the article can be used as a ‘taster’ for anyone new to these concepts and encourage them to go on and read more from the experts!
Have you considered the issue of Wickedity, i.e. things beyond the complexity of systems?
Hi Rob. Wickedity is something on my radar, but I haven’t quite got that far yet. I know this is something you have written about quite a bit. Can you suggest any primer articles that would be good for the first time reader?
Hi Adam, my colleague Craig Ashhurst has just completed his PhD on it at ANU. His work is frontier breaking. Happy to put you in touch. robertlong2@icloud.com
I’m still puzzled as to the difference between wickedity and complexity. Every reference I see makes them sound basically the same or simply misunderstands one or the other. Can anyone explain the difference? Or perhaps there isn’t one?
Ron, not even closely the same. The difference is massive. Perhaps you need to read up more on them.
Wicked problems have no solution nor any stopping point; there is no black and white, nor any ‘scientific’ method to explore them. Indeed, the traditional scientific approach to knowledge and culture actually makes things worse, or more deeply wicked.
The idea of complexity carries with it the ultimate assumption that there is a solution and that ambiguity and paradox are ultimately solvable, it then becomes just a condition of time, knowledge and resources.
I referred to a recent publication here: https://safetyrisk.net/independent-thinking-in-an-uncertain-world-a-mind-of-ones-own/
The moment one acknowledges ‘wickedity’ then a new transdisciplinary approach needs to be acknowledged. This steps way beyond anything that complexity studies proposes or that is suggested by Safety and moving beyond the idea that safety is a ‘science’.
Thanks Rob.
I get that you see a difference but my concern is that I do not (you and I have had this conversation before, and you agreed with me at the time). I worry now that when you talk about complexity you’re talking about something different than what I am talking about and what this article is talking about.
For example, complexity carries with it no assumption that there is a solution; the assumption is that complex systems are irreducibly ambiguous and unknowable. Even saying there is a “solution” misunderstands what we are talking about, because complexity is not a “problem” but a state of something. It’s not normative. This makes it hard, because reducing or eliminating complexity is not only impossible in a complex system, it is often not even desirable.
A further problem with complex systems is that they are dynamic, so it’s not a condition of time, knowledge, and resources – that describes complicated systems, in the terms this article uses. Complex systems change rapidly, so by the time you model the system it has changed, and your model is out of date and potentially dangerous. Dealing with complex systems is not a matter of science, at least not in the sense you mean it (positivism, reductionism, etc.). It is a function of leveraging sensemaking, learning, and improvisation to find a way forward.
So I am very interested in seeing what the distinction is between the two, if there is one, but everything I’ve seen suggests we are using different words to describe the same basic phenomenon. What am I missing?
Ron, the language and discourse of both are very different. Unfortunately I don’t recall the conversation you are talking about. Have you read much on wicked problems? It is an entirely different genre from complexity theory.
The language and discourse of complexity is nothing like the language and discourse of wickedity. I didn’t read anything in this article about things that are intractable, unsolvable, ‘messy’ and paradoxical. I also didn’t read anything about what you said you assume, i.e. that complexity is ‘irreducible, unknowable and ambiguous’. It would also be interesting to see what Adam means by ‘holistic’.
As yet I can’t see anything much holistic or transdisciplinary in the safety sector or put forward by this article. The concept of wickedity invokes transdisciplinarity, boundary objects and the need for a much broader sense of knowledge than seems to be assumed in this discussion.
Hi Rob, Ron. Interesting conversation. I guess I’m in the same boat as Ron in terms of my appreciation for the differences between complexity and wickedity, and will admit that a lack of study of the latter will be why!
The article above was intended as a primer for those relatively new to the ideas and written in an accessible style, so there was no intent to describe complexity in more scientific terms.
Of the language you’ve used above, Rob – intractable, unsolvable, messy, paradoxical, irreducible, unknowable, ambiguous – many of these are terms that I would associate with complexity based on what I have studied; although again I can’t comment on the differences or similarities with wickedity and wicked problems.
Within the article, by ‘holistic’ I simply mean look at the system as a whole to understand the behaviour of its parts, as you need to have an understanding of the dynamic connections between them as well. Perhaps a fairly basic meaning for holistic, but as I’m very much a student of these sciences, I’ll leave the higher order research and writing to the experts!
One final point I’d like to add is that this idea of complexity is still very new to many people, and some people who need to understand it still aren’t even aware of it yet. Is there a risk that by introducing wickedity as well, we could confuse more people than we enlighten?
(The question and all my above comments are intended with ’emotional neutrality’, just in case you perceive me as being defensive (can’t easily convey tone)!!)
Cheers,
Adam
Thanks Rob. This gets to my overall point – that the difference is in the language. Yes, the language is critical. But the language of wicked problems is very problematic as well. To me it primes itself towards normativity, as both being “wicked” and having a “problem” are both bad things. Further, problems prime one to find solutions.
Sure, you could say that this is not the case for you and perhaps for wicked problems scholars, and if those were the people we were trying to influence in organizations then there would be no problem. But we are not. We are dealing with people in the world who have read nothing of wickedity or complexity. So we have to find what language works best for them and claiming that one set of language is privileged over another set of language seems counter to the very principles both wickedity and complexity are espousing. Yes, this leads to paradox and contradiction, but isn’t that the point?
And to the point about the above article not speaking to the things I spoke to, I sincerely hope that you are not using one article as your reference point for complexity. In fact, I know you know more, because I know you wrote a glowing review in the past of Sidney Dekker’s book Drift into Failure, which is based on complexity. I would also invite you to read the applied work from practitioners in Dialogic Organizational Development, which combines complexity theory with social constructionism. These are just examples of how people are applying complexity theory in the real world in ways that coincide with my description and with Adam’s description above.
And regarding transdisciplinary approaches, if you look at the mainstream then no wonder you cannot see anything transdisciplinary. But there is wonderful work being done in a variety of industries and sectors that combines multiple disciplines. My own PhD work is combining sociology, complexity theory, and industrial engineering, using qualitative and quantitative data. Am I excited about the engineering part? No, because I have similar concerns about positivism, reductionism, and dehumanization as you. But if I want my work to influence real engineers I have to do something that reaches them on their level before I ask them to move to my level. I don’t get the luxury of avoiding the paradox of how STEM can work with post-structural social theory, so I have to work with it. But to me, that is the essence of being transdisciplinary, which is necessary in a complex world or when dealing with wicked problems.
Ron, the difference is not language. There is a great deal you are missing in this. What is your email address? It would be better to discuss this elsewhere.
rongantt@hotmail.com
Hi Adam, with apologies for an extended response. I acknowledge neither of us is being defensive, but whenever tackling such matters one needs much more space and time even to grapple with basic definitions. It may be risky to discuss such concepts, but similarly there is no value in the simplistic rubbish that is served up to the industry in the name of safety, e.g. zero.
At the outset I need to make clear that I don’t come to this discussion from traditional risk and safety discourse nor the disciplines on which it depends. The concept of Discourse itself is critical for this discussion. My understanding of Discourse comes out of Poststructuralism and Social Psychology. These traditions understand Discourse as the embodiment of power in semantics, linguistics and the social politics, ethics and psychology embedded in language. Language and semantics are not understood as either neutral or benign. Indeed, how language is defined and used directs the trajectories of meaning associated to the disciplines. See further Potter, J., and Wetherell, M., (1987) ‘Discourse and Social Psychology’. Sage. London. And; Crawford, J., and Jussim, L., (eds.) (2018) ‘The Politics of Social Psychology’. Routledge. London. An understanding of Discourse is foundational to understanding differance (as in Derrida’s ‘Differance’) between Complexity and Wickedity.
In order to understand the differances between Complexity and Wickedity one needs to embrace a transdisciplinary approach to knowledge. That is, the capability to embrace knowledge cultures and knowledge paradigms outside one’s disciplinary tradition (understood through that tradition’s discourse and language). The knowledge cultures from which my theory of knowledge emerges are: Education, Learning, Anthropology, Metaphysics, Philosophy, Ethnography, Ethics, Theology, Social Psychology, Poetics and Social Politics. I think the clash of knowledge cultures often explains misunderstandings between my background and the common discourse of the industry. Language from the disciplines I am most familiar with, and others, is missing from complexity discourse; indeed, it appears that such disciplines and traditions have little to offer the complexity dialogue.
The tradition of risk and safety is clearly founded in Science, Technology, Engineering and Mathematics (STEM). Indeed, much of the language used to discuss complexity in the risk and safety space is scientific, technological, systemic and engineering in nature even when it discusses uncertainty and ambiguity. Complexity theory uses the language of ‘systems’ to explain complexity, whereas wickedity uses the language of ‘ecologies’ to explain ‘being’. This distinction between ‘systems’ and ‘being’ cannot be overemphasized. Wickedity is more likely to seek metaphor, models, semiotics, phenomenology and poetics to understand itself (being) than the language of systems, engineering and science, which seeks structuralist language to understand itself even when it acknowledges similar concerns.
Complexity theory uses the language of ‘complicated’ and ‘complex’ to understand systems. This approach tends to understand organisations scientifically (see Axelrod, R., (2000) ‘Harnessing Complexity: Organisational Implications of a Scientific Frontier’. Basic Books. New York.) Wickedity uses the language of ecology, messiness, imagination, poetics, transdisciplinarity, knowledge cultures, myth, uncertainty and paradox (see Brown, V., Harris, J., and Russell, J., (eds.) (2010) ‘Tackling Wicked Problems’. Earthscan. London.) The language one speaks creates one’s culture, identity and worldview.
Wickedity doesn’t privilege scientific or engineering knowledge cultures over others. Rather, Wickedity understands that even scientific and technological language is located in its own worldview and political history. In the Wickedity worldview the work of Lyotard, Lacan, Kristeva, Derrida, Deleuze, Bateson and Foucault is significant. In such thinking the language and discourse of complexity theory itself must be deconstructed as a social-political-ethical discourse.
Whilst Wickedity acknowledges the existence and dynamic of complexity and understands it through its discourse it doesn’t seek to understand itself through scientific or systems language. Indeed, it views structuralist discourse as problematic to understanding being. Wickedity is more interested in the Anthropocene than Systems and understands systems as subsets of ecologies. Complexity theory tends to frame its worldview through the discourse and language of systems.
Why does this distinction matter when discussing risk and safety? At the outset we need to understand the challenges posed by fallibility, mortality, randomness, messiness, ecological dynamics, embodiment and transcoherence (Ashhurst) for risk and safety. Understanding these as foundational to being helps frame challenges as ‘wicked’. Such an acknowledgement conditions the way one tackles risk. Wickedity doesn’t view the challenges of risk through the lens of systems; indeed, founding a discourse in systems anchors one to the constraints of such language. If risk and safety could step outside of systems discourse it might see the challenges it faces differently.
Many, many years ago we developed these ten useful ways of applying systems thinking in organisations:
1. Effectiveness comes before efficiency – it is more important to do the right things than to do things right; doing the right things poorly is better than doing the wrong things well, for example improving the way that first aid is provided to employees after an accident does nothing to prevent a similar accident occurring in the future
2. Go an inch wide and a mile deep – we need to identify the key leverage points of change and probe deeply to identify the underlying systemic causes of change, for example by asking “Why five times”
3. Focus on patterns and flows to solve or exploit rather than culprits to blame – recurring patterns of events are indicators of systems; use the phrase “one is happenstance, twice is coincidence and three times is enemy action”
4. All work is a process, a series of actions that produces a result – use the SIPOC or other Process Models to understand the impact of standards, procedures, training and knowledge, and facilities and equipment on performance and process capability – most of which are supplied by the organisation’s (management) systems
5. Use “both/and” rather than “either/or” decision making – our organisations are full of paradoxes to be managed as well as problems to be solved; for example stability versus change – we have to both “keep them going” and “change them”
6. Pay attention to both the inner and outer context of the organisation – organisations only need to change for two reasons; they either want to change their internal performance or respond to changes in their external context
7. Requisite variety – the model or control of the system requires as much complexity as the system it is controlling; we need to match the variety in the control systems that we use with the variety within our organisations
8. A system without an aim is not a system – without a purpose there is no system; managers must communicate a clear purpose to everyone in the organisation to ensure the alignment of their actions
9. Managers should manage the “arrows and white spaces” – these are the transactions and feedback loops between activities; managers are the ultimate internal suppliers to the organisation; they provide not only resources but goals, plans, priorities, smooth flow, focus and productive working environment
10. Remember to think in circles not straight lines – managers often forget that there is not a simple cause and effect relationship between their actions and the results, there is usually a delay between action and reaction; they often confuse correlation and cause, for example believing that employee engagement creates better financial performance whereas research indicates that the reverse relationship is stronger
Some of these ideas ‘creak’ a little with the passage of time but are offered to encourage the discussion.
Hope they help
Regards
Hello. Thanks for your inputs. Do you have a name so we can converse more personally? Your list is really interesting, with some great nuggets of wisdom in there. The only one I’d like to ask for more information on is number 2, where you mention ‘asking “Why five times”’. The 5 Whys technique has a number of detractors, especially in the Safety-II / Systems Thinking world, and is strongly linked to Root Cause Analysis. Please could you explain a bit more how you see 5 Whys playing a part in Systems Thinking? Thanks, Adam.
Hi Adam, my name is Ricky Gleed. Sorry for the anonymity, it wasn’t intentional! I will attempt a brief response:
– I have no problem with balancing the concepts of Safety I and Safety II – it is after all another paradox to manage. For more information read any of the stuff written by Steven Shorrock on the subject
– Accidents come in all shapes and sizes so we need different models and approaches to managing safety to be successful. For more information read Professor Rene Amalberti who states:
“The idea of a single model of safety that applies to everything and aims to have zero accidents is naïve. There are many different responses to risk, which provokes many different authentic models of safety, each with their own approach, advantages and limitations. The differences between these models lie in the trade-offs between the benefits of adaptability and the benefits of the level of safety.”
– Systems come in different forms. Read the ‘Intelligent manager’ by Alistair Mant, who identifies two types of systems in organisations, which he calls ‘bicycles’ and ‘frogs’. Both need managing differently.
– the concept of ‘going an inch wide and a mile deep’ is fundamental to Japanese lean thinking. Root cause analysis has continued to be developed to make it more holistic. I would probably update the comment about the ‘5 Whys’ technique to reflect that.
Hope this helps
Regards
Richard Gleed