The Field Guide to Understanding 'Human Error' (Dekker; ISBN 978-1-4724-3905-5)

Preface

"In the Old View, 'human errors' are the cause of most of your trouble. People, in this view, are a problem to control. People's behavior is something to control, something that you have to modify. You might believe you have to start with people's attitudes, because those influence their behavior. So you try to shape those attitudes with posters and campaigns and sanctions, which you hope will impact their behavior and reduce their errors. You might even elect to sanction some people under your 'just culture' policy (even though there is generally no evidence that any of this works).

"In the New View, the behavior which we call 'human error' is not a cause of trouble. It is the consequence, the effect, the symptom of trouble deeper inside your organization. The New View assumes that people do not come to work to do a bad job. So when there are bad outcomes, you must look beyond those people, at the conditions in which they worked at the time. You and your organization may well have helped create those conditions. Leave those conditions in place, and the same bad outcome may happen again--no matter how many sanctions you impose, posters you put up, or safety attitude campaigns you launch."

"... promise yourself this:

A focus on 'human error' makes you do all the wrong things.

If the HR department is involved in safety in your organization, then an incident investigation can quickly degenerate from a learning opportunity into a performance review.

Some believe that they need to keep beating the "human" until the 'human error' goes away.

Chapter 1: Two Views of 'Human Error'

First view (Old View) aka Bad Apple Theory:

Or, in other words:

In order to not have safety problems, people should do as they are told. They should be compliant with what managers and planners have figured out for them. Smart managers and others above them have put those rules in place; all the dumb operators or practitioners need to do is follow them! The reason they don't is clear: operators' negative attitudes, which adversely affect their behaviors. So more work on their attitudes should do the trick.

Bad People in Safe Systems, or Well-Intentioned People in Imperfect Systems?

Stories of error seem so simple:

Most errors seem so preventable

These don't help in the long run; in fact, they are counterproductive:

It's important to note:

Underneath every simple, obvious story about 'human error', there is a deeper, more complex story about the organization.

| Old View | New View |
| --- | --- |
| Asks who is responsible for the outcome | Asks what is responsible for the outcome |
| Sees 'human error' as the cause of trouble | Sees 'human error' as the symptom of deeper trouble |
| 'Human error' is random, unreliable behavior | 'Human error' is systematically connected to features of people's tools, tasks, and operating environment |
| 'Human error' is an acceptable conclusion of an investigation | 'Human error' is only the starting point for further investigation |
| Says what people failed to do | Tries to understand why people did what they did |
| Says what people should have done to prevent the outcome | Asks why it made sense for people to do what they did |

People Do Not Come to Work to Do a Bad Job

The psychological basis for the New View is the "local rationality principle": people do what makes sense to them at the time--given their goals, attentional focus, and knowledge--otherwise they wouldn't be doing it. If people did things that at first seem inexplicable, it is not because they were doing inexplicable things; it is because we have not yet positioned ourselves to understand why it made sense for them to do what they did. That burden is on us--we need to put ourselves in their shoes and see the world through their eyes.

If it made sense for people to do what they did, then it may make sense for others as well.

New View investigations have the following characteristics:

The New View does not claim that people are perfect. It keeps you from judging and blaming people for not being perfect.

But... What About the Idiots?

The New View seems overly charitable. "There needs to be some accountability"--by which we mean that we should be allowed to blame the truly blameworthy.

"Bad Apples clearly exist"

Pushback against the New View.

Examples:
* A British Medical Journal study showed that a small number of doctors accounted for a very large share of patient complaints: 3% of doctors generated 49% of complaints, and 1% of doctors accounted for 25% of all complaints.
* In 1913, Tolman and Kendall (pioneers of the American safety movement) strongly recommended that managers be on the lookout for men who always hurt themselves, and make the hard decision to get rid of them.
* Around 1925, British and German psychologists suggested that there were particularly "accident-prone" workers, and that the inclination to have accidents was proportional to the number of accidents previously suffered.

But... pushback began against the percentage-based nature of the analysis. Are these apples-to-apples comparisons?

Practitioners are not all exposed to the same kind and level of accident risk. This makes it impossible to compare their accident rates and say that some, because of personal characteristics, are more accident-prone than others.
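A toy sketch of this point (hypothetical Python with invented numbers, not data from the book or any study): comparing raw accident counts without normalizing for exposure can point at the wrong person.

```python
# Hypothetical illustration: raw accident counts vs. exposure-adjusted rates.
# All numbers are invented for this sketch.

workers = {
    # name: (accident count, hours of exposure to hazardous work)
    "A": (6, 4000),  # mostly assigned high-risk tasks
    "B": (2, 200),   # rarely exposed
}

for name, (accidents, hours) in workers.items():
    rate = accidents / hours * 1000  # accidents per 1,000 exposure hours
    print(f"{name}: {accidents} accidents, {rate:.1f} per 1,000 exposure hours")

# Raw counts single out A (6 > 2), but the exposure-adjusted rates single
# out B (10.0 > 1.5). Without comparable exposure data, neither "Bad Apple"
# label is fair.
```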

"Individual differences exist but accommmodating them is a system issue"

When you observe a consistent, repeated "Bad Apple" in your org, you may be looking at a mismatch between the task and the person. Where does the responsibility for creating/resolving that mismatch lie?

A "Bad Apple" problem, to the extent that you cna prove its existence, is a system problem and a system responsibility.

"There has to be accountability"

Holding people accountable means getting them to take responsibility for their work; such individual responsibility, and the expectation of being held accountable for outcomes, is vital to much safety-critical work, and often intricately connected to people's professional identity. The job would not be fun, meaningful, or "worth it" without that responsibility and accountability. Accountability is the other side of professional autonomy and competence: being seen to be good at what you do, and accepting the consequences when things do not go well. Such accountability gives people considerable pride, and it can make even routine operational work deeply meaningful.

But people do want to be held accountable fairly. They want to be held accountable by those who really know the messy details of what it takes to get the job done--not by those who only think they know. It is also unfair to make people responsible for things over which they have little or no authority.

The authority-responsibility mismatch

You cannot fairly ask somebody to be responsible for something he/she had no control over--it is impossible to hold somebody accountable for something over which they had no authority.

The authority-responsibility mismatch brings us back to the basic goal conflicts that drive most safety-critical and time-critical work; such work consists of holding together a tapestry of multiple competing goals, of reconciling them as best as possible in real-time practice: efficiency-thoroughness trade-offs (ETTOs).

This lays down a good rule: do not hold people accountable for things over which they had no authority.

Accountability and the systems approach

A systems approach understands that each component or contributor in a system has specific responsibilities to help attain the system's overall goals. These responsibilities cannot just be put on other people or other parts of the system.

There is no evidence that a systems approach dilutes personal accountability. In fact, "second victims"--practitioners who carry lasting distress after an incident they were involved in--show just how much responsibility practitioners take for things that go wrong.

Accountability doesn't have to be about blame. Accountability can mean letting people tell their account, their story. If you hold somebody accountable, that does not have to mean exposing that person to liability or punishment. Storytelling is a powerful mechanism for others to learn vicariously from trouble.

To report or not to report

Depending on the safety level of the activity, a confidential (not anonymous) reporting system is a key source of safety-related information. Anonymous: the reporter is known to nobody. Confidential: the reporter is known, but only to safety staff (not to people who could create career jeopardy).

Accountability and the "just culture"

Many orgs struggle with how to create a "just culture". They try buying it off the shelf and then it doesn't work.

The challenge is to create a culture of accountability that encourages learning. Every step toward accountability that your organization takes should serve that goal. Every step that doesn't serve that goal should be avoided.

Instead, think about creating justice in your response to incidents by addressing the points below:

  1. Don't ask who is responsible, ask what is responsible.
  2. Link knowledge of the messy details with the creation of justice. Make sure you have people involved in the aftermath of an incident who know the messy details, and who have credibility in the eyes of other practitioners.
  3. Explore the potential for restorative justice. Retributive justice focuses on the errors or violations of individuals. It suggests that if the error/violation (potentially) hurt someone, then the response should hurt as well. Restorative justice suggests that if the error/violation (potentially) hurt, then the response should heal. It acknowledges the existence of multiple stories and points of view about how things could have gone wrong (and how they normally go right). Restorative justice fosters dialogue between the actor and the surrounding community, rather than a break in relationships through sanction and punishment.
  4. Go from backward- to forward-looking accountability. Backward-looking accountability means blaming people for past events. Forward-looking accountability focuses on the work necessary for change and improvement, and connects organizational and community expectations for such work. What should we do about the problem, and who should be accountable for implementing those changes and assessing whether they work?
  5. Put second-victim support in place.

Chapter 2: Containing Your Reactions to Failure

Chapter 3: Doing a 'Human Error' Investigation

Chapter 4: Explaining the Patterns of Breakdown

Chapter 5: Understanding Your Accident Model

Chapter 6: Creating an Effective Safety Department

Chapter 7: Building a Safety Culture

Chapter 8: Abandoning the Fallacy of a Quick Fix

Epilogue: Speaking for the Dead


Tags: reading   management   psychology  

Last modified 24 November 2023