Why Square Pegs Fit Into Round Holes

Oftentimes after an error or failure event, leaders, managers and other team members retrospectively examine the events with curiosity and an aim to determine why the failure or error occurred. This is natural and a good thing. After all, we should try to learn from failure and attempt to prevent its recurrence while learning how to improve our processes. Unfortunately, when examining error and failure, many investigators miss the big picture and how design influences decisions.

I recently finished a book by Jeremy Baines and Clive Howard called UX LIFECYCLE: The business guide to implementing great software user experiences. Now, you may be thinking, “Hey Randy, you’re an operations performance guy, what’s with the software stuff?” That’s a great question. This book makes some excellent points about the design of systems and although it relates to software, it describes many challenges that users and workers face on a routine basis, from workflow inefficiencies to technology that may not be optimized for the task.

Fundamentally, I believe that most error and failure in the workplace and within operational teams can be linked back to causal factors related to the design of the system and how workers interact with the components of that system. This could include procedures, tools and equipment, workflow, or other parts of the work system. We need to start looking at workers and employees as users of the systems that leadership and management provide them to accomplish their tasks and achieve the mission of the organization. That is where studying UX (or User Experience) starts to get really interesting. Some of the key principles of good UX include Consistency, Familiarity, Expectation and Trust. 1 I like abbreviations, so let’s call this CFET.

Imagine you are a consumer trying to purchase something online and you visit a website with an inconsistent look or feel. Perhaps you are on your mobile device and the site is not optimized for mobile, so you have to swipe all over the screen to read the entirety of the text on the page. The lack of familiarity may run counter to your expectations of what a mobile site should look like in 2017. What kind of experience does this give you? Does it build your trust in the site? How will that influence your decision to buy?

While these are simple questions about everyday situations, I think workers face similar challenges as they attempt to use the operational work methods and tools provided to get the job done.

Consistency: Do the procedures and work methods developed for workers provide them with a consistent experience? I remember that when I flew the legacy version of the KC-130 Hercules, we had aircraft with different configurations and upgrades, and some carried equipment that differed from the others. The lack of consistency tended to slow things down.

Familiarity: Is the experience familiar to workers? Related to the point above, consistency tends to lead to familiarity, and as a task becomes more familiar to workers they tend to become more proficient.

Expectation: When we do something for a while we tend to expect the same thing again and again. Have you recently sat in a new automobile and looked at the layout and design of the automatic transmission console? What used to look like P-R-N-D-1-2 (or something similar) may not look that way at all in some vehicles. In some cases there are button layouts or a combination of a shift stick and buttons, which may require some adjustment as you get used to an unfamiliar layout. Does this ever happen to workers? Have you ever seen workers attempting to use a new tool without training? Oftentimes the tool does not perform as they expected, and without training it can be hard to adjust expectations. This can lead to errors as well.

Trust: Trust is one of the most fundamental requirements in operational teams and business. Without trust, performance tends to break down. The legacy version of the KC-130 Hercules I used to fly had an automated system to detect when the aircraft was in close proximity to the ground and to provide warnings and aural alerts. The system consistently failed its self-test and seemed to give false alarms, so crews grew more and more wary of it. The solution was to pull the circuit breaker so we wouldn’t have to listen to it, and we relied on our own planning and situational awareness to maintain terrain clearance. Does this sound like a trustworthy system? When I flew the newer, more automated version of the aircraft (referred to as the J Model), the system was much improved and more reliable. Because it worked consistently we trusted it, and we had a strong backup to help avoid Controlled Flight Into Terrain.

If leaders and managers violate the principles of consistency, familiarity and expectation, how can they expect workers to trust the processes and work methods provided for them? When no standardized work methods are provided (or Standard Operating Procedures are deficient), when the proper tools and technology are not provided and validated as useful, and when production demands are not aligned with team capabilities, workers will often still find a way to get the job done. If or when failure occurs, why should workers be blamed?

To me this is why the intersection of UX, process improvement methods and crew performance systems (such as Crew Resource Management) is necessary to help create the best possible work system and to equip workers to do their best work to meet the operational goals of the organization. In 2017 we will be focusing a lot more on things like workflow, process improvement, and overall improvement in operations. I would be honored to help you meet your operational performance goals! If you want to receive more content related to the material in this post, please subscribe below. Additionally, if V-Speed can be of service to help you improve your processes and operational performance, please fill out this contact form and let us know how we can help.

1. Baines, Jeremy, and Clive Howard. UX LIFECYCLE: The business guide to implementing great software user experiences. Uxlifecycle.com, 2016.

Why Stories and Not Just Big Data?

In the age of big data, why are stories important? Corporations collect vast amounts of information on customer experience and purchases to understand what customers may want to buy in the future. Organizations track Key Performance Indicators (KPIs) to capture and report on their progress towards their targets. Safety Managers track and report Lost Time Injuries (LTIs). In this mass of data, are stories really that important? In my opinion, the answer is an emphatic yes, because data doesn’t always capture the context behind the outcomes, and without an understanding of context it can be tempting to draw incorrect conclusions or to mistake correlation for causation. Data is extremely important for tracking and measuring performance, so the point is not to negate the importance of data, but data without context may simply be noise. Stories may help get to the meaning behind the data points.

Stories add a new dimension to assist leadership in understanding context. In fact, storytelling is playing such an important role in organizations that many companies now hire Chief Storytelling Officers. If you want to try a quick experiment, open a web browser, search for Chief Storytelling Officer and see the article links that are returned. From Nike to Etsy, companies across a wide spectrum of industries have recognized the value of stories and the importance of expert storytellers, and have realized the value in telling stories not only to inspire their customers, but also to help motivate, influence and inspire their employees. In fact, entertainment executive Adam Leipzig writes,

"Surely leaders do not want to tell people how to do every aspect of their daily job. Storytelling can persuade them–enlist them, even, to help enact a compelling narrative, one that takes the team from here to a better place, and unifies a complex environment to a single, clear mission." 1

Additionally, stories may play a powerful role in working towards improving human performance, safety and operational performance. On page 24 of his book Pre-Accident Investigations: Better Questions-An Applied Approach to Operational Learning, Todd Conklin writes, "Our job is to help the worker tell the story of how work happens in both success and unfortunately in failure." 2

Data may not provide the full meaning of how work happens in failure events or when teams successfully accomplish their work. Stories provide the depth of meaning behind data so leaders may make informed decisions and so the organization may learn. As a leader, manager or consultant working with safety, operational, quality, reliability, product management, and/or project management teams (or really with any teams for that matter), it is important to understand how stories impact change and improvement. So, if we can make the case for storytelling, how does one become a great storyteller? I hope to answer this question with our online storytelling "Master Class," titled Powerful Storytelling for Organizational Success, which is now open for enrollment. Course enrollment will only be open until Saturday, October 22nd at 11:59 PM, so I can run the first class as a cohort through this 4-week program. If you want to learn how to tell stories in a coherent and consistent manner to help improve your operational and human performance as well as your safety efforts, please click here to enroll.


Stories play such an important role in the way organizations operate, but unfortunately, most people really don't understand what goes into telling a good story, and even the best intentions may fall flat when trying to influence organizational change. That is why I have put so much into this storytelling course and I hope you will join me on this storytelling journey by enrolling in the course today. 

In this course you will learn:

  • A repeatable process for telling a coherent story
  • How to find the hero or heroes in your story so you may lift up those important people and help others to understand their challenges in order to influence needed change
  • A consistent method for injecting emotional tension and release points in a story as you move your audience towards a desired future
  • Techniques that have been used by master storytellers to influence sweeping change
  • Methods for selecting the right story archetype for influencing others in a compelling manner

I am accepting registrations now and will close registration for this course at 11:59 pm Central Time on Saturday, October 22nd, so I can work with the first class of students together. I am going to leverage my experience teaching at the master's level to help make this a really great course. I have been researching this subject for many months and have been applying my research and best practices to my own storytelling methods. Part of the course will include me breaking down one of my keynote presentations, titled "From Cowboys to Ninjas: A Story of Transformational Change," and showing you how I applied these best practices to my story to create a coherent and consistent structure. We will work through the learning modules one week at a time over four weeks.

Here is what you get in the course:

  • 4 Learning Modules with instructional video content
  • Interactive discussion board through commenting integrated into the learning modules
  • Downloadable slides from each module  
  • A workbook with the course information, and hands-on exercises to help you actually apply what you are learning
  • A possible live online Office Hours session for interactive discussion, if there is sufficient enrollment

If you want to join this course, please click here to enroll. Spots are limited and once enrollment closes, it won't open again until we start another class, and I'm not sure when this will be. Also, if you know of others who might benefit from the course, please share this email. It might be fun for you to go through the course together as colleagues as well!

There is an FAQ section on the course enrollment page, but if you have any other questions not listed there, please let me know using our Contact Us page. Thank you, and I really look forward to having you in the course.

With much appreciation,

Randy


References: 

1. http://www.adamleipzig.com/blog/chief-storytelling-officer/

2. Conklin, Todd. Pre-Accident Investigations: Better Questions-An Applied Approach to Operational Learning. Boca Raton: CRC Taylor and Francis Group, 2016. Print.

What Can Waffles Teach Us About Resilience?

One of the hallmarks of resilient organizations is their ability to anticipate challenges and opportunities and to adjust performance during expected and unexpected events. In fact, Dr. Erik Hollnagel, using the Resilience Assessment Grid, describes four “potentials” of resilient systems: the potentials to Respond, Monitor, Learn and Anticipate. 1 Organizations must be able to respond to disruptions and opportunities and adjust performance accordingly in their effort to operate. They should monitor the signals for change that could affect performance, both within the organization and in the operational environment. They should be able to learn from experience (after all, if organizations fail to learn they may miss opportunities to improve). They should also be able to anticipate changes (both positive and negative) that could impact the organization.
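To make the four potentials a little more concrete, here is a minimal Python sketch of a simple self-rating against each potential. It is purely illustrative: the 0-5 scale, the function names and the “weakest potential” heuristic are my own assumptions for this post, not Hollnagel’s actual Resilience Assessment Grid, which builds each potential from detailed, organization-specific question sets.

```python
# Illustrative only: a simplified self-rating against the four potentials.
# The real Resilience Assessment Grid uses tailored question sets per
# potential, not a single 0-5 score.

POTENTIALS = ("respond", "monitor", "learn", "anticipate")

def rate_potentials(ratings: dict[str, int]) -> dict[str, int]:
    """Validate a 0-5 self-rating for each of the four potentials."""
    missing = [p for p in POTENTIALS if p not in ratings]
    if missing:
        raise ValueError(f"Missing ratings for: {missing}")
    for name, score in ratings.items():
        if not 0 <= score <= 5:
            raise ValueError(f"{name} rating must be between 0 and 5")
    return {p: ratings[p] for p in POTENTIALS}

def weakest_potential(ratings: dict[str, int]) -> str:
    """The lowest-rated potential is a reasonable place to focus improvement."""
    validated = rate_potentials(ratings)
    return min(validated, key=validated.get)

# Example: an organization that responds well but rarely looks ahead.
example = {"respond": 4, "monitor": 3, "learn": 3, "anticipate": 1}
print(weakest_potential(example))  # -> "anticipate"
```

Even a rough self-rating like this can start a conversation about which potential deserves attention first.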

I like these four potentials because they give us something to consider when designing resilience into our organizations. In fact, I recently had a conversation with a colleague about how to “operationalize” resilience in companies. That certainly is a challenge, as it requires a shift towards systems thinking and an understanding of how system components (such as business divisions) affect one another and work together to accomplish the goals of the overall system (such as the entire company), but it is possible. In fact, as part of my Ph.D. research this semester I am studying how organizations use sustained adaptability and work to create resilience. One area where we can learn about resilience is by studying responses to crisis.

I should point out that resilience is not only about crisis planning and response; if we only look at it through that lens we may miss key opportunities to learn and improve. However, crisis planning and response does offer a unique perspective on resilience. We can learn a few lessons about resilience as we reflect on Hurricane Matthew and how one organization plans and reacts. The restaurant chain Waffle House has developed a set of plans it uses during severe storms. In fact, its restaurants so rarely close that a former administrator of the Federal Emergency Management Agency (FEMA) is said to have coined the term “Waffle House Index” to help identify how an area is responding during and after a storm. The index has three levels: Green (open), Yellow (serving a limited menu) and Red (closed). The company starts tracking storms early and planning its response. To me what is really interesting is the ability to plan for how the restaurants will adjust performance to meet the needs of customers. For example, if the idea is that serving some food is better than serving none, then offering a limited menu may help restaurant management prioritize and “choose sausage over bacon because the sausage takes up less grill space, and to actually not serve waffles because the cooking processes uses too much electricity.” 2 If one of the markers of a resilient organization is the ability to continue operations despite disruptions, then Waffle House certainly has some points we can learn from.
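For readers who like to see such a scheme written down, here is a tiny, hypothetical Python sketch of the three index levels described above as a simple classification. The type and function names are invented for illustration; this is not an official FEMA or Waffle House artifact.

```python
from enum import Enum

# Hypothetical illustration of the informal "Waffle House Index" levels
# described above; not an official FEMA or Waffle House tool.
class WaffleHouseIndex(Enum):
    GREEN = "open, serving the full menu"
    YELLOW = "open, serving a limited menu"
    RED = "closed"

def index_for(is_open: bool, full_menu: bool) -> WaffleHouseIndex:
    """Map a restaurant's observed status to an index level."""
    if not is_open:
        return WaffleHouseIndex.RED
    return WaffleHouseIndex.GREEN if full_menu else WaffleHouseIndex.YELLOW

# Example: open on limited power with a reduced menu -> YELLOW
print(index_for(is_open=True, full_menu=False).name)
```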

Learning is important because that is how we improve. In resilient organizations, leaders, managers and team members learn and take action on what they have learned. Stories like how the “Waffle House Index” shapes disaster response are interesting and play a big part in learning. In fact, we hear stories about the amazing resilience of people and communities as they handle the devastation during and after storms and start the recovery process. Storytelling plays a large role in history and in the way we shape our organizations.

Understanding not only the story itself, but also how to tell it in a compelling and repeatable way, may make the difference between achieving our vision and goals with the story or falling short. One of the problems is that most people don’t understand the process and framework for telling a story. I hope to solve that problem with my Storytelling “Master Class” online course. Many of you have already signed up for more information. In fact, I have seen more rapid interest in the products I have created around this course than in any other product so far. The course is currently open for enrollment and will begin on October 24th, 2016. If you would like to enroll, please click here.

As you work to create resilience in your organization, perhaps take time to consider how you operationalize the potentials to respond, monitor, learn and anticipate. I hope this post has been useful and I wish you a great, safe and productive day! If you liked this post, I would appreciate it if you would share it using the share buttons on this page or simply copy and paste the link. Thanks again!

With much appreciation,

Randy

1.     http://erikhollnagel.com/ideas/resilience%20assessment%20grid.html

2.     http://news.wabe.org/post/how-fema-uses-waffle-house-measure-disasters

Powerful Storytelling To Improve Your Organization

I will not forget that moment... It was last year, during a focus group with a team who performed high-hazard work for a major industrial client. I was working on a project to gain insights about the gaps between top-level vision and ground-level perception. One of the goals was to find out where these gaps were and how safety was perceived at multiple levels of the organization. I had conducted several sessions, but this one really struck a chord with me. 

This one group seemed to "pour their hearts out," describing how they enjoyed their work and how they took care of each other as a team. They told deep stories about how they worked and how they tried to actively create safety. Their stories were so rich that in a way I felt as if I was alongside them during their safety and operations journey. At the end of the session one of the workers thanked me and told me how good it felt to be able to get the information out there to someone who would listen. It was at that profound moment when I realized I needed to do something different.

As a university instructor who teaches John Kotter's material on strategic change, I was aware of how powerful storytelling can be and the importance of helping others "feel" the need for change, but this was different. Rather than reading about it, I was "feeling" it, and I experienced the power of storytelling. I started on a quest to identify the hows and whys behind storytelling, and how to integrate the process in a compelling way to help organizations improve. I studied other books and masterclass instructors and learned how they told stories. This process shaped how I speak today and helped me redesign one of my keynote presentations, which I call "From Cowboys to Ninjas: A Story of Transformational Change." 

Here are a few tips I learned in the process: 

1. Listen to your audience to find out their needs. If they are workers, spend some time with them and listen to their stories in an open way. The idea is not to judge, but to learn. Sure, if you see safety hazards, you will likely need to take action to mitigate them, but the point is to get into the field and learn about how they do their jobs. What struggles do they go through? How is it that they achieve success most of the time?

2. Identify who needs to hear the important stories. Even the best stories are just that... stories, unless they reach the right people. They don't really impact organizational change until they are elevated and conveyed to key people across the organization. Who are the peers, seniors, and subordinates who need to hear the stories?

3. Craft the story so the hero or heroes resonate with and inspire the listener, clearly identify their steps along the journey, and articulate the ups and downs as the hero or heroes move along that journey. The story could be based on an event that happened in the past, a journey that is currently underway or a journey that needs to take place in the future. 

4. Pick the right story archetype for your hero/heroes to follow. Picking the right type of story, such as "The Quest" or "Voyage and Return" may help you fully develop the story so that when you tell it (such as during a company presentation or even an informal meeting) it has the structural elements to keep the listener engaged. 

5. Identify what changes you would want to see if the right people were to hear these stories. Stories can be inspirational, but it can help to have an outcome in mind when we tell a story. Think about what kind of action you would like someone to take after they have heard the story. 

There are many more tips that could go along with these 5, but these should help you get started. 

If this material resonates with you, I would love for you to join my storytelling course, which consists of a 4-module online workshop with instructor engagement and is designed to teach you what I have learned so far along my own storytelling journey. I presented a core portion of this material to the American Society of Safety Engineers in June of this year, and they liked it so much that I have been invited back to present it again as part of a virtual symposium in November. The course is open now for enrollment and will begin on October 24th, 2016. I would love to have you in the course! Please click here if you would like to enroll. Thanks so much for reading, and I wish you a great, safe and productive day! 

With much appreciation,

Randy
Founder and Product Manager
V-Speed, LLC

DevOps Teams and Combat Flight Crews: An Interdisciplinary Approach to Learning and Improvement

In most organizations, operations are complex processes, with many interconnected parts. We often make linear plans, hoping that things will move smoothly from Point A to Point B, yet when planning moves into execution, we can often find ourselves in difficult situations which rapidly involve non-linear actions. This makes decision-making more difficult. I think that DevOps and IT Operations teams may find this familiar, just like I did when I was a pilot in the Marine Corps…

A little over ten years ago I found myself in a very interesting, yet precarious situation. It was not one that I had anticipated or predicted, yet it required rapid decision-making while eating a “soup sandwich.” A soup sandwich is a term we used in the Marine Corps to describe a really messy situation with no good way to solve it. Just imagine trying to eat a sandwich made out of soup. It is sloppy and messy, but when you’re hungry, you’ll eat it. I was flying the KC-130 Hercules (which is sort of like a giant flying gas station) into Baghdad on a tactical approach profile, which is essentially the process of flying the aircraft in a safe and efficient manner to help avoid the enemy threat. The goal was to get into the airport as quickly as possible using the safest route possible. Essentially, flying from Point A to Point B, which should be simple, right?

Well, it would have been, but I quickly found out how a linear process can rapidly turn into a non-linear mess. While we were approaching the airport, the air traffic controller directed us to enter through a different geographical area than we had originally planned and began vectoring another aircraft towards the airport at the same time. We had no information on the other aircraft, except where we thought it was coming from. The controller did not provide us with landing instructions or clearance. This is when I had to start eating the soup sandwich.

You see, when flying in a combat zone we really strive to stick to our motto, “first pass, full stop.” What this means is that we want to nail our landing, just like a pitcher throwing a perfect strike in baseball or a basketball player hitting that perfect 3-point shot. “First pass, full stop” is both a tactically proficient method and a point of pride in our work as aviators. We want to do our best. Additionally, in a tactical combat situation we try to avoid executing what is called a “go-around,” which is when we overfly the airport low and slow and come back around for a landing. We avoid this because a go-around leaves us excessively exposed to the enemy threat.

But there my crew and I were, with some decisions to make in a matter of minutes. As the aircraft commander I felt the heavy pressure to make the best decision possible, given that there was really no perfect decision. Do I land the plane without permission? Do I perform a go-around? I slowed the plane as much as I could to give the other aircraft a chance to get ahead of us, hoping it would land and taxi clear of the runway. If so, my intent was to land with or without permission. My mission was to safely land the aircraft and I didn’t care what this controller told me. As I slowed the aircraft to what we called Max Effort speed, we saw the other airplane arrive in front of us. We rapidly discussed our options as a crew and decided that we would take the approach as low as possible and land the aircraft if the plane ahead of us was clear of the runway. We got to about 10 feet above the runway, and since the other aircraft had only just cleared the runway, we felt it was too dangerous to continue. Multiple voices on the aircraft shouted at once, “GO AROUND!” I immediately reacted, having executed the procedure more times than I can count in rehearsals and practice flights. It was not optimum from a tactical standpoint, but we had more than one safety consideration at that point and it seemed like the right thing to do. I am still here, writing this post, so I guess it was the right decision, but does the outcome justify the process in all cases? More on that later…

I have reflected on this event over the past several years and realized how much it taught me about the way work is performed in the real world as compared to how it is designed and planned on paper, the amazing power of adaptable teams, and the adaptive capacity of humans and technology to adjust under stress. But how far can individuals, teams and systems be stretched before they break? Are there resources to help team leaders, system managers, and team members contend with the dynamic reality of work? And while we are on this subject, are the concepts I just described about my aviation experience, and the questions posed, really that much different from the situations and questions DevOps teams face on a regular basis? The more I learn about DevOps, the more I’m convinced that DevOps teams face many of the same challenges we tackled in USMC aviation while transitioning to more automated and software-intensive aircraft. In the next section I provide some guidelines that I have learned through applying and teaching Crew Resource Management and aspects of system safety. In my opinion the divide between the issues faced by combat aircrew and DevOps teams isn’t really all that big, and in reality we can probably learn a lot from each other.

7 Guiding Principles for Successful and Resilient Organizational Performance:

Resilient performance means organizations can identify risks in advance, preempt those risks and/or defuse them in a way that allows them to continue operations despite disturbances in the system. Just like in military aviation, where we had to deal with risks such as enemy threat, bad weather, terrain, and breakdowns in situational awareness, IT organizations also contend with risks to development and operational performance. Whether the risks are related to delays in software releases or to server uptime, DevOps teams would be well served to devise strategies to improve their ability to detect risks and to increase their ability to handle those risks if or when they occur. Here are several guidelines that may be useful to help DevOps teams increase resilience and overall organizational performance.

1.     Wipe out the zero defect mentality with regard to human performance. A zero defect mentality is an attitude where people believe workers must never make mistakes. When leaders, managers and coworkers are intolerant of mistakes, this can create an environment of fear and distrust. We need to start the conversation about resilience and improved team performance by wiping out the zero defect mentality. In complex work environments people can and will make mistakes. While we strive to set up rules and heuristics for decision-making, sometimes there is no one-size-fits-all rule and teams have to rapidly make decisions based on the information available and their goal hierarchies.

2.     Acknowledge the reality that there is a gap between Work-As-Designed and Work-As-Performed. Systems will often function the way they were designed, and if there are system deficiencies, it will often be the human and the team that make up for them. In Marine Corps aviation we would often self-organize and create techniques to make up for gaps in deficient software and hardware design. We would then teach these techniques as a means of informal knowledge sharing. This is not unique to USMC aviation, and I believe others have stories like this. In fact, Sidney Dekker addressed the need for operational workarounds in his keynote address at the 2014 American Society of Safety Engineers Professional Development Conference in Orlando, Florida. During his presentation he described how workers “finish the design” and make up for the shortcomings designers may not have realized during the system design, construction and deployment process. On page 158 of the Third Edition of The Field Guide to Understanding Human Error he describes how pilots placed a paper cup on the flap handle of a commercial airliner so as to not forget to place the flaps in the correct position. 1 Sometimes designers and planners don’t foresee every circumstance where humans may be required to adapt to the operational environment. Sure, designers and planners can (and should) attempt to develop a hierarchy of hazard controls to optimize the system for human performance, but in some cases the need for specific controls may not be understood at the time the system is designed or deployed. Alternatively, they may actually design hazard controls into the system, but those controls may still be bypassed (intentionally or unintentionally) as workers perform their tasks and make what Erik Hollnagel describes as Efficiency-Thoroughness Trade-Offs. 2 In some cases workers may even adapt procedures in an attempt to make operations less risky and more effective or efficient, given their perspective and the operational context. This holds true with multiple forms of risk controls and operational performance tools, such as checklists. I believe DevOps teams use their teamwork and creative problem-solving skills in much the same manner. By identifying the gaps between system design and how work is actually performed, leaders and managers can find out whether human workarounds may be injecting unintended harm into the development and production process, and they may even find that DevOps teams have created a solution that can be implemented across the business to reduce risk and improve effectiveness or efficiency.

3.      Understand that blaming people for human error doesn’t fix system problems. A lazy investigation process will often point to human error as the cause of a problem or failure. Even if investigation teams have the best intentions, they may simply not understand how to investigate beyond human error. While human error may be a causal factor in the accident or failure chain of events, it is often the proximal cause, occurring at the last point before failure is actually realized. Deeper investigation will often reveal system deficiencies (distal causal factors) that may have made it very difficult for humans to recognize and respond appropriately to early signals of failure. This is sometimes referred to as an error-provocative environment, where the system design actually induces people to make mistakes or creates precursors to error. In fact, it is often because of (not in spite of) people’s creativity and capability to produce good work that organizations are able to achieve successful performance. Processes are not perfect. People are not perfect. Investigators need to have a degree of empathy when conducting post-mortems and investigations. They also need to conduct After-Action Reviews on successful events to understand what people and systems are doing right and how those processes may be repeated. Finding and rectifying system deficiencies may go a long way towards helping people do their jobs right and making it harder for them to do their jobs wrong.

4.     Don’t base success simply on outcomes, because the end doesn’t necessarily justify the means. Process is just as important as outcomes (and maybe more), because if we focus only on outcomes we may end up using flawed work methods or processes and still get the end result we desire. In my aviation example, what if my actions (which seemed correct at the time) had resulted in an accident? Would investigators, succumbing to hindsight bias, have felt that we should have made a different decision? If we don’t examine whether our processes or work methods are flawed, or whether they have deficiencies “baked into the recipe,” we may never know if the seeds of failure are planted in the process, and we may experience failure the next time we execute with those processes. Does the phrase “Sometimes it is better to be lucky than good” come to mind?

5.     Acknowledge the need for adaptability and adaptive capacity. It is often because of a team’s ability to anticipate and respond to problems that organizations achieve success. For example, even for small releases that may not impact a database, an organization may still have a Database Administrator on a teleconference, because there may still be a risk that something could happen in the late-night hours, midway through the release, that impacts the database and requires the DBA’s involvement. If the organization doesn’t plan to have the DBA on the call in advance, the DBA, as a critical resource, could be asleep or otherwise occupied and not easily recalled. (A small, hypothetical sketch of this kind of planning check follows this list.) If we simply try to create a plan and force people to stick to that plan when the operational and working environment conditions clearly indicate adaptation is required, we will likely set ourselves up for failure. As Eisenhower once said, “Plans are useless, planning is everything.” While plans may not actually be useless, the value is really in planning, because it elevates the individual and collective awareness of the organization’s and team’s objectives, resources, timelines, and activities. Then, when the operational environment throws a curveball during execution, the teams know how to adapt smartly and safely. 

6.     Break down the authority gradient between ranks or positions to open communications, speed up execution and foster a bias for action. I am not advocating that everyone simply be allowed to make their own decisions willy-nilly, but I am advocating that organizations learn to empower those on the “front lines” of DevOps teams to make decisions based on their functional and/or technical expertise. When people are overly intimidated by a senior team member’s rank and/or experience, this can stifle information sharing and decision-making. It took us years in Marine Corps aviation to solve this challenge, as we are very hierarchical and the aircraft commander is the one in charge. That being said, in my aviation example above, even some of the more junior crewmembers called the go-around. If I had shut them down because of my rank or position power, the consequences could have been much worse. DevOps teams can create team methods that break down these barriers to effective communication, decision-making and learning. I do, however, think it is important to create processes for sharing information across the team and with those who have the ultimate responsibility to “answer the mail” when things go wrong. Additionally, if there is a critical decision that could have dire business consequences if the team gets it wrong, organizations should consider having a “risk hotline” so teams know whom to call for help with the decision-making process.

7.     Build a shared understanding of each team member’s work, so that team members can understand the immediate impact of decisions and the cascading impacts across the team. When I was a pilot, I tried to understand what the Crew Chiefs’ and Loadmasters’ jobs entailed. In fact, because our actions were so tightly coupled, there was little room for error, so I had to understand what they were doing. For example, if we were conducting aerial delivery of cargo (where we launch the cargo out the back of the plane with parachutes) and I pulled the nose of the aircraft up too early, I could have caused injury to personnel. So, our checklists were designed to build awareness of each other’s tasks, but we also had to have knowledge about those tasks beyond simple perfunctory adherence to checklists. This collective mindfulness becomes important in building highly coordinated teamwork, where each member can anticipate what is required to happen in the future, with individual actions and system performance. This helps build higher levels of situational awareness, and I feel this gets to some of the key goals of creating DevOps teams. By creating a collective awareness, these teams can harness the power of both developers and operations teams for improved continuous delivery. As in Marine Corps aviation, this could help improve situational awareness, reduce risk, and achieve higher levels of precision.
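As a small, hypothetical illustration of principle 5, here is a Python sketch of a pre-release check that flags whether the critical roles named in a release plan (like the DBA in the example above) actually have someone assigned for the release window. The plan structure, role names and function are invented for illustration and are not taken from any particular tool.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: verify that every critical role in a release plan
# has someone on call for the release window, so adaptive capacity (like
# the DBA in the example above) is planned in advance rather than hoped for.

@dataclass
class ReleasePlan:
    name: str
    critical_roles: set[str]                               # e.g. {"dba", "network"}
    on_call: dict[str, str] = field(default_factory=dict)  # role -> person

def coverage_gaps(plan: ReleasePlan) -> set[str]:
    """Return critical roles with nobody assigned for the release window."""
    return {role for role in plan.critical_roles if role not in plan.on_call}

plan = ReleasePlan(
    name="2016-10-release",
    critical_roles={"dba", "network", "release_manager"},
    on_call={"network": "Pat", "release_manager": "Sam"},
)

gaps = coverage_gaps(plan)
if gaps:
    # In practice this might block the change ticket or page a coordinator.
    print(f"Release '{plan.name}' has uncovered critical roles: {sorted(gaps)}")
```

The point is not the code itself but the habit it represents: making adaptive capacity an explicit, checkable part of the plan rather than something a team discovers is missing midway through a late-night release.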

While this is just a short list of guidelines, I hope you find them useful. These were some issues we discovered in Marine Corps aviation, and several I realized later, first as a student in and then while teaching in a Master of Engineering in Advanced Safety Engineering and Management curriculum. Sure, there is room for improvement in the military aviation community and in industry. These guidelines will not solve all of an organization’s DevOps challenges, but they may go a long way in helping DevOps teams and the organization as a whole build a more open, honest, trust-based environment to improve collaboration during software development and deployment. The key is not simply to read these guidelines and understand them, but to inculcate them into daily habits, which are practiced until they become second nature. Try them out, see what works, and commit to them. Then improve them over time. They should become a “way of life” in the organization. When team members feel “this is the way we do things around here,” you know you are on your way to cultural transformation, and to improved resilience and organizational performance. This helped us improve team performance and reduce risk in USMC aviation, and I think it can help DevOps teams. 

P.S. If you liked this article, please send me a note using our contact page. Let me know what you liked, didn’t like, or what you would like to see in a future article. I am considering a future article with specific examples DevOps teams could use for each of the 7 guiding principles in this post. If you are interested, please let me know. Also, I would greatly appreciate it if you would share this using the share buttons on the left of the page, or simply forward the link. This fall I am continuing my Ph.D. work and will be investigating sustained adaptability. I hope to report my progress and observations later this fall.

Also, if you want no-nonsense info designed to help you improve team performance and to help your DevOps teams think like special forces or combat aviation flight crews, enter your email address below. I won’t send you spammy junk. Just good stuff to help you improve. 

Footnotes:

1.     For a description of how workers “finish the design,” see Dekker, Sidney. The Field Guide to Understanding Human Error. 3rd ed. Burlington: Ashgate Publishing Company, 2014. Portions of this section were originally in the following post: http://www.safetydifferently.com/flaps-coffee-cups-and-nvgs-a-tale-of-two-safeties/

2.     For a detailed explanation of the ETTO Principle, see Hollnagel, Erik. The ETTO Principle: Efficiency-Thoroughness Trade-Off: Why Things That Go Right Sometimes Go Wrong. Farnham, England: Ashgate, 2009. Print.

For more information on balancing risk and organizational performance, see Cadieux, Randy E. Team Leadership in High-Hazard Environments: Performance, Safety and Risk Management Strategies for Operational Teams. Burlington: Gower Publishing Company, 2014.

About the Author: Randy Cadieux is the Founder of V-Speed, LLC and the Product Manager of the Crew Resource Management PRO team performance system. He routinely works to educate and train organizations on improving team and organizational resilience, and operations performance. 

Achieving Sustainable Organizational Performance

In this Intelex Community Webinar, Ron Gantt of SCM and Randy Cadieux, Founder and Product Manager of V-Speed, discuss ways to achieve organizational goals through safety.

If you like this video or our other posts, why not subscribe to our newsletter? Simply enter your email address below. No SPAM. Just solid content to help you improve organizational performance.

15 Key Points for Leading in High-Risk Environments

Leaders operating in high-risk environments tend to face unique situations that have to be dealt with in a certain manner. In many cases, these leaders need a certain skillset to help them cope with the high-risk nature of their work. I recently collaborated with Bill Murphy Jr., author of The Intelligent Entrepreneur and columnist for Inc.com. If interested, you can follow him on Twitter at @BillMurphyJr. The piece we worked on for Inc.com is titled "15 Things Great Leaders Do When Things Get Really Dangerous." You can read the full article here. This version was edited down from the original draft, but I think you will like it. Also, if you are like me and think that safety talks can become too boring if we don't inject some creativity and humor, maybe you will like the video associated with point number 7.

P.S. If you like this post, would you mind forwarding it to a friend or colleague who might also like it? Also, if you haven't signed up for our newsletter you may do so by entering your email address in the box below. We will begin sending you content to help you improve organizational performance. 



With much appreciation,

Randy
Founder and Product Manager
V-Speed, LLC

Is Sustainable Performance Really About Achieving Zero Incidents?

I recently wrote this as a guest blog post for Intelex on the subject of adaptability, sustainable performance and learning. The original post may be found here.

With an increasing awareness of the importance of safety, we are seeing a growing trend toward zero-accident goals. Inherently, this is a laudable moral goal, and if we truly value people in organizations, then our intentions should indeed point to a goal of not harming people during operational work. Protecting people is a good thing, but one of the problems with “zero goals” is the lack of acknowledgement of how complexity makes it impossible to predict and prevent all risk in an organization.

Acceptable Risk and Safety Margins

Two of the principles of US Marine Corps Risk Management are to “Accept No Unnecessary Risk” and to “Make Risk Decisions at the Right Level.” Although predicting all risk is impossible, risk-based approaches are preferable to chasing “zero goals” based on lagging indicators because they explicitly acknowledge the existence of risk during planning and operational execution.

Zero harm is a worthy moral goal. However, while “zero goal” approaches don’t necessarily mean risk isn’t examined, it’s easy for zero accident initiatives to get sidetracked by focusing too much on reducing the apparent causes of specific injuries/incidents that occurred in the past. This may lead to a reduced ability to imagine new ways risks could emerge. Additionally, overemphasizing “zero goals” could lead to a reduction in learning from small failures. Risk-based approaches force leaders to identify the organization’s risk appetite and level of risk tolerance and provide a framework for allowing those in key leadership positions to make risk decisions. Once risk decisions are made, the remaining risk is often referred to as residual risk and oftentimes that residual risk allows for a space of positive action oriented towards achieving successful outcomes.

When we add the complexity of numerous interconnected parts in an organization, such as work methods, schedules, supply chains and project funding constraints, the ability to detect the changing face of risk within safety margins can become even more difficult. As Shane Parrish explains, predicting system behavior in complex adaptive systems can be very difficult. This is why it is extremely important to have a questioning attitude about how risk may occur, to espouse the notion of adaptive capacity and to create organizational capacity to manage the unexpected. Adaptive capacity may be thought of like a glass of water: as we build resources into our organizations and outfit our operational teams with the right equipment, planning systems, communication systems, and the leadership attitudes and behaviors to support these tools, our glass begins to fill. As our glass fills, our organizations grow the adaptive capacity necessary to proactively create safety.

Tools for Managing Safety at the Margins

Many industries have known for a long time that while humans can have a tendency to be unreliable under many circumstances, they have the capacity to do things that simple components and machines are unable to do. While the human is often the weakest part of the system, it is the only one that can actively create safety in complex systems.

An example of humans creating safety in complex systems comes from the aviation industry, where workers built and implemented Crew Resource Management (CRM) systems to help create adaptability and resilience in crews. CRM has played an increasingly important role, especially in military aviation, helping aircrew manage safe operational performance despite the high-threat, high-risk nature of their work. CRM has also found its way into other industries, such as the oil and gas, rail, and maritime industries. CRM affects safety by emphasizing the need for adaptability, decision-making, and questioning among crews as they work together to understand risk as it unfolds and to make decisions for safety and mission success. While the human may still be the weakest part of the system, risk-based approaches that use human creativity and imagination may go a long way towards raising both risk awareness and organizational adaptability to unexpected risks.

Here are some suggestions to help with actively creating safety within complex organizations:

  1. Equip, train, and plan. One of the hallmarks of effective teamwork is a system that views training, equipment, and planning as essential elements of excellent performance. By equipping teams with the right resources, training for initial and advanced competence, and implementing a robust planning system, leaders can create the foundations necessary for building sustainable safety and operational performance success.
  2. Develop adaptability as an individual skillset and adaptive capacity within organizational planning and management systems. When organizations are unable to adapt quickly, they miss opportunities to discover and adequately deal with risk. In other words, organizations miss the opportunity to fail gracefully. From a practical standpoint this may mean implementing systems, communication tools, information channels, the mindset required to facilitate adaptability when events force plans to change, and a process to review how and why these changes occurred.
  3. Create a risk “hotline” employees can call when time-sensitive risk decisions must be made. In fast-changing environments, leaders may consider developing a “hotline” communication system that facilitates the flow of information and that helps line crews determine who can help them make risk decisions.
  4. Consider debriefing a normal part of the workday. Just like putting tools away, driving back to the facility, or clocking out, there are certain things crews, teams and workers do after every shift. Debriefing should be integrated into the normal flow of work so that a good debrief is conducted after every shift to identify success and failure points from an operations and safety perspective. To take full advantage of this time, give your workers tools to use while debriefing so they can figure out what went well and where they can improve (a minimal sketch of one such tool follows this list). This will allow your organization to make debriefing a habit.
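As one hypothetical example of the kind of simple tool a crew could use, here is a minimal Python sketch of an end-of-shift debrief record that prompts for what went well, what needs improvement, and follow-up actions. The fields and names are invented for illustration; a real debrief tool would be tailored to the crew, the work, and the organization’s learning process.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of a simple end-of-shift debrief record; the fields
# are invented for illustration only.

@dataclass
class ShiftDebrief:
    crew: str
    shift_date: date
    went_well: list[str] = field(default_factory=list)
    needs_improvement: list[str] = field(default_factory=list)
    follow_up_actions: list[str] = field(default_factory=list)

    def summary(self) -> str:
        return (
            f"{self.crew} ({self.shift_date}): "
            f"{len(self.went_well)} positives, "
            f"{len(self.needs_improvement)} improvement items, "
            f"{len(self.follow_up_actions)} follow-ups"
        )

debrief = ShiftDebrief(
    crew="Night crew",
    shift_date=date(2016, 7, 28),
    went_well=["Permit-to-work reviewed before starting"],
    needs_improvement=["Tool staging delayed the first task"],
    follow_up_actions=["Ask logistics to pre-stage tools for night shifts"],
)
print(debrief.summary())
```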

Striving for “zero” is indeed a moral goal. However, in complex systems risk can be hard to predict, and chasing zero may be more difficult than we realize. Organizations may not be perfect, and they may not be able to prevent all accidents, but with the right attitudes, behaviors, systems and tools they can adapt, minimize damage, fail gracefully, protect people and operations, recover proactively, and learn for the future.

If you like this post, why not head over to this post on the Intelex website and add your voice to the conversation?

Randy Cadieux is the Founder of V-Speed, LLC (www.v-speedsafety.com), a consulting firm specializing in safety leadership, and organizational adaptability and resilience coaching and instruction. He is also an Instructor in the University of Alabama at Birmingham’s Master of Engineering in Advanced Safety Engineering and Management program and the author of “Team Leadership in High-Hazard Environments: Performance, Safety and Risk Management Strategies for Operational Teams.” Randy holds a Master of Engineering in Advanced Safety Engineering and Management and is a graduate of the U.S. Navy Aviation Safety Officer and Crew Resource Management Instructor Courses.

Join Us for Webinar on Organizational and Safety Performance

On July 28, 2016, from 10:00-10:30 am Eastern Time, Ron Gantt and I will be co-presenting a webinar on organizational performance. I think this is going to be fun and informative. If you'd like to join in, please see the details below, including the registration link at the bottom of the page. Thanks!

Title

"How to create sustainable performance and achieve organizational goals through safety"

Description

Ron Gantt and Randy Cadieux will provide an overview of how to create sustainable performance and achieve organizational goals through safety. 

In this webinar, they will identify: 

1. The goals of a safety management program and their relationship to organizational performance. 
2. Factors and Barriers that enable or disable sustainable performance. 
3. The best practices that organizations can implement to facilitate building sustainable expert performance. 


About the Intelex Community Industry Experts: 

Ron Gantt is Vice President of SCM. He has over a decade of experience as a safety leader and consultant in a variety of industries, such as construction, utilities and the chemical industry. Ron has a graduate degree in Advanced Safety Engineering and Management as well as undergraduate degrees in Occupational Safety and Health, and Psychology. 

Randy Cadieux is the Founder of V-Speed, LLC, a leadership, risk management, and team performance consulting and training company. Randy is also the Program Manager and an Instructor for the University of Alabama at Birmingham's Master of Engineering in Advanced Safety Engineering and Management program, and the author of "Team Leadership in High-Hazard Environments: Performance, Safety, and Risk Management Strategies for Operational Teams", published by Gower Publishing. 

I look forward to having you join us for this Community Webinar. 

Registration URL: https://attendee.gotowebinar.com/register/8877148295350507012

Best Regards,

Randy

A Tale of Two Safeties

In a recent post on the Safety Differently website I describe operational workarounds and how sometimes teams and workers can end up trading one form of safety for another form of safety or for operational efficiency and improvement. Here is a link to the post: http://www.safetydifferently.com/flaps-coffee-cups-and-nvgs-a-tale-of-two-safeties/

Control, Illusion of Control and Improving Productivity and Resilience

In his book Smarter Faster Better, Charles Duhigg mentions something called the locus of control. Locus of control refers to people’s belief in their ability to control events that affect them. A strong internal locus of control means that people feel they have the ability to affect circumstances around them and to influence what happens to them. A strong external locus of control means that people will tend to place blame on external factors. Generally, people with a stronger internal locus of control will be more likely to consider what they can do to influence a desired outcome, whereas people with a strong external locus of control may be more likely to feel that they are less in control of what happens. When negative outcomes occur, they blame external forces rather than thinking about what they might have done to influence a different outcome. Sure, it is possible that we can do everything right and random events may still change the outcome, but when we have a stronger internal locus of control we tend to believe more strongly that we can influence our circumstances and improve areas like production performance and safety performance.

I remember during US Navy flight training (all Marine aviators go through Navy flight school), I had some flights that went better than others. When I performed poorly, I would get frustrated, but I would always learn something I could improve. Even the best flights offered room for improvement. One thing we did well was conduct a consistent debrief. The debrief wasn’t a process to simply tell us how badly or how well we performed, but a method to help us understand what we could improve upon the next time we flew. This mentoring by more senior Instructor Pilots was an affirmation that we had an internal locus of control and could influence our outcomes. So, developing an internal locus of control can be beneficial in many circumstances.

Isn’t this a good thing? Isn’t it great to exert more control to improve performance? Yes, but only up to a point. In some cases we can end up in a situation where we don’t actually have control, but only the illusion of control. The illusion of control may create excessive turmoil in the workplace as those in positions of power seek to exert more and more control over worker behavior, thinking that if they can only control behavior enough, then accidents may be avoided. This belief, while ostensibly well intentioned, and likely fueled by years of assumptions that the primary cause of accidents is unsafe behavior, may guide leaders off course. If, in their quest to control worker behavior, management squashes workers’ ability to provide feedback about why they are making their specific choices (even if those choices look like improvisation and workarounds), this could stifle creativity, innovation and learning, and could perhaps even weaken workers’ internal locus of control. After all, if they are constantly told to simply follow the rules, never deviate and never innovate, they may feel helpless to improve the conditions under which they work, even if they have great ideas for improving production and safety performance. Additionally, in numerous industries and occupations no two jobs are exactly alike, and workers must adapt and use rules of thumb as opposed to prescriptive procedures.

So, what are leaders to do? I think they need to shape the organizational climate to provide a balanced approach between worker compliance with rules and policies and worker innovation and improvement. Sure, we can’t have a workplace where nobody obeys the rules and where people make up procedures as they go. On the other hand, it may not be possible for workers to obey every single rule all the time, because in some cases rules or directives can be unintentionally diametrically opposed to each other. If leaders establish an environment that encourages and expects worker feedback on what is working well and what needs to be improved, this may go a long way toward balancing a sense of control among workers, leaders, managers and supervisors. I think this process starts at the top, where leadership models the behavior they wish to see in others (including compliance with rules as well as receptiveness to those with new ideas and feedback about how certain rules may oppose each other). I think this process will help lead organizations to improved safety and production performance, and if leaders are open to learning, it could lead to improved reliability and resilience.

This is a great segue into some good news! At the links below you will find this week’s Kicking Boxes podcast interview with Dave Christenson, who worked for the US Forest Service and studied under Sidney Dekker. We had a wonderful conversation about high-reliability and organizational resilience. I really think you will enjoy it.

https://itunes.apple.com/us/podcast/episode-6-high-reliability/id1104052535?i=368046799&mt=2 

Also, you can listen to the episode on our website here: 

http://www.v-speedsafety.com/podcast/2016/5/4/episode-6-high-reliability-and-resilience-lessons-with-dave-christenson 

If you like this post and want to receive content designed to help you improve leadership, operations and safety performance delivered straight to your inbox, enter your email address below. We don't SPAM and you may unsubscribe anytime.

 

Can Safety Be Number One?

We often hear organizational slogans like “Safety is Number One,” yet there may be very little substance behind the slogan. Then, when managers are faced with production demands that push them closer to the edge of safety as they drift toward safety boundaries, they may have trouble making decisions that favor safety over production. After all, it is production that takes designs and turns them into a sellable product or service that brings revenue into the company. Reduced revenue can have wide-sweeping ripple effects, such as layoffs, which require workers to do more with less, further reducing safety levels in some cases. What I often find is that managers are not given the training and tools to make sacrifice decisions: decisions that help to prioritize and balance by sacrificing one attribute for another. In this case, it would be a decision to sacrifice production in order to protect people and material assets. Sacrifice decisions are needed to balance safety and production in a useful manner that is congruent with the values of the organization. I think it is important to consider safety as something we do, a value to be upheld, and a process to be integrated into the fabric of the organization, from design and planning, tool and equipment selection and setup, and procedure design and execution, to debriefing and learning for continual improvement. If safety is a value, it is not merely a priority that can be pushed aside.

Let’s take a similar example. If one of Company X’s Core Values is “We will treat our employees and customers with courtesy and respect” and its mission is “To produce the highest quality widgets in the industry,” we have something to work with. Suppose a new quality manager is seeking to push through a new contract with a supplier that makes the highest quality components available, and in the process he circumvents another manager. His actions in seeking quality would be incongruent with the company’s values because he would not be treating his colleague with respect. Some leadership training could help managers understand how this works, and some guidelines could be provided to help managers with the kinds of sacrifice decisions that are often required to place things like safety over production when necessary, or perhaps even to use some bricolage, experimentation, or tinkering to improve processes. On the surface this experimentation might seem like temporarily reducing levels of safety in order to expand the company’s capacity to succeed, but it may actually be possible to improve safety through this process as well, such as using creative brainstorming to make a process safer or implement more effective safety controls. Sacrifice decision-making skills are extremely important, and it is better to train our leadership and management on how to make these decisions than to pretend it is not a critical skill.

We can say the same thing about managing risk to other business areas, such as financials or logistics, where decisions may need to be made to protect finances or supply chains rather than pretending that threats to those areas do not exist. Sure, there are definitely times when we may need to push harder than other times to meet acute production goals, but leaders need to provide managers, supervisors and team members with decision guidelines so they know what is too far. Then, when the hard calls have to be made, there is a framework to help make those decisions. Organizations should develop a leadership mindset at multiple levels within the ranks and empower people at varying levels so they understand how to make these decisions.

In this week’s podcast Bob Conway talks about Disney’s 4 Keys and how they are used to prioritize decisions. We had a great conversation. This is a longer (err… should I say epic?) episode, but I think it is worth it.

Here is the iTunes link:

If you don’t use iTunes, you can listen to it on our website:

If I have provided you with helpful information, would you mind sharing this post with others? You can use the share buttons right on this website.

P.S. Thank you for being a loyal reader of the V-Speed Blog. Do you want to improve leadership and organizational performance? If so, why not subscribe to our newsletter to get premium content reserved only for subscribers? Just enter your email in the box below and we'll get you started with our latest eBook titled "7 Key Non-Technical Skills for Outstanding Leadership Results."

With appreciation,

Randy

 

The Know, Like and Trust Factor: 3 Pillars for Creating Powerful Teams

In this Post:

  • The Kicking Boxes Podcast is Now LIVE!
  • Leadership Lessons to build the “Know, Like and Trust” factor
  • A Free Gift to You Today: “7 Key Non-Technical Skills for Outstanding Leadership Results” eBook

First off, I am excited to announce that the Kicking Boxes podcast is now live! iTunes, Google Play, and Stitcher Radio approved it much faster than I expected. Our first 3 podcasts are available, and I would very much appreciate it if you would subscribe and download them, as we try to provide you with a lot of valuable tips in the interviews. In Episode 1 I describe the podcast and the format. In Episode 2 I interview Ron Gantt, from SCM, about leadership and engaging with front line teams. In Episode 3 I interview Bill Brown, from Secutor Solutions, to discuss the importance of organizational learning, lessons learned systems, and how managers can become better leaders. I am trying to get the podcast featured in the iTunes New and Noteworthy section. Would you help me to do this? Please subscribe and download the episodes and give the show a rating and review on iTunes. I really appreciate your help and I hope you enjoy the show.

https://itunes.apple.com/us/podcast/kicking-boxes-podcast-become/id1104052535?mt=2

If you can't access through iTunes or Stitcher Radio, you can listen on our website at v-speedsafety.com/podcast or v-speedmedia.com/podcast. If you like the podcast, can you please help me spread the word? Word of mouth is our best advertising!

Now onto the newsletter article for today (and your FREE eBook)!
 
In the leadership chapter of my book Team Leadership in High-Hazard Environments: Performance, Safety and Risk Management Strategies for Operational Teams I describe some of the requirements for being a successful leader. These include what I call “The 3 Cs and the L”: Competence, Confidence, Credibility and Likeability. I also describe how Likeability may be the least important because, let’s face it, some people simply won’t like us for one reason or another; but if we have the 3 Cs, then we can build a relationship with our teams that is grounded in mutual trust. Workers will understand that we have their best interests in mind and that as a team we are set up for success and to achieve our goals.
 
As a consultant and coach, I am also an entrepreneur who is constantly seeking ways to expand and scale what I am doing with V-Speed. In that effort I spend a lot of time reading about how other companies were founded, how they developed products and services, how they treated their employees and customers, and how they succeeded or failed. In my studies of business development, I have learned there are 3 key pillars that help leaders create successful organizations. These correlate well with the 3 Cs and the L and are closely aligned with Emotional Intelligence. When customers know, like and trust organizations, they are more likely to do business with them, but isn’t this true within companies as well? Aren’t employees more likely to put forth their best effort if they know, like, and trust their leaders? If so, then I think it is important to understand a few ways to build the “know, like and trust factor.”

  • Know: Do you get to know your workers? Do you try to find out something about them and what motivates and inspires them? Do you let them know something about you and what motivates you to do the work you do? You don’t have to become best friends, but getting to know each other is important in helping team members and leaders to like and trust one another.
  • Like: Do you engage in real conversations with your workers? It is understandable that during business, focus must be maintained on the work goals, but sometimes taking a few minutes to address your team to let them know you have their best interests in mind and that you are all after a unified goal for the organization can help build likeability.
  • Trust: Are you asking workers what they need to help optimize their performance? Are you doing what you can to honestly communicate with them? It is understandable that you can’t grant every request they make, and building trust doesn’t mean giving them everything they ask for. If you openly communicate about what you can and cannot do, and let them know you are trying, this can help build trust. Additionally, if you take action where you can and show them the results, this can go a long way in building trust.

The world is not a utopia and there isn’t one secret recipe for achieving these 3 pillars. However, even if workers don’t like their leaders, building common understanding and trust may help to develop connections around work goals that can make the work more enjoyable. I remember working with some Marine aircrew members years ago, and I indeed liked most of them. We didn’t hang out off duty, but I enjoyed working with them because we could find some common ground and get along with each other. There were a rare few I didn’t like, and honestly, they probably didn’t like me either. However, we knew each other, we had a common bond and trust between us, and we knew we had each other’s backs. That didn’t necessarily build up likeability, but it made the work more enjoyable, which is similar to liking the other person.
 
So, as a leader trying to develop and maintain resilient and highly reliable teams, what can you do to start improving your emotional intelligence and helping you and your teams build mutual ways to know, like, and trust each other? I would say it starts with the leader setting the example, getting out into the field or on the production floor and engaging with workers. Creating opportunities to find out what workers need and how you can help make their jobs more efficient, or how you can help them with safety while meeting the objectives of the business, may pay big dividends in the long run. If teams see leaders working to serve their best interests (which should be aligned with the mission of the organization), they may open up more and give greater effort. What are you doing with near-miss reports? If a worker submits a near-miss report and no action is taken, or if the results are not explained to the worker, this can break down trust in the process and trust in leaders. Leaders should walk the talk, model the behaviors they wish to see in others, and follow through on what they say they will do. One of the leadership principles in the Marine Corps is “Know your troops and keep them informed.” Sometimes just keeping the communication cycle going and providing feedback can go a long way in keeping up motivation.
 
I won’t pretend that leadership is easy, but it is worth the effort. Sometimes leaders need tools to help them along the way, because leadership is not a destination, it is a journey, and we should never stop learning on the journey. Therefore, I want to give you a free gift to help you. In this eBook, titled “7 Key Non-Technical Skills for Outstanding Leadership Results,” I describe some key skills which I think can help you along your leadership journey. I hope you enjoy it! If you would like a copy, please enter your email address here and we'll send it to you:

 

P.S. Related to this subject of leadership, I am considering developing an online course specifically targeting key leadership skills that are needed by tactical leaders who work closely with teams at “the sharp end” (where the work gets done). The tentative course title is "Leadership for the Real World." There’s a link in the eBook to get on the list if you’re interested, or you can click here to get on the interest list. If there is sufficient interest we'll let you know!

If you know of others who may benefit from this post, please share using the share buttons on this page, or simply forward as an email link so they can subscribe. Here is a link to subscribe. 
 
Until next time, thanks for reading, and have a great, productive, and safe day!

Productivity, Safety and Happiness: The Storytelling Connection

In Charles Duhigg’s latest book Smarter Faster Better, one of the points he makes is that a defining characteristic of effective teams is psychological safety. In other words, when team members feel safe to explain themselves and voice dissenting opinions, this can contribute to effective teamwork. This may seem like a no-brainer, but think about it for a second… How many times have you been part of a team where you didn’t feel like you could truly speak your mind? Perhaps you felt you would be reprimanded, ridiculed or not taken seriously. I think this happens more than we like to admit, even when there is a body of research demonstrating the benefits of openness and candid communication between team members.

As the founder of V-Speed, I try to help organizations improve leadership and teamwork so they can improve specific performance in the areas of safety and operations. Last year, during a series of focus groups, I saw firsthand what happens when team members don’t feel comfortable voicing their opinions. I was interviewing several team members of a client and realized how a lack of trust and mismatched perspectives at multiple levels across the organization led to a large disparity between the way work was designed and perceived at the top and middle of the organization and how it was actually performed by front line teams. Many front line team members felt extremely disconnected from top management and leadership, yet felt they had no voice to speak up. Without a way to get their voices heard, and without a culture that is receptive to dissenting opinions and change, it is unlikely any organization will experience great results.

But how can this type of culture, a “just culture,” be created and allowed to thrive? I believe the power of storytelling can help bridge the gap between what is perceived at the top levels of the organization and at the lower levels where the “real work” gets done. If you are a leader, you should be very concerned with what happens on the front lines, because without workers and people to influence, leaders are out of a job. Before we jump on poor leaders or managers, though, we need to lead with a degree of empathy: just as most workers go to work every day to do a good job, most leaders and managers also go to work each day to do a good job. Oftentimes they simply don’t realize this problem exists and that there is a major gap between perceptions at the top and the bottom of the organization. However, conversations like the one we can have through this post can help initiate change, and part of the change process should include storytelling to help companies and workers feel their way into change. It is one thing to speak to the logic of change efforts, but organizations change through the process of feeling and experiencing the big emotional challenges of people at multiple levels. I truly believe that the gap between the way work is designed and imagined by managers and leaders and the way work is actually performed by front line operators and supervisors can be reduced through dialogue and storytelling. We must start the conversations and stories by first creating psychological safety to open and shape a receptive climate. These rich stories can be a powerful tool for initiating change and organizational transformation: leaders can describe their beliefs and feelings about the good of the organization, while workers can explain their feelings and beliefs about the challenges of their work, the workarounds they feel compelled to create to meet competing and sometimes mismatched goals (like on-time production and safety compliance), their goals for themselves, and their goals for their teams and the larger organization.

Sidney Dekker explains, "employees are not a problem to control, but a resource to harness,"1 yet leaders and managers so often fall back into the Tayloristic command and control approaches that may have worked in the past. However, the world is a complex place, and organizations are too complex to simply think that perfunctory obedience to orders by employees is what is needed to achieve success and continuous improvement. It is at the points of interactive complexity (the relationships between the parts of the system) during real work that employees most often thrive and create success rather than failure. They just need a chance to lift their voices and tell their stories.

Employees are not a problem to control, but a resource to harness!
— Sidney Dekker

As a leader, it is one of your jobs to create the environment that helps teams to be successful and thrive. Giving team members opportunities to tell their stories, then listening to those stories and providing workers with some decision-making control over how they perform their work, may be a powerful step in the right direction toward creating this openness. Additionally, workers may have some excellent ideas on how you may be able to improve safety and reduce risk in your organization. After all, they are the ones at the “sharp end” doing the work, and they often understand how to actively create safety during planning and operational execution. Research also suggests that giving employees some choice in their work may improve happiness and performance. Once this openness is created, you have started your journey toward creating high-reliability teams.

1.     http://sidneydekker.com/wp-content/uploads/2014/08/DekkerPS2014.pdf

P.S. If you haven’t figured this out already, I love storytelling. I think we are hard-wired to like stories and I think stories have tremendous potential to impact the way we do work and how we can propel our organizations to new levels of performance. I would love to share our FREE Storytelling Guide with you. To receive your free Storytelling Guide, please enter your email address below. No, we won't send you spammy junk, just solid content designed to help you improve your organization's performance.

Thanks for reading and I wish you a great, productive and safe day!

 

Overcoming Resistance to High-Reliability and Safety Improvement Efforts

“Why should we do this?” This is a question we may often hear when trying to implement major organizational change. Employees, supervisors, and managers want justification for efforts that require them to change old habits. “Do it because I said so” tactics just won’t cut it. To be fair, these “why” questions are legitimate questions. If something appears to be working for employees, they deserve to know why they should change. In fact, in our Crew Resource Management Communications learning module I emphasize the importance of explaining the why behind the what and how. But what happens after workers are told why and they still don’t lend their support or provide their complete buy-in to implementation efforts designed to improve organizational reliability or safety?

I remember years ago when I was in flight school learning about stop-drilling cracks in aircraft wings. This is a temporary fix for micro-failures, used to help reduce the likelihood of a crack propagating further and causing catastrophic failure, and it is a great analogy to illustrate the potential benefits of micro-failure.

Sometimes allowing a degree of error and micro-failure can be a great teaching tool. After all, some great inventions, like what we often call "yellow sticky notes," were created as a result of failure. When I was a student in flight school, and then later an instructor, there were times when I had to learn by error and failure or teach my students to learn through error and failure. After all, most flight students will not master a technique the first time they try it, so they have to practice techniques over and over again until they get it right. In some cases, flight instructors may recognize that students have forgotten a step or are about to misapply a certain technique, and they develop defensive strategies to recover quickly and avoid catastrophic failure. This allows students to experience micro-failure and learn from their errors without causing harm to the aircraft. We also used techniques called “defensive positioning” to make sure errors didn’t get to the point where catastrophe occurred. After the maneuver was completed and the aircraft was recovered (often by the Instructor Pilot), there would be on-the-spot debriefing to help explain to the students what they did wrong and how to correct it. In many cases the debrief would be followed by another attempt. Post-flight debriefs were longer and were an opportunity to go into greater detail about the whys behind the what and how (the proper techniques). So, just as a stop-drilled crack attempts to reduce the likelihood of a micro-failure expanding, we tried to use micro-failure as a training tool so critical errors would not be allowed into student habit patterns. I went into some detail about learning from small failures in a recent interview titled "Scaling Up To a High-Reliability Organization."

Is this a technique that may be used to overcome resistance to efforts to create a culture of High-Reliability and to improve safety culture? Maybe. Perhaps when workers are extremely resistant there may be opportunities to allow micro-failure, so long as there are recovery methods that can be enacted quickly to prevent major disruption. These are strategies that organizational leaders must figure out on their own. However, one of the characteristics of resilient teams and organizations is that they fail gracefully; rather than being brittle and waiting for catastrophic disruption, they tend to degrade slowly and recover quickly. Here are a few points that may help address resistance to change when trying to build a high-reliability culture:

  • Allow the why questions. Hiding and suppressing them may build distrust within the employee ranks.
  • Provide legitimate answers and try to show with examples if feasible.
  • Where feasible develop scenarios for micro-failure that demonstrate the need for change, but make sure the situations can tolerate the micro-failure and build in recovery options.
  • Avoid ridicule; instead teach and encourage learning. When workers fail using the old ways, try not to ridicule them. Use this as a learning opportunity and provide alternative methods to help them be successful in the future.
  • Don’t underemphasize learning from success. This post has focused on learning from failure, but in many cases we don’t spend enough time learning from successful events. Try to find opportunities for small-scale trials with the new approaches you are trying to implement and communicate successes and small wins. This may have a powerful impact on those who are resistant to change.
  • Tie the reason for the change to a larger overall vision and strategy. I remember years ago when a group I was part of transitioned to a new aircraft. We had to alter the way we communicated and worked together in terms of crew dynamics. The technology of the aircraft was so complex that we needed all crewmembers to participate in pointing out safety problems or errors. We had to cultivate a climate where even junior employees were allowed to speak up and one that would not tolerate outdated habits from the earlier version of the aircraft. It was important to understand that the changes were part of a larger vision of transitioning to a highly complex and more capable aircraft. We had to see ourselves differently, and our methods worked to produce the needed change.

This is just a short list, but should provide you with some ideas on how to deal with resistance to change. Major transformational efforts can be challenging and sometimes it can help to have a framework for change. When you are trying to complete a jigsaw puzzle, what is your strategy? Many people will find the straightedge pieces and build the exterior of the puzzle first, providing a framework for the interior pieces. Similarly, if you are trying to build a team or crew performance program you also need a framework, or something that can help to hold things together. Our Crew Performance Guide is one tool that may help you. It is designed to give your teams and crews actionable strategies to infuse high-reliability and team performance techniques into routine and non-routine operations.

I hope this has been a helpful article and I hope it provides some strategies to assist you with your change efforts. As always, thank you for reading and I wish you a great, safe, and productive day! 

 

P.S. If you like our content and want to share it with others, please feel free to forward this article so others can subscribe here.

The Dynamic Balancing Act: Reducing Unacceptable Risk and Embracing Acceptable Risk

Depending on who you talk to, you may get a lot of different definitions for the term “safety.” Some people believe that safety means the absence of harm, while others may equate safety with compliance with safety regulations. To me, though, safety should have to do with risk (the likelihood of something bad happening and the potential consequences if that bad thing were to occur). To that end, one safety definition frequently used is “Freedom from Unacceptable Risk” (American Society of Safety Engineers 12). While I don’t necessarily love that definition, it does give us something to work with because we can then describe safety in terms of risk. So, if we believe this is an adequate working definition for safety, then the safety practitioner is responsible for helping to set the conditions so workers are not subjected to unacceptable risk. Achieving this goal may be quite challenging for safety practitioners. The process may include many conversations trying to get non-safety managers to understand the safety practitioner’s view of what constitutes unacceptable risk. Safety practitioners want production managers, supervisors, and workers to comply with safety controls in order to reduce risk to acceptable levels (that is, assuming the organization uses a risk-oriented approach and understands that safety means more than compliance with regulations).
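To make the “unacceptable risk” boundary a little more concrete, here is a minimal sketch of a generic likelihood-times-consequence scoring model. This is my own simplified illustration, not the ANSI/ASSE methodology or any particular standard; the 1-to-5 scales, the function names, and the threshold are all invented for the example.

    # A simplified, hypothetical scoring model (not the ANSI/ASSE standard):
    # risk is treated as likelihood multiplied by consequence, and "freedom
    # from unacceptable risk" becomes "stay at or below an agreed threshold."

    ACCEPTABLE_RISK_THRESHOLD = 8  # assumed cut-off for this illustration only

    def risk_score(likelihood, consequence):
        """Likelihood and consequence each rated from 1 (low) to 5 (high)."""
        return likelihood * consequence

    def is_acceptable(likelihood, consequence):
        """True if the scored risk falls within the assumed acceptable boundary."""
        return risk_score(likelihood, consequence) <= ACCEPTABLE_RISK_THRESHOLD

    # A frequent but minor hazard versus a rare but severe one:
    print(is_acceptable(likelihood=4, consequence=2))  # True  (score 8)
    print(is_acceptable(likelihood=2, consequence=5))  # False (score 10 -> needs more controls)

The arithmetic is not the point; the point is that once an organization agrees on where the threshold sits, safety and production personnel have a shared, explicit basis for discussing which risks must be controlled and which can be accepted and put to work.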

For those other workers outside of the specific field of safety, such as production managers and teams, do they have the responsibility to help the organization seek the remaining part of the equation (seeking out and using acceptable risk)? Is it the operations professional’s job to seek out and exploit acceptable risks to achieve the mission of the organization? After all, no organization is created simply with a mission to remain free from unacceptable risk. Organizations are created to solve a problem, deliver a service, or produce a product. Of course, if they can’t do this without seriously injuring people or causing major industrial accidents then they shouldn’t be in business, but the point is that safety is not their raison d’être. If we believe that production personnel are responsible for exploiting acceptable risk, then this involves at least some inherent risk-taking activity. If this is the case, shouldn’t safety professionals and practitioners use their knowledge, skills, and abilities to help the organization exploit these risks to maximize the upside potential while minimizing the downside? If safety workers seek to influence production workers to understand the need to avoid unacceptable risk, isn’t the other side of the conversation one where production workers help safety professionals understand how to support workers operating within their acceptable risk boundaries? Empathy can go a long way, and I think it helps for safety practitioners to understand the viewpoint of the line operators, and for operations workers to understand the viewpoint of the safety workers. To this end, safety and production work to achieve a harmonious balance between unacceptable and acceptable risk, and hopefully the organization educates managers on how to make sacrifice judgments so they know when to slow down production to emphasize safety. This dynamic balancing act becomes, as Dr. Erik Hollnagel calls it when referencing Dr. Karl Weick on reliability, a “dynamic non-event” (Hollnagel 5).

I believe this is possible, and it requires deep conversations between workers at multiple levels so that a work system may be designed in a way that protects workers while empowering them to achieve the organization’s production goals. I think these conversations should take place around how work systems are designed, so below I have included an excerpt from my book Team Leadership in High-Hazard Environments: Performance, Safety and Risk Management Strategies for Operational Teams:

Work system design is an iterative process and even after the systems have been designed, they must be tested and revised. Following a design–develop–test–implement process, the work system may be created and then tested in a controlled environment with those who will be doing the work to ensure it works effectively. Feedback is obtained and input back into the design phase until the work system meets the needs of all parties involved. Then the work system is implemented into the operational environment and used by all required teams. Feedback should be obtained again, particularly from the operational teams closest to the hazards and doing the work. Ultimately the work system should support those teams, so their feedback is essential.

While it would be nice to think this process would be finished after the implementation phase, in reality it should never stop because the organization is constantly adapting and changing, so the work system should simultaneously adapt, change, and undergo incremental improvements. As often happens after work systems are implemented, the teams realize there are better ways to perform their tasks and that some of the compliance requirements and safety rules may not fit with more efficient methods for doing the work. If work system design iteration is not made a priority gaps may emerge between the way operational teams conduct their work and the formal procedures. When these gaps are created violations of compliance rules and safety policies often occur, and even worse, injuries and accidents can result. This is not to say that the operational teams are right or wrong for their perspective on the best way to do the work, but the process itself must be monitored and updated, aligning the operational procedures that are actually used and the formal policies that are created. This methodology is neither an appeasement of operational teams doing the work, nor excessive conciliation with the rule makers creating formal policies, but is a way of ensuring all the requirements are aligned in such a way as to keep the teams safe, effective, and efficient while ensuring the organization itself remains in compliance with obligatory regulations. In this fashion the gap between actual work and policy can be closed (or at least narrowed) while adhering to compliance and safety rules. This iterative alignment process should also work to ensure that risks remain within acceptable levels. (Cadieux 147-48)

These types of conversations to align practice and design are difficult without a shared understanding of the various goals different departments and teams in the organization are given and what motivates the teams and workers. A shared understanding of, and empathy for, each type of team is important in achieving all of the organizational goals. In USMC aviation squadrons, unit commanders are required to attend a commander’s-level safety course. Additionally, before being selected to be a squadron commander they normally must have served as an operations or maintenance department head. So, by the time they reach the command-level jobs they have built an understanding of the various goals and activities required by the various departments. Additionally, in aviation squadrons the Commanding Officer, Aviation Safety Officer, and the Director of Safety and Standardization are all aviators, so they understand what it means to be a front line leader as well as the need to reduce risk to acceptable levels. The Aviation Safety Officer, regardless of rank, has direct access to the Commanding Officer and may bypass any chain of command for safety issues. This is to help ensure safety is made a priority, even in the high-risk business of military aviation. Many of the aviators and aircrew have worked in operations, safety, and/or maintenance-related jobs at some point in their careers. While this organizational design model may not be appropriate for all organizations, it does tend to be effective at balancing safety and production, building a shared understanding across teams, creating empathy by helping workers see others’ viewpoints, and aligning work design and rules with actual work practice.

While some safety practitioners may see their sole purpose as eliminating unacceptable risk, wouldn’t it be powerful if safety workers could also have deep conversations with production workers to help them seek out the opportunities within the bounds of acceptable risk? This process might include a pre-job meeting where safety professionals and operational teams sit down and discuss the hazards, the risks, and the safety controls needed to reduce risks to acceptable levels, but it might also include a deeper conversation about how the controls could be developed and implemented to help workers do their jobs more efficiently and effectively without compromising safety. Sure, it is understandable that compliance with rules is necessary, but if there are ways to comply with the rules, stay within acceptable risk, and also help teams do their jobs better, isn’t that a more powerful tool for the overall mission of the organization? Could these conversations help the work system design process? What do you think?

I think these deep level conversations must also be followed up with a process to capture the stories and retell the stories in a meaningful and impactful way to help influence positive change in organizations. The challenge for many is that after they capture the stories they don't understand how to tell the stories in a compelling manner using a repeatable process. We're here to help. If you would like information on our storytelling workshop and course, and to receive a FREE copy of our storytelling guidebook, please enter your email below. 

References:

American Society of Safety Engineers. Prevention through Design: Guidelines for Addressing Occupational Hazards and Risks in Design and Redesign Processes, ANSI/ASSE Z590.3-2011. Des Plaines: American Society of Safety Engineers.

Cadieux, Randy E. Team Leadership in High-Hazard Environments: Performance, Safety and Risk Management Strategies for Operational Teams. London: Gower Publishing, 2014.

Hollnagel, Erik. "The Issues." Safety-I and Safety-II: The Past and Future of Safety Management. Burlington: Ashgate, 2014. 

Creating High-Reliability through Resilient Design

Sometimes we hear people talking about the principles of High-Reliability Organizations (HRO), and while these principles may seem intuitive and easy to grasp for some, for others they may be more nebulous at first glance. Some may say something like, “Oh, we already do that.” Others may have the opposite attitude, such as, “Well, how could we do that?” Still others may be dismissive, with comments like, “Oh, that would never work here.” These are understandable viewpoints, given the context of the specific organizations and teams, but it doesn’t necessarily mean they are true. Therefore, I think it may be helpful to provide a practical example to show how one organization is seeking greater resilience. I believe this organization’s approach relates to at least two of Weick and Sutcliffe’s Principles of HRO (Preoccupation with Failure and Commitment to Resilience).

I am a member of the Design Thinking group on LinkedIn. In this group, a member recently posted a link to an article about how one organization is creating resilience by identifying failure points in its production environment and designing in countermeasures against those failures. The article is listed here: I was fascinated by how this approach is used to identify failure points and design in resilience. It made me think of how a proactive approach is often required to identify high-consequence failures.

Organizations must take the steps of identifying failure points and designing countermeasures, barriers, or defenses to reduce the likelihood of occurrence and/or to reduce the consequences. Additionally, operators and teams must be trained to sense and respond when weak signals indicate the potential for an impending failure. In my opinion, to be successful an organization must integrate these approaches (including attitudes, behaviors, and work methods) into the daily fabric of operations until it becomes “the way we do things around here.” In other words, it becomes part of the organizational culture.

We want systems that can flex under the stress of expected and unexpected events without breaking, and that hopefully become stronger as learning occurs and new actions are taken to improve. One way we did this in USMC and Navy aviation training was to train flight students to be highly competent at flying and at using sound judgment and decision-making skills. Additionally, their training pushed them to the limits so that if an actual emergency occurred they would be prepared to respond appropriately. As a specific example, I will discuss the safe-for-solo check ride. This flight is an evaluation flight, and is normally the 13th familiarization flight. After 12 flights of preparation the students undergo a rigorous evaluation flight with someone besides their main flight instructor. This flight is designed to make sure they exhibit the competency and decision-making skills required for solo flight. If a flight instructor has done his or her job well, the 12th flight will have been even harder than the safe-for-solo check ride, so that the check ride seems somewhat easier. Flight instructors do this because they want to make sure their students are extremely well prepared for expected events and for unexpected abnormal and emergency scenarios. The added benefit is that by their solo flights, the students have been exposed to the type of training that has fine-tuned their senses and helped build an attitude that keeps them attuned to aircraft performance and enables sensemaking and responsiveness if an unexpected situation arises.

I believe these principles are important for all organizations, and particularly in high-consequence industries where major failure is simply not an option. Whether your organization deploys production servers, trains aviators, or conducts other industrial operations, such as those in the mining, oil and gas, or manufacturing industries, I think there is potential for applying these concepts. Organizations and teams may not know everything, and it is unlikely that all risks will be predicted, but by designing for resilience and developing attitudes and behaviors that lend themselves toward sensing decreasing safety margins and impending failure, leaders and managers may be better prepared to contend with the challenges faced in the operational environment.

Thanks for reading, and I wish you a great, safe, and productive day!

Reference: Weick, Karl E., and Kathleen M. Sutcliffe. Managing the Unexpected: Resilient Performance in an Age of Uncertainty. 2nd Ed. San Francisco: Jossey-Bass, 2007. 

P.S. Stay tuned; our Crew Resource Management Implementation Guide is available for pre-order and will soon be available for download. This guide is designed to provide actionable tools to go along with the book Team Leadership in High-Hazard Environments: Performance, Safety and Risk Management Strategies for Operational Teams. It is like a human performance toolkit to help implement some of the strategies from the book. 

Scaling up to a High-Reliability Organization

In a recent blog post I discussed one strategy for helping to build high-reliability into operations by giving ourselves "an out." By this I mean trying to avoid making irrevocable decisions when those decisions have high-consequence potential, or at least providing exit strategies if plans start to go down the wrong path. Another strategy for helping to build High-Reliability Organizations (HRO) is storytelling. While storytelling won't necessarily make your organization an HRO, it may help you learn from others about their experiences in HROs. To that end, I am sharing a link to a recent interview with Sean K. Murphy.

Scaling Up to a High Reliability Organization

In the interview Sean asked me numerous questions about my experiences with HRO and how some of the elements might apply to other businesses. We discuss things like the 5 Principles of HRO, risk management, toxic leadership and its impact on deferring to expertise, and single points of failure. I shared some stories about how I saw HRO in some of the Navy and Marine Corps aviation units I was part of. There are also numerous links to articles and books you may find useful in helping you to build HRO culture in your organizations.

“What if we had…” Learning from Counterfactual Thinking

Have you ever had a close call or a near-miss and started the what-if process? Perhaps you thought to yourself something like, “If only we had done something different, the situation could have turned out much worse.” This is known as counterfactual thinking, where we consider an alternate outcome based on different antecedents. Essentially, we think about changing the causal factors in the past to arrive at a different outcome. This may be done in two ways: thinking about how things could have ended up worse, or thinking about changing the causal factors to end up with a better outcome. In reality counterfactual thinking applies to the past, so there is really no way to create an alternate outcome (at least not until time travel is invented). However, counterfactual thinking can be applied to think about actions in future scenarios. I think this process comes somewhat naturally to many people, but it may be used ineffectively (to blame individuals) or effectively (to improve the organization). Here are some examples:

1. The post-accident counterfactual that blames the employee. You’ve probably heard of this one. An employee experiences an injury or a team experiences a failure and investigators immediately start looking for the scapegoat(s) to tell them, “If only you had followed the procedures this accident wouldn’t have happened” or “If only you had paid more attention you wouldn’t have made this mistake.” These types of approaches fail to take into consideration the context of the situation and how events may have unfolded to place the worker or team in a position to take the actions they took. This is the sort of deficient retrospective understanding that claims to know everything because the outcome is already known, yet it does little to explain how something happened.

2. The post-accident counterfactual that is used for learning. This is similar to the approach above in that it is used after an accident or failure to analyze the causal events and to consider what could be changed in the future to attempt to avoid the same situation. This can be useful for learning if applied properly, but in some cases it can still be problematic. For example, suppose a worker is injured and the team does a quick investigation, determines a root cause rather than blaming the employee, and corrects that root cause. The team may have reduced the likelihood that the exact same accident will occur from the exact same cause, but if they ignore additional causal factors, the same outcome could still occur from causes other than the “root cause” that was rectified.

3. The post-incident/near-miss counterfactual that never takes place, but should have. In some ways maybe this is the worst situation of the three. How many times do organizations what-if potential outcomes after minor injuries, yet when near-misses occur that could have had serious consequences, the team members wipe the sweat off their brows, exclaim, “We got lucky on that one,” and then go about their business as if nothing happened? In many cases organizations get so focused on production that they miss opportunities for learning when they fail to apply counterfactual thinking to near-miss situations. I remember experiencing numerous near-misses years ago in my Marine Corps aviation career, and while it seemed intuitive to conduct counterfactual thinking, like “If only this or that had happened instead we might not be here right now,” for many years there wasn’t an easy way to capture and share this information. Now there are methods in place to record this type of information, including anonymous online reporting systems. Sometimes near-misses offer great opportunities for improvement if the organization chooses to learn from them. That requires management and leadership to set the tone, demonstrate their willingness to learn, and lead from the front.

There are other examples and uses for counterfactuals, but I think the important point is to use counterfactual thinking like a tool in a toolkit. Not every tool will get the job done, but in complicated and complex work, a good toolkit with multiple tools will likely provide the resources needed to achieve success and to improve performance along the way. If counterfactual thinking is used in a positive way to learn from the past and to think about future outcomes, in a way that doesn’t seek to lay blame and that considers the system and the context of work, it may be a useful tool. One of the points I make in my Crew Resource Management training is that we can never be perfect, but we can learn and improve.

Building High-Reliability by Giving Ourselves an "Out"

Have you ever experienced an error or failure where, in hindsight, the chain of events seemed obvious, but during the process, while the error or failure was occurring, you did not recognize what was about to happen? You know the old saying, “hindsight is 20/20”? Of course hindsight appears to be 20/20 because we know how the story ended, and it may be easy to say we should have known better or should have paid more attention. The reality is, though, that in complex systems (and I would argue most organizations are complex systems) the way errors, accidents, mishaps, and failures occur may not be as simple to predict as we would like to imagine. In fact, some would argue that accidents are a normal outcome in complex systems. Normal Accident Theory suggests that accidents are a normal part of complex systems and are often organizational accidents stemming from multiple failures. You may have heard of “black swan” events, where seemingly unknowable risks unfolded to cause major catastrophe. Sure, in hindsight perhaps they were not seen as black swans, but for those working within the system and organization, within a specific context, these may have been black swan events because they did not recognize what was coming.

Why do black swan events or unrecognized failures happen? There may be multiple reasons, and rather than asking why they occur, perhaps a more useful approach is to examine the complex organization and how safety or reliability is created. We can start by realizing that safety (and perhaps reliability) is an emergent property of complex systems and organizations. This means that we cannot predict overall failure from the failure of one component. What happens at the individual level may not be a good predictor of what happens at the system or organizational level. If we only look at the failure of one person at his or her job and don’t think about the ripple effects that failure could have, we may miss opportunities for identifying mitigation strategies. Additionally, I believe that we can never truly predict all types of failure, and there will always be a level of unknown-unknown risks (we can’t imagine them, so we can’t mitigate them specifically).

If this is true, should we just throw our hands up and give up as we wait for failure? Of course we shouldn’t, and I believe we can develop management cultures where we seek highly reliable performance and develop actions to help mitigate failure even if we can’t imagine what that failure might be. One approach may be to design more loosely coupled systems, which allow a degree of flexibility and recovery options in the event of failure. Let’s look at a simple example to demonstrate the difference between a tightly coupled system and a loosely coupled system (a short illustrative sketch follows the list):

  • Tight coupling: A single generator supplies power to two systems. If that one generator fails both systems will fail.
  • Loose coupling: Two generators supply power independently to each of the two systems so that if the first generator fails there will still be power available to the second system.
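For readers who like to see this in code, here is a minimal sketch of the generator example above. It is purely illustrative; the Generator class and the system_has_power function are names invented for this post, and the point is only that the loosely coupled arrangement keeps one system running when a single power source fails.

    # Hypothetical illustration of tight vs. loose coupling using the generator example.

    class Generator:
        def __init__(self, name, failed=False):
            self.name = name
            self.failed = failed

        def supplies_power(self):
            return not self.failed

    def system_has_power(power_sources):
        """A system stays up if at least one of its power sources is still working."""
        return any(g.supplies_power() for g in power_sources)

    # Tight coupling: both systems depend on the same single generator.
    shared = Generator("G1", failed=True)       # the lone generator fails...
    print(system_has_power([shared]))           # System A: False
    print(system_has_power([shared]))           # System B: False -> both systems fail together

    # Loose coupling: each system has its own independent generator.
    g1 = Generator("G1", failed=True)
    g2 = Generator("G2")
    print(system_has_power([g1]))               # System A: False (its generator failed)
    print(system_has_power([g2]))               # System B: True (still has power)

Adding a backup generator to either list is exactly the kind of “safety gate” described below.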

This isn’t to say that both generators won’t fail, but it is a way to design looser coupling so that if a failure occurs, there are backup options. Slack in a system is another example. Rather than following a critical path where resources are dedicated to completing a project in the minimum time with no room for failure, allowing slack in the system means that if something (such as an unforeseen risk) occurs, there will be additional resources to apply to the work so operations can continue.
 
I like to call the resources that loosen coupling “safety gates.” From a conceptual standpoint, “safety gates” are our “outs.” They help to give us an out in case decisions go badly. They also may help us to avoid irrevocable decisions, which are decisions we either cannot take back if they are wrong or cannot mitigate after we make them. Sometimes decisions become irrevocable because we have been overly optimistic and have not built in safety nets to help us recover if or when failure occurs. I used to fly a 4-engine transport aircraft. There are some scenarios where the aircraft could operate on two engines in an emergency, but under some conditions, such as heavy weight and high altitude, two-engine operation might not be enough to sustain level flight. I remember years ago during our multi-engine aircraft simulation training, instructors would try to walk us down the path of shutting down two of our four engines to put us in a tough situation. The conditions were set up so that if we shut down two engines on the same wing we would not have enough power to maintain altitude and would start descending. The simulation instructors would often freeze the flight simulator and ask us to mentally rewind and see if we could restart the first engine (even if it meant operating it at a reduced power setting) before shutting down the second engine, in order to avoid shutting down two engines and getting into an unrecoverable situation (an irrevocable decision).
 
Are there situations in your work environment where you or your teams perform critical tasks and where it is possible to make irrevocable, high-consequence decisions? What happens if those decisions turn out wrong and failure occurs? Is it possible to conduct a “pre-mortem” meeting with what-if scenarios to talk about worst-case options and the possibility of building in “safety gates” to help prevent failure from escalating? Is there a way to conduct simulations and rehearsals to try out the implementation of the “safety gates”?
 
There is no surefire, clear-cut answer, but I hope this newsletter is helpful in getting conversations started so you may identify tightly coupled systems and ways to perhaps loosen those couplings. If so, perhaps if error or failure occurs you may be able to stop the failure chain and recover from it early while minimizing damage to the overall system or organization. Additionally, while you may not be able to recognize all types of risks, by developing a management culture of high-reliability you may help to build a culture where employees and teams seek out information and try to recognize failure early, and build in capacity to deal with impending failure before it escalates beyond acceptable levels.
 
Here are a few resources you may want to consider reading:
 
“Art of Critical Decision Making” (part of The Great Courses) by Professor Michael Roberto
 
Managing the Unexpected: Resilient Performance in an Age of Uncertainty by Karl Weick and Kathleen Sutcliffe
 
Normal Accidents: Living with High-Risk Technologies by Charles Perrow
 
The Black Swan and Antifragile: Things That Gain from Disorder by Nassim Taleb
 
I hope this newsletter was helpful. If so, I would greatly appreciate it if you would share it with others using the links below. Thanks for reading and I wish you a great, safe, and productive day! 

P.S. I am proud to announce that V-Speed's Crew Resource Management Planning and Execution Toolkit will soon be available for purchase. You may get a preview of the content or pre-order a copy here. This guide was written to serve as a sort of "field manual" to help organizations implement some of the concepts from my book Team Leadership in High-Hazard Environments: Performance, Safety and Risk Management Strategies for Operational Teams.

Thanks for reading, and I wish you a safe and productive day!   

If you want to receive FREE and regular actionable content delivered to your inbox, enter your email address below: