Chapter 9
Introduction
Although it is common to think of a crisis as a negative event, it can also be an opportunity for learning and change in the organization (Brockner & James, 2008; Wang, 2008). Put differently, a crisis should have the capacity to shock an organization out of its complacency (Veil & Sellnow, 2008). New perspectives can be developed that hedge the organization against future crises. The Chinese concept of a crisis views it as both a dangerous situation and an opportunity (Borodzicz & van Haperen, 2002). Those who do not learn from a crisis bring to mind the adage that those who ignore history are doomed to repeat it; they are likely to be visited by similar crises in the future (Elliott, Smith, & McGuinness, 2000).
Unfortunately, some organizations do not take even initial steps to prepare adequately for a crisis. Perhaps human nature prevents many of us from addressing a crisis until it has arrived (Nathan, 2000). When an event does occur, learning from a crisis can be haphazard at best. The research that addresses crisis learning is limited but growing (Deverell, 2009; Lalonde, 2007). In this chapter, we examine this growing body of knowledge on learning from a crisis.
What Is Organizational Learning?
Organizational learning is the process of detecting and correcting errors (Argyris & Schön, 1978); it seeks to improve the operation of the organization by reflecting on past experiences (Sullivan & Beach, 2012). In the context of crisis management, learning should occur when the organization experiences a crisis. It should not be assumed that learning always emanates from a crisis, because some organizations do not appear to learn effectively. A distinction between single-loop and double-loop learning is germane. Barriers to organizational learning are presented at the end of this chapter.
Single-Loop Learning
Single-loop learning refers to the detection and correction of an error without changing basic underlying organizational norms (Argyris & Schön, 1978). Suppose you are driving your car in a snowstorm and you suddenly lose traction. You sense your car is now veering left into oncoming traffic. To avoid hitting an oncoming vehicle, you steer the car away from the center lane, but in the process you sense that you are now turning too far to the right and running the risk of going off the road. You turn your wheels again, this time to the left, so that you are back on the road. You are careful not to turn your wheels too far to the left lest you head into oncoming traffic again. The process of steering to the right and then to the left is an example of single-loop learning. The corrections were made instinctively by responding to the current driving conditions in the best way possible.
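This correct-against-a-fixed-goal structure can be pictured as a simple feedback loop. The sketch below is only an analogy, not anything from Argyris and Schön; the target position, gain value, and sensor reading are all hypothetical. The point is that the error is detected and corrected while the governing goal, staying centered in the lane, is never questioned.

```python
# A minimal sketch of single-loop learning as feedback correction (hypothetical values).
TARGET_POSITION = 0.0  # center of the lane; this governing goal is never questioned

def correct(position: float) -> float:
    """Detect the error against the fixed target and steer against it."""
    error = position - TARGET_POSITION
    return -0.5 * error  # steer proportionally opposite to the drift

position = 2.0  # the car has drifted left (hypothetical sensor reading)
for _ in range(5):
    position += correct(position)
    print(round(position, 3))  # the error shrinks; the target itself never changes
```

Each pass through the loop adjusts the action, never the norm; double-loop learning, taken up later in this chapter, is what happens when the target itself comes under review.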
Learning From a Structure Fire
Firefighting is an example of a crisis activity that involves a great deal of single-loop learning. (The first author has served as a volunteer firefighter.) For instance, in fighting a structure fire, one must determine how much water to put on the fire. The firefighter will increase or decrease the volume of water and adjust the spray pattern according to the location and size of the blaze. In addition, a minimal amount of water will be used to extinguish the fire so as not to cause excessive damage for the property owner. If possible, firefighters will enter the structure and attempt to “push” the fire out away from the building, meaning they will spray the fire with water in the direction of a window or door. This type of attack extinguishes the fire more quickly and minimizes property damage but also heightens the risk of injury to the firefighter entering a burning building. If a structure is hopelessly consumed by the fire and entry into the building is not feasible, then the fire department will launch a defensive attack, also known as “surround and drown.” In this procedure, the firefighters are positioned outside the structure and aim their hoses onto the fire and the structure. There is little attempt to save the property; the aim is simply to extinguish the fire.
In this example, the principles of firefighting are the same regardless of the type of structure fire encountered. The learning that occurs is based on adjustments that are made along the way. For instance, if the firefighter thinks more water is needed, he or she will increase the volume by adjusting the nozzle of the hose. Alternatively, another hose (called a line) may be utilized to supplement the volume of water on the fire. The principles of firefighting do not change in single-loop learning during a fire, only the decisions regarding items such as water volume, pressure, or the type of attack.
Figure 9.1 illustrates single-loop learning with a simple diagram. In this example, an interior attack is initiated on a fire, which quickly escalates out of control despite the best efforts of the firefighters. They learn from the situation that they must exit the building and use a series of larger lines so that an increased volume of water can be distributed on the fire, thereby extinguishing the blaze. Note that the basic underlying assumptions of fighting the fire have not changed; hence, it is an example of single-loop learning. In the next section, we employ another example of firefighting to illustrate double-loop learning.
Double-Loop Learning
Double-loop learning involves the detection and correction of an error, but there is also a change in basic underlying organizational norms (Argyris & Schön, 1978). Such learning usually occurs after a process of thoughtful reflection (Kolb, 1984). This type of learning changes the organizational culture and the cognitive arrangement of the company. “Based on an inquiry or some form of crisis, the organization’s view of the world will change and, so, stimulate a shift in beliefs and precautionary norms” (Stead & Smallman, 1999, p. 5). Such a change in beliefs can cause organizational leaders to rethink the “It couldn’t happen to us” mentality whereby managers feel immune to a crisis (Elliott et al., 2000, p. 17). As Stead and Smallman point out, this evaluation–rethinking process has come to be known by different terms, including “double-loop learning” (Argyris, 1982), “un-learning” (Smith, 1993), and “cultural readjustment” (Turner & Pidgeon, 1997). When these deeper learning processes are applied to crisis learning, the perception that a crisis cannot occur and that the organization is invulnerable usually diminishes.
Learning From the Hagersville Tire Fire
Double-loop learning can also take place as a crisis unfolds and escalates. The 1990 Hagersville tire fire in Ontario, Canada, illustrates how extensive double-loop learning took place not only in extinguishing the fire, but also in how used tires should be managed. Tire fires are difficult to extinguish for several reasons. First, the shape of the tire allows ample air flow that can feed the fire. Second, tires are usually stored in large mounds that may be difficult to reach with conventional fire equipment. Finally, burning tires produce oil, which can ignite as well, adding more heat and flames to the existing fire (Mawhinney, 1990).
Traditional assumptions about firefighting had to be adjusted for the Hagersville fire. Simply adding water to the fire was not a workable option because of the complex nature of the blaze. First, the tires were stacked in large mounds, which made access difficult for firefighters. Initially, the strategy was to attack the fire from the perimeter and gradually advance toward the center of the burning tire pile. This strategy continued for seven days, but because of the intense heat, firefighters were not able to advance to the core of the fire with their hose streams or equipment. It was later determined that the tires would need to be separated and extinguished in smaller batches (Mawhinney, 1990; Simon & Pauchant, 2000). Although this strategy worked, water runoff from the tires was carrying oil with it and causing large puddles to form, threatening to contaminate the underground water supply. To address this situation, trenches were dug and sandbag barriers were used to direct the runoff into ponds. There, the oil was skimmed off and sent to a refinery, while the runoff water was pumped into tanker trucks to be treated at a local water treatment plant (Mawhinney, 1990).
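The Hagersville shift, from hosing the pile to separating the tires, can be caricatured as a loop within a loop. The toy model below is a hypothetical illustration only: the numbers, strategy names, and switch rule are invented. The point is simply that routine adjustments (more water) sit inside a second loop that can replace the governing strategy itself.

```python
# A toy contrast of the two loops; all values and rules are hypothetical.
def fight_fire(strategy: str, water_volume: int, fire_size: int) -> int:
    """Return remaining fire size; the perimeter attack wastes most of the water."""
    effect = water_volume if strategy == "separate and extinguish" else water_volume // 4
    return max(fire_size - effect, 0)

fire, water, strategy = 100, 20, "perimeter attack"
for day in range(1, 10):
    fire = fight_fire(strategy, water, fire)
    if fire > 0 and strategy == "perimeter attack" and day >= 3:
        # Double loop: repeated failure triggers revision of the strategy itself.
        strategy = "separate and extinguish"
    water += 5  # single loop: routine adjustment within whatever strategy holds
    if fire == 0:
        break
print(day, strategy, fire)  # -> 6 separate and extinguish 0
```

Adding water each day is single-loop learning; questioning whether the perimeter attack should be the strategy at all is the second loop.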
In addition to the fire, a deeper problem had to be addressed: Should the government regulate the management of used tires? At the time of the fire, the Ministry of Environment in Canada had not taken action except to impose an incineration ban. The local community where the fire occurred was also concerned about the environmental aspects of the fire. Smoke from a tire fire produces toxic fumes, and the water and foam used to extinguish it are also dangerous because they can seep into the groundwater supply (Simon & Pauchant, 2000). Attention needed to be focused on preventing another tire fire. Here again, double-loop learning began to take place as traditional assumptions about used tire management were challenged. Figure 9.2 summarizes the discussion on the Hagersville fire and the role of double-loop learning.
Learning From Failure
Learning from failures is another way organizations have incorporated double-loop learning. In fact, some organizations thrive in environments that should be at high risk for failure and a potential loss of life (Weick & Sutcliffe, 2001). Such organizations have been labeled high-reliability organizations (HROs) and include aircraft carrier flight decks, medical facilities, and firefighting incident command systems (Roberts & Bea, 2001). An extensive literature base exists on HROs (Bourrier, 2011), and lessons from these organizations have spread to industries not considered at high risk for catastrophic failure. This diffusion is in the spirit of organizational learning, which seeks to improve critical activities and enhance performance based on an analysis of past events (Sullivan & Beach, 2012).
One of the hallmarks of HROs is their obsession with analyzing past failures so as to prevent future ones. For example, the 1967 accident on the USS Forrestal that killed 134 crew members has been studied extensively by the U.S. Navy so that such an accident may never occur again (Brunson, 2008). The event occurred when a rocket from a fighter jet accidentally discharged into a group of other aircraft on the flight deck. The resulting fire was a combination of burning jet fuel and detonating bombs from the remaining aircraft on deck. Much was learned from the mistakes made in the firefighting tactics on board that day. First, because not all sailors were trained for this type of accident, mistakes were made fighting the fire and proper equipment was not utilized effectively. Today, all sailors are also trained as firefighters. Second, foam and water were not used effectively. Foam was used to smother the fire, a typical procedure for a fuel fire, but was subsequently washed off by other firefighters using water. This action caused the fuel and the fire to spread into the bottom compartments of the ship. Moreover, crew members using the foam had to stop and read the directions on how to apply it correctly (Brunson, 2008). As a result of the USS Forrestal accident, the U.S. Navy upgraded its firefighting capabilities and established the Farrier Fire Fighting School in Norfolk, Virginia. The school is named after Chief Gerald W. Farrier, who died fighting the fire on the USS Forrestal that fateful day.
As this example illustrates, organizations need to adopt a posture whereby they learn from failure and pass these lessons on to future staff and managers. Failures are a byproduct of organizational life and part of operating in a complex and changing world (Cannon & Edmondson, 2005). Confronting failure gives managers the opportunity to reevaluate their assumptions about how a problem should be solved.
Building a Learning Organization
We cannot discuss the topic of organizational learning without acknowledging the work of Peter Senge and how it relates to learning in a crisis management context. Senge (2006) describes the components of the learning organization as systems thinking, personal mastery, mental models, building shared vision, and team learning. Each of these is described next.
Systems Thinking
Everything that occurs in an organization is influenced by something else. Likewise, the events the organization initiates influence other items or systems. This interconnectedness forces managers to think conceptually: How does a decision made at one point in time affect other decisions that are made later?
As we have seen, a crisis is not merely a random event. Instead, it is the product of many interacting systems whose movements culminate in a trigger event that initiates the crisis. Recognizing that an organization is part of a larger flow of events helps the manager understand how crises emerge. Crisis events do not just occur; they evolve and are influenced by various systems. In the Hagersville tire example, we saw how a number of other systems were influenced by the fire. Smoke from the burning tires affected air quality in the region. Water used to fight the fire contained oil runoff, which could have contaminated drinking water if it seeped into the underground water supply. The fire was itself a system, affecting other systems as well. As the strategies for fighting the fire were planned, those leading the crisis response had to consider what other systems were being affected by their actions.
Personal Mastery
Senge (2006) views personal mastery as a competency that can be developed. It is also an organizational skill set. At its heart is the ability to see reality in an objective manner. Without this ability, learning is not possible. Developing this ability takes time, effort, and a commitment to discovering the truth. For the crisis manager, personal mastery is a must because reality is not always attractive.
The concept of sensemaking describes how, during a crisis, managers seek to assign meaning to events. There are times, however, when a crisis is so bizarre that sensemaking collapses (Weick, 1993). This collapse can be caused by the loss of a frame of reference, because nothing similar has occurred in the past. The human response is one of fear and helplessness, the fateful cosmology episode discussed elsewhere in this book. As Weick (1993) describes it, “I have no idea where I am, and I have no idea who can help me” (pp. 634–635). Nonetheless, decision makers in charge of responding to a crisis should acknowledge their need to regroup and see the event as objectively as possible. This mind-set can aid the crisis response and allow the organization to begin learning from the event.
Mental Models
These are the sets of assumptions and viewpoints that we have. Such models are necessary because they help us make sense of the world. Organizations also have mental models that reflect the collective assumptions of their members. Mental models can be useful when they urge us to think creatively about problems being faced. Indeed, some managers thrive on thinking “outside the box,” to quote a well-known phrase, because their minds are geared to seeing possibilities behind every problem.
Mental models can also hamper crisis response and, ultimately, organizational learning. When managers insist that a crisis “cannot happen here,” they are exhibiting a mental model of denial. Destructive mental models can be seen even when crisis events occur repeatedly in the same organization. For example, scapegoating is a mental model that seeks to shift the blame to some other party. Again, such a model is a form of denial—not a healthy ingredient in an environment for learning.
Building Shared Vision
This ingredient of learning involves a collective agreement by members of the organization on its mission and goals. Inherent in this shared vision is the passion employees show for the projects they work on and the role their company plays in society. Thus, when a crisis occurs, the whole organization is hurt because the collective vision has been attacked. As a result, efforts at confronting the crisis and getting back to business are embraced enthusiastically. This response can explain why some communities immediately move into action when a disaster strikes. Cleanup crews hit the streets quickly, volunteers abound, and government visibility is heightened as everyone works together to overcome the crisis and return to a sense of normalcy.
In the absence of a shared vision, the organization is more vulnerable when a crisis does occur. A fragmented organization will not respond cohesively and may even attack itself as the crisis unfolds. Scapegoating may occur among organizational members. Many professional sports teams experience this type of crisis from time to time. The scenario is usually predictable: the team has a bad season, the owners and coaches become confrontational, and the players frequently complain about the owner, the coach, or fellow teammates. Ultimately, some players may demand to be traded. When this type of “venting” occurs, a public relations crisis is born as well.
Team Learning
Senge (2006) describes the familiar situation in which an average group of managers can produce an above-average company. The opposite is also true: a group of above-average managers can produce a below-average company. Many crises originate because less-than-ideal dynamics occur among a group of otherwise competent professionals.
According to Senge (2006), the key to better performance, or team learning, is the presence of dialogue. Dialogue is a deeper form of discussion through which new ideas originate from the group. In the end, the team becomes the learning unit for the organization and is capable of reaching new levels of performance that a group of individual managers might not reach on their own. Dialogue is a prerequisite for double-loop learning, because new assumptions may need to be developed as old ones are discarded.
This notion of dialogue is important from a crisis management perspective. Crisis management teams (CMTs) are special units, capable of doing much more than just generating a list of potential threats and crisis plans. The crisis team is the unit that protects the organization, its mission, its values, and its reputation. Thus, the CMT is a strategic unit within the organization. Thinking of the CMT as just a committee or a staff department hampers its ability to promote true learning and long-term benefits for the organization. The status of the CMT must be elevated to a level at which it can attain strategic importance.
Learning From a Crisis
An optimal time to learn from a crisis is shortly after it has occurred. Waiting too long to extract lessons from the crisis could cause the sense of urgency for learning to wane (Kovoor-Misra & Nathan, 2000). In addition, organizational learning cannot occur unless there is feedback (Carley & Harrald, 1997). After a major crisis occurs, managers should reevaluate their crisis management plans based on feedback received during the event. They must be able to determine why specific decisions were made during the crisis. Mechanisms such as debriefings, stakeholder interactions, and technology enable managers to capture and share information with members of the crisis management team. This information can be used in follow-up discussions to learn lessons and develop best practices.
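One low-tech way to make such feedback durable is to record each major decision alongside its rationale while memories are fresh. The sketch below is a hypothetical illustration of such a debrief record, not a tool from the crisis literature; the field names and example values are invented.

```python
# A minimal sketch of a structured debrief record (hypothetical fields and values).
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DebriefEntry:
    decision: str                # what was decided during the crisis
    rationale: str               # why it seemed right at the time
    outcome: str                 # what actually happened
    lesson: str = ""             # filled in during the follow-up discussion
    recorded_at: datetime = field(default_factory=datetime.now)

log = [DebriefEntry(
    decision="Evacuate the plant",
    rationale="Gas reading exceeded the safety limit",
    outcome="Orderly exit in 12 minutes",
)]
log[0].lesson = "Alarm wording confused visitors; revise the signage."
```

Capturing the rationale, not just the decision, is what later lets the team ask whether the underlying assumption, and not merely the action, needs to change.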
In this book, we place organizational learning as the last stage in the four-stage framework. This placement is not meant to imply that learning does not take place in earlier stages. As a formal activity, however, learning is a reflective process that must take place after the crisis has ended. Early crisis management frameworks also posit that learning takes place toward the end of the crisis management process. For example, Pearson and Mitroff (1993) place “learning” as the fifth phase in their five-stage framework. Table 9.1 offers a framework for assessing the learning areas in crisis management. If learning is to be systematic, we must examine the four major areas of the crisis management framework as well as the internal and external landscapes associated with each area.
Landscape Survey
The landscape survey phase of organizational learning focuses primarily on the crisis threats that existed. The following discussion looks at the questions relating to the internal and external landscapes.
Were There Warning Signals That Were Missed Prior to the Crisis Occurring?
The internal landscape survey looks inside the organization for emerging crisis vulnerabilities. Perhaps an equipment breakdown brought on the initial crisis. Have repairs been overlooked on other equipment? Perhaps the crisis occurred when key personnel left the company and their replacements were not adequately trained, leading to a production accident. In this example, at least two problems should be identified: Why did so many employees leave, and why were their replacements poorly trained? Problems such as these indicate that human resource issues may need to be addressed.
Are There New Vulnerabilities in Our Organization That We Need to Be Aware of?
Although not explicit in every crisis, every organizational leader should consider one internal vulnerability: the relationship between the organization and its mission. In his analysis of the sexual abuse problem within the Catholic Church, Barth (2010) noted that the protection of the church became more important than its real mission, serving its members. Unfortunately, this self-preservation mentality can hide a multitude of problems. The opening case involving the Michigan Board of Education illustrates how protecting the local school district superseded a more commonsense approach to the problem, which would have been to keep George Crear III out of any school system. Instead, the Michigan Board of Education chose to protect its own school system, regardless of what might happen elsewhere.
Of course, George Crear III, not the Michigan Board of Education, is responsible for the crisis that occurred. While we do not overlook this reality, this book is about protecting organizations like the Michigan board from future crises. School boards everywhere have a responsibility to protect their students. As this case illustrates, there are hidden vulnerabilities that must be addressed lest a crisis occur.
Are There New Methods of Detection That We Can Use to Detect an Impending Crisis?
An analysis of the internal landscape may also reveal that new methods of detection should be used to sense an impending crisis. Perhaps new accounting and financial controls are needed to detect potential sources of employee embezzlement and other types of fraud. As mentioned in Chapter 8, monitoring the Internet on a regular basis is a way a company can detect whether it is about to be caught in a viral crisis. Depending on the industry, a firm may identify specific ways it can use technology to help detect an impending crisis.
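As a hypothetical illustration of such monitoring, the sketch below scans a batch of social media posts for warning keywords and raises an alert when the count crosses a threshold. The posts, keywords, and threshold are all invented, and a real system would pull from live feeds rather than a fixed list.

```python
# A minimal sketch of keyword-spike detection over brand mentions (all values hypothetical).
from collections import Counter

posts = [
    "love the new widget", "widget recall?!", "widget caught fire",
    "widget caught fire video", "avoid widget brand",
]
KEYWORDS = {"recall", "fire", "avoid"}  # hypothetical warning terms
ALERT_THRESHOLD = 3                     # hypothetical trigger level

hits = Counter(
    word.strip("?!.,") for post in posts for word in post.split()
    if word.strip("?!.,") in KEYWORDS
)
if sum(hits.values()) >= ALERT_THRESHOLD:
    print("Possible viral crisis emerging:", dict(hits))
```

Even a crude count like this turns the vague advice to “monitor the Internet” into a repeatable signal the crisis management team can act on.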
Are There New Threats in the External Environment That Can Lead to a Potential Crisis?
The external landscape survey can also signal emerging vulnerabilities. A recent crisis might have been weather related; in fact, droughts are common in the area where the authors reside. This situation has created water shortages and low-running wells. In a highly agricultural area like the southeastern United States, such an event is not only a crisis for many organizations but also a data point for anticipating future crises. To compound this crisis, an influx of new residents is moving into the region, driven by the growth of a nearby military base. Fortunately, learning is also taking place, and new plans to satisfy water needs are being developed, even if droughts continue to occur in the future.
Strategic Planning
Organizational learning with regard to the strategic planning process looks at changes that may be needed in the crisis management team, the crisis management plan (CMP), and training requirements.
Do We Need to Change the Composition of Our Crisis Management Team?
Organizational learning in the internal landscape may necessitate changes in crisis response plans. The composition of the CMT may require revision. Some current members may not be suitable, while other employees may be excellent replacements. In addition, it may be necessary to alter the size of the team. One of its members should have social media expertise or have access to staff members who do.
Are There Aspects of the Crisis Management Plan That Need to Be Changed?
The CMP can be revised at any time. Perhaps there are new scenarios that need to be added to the plan. The suitability of the command center should also be evaluated. The team should discuss whether the communication functions were readily available and whether the meeting rooms were suitable. Even a minor detail such as cell phone access should be evaluated, because some cell phone users may not have access in certain parts of a building, such as a basement.
Is There Enough Redundancy in the Day-to-Day Operations of the Company?
There is an old saying that “repetition is the mother of learning.” The practice of redundancy in an organization’s processes helps ensure that everyone understands their jobs and that there are backup systems for computers, files, and mechanical devices. Having a spare tire available for that one time when there is a flat is a common personal example of redundancy. At the organizational level, information technology (IT) professionals learned early that failing to back up their information systems can lead to disaster. The same is true in any organization. While redundancy is not necessary in every function, it is essential in those areas that are difficult to replicate. The organization that is prepared with backup systems can be resilient.
The same approach is appropriate in crisis management. When a specific process does not function well or at all, managers should have an alternate process that can substitute for the original. Redundancy in crisis management can be seen in the following examples (a minimal sketch of the first follows the list):
Methods of contacting the CMT in the event of a crisis should include cell phone, regular phone, and e-mail.
The crisis management plan should be printed in hard copy as well as made available on backup storage sites and posted on the organization’s website.
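The contact-method example can be sketched as a fallback chain. The channel functions below are hypothetical stand-ins that merely simulate success or failure; in practice each would call a real paging, telephony, or mail service.

```python
# A minimal sketch of redundant notification channels (hypothetical stand-in functions).
def send_sms(member: str) -> bool:
    print(f"SMS to {member}")
    return False  # simulate a dead cell network

def call_landline(member: str) -> bool:
    print(f"Calling {member}")
    return True   # simulate a successful call

def send_email(member: str) -> bool:
    print(f"Emailing {member}")
    return True

CHANNELS = [send_sms, call_landline, send_email]  # ordered by preference

def notify(member: str) -> bool:
    """Try each redundant channel in turn, stopping at the first success."""
    return any(channel(member) for channel in CHANNELS)

notify("CMT chair")  # the SMS fails, the landline call succeeds, e-mail is never needed
```

The design point is that no single channel is trusted: the ordered list makes the backup path explicit rather than improvised in the middle of a crisis.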
What Can We Learn From the Best Practices of Those Outside of Our Organization Who Have Encountered Similar Types of Crises?
Managers can learn from crisis events by observing the failures and crises of other organizations (Ulmer, Sellnow, & Seeger, 2007). The external landscape can yield numerous resources that can be useful to crisis managers. Books and articles on crisis management comprise one such resource. The book you are reading offers a framework for learning about crisis management, whereas articles tend to be more specialized and often highlight the best practices of specific companies. Many articles focus on lessons learned from a specific crisis. In addition to these outlets, some colleges and universities offer courses in crisis communications and crisis management.
Within the external landscape, crises related to safety problems resulting in fatalities will probably yield learning and changes on the part of government regulators. In other words, there may be collective industry learning that takes place as well. For example, the 2006 Sago Coal Mine accident in Sago, West Virginia, resulted in 12 miner fatalities after methane gas seeping from the mine walls caused an explosion. After the incident was investigated by the Mine Safety and Health Administration (MSHA), a new standard was released that increased the required strength of seals used to separate active and inactive sections of coal mines (Madsen, 2009). Unfortunately, one coal mining company, Massey Energy, chose to ignore safety regulations altogether and compromise miner safety. In this example, the company deliberately chose to ignore learning that had occurred in the mining industry. The Massey example is more a violation of business ethics than a lack of organizational learning. We explore the relationship between business ethics and crises in the next chapter.
Degrees of Success in Crisis Learning
The observation that an organization can display degrees of success in crisis management outcomes was first discussed by Pearson and Clair (1998). One of the items they considered was organizational learning from a crisis. Table 9.2 depicts three levels of outcomes: failure, midrange, and success. Each outcome is further distinguished by degree of learning, future impact on the organization, and strategy posture toward crisis management. Companies that experience mostly failure outcomes in the crisis management process have not yet learned from past events. It is not surprising that these organizations continue to repeat their mistakes each time a similar crisis erupts. Such organizations are reactive in nature, and therefore they are unable to learn because they are always in a state of surprise or, perhaps, nonchalance. Table 9.2 conveys the idea that learning success can vary among several ranges of outcomes.
Some companies will experience limited degrees of success in their crisis management practices and thus show some capacity for learning. A degree of learning is possible, but its applications will be sporadic. Therefore, certain areas in the organization will change for the better while others might remain the same. In terms of a strategy posture, the firm is still reactive but shows some willingness and ability to learn.
The ideal, of course, is a total learning organization. Companies that experience success outcomes in this area are willing and able to learn. The result is that policies and procedures are changed as needed. The hope is that in the event of future crisis events, the new learning will enable the organization to respond more effectively.
Barriers to Organizational Learning
Learning is not necessarily a natural outcome of a crisis. In fact, many companies are reluctant to learn and instead choose to return to the status quo as quickly as possible (Cannon & Edmondson, 2005; Roux-Dufort, 2000). There are a number of reasons why this is so. In the next section, we examine the more common reasons organizational members, particularly those in management, may resist learning. Barriers to learning are approached from two perspectives: operational considerations and factors related to the organization’s culture.
Operational Considerations
Operational considerations focus on issues related to the day-to-day functioning of the organization. Included in this discussion are an overreliance on programmed decisions, information asymmetry, and the tendency to ignore small failures.
There Is an Overreliance on Programmed Decisions
Programmed decisions—those that are based on some type of decision rule or prearranged logic—can be useful in a number of situations. They tend to work well when management decisions are routine and repetitious, such as the reordering of inventory when levels reach a prespecified number. Programmed decisions have also been factored into certain crisis management procedures. For example, many organizations have a prearranged list of procedures to follow when there is a bomb threat. These are designed to methodically protect assets and people (usually by evacuating the occupants from the building) while seeking as much information as possible about the person making the threat (taking note of background noises, engaging the caller in conversation as long as possible to identify speaking patterns, etc.). Such programmed decisions are useful because they are systematic in their application.
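A programmed decision can be captured in a few lines of code, which is part of both its appeal and its danger. The sketch below, with hypothetical threshold values, illustrates the inventory reorder rule mentioned above; note that the rule executes faithfully but never questions its own thresholds.

```python
# A minimal sketch of a programmed decision: a fixed reorder rule (hypothetical values).
REORDER_POINT = 100   # prespecified inventory level that triggers the rule
ORDER_QUANTITY = 500  # prearranged order size

def check_inventory(current_level: int) -> int:
    """Apply the prearranged rule; no judgment is exercised, and the rule is never revised."""
    if current_level <= REORDER_POINT:
        return ORDER_QUANTITY
    return 0

print(check_inventory(80))   # -> 500, the rule fires
print(check_inventory(150))  # -> 0, no action taken
```

Overreliance sets in when every decision is pushed into rules like this one and no second loop exists for revising the rules themselves.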
There can be a problem, however, when there is too much reliance on programmed decisions: “The more programmed decisions are utilized by an organization, the more resistant to change it becomes” (Lester & Parnell, 2007, p. 177). This kind of situation can occur in companies where programmed decisions are used to promote efficiency. Because this mode of operation is usually effective, management may become complacent and not seek new approaches to running the operation. This complacency can carry over into the area of crisis management, especially when crisis planning is either not addressed or is left to top management (Nystrom & Starbuck, 1984).
At the employee level, programmed decision making can lead to a work routine in which the worker becomes a “mindless expert,” concentrating on the end result instead of the process of the task (Langer, 1989, p. 20). The consequence can be a workplace accident or missing the cue for a crisis altogether.
There Is Information Asymmetry
Information asymmetry can occur when similar incidents involving the same technology transpire over a wide geographic area (Boin, Lagadec, Michel-Kerjan, & Overdijk, 2003). For example, information asymmetry can exist when the manufacturer of a product has information that customers do not possess. Furthermore, different customers may have access to different information as well. The Therac-25 incidents from 1985 to 1987 illustrate this (Leveson & Turner, 1993).
Therac-25 was a computer-controlled radiation machine that administered prespecified doses of radiation to cancerous tumors. The machines were offered by Atomic Energy Canada Limited (AECL) and were introduced in 1982. They operated flawlessly until June 1985; between then and January 1987, six incidents occurred in which patients received massive overdoses of radiation while undergoing treatment. Several of these patients later died (Leveson & Turner, 1993). What made the crisis especially perplexing was the lack of information transfer among the affected medical centers. Instead, each medical center reported the machine failure directly to the manufacturer, unaware that other medical centers were also experiencing problems. Figure 9.4 illustrates the information asymmetry that existed.
The figure shows four different medical centers that were affected by overdoses of radiation caused by the Therac-25 machines. The incident that started the Therac-25 crisis occurred at Kennestone Regional Oncology Center in June 1985. The second incident occurred at Ontario Cancer Foundation in July 1985. Yakima Valley Memorial Hospital experienced incidents in December 1985 and in January 1987. East Texas Cancer Center experienced incidents in both March and April 1986. The radiation overdoses resulted in three fatalities and three other patients who suffered serious physical injuries (Fauchart, 2006).
As Fauchart reports in his analysis of the case, communication took place between each medical center and the manufacturer, but not among the four medical centers. Thus, the manufacturer, AECL, had complete information while the four medical centers did not, and potential learning at each of the four centers was foreclosed. Fauchart (2006) maintains that this information asymmetry could have been avoided:
The manufacturer should have informed all the users that a number of accidents had occurred, but he did not do so. Instead, he told every user who asked for information about other possible incidents that he was not aware of any. He thus used the information asymmetry to pretend that each accident was a one-off fluke. This clearly delayed the instauration of a learning process aimed at fixing the problem and preventing other accidents from occurring. (p. 101)
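As a hypothetical illustration of the remedy Fauchart implies, a shared incident registry would have let each site see the others’ reports. The sketch below is an analogy only; the site names echo the case, but no such system existed at the time, and the data shown are illustrative.

```python
# A minimal sketch of breaking information asymmetry with a shared registry.
shared_registry: dict[str, list[str]] = {}

def report(site: str, incident: str) -> None:
    """Record an incident where every participating site can see it."""
    shared_registry.setdefault(site, []).append(incident)

def known_elsewhere(site: str) -> list[str]:
    """Incidents other sites have reported: visible under sharing, hidden under asymmetry."""
    return [i for s, incidents in shared_registry.items() if s != site for i in incidents]

report("Kennestone", "radiation overdose, June 1985")
report("Ontario Cancer Foundation", "radiation overdose, July 1985")
print(known_elsewhere("East Texas Cancer Center"))  # both earlier incidents are now visible
```

Under the actual arrangement, each site’s report went only to the manufacturer, so the equivalent of `known_elsewhere` always came back empty, and each hospital reasonably believed its accident was unique.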
Small Failures Are Routinely Ignored
A recurring theme in crisis research is that small incidents that are ignored can lead to more substantial incidents or crises (Cannon & Edmondson, 2005; Smith, 1993; Veil, 2011). Such small incidents can be interpreted as warning signals of a larger crisis on the horizon. These early warning signals can be likened to an incubation stage during which the crisis grows, largely unnoticed by organizational members (Seeger, Sellnow, & Ulmer, 2003; Veil, 2011). The term “smoldering crisis” has also been used to describe a situation in which management largely ignores a series of smaller events, only to have them erupt into a larger calamity later on (Institute for Crisis Management, 2011). The BP oil disaster in the Gulf of Mexico that resulted in the deaths of 11 workers was an example of a series of smaller problems that were routinely ignored. According to the Institute for Crisis Management:
Some will argue that the explosion on the BP oil rig Deepwater Horizon was a sudden crisis—an explosion that killed eleven and triggered a major oil spill in the Gulf of Mexico. ICM maintains there is ample evidence there were a series of human errors and ignored problems, that had they dealt with when they first occurred, could have prevented the disaster that cost BP so much money and additional damage to its reputation. (2011, p. 2)
Organizational Cultural Considerations
The belief systems within an organization can stifle progress in attempting to learn from a crisis. As we will see in the discussion below, a track record of success, a culture of scapegoating, a status quo culture, and the painful process of looking at failures are deterrents to the learning process.
There Is a Solid Track Record of “Success” in the Organization
It may seem ironic, but success can ultimately lead to failure (Parnell, Lado, & Wright, 1992). When an organization enjoys success, the result can be an attitude of feeling invincible against crisis events. Success can be defined in a number of ways, such as consistent revenues, accident-free workdays, or a wealth of positive publicity. Sitkin (1996) noted that some level of failure is needed to encourage organizations to learn. After all, where is the incentive to learn if one does not experience a setback from time to time? A track record of success implies that there is nothing new to learn (Veil, 2011).
NASA has long been recognized as having a culture of success that overlooks the potential for failure (Barton, 2008; Gilpin & Murphy, 2008; Tompkins, 2005). Despite the fact that the organization has had incredible success in its space program, there have also been setbacks that proved fatal on three separate occasions.
There Is a Culture of Scapegoating
Scapegoating hinders an organization from learning from a crisis (Elliott et al., 2000). It blames the crisis on another party, thus deflecting attention away from the core source of the problem. There are a number of pitfalls with scapegoating that ultimately prevent learning in the organization. First, the organization is likely to become even more failure prone because key issues and warnings are not raised and addressed (Elliott et al., 2000). This scenario is likely because putting the blame on a scapegoat diverts attention away from the issue that needs attention. For example, manufacturers often blame their suppliers when a product is found defective. While this may be true, it still raises the question of why that supplier was used in the first place. The example mentioned in Chapter 6 of toys manufactured in China that were decorated with lead paint and sold in the United States illustrates this supplier dilemma. The fact that there have been a number of recalls, even as this book was being written, indicates that toymakers are still learning about the pitfalls of outsourcing operations overseas.
Another problem with scapegoating is that it indicates a company’s lack of ethics in running the business (Elliott et al., 2000). Scapegoating requires that blame be shifted, even if the company is at fault. Such an ethical stance is a form of denial, which is hardly a healthy atmosphere for organizational learning. The hindrance to learning is that the company’s core belief system cannot be changed for the better if managers are in denial as to what went wrong at the outset. Instead of displacing the blame to other parties, the organization needs to develop a culture of learning (Argyris & Schön, 1996). This organizational culture shift to learning enables management to make changes that can prevent future crises (Veil, 2011). However, this shift is difficult if there is a status quo–seeking culture in the organization.
There Is a Status Quo–Seeking Culture
A company’s core beliefs are the foundation of its organizational culture. If the culture is entrenched in an unwillingness to change from the status quo, then organizational learning will be virtually impossible (Roux-Dufort, 2000). This type of belief system begins with the attitude that a crisis “cannot happen here,” sometimes called the “it can’t happen to us” syndrome. When denial is present, there is a high likelihood that the organization will become crisis prone (Pearson & Mitroff, 1993). When a crisis does occur, the organization must either learn from it and move on, or transfer the blame to another party, thereby entering a state of denial.
Christophe Roux-Dufort studied a 1992 crisis involving a French airliner that crashed into Mont Sainte-Odile while making its final approach for landing. He interviewed a vice president of the airline and was surprised to learn that the executive did not consider the event a crisis. His reasoning was that the day after the accident, reservations for other flights with the airline had not changed (Roux-Dufort, 2000). The problem with this mind-set is that the crisis is written off as just another event—something that happens when you conduct business—and nothing more. Deep learning and attempts to change the organization’s culture are difficult to achieve when a company is in such denial; indeed, analyzing failure is painful.
Analyzing Failure Is Painful
Finally, analyzing a crisis that was the result of human error is difficult for those involved. Negative emotions usually surface when individuals examine their shortcomings, and this can result in a painful loss of self-confidence and self-esteem. Likewise, managers may find it difficult to focus attention on organizational failures because these failures reflect on their ability to govern effectively (Cannon & Edmondson, 2005). After all, if anyone is supposed to exert control in the organization, it is the manager. When a crisis occurs due to the failure of the organization, it is ultimately the manager’s responsibility.
Research in the area of organizational behavior has revealed how managers may displace the blame for their failures. Attribution error is a concept that has emerged in studies on leadership and attempts to explain how managers attribute blame or success when certain organizational outcomes occur. When a manager achieves success, she may attribute that success to her own personal traits as a leader. However, if that same manager encounters failure, she may attribute that failure to external causes; in other words, it is not her fault. This type of rationale is known as a self-serving bias, or “the tendency to make external attributions (blame the situation) for one’s own failures yet make internal attributions (take credit) for one’s successes” (Hughes, Ginnett, & Curphy, 2012, p. 51). Hence, managers do not want to experience a crisis based on some fault of their own, and they certainly do not want to talk about it afterward.