By Jeffrey Pfeffer
Companies are a lot like children—none of them is born knowing all that it needs to know, and relatively few are born really smart. Most acquire intelligence by learning basic tasks and skills, mastering them, and then moving on to learn more advanced skills that can be applied to more difficult problems and tasks. It has become virtually axiomatic that to succeed over time, companies have to continuously innovate, learn, and improve how they do things. That’s why there has been so much emphasis on innovation and learning in the management literature. On Amazon.com there are almost 1,000 entries for “organizational learning” and 11,925 for “innovation.” Google has 802 million entries for the word “innovation” and 2.5 million for “organizational learning.” And the pressures to continuously improve are why, at least for a while, companies embraced the total quality management movement and why organizations in the United States invest billions—some estimates are more than $80 billion—annually in training and education.
Of course, you already know all this. The phrases “learning organization” and “continuous improvement” have become virtual management clichés. Nonetheless, relatively few companies actually embrace the management practices required to help them get smarter. That’s because some of the things they need to do to learn are counterintuitive—or at least inconsistent with conventional wisdom and common management practice.
Consider the research of Amy Edmondson, a professor at Harvard Business School, and her colleagues, who studied how hospitals and their employees acquired and implemented new knowledge and techniques. Learning about new science and practice and mastering new equipment and techniques are fundamental to the practice of good health care, because medicine is always changing in response to new research, pharmaceutical products, procedures, and equipment. Moreover, this adaptation has to occur in complex settings where the consequences for failure are high. Edmondson’s research also investigated how health-care organizations drove mistakes and errors out of the system. In short, Edmondson analyzed how to build a real learning organization. What she discovered makes perfect sense, but only if you think about second-order feedback effects and adopt a more long-term view of building a successful organization.
Should a nurse who discovers a problem, such as an unmade bed in a room about to receive a patient, just fix the problem, that is, make the bed? Or, to take a similar situation in a different context, should a software programmer facing an unexpected coding glitch just develop a workaround patch to keep the project moving forward? Not according to Edmondson. Taking individual responsibility and fixing problems might seem like a conscientious and good thing to do—the problem is fixed and no one else has to get involved. But that’s the rub. Consider the consequences: unless the unmade bed is brought to others’ attention, no one besides the nurse will know there was ever a problem, and therefore there will be no effort to discover the root cause and fix it. The same applies to the software project. If the root causes of problems are not discovered and remedied, the problems will almost certainly recur, and then other people will be faced with the task of fixing them. Organizational learning thus requires people to direct others’ attention to problems so they can be noticed, diagnosed, and fundamentally fixed once and for all. Organizations seldom like “noisy complainers,” to use a phrase I first heard from my colleague Bob Sutton. But such people are vitally important for the learning process as long as their complaints are substantive.
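The quiet-fix-versus-noisy-complainer distinction can be sketched in code. This is a minimal, hypothetical illustration of the pattern described above, not anything from Edmondson's study; the function names, the shared problem log, and the room records are all invented for the example:

```python
# Illustrative sketch: fixing a problem locally vs. fixing it AND reporting it.
# All names and data here are hypothetical, for illustration only.

problem_log = []  # a shared record that lets others diagnose root causes


def quiet_workaround(record):
    """Patch the problem locally; no one else ever learns it happened."""
    if record.get("bed_made") is False:
        record["bed_made"] = True  # the nurse just makes the bed
    return record


def noisy_fix(record):
    """Fix the immediate problem AND report it for root-cause analysis."""
    if record.get("bed_made") is False:
        record["bed_made"] = True
        problem_log.append({"issue": "unmade bed", "room": record["room"]})
    return record


rooms = [{"room": 101, "bed_made": False}, {"room": 102, "bed_made": True}]
for r in rooms:
    noisy_fix(r)

# The immediate problem is solved either way; only the "noisy" version
# leaves a trail pointing at a possibly recurring process failure.
print(problem_log)  # [{'issue': 'unmade bed', 'room': 101}]
```

Either function leaves the patient's room ready, but only the second one gives the organization the raw material for learning: a record of what went wrong, where, and how often.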
Edmondson’s research also examines a second conundrum—what about the effects of removing layers of management and lots of managers, and instead putting people in self-managed teams and leaving supervisors with larger spans of control? Isn’t it better to have fewer managers and a flatter structure? The answer, according to both Edmondson’s research and the experience of Southwest Airlines as described by Jody Hoffer Gittell, is it depends on what the managers do. If they just give orders and assign blame if things go wrong, you’re probably better off with fewer of them. But if leaders actually help people coordinate and learn, more are better.
Southwest Airlines actually has one of the higher ratios of supervisors to those being supervised in the airline industry, much higher than the ratio at American Airlines, for instance. And the research on health-care organizations also found that those that learned best generally had a higher proportion of managers. At Southwest and in the best health-care organizations, the leaders spent their time on relational coordination: helping their employees learn, moving information across organizational boundaries, and scanning the environment for common trends and themes, then bringing that information to their people, who could collectively use it to enhance performance.
The problem with having fewer managers is actually quite simple: since people have been taken out of the organization, those who remain have more to do unless something has been done to decrease the total workload. And there are fewer people in the organization to ensure coordination, reflection, and learning. In order for leaders to act as coaches, there must be enough leaders to do the coaching. Just as coaches help their teams perform better by standing on the sidelines and providing perspective and information that players in the thick of things might otherwise miss, so in companies it is useful to have people whose job responsibility includes learning, coaching, teaching, and reflecting, or else those activities won’t occur.
Here’s a third puzzle that comes from Edmondson’s research. If, when you entered a hospital, you had a choice of two wards, which would you choose—Ward A or Ward B, which has ten times the number of reported errors as Ward A? When Edmondson and her colleagues discovered that medical units with more reported errors, for instance, in administering medicine, actually had better health outcomes for their patients, they were genuinely perplexed. But their fieldwork quickly made sense of the apparent paradox. Medical units could have fewer reported errors for one of two reasons: (1) they actually made fewer mistakes, or (2) they made as many or even more mistakes but had a climate of fear in which mistakes were covered up instead of acknowledged.
In the organizations Edmondson studied, the second reason seemed to prevail. The units with more reported errors were generally led by people who understood that in complex environments with difficult tasks, stuff happens. The best way to ensure that the same mistakes were not made again and again was to acknowledge them, try to figure out their root causes, fix those causes, and then continually repeat that process. So, for instance, if no one ever admitted to making a mistake in administering medicine to a patient, there would be no way to uncover whether the problem was physicians’ handwriting, having similar-color pills placed too close together, insufficient instructions and record keeping, and so forth. In medicine, the motto “forgive and remember” is embraced. Forgiveness is important for ensuring that people are willing to admit when they messed up, and remembering, particularly if that memory gets institutionalized in better work processes, is essential for preventing the same mistake from occurring again. Both uncovering mistakes and learning from them are essential for the learning process.
There are two other things that companies need to do to get smarter that are also contrary to common practice. The first is letting people do new things, a decision that often has costs in terms of short-term efficiency. It is obvious that learning at the level of the individual involves a certain amount of beginner’s clumsiness—whether one is learning how to play a musical instrument, speak a new language, or make investment decisions. The irony is that even as companies want to become learning organizations, they don’t want to be places where people can learn new things—because that requires putting people in positions where they do tasks they don’t yet do very well.
At AES, the global independent power producer, people historically had the opportunity to volunteer for tasks such as deciding with teammates about health-insurance plans or even, in the case of the company’s Connecticut plant, having front-line employees invest about $10 million in reserves. The consequence was that people learned new skills. The company benefited as well: over time, people who have developed expertise become trapped in their accustomed ways of doing things and in their existing knowledge, so having different people do the tasks brings fresh eyes to the problem. Studies of innovation have found that much invention entails taking ideas and technologies from one context and using them in a different product or service environment, or combining existing elements in new ways and new settings. Thus, companies can get smart when they encourage such internal knowledge brokering, something that is accomplished by having teams of people do different things.
Learning and innovation also require letting people make mistakes. Individuals inexperienced in doing some new task are obviously likely to make more errors than those with more experience. This is true even in the case of medicine. Doctors in training learn by doing, and often these initial efforts are not particularly skilled. New procedures that will eventually save many lives, such as open heart surgery, and new devices such as stents, initially result in relatively high rates of mortality and morbidity until they are perfected through learning and experience. So, even in the highest-stakes situations, there seems to be no complete substitute for learning by doing and experimenting.
At the company level, bringing new products to market is obviously risky—some offerings will fail due to insufficient demand. The idea of running small experiments, a managerial practice embraced as a management mantra at Harrah’s Entertainment, Yahoo!, and IDEO, takes into account that some of the experiments and innovations won’t work, some of the Website trials won’t improve things, and some of the product prototypes will fail. Harrah’s tries different ways of getting loyal players back into its properties, and not every idea is going to be equally successful. IDEO may make hundreds of prototypes for new toys, of which relatively few get tested in the marketplace and even fewer are ultimately successful.
IDEO’s idea is that failing early and failing often is better than failing once, failing at the end, and failing big. The principle is simple—learn and fail on a small scale. But that ethos requires accepting that novelty and innovation are invariably accompanied by setbacks and failures. And embracing such a way of operating requires letting people fail—maybe even encouraging them to fail. After all, if nothing ever goes wrong, it must be because the capabilities of the system and its people have not been truly tested. This is a principle of total quality management—removing waste and inventory to highlight bottlenecks and problems that can then be fixed—in the process using problems and failures to highlight opportunities for improvement.
I’ve noticed that companies pay lip service to getting smarter but few do what is required to accomplish that lofty goal. Organizational learning requires three things: a clear understanding of recurring problems, the willingness to allocate resources to address the root causes of those problems, and cultural values that foster learning—which means encouraging employees to find, fix, and report mistakes rather than heroically patch things up. None of this may seem sexy or glamorous, and a lot of what is required seems to go against common sense ideas of “soldiering on,” efficiency, and holding people personally accountable for every error. But if you really want to outsmart your competition, it’s the only intelligent way to go. Reprinted with permission of Harvard Business School Press. Excerpted from What Were They Thinking? Unconventional Wisdom About Management by Jeffrey Pfeffer. Copyright © 2007 Jeffrey Pfeffer; All Rights Reserved.
About the Author(s)
Jeffrey Pfeffer is the Thomas D. Dee II Professor of Organizational Behavior at the Stanford Graduate School of Business, where he has taught since 1979. He is the author or coauthor of twelve books including Hard Facts: Dangerous Half-Truths & Total Nonsense and The Knowing-Doing Gap: How Smart Companies Turn Knowledge into Action.