A Whole New Ball Game: Overcoming the Odds Against Successful Implementation

In a pivotal scene from the 2011 movie Moneyball,[1] actor Brad Pitt, playing the role of Billy Beane, general manager of the failing Oakland Athletics baseball team, asks a room full of scouts grappling with the loss of key players and a limited budget, “What’s the problem that we are trying to solve?”:


Billy Beane:     Guys, you’re just talking. Talking “la-la-la-la” like this is business as usual. It’s not.

Grady Fuson:   We’re trying to solve the problem here, Billy.

Billy Beane:     Not like this you’re not. You’re not even looking at the problem.

Grady Fuson:   We’re very aware of the problem. I mean…

Billy Beane:     Okay, good. What’s the problem?

Grady Fuson:   Look, Billy, we all understand what the problem is. We have to…

Billy Beane:      Okay, good. What’s the problem?

Grady Fuson:    The problem is we have to replace three key players in our lineup.

Billy Beane:      Nope. What’s the problem?

Chris Pittaro:   Same as it’s ever been. We’ve gotta replace these guys with what we have existing.

Billy Beane:     Nope. What’s the problem, Barry?

Scott Barry:     We need 38 home runs, 120 RBIs, and 47 doubles to replace.

Billy Beane:    Ehhhhhhh! [imitates a buzzer] The problem we’re trying to solve is that there are rich teams and there are poor teams. Then there’s fifty feet of crap, and then there’s us. It’s an unfair game. And now we’ve been gutted. We’re like organ donors for the rich. Boston’s taken our kidneys, Yankees have taken our heart. And you guys just sit around talking the same old “good body” nonsense like we’re selling jeans. Like we’re looking for Fabio. We’ve got to think differently. We are the last dog at the bowl. You see what happens to the runt of the litter? He dies.

The problem is that the team is drastically underfunded—there’s no money to buy “big-time” players—and Billy is frustrated that his scouts are discussing possible new prospects with the same old problem in mind. His frustration is understandable: a lack of divergence regarding alternative ways to frame a problem generally results in striking out when it comes to successful implementation. And, unless you have the deep pockets of a Major League Baseball team owner, spending time on understanding why such approaches fail would be pretty useful, because the odds, based on current practices, are not in your favor.

A few years ago, a client organization was reviving a program that had been discontinued a decade earlier (under a new name, of course). Why, I asked myself, would it be different this time? What would have to change? Intrigued, and having watched many organizational initiatives come and go over my three decades of working with organizations, I set out to research the percentage of initiatives that end in failure and the reasons why they fail to live up to expectations.

The news is not good. What I discovered is that, if you’re heading up a new initiative or implementing strategy in your organization, the odds are not in your favor. Multiple studies completed over the past decade substantiate the rather dismal outcomes of well-intentioned initiatives. Extensive research shows that Total Quality Management and programs like it (e.g., Six Sigma) have roughly a 20–40% success rate when it comes to implementation. Eighty-five percent of reengineering programs failed to live up to their expectations.[2] A 2013 Gallup Business Journal article reported that more than 70% of change initiatives fail.[3] Sadly, many organizations wait a bit and try something else, earning the disdainful label of “flavor of the month,” which almost assures roadblocks to implementation. Why is the descent—the implementation—so difficult? These are, for the most part, well-intentioned and substantiated programs.

One of the issues is a failure to recognize the scope of the impact that the program has on the culture. As reported in one study, too many programs viewed their mandate as the application of a set of techniques rather than a “fundamental shift in the organization’s values, direction, and culture.”[4] Other key findings included:


  • tasks associated with maintaining the bureaucracy became more important than the thinking skills;

  • too much emphasis was placed on tools and terminology;

  • organizations lost sight of what the initiative was intended to do; and

  • organizations failed to create or maintain commitment.


With regard to the issue of commitment, what was equally intriguing were the references I both read and heard that stated, “But we have an organizational mandate!” as if that were some sort of guarantee, an inoculation against the systemic organizational forces that detect and eliminate threats from within. The act of telling produces compliance, but not the engagement and commitment needed to overcome the challenges associated with any change initiative.

What’s the Problem?

Of all the steps in a typical decision-making process, the one most overlooked is that of defining the problem—specifically, people tend to bypass the step of diverging on different ways to formulate the problem. When groups come together to create a new strategy, address issues of underperformance, implement new initiatives, or make decisions, they will typically—and often at an unconscious level—tackle these challenges with unexamined, “old” definitions of the problem guiding and limiting the development of their solutions.[5] Like those baseball scouts gathered around the table in the Oakland A’s office, most people are not even aware of the particular problem that they are trying to solve; they just jump right into the solution phase.

This is an issue I often see in the teams I work with. When a group is stuck on the question “Should we do this or not?” I’ll ask everyone to grab a piece of paper and write down their definition of the problem they are trying to solve. If you have 10 people in the room, you will likely get four different problem definitions.

People tend to speak in solutions without even thinking about the problem or question that is driving those solutions. If you were to reflect on typical conversations that you’ve had in meetings—formal or informal—my hunch is that you would observe the same pattern. Just move into an active listening mode, and, pretty soon, you’ll likely hear some variant of “Should we invest in this project or not?” or “Should we move into the one-story or the two-story house?” The question that drives those solutions is rarely articulated.

Hence, when a new initiative (solution) catches the eye of a leader and makes its way into an executive meeting, everyone starts to debate it or tries to find ways to get people to buy in to it. Asking questions such as “In response to what? What is the question that we are trying to answer? Why are we looking at this in the first place? Where’s the pain that is driving this?” will likely elicit either an awkward silence or numerous different perspectives. As Billy Beane despaired, no one is stepping back to distinguish the problem that they are trying to solve.

A recent Harvard Business Review article examined this issue of becoming too quickly enamored with “bright, shiny solutions” and programs, and suggested instead that one should “fall in love with the problem” and “spend time letting the challenge soak in, studying it from various angles, and understanding it more deeply.”[6] Successful companies, the article’s authors noted, did not run with the presenting symptoms, but, rather, persevered at understanding the essence of the problem.

In my experience, leadership teams often avoid the discomfort that can accompany such deliberations, and choose instead to move on to the ostensibly easier work of implementation. And, when the initiative fails to take hold, leadership treats it as an issue of poor implementation rather than an issue of a poorly defined problem. Whether in the fields of sport or business, the “question step” is clearly critical. Leaders are well advised to spend the time and effort required to find the right question that needs to be answered. Choose to avoid this, and odds are you will join others who are similarly “confined to their situation”[7] by failing to articulate, and gain alignment on, the problem to be solved.


What’s the Smallest Step That You Can Take?

The next challenge with implementation is curtailing the desire to knock the ball out of the park. “Train the masses! Issue an organizational mandate that everyone must do this!” are the organizational equivalents of “Knock it out of the park!” And while a home run can energize the baseball crowd, it is not enough to sustain managerial interest during the long process of organizational transformation.

Research conducted by Harvard professor Michael Beer over the past two decades shows that when it comes to the implementation of TQM, organizations do better when they focus efforts on “a small number of units when TQM fits the strategy and where leader’s attitudes, skills and behavior create a fertile context for TQM.”[8] A 2010 McKinsey study examined a number of practices that overcame the rather dismal 30% organizational transformation success rate noted in their research.[9] The authors report that “…three-quarters of the respondents whose companies broke down their change process into clearly defined smaller initiatives” were much more successful. Small wins, as it turns out, are the way to win the game.

Social psychologist Karl Weick’s seminal article, “Small Wins,” discusses how psychological barriers are created when situations are framed as huge issues or problems—it just overwhelms the brain.[10] To mitigate this response, he shares how several successful, large-scale public policy changes began with incremental steps. “The first steps were driven less by logical decision trees,” Weick wrote, “than action that could be built upon.”[11] The resulting small wins were like preliminary experiments that created heightened interest and a commitment toward achieving a second win.

Weick’s conclusions regarding the value of crafting intentions into small wins are supported by the neurological sciences as well. A small-wins framework likely results in a more specific goal. For instance, rather than HR stating, “We will become more inclusive in our hiring practices,” a small-win statement might sound something like, “We will expand the number of universities from which we hire from 5 to 10.” Why would that help? Essentially, vague goals tax the brain’s resources and working-memory load, making it difficult to create a mental image of the intention, all of which increases the likelihood of a failure to implement.[12] Giving people a list of 15 things to accomplish overloads the working memory of the brain. The list gets tossed.

Giving people a list of 15 things to accomplish overloads the working memory of the brain.

A colleague gave me some excellent counsel two decades ago when I was the facilitator for the company’s executive team meetings. I was headed out the door, off to catch a flight for our annual strategic planning retreat, when she grabbed my arm and said, “Please don’t come back with 20 initiatives. Just pick the top three so we can do them well.” From a neuroscience point of view, she was spot on. Achieving small-win goals gives the brain a nice boost of dopamine, activates the reward state in the brain, and motivates us to continue onward.[13] And experiencing this reward state increases the brain’s cognitive resources for creating the next set of small wins.

Don’t try to hit the home run!  Just getting onto first base is enough to start the process of change.

What Specific Skills Do You Need?

Several years ago, a client company resolved to become more “collaborative” in their discussions—a worthy goal, but one that lacked specificity. A year or so after this goal was announced, the firm’s senior managers got together to discuss what the term really meant, and, in my pre-meeting interviews with a few of the leaders, one leader stated, “I already thought I was collaborative! I have no idea of what they are looking for!”

And therein lies the last key issue when implementing initiatives: a lack of specificity regarding the kinds of behaviors you are seeking. Lewis’ Moneyball was, essentially, a book about asking a different set of questions and applying statistical analysis to answer those questions. Now that the problem has been articulated, what are the specific skills and abilities that will contribute to a winning team? As Billy Beane and his scouts found out, it wasn’t RBIs that made the difference, but on-base percentage. Daryl Morey of the Houston Rockets (also a student of data analytics) found that points, rebounds, and steals per game were not very predictive of a good basketball player; points, rebounds, and steals per minute were.[14]
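To see why the per-minute framing matters, here is a minimal sketch in Python with made-up numbers (not Morey’s actual data): per-game totals hide how much playing time a player needed to produce them, while per-minute rates factor that time out.

```python
# A minimal sketch with made-up numbers (not Morey's actual data) showing how
# per-minute rates can separate players whose per-game totals look identical.

players = [
    # (name, points per game, minutes per game)
    ("Player A", 15.0, 36.0),  # heavy minutes, modest efficiency
    ("Player B", 15.0, 22.0),  # same per-game output in far fewer minutes
]

for name, points_per_game, minutes_per_game in players:
    points_per_minute = points_per_game / minutes_per_game
    print(f"{name}: {points_per_game:.1f} pts/game, {points_per_minute:.2f} pts/minute")

# Per game, the two look interchangeable; per minute, Player B produces
# roughly 60% more per unit of playing time.
```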

So, going back to an organization’s goal to become more collaborative, which specific skills ensure collaborative leadership? What should be measured? What really makes the difference? As Lewis discovered in the research for his next book, when making such assessments, we often rely on outdated or flawed collective wisdom, and, rather than undertaking rigorous analysis to prove what really contributes to success, we are swayed by first impressions, confirmation and availability biases, unproven correlations, and a whole host of other heuristics.

A particularly germane section in Lewis’ The Undoing Project describes the work of Daniel Kahneman (the Nobel Prize-winning psychologist) for the Israeli army in assessing new recruits. Having identified a disastrously weak correlation between the recruits the interviewers thought would perform well and those who actually did, Kahneman put together a series of interview questions designed to assess how a person actually behaved.[15] The core set of questions became “not ‘What do I think of him?’ but ‘What has he done?’” Through this work, Kahneman found that if you remove the opportunity for the expression of gut feelings, people’s judgements improve.

When making assessments, we often rely on outdated or flawed collective wisdom, rather than undertaking rigorous analysis to prove what really contributes to success.

This approach of eliminating gut feelings and analyzing better data has been applied successfully in a multitude of sectors, including finance, education, criminal justice, and health care. In health care, this same data-focused technique is used to identify differences between doctors who are successful at engaging patients and those who are not. Why would anyone care about this? As it turns out, patients who are more engaged in the decision-making process are more likely to offer clues about issues the doctor might not even be considering. Engagement also improves patients’ understanding of the available treatment options, increases the proportion of patients with realistic expectations of benefits and harms, and improves agreement between patients’ values and their treatment choices.[16] All of this rolls up into better health outcomes and fewer malpractice suits.

So, rather than just chalking it up to “She just gets along well with people,” researchers intent on improving the doctor-patient relationship ask: How does the physician deal with resistance? What specific question structures are used to elicit the patient’s values and preferences? How are treatment options presented? They have identified that “how a doctor asks questions and how he/she responds to his/her patient’s emotions” are both key to engaging the patient.[17] And little things like periods of silence and how long they let the patient talk make a difference. The studies showed that, on average, doctors will interrupt a patient’s story within about 12 to 18 seconds, an act that decreases patients’ perception of their involvement in medical decisions.[18]

These conclusions are not based on a quick assessment of the physician but rather on the analysis of thousands of videotapes and live interactions: data, not gut feel. Forget surveys and interviews. Such methods are so peppered with first impressions and confirmation and availability biases that they are not of much use.

Instead, researchers capture the actual conversation on audiotape and then go through the painstaking process of writing down every “huh,” “uhhh,” and timed moment of silence that is part and parcel of an actual conversation. The outcome? They can detect qualitative differences as to why one doctor might have better outcomes than another. Armed with this data, if you are part of the hospital training staff tasked with improving the level of shared decision-making, rather than just telling your doctors to “ask more questions!” you can make your training much more specific about the types of questions to ask and how to wait for the reply.
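As an illustration of how such conversational data can be quantified, the sketch below is a hypothetical Python example, not the researchers’ actual tooling. It assumes a transcript annotated with each speaker’s turn and its start and end times, and computes two of the measures discussed above: how many seconds pass before the doctor first talks over the patient, and each party’s share of the total talk time.

```python
# Hypothetical example: a transcript annotated as (speaker, start_sec, end_sec) turns.
turns = [
    ("patient", 0.0, 14.5),   # patient's opening story
    ("doctor", 13.8, 20.0),   # doctor starts before the patient finishes -> interruption
    ("patient", 20.0, 31.0),
    ("doctor", 31.5, 55.0),
]

# Seconds into the visit at which the doctor first talks over the patient.
first_interruption = next(
    (turn[1] for prev, turn in zip(turns, turns[1:])
     if turn[0] == "doctor" and turn[1] < prev[2]),
    None,
)

# Total talk time per speaker.
talk_time = {}
for speaker, start, end in turns:
    talk_time[speaker] = talk_time.get(speaker, 0.0) + (end - start)

if first_interruption is not None:
    print(f"First interruption at {first_interruption:.1f} seconds into the visit")

total = sum(talk_time.values())
for speaker, seconds in talk_time.items():
    print(f"{speaker}: {seconds:.1f} s of talk time ({seconds / total:.0%})")
```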

In our consulting work we employed a similar technique when evaluating “good” and “poor” performing virtual teams. We had been asked to deliver a training class to improve leadership capabilities in a globally dispersed set of teams. Rather than defer to collective wisdom on what makes a great team leader, or even to my own gut feel about what makes a good facilitator (I have been teaching facilitation techniques, as well as facilitating, for over two decades), I taped and analyzed the dialogues of the teams that were performing well (as defined by their ability to deliver a set of results) and those that were struggling.

The study allowed us to pinpoint the specific differences in how leaders organized the discussion, the specific question structures that they used to engage their team members, and how they resolved the differences of opinion.

We can improve our judgements and decision-making through a more exacting analysis of current, relevant data.

In the better team, you could hear the leader calling on people by name when asking for input, using specific questions such as “Tom, I am interested in what might be some limitations of this suggestion,” rather than the “So does anyone have any questions?” posed by the not-so-good leader. In the not-so-good team, one presentation went on for over 20 minutes with no break for discussion. Not a good thing to do in a teleconference call! Names were rarely mentioned.

When it came to training, instead of providing vague concepts calling for better ground rules and a directive to be a better active listener, we had a targeted set of skills to teach the participants (e.g., during a presentation, break every five minutes to solicit comments using a specific kind of question structure).

For the client company mentioned earlier, whose goal was to become more collaborative, my question would be “What would we find if we taped the actual conversations of a leader who is deemed to be collaborative and one who is not?” What would the data tell us? Do they ask more questions? How do they challenge opposing points of view? How do they employ silence? How do they respond to disconfirming information? My point is, this is the level of specificity needed if one is to succeed at changing the behaviors in one’s organization. Actions are driven by talk, and we need to understand, at a very specific level, the root of the differences in talk if we wish to succeed in implementing new behaviors. General calls to action fail to shift behaviors.


For an initiative to be successfully implemented, the question or need driving it must first be explored and articulated. Leaders need to dedicate effort to understanding the specific problem they want to address. What’s the game that you are trying to play? What are the constraints that you are dealing with? Would your leadership team pass the “Billy Beane” test?

Once you have personalized, concretized, and owned the essence of the problem, ask, “What’s the difference between those who do whatever-you-are-looking-for well and those who don’t?” That means that before launching a company-wide training initiative, someone needs to spend dedicated time gathering data on exactly what needs to be trained or coached, and on how the organization plans to measure the impact of that training. What, specifically, helps those players score, causes certain doctors to be rated higher, or makes that manager so collaborative? Fortunately, someone in your organization is already good at what you are seeking to foster. Go find that person and study the heck out of what they do and how they do it.

Lastly, ask, “What are the easy, small steps that will build positive momentum for the change? Where is it already happening, and how do we leverage that?” It is so much easier—from both a psychological and a physical perspective—to build on what currently exists rather than adding to one’s already full plate.

In today’s challenging business environment, it is not, as Billy Beane stated, “business as usual.” Organizations can no longer afford the cost of continually running after new initiatives. By failing to implement well, organizations start to churn, creating frustration as scarce resources are diverted from their primary mission.

The good news is that there are a host of examples in multiple industries and disciplines that prove the value of asking a different question and understanding the specifics. The better news is that you, like the Oakland A’s, don’t have to pay big money to win your game: you likely have what you need within your own organization. You just have to identify the problem and, in very small steps, promote the specific skills that will help you win your game.


[1] Moneyball, directed by Bennett Miller (2011). The movie is based on Michael Lewis’s 2003 book Moneyball: The Art of Winning an Unfair Game.

[2] Katherine Rosback, “Overcoming the Odds” (white paper, Indianapolis, October 23, 2013). See also Katherine Rosback, “The Failure of Advocacy to Influence” (white paper, 2015; presented at the SDP 2017 DAAG conference).

[3] David Leonard and Claude Coltea, “Most Change Initiatives Fail—But They Don’t Have To,” Gallup Business Journal (May 4, 2013), http://news.gallup.com/businessjournal/162707/change-initiatives-fail-don.aspx

[4] Kim Cameron and Robert Quinn, Diagnosing the Organizational Culture (New York: Pfeiffer, 2011).

[5] Marilee C. Goldberg, The Art of the Question: A Guide to Short-Term Question-Centered Therapy (New York: John Wiley & Sons, 1998).

[6] John Boudreau and Steven Rice, “Bright, Shiny Objects and the Future of HR.” Harvard Business Review (July–August 2015): 72–78.

[7] Goldberg, The Art of the Question.

[8] Michael Beer, “Why Total Quality Management Programs Do Not Persist: The Role of Management Quality and Implications for Leading a TQM Transformation,” Decision Sciences 34 (2003).

[9] “What Successful Transformations Share: McKinsey Global Survey Results,” McKinsey Quarterly (2010), accessed November 30, 2017, https://www.mckinsey.com/business-functions/organization/our-insights/what-successful-transformations-share-mckinsey-global-survey-results

[10] Karl E. Weick, “Small Wins: Redefining the Scale of Social Problems.” American Psychologist 39, no. 1 (1984): 40–49.

[11] Ibid., 42

[12] Anna-Lisa Cohen and Peter M. Gollwitzer, “The Cost of Remembering to Remember: Cognitive Load and Implementation Intentions Influence Ongoing Task Performance,” in Prospective Memory: Cognitive, Neuroscience, Developmental, and Applied Perspectives, eds. Matthias Kliegel, Mark A. McDaniel, and Gilles O. Einstein (New York: Taylor and Francis Group, 2008).

[13] Kevan Lee, “Your Brain on Dopamine: The Science of Motivation” (January 24, 2017).


[14] Lewis, “Basketball Nerd King.”

[15] Ibid.

[16] D. S. Ballard-Reisch, “A Model of Participative Decision Making for Interaction,” Health Communication 2 (2009): 91–104.

[17] Jerome Groopman, How Doctors Think (New York: Harcourt Publishing Company, 2008).

[18] Rhoades, McFarland, Finch, and Johnson, “Speaking and Interruptions During Primary Care Office Visits,” Family Medicine 33, no. 7 (July–August 2001): 528–532.





About the author: Katherine Rosback