Playing To Win
Faux Science in Strategy
Why Modern Strategy Projects are Structured for Disappointment
The strength of the modern demand for strategy studies from McKinsey, Boston Consulting Group, and Bain (MBB) never ceases to amaze me — though maybe the bloom is finally coming off that rose. A current situation has reminded me of the fundamentally flawed logic of how these assignments are structured. So, I decided to do a Playing to Win/Practitioner Insights (PTW/PI) piece on Faux Science in Strategy: Why Modern Strategy Projects are Structured for Disappointment. And as always, you can find all the previous PTW/PI here.
The Current Situation
My reflections have been spurred by a situation at a great client of mine. Both its Chair and its CEO are fans of my Strategic Choice Structuring approach to strategy, which defers analysis until it is clear what analysis needs doing.
The President of one of the many business units (which is itself composed of several sub-businesses) wanted to hire MBB to do its strategy rather than use the Strategic Choice Structuring Process. That was perhaps unsurprising, given that the President was ex-MBB himself. While it was not what either the corporate Chair or the CEO preferred, they had no interest in compelling the business President to do something he didn’t want to do.
But because the Chair is a very able strategic thinker, he had a view of the outcomes he desired and provided the President with a note on the outputs he wanted to see from the MBB study. Some were questions — What is the competitive landscape in the industry of business X? Others were instructions — Understand the elasticity of demand and supply in business Y.
His questions made total sense to me. There was nothing on his list that I wouldn’t want to know, and nothing unrelated to strategy. It was a long list. I counted about 25 independent questions. That is what happens when you start from a proverbial blank sheet of paper and the desire to have “a strategy.” There are always many questions that could be sensibly contemplated from this starting place.
I looked at the questions from the perspective of an ex-senior partner of a big strategy consulting house (Monitor Company for a decade and a half) and made my best guess of the cost to answer each question thoroughly and decisively at 2025 MBB rates. For example: What is the best way to pursue the development of region X? Or: In business Y, what should our portfolio look like and what should our strategy be?
My best guess is $1–1.5M per question, but let’s just say $1M to be optimistic, which would mean $25M in MBB work. But that is not going to happen at this company. It isn’t one of those companies that has a standing $50M annual MBB budget, with MBB partners embedded in offices at the company’s headquarters. This is a normal, sensible company. I suspect that the business President hired MBB for between $1M and $2M — let’s say $1.5M for argument’s sake.
The Dilemma
MBB was hired to do strategy the MBB way: the client wants a strategy, so MBB will provide the client with one. There is always back and forth on the kinds of topics and questions MBB will cover to produce this thing called “a strategy.” The fundamental approach is to collect data and analyze it based on the agreed-upon questions. When the topics/questions have been analyzed thoroughly enough, out pops “a strategy” — notionally.
The dilemma here is that the Chair, whose board will have to approve any strategy coming out of this effort, has 25 distinct questions — implying $25M of work — but MBB has $1.5M of fees to spend (again, both are estimates that might be off, but even if each is off by a factor of two, the basic point doesn’t change). That implies that rather than having $1M per question, MBB has $60K per question — or 6% of the necessary work resources.
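To make the arithmetic explicit, here is a minimal sketch of the budget math in code; every figure is the rough guess I made above, not a quoted rate:

```python
# Back-of-the-envelope math behind the dilemma, using the rough
# estimates from the text above (guesses, not actual MBB rates).
questions = 25                 # distinct questions on the Chair's list
cost_per_question = 1_000_000  # optimistic cost to answer one question well
budget = 1_500_000             # assumed size of the actual engagement

work_implied = questions * cost_per_question  # $25,000,000
per_question = budget / questions             # $60,000 per question
coverage = budget / work_implied              # 0.06, i.e. 6%

print(f"Implied work:  ${work_implied:,.0f}")
print(f"Per question:  ${per_question:,.0f}")
print(f"Coverage:      {coverage:.0%}")
```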
This may sound like it is an extreme outlier case. But it isn’t. It is a bog-standard situation when strategy is done by a strategy consultant in the standard way. The client asks for a strategy. The strategy consultant does a study that produces a strategy. There may not be 25 questions every time — but there will always be a lot.
I see two solutions to this dilemma that are in wide use in the strategy consulting business.
The first is that, with 6% of the resources they need, consultants do inch-deep work across a mile-wide territory. And to understand why it can be only an inch deep: at MBB, $60K buys you about 15 days of a first-year, freshly minted MBA or about 5 days of an engagement-manager-level person working without junior support — i.e. not much work. The output of this shallowness is typically less than compelling to the client, who often thinks “maybe, but what about X or Y?” And that is why so many strategies are never converted to action. They just aren’t sufficiently compelling to the executives who would have to act on them.
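Back-solving from those staffing figures gives the implied day rates; these are illustrative numbers derived from my estimates, not published MBB pricing:

```python
# Implied day rates behind "$60K buys ~15 junior days or ~5 EM days"
# (back-solved from the estimates above, not published pricing).
budget_per_question = 60_000
junior_days = 15   # first-year, freshly minted MBA
em_days = 5        # engagement manager working without junior support

print(budget_per_question / junior_days)  # 4000.0  -> ~$4K per day
print(budget_per_question / em_days)      # 12000.0 -> ~$12K per day
```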
A second popular technique has emerged and has in fact become the dominant way to deal with the inherent shallowness dilemma: the consultants reuse the work they have done for direct competitors to provide strategy advice that requires as little additional work on the client’s assignment as possible. The most useful such work is for competitors that are most similar and face the most comparable contexts. This enables the consultants to appear deeper. But of course, the problem is that strategy is about being distinctive, and this consulting approach drives exactly the opposite — convergence to sameness.
Given the emergence of this approach in the modern strategy consulting industry (an approach I believe is fundamentally unethical), I question the sanity of firms in the top quartile of their industry hiring strategy consultants who will use the insights learned while working for them to teach the rest of the industry how to catch up!
The Way Around
Thankfully, there is a way around the shallowness dilemma that doesn’t involve the mediocrity of converging to competitor strategies.
The solution is to be actually scientific — not faux scientific. The scientific method doesn’t start with analysis. It is not scientific to say: if I conduct this set of analyses, an answer will pop out. The scientific method starts with hypothesis — not analysis — for a reason. It is the only way for the scientist in question to avoid analyzing anything and everything — i.e. to avoid being exquisitely shallow.
Starting with a hypothesis is the only way science works. Otherwise, it is data mining — the search for random correlations, which is the antithesis of science.
In strategy, a possibility is the key form of hypothesis. The process is to consider the key ways in which the current strategy is producing negative gaps between the company’s aspirations and the outcomes it is achieving — and then to generate hypotheses — i.e., possibilities — for overcoming them. With possibilities in hand, an executive team can be narrowly scientific, focusing its resources. It can ask the most important question in strategy: what would have to be true (WWHTBT) for the possibility to be sound — i.e. what is the logic of the hypothesis? And then: which of the WWHTBT conditions appear least likely to be true — i.e. which are the barriers to choice for each possibility?
Those barriers to choice can be analyzed with precision and intensity — a mile deep and an inch wide, as a scientist would do — because everything under the sun no longer requires analysis.
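As a toy illustration only, here is a sketch of how hypothesis-led focusing narrows the analytical agenda; the possibility, the WWHTBT conditions, and the likelihood scores are all invented for the example:

```python
# Toy sketch of hypothesis-led strategy work (all example data invented).
# For each possibility, list what would have to be true (WWHTBT), then
# spend analysis only on the conditions least likely to hold: the barriers.

possibility = "Reposition business Y as the premium offering in its segment"

# Each WWHTBT condition, with the team's gut-feel odds that it is true.
wwhtbt = {
    "Segment customers will pay a 15% price premium": 0.4,
    "Our cost structure can support premium service levels": 0.7,
    "Competitors cannot match the offer within two years": 0.5,
    "Our brand can credibly carry a premium position": 0.9,
}

# The barriers to choice are the least plausible conditions; analyze
# those mile-deep and inch-wide, and leave the rest alone.
barriers = sorted(wwhtbt.items(), key=lambda kv: kv[1])[:2]

print(f"Possibility: {possibility}")
for condition, odds in barriers:
    print(f"  analyze: {condition} (believed {odds:.0%} likely)")
```

The point of the sketch is simply that the analysis budget gets spent on two conditions rather than spread across twenty-five questions.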
The Conventional Pushback
The core pushback that I get to this approach is: what if management can’t come up with good strategy possibilities? My response is that this approach should be compared to an alternative method, not to utopian perfection. If management can’t come up with good possibilities, how will it come up with good strategy out of the broad and shallow analysis featured in the conventional method? The answer rests on a belief that a eureka solution will pop out. Of course, that is not impossible. But I have watched the conventional approach in action over and over, and I have rarely seen the back-end eureka solution emerge.
The situation for management is not unlike that of scholars. To be successful, research scholars need to come up with interesting and plausible hypotheses to study. If they can’t, they will never write publishable papers, never get tenure, and eventually get flushed out of the academic system. Ironically (and tragically), despite its central importance, there is no training in academia on how to generate a novel hypothesis. I have tested this proposition by searching for years for PhD courses on how to develop an interesting research hypothesis and have yet to find a single one. Young scholars are told to ‘read the literature’ and (essentially) hope that a hypothesis pops out — it is quite pathetic.
It reminds me of writer Tom Wolfe and his legendary book, The Right Stuff, in which he chronicled the world of test pilots during the early days of the US space program. The only advice given to test pilots was: don’t crash! If they avoided crashing, it was proof that they had ‘the right stuff.’ If they crashed, it really didn’t matter that they didn’t have the right stuff: they were dead.
It is similar for management. There are no business school courses on generating strategy possibilities. Hence, lots of management teams fail on this point.
Practitioner Insights
If you are a manager, be very wary of strategy development approaches that are a mile wide and an inch deep. Great strategy doesn’t pop magically out of analyses. That is why SWOT analyses are a waste of time. Data mining is expensive and unhelpful — but there are lots of folks who would happily sell you as much of it as you are willing to buy.
If you are a strategy advisor/facilitator, work assiduously to be scientific. Be hypothesis-led. Analyze only when you have a precise view of how the potential analysis will address an uncertainty in the logic of a possibility you are exploring. Aim to be a mile deep and an inch wide in your analysis, not the opposite.
You have two big jobs — neither of which involves doing analyses. The client can do that — at a tenth the cost.
The first job is to help the management team generate creative possibilities. That is a hard task, and managers aren’t trained in this capability — a major failure of business education, which focuses on training business technocrats. You can provide extremely valuable help to management — and I have written previously in this series about how to generate creative possibilities.
The second job is to help the management team work through the logic of the possibilities as Aristotle encouraged — to imagine possibilities and choose the one for which the logic is most compelling. I have written repeatedly in this series (e.g. here and here) on the Strategic Choice Structuring Process, which is designed on Aristotelian principles. This involves helping the team reverse-engineer the logic of the possibilities, identify the barriers to choice, and then design inch-wide, mile-deep analytical tests to confirm or disconfirm the barriers to choice.
The goal is to help management teams rise above the test pilot phenomenon that Tom Wolfe chronicled. Don’t just tell them not to crash; train them to fly.
