Playing To Win

Strategy & Artificial Intelligence

A Story of Heuristics, Means & Tails

Roger Martin


Source: Roger L. Martin, 2024

Companies are spending wildly on Artificial Intelligence (AI), understandably so, because AI has already had a major impact on business and, arguably, is only getting started. In response, I have decided to write a two-piece series on AI. The first Playing to Win/Practitioner Insights (PTW/PI) piece will discuss a way to conceptualize the role of AI in business and strategy, and the second will suggest how to think about company investments in AI. This one is Strategy & Artificial Intelligence: A Story of Heuristics, Means, and Tails.

AI & the Progress of Knowledge

I will focus on the AI manifestation that has gotten businesspeople simultaneously giddy and worried, which is large language model-based generative AI (Let’s call it LLM/AI) — as epitomized by, though certainly not limited to, ChatGPT. There are, of course, many types of AI, some older, some newer, some yet to come. But I will let others discuss AI classification systems. And bear with me, I need to go on a longer-than-usual detour to get to the way to think about LLM/AI.

Though I wrote The Design of Business a decade and a half ago, long before AI became a big deal, I feel it provides a helpful way to think about what LLM/AI accomplishes. The book has aged well and become the 2nd-most-cited publication in its field.

The book posits that all knowledge in our world goes through three stages. Every domain starts as a mystery, at which point we don’t even know how to think about the subject. For example, at one point in history, we had no idea why objects fall to the ground — fate, the love of all objects for mother earth, animal spirits, etc. Some mysteries get advanced to heuristics — a way of thinking about the subject that helps get toward a valuable answer. In due course, a brilliant person posited that there is a universal force (gravity) that attracts all objects towards the ground, some more successfully (rocks) than others (birds). Some heuristics get advanced to algorithms — a formula that consistently produces the desired result. Objects accelerate toward the ground at 9.8 m/s². I went into greater depth on this in this earlier PTW/PI piece if more detail is helpful, and here and here for still more nuance.
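To make the algorithm stage of that progression concrete, here is a minimal sketch (my own illustration, not part of the original piece) of what it means for knowledge to become an algorithm: a formula like free fall under gravity can be applied mechanically, by a machine, with no judgment required. The sketch ignores air resistance.

```python
# Toy illustration of the "algorithm" stage of the Knowledge Funnel:
# free fall under gravity (ignoring air resistance) is fully codified,
# so a machine can apply it mechanically: d = 0.5 * g * t^2.

G = 9.8  # acceleration due to gravity, in m/s^2


def distance_fallen(seconds: float) -> float:
    """Distance (in meters) an object falls from rest in `seconds`."""
    return 0.5 * G * seconds ** 2


print(distance_fallen(1.0))  # 4.9
print(distance_fallen(2.0))  # 19.6
```

The heuristic stage, by contrast, is something like "heavy things fall and light things drift," which is useful guidance but cannot be executed mechanically the way a formula can.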

To give a preview of my conclusion, I believe that the best way to conceptualize LLM/AI is that it is a device for advancing knowledge domains through the Knowledge Funnel more quickly than would happen without it — which is a good thing.

The world is full of mysteries, which is what makes life interesting. In addition to being interesting, tackling a mystery and developing a great heuristic for dealing with the knowledge in question can be extremely remunerative. For example, how to write a hit song is a mystery. But artists like Bruce Springsteen, Drake, Rihanna, and Taylor Swift have developed heuristics for generating hit song after hit song. While producing a hit song is a mystery for most, it is not for them. And as a result, they are rich and famous.

And it is not just artists. Goldman Sachs bankers have a heuristic for doing M&A transactions. Wachtell Lipton law partners have a heuristic for providing legal support on deals for their clients. It is the same for highly successful pharmaceutical scientists or super-salespeople.

Importantly, these heuristic advances are not accessible to the world in general; otherwise Springsteen et al. wouldn’t be famous, and Goldman bankers and Wachtell partners wouldn’t be rich. It is yet another manifestation of the great William Gibson observation: “The future is already here — it’s just not very evenly distributed.” (It appears Gibson never wrote it but said it in an interview.)

Why is the Future Unevenly Distributed?

Finance scholar Michael Jensen provided an explanation of why many heuristics are not ubiquitously understood in “Specific and General Knowledge, and Organizational Structure.” Jensen argues that some forms of knowledge are specific knowledge: residing in the head of a given individual and both costly and difficult to transfer to others. An example would be the knowledge, in the head of a longstanding salesperson, of how to optimally handle a given important client. It would be time-consuming and painful for the salesperson to transfer the many intricacies of that knowledge, built up over years of serving the client, to (say) the company’s CRM system. In contrast, general knowledge, such as how much the customer bought last quarter, is easy to collect and distribute to the entire sales staff.

Such salespeople may not even know their own heuristic, but rather just operate it intuitively. Many people run heuristics of which they aren’t even aware. That was clear from my interviews of highly successful leaders for my 2007 book, The Opposable Mind. At least half of them ran a brilliant heuristic they used to power their success, but they couldn’t articulate or explain it.

Codifying one’s own heuristic is an enormous challenge, and I know the enormity from personal experience. It took me a decade to create a heuristic for strategy — what became the Playing to Win framework — another decade and a half to refine it, and then the co-writing, with AG Lafley, of a 67-thousand-word book (Playing to Win) on it. But I came to realize that the framework was still mainly a specific-knowledge heuristic in my head (and AG’s), so I set out to convert it more fully to general knowledge and have, thus far, put out 285 thousand words of heuristics in the PTW/PI series — i.e., 4.25 additional books’ worth.

Beyond the enormity of the challenge, there is the problem of what economists call ‘moral hazard.’ People in possession of a heuristic have strong disincentives to shift it from specific knowledge exclusively in their heads to general knowledge, even though doing so would be good for humanity. If they cause it to become general knowledge, the supply of people in possession of the heuristic would grow and the value to its creator would fall. If the salesperson above gave all the insight on how to serve that client, the company could successfully transfer the client to another salesperson.

If a heuristic becomes evenly distributed general knowledge, it is more readily advanced to an algorithm, so that even a machine can do it. Machines can now read X-rays as well as a radiologist for many applications (an early application of AI), meaning that radiology skills in those areas are of diminished value. That is because a group of radiologists made reading X-rays enough of a general-knowledge heuristic to advance it to an algorithm, which made it programmable into software.

What Then Does LLM/AI Do?

LLM/AI infers the nature of a heuristic from an enormously large database of words concerning a mystery (and the same can be done with numbers, images, or sounds). The ‘M’ in LLM — the model — is an algorithm for inferring a heuristic that is currently hidden inside a mystery. From the giant textual database, the model figures out the statistically most compelling combination of words — one word at a time. Ask an LLM to compose an acceptance speech and it will infer the heuristic for successful acceptance speeches from all the acceptance (and similar) speeches in the textual database.

LLM/AI fully overcomes the two impediments to generalizing heuristics. Machines aren’t daunted by enormous tasks: they will work forever if called upon. And they don’t have moral hazard problems.

Consequently, LLM/AI has the power of pushing knowledge more quickly through the knowledge funnel by de-partitioning heuristics, making them more ubiquitous faster, which I believe will hasten their further advancement to algorithms — which is good for the world.

The Dark Side

Recall that I wrote a PTW/PI piece, Strategy & Sunshine, which argued that the stronger the light shines on your face, the darker the shadow will be behind you. And so it is with LLM/AI.

The dark side stems from the process for inferring the heuristic. It is fundamentally based on frequency. The LLM/AI does not ask: what is unique or outstanding? It asks: what is most? LLM/AI democratizes heuristics — but it makes them average. LLM/AI will find the mean/median/modal representation of the heuristic in the domain it is asked to search. It won’t find the right tail of the distribution because it isn’t set up to do so — it won’t even try.
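The mean-seeking tendency can be shown with a minimal toy sketch (my own illustration; the corpus and function name are invented, and a real LLM is vastly more sophisticated): a purely frequency-based chooser always returns the modal continuation and never the rare, right-tail one.

```python
from collections import Counter

# Toy sketch of frequency-based inference. Imagine a corpus of
# continuations for the phrase "the speech was ...". The modal
# answer dominates; the brilliant, rare answer appears once.
corpus_continuations = [
    "good", "good", "good", "good", "good",  # the modal answer
    "fine", "fine", "fine",
    "transcendent",                          # the rare, right-tail answer
]


def most_frequent_next_word(continuations):
    """Return the single most common continuation -- the mode."""
    counts = Counter(continuations)
    return counts.most_common(1)[0][0]


print(most_frequent_next_word(corpus_continuations))  # "good"
```

A real LLM samples from a learned probability distribution rather than always taking the single most common continuation, but a tail continuation that appears once in a giant corpus remains correspondingly unlikely ever to be produced — which is exactly the point.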

It makes me think of fellow Canadian and rock legend Neil Young, who rails against MP3 and Spotify. While he appreciates that these technologies helped spread music by making it easier and cheaper to consume, he despises them because they reduce the quality and fidelity of recorded music.

Practitioner Insights

LLM/AI is here to stay, just like MP3 and Spotify. It is useful to the world because it will infer lots of hidden heuristics and, in doing so, de-partition and democratize knowledge.

There are lessons both for managers/professionals in an LLM/AI world and for users of LLM/AI.

As a manager/professional, you need to work on creating a uniquely valuable heuristic. If you simply run an algorithm, you are probably out of a job already. If you haven’t done the thinking work to formalize the heuristics that you employ in your own work, good luck: it is only a matter of time until you are replaced. And if you have aimed for a heuristic of average quality, you are directly in LLM/AI’s kill zone.

To have a great career in the modern economy, the only path is to have an above-average heuristic for creating value in the specific domain of your job. If your heuristic is out on the right tail of the distribution, LLM/AI won’t find it, because that is not what it is looking for. Creating an above-average heuristic is a high bar — but it will increasingly be the reality of the LLM/AI economy.

As a user of LLM/AI, you need to be clear on which of two things you seek. If you want to discover the median, mean, or mode, then lean aggressively into LLM/AI: use it to determine average consumer sentiment, create a median speech, or write a modal paper. If a hidden median, mean, or mode is what you are after, LLM/AI is your ticket, and that will be the case plenty of the time. Being an LLM/AI denier is not an option.

However, if your aspiration is a solution in the tail of the distribution, stay away. This is the enduring problem for autonomous vehicles. Tail events (like the bicyclist who swerves in front of you to avoid being sideswiped by another car) are critical to safety — and still aren’t consistently handled by automotive AI.

When it comes to strategy, I am on record as seeing strategy as a lost art, and most of what is written about ‘strategy’ is neither strategy nor helpfully insightful. In strategy, I am aiming for the tail of the distribution, and for that reason, I would never use an LLM/AI that contains everything written on strategy because the output would be mediocre. Though in fairness, I am not the target market for strategy LLM/AI.

However, I am doing something interesting on this front with a tech client that is a big player in AI tools. As an AI experiment, it put about 1 million words I have written on strategy into a ‘Roger bot.’ Thus far I like it. It sounds remarkably like me!

Maybe eventually LLM/AI will be able to get you the right tail of the distribution. But I am not holding my breath on that front. That having been said, LLM/AI will force you to up your game on your own job and will be very useful to you for many tasks you face. But before you use it, make sure to ask: am I after the mean or the tail? If you do ask, I think Neil Young would be happy!



Roger Martin

Professor Roger Martin is a writer and strategy advisor, and in 2017 was named the #1 management thinker in the world. He is also the former Dean of the Rotman School.