How to Solve Any Problem the McKinsey Way (AI edition)

‘Anyone can use the problem-solving and management techniques described in this book; you don’t have to be in (or even from) the Firm.’ So says the introduction to The McKinsey Mind by Ethan M. Rasiel and Paul N. Friga.

I powered through this book with an eye toward any techniques that might have new life or notability in the age of generative AI.

Use structure to strengthen your thinking

Without structure, your ideas won’t stand up. Structured thinking doesn’t come naturally to most people, and few schools teach it explicitly. Even within their core competencies, many organizations apply little structure to their problem-solving processes.

MECE is a term that McKinsey consultants use a lot. It stands for Mutually Exclusive, Collectively Exhaustive: breaking a problem down into issues that don’t overlap and that together cover every possibility. This is an eminently reasonable way to approach a thorny problem, and it has some overlap with various prompt engineering techniques, especially multi-step prompting. One technique consultants use is to build a “logic tree” that lays out all the components of a problem in a hierarchy. For cause-and-effect situations, an Ishikawa (or “fishbone”) diagram can be useful.
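A logic tree is easy to represent in code. Here is a minimal, hypothetical sketch (the class, the example issue, and its decomposition are my own illustration, not from the book): each node names an issue, its children break that issue into sub-issues, and the leaves are the concrete questions to investigate.

```python
# A minimal logic tree: each node names an issue; children break it
# into sub-issues. A MECE decomposition means siblings don't overlap
# and together exhaust the parent issue.

class IssueNode:
    def __init__(self, issue, children=None):
        self.issue = issue
        self.children = children or []

    def leaves(self):
        """Return the leaf issues -- the concrete questions to investigate."""
        if not self.children:
            return [self.issue]
        return [leaf for child in self.children for leaf in child.leaves()]

# Example: decomposing a classic consulting problem, "profits are falling"
tree = IssueNode("Profits are falling", [
    IssueNode("Revenue is declining", [
        IssueNode("Fewer units sold"),
        IssueNode("Lower price per unit"),
    ]),
    IssueNode("Costs are rising", [
        IssueNode("Higher fixed costs"),
        IssueNode("Higher variable costs"),
    ]),
])

print(tree.leaves())
```

The leaves here (units, price, fixed costs, variable costs) are mutually exclusive and, at this level of abstraction, collectively exhaustive, which is exactly the MECE property the tree is meant to enforce.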

Structure is a good way to amplify and regularize inherent problem-solving instincts. It can help you prioritize various options and prevent you from going too far down a dead-end path.

But rather than being a strict framework, this is better thought of as a set of concepts that can be flexibly applied to generate new ideas.

The problem is not the problem

It’s tempting to take a client’s diagnosis of their problem at face value. But a little bit of fresh perspective goes a long way. The key here is to dig deeper, keep asking questions, and ruthlessly collect facts. An additional level of skepticism is warranted early in the process. Trust but verify. This seems to be even more the case in the world of software requirements.

The classic McKinsey solution is to “come up with the answer before the first meeting,” which not only helps clients feel slightly better about paying huge $$$ to a consultant but also focuses the investigation on the most productive alternatives.

In practice, this means an ordered list of hypotheses and a set of “quick and dirty tests” that examine foundational assumptions and try to falsify any duds as quickly as possible.
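That ordered-list-plus-cheap-tests workflow can be sketched in a few lines. Everything below is hypothetical for illustration: the hypotheses, the test logic, and the toy numbers are all made up.

```python
# An ordered list of hypotheses, each paired with a quick-and-dirty
# test that tries to falsify it. Work down the list; keep only the
# hypotheses whose cheap test survives.

hypotheses = [
    ("Churn is driven by price increases",
     lambda d: d["churned_after_price_change"] > d["churned_before"]),
    ("Churn is driven by a competitor launch",
     lambda d: d["churn_in_competitor_regions"] > d["churn_elsewhere"]),
]

data = {  # toy numbers for illustration only
    "churned_after_price_change": 120,
    "churned_before": 80,
    "churn_in_competitor_regions": 40,
    "churn_elsewhere": 45,
}

survivors = [claim for claim, test in hypotheses if test(data)]
print(survivors)
```

The point of the structure is the same as in a real engagement: a hypothesis that fails its cheapest test is discarded before anyone spends a week on it.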

Language models can be strikingly creative when asked to identify assumptions and shoot down hypotheses, so this seems like another fertile area for developing AI-flavored techniques.
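One low-effort way to apply this is a reusable prompt template that forces the model into falsification mode. The wording below is my own hypothetical sketch, not a prompt from the book:

```python
# A hypothetical prompt template for turning a language model loose
# on a hypothesis: surface the hidden assumptions, then attack each one.

hypothesis = "Customers are churning because our prices are too high."

prompt = (
    f"Hypothesis: {hypothesis}\n"
    "1. List every assumption this hypothesis rests on.\n"
    "2. For each assumption, propose the fastest cheap test that could falsify it.\n"
    "3. Rank the assumptions by how likely they are to be wrong."
)
print(prompt)
```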

Find the key drivers

Real-world decisions need to happen with limited and imperfect data. Especially when a problem is interesting or scratches an itch, it’s easy to get wound up in the details and lose sight of the big picture. Out of a nearly infinite set of possible directions to explore, it’s important to quickly identify and chase down the critical ones. Sometimes a few quick wins are worth racking up, especially when the person writing the check is watching. Other times it’s better (especially when a team is awaiting direction) to kickstart something that is directionally and order-of-magnitude correct and iterate from there. Brainstorming is key.

Data and insights

In my experience, lots of people say they are ‘data-oriented’ but reality often says otherwise. A lengthy section in this book covers various aspects of data gathering, especially interviews, which are a major part of McKinsey engagements. Not surprising, given the amount of implicit knowledge most organizations have that exists only in the heads of employees. The Firm also has substantial internal research facilities.

Contrary to what I see a lot of people trying to do, language models are lousy at Knowledge Management. (This is close to the core of the ‘hallucination’ problem.) Pairing language models with adjunct systems like Retrieval Augmented Generation (RAG) databases and Knowledge Graphs can help here. But even the best LLMs can’t read people’s minds. They would be great, however, at preparing interview guides.
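To make the RAG idea concrete, here is a toy sketch of the retrieval half with no external services. The documents, the question, and the word-overlap scoring are all stand-ins of my own invention; real systems use embeddings and a vector store, but the shape is the same: retrieve the most relevant internal knowledge, then prepend it to the prompt.

```python
# Toy retrieval-augmented prompting: score internal documents by word
# overlap with the question, then ground the prompt in the best match.

def score(question, document):
    """Crude relevance score: count of shared lowercase words."""
    return len(set(question.lower().split()) & set(document.lower().split()))

documents = [  # stand-ins for an internal knowledge base
    "Refund requests must be approved by a regional manager.",
    "The Q3 churn spike followed the June pricing change.",
]

question = "What caused the churn spike in Q3?"
best = max(documents, key=lambda doc: score(question, doc))

prompt = (
    f"Context: {best}\n\n"
    f"Question: {question}\n"
    "Answer using only the context above."
)
print(prompt)
```

The “answer using only the context” instruction is the part that works against hallucination: the model is asked to ground its answer in retrieved facts rather than its parametric memory.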

Problem-solving endgame

The whole point of problem-solving isn’t solving problems, per se. It’s making change happen.

The data collected during the consulting process inevitably follows a Pareto distribution, with 80% of the solution stemming from 20% of the data. The trick is identifying the right 20%. 🙂 Personal bias and predilection can easily exert undue influence on getting to the endgame. Every exhibit leading to the conclusion needs a tough “So what?” filter applied to it.
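“Finding the right 20%” has a simple mechanical analogue: weight each exhibit by how much of the story it carries, sort, and take the smallest set that covers 80% of the total. The weights and exhibit names below are hypothetical numbers for illustration.

```python
# Pareto filtering: return the smallest set of items that together
# account for at least `threshold` of the total weight.

def top_contributors(weights, threshold=0.8):
    total = sum(weights.values())
    covered, chosen = 0.0, []
    for name, w in sorted(weights.items(), key=lambda kv: kv[1], reverse=True):
        chosen.append(name)
        covered += w
        if covered / total >= threshold:
            break
    return chosen

insight_weight = {"exhibit_a": 50, "exhibit_b": 25, "exhibit_c": 10,
                  "exhibit_d": 8, "exhibit_e": 7}
print(top_contributors(insight_weight))
```

Of course, the hard part in practice is assigning the weights, which is exactly where the “So what?” filter (and human judgment) comes in.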

AI tools can be helpful for this kind of filtering, and language models excel at summarizing and rephrasing data and tailoring the message for the target audience. Delivering the message, however, still requires a human touch for the foreseeable future.

Things come full circle here. Strong structure is needed in presenting the solution to the client, especially if the recommendations intersect with office politics. The McKinsey term here is “prewiring”: walking key decision-makers through the solution in advance of the big presentation.

So it looks like humans might remain relevant for a little while longer.

If you find this kind of discussion enlightening, you’ll benefit from joining the Problem Solvers Digest, a low-volume, high-quality conversation about AI and problem-solving.

This post is 100% free-range human written.