I’m someone who benefits a lot from talking out a problem, often in front of a whiteboard. This is pretty common. But why does it work?
I’ve previously discussed latent spaces — a kind of internal map connecting different pieces of knowledge in different ways — as part of “understanding,” and this is another facet of it.
Suppose I introduced a problem to you in this way: You know what the Monty Hall Problem is, right? This is a related problem to think about…
That first sentence (assuming you know something about the MHP) will radically alter how you think about the problem that follows. That leading sentence activates latent knowledge, including awareness of a seeming paradox, which primes your mind to tackle a related issue. Our minds are full of all kinds of interconnected knowledge, and it only takes a brief prompt to unleash a chain of associations. A small pebble can start an avalanche of cognition.
Talking out a problem is a form of self-priming. Instead of diving directly into some problem, the act of restating it, or even better, sketching it out, gives your brain a chance to wander around an abstract latent space, pulling out relevant bits of knowledge and putting them in context. It lets you kick up a bunch of pebbles, so to speak.
It turns out large language models (LLMs) work much the same way. The right kind of prompting, encouraging associations, leads the model down a different path. This is what’s behind simple prompting tricks like chain-of-thought, or adding “Think step by step” or even “take a deep breath” to a prompt. Better still is adding specific guidance.
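To make that concrete, here is a minimal sketch of the two tricks just mentioned: prepending a chain-of-thought cue, and priming with specific guidance. The helper names (`with_cot`, `with_guidance`) are my own for illustration; the actual LLM call is left out, since any provider’s API would work.

```python
# Sketch: the same question, prompted two ways. These helpers only
# build prompt strings; plug the result into whatever LLM API you use.

def with_cot(question: str) -> str:
    """Prepend a cue that encourages step-by-step associations."""
    return f"Think step by step.\n\n{question}"

def with_guidance(question: str, hints: list[str]) -> str:
    """Add specific guidance, priming the model's latent space."""
    hint_text = "\n".join(f"- {h}" for h in hints)
    return f"Relevant background to consider:\n{hint_text}\n\n{question}"

question = "A game show host offers you three doors. What should you do?"
print(with_cot(question))
print(with_guidance(question, ["conditional probability",
                               "the Monty Hall Problem"]))
```

The second prompt does for the model what the leading sentence in the Monty Hall example does for you: it kicks up the right pebbles before the question arrives.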
The brilliant David Shapiro has a longer video on this. In an upcoming post, I’ll talk about some specific problem-solving frameworks, though they tend to have a common backbone: generate hypotheses, pick one, try it out, and repeat. Augmenting these frameworks with AI makes them even more powerful. Stay tuned.
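That common backbone — generate hypotheses, pick one, try it out, repeat — can be sketched as a small loop. This is a toy illustration, not any particular framework; `generate`, `pick`, and `try_out` are hypothetical placeholders for whatever strategy (human or AI-assisted) fills those roles.

```python
# A minimal sketch of the backbone: generate hypotheses, pick one,
# try it out, and repeat until something works or we give up.

def solve(problem, generate, pick, try_out, max_rounds=10):
    """Iteratively test hypotheses against a problem."""
    tried = set()
    for _ in range(max_rounds):
        # Generate hypotheses, skipping ones we've already tried.
        candidates = [h for h in generate(problem) if h not in tried]
        if not candidates:
            return None
        hypothesis = pick(candidates)    # pick one (e.g. most promising)
        tried.add(hypothesis)
        if try_out(problem, hypothesis): # try it out
            return hypothesis
    return None                          # give up after max_rounds

# Toy usage: the "problem" is a target number, hypotheses are guesses.
answer = solve(
    problem=42,
    generate=lambda p: range(50),
    pick=lambda cands: cands[0],
    try_out=lambda p, h: h * 6 == p,
)
```

An LLM can slot into any of the three roles: brainstorming candidates, ranking them, or evaluating the outcome of a trial.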
For a regular digest of problem-solving insights like this, sign up for the Problem Solvers Digest.
This post is 100% human-written.