Generative AI and the Gell-Mann Amnesia effect

Author Michael Crichton coined the term “Murray Gell-Mann Amnesia effect” in a 2002 speech, “Why Speculate?”. Paraphrasing:

‘The effect is as follows. You open the newspaper to an article on some subject you know well. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. Then you turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate than the baloney you just read. You turn the page, and forget what you know.’

https://web.archive.org/web/20200221061108/http://larvatus.com/michael-crichton-why-speculate/

This is a good basic lesson on media literacy, but it also applies surprisingly well to interpreting the results of generative AI. These models are trained and rewarded for plausibility. For the moment, at least, they can’t introspect enough to know what they don’t know.

Spend a few minutes asking a chatbot about a subject you’re deeply familiar with and you’ll immediately notice all kinds of mistakes, omissions, or factual hand-waving. But ask it about something you’re clueless about, and, chances are, you’ll be more inclined to take things at face value.

This is even more pronounced in code-generating models, which is why I don’t recommend them for beginner programmers. It’s far too easy for them to introduce subtle bugs that slip past a casual reading. If you are using code-generating tools for production code, you should have a combination of AI and human review processes in place.
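To make that concrete, here is a hypothetical snippet of the kind a code assistant might plausibly produce. It reads fine, and a single quick test would pass, but it hides a classic Python pitfall:

```python
# Hypothetical assistant-generated helper: looks correct at a glance,
# but the mutable default argument is shared across calls.
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

print(add_tag("a"))  # ['a'] — looks fine
print(add_tag("b"))  # ['a', 'b'] — state leaked from the first call

# The conventional fix: use None as a sentinel and build a fresh list per call.
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

A beginner has no reason to suspect the first version; an experienced reviewer spots it immediately. That gap is exactly the Gell-Mann problem applied to code.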

What can we do about this? That’s a whole separate topic, but a good start is understanding the training materials behind the model. The more you know and trust the source material, the better off you’ll be. And the more of your own documents are in the mix, the more accurate the outputs will be. This suggests that fine-tuning and similar approaches may be worth spending some time on. And libraries like LlamaIndex that make it easier to query your own documents are still underrated.
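As an illustration, here is a minimal sketch of that pattern, based on LlamaIndex’s documented quickstart. It assumes a local `data/` folder of your own documents and an OpenAI API key in the environment; exact import paths vary by version (older releases used `from llama_index import ...`):

```python
# Minimal LlamaIndex sketch: index your own documents, then query them.
# Assumes: `pip install llama-index`, a local `data/` folder of documents,
# and OPENAI_API_KEY set in the environment (the default LLM backend).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # read local files
index = VectorStoreIndex.from_documents(documents)     # embed and index them
query_engine = index.as_query_engine()

# Answers are now grounded in your documents, not just the model's training mix.
response = query_engine.query("What does our style guide say about error handling?")
print(response)
```

The point isn’t the specific library: it’s that grounding the model in sources you already trust shrinks the space in which plausible-but-wrong answers can hide.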

At some level, it’s not that different from interacting with other people. You can’t trust everything you hear, even if it’s stated confidently and seems to come from an expert. One of the key skills needed for the AI revolution is learning to discern the true from the plausible.

Learn more about problem-solving with AI.

Originally posted on LinkedIn. 100% free-range human written.
