Confabulating AI Capabilities

In an ironic plot twist, it turns out humans had been “hallucinating” (I prefer the term “confabulating”) a rationale for a supposed detector of AI-written text.

Hallucination refers to the tendency of generative AI to confabulate confident-sounding ‘facts’ that have no grounding in reality. Many view it as one of the defining limitations of how the largest models are trained, at least for now.

But I’d wager all of us have, at least once in our lives, asked a colleague, relative, teacher, politician, or internet correspondent some question and gotten a supremely confident answer that was flat-out wrong.

This relates to the previously mentioned GAP model for thinking about generative AI. The technology has long been defined by things a) it couldn’t do, but b) that average people could. And that gap is steadily narrowing.

So OpenAI recently decommissioned its experimental service that purported to detect AI-generated text, citing “low” accuracy. OpenAI itself was cautious about claiming how the detector was supposed to work, but college professors and motivated developers spun up multiple plausible justifications.
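
To make that concrete, here is a rough sketch of one of those confabulated rationales: “AI text is suspiciously predictable, so check its perplexity under a language model.” The GPT-2 model and the threshold below are illustrative choices of mine, not anything OpenAI disclosed, and the heuristic stumbles on formulaic human writing and lightly edited machine output alike.

```python
# A minimal sketch of one popular (and confabulated) rationale for AI-text
# detection: score how "predictable" the text is to a small language model.
# Nothing here reflects OpenAI's actual classifier; the model choice and
# threshold are purely illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

def looks_ai_generated(text: str, threshold: float = 30.0) -> bool:
    # The rule of thumb: low perplexity means "too predictable, probably
    # machine-written." The cutoff of 30.0 is an arbitrary illustration.
    return perplexity(text) < threshold
```

Plausible-sounding and easy to build, but, as OpenAI’s own “low” accuracy admission suggested, not something to stake a grade on.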

The more humanlike AI gets, the more it will tend to exhibit foibles of human reasoning. (This isn’t a new idea. Douglas Hofstadter in 1979’s Gödel, Escher, Bach mused about an AI having trouble doing accurate arithmetic without using a calculator.)

Which, in turn, suggests that many of the same techniques we already apply to basic media literacy — identifying misinformation, verifying research, assessing veracity, and so on — will become even more important.

I’m putting together a new framework to guide people through the thicket. Get a first look by hearing it straight from the source.

I’m especially looking for folks building AI application teams to join the Problem Solvers Digest.

This post was 100% free-range human written.