The AI of the Gaps Argument: What Most People Miss

Modern AI technologies have, from their origin in the 1950s, been characterized by what they can't do. This doesn't look like a coincidence, and it has long-term implications for the important work of regulating, legislating, and learning to live with AI technologies.

It wasn't so long ago that serious-minded folks could be found claiming that beating a human at chess would be a Big Deal in the advancement of AI. There was quite a bit of hand-wringing in 1996 and 1997, when Deep Blue won first a game and then a full match against Garry Kasparov.

These days, an open source project (partly based on neural nets and AI techniques) is orders of magnitude better than the world champion, who is himself generationally impressive. Chess is rated on the Elo scale, named after Arpad Elo, in which a difference of 400 points means the stronger player is expected to score roughly 90% of the available points. The latest version of the Stockfish chess engine has an estimated Elo rating around 1000 points higher than that of the great Magnus Carlsen, the current world champion. And it runs on a laptop.
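For concreteness, the standard Elo expected-score formula is E_A = 1 / (1 + 10^((R_B - R_A) / 400)). Here is a minimal sketch of what those rating gaps mean in practice; the specific ratings plugged in below are illustrative round numbers, not official figures.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score for player A under the standard Elo formula."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 400-point edge: the stronger player expects roughly 90% of the points.
print(expected_score(2800, 2400))  # ~0.909

# A 1000-point edge (roughly the engine-vs-human gap described above):
# on paper, close to a sure thing.
print(expected_score(3850, 2850))  # ~0.997
```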

In the last few months, large language models (LLMs) have exploded onto the scene in a dizzying display of capabilities that many thought would be perpetually 20 years away. Something of an arms race has ensued, prompting some tech leaders to call for a moratorium on research.

As of this writing, nobody is seriously saying we've solved general AI. But the gap keeps getting narrower. The science fiction magazine Clarkesworld recently had to ban AI-written stories and temporarily close submissions while it worked through the backlog. This is a sensible and reasonable thing to do, but for how long will even professional fiction editors be able to tell the difference? At the frenetic pace of advancement, it could easily be that within three years, or maybe two, or one, the technology will have advanced enough to credibly close the gap. How, then, would such a restriction be enforced?

A similar dynamic comes up with code completion models like GitHub Copilot. Nobody would ever object (except maybe on economic grounds!) to hiring a coder to spend a decade reading through piles of source code for learning purposes, then writing something new to spec. That hasn't stopped the lawsuits from flying, however, with one even arguing that such models are no different from a 'digital collage tool'. Now, there are technical reasons why these models can occasionally emit verbatim training samples, but those are bugs, a.k.a. rapidly closing gaps. Focusing on the temporary distracts from seeing the longer-term trend.

Will we ever reach general AI? Maybe. But even if we don't, the gaps will keep getting smaller. Especially as litigants and legislators think about this space, it's the gaps that stand out and draw attention. But that focus is doomed to premature obsolescence, and it doesn't seem like enough folks are thinking and writing about the longer-term implications. In that light, a research moratorium makes more sense…

Originally posted on LinkedIn. All text 100% free-range human written.
