Even Google can’t handle how fast the AI landscape is shifting. Originally posted on LinkedIn in May 2023.
Especially if you haven’t been closely following the whirlwind of AI developments, this one is worth a peek.
We Have No Moat (And neither does OpenAI)
In this leaked Google document, the author describes several ways in which open-source AI models and techniques are catching up with, and in some cases surpassing, commercial models that cost millions of dollars to train.
Smaller models can iterate more quickly, and lots of small improvements compound over time in a way the big players can’t keep up with. In many cases they haven’t even had these techniques on their radar.
For example LoRA, which preserves the weights of a large model as-is but introduces small low-rank ‘in-between’ matrices that are trained instead, reducing the cost of fine-tuning by 1,000x, or in some cases 10,000x.
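To make the idea concrete, here’s a minimal sketch of the LoRA trick in PyTorch. This is illustrative only, not the real `peft` library’s API; the class name, `rank`, and `alpha` are my own choices, and real implementations add more machinery.

```python
# A minimal sketch of the LoRA idea (illustrative, not a real library's API):
# the pretrained weight W is frozen, and only a low-rank update B @ A is trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # The small 'in-between' matrices: A projects down to `rank`,
        # B projects back up. B starts at zero, so training begins
        # exactly at the pretrained model's behavior.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Only A and B are trainable: 2 * rank * d parameters instead of d * d,
# which is where the huge reduction in fine-tuning cost comes from.
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,}")  # ~65K of ~16.8M for this layer
```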
Small models can also be trained on surgically targeted data sets, which (within a narrow domain, obviously) lets them outperform models trained on stupid-large amounts of general data.
The incredible abilities unlocked by simply scaling models bigger and bigger are what kicked off the current wave of progress in AI. But scale is not the only avenue by which progress can happen. Watch this space.
100% free-range human written.