Dmitri Mendeleev was a smart guy. In 1869 he wrote out a card for each element and started sorting them by increasing atomic weight. He noticed repeating patterns in the elements' chemical properties, such as similar valences recurring at regular intervals. Thus was forged the periodic table of the elements.
Question: Did Mendeleev “understand” the underlying structure in a way that others at that time didn’t?
These kinds of discussions inevitably devolve, because they end up being arguments over the exact definition of a word. (See, for example, countless memes à la “Is a taco a sandwich?”) And when it comes to AI, there is clearly NO agreement on what it means to understand something.
All Mendeleev did was write down existing information on some cards and move them around. Was that enough? Did he “understand”?
Well, consider this: When Mendeleev laid out the cards, there were gaps, and from those he was able to predict the existence of not-yet-discovered elements. In time these elements were discovered. This feels a lot more like understanding, even though all he did was regurgitate existing information into a different format.
Now, where have we heard that argument before? Ah, yes, the Stochastic Parrot argument against generative AI. Since these systems are trained on existing data, the argument goes, they can only ever be “merely a complex collage tool.”
In actuality, deep inside generative AI systems, particularly neural nets, the optimization steps of the training process arrange data into complex internal representations we call latent spaces. These arrangements, like Mendeleev’s cards, can reveal deeper layers of structure that weren’t previously apparent. And gaps in those structures can suggest new ideas, or ways of thinking, that we haven’t considered before.
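The gap-spotting idea can be made concrete with a minimal sketch. Assume, purely for illustration, a one-dimensional “latent” axis (here, approximate atomic weight) and an arbitrary spacing threshold; real latent spaces are high-dimensional, but the intuition is the same: unusually large spacing between neighbors hints at something missing.

```python
# Toy sketch of gap detection in an ordered representation, in the
# spirit of Mendeleev predicting gallium and germanium. The atomic
# weights are approximate; the threshold of 5.0 is an assumption
# chosen for illustration, not a principled value.
weights = {
    "Zn": 65.4,   # zinc
    "As": 74.9,   # arsenic -- in 1869, Ga (~69.7) and Ge (~72.6)
    "Se": 79.0,   # selenium   were still undiscovered in between
}

def find_gaps(values, threshold=5.0):
    """Return pairs of adjacent values whose spacing exceeds threshold."""
    ordered = sorted(values)
    return [(a, b) for a, b in zip(ordered, ordered[1:]) if b - a > threshold]

gaps = find_gaps(weights.values())
print(gaps)  # the Zn-to-As span is wide enough to flag a "missing element"
```

The interesting part is not the arithmetic but the framing: once information is arranged along the right axis, absence itself becomes a prediction.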
Since we’re still talking about Mendeleev 150 years later, I’m inclined to give him some credit, and in so doing reject the binary description of understanding: the idea that a system either has it or doesn’t. Understanding can be thought of as a continuum, one that generative AI is on.
Maybe we’ll get to the point where we can say the same about consciousness and sentience. Maybe that time is closer than we think…
This post 100% human-written.