Large Language Models are a huge topic of conversation, even at the conventionally markup-focused Balisage conference.
If you look at the 2022 program, you won’t find a single mention of AI, “artificial intelligence”, LLMs, or language models among the talk titles or abstracts.
A year later in 2023, it’s the talk of the town. Here are some highlights.
Uche Ogbuji, after an overview of the technology, spoke about automating common, uncommon, and sometimes surprising markup tasks, particularly with a locally hosted language model. For example: “correct the following XML …”. Foundation models with no additional training can already “understand” XML well enough to handle simple tasks (though “understand” is a loaded term). This led into a deeper discussion of what it means to understand anything in the first place, and some cautions about subtle errors getting introduced. Powerful tools, though. Use with care.
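To make that concrete, here’s a minimal sketch of prompt-driven XML repair with a locally hosted instruct model via the Hugging Face transformers library. The model name, prompt wording, and sample markup are all illustrative, not taken from Uche’s talk:

```python
# Minimal sketch: asking a locally hosted instruct model to repair broken XML.
# Model name and prompt are illustrative; the actual setup from the talk may differ.
from transformers import pipeline

# Loads weights locally; any local instruct model could stand in here.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

broken = "<order><item>widget<item></order>"  # second <item> should be </item>
prompt = "Correct the following XML and return only the fixed markup:\n" + broken

# Greedy decoding keeps the output deterministic for a repair task like this.
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```

As the talk cautioned, output like this still needs checking, for example by running it through an XML parser before trusting it.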
John Chesolm later spoke about combining AI with XForms in a medical records context, though this was focused more on conventional AI than on language models.
Long-time conference sponsor Docugami has recently sharpened their messaging to point out that they’ve been building a powerful foundation model for business documents for years now. When you load your business documents, you’re fine-tuning an already-powerful model. Since it’s your own data, many of the ‘confabulation’ issues of LLMs go away. This year, their demo and new developer playground landed differently. These guys are doing amazing work, which I’ll say more about later.
Then there was my late-breaking talk. It wasn’t planned this way, but it ended up building on a key idea from Uche’s presentation: language models are tools, and we’re only beginning to figure out some of the things they’re good at, as well as some of the dangers. Power tools, after all, are capable of inflicting a lot more damage than hand tools. Much of what makes these tools so powerful comes from emergent behaviors, for example the ability to analyze data to the point where one can build an application on top of it. I’ll say more in a separate post.
An open mic session continued the discussion, including ways to annotate machine-generated (or in some cases machine-assisted) content. Paul Prescod spoke briefly about why LLMs are good for markup, and vice versa, pointing at Microsoft’s guidance. In his full talk, he showed off a system for evaluating machine-generated markup.
There were some ad-hoc social spaces, with topics set by attendees. The one on AI/LLM relevance to markup drew the most messages, by more than a factor of two. It even beat the one with pet pictures.
There was a lot to chew on, especially for something that was basically not on the radar a year ago. It will be interesting to see how far the discussion will have shifted by next year.
Takeaways:
- LLMs are powerful tools. We need, at a minimum, to understand this new kind of tool, including its notable risks and dangers.
- Emergent behaviors are largely what’s driving value in the ecosystem, at least from the perspective of the talks given at this conference.
- The pace of change in technology has accelerated to the point where even experts are having trouble keeping up. This will require new ways of tackling information overload.
Additional resources: Uche compiled his own observations in his write-up, and in particular did a better job of capturing links. The remarkable Mary Holstege and others kept some general conference notes on Mastodon.
This posting is 100% human-written.