Articles

  • Why You Need a Thinking Partner (like this Genius)

    You may not have heard of him, but James Clerk Maxwell wrote down what we now think of as four equations that define ALL of electromagnetics. You can get a T-Shirt with them emblazoned on it. If you want to know how radio waves propagate, or how a radio functions, or how anything running on electricity…

    Read Article…



  • Google Gemini is really bad at compsci math

    Be careful out there, folks. Me> show an example of a GUID in ascii85 encoding Gemini> Unfortunately, a standard GUID cannot be directly encoded using Ascii85 due to limitations in how both formats work. Here’s why: Ascii85: This encoding scheme takes 4 bytes of input and produces 5 bytes of output. A standard GUID is…

    Read Article…
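
    The arithmetic behind this excerpt is easy to check. Ascii85 works on 4-byte groups, and a GUID is 16 bytes, which is exactly four such groups, so it encodes to 20 ASCII characters with nothing left over. A minimal sketch in Python using only the standard library's uuid and base64 modules (the sketch is mine, not code from the article):

        # A GUID/UUID is 16 bytes: exactly four 4-byte groups,
        # so Ascii85 encodes it to 20 characters with no padding tricks.
        import base64
        import uuid

        guid = uuid.uuid4()
        raw = guid.bytes                 # 16 bytes
        encoded = base64.a85encode(raw)  # 20 ASCII characters

        print(guid)
        print(encoded.decode("ascii"), len(encoded))  # ..., 20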



  • Master Systems Thinking: Levels and Inversion

    A meditation on a simple electronic circuit: There’s a direct current (DC) source, a switch, and a light bulb. Not that different from a lot of grade school experiments. Close the switch, and the light turns on, right? Even if you’ve never studied electronics, you probably have a mental model of how this circuit would…

    Read Article…



  • Why ‘Preview’ may become the most important MacOS AI app

    Preview is one of those apps that people use all the time without really thinking about it. “Hey, what’s that file?” Click. “Ah, OK.” MacOS already handles this better than Linux or Windows. So the ability to summarize a document would be a huge force-multiplier for Preview. Some of the biggest uses of Generative AI…

    Read Article…



  • What follows Llamas?

    Everybody is talking about LLMs, aka Large Language Models, sometimes via the cutesy word Llama, which means the same thing except smoothed over and rounded off by the linguistic optimization of getting used in actual speech. I don’t know this for sure, but I suspect that the falsely-modest adjective “Large” came about because a more…

    Read Article…



  • How to Solve Any Problem the McKinsey Way (AI edition)

    ‘Anyone can use the problem-solving and management techniques described in this book; you don’t have to be in (or even from) the Firm.’ So says the introduction to The McKinsey Mind by Ethan M. Rasiel and Paul N. Friga. I powered through this book with an eye toward any techniques that might have new life…

    Read Article…



  • Start Generating: Is AI right for brainstorming?

    When I needed a super simple “hello world” introduction to AI apps, I went for a simple brainstorming app. Start with a word or phrase describing a topic, and the app comes up with a number of title suggestions for a blog post. (And they’re pretty decent.) Is this a good introduction? For purposes of…

    Read Article…
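
    The app described here boils down to one prompt and one call. A minimal sketch of that shape, not the code from the article: it assumes an OpenAI-compatible chat endpoint (local or hosted), and the base_url and model name below are placeholders.

        # Illustrative title-brainstorming helper. Assumes an OpenAI-compatible
        # chat API; the base_url and model name are placeholders for whatever
        # you actually run.
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

        def suggest_titles(topic: str, count: int = 5) -> list[str]:
            response = client.chat.completions.create(
                model="llama3",
                messages=[
                    {"role": "system",
                     "content": "You brainstorm blog post titles. Return one title per line."},
                    {"role": "user",
                     "content": f"Suggest {count} blog post titles about: {topic}"},
                ],
            )
            text = response.choices[0].message.content
            return [line.strip("-• ").strip() for line in text.splitlines() if line.strip()]

        print(suggest_titles("systems thinking"))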



  • Seeking feedback on official course for building AI apps

    Work is proceeding on a LinkedIn Learning course on building AI apps. Even before all of the OpenAI drama, I had decided to focus on the ability to work locally, without relying on cloud or external APIs for embeddings or the LLM itself. Some of the things I’m showing off end up being rather obscure–in…

    Read Article…



  • Teaching an AI to use a knowledge graph for semantic compression

    The following conversation has implications for MemGPT and similar use cases, as well as LLM/Knowledge Graph integration. Instead of requiring URIs for everything (which would require constant lookups against a lexicon of all-possible-things), this uses locally non-ambiguous (aka conversational) identifiers. Something like this would be great for tracking continuity in fiction, for instance. Maybe a…

    Read Article…



  • New LinkedIn Learning course on building AI applications in development

    I’ve signed a contract with LinkedIn Learning to record a course on building AI apps. Exact title TBD. OpenAI’s announcements this week certainly make things interesting. This whole space is changing so quickly that figuring out exactly what to cover is its own serious challenge. What are your most burning questions about building AI apps…

    Read Article…



  • The AI of the Gaps Argument: What Most People Miss

    Modern AI technologies have, from their origin in the 1950s, been characterized by what they can't do. This doesn't look like a coincidence, and it has long-term implications for important work toward regulating, legislating, and learning to live with AI technologies. It wasn't so long ago that serious-minded folks could be found claiming that…

    Read Article…


