Running DBRX on your local network

Databricks released their DBRX model, and it appears to be quite capable, beating Llama 2 70B, Mixtral, and Grok-1 on standard benchmarks.

It uses a fine-grained mixture-of-experts architecture (132B total parameters, 36B active per token), so existing tools don’t work with it out of the box, though support is being added rapidly.

I haven’t yet been successful in converting this new format into GGUF, which would open the way for it to run in LM Studio. It does, however, work at a basic level with MLX, which is optimized for Apple silicon. Even the 4-bit quantization requires over 70 GB of unified memory.
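
If you want to try it under MLX, a minimal sketch looks something like the following, assuming the mlx-lm package is installed (`pip install mlx-lm`) and that you point it at a community 4-bit conversion; the `mlx-community/dbrx-instruct-4bit` repo name here is an assumption, so substitute whatever converted weights you actually have:

```python
# Minimal sketch of running DBRX via mlx-lm on Apple silicon.
# Assumes: `pip install mlx-lm`; the Hugging Face repo name below is
# a guess at a community 4-bit conversion — swap in your own path.
from mlx_lm import load, generate

# Downloads weights from Hugging Face on first run; expect the 4-bit
# model to need 70+ GB of unified memory.
model, tokenizer = load("mlx-community/dbrx-instruct-4bit")

response = generate(
    model,
    tokenizer,
    prompt="Explain mixture-of-experts models in two sentences.",
    max_tokens=256,
    verbose=True,  # stream tokens to stdout as they are generated
)
print(response)
```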

If you’re evaluating LLMs for local use, this one should go straight onto your shortlist. And if you’d like a thinking partner to help you evaluate DBRX or other AI technology, I’d love to assist. Reach out.
