I ordered a Framework Desktop to start my research into using LLMs for various tasks.

Why a Framework? I want to run the LLM locally. The biggest reason is that I'll learn more than if I let someone else do it. After looking around and talking to some people I trust, the Framework Desktop was a no-brainer compared to the other options.

The 128GB of RAM will let me run some fairly large models locally. It’s not the fastest thing ever, but for what I want to do, it’ll work.

There are a bunch of videos and how-tos about the Framework Desktop already, so I won’t go into details on the machine itself.

I ended up with the Fedora KDE spin on it, because someone I know already has Ollama running correctly on a Framework with it. I tried Debian and Fedora Server first, but those didn’t work. I think it’s because of missing firmware packages, but I can’t be bothered to figure it out yet. Someday I will.

Running Ollama

I found this Docker Compose file on Reddit: https://www.reddit.com/r/ollama/comments/1gec1nx/docker_compose_for_amd_users/

I’m also putting it in the repo for this blog, just for safekeeping: https://github.com/joshbressers/ai-skeptic/blob/main/docker/docker-compose.yaml

The important part is the OLLAMA_VULKAN: "1" line. That’s what makes Ollama use the GPU on the Framework.
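For reference, here’s a minimal sketch of what a compose file like that looks like. This isn’t the exact file from the repo above; the image tag, port, volume, and device mapping here are assumptions on my part, so check the repo link for the real thing.

```yaml
services:
  ollama:
    image: ollama/ollama:latest   # image tag is an assumption; see the repo for the real one
    container_name: ollama
    restart: unless-stopped
    ports:
      - "11434:11434"             # Ollama's default API port
    volumes:
      - ollama:/root/.ollama      # persist downloaded models between restarts
    devices:
      - /dev/dri:/dev/dri         # expose the AMD GPU's render node to the container
    environment:
      OLLAMA_VULKAN: "1"          # the important bit: use the Vulkan backend (the GPU)

volumes:
  ollama:
```

Bring it up with docker compose up -d, and the Ollama API should be listening on port 11434.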

I’ll talk about some simple first tests in the next blog post.