Ollama is pretty easy to run, so it's where I'm going to start for running a local model. I know there are better options; I might get to those later.
I linked to this docker compose file in the last blog post: https://github.com/joshbressers/ai-skeptic/blob/main/docker/docker-compose.yaml
If you try to use this, look at the paths first. I'm lazy and haven't turned them into something better (maybe I have by the time you read this).
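For reference, here's a minimal sketch of what an Ollama compose file can look like. This is my own hedged example, not the file from the repo above (that's the source of truth): it assumes the official `ollama/ollama` image and Ollama's default port 11434, and the host path `./ollama-data` is a placeholder you'd change, which is exactly the "look at the paths" warning.

```yaml
services:
  ollama:
    image: ollama/ollama        # official image from Docker Hub
    ports:
      - "11434:11434"           # Ollama's default API port
    volumes:
      # ./ollama-data is a placeholder host path; models and config
      # live under /root/.ollama inside the container
      - ./ollama-data:/root/.ollama
    restart: unless-stopped
    # If you have an NVIDIA GPU and the container toolkit installed,
    # you can reserve it like this:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]
```

With something like that in place, `docker compose up -d` starts the server, and `docker compose exec ollama ollama run <model>` (substitute whatever model you want to try) drops you into a prompt.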