Service hosting in a Homelab

Building affordable homelab AI workstations with AMD Ryzen, Ollama, and Docker

(2026-01-27T22:00:00.000Z) Local AI does not require expensive NVIDIA hardware. An AMD Ryzen mini PC with an integrated Radeon GPU and shared system memory can run 30B-parameter models at usable speeds, for under $1000.

Self-hosting Ollama in Docker to support AI features in other Docker services

(2026-01-22T22:00:00.000Z) Many applications of interest to homelabbers use AI to add value. Since Ollama is a popular tool for running LLMs locally, let's look at running it as a Docker service that other containers can use.
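As a rough sketch of the idea, Ollama can be declared alongside the services that consume it in a Compose file. The service name `ollama`, the volume name, and the example `open-webui` consumer are illustrative assumptions, not a prescribed setup; port 11434 is Ollama's default API port.

```yaml
# docker-compose.yml -- hypothetical example, adjust names and images to taste
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama-models:/root/.ollama   # persist downloaded models across restarts
    # Omit "ports:" if only other containers need access;
    # expose it on the host only if you want to reach the API directly.
    ports:
      - "11434:11434"

  # An example consumer: any service on the same Compose network can reach
  # the API at http://ollama:11434 using the service name as the hostname.
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama-models:
```

Keeping the model data in a named volume means models survive container upgrades, and putting consumers on the same Compose network avoids exposing the Ollama API to the wider LAN unless you choose to.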