2025-04-18 // 20:14
OLLAMA HOMELAB

Ollama on bare metal: what actually runs

16GB of RAM handles 8B-parameter models fine; Phi-3 Mini is fast. Llama 3 70B needs a GPU: even the 4-bit quantized version is barely usable at homelab CPU speeds.
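Rough napkin math behind those limits, as a sketch. The sizes are back-of-envelope weight estimates only; real quantized model files add metadata, and inference needs extra RAM for the KV cache and runtime:

```python
def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Back-of-envelope weight size in GB: params * bits / 8."""
    return params_billions * bits_per_weight / 8

# 8B model at 4-bit: ~4 GB of weights -- fits comfortably in 16GB RAM
print(f"8B @ 4-bit:  {model_size_gb(8, 4):.1f} GB")

# 70B model at 4-bit: ~35 GB of weights -- doesn't fit in 16GB at all,
# which is why the 70B quant needs a GPU (or a lot more RAM) to be usable
print(f"70B @ 4-bit: {model_size_gb(70, 4):.1f} GB")
```

The gap is the whole story: quantization shrinks 70B by 4x versus fp16, but 4-bit weights alone still exceed 16GB before the OS or the KV cache take their share.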