2025-05-01 // 22:48 KUBERNETES FLUX
Adding Flux to an already-running k3s cluster
Bootstrapping Flux onto a live cluster is different from starting fresh. Reconciliation will overwrite or prune workloads you deployed manually unless you annotate them first so Flux skips them.
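The opt-out is a per-object annotation that Flux v2's kustomize-controller honors. A minimal fragment to patch onto a manually-deployed resource (the surrounding object is whatever you deployed by hand):

```
metadata:
  annotations:
    # kustomize-controller will skip reconciling this object
    kustomize.toolkit.fluxcd.io/reconcile: disabled
```

Flip it back to `enabled` (or remove it) once the resource is managed from Git.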
2025-04-18 // 20:14 OLLAMA HOMELAB
Ollama on bare metal: what actually runs
16GB RAM is enough for 8B models. Phi-3 Mini runs fine. Llama 3 70B needs a GPU. Even the 4-bit quantized 70B is barely usable at homelab speed.
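The RAM numbers follow from simple arithmetic: weights take roughly params x bytes-per-weight, plus runtime overhead for KV cache and buffers. A quick sketch (the 1.2x overhead factor is a rough assumption, not a measured value):

```python
def est_mem_gb(params_billion: float, bits: int, overhead: float = 1.2) -> float:
    """Rough memory footprint of an LLM in GB.

    params_billion: parameter count in billions
    bits: quantization width per weight (e.g. 4 for Q4, 16 for fp16)
    overhead: fudge factor for KV cache / runtime buffers (assumed ~1.2x)
    """
    bytes_per_weight = bits / 8
    return params_billion * bytes_per_weight * overhead

# 8B model, 4-bit quantized: ~4.8 GB -> comfortable in 16 GB RAM
print(round(est_mem_gb(8, 4), 1))
# 70B model, 4-bit quantized: ~42 GB -> beyond a 16 GB box, hence the GPU
print(round(est_mem_gb(70, 4), 1))
```

Same math explains why fp16 70B (~168 GB) is out of the question without multiple GPUs.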