16GB of RAM handles 8B-parameter models fine, and Phi-3 Mini (3.8B) is faster still. Llama 3 70B is a different class: even quantized to 4-bit, the weights alone run around 40GB, so it effectively needs a GPU with plenty of VRAM; on CPU-only homelab hardware it is barely usable.
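The sizing above follows from simple arithmetic: weight memory is roughly parameter count times bits per weight, plus runtime overhead. A minimal sketch, where the 20% overhead factor is an assumption (KV cache and runtime buffers vary with context length):

```python
def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Rough memory estimate: params * bits/8, plus ~20% overhead
    (assumed figure for KV cache and runtime buffers)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# 8B model at 4-bit: ~4.8 GB -> fits comfortably in 16GB RAM
print(round(model_memory_gb(8, 4), 1))
# Llama 3 70B at 4-bit: ~42 GB -> beyond typical homelab RAM/VRAM
print(round(model_memory_gb(70, 4), 1))
```

This is why the 8B/70B boundary matters so much in practice: quantization shrinks a model roughly linearly in bits per weight, but a 70B model at 4-bit is still larger than an 8B model at full fp16.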