Llama · 400B params · 17B active (MoE) · 10M context · popular
Running Llama 4 Maverick 17Bx128 locally
Llama 4 Maverick 17Bx128 is an open-weight LLM from the Llama family with 400B total parameters (17B active per token, MoE). Main uses: chat, RAG, and general assistance. Detected minimum hardware: AMD Instinct MI325X (256 GB).
Technical facts
Parameters: 400B
Max context: 10M tokens
Q4_K_M: 251.5 GB
Q5_K_M: 307.3 GB
Q8: 447.0 GB
FP16: 894.1 GB
Family: Llama
Last sync: 2026-05-12
Available quantizations
Q4_K_M (GGUF): 251.5 GB. Acceptable quality; a good compromise when VRAM is limited.
Q5_K_M (GGUF): 307.3 GB. Good quality; the sweet spot between size and precision.
Q8 (GGUF): 447.0 GB. Near-FP16 quality; comfortable for production.
FP16: 894.1 GB. Reference precision; maximum quality at double the VRAM.
Compatible GPUs
3 single-GPU options: GPUs that can run Llama 4 Maverick 17Bx128 on a single card, ranked by VRAM headroom.
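The headroom ranking described above can be sketched as follows (the GPU list and VRAM figures are illustrative; only the MI325X comes from this page):

```python
def rank_by_headroom(gpus: dict[str, float], model_gb: float) -> list[tuple[str, float]]:
    """GPUs whose VRAM fits the model, sorted by leftover VRAM (headroom), largest first."""
    fits = [(name, vram - model_gb) for name, vram in gpus.items() if vram >= model_gb]
    return sorted(fits, key=lambda t: t[1], reverse=True)

# Illustrative single-GPU candidates (VRAM in GB); only the MI325X is from this page.
CANDIDATES = {"Instinct MI325X": 256.0, "H200": 141.0, "A100 80GB": 80.0}
# At Q4_K_M (251.5 GB), only the MI325X fits, with 4.5 GB of headroom.
```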
VRAM estimates are computed as parameters × bits ÷ 8, plus a margin for overhead. Real performance varies with inference engine, context length, and batch size.
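A minimal sketch of that estimate in Python. The 10% overhead margin is an assumption for illustration; the page does not state the exact margin it uses:

```python
def estimate_vram_gb(n_params: float, bits_per_weight: float, margin: float = 0.10) -> float:
    """Rough VRAM footprint: weight bytes (params * bits / 8) plus a flat overhead margin."""
    base_gb = n_params * bits_per_weight / 8 / 1e9  # bytes for the weights, in GB
    return base_gb * (1 + margin)

# Weights-only FP16 for a 400B-parameter model: 400e9 * 16 / 8 bytes = 800 GB,
# before any overhead, KV cache, or activations.
```

Note this ignores the KV cache, which is why real usage grows with context length and batch size.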