Llama · 109B params · 17B active (MoE) · 10M context · popular

Run Llama 4 Scout 17Bx16 locally

Llama 4 Scout 17Bx16 is an open-weight mixture-of-experts LLM from the Llama family with 109B total parameters (17B active). Main uses: chat, RAG and general assistance. Detected minimum hardware: NVIDIA A100 80GB.

Technical facts
Parameters: 109B
Max context: 10M tokens
Q4_K_M: 68.5 GB
Q5_K_M: 83.7 GB
Q8: 121.8 GB
FP16: 243.6 GB
Family: Llama
Last sync: 2026-05-12

Available quantizations

Q4_K_M
68.5 GB

Acceptable quality. Good compromise when VRAM is limited.

Q5_K_M
83.7 GB

Good quality. Sweet spot between size and precision.

Q8
121.8 GB

Near-FP16 quality. Comfortable for production.

FP16
243.6 GB

Reference precision. Maximum quality at twice the VRAM of Q8.

Compatible GPUs

GPUs that can run Llama 4 Scout 17Bx16 on a single card, ranked by VRAM headroom.
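The ranking can be sketched as a comparison of each card's VRAM against the chosen quantization's footprint. Sizes come from the table above; the GPU list here is an illustrative assumption, not the site's full database:

```python
# Sketch: rank single cards by VRAM headroom for a chosen quantization.
# Quantization sizes are from the table above; the GPU list is illustrative.
QUANT_GB = {"Q4_K_M": 68.5, "Q5_K_M": 83.7, "Q8": 121.8, "FP16": 243.6}

GPUS_GB = {  # assumed examples, not an exhaustive list
    "NVIDIA A100 80GB": 80,
    "NVIDIA H100 80GB": 80,
    "NVIDIA RTX 6000 Ada 48GB": 48,
}

def single_card_headroom(quant: str):
    """Cards that fit `quant` on one GPU, ranked by spare VRAM in GB."""
    need = QUANT_GB[quant]
    fits = [(name, round(vram - need, 1))
            for name, vram in GPUS_GB.items() if vram >= need]
    return sorted(fits, key=lambda t: t[1], reverse=True)

print(single_card_headroom("Q4_K_M"))
```

With these numbers, only 80 GB cards clear the Q4_K_M bar (11.5 GB of headroom), which matches the detected minimum of an A100 80GB.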

Recommended multi-GPU rigs

For Llama 4 Scout 17Bx16 at higher quantization or with more context, a multi-GPU rig gives more headroom.
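A minimal sketch of the multi-GPU fit check, assuming VRAM pools near-linearly under tensor parallelism; the per-card reserve is an assumption (real engines hold back memory for activations and KV cache):

```python
# Sketch: does a multi-GPU rig fit a given quantization?
# Assumes near-linear VRAM pooling under tensor parallelism; the 2 GB
# per-card reserve for activations/KV cache is an assumption.
QUANT_GB = {"Q4_K_M": 68.5, "Q5_K_M": 83.7, "Q8": 121.8, "FP16": 243.6}

def rig_fits(quant: str, vram_per_gpu_gb: float, n_gpus: int,
             reserve_gb: float = 2.0) -> bool:
    """True if the rig can hold the weights with `reserve_gb` kept free per card."""
    usable = n_gpus * (vram_per_gpu_gb - reserve_gb)
    return usable >= QUANT_GB[quant]

# 4x TITAN RTX (24 GB each) against Q5_K_M: 4 * 22 = 88 GB usable.
print(rig_fits("Q5_K_M", 24, 4))
```

Under these assumptions a 4× TITAN RTX rig clears Q5_K_M but not Q8, which is why it pairs with the mid-range quantizations.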

Recommended rig

4× TITAN RTX

Llama 4 Scout 17Bx16 with Ubuntu, vLLM, Open WebUI and the model already downloaded.

Configure

Similar models

VRAM estimates follow parameters × bits per weight / 8, plus a margin for overhead. Real performance varies by engine, context length and batch size.
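The estimation rule can be written out as a short sketch. The effective bits-per-weight for the K-quants (roughly 4.5 and 5.5) and the ~12% margin are assumptions back-solved to reproduce the table above, not official figures:

```python
# Sketch of the size rule: parameters * bits / 8, plus a margin.
# Effective bits-per-weight for K-quants and the ~12% margin are
# assumptions chosen to match the table above, not official figures.
PARAMS = 109e9
MARGIN = 1.1174  # ~12% overhead, back-solved from the FP16 row

def est_gb(bits_per_weight: float) -> float:
    return round(PARAMS * bits_per_weight / 8 * MARGIN / 1e9, 1)

for name, bpw in [("Q4_K_M", 4.5), ("Q5_K_M", 5.5), ("Q8", 8.0), ("FP16", 16.0)]:
    print(name, est_gb(bpw))
```

With these inputs the sketch reproduces the table: 68.5, 83.7, 121.8 and 243.6 GB.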