Mistral · 141B params · 39B active (MoE) · 66k context

Run Mixtral 8x22B locally

Mixtral 8x22B is an open-weight mixture-of-experts LLM from the Mistral family with 141B total parameters (39B active per token). Main uses: chat, RAG and general assistance. Minimum detected hardware for a single card: NVIDIA H100 NVL (94 GB).

Technical facts
Parameters: 141B
Max context: 66k
Q4_K_M: 88.6 GB
Q5_K_M: 108.3 GB
Q8: 157.6 GB
FP16: 315.2 GB
Family: Mistral
Last sync: 2026-05-12

Available quantizations

Q4_K_M
88.6 GB

Acceptable. Good compromise when VRAM is limited.

Q5_K_M
108.3 GB

Good quality. Sweet spot for size and precision.

Q8
157.6 GB

Near-FP16 quality. Comfortable for production.

FP16
315.2 GB

Reference precision. Maximum quality, at roughly twice the VRAM of Q8.
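As a rough illustration, here is a minimal Python sketch that picks the largest of the footprints above that fits a given VRAM budget. The flat overhead figure for KV cache and runtime buffers is an assumption, not a measured value.

```python
# Footprints from the list above, in GB (weights only).
QUANT_SIZES_GB = {
    "Q4_K_M": 88.6,
    "Q5_K_M": 108.3,
    "Q8": 157.6,
    "FP16": 315.2,
}

# Placeholder margin for KV cache, activations and runtime buffers.
OVERHEAD_GB = 2.0

def best_quant(vram_gb: float) -> str | None:
    """Return the highest-precision quantization that fits in vram_gb."""
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size + OVERHEAD_GB <= vram_gb]
    return max(fitting)[1] if fitting else None

if __name__ == "__main__":
    for vram in (94, 141, 192, 384):
        print(f"{vram} GB -> {best_quant(vram)}")
```

With these assumptions, 94 GB (the H100 NVL above) lands on Q4_K_M, which matches the detected minimum hardware.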

Compatible GPUs

GPUs that can run Mixtral 8x22B on a single card, ranked by VRAM headroom.
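For illustration, a minimal sketch of that ranking. The GPU names and VRAM figures are example values chosen for the sketch, not the site's detected list.

```python
# Example single-card GPUs and their VRAM in GB (illustrative, not exhaustive).
GPUS_GB = {
    "NVIDIA H200": 141,
    "NVIDIA H100 NVL": 94,
    "NVIDIA B200": 192,
}

Q4_K_M_GB = 88.6  # smallest footprint from the list above

# Keep cards that fit the Q4_K_M weights and rank them by leftover VRAM.
ranked = sorted(
    ((vram - Q4_K_M_GB, name) for name, vram in GPUS_GB.items() if vram >= Q4_K_M_GB),
    reverse=True,
)
for headroom, name in ranked:
    print(f"{name}: {headroom:.1f} GB headroom")
```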

Recommended multi-GPU rigs

For Mixtral 8x22B at higher-precision quantizations or with longer context, a multi-GPU rig gives more headroom.
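A hedged sketch of the card-count math, assuming the weights shard evenly across cards and reserving about 10% of each card for KV cache and parallelism overhead (a placeholder figure):

```python
import math

QUANT_SIZES_GB = {"Q4_K_M": 88.6, "Q5_K_M": 108.3, "Q8": 157.6, "FP16": 315.2}

def cards_needed(quant_gb: float, card_vram_gb: float, usable_fraction: float = 0.9) -> int:
    """Rough card count: weights spread evenly, ~10% of each card reserved
    for KV cache, activations and parallelism overhead (placeholder)."""
    return math.ceil(quant_gb / (card_vram_gb * usable_fraction))

for name, size in QUANT_SIZES_GB.items():
    print(f"{name}: {cards_needed(size, 32)} x 32 GB cards")
```

Under these assumptions, four 32 GB cards cover Q4_K_M and Q5_K_M, which is consistent with the recommended rig below.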

Recommended rig

4× RTX 5090

Mixtral 8x22B with Ubuntu, vLLM, Open WebUI and the model already downloaded.

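As a sketch of what the vLLM side of such a rig could look like, loaded through vLLM's Python API across the four cards. The quantized checkpoint id, context length and sampling settings are assumptions for the example, not the rig's actual configuration.

```python
from vllm import LLM, SamplingParams

# Assumption: a 4-bit AWQ checkpoint (hypothetical repo id) so the weights
# fit in 4 x 32 GB; the preconfigured rig may use a different setup.
llm = LLM(
    model="someorg/Mixtral-8x22B-Instruct-v0.1-AWQ",  # hypothetical quantized checkpoint
    quantization="awq",
    tensor_parallel_size=4,   # shard the weights across the four cards
    max_model_len=16384,      # trade context length for KV-cache memory
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain mixture-of-experts routing in two sentences."], params)
print(outputs[0].outputs[0].text)
```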


VRAM estimates follow parameters × bits per weight ÷ 8, plus a margin for overhead. Real performance varies by engine, context length and batch size.
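For example, with an assumed effective rate of about 4.85 bits per weight for Q4_K_M and a small margin, the formula lands near the listed footprint:

```python
PARAMS = 141e9          # total parameters (MoE stores all experts)
BITS_PER_WEIGHT = 4.85  # approximate effective rate for Q4_K_M (assumption)
MARGIN_GB = 3.0         # placeholder for metadata and runtime buffers

weights_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9
print(f"~{weights_gb + MARGIN_GB:.1f} GB")  # close to the 88.6 GB listed above
```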