Llama 3.3 70B locally
Llama 3.3 70B is an open-weight LLM from the Llama family with 70B parameters. Main uses: chat, RAG, and general assistance. Detected minimum hardware: Quadro RTX 8000 (48 GB).
Technical facts
Parameters: 70B
Max context: 128k
Q4_K_M: 44.0 GB
Q5_K_M: 53.8 GB
Q8: 78.2 GB
FP16: 156.5 GB
Family: Llama
Last sync: 2026-05-12
Available quantizations
GGUF weights:
Q4_K_M: 44.0 GB. Acceptable quality; a good compromise when VRAM is limited.
Q5_K_M: 53.8 GB. Good quality; the sweet spot for size and precision.
Q8: 78.2 GB. Near-FP16 quality; comfortable for production.
FP16: 156.5 GB. Reference precision; maximum quality at double the VRAM.
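The listed sizes are consistent with scaling the FP16 footprint by bits per weight. A minimal sketch of that relationship; the effective bits assumed for the K-quants (4.5 for Q4_K_M, 5.5 for Q5_K_M, reflecting their mixed-precision layout) are approximations, not published format constants:

```python
# Estimate quantized GGUF sizes by scaling the FP16 footprint
# by effective bits per weight. The K-quant effective bits
# (4.5 / 5.5) are assumed averages, not exact format specs.
FP16_GB = 156.5  # FP16 size listed above

EFFECTIVE_BITS = {"Q4_K_M": 4.5, "Q5_K_M": 5.5, "Q8": 8.0, "FP16": 16.0}

def quant_size_gb(bits: float, fp16_gb: float = FP16_GB) -> float:
    """Scale the FP16 size by bits-per-weight / 16."""
    return fp16_gb * bits / 16.0

for name, bits in EFFECTIVE_BITS.items():
    print(f"{name}: {quant_size_gb(bits):.1f} GB")
```

Under these assumptions the formula reproduces the table above to within 0.1 GB.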
Compatible GPUs
12 single-GPU options: GPUs that can run Llama 3.3 70B on a single card, ranked by VRAM headroom.
Quadro RTX 8000 · NVIDIA · 48 GB · Quadro RTX · 44.0 / 48 GB · tight · Q4
RTX A6000 · NVIDIA · 48 GB · RTX A (Ampere) · 44.0 / 48 GB · tight · Q4
RTX 6000 Ada · NVIDIA · 48 GB · RTX Ada · 44.0 / 48 GB · tight · Q4
NVIDIA A40 · NVIDIA · 48 GB · Ampere DC · 44.0 / 48 GB · tight · Q4
NVIDIA L40 · NVIDIA · 48 GB · Lovelace DC · 44.0 / 48 GB · tight · Q4
NVIDIA L40S · NVIDIA · 48 GB · Lovelace DC · 44.0 / 48 GB · tight · Q4
Radeon Pro W7900 · AMD · 48 GB · Radeon Pro W · 44.0 / 48 GB · tight · Q4
Mac Mini M4 Pro (48GB) · Apple · 48 GB · Mac Mini · 44.0 / 48 GB · tight · Q4
MacBook Pro 14 M4 Pro (48GB) · Apple · 48 GB · MacBook Pro 14 · 44.0 / 48 GB · tight · Q4
MacBook Pro 14 M4 Max (48GB) · Apple · 48 GB · MacBook Pro 14 · 44.0 / 48 GB · tight · Q4
MacBook Pro 16 M3 Max (48GB) · Apple · 48 GB · MacBook Pro 16 · 44.0 / 48 GB · tight · Q4
MacBook Pro 16 M4 Pro (48GB) · Apple · 48 GB · MacBook Pro 16 · 44.0 / 48 GB · tight · Q4
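The "tight" labels above can be reproduced with a simple headroom check: the Q4_K_M weights leave only 4 GB free on a 48 GB card for KV cache and activations. A sketch; the 15% free-VRAM cutoff for "tight" is an assumed threshold, not the site's documented rule:

```python
# Classify whether a model's weights fit on a single GPU.
# The 15% free-VRAM threshold for "tight" is an assumption.
def fit_label(model_gb: float, vram_gb: float) -> str:
    free = vram_gb - model_gb
    if free < 0:
        return "no fit"       # weights alone exceed VRAM
    if free < 0.15 * vram_gb:
        return "tight"        # fits, little room for KV cache
    return "comfortable"

print(fit_label(44.0, 48.0))  # Q4_K_M on a 48 GB card -> "tight"
print(fit_label(44.0, 24.0))  # a 24 GB card -> "no fit"
print(fit_label(44.0, 80.0))  # an 80 GB card -> "comfortable"
```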
Recommended multi-GPU rigs
2× / 4× consumer GPUs: for Llama 3.3 70B at a higher quantization or with more context, a multi-GPU rig gives more headroom.
4× RTX 2060 12GB · NVIDIA · 48 GB · RTX 20 · 44.0 / 48 GB · tight · Q4
2× TITAN RTX · NVIDIA · 48 GB · RTX 20 · 44.0 / 48 GB · tight · Q4
4× RTX 3060 12GB · NVIDIA · 48 GB · RTX 30 · 44.0 / 48 GB · tight · Q4
4× RTX 3080 12GB · NVIDIA · 48 GB · RTX 30 · 44.0 / 48 GB · tight · Q4
4× RTX 3080 Ti · NVIDIA · 48 GB · RTX 30 · 44.0 / 48 GB · tight · Q4
2× RTX 3090 · NVIDIA · 48 GB · RTX 30 · 44.0 / 48 GB · tight · Q4
2× RTX 3090 Ti · NVIDIA · 48 GB · RTX 30 · 44.0 / 48 GB · tight · Q4
4× RTX 4070 · NVIDIA · 48 GB · RTX 40 · 44.0 / 48 GB · tight · Q4
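With the weights sharded across cards (e.g. via tensor parallelism), each card also needs room for its own KV-cache and activation shard, so aggregate VRAM slightly overstates what is usable. A rough feasibility check; the 0.5 GB per-card overhead is an assumed placeholder, real overhead depends on the engine and context length:

```python
# Check whether a multi-GPU rig can hold the model when weights
# are sharded evenly across cards. The per-card overhead
# (CUDA context, activations, KV-cache shard) is an assumption.
def rig_fits(model_gb: float, per_card_gb: float, n_cards: int,
             overhead_gb: float = 0.5) -> bool:
    usable = n_cards * (per_card_gb - overhead_gb)
    return usable >= model_gb

print(rig_fits(44.0, 12.0, 4))  # 4x 12 GB cards -> True, barely
print(rig_fits(44.0, 24.0, 2))  # 2x RTX 3090 -> True
print(rig_fits(53.8, 12.0, 4))  # Q5_K_M on 4x 12 GB -> False
```

Note that more cards means more total overhead, which is one reason a 2× 24 GB rig leaves slightly more usable room than a 4× 12 GB rig at the same 48 GB total.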
Recommended rig
4× RTX 2060 12GB
Ships with Ubuntu, vLLM, Open WebUI, and Llama 3.3 70B already downloaded.
VRAM estimates: parameters × (bits per weight / 8), plus a margin. Real performance varies with the inference engine, context length, and batch size.