Llama 2 13B locally
Llama · 13B params · 4k context
Llama 2 13B is an open-weight LLM from Meta's Llama 2 family with 13 billion parameters. Main uses: chat, RAG and general assistance. Detected minimum hardware: RTX 3080 10GB (10 GB of VRAM).
Technical facts
Parameters: 13B
Max context: 4k tokens
Q4_K_M: 8.2 GB
Q5_K_M: 10.0 GB
Q8: 14.5 GB
FP16: 29.1 GB
Family: Llama
Last sync: 2026-05-12
Available quantizations
GGUF weights:
Q4_K_M · 8.2 GB · Acceptable quality; a good compromise when VRAM is limited.
Q5_K_M · 10.0 GB · Good quality; the sweet spot between size and precision.
Q8 · 14.5 GB · Near-FP16 quality; comfortable for production.
FP16 · 29.1 GB · Reference precision; maximum quality at roughly double the VRAM of Q8.
Compatible GPUs
12 single-GPU options: GPUs that can run Llama 2 13B on a single card, ranked by VRAM headroom.
RTX 3080 10GB (NVIDIA, RTX 30): 10 GB VRAM · 8.2 / 10 GB · comfortable · Q4
Radeon RX 6700 (AMD, RDNA 2): 10 GB VRAM · 8.2 / 10 GB · comfortable · Q4
Arc B570 (Intel, Arc Battlemage): 10 GB VRAM · 8.2 / 10 GB · comfortable · Q4
GTX 1080 Ti (NVIDIA, GTX 10): 11 GB VRAM · 8.2 / 11 GB · comfortable · Q4
RTX 2080 Ti (NVIDIA, RTX 20): 11 GB VRAM · 8.2 / 11 GB · comfortable · Q4
RTX 2060 12GB (NVIDIA, RTX 20): 12 GB VRAM · 10.0 / 12 GB · comfortable · Q5
RTX 3060 12GB (NVIDIA, RTX 30): 12 GB VRAM · 10.0 / 12 GB · comfortable · Q5
RTX 3080 12GB (NVIDIA, RTX 30): 12 GB VRAM · 10.0 / 12 GB · comfortable · Q5
RTX 3080 Ti (NVIDIA, RTX 30): 12 GB VRAM · 10.0 / 12 GB · comfortable · Q5
RTX 4070 (NVIDIA, RTX 40): 12 GB VRAM · 10.0 / 12 GB · comfortable · Q5
RTX 4070 Super (NVIDIA, RTX 40): 12 GB VRAM · 10.0 / 12 GB · comfortable · Q5
RTX 4070 Ti (NVIDIA, RTX 40): 12 GB VRAM · 10.0 / 12 GB · comfortable · Q5
Recommended multi-GPU rigs
2× / 4× consumer GPUs: for Llama 2 13B at a higher quantization or with more context, a multi-GPU rig gives extra headroom.
2× GTX 1060 6GB (NVIDIA, GTX 10): 12 GB total VRAM · 10.0 / 12 GB · comfortable · Q5
2× GTX 1660 (NVIDIA, GTX 16): 12 GB total VRAM · 10.0 / 12 GB · comfortable · Q5
2× GTX 1660 Super (NVIDIA, GTX 16): 12 GB total VRAM · 10.0 / 12 GB · comfortable · Q5
2× GTX 1660 Ti (NVIDIA, GTX 16): 12 GB total VRAM · 10.0 / 12 GB · comfortable · Q5
2× RTX 2060 6GB (NVIDIA, RTX 20): 12 GB total VRAM · 10.0 / 12 GB · comfortable · Q5
2× RTX 3050 6GB (NVIDIA, RTX 30): 12 GB total VRAM · 10.0 / 12 GB · comfortable · Q5
2× Arc A380 (Intel, Arc Alchemist): 12 GB total VRAM · 10.0 / 12 GB · comfortable · Q5
2× GTX 1070 (NVIDIA, GTX 10): 16 GB total VRAM · 10.0 / 16 GB · comfortable · Q5
Recommended rig
2× GTX 1060 6GB
Ships with Ubuntu, vLLM, Open WebUI and Llama 2 13B already downloaded.
VRAM estimates are computed as parameters × bits per weight / 8, plus a margin. Real performance varies by engine, context length and batch size.
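That formula can be sketched numerically. The bits-per-weight values (roughly 4.5 for Q4_K_M, 5.5 for Q5_K_M, 8 for Q8, 16 for FP16) and the ~12% margin below are assumptions chosen so the results approximately match the sizes listed above; they are not published constants:

```python
# Estimate VRAM for Llama 2 13B: params * bits-per-weight / 8 * margin.
# Bits-per-weight and the 12% margin are assumed values that roughly
# reproduce the quantization sizes in the table above.
PARAMS_B = 13  # billions of parameters
BITS_PER_WEIGHT = {"Q4_K_M": 4.5, "Q5_K_M": 5.5, "Q8": 8.0, "FP16": 16.0}
MARGIN = 1.12  # ~12% overhead for runtime buffers (assumption)

def estimate_gb(quant: str) -> float:
    """Estimated VRAM footprint in GB for one quantization."""
    return round(PARAMS_B * BITS_PER_WEIGHT[quant] / 8 * MARGIN, 1)

for q in BITS_PER_WEIGHT:
    print(q, estimate_gb(q), "GB")
```

The estimates land within about 0.1 GB of the table's figures, which suggests the site uses a formula of this shape.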