StableLM 2 12B locally
Stable family · 12B params · 4k context
StableLM 2 12B is an open-weight LLM from the Stable family with 12B parameters. Main uses: chat, RAG and general assistance. Minimum detected hardware: GTX 1070 (8 GB).
Technical facts
Parameters: 12B
Max context: 4k
Q4_K_M: 7.5 GB
Q5_K_M: 9.2 GB
Q8: 13.4 GB
FP16: 26.8 GB
Family: Stable
Last sync: 2026-05-12
Available quantizations
Q4_K_M (GGUF) · 7.5 GB · Acceptable. Good compromise when VRAM is limited.
Q5_K_M (GGUF) · 9.2 GB · Good quality. Sweet spot for size and precision.
Q8 (GGUF) · 13.4 GB · Near-FP16 quality. Comfortable for production.
FP16 · 26.8 GB · Reference precision. Maximum quality at double the VRAM of Q8.
Compatible GPUs
12 single-GPU options · GPUs that can run StableLM 2 12B on a single card, ranked by VRAM headroom.
GTX 1070 · NVIDIA · 8 GB · GTX 10 · 7.5 / 8 GB · tight · Q4
GTX 1070 Ti · NVIDIA · 8 GB · GTX 10 · 7.5 / 8 GB · tight · Q4
GTX 1080 · NVIDIA · 8 GB · GTX 10 · 7.5 / 8 GB · tight · Q4
RTX 2060 Super · NVIDIA · 8 GB · RTX 20 · 7.5 / 8 GB · tight · Q4
RTX 2070 · NVIDIA · 8 GB · RTX 20 · 7.5 / 8 GB · tight · Q4
RTX 2070 Super · NVIDIA · 8 GB · RTX 20 · 7.5 / 8 GB · tight · Q4
RTX 2080 · NVIDIA · 8 GB · RTX 20 · 7.5 / 8 GB · tight · Q4
RTX 2080 Super · NVIDIA · 8 GB · RTX 20 · 7.5 / 8 GB · tight · Q4
RTX 3050 8GB · NVIDIA · 8 GB · RTX 30 · 7.5 / 8 GB · tight · Q4
RTX 3060 8GB · NVIDIA · 8 GB · RTX 30 · 7.5 / 8 GB · tight · Q4
RTX 3060 Ti · NVIDIA · 8 GB · RTX 30 · 7.5 / 8 GB · tight · Q4
RTX 3070 · NVIDIA · 8 GB · RTX 30 · 7.5 / 8 GB · tight · Q4
Recommended multi-GPU rigs
2× / 4× consumer GPUs · For StableLM 2 12B at higher quantization or with more context, a multi-GPU rig gives more headroom.
2× GTX 1650 · NVIDIA · 8 GB total · GTX 16 · 7.5 / 8 GB · tight · Q4
2× GTX 1060 6GB · NVIDIA · 12 GB total · GTX 10 · 9.2 / 12 GB · comfortable · Q5
2× GTX 1660 · NVIDIA · 12 GB total · GTX 16 · 9.2 / 12 GB · comfortable · Q5
2× GTX 1660 Super · NVIDIA · 12 GB total · GTX 16 · 9.2 / 12 GB · comfortable · Q5
2× GTX 1660 Ti · NVIDIA · 12 GB total · GTX 16 · 9.2 / 12 GB · comfortable · Q5
2× RTX 2060 6GB · NVIDIA · 12 GB total · RTX 20 · 9.2 / 12 GB · comfortable · Q5
2× RTX 3050 6GB · NVIDIA · 12 GB total · RTX 30 · 9.2 / 12 GB · comfortable · Q5
2× Arc A380 · Intel · 12 GB total · Arc Alchemist · 9.2 / 12 GB · comfortable · Q5
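When a model is split across cards, each GPU holds roughly an even share of the weights plus some per-GPU runtime overhead, so pooled VRAM is not quite fully usable. A minimal fit-check sketch; the 0.5 GB per-GPU overhead is an assumed placeholder, and real engines differ:

```python
def fits_multi_gpu(model_gb: float, gpus: int, vram_per_gpu_gb: float,
                   overhead_gb: float = 0.5) -> bool:
    """Check whether an evenly split model fits on each card.

    Each GPU holds model_gb / gpus of the weights plus a fixed
    per-GPU overhead (assumed value; engine-dependent in practice).
    """
    per_gpu = model_gb / gpus + overhead_gb
    return per_gpu <= vram_per_gpu_gb

# Q5_K_M (9.2 GB) on a 2x GTX 1060 6GB rig
print(fits_multi_gpu(9.2, 2, 6.0))
# Q8 (13.4 GB) on the same 2x 6 GB rig
print(fits_multi_gpu(13.4, 2, 6.0))
```

Under these assumptions Q5_K_M fits on a pair of 6 GB cards (about 5.1 GB per GPU), while Q8 does not, consistent with the Q5 recommendation above.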
Recommended rig
2× GTX 1060 6GB
Runs StableLM 2 12B on Ubuntu with vLLM and Open WebUI, with the model weights already downloaded.
VRAM estimates: parameters × bits / 8, plus a margin. Real performance varies by engine, context length and batch size.
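That estimate can be written out directly. The 12% margin below is an assumption back-fitted to this page's numbers (e.g. FP16: 12B × 16 bits / 8 = 24 GB, plus margin ≈ 26.8 GB); real overhead depends on the engine and context length:

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float,
                     margin: float = 0.12) -> float:
    """Estimate weight VRAM in GB: parameters x bits / 8, plus a margin.

    params_b is in billions, so params_b * bits / 8 lands directly in GB.
    The 12% default margin is an assumption fitted to this page's table.
    """
    base = params_b * bits_per_weight / 8
    return round(base * (1 + margin), 1)

print(estimate_vram_gb(12, 8))   # Q8-style 8-bit weights
print(estimate_vram_gb(12, 16))  # FP16
```

For 12B at 8 bits this gives 13.4 GB, matching the Q8 figure above; at 16 bits it gives about 26.9 GB versus the listed 26.8 GB, so treat the margin as approximate.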