GLM · 9B params · 128k context · popular
GLM-4 9B locally
GLM-4 9B is an open-weight LLM from the GLM family with 9B parameters. Main uses: chat, RAG, and general assistance. Detected minimum hardware: GTX 1060 6GB (6 GB VRAM).
Technical facts
- Parameters: 9B
- Max context: 128k
- Q4_K_M: 5.7 GB
- Q5_K_M: 6.9 GB
- Q8: 10.1 GB
- FP16: 20.1 GB
- Family: GLM
- Last sync: 2026-05-12
Available quantizations
GGUF weights:
- Q4_K_M (5.7 GB): Acceptable. Good compromise when VRAM is limited.
- Q5_K_M (6.9 GB): Good quality. Sweet spot for size and precision.
- Q8 (10.1 GB): Near-FP16 quality. Comfortable for production.
- FP16 (20.1 GB): Reference precision. Maximum quality, doubled VRAM.
Compatible GPUs
12 single-GPU options: GPUs that can run GLM-4 9B on a single card, ranked by VRAM headroom.
- GTX 1060 6GB (NVIDIA, GTX 10 series, 6 GB): 5.7 / 6 GB, tight, Q4
- GTX 1660 (NVIDIA, GTX 16 series, 6 GB): 5.7 / 6 GB, tight, Q4
- GTX 1660 Super (NVIDIA, GTX 16 series, 6 GB): 5.7 / 6 GB, tight, Q4
- GTX 1660 Ti (NVIDIA, GTX 16 series, 6 GB): 5.7 / 6 GB, tight, Q4
- RTX 2060 6GB (NVIDIA, RTX 20 series, 6 GB): 5.7 / 6 GB, tight, Q4
- RTX 3050 6GB (NVIDIA, RTX 30 series, 6 GB): 5.7 / 6 GB, tight, Q4
- Arc A380 (Intel, Arc Alchemist, 6 GB): 5.7 / 6 GB, tight, Q4
- GTX 1070 (NVIDIA, GTX 10 series, 8 GB): 5.7 / 8 GB, comfortable, Q4
- GTX 1070 Ti (NVIDIA, GTX 10 series, 8 GB): 5.7 / 8 GB, comfortable, Q4
- GTX 1080 (NVIDIA, GTX 10 series, 8 GB): 5.7 / 8 GB, comfortable, Q4
- RTX 2060 Super (NVIDIA, RTX 20 series, 8 GB): 5.7 / 8 GB, comfortable, Q4
- RTX 2070 (NVIDIA, RTX 20 series, 8 GB): 5.7 / 8 GB, comfortable, Q4
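The fit labels in the list above can be reproduced with a small heuristic. A minimal Python sketch, under the assumption (the site's exact rule is not published) that a quantization is "comfortable" when it leaves at least 15% of VRAM free for KV cache and activations, and "tight" when it merely fits:

```python
# Hypothetical reconstruction of this page's fit labels for GLM-4 9B.
# The 15% free-VRAM threshold is an assumption tuned to match the table,
# not a documented rule.
QUANTS = [("Q4_K_M", 5.7), ("Q5_K_M", 6.9), ("Q8", 10.1), ("FP16", 20.1)]

def pick_quant(vram_gb: float, reserve_frac: float = 0.15):
    """Pick the largest quantization of GLM-4 9B that fits in vram_gb.

    Prefer quants that leave >= reserve_frac of VRAM free (for KV cache
    and activations); otherwise fall back to the largest that merely
    fits, labeled 'tight'.
    """
    fitting = [(name, size) for name, size in QUANTS if size <= vram_gb]
    if not fitting:
        return None  # model does not fit on this card at any quantization
    roomy = [(n, s) for n, s in fitting
             if vram_gb - s >= reserve_frac * vram_gb]
    if roomy:
        name, size = roomy[-1]  # QUANTS is ordered smallest to largest
        return name, size, "comfortable"
    name, size = fitting[-1]
    return name, size, "tight"

print(pick_quant(6))   # 6 GB card, e.g. GTX 1060 6GB
print(pick_quant(8))   # 8 GB card, e.g. GTX 1070
```

With these values the sketch matches the table: a 6 GB card gets Q4 marked tight, an 8 GB card gets Q4 marked comfortable, and a 12 GB budget gets Q8.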
Recommended multi-GPU rigs
2x / 4x consumer GPUs: for GLM-4 9B at higher quantization or with more context, a multi-GPU rig gives more headroom.
- 2× GTX 1650 (NVIDIA, GTX 16 series, 8 GB total): 5.7 / 8 GB, comfortable, Q4
- 2× GTX 1060 6GB (NVIDIA, GTX 10 series, 12 GB total): 10.1 / 12 GB, comfortable, Q8
- 2× GTX 1660 (NVIDIA, GTX 16 series, 12 GB total): 10.1 / 12 GB, comfortable, Q8
- 2× GTX 1660 Super (NVIDIA, GTX 16 series, 12 GB total): 10.1 / 12 GB, comfortable, Q8
- 2× GTX 1660 Ti (NVIDIA, GTX 16 series, 12 GB total): 10.1 / 12 GB, comfortable, Q8
- 2× RTX 2060 6GB (NVIDIA, RTX 20 series, 12 GB total): 10.1 / 12 GB, comfortable, Q8
- 2× RTX 3050 6GB (NVIDIA, RTX 30 series, 12 GB total): 10.1 / 12 GB, comfortable, Q8
- 2× Arc A380 (Intel, Arc Alchemist, 12 GB total): 10.1 / 12 GB, comfortable, Q8
Recommended rig
2× GTX 1650: GLM-4 9B with Ubuntu, vLLM, Open WebUI, and the model already downloaded.
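On a two-GPU rig like this, serving could look like the following shell sketch. This is an assumption-laden example, not a verified recipe: the Hugging Face model id `THUDM/glm-4-9b-chat` and the flag names should be checked against your installed vLLM version.

```shell
# Hedged sketch: serve GLM-4 9B with vLLM's OpenAI-compatible server,
# sharding the weights across the 2 GPUs via tensor parallelism.
# Model id and flags are assumptions to verify against your vLLM release.
vllm serve THUDM/glm-4-9b-chat \
  --tensor-parallel-size 2 \
  --max-model-len 8192
```

Open WebUI can then be pointed at the resulting OpenAI-compatible endpoint (vLLM listens on port 8000 by default). Capping `--max-model-len` below the model's 128k maximum keeps the KV cache small enough for modest VRAM.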
VRAM estimates: parameters × bits / 8, plus a margin. Real performance varies by engine, context length, and batch size.
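That estimate can be sketched in Python. The 10% margin below is an assumption chosen to roughly match the sizes listed on this page, not a published constant:

```python
def estimate_vram_gb(params_billions: float, bits_per_weight: float,
                     margin: float = 0.10) -> float:
    """Rough weight-memory estimate: parameters x bits / 8, plus margin."""
    raw_gb = params_billions * bits_per_weight / 8
    return raw_gb * (1 + margin)

# GLM-4 9B at FP16: 9 * 16 / 8 = 18 GB raw, ~19.8 GB with the margin
# (the page lists 20.1 GB). Note that K-quants use fractional bits per
# weight, so Q4_K_M is closer to ~4.8 bits than exactly 4.
print(round(estimate_vram_gb(9, 16), 1))
```

This only covers the weights; KV cache grows with context length on top of it, which is why long-context runs need extra headroom.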