GLM · 9B params · 128k context

GLM-4.1V 9B Thinking locally

GLM-4.1V 9B Thinking is an open-weight 9B-parameter model from the GLM family, aimed mainly at reasoning and problem solving. Detected minimum hardware: GTX 1060 (6 GB).

Technical facts
Parameters: 9B
Max context: 128k
Q4_K_M: 5.7 GB
Q5_K_M: 6.9 GB
Q8: 10.1 GB
FP16: 20.1 GB
Family: GLM
Last sync: 2026-05-12

Available quantizations

Q4_K_M
5.7 GB

Acceptable. Good compromise when VRAM is limited.

Q5_K_M
6.9 GB

Good quality. Sweet spot for size and precision.

Q8
10.1 GB

Near-FP16 quality. Comfortable for production.

FP16
20.1 GB

Reference precision. Maximum quality at twice the VRAM.

Compatible GPUs

GPUs that can run GLM-4.1V 9B Thinking on a single card, ranked by VRAM headroom.
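As a rough illustration, headroom is the card's VRAM minus the model footprint. The Python sketch below ranks a few GPUs against the Q4_K_M size from the table above; the GPU list and VRAM figures are illustrative assumptions, not the site's compatibility data.

```python
# Sketch: rank single GPUs by VRAM headroom over the Q4_K_M footprint (5.7 GB).
# The GPU list and VRAM figures are illustrative assumptions, not the site's data.
MODEL_GB = 5.7  # Q4_K_M size from the table above

gpus = {
    "GTX 1060 6GB": 6.0,
    "RTX 3060 12GB": 12.0,
    "RTX 4090 24GB": 24.0,
}

ranked = sorted(gpus.items(), key=lambda item: item[1] - MODEL_GB, reverse=True)
for name, vram in ranked:
    headroom = vram - MODEL_GB
    # A small positive headroom is a tight fit: the KV cache still needs room.
    print(f"{name}: {headroom:+.1f} GB headroom")
```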

Recommended multi-GPU rigs

To run GLM-4.1V 9B Thinking at a higher-precision quantization or with a longer context, a multi-GPU rig gives more headroom; the vLLM sketch below shows a two-GPU launch.

Recommended rig

2× GTX 1650

Runs GLM-4.1V 9B Thinking on Ubuntu with vLLM and Open WebUI, with the model already downloaded.

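A minimal sketch of loading the model across two GPUs with vLLM's offline Python API, assuming a vLLM version that supports this model family; the repository id, context length and memory fraction are assumptions to adjust for your own setup.

```python
from vllm import LLM, SamplingParams

# Assumed Hugging Face repository id; replace with the repo you actually downloaded.
llm = LLM(
    model="zai-org/GLM-4.1V-9B-Thinking",
    tensor_parallel_size=2,        # split the weights across the two cards
    max_model_len=8192,            # well below 128k to keep the KV cache small
    gpu_memory_utilization=0.90,   # fraction of each card's VRAM vLLM may claim
)

sampling = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(["Solve: what is 17 * 24? Think step by step."], sampling)
print(outputs[0].outputs[0].text)
```

For Open WebUI, the usual approach is to start vLLM's OpenAI-compatible server (`vllm serve <model>`) instead of the offline API above and point Open WebUI at that endpoint.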

Similar models

VRAM estimates are computed as parameters × bits / 8, plus a margin for the KV cache and runtime overhead. Real usage varies with the engine, context length and batch size.
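A minimal sketch of that estimate in Python; the effective bits-per-weight values and the 1 GB margin are assumptions chosen to land near the sizes listed above, not exact figures.

```python
# Rough VRAM estimate: parameters × bits / 8, plus a fixed margin.
def estimate_vram_gb(params_b: float, bits_per_weight: float, margin_gb: float = 1.0) -> float:
    weights_gb = params_b * bits_per_weight / 8  # size of the weights alone
    return weights_gb + margin_gb                # margin for KV cache and runtime buffers

# Approximate effective bits per weight for each quantization (assumed values).
for name, bits in [("Q4_K_M", 4.5), ("Q5_K_M", 5.5), ("Q8", 8.5), ("FP16", 16.0)]:
    print(f"{name}: ~{estimate_vram_gb(9, bits):.1f} GB")
```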
sync: 2026-05-12