DeepSeek · 16B params · 2.4B active (MoE) · 33k context

DeepSeek V2 Lite locally

DeepSeek V2 Lite is an open-weight LLM from the DeepSeek family with 16B total parameters (2.4B active per token, MoE). Main uses: chat, RAG and general assistance. Detected minimum hardware: GTX 1080 Ti (11 GB).

Technical facts
Parameters: 16B
Max context: 33k
Q4_K_M: 10.1 GB
Q5_K_M: 12.3 GB
Q8: 17.9 GB
FP16: 35.8 GB
Family: DeepSeek
Last sync: 2026-05-12

Available quantizations

Q4_K_M (10.1 GB): acceptable quality; a good compromise when VRAM is limited.

Q5_K_M (12.3 GB): good quality; the sweet spot for size and precision.

Q8 (17.9 GB): near-FP16 quality; comfortable for production.

FP16 (35.8 GB): reference precision; maximum quality at double the VRAM.

Compatible GPUs

GPUs that can run DeepSeek V2 Lite on a single card, ranked by VRAM headroom.
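The "ranked by VRAM headroom" idea can be sketched in a few lines. The Q4_K_M footprint (10.1 GB) and the GTX 1080 Ti minimum come from this page; the GPU list and the ranking logic are illustrative assumptions, not the site's actual data.

```python
# Rank single GPUs by how much VRAM is left after loading the model.
MODEL_Q4_GB = 10.1  # Q4_K_M size from the table above

# Illustrative VRAM figures (GB); only the 1080 Ti is stated on this page.
gpus = {
    "GTX 1080 Ti": 11,  # detected minimum for this model
    "RTX 4080": 16,
    "RTX 3090": 24,
}

# Keep only cards the model fits on, then sort by remaining VRAM.
ranked = sorted(
    ((name, vram - MODEL_Q4_GB) for name, vram in gpus.items() if vram >= MODEL_Q4_GB),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, headroom in ranked:
    print(f"{name}: {headroom:.1f} GB headroom")
```

With these figures the RTX 3090 tops the list (13.9 GB free) and the GTX 1080 Ti barely qualifies (0.9 GB free), which is why it appears as the detected minimum.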

Recommended multi-GPU rigs

For DeepSeek V2 Lite at a higher-precision quantization or with a longer context, a multi-GPU rig gives more headroom.

Recommended rig

2× GTX 1060 6GB

Ships with Ubuntu, vLLM, Open WebUI and DeepSeek V2 Lite already downloaded.

Configure

Similar models

VRAM estimates: parameters × bits / 8, plus a margin. Real usage and performance vary by engine, context length and batch size.
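The footnote's rule of thumb can be written out as a small sketch. The 10% margin is an assumed allowance for KV cache and runtime overhead, not a figure from this page.

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float, margin: float = 0.10) -> float:
    """Rule of thumb: parameters x bits / 8, plus a margin.

    params_b is in billions, so the raw result is already in GB.
    The 10% margin is an assumption (KV cache, runtime overhead);
    real usage varies by engine, context length and batch size.
    """
    raw = params_b * bits_per_weight / 8
    return raw * (1 + margin)

# 16B params at FP16: 16 * 16 / 8 = 32 GB raw, ~35.2 GB with margin,
# close to the 35.8 GB figure in the table above.
print(round(estimate_vram_gb(16, 16), 1))
```

The same function with ~5 bits per weight lands near the Q5_K_M figure, which is how the quantization sizes in the table scale with bit width.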