
Running CodeLlama 34B locally

CodeLlama 34B is an open-weight LLM from the CodeLlama family with 34 billion parameters. Its main use is code generation and developer agents. The minimum detected hardware for running it on a single card is a TITAN RTX (24 GB).

Technical facts
Parameters: 34B
Max context: 16k tokens
Q4_K_M: 21.4 GB
Q5_K_M: 26.1 GB
Q8: 38.0 GB
FP16: 76.0 GB
Family: CodeLlama
Last sync: 2026-05-12

Available quantizations

Q4_K_M (21.4 GB): Acceptable quality. Good compromise when VRAM is limited.

Q5_K_M (26.1 GB): Good quality. Sweet spot between size and precision.

Q8 (38.0 GB): Near-FP16 quality. Comfortable for production.

FP16 (76.0 GB): Reference precision. Maximum quality at double the VRAM.
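A rough way to choose among these files is to take the largest quantization that fits your VRAM budget with some headroom. The sketch below hardcodes the sizes listed above and an assumed 10% headroom margin; both the margin and the example VRAM values are illustrative, not measured requirements.

```python
# Pick the largest CodeLlama 34B quantization that fits a VRAM budget.
# Sizes (GB) are the figures listed on this page; the 10% headroom
# margin is an assumption, not a measured requirement.
QUANT_SIZES_GB = {
    "Q4_K_M": 21.4,
    "Q5_K_M": 26.1,
    "Q8": 38.0,
    "FP16": 76.0,
}

def pick_quantization(vram_gb: float, headroom: float = 0.10) -> str | None:
    """Return the largest quantization whose file fits in vram_gb with headroom."""
    budget = vram_gb * (1.0 - headroom)
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items() if size <= budget]
    return max(fitting)[1] if fitting else None

if __name__ == "__main__":
    for vram in (24, 32, 48, 80):
        print(f"{vram} GB VRAM -> {pick_quantization(vram)}")
```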

Compatible GPUs

GPUs that can run CodeLlama 34B on a single card, ranked by VRAM headroom.
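The ranking is straightforward to reproduce: subtract the quantization's footprint from each card's VRAM and sort by the remainder. The GPU list in this sketch is a small illustrative sample, not the full compatibility table.

```python
# Rank single GPUs by VRAM headroom over the Q4_K_M footprint (21.4 GB).
# The GPU/VRAM pairs are an illustrative sample, not an exhaustive list.
Q4_K_M_GB = 21.4

GPUS_VRAM_GB = {
    "TITAN RTX": 24,
    "RTX 3090": 24,
    "RTX 4090": 24,
    "RTX 5090": 32,
    "RTX A6000": 48,
}

headroom = {gpu: vram - Q4_K_M_GB for gpu, vram in GPUS_VRAM_GB.items() if vram >= Q4_K_M_GB}
for gpu, free in sorted(headroom.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{gpu}: {free:.1f} GB headroom")
```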

Recommended multi-GPU rigs

To run CodeLlama 34B at higher precision (Q8 or FP16) or with a longer context, a multi-GPU rig gives more headroom.
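Since the recommended rig below runs vLLM, a natural way to split the model across two cards is vLLM's tensor parallelism. A minimal sketch, assuming the Hugging Face repo id codellama/CodeLlama-34b-Instruct-hf, two visible GPUs, and enough combined VRAM for the chosen precision:

```python
# Split CodeLlama 34B across two GPUs with vLLM tensor parallelism.
# The model id and sampling settings are assumptions; adjust to your setup.
from vllm import LLM, SamplingParams

llm = LLM(
    model="codellama/CodeLlama-34b-Instruct-hf",  # assumed HF repo id
    tensor_parallel_size=2,                        # one shard per GPU
    max_model_len=16384,                           # the 16k context listed above
)

params = SamplingParams(temperature=0.2, max_tokens=256)
outputs = llm.generate(["# Write a Python function that reverses a linked list\n"], params)
print(outputs[0].outputs[0].text)
```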

Recommended rig

2× GTX 1080 Ti

CodeLlama 34B with Ubuntu, vLLM, Open WebUI and the model already downloaded.

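Once the rig is running, the model can be queried over vLLM's OpenAI-compatible HTTP API, the same endpoint Open WebUI connects to. A minimal sketch, assuming the server listens on localhost:8000 and serves the model under the name codellama-34b; both are assumptions to match to your deployment.

```python
# Query a vLLM server exposing CodeLlama 34B via the OpenAI-compatible API.
# Host, port, and served model name are assumptions; adjust to your deployment.
import json
import urllib.request

payload = {
    "model": "codellama-34b",  # assumed served model name
    "prompt": "### Write a SQL query that lists the 10 newest users\n",
    "max_tokens": 200,
    "temperature": 0.2,
}

req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["text"])
```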

Similar models

VRAM estimates follow parameters × bits / 8, plus a margin for runtime overhead. Real performance varies with the inference engine, context length, and batch size.
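In code, that estimate is simply the parameter count times bits per weight, divided by 8, plus a margin for KV cache and activations. The bits-per-weight values and the 10% margin in this sketch are illustrative assumptions; they land near, not exactly on, the sizes listed above.

```python
# Rough VRAM estimate: parameters * bits / 8, plus a fixed overhead margin.
# Bits-per-weight values and the 10% margin are assumptions for illustration,
# not official figures for these quantization formats.
PARAMS = 34e9
BITS_PER_WEIGHT = {"Q4_K_M": 4.85, "Q5_K_M": 5.7, "Q8": 8.5, "FP16": 16}
MARGIN = 1.10

for name, bits in BITS_PER_WEIGHT.items():
    gb = PARAMS * bits / 8 / 1e9 * MARGIN
    print(f"{name}: ~{gb:.1f} GB")
```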