
Gemma 3 27B locally

Gemma 3 27B is an open-weight LLM from the Gemma family with 27B parameters. Main uses: chat, RAG, and general assistance. Detected minimum hardware: MacBook Pro 14 M3 Pro (18 GB).

Technical facts

Parameters: 27B
Max context: 128k
Q4_K_M: 17.0 GB
Q5_K_M: 20.7 GB
Q8: 30.2 GB
FP16: 60.3 GB
Family: Gemma
Last sync: 2026-05-12

Available quantizations

Q4_K_M (17.0 GB)

Acceptable quality. Good compromise when VRAM is limited.

Q5_K_M (20.7 GB)

Good quality. Sweet spot for size and precision.

Q8 (30.2 GB)

Near-FP16 quality. Comfortable for production.

FP16 (60.3 GB)

Reference precision. Maximum quality, at roughly double the VRAM of Q8.
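A quantization fits when its file size plus context and runtime overhead stays under available VRAM. A minimal sketch of that rule of thumb, using the sizes listed above (the 2 GB overhead figure is an assumption, not from this page):

```python
# File sizes from the quantization list above, in decimal GB.
QUANTS_GB = {"Q4_K_M": 17.0, "Q5_K_M": 20.7, "Q8": 30.2, "FP16": 60.3}

def largest_fitting_quant(vram_gb: float, overhead_gb: float = 2.0):
    """Return the largest quantization of Gemma 3 27B that fits in vram_gb,
    leaving overhead_gb for the KV cache and runtime buffers, or None."""
    fitting = [(size, name) for name, size in QUANTS_GB.items()
               if size + overhead_gb <= vram_gb]
    return max(fitting)[1] if fitting else None

print(largest_fitting_quant(24))  # a 24 GB card -> Q5_K_M
print(largest_fitting_quant(10))  # a 10 GB card -> None (single card)
```

With the assumed 2 GB overhead, a 24 GB card lands on Q5_K_M, and a single 10 GB card cannot hold any listed quantization; hence the multi-GPU section below.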

Compatible GPUs

GPUs that can run Gemma 3 27B on a single card, ranked by VRAM headroom.

Recommended multi-GPU rigs

To run Gemma 3 27B at a higher-precision quantization or with a longer context, a multi-GPU rig gives more headroom.

Recommended rig

2× RTX 3080 10GB

A turnkey setup for Gemma 3 27B: Ubuntu, vLLM, Open WebUI, and the model weights already downloaded.
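Tensor parallelism (as used by vLLM) splits the weights roughly evenly across cards, but each card still needs room for its share of the KV cache and activations. A rough sketch of the per-card load for the rig above (the 15% overhead factor is an assumption, not a measured value):

```python
def per_gpu_load_gb(model_gb: float, n_gpus: int, overhead: float = 0.15) -> float:
    """Approximate per-card memory when weights are split evenly across
    n_gpus, inflated by an assumed overhead for KV cache and activations."""
    return model_gb * (1 + overhead) / n_gpus

# Q4_K_M (17.0 GB) on 2x RTX 3080 10GB:
print(round(per_gpu_load_gb(17.0, 2), 1))  # ~9.8 GB per card, a tight fit
```

Under these assumptions Q4_K_M just fits on two 10 GB cards, which is why the rig targets that quantization rather than Q5_K_M or above.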

Configure

Similar models

VRAM estimates are computed as parameters × bits per weight / 8, plus a margin for KV cache and runtime overhead. Real performance varies by inference engine, context length, and batch size.
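The footnote's formula can be sketched directly. The 10% default margin is an assumption; note also that K-quants store slightly more than their nominal bit width, so effective bits per weight are higher than the name suggests:

```python
def estimate_gb(params_billion: float, bits_per_weight: float,
                margin: float = 0.10) -> float:
    """VRAM estimate per the footnote: parameters x bits/8, plus a margin."""
    weights_gb = params_billion * bits_per_weight / 8  # decimal GB
    return weights_gb * (1 + margin)

print(round(estimate_gb(27, 16, margin=0.0), 1))  # FP16 weights alone: 54.0 GB
```

For comparison, 27B parameters at 16 bits come to 54 GB of bare weights, while the table lists 60.3 GB for FP16, consistent with a margin already being included in the page's figures.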