
RTX 3080 10GB for local AI

The RTX 3080 10GB provides 10 GB of VRAM for local AI inference. In the LocalIA catalog, 153 of 242 models run comfortably on a single card.

VRAM: 10 GB
Category: Consumer
Series: RTX 30
Vendor: NVIDIA

Models that run comfortably

These models fit in 10 GB with room for context and stable inference.

| Model | Family | Quant | Est. VRAM | Fit |
|---|---|---|---|---|
| Llama 2 13B | llama | Q4 | 8.2 / 10 GB | comfortable |
| CodeLlama 13B | codellama | Q4 | 8.2 / 10 GB | comfortable |
| OLMo 2 13B | olmo | Q4 | 8.2 / 10 GB | comfortable |
| Vicuna 13B | vicuna | Q4 | 8.2 / 10 GB | comfortable |
| Mistral Nemo 12B | mistral | Q4 | 7.5 / 10 GB | comfortable |
| Gemma 3 12B | gemma | Q4 | 7.5 / 10 GB | comfortable |
| StableLM 2 12B | stable | Q4 | 7.5 / 10 GB | comfortable |
| Solar 10.7B | solar | Q5 | 8.2 / 10 GB | comfortable |
| Falcon 3 10B | falcon | Q5 | 7.7 / 10 GB | comfortable |
| Gemma 2 9B | gemma | Q5 | 6.9 / 10 GB | comfortable |
| Yi 1.5 9B | yi | Q5 | 6.9 / 10 GB | comfortable |
| Qwen 3.5 9B | qwen | Q5 | 6.9 / 10 GB | comfortable |
| GLM-4 9B | glm | Q5 | 6.9 / 10 GB | comfortable |
| GLM-4.7 Flash | glm | Q5 | 6.9 / 10 GB | comfortable |
| GLM-4.1V 9B Thinking | glm | Q5 | 6.9 / 10 GB | comfortable |
| NVIDIA Nemotron Nano 9B | nemotron | Q5 | 6.9 / 10 GB | comfortable |
| Gemma 2 9B IT | gemma | Q5 | 6.9 / 10 GB | comfortable |
| Llama 3 8B | llama | Q5 | 6.1 / 10 GB | comfortable |
| Llama 3.1 8B | llama | Q5 | 6.1 / 10 GB | comfortable |
| Ministral 8B | mistral | Q5 | 6.1 / 10 GB | comfortable |
| Qwen 3 8B | qwen | Q5 | 6.1 / 10 GB | comfortable |
| DeepSeek R1 Distill 8B | deepseek | Q5 | 6.1 / 10 GB | comfortable |
| Aya 23 8B | aya | Q5 | 6.1 / 10 GB | comfortable |
| Granite 3 8B | granite | Q5 | 6.1 / 10 GB | comfortable |
| Hermes 3 8B | hermes | Q5 | 6.1 / 10 GB | comfortable |
| DeepSeek R1 Distill Llama 8B | deepseek | Q5 | 6.1 / 10 GB | comfortable |
| MiniCPM 4.1 8B | minicpm | Q5 | 6.1 / 10 GB | comfortable |
| Qwen3 8B | qwen | Q5 | 6.1 / 10 GB | comfortable |
| Llama 3.1 8B Instruct | llama | Q5 | 6.1 / 10 GB | comfortable |
| Meta Llama 3 8B | llama | Q5 | 6.1 / 10 GB | comfortable |
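The size figures above follow a simple rule of thumb: quantized weights cost roughly 5 bits per parameter at Q4 and roughly 6 bits at Q5 (K-quant formats plus file overhead). A minimal sketch of that estimate — the bits-per-weight values are assumptions chosen to approximate the catalog's figures, not official numbers:

```python
# Rough VRAM estimate for a quantized model, in GB.
# BITS_PER_WEIGHT values are assumptions that approximate the
# catalog's figures (llama.cpp-style K-quants plus overhead).
BITS_PER_WEIGHT = {"Q4": 5.0, "Q5": 6.1}

def estimate_gb(params_billions: float, quant: str) -> float:
    """Approximate VRAM footprint of the quantized weights."""
    return params_billions * BITS_PER_WEIGHT[quant] / 8

def fit(params_billions: float, quant: str, vram_gb: float = 10.0) -> str:
    """Classify fit the way the catalog appears to: <=85% of VRAM is comfortable."""
    ratio = estimate_gb(params_billions, quant) / vram_gb
    if ratio > 1.0:
        return "does not fit"
    return "comfortable" if ratio <= 0.85 else "tight"
```

For example, a 13B model at Q4 comes out near 8.1 GB, matching the table's 8.2 GB within rounding, and a 14B at Q4 crosses the 85% line into "tight".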

Tight models

These models barely fit. They can run, but context length and generation speed will be limited.

| Model | Family | Quant | Est. VRAM | Fit |
|---|---|---|---|---|
| StarCoder 2 15B | starcoder | Q4 | 9.4 / 10 GB | tight |
| Phi-4 Reasoning Vision 15B | phi | Q4 | 9.4 / 10 GB | tight |
| Qwen 2.5 14B | qwen | Q4 | 8.8 / 10 GB | tight |
| Qwen 2.5 Coder 14B | qwen | Q4 | 8.8 / 10 GB | tight |
| Qwen 3 14B | qwen | Q4 | 8.8 / 10 GB | tight |
| DeepSeek R1 Distill 14B | deepseek | Q4 | 8.8 / 10 GB | tight |
| Phi-3 Medium 14B | phi | Q4 | 8.8 / 10 GB | tight |
| Phi-4 14B | phi | Q4 | 8.8 / 10 GB | tight |
| GLM-4.5 Air | glm | Q4 | 8.8 / 10 GB | tight |
| Qwen2.5 14B Instruct | qwen | Q4 | 8.8 / 10 GB | tight |
| Qwen3 14B | qwen | Q4 | 8.8 / 10 GB | tight |
| Qwen2.5 Coder 14B Instruct | qwen | Q4 | 8.8 / 10 GB | tight |
| DeepSeek R1 Distill Qwen 14B | qwen | Q4 | 8.8 / 10 GB | tight |

Unlocked in a 2x rig

With two cards in parallel (20 GB total), larger models become reachable.

| Model | Family | Quant | Est. VRAM | Fit |
|---|---|---|---|---|
| Gemma 4 31B | gemma | Q4 | 19.5 / 20 GB | tight |
| Qwen 3 30B A3B | qwen | Q4 | 18.9 / 20 GB | tight |
| MPT 30B | mpt | Q4 | 18.9 / 20 GB | tight |
| Qwen3 Coder 30B A3B Instruct | qwen | Q4 | 18.9 / 20 GB | tight |
| Qwen3 30B A3B | qwen | Q4 | 18.9 / 20 GB | tight |
| Qwen3 30B A3B Instruct 2507 | qwen | Q4 | 18.9 / 20 GB | tight |
| Gemma 2 27B | gemma | Q4 | 17.0 / 20 GB | comfortable |
| Gemma 3 27B | gemma | Q4 | 17.0 / 20 GB | comfortable |
| Gemma 4 26B A4B | gemma | Q4 | 16.3 / 20 GB | comfortable |
| Mistral Small 3 24B | mistral | Q4 | 15.1 / 20 GB | comfortable |
| Mistral Small 3.1 24B | mistral | Q4 | 15.1 / 20 GB | comfortable |
| Mistral Small 3.2 24B | mistral | Q4 | 15.1 / 20 GB | comfortable |
| Devstral Small 2 24B | devstral | Q4 | 15.1 / 20 GB | comfortable |
| Mistral Small 22B | mistral | Q5 | 16.9 / 20 GB | comfortable |
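Runtimes such as llama.cpp pool multiple cards by assigning each one a share of the model's layers (e.g. via a tensor-split ratio). A minimal sketch of proportional layer assignment — the function and its split logic are illustrative, not any runtime's actual API:

```python
def split_layers(n_layers: int, vram_gb: list[float]) -> list[int]:
    """Assign contiguous layer counts proportional to each card's VRAM."""
    total = sum(vram_gb)
    counts = [int(n_layers * v / total) for v in vram_gb]
    counts[-1] += n_layers - sum(counts)  # remainder goes to the last card
    return counts

# Two identical 10 GB RTX 3080s: a 64-layer model splits evenly.
print(split_layers(64, [10.0, 10.0]))  # [32, 32]
# A mixed rig would tilt the split toward the larger card.
print(split_layers(64, [10.0, 24.0]))  # [18, 46]
```

With equal cards the split is even, so each 3080 carries about half the weights plus its share of the KV cache.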

Unlocked in a 4x rig

A server-style configuration (40 GB total) reaches the largest open-weight models in the catalog.

| Model | Family | Quant | Est. VRAM | Fit |
|---|---|---|---|---|
| Mixtral 8x7B | mistral | Q4 | 29.5 / 40 GB | comfortable |
| Falcon 40B | falcon | Q5 | 30.7 / 40 GB | comfortable |
| Command R 35B | command | Q5 | 26.9 / 40 GB | comfortable |
| Aya 23 35B | aya | Q5 | 26.9 / 40 GB | comfortable |
| CodeLlama 34B | codellama | Q5 | 26.1 / 40 GB | comfortable |
| Yi 1.5 34B | yi | Q5 | 26.1 / 40 GB | comfortable |
| Dolphin 2.9.1 Yi 1.5 34B | yi | Q5 | 26.1 / 40 GB | comfortable |
| Qwen 2.5 32B | qwen | Q5 | 24.6 / 40 GB | comfortable |
| Qwen 2.5 Coder 32B | qwen | Q5 | 24.6 / 40 GB | comfortable |
| Qwen 3 32B | qwen | Q5 | 24.6 / 40 GB | comfortable |


VRAM estimates updated 2026-05-12.