
RTX 4070 for local AI

The RTX 4070 provides 12 GB of VRAM for local AI. Of the 242 models in the LocalIA catalog, 168 run comfortably on a single card.

VRAM: 12 GB
Category: Consumer
Series: RTX 40
Vendor: NVIDIA

Models that run comfortably

These models fit in 12 GB with room for context and stable inference.

Model                          Family      Quant   Est. VRAM / 12 GB   Fit
DeepSeek V2 Lite               deepseek    Q4      10.1 GB             comfortable
DeepSeek Coder V2 Lite         deepseek    Q4      10.1 GB             comfortable
StarCoder 2 15B                starcoder   Q4      9.4 GB              comfortable
Phi-4 Reasoning Vision 15B     phi         Q4      9.4 GB              comfortable
Qwen 2.5 14B                   qwen        Q4      8.8 GB              comfortable
Qwen 2.5 Coder 14B             qwen        Q4      8.8 GB              comfortable
Qwen 3 14B                     qwen        Q4      8.8 GB              comfortable
DeepSeek R1 Distill 14B        deepseek    Q4      8.8 GB              comfortable
Phi-3 Medium 14B               phi         Q4      8.8 GB              comfortable
Phi-4 14B                      phi         Q4      8.8 GB              comfortable
GLM-4.5 Air                    glm         Q4      8.8 GB              comfortable
Qwen2.5 14B Instruct           qwen        Q4      8.8 GB              comfortable
Qwen3 14B                      qwen        Q4      8.8 GB              comfortable
Qwen2.5 Coder 14B Instruct     qwen        Q4      8.8 GB              comfortable
DeepSeek R1 Distill Qwen 14B   qwen        Q4      8.8 GB              comfortable
Llama 2 13B                    llama       Q5      10.0 GB             comfortable
CodeLlama 13B                  codellama   Q5      10.0 GB             comfortable
OLMo 2 13B                     olmo        Q5      10.0 GB             comfortable
Vicuna 13B                     vicuna      Q5      10.0 GB             comfortable
Mistral Nemo 12B               mistral     Q5      9.2 GB              comfortable
Gemma 3 12B                    gemma       Q5      9.2 GB              comfortable
StableLM 2 12B                 stable      Q5      9.2 GB              comfortable
Solar 10.7B                    solar       Q5      8.2 GB              comfortable
Falcon 3 10B                   falcon      Q5      7.7 GB              comfortable
Gemma 2 9B                     gemma       Q8      10.1 GB             comfortable
Yi 1.5 9B                      yi          Q8      10.1 GB             comfortable
Qwen 3.5 9B                    qwen        Q8      10.1 GB             comfortable
GLM-4 9B                       glm         Q8      10.1 GB             comfortable
GLM-4.7 Flash                  glm         Q8      10.1 GB             comfortable
GLM-4.1V 9B Thinking           glm         Q8      10.1 GB             comfortable
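The estimates above follow the usual back-of-envelope rule: quantized weights take roughly parameters × bits-per-weight / 8 bytes, plus some headroom for the KV cache and runtime buffers. A minimal sketch, assuming typical effective bit widths for common GGUF quants and an assumed ~10% overhead (the catalog's exact methodology is not stated):

```python
def estimate_vram_gb(params_billions: float, bits_per_weight: float,
                     overhead: float = 1.1) -> float:
    """Rough VRAM footprint of a quantized model.

    Weights take params * bits / 8 bytes; `overhead` (an assumed ~10%)
    stands in for the KV cache and runtime buffers at modest context.
    """
    weight_gb = params_billions * bits_per_weight / 8
    return round(weight_gb * overhead, 1)

# Approximate effective bits per weight for common GGUF quants:
# Q4_K_M ~ 4.8, Q5_K_M ~ 5.7, Q8_0 ~ 8.5
print(estimate_vram_gb(14, 4.8))   # a 14B model at Q4 lands around 9 GB
print(estimate_vram_gb(12, 5.7))   # a 12B model at Q5 is similar
```

The numbers land close to, but not exactly on, the catalog's figures; the catalog likely uses measured sizes per quant file rather than a formula.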

Unlocked in a 2x rig

With two cards (24 GB of combined VRAM), larger models come within reach.

Model                      Family      Quant   Est. VRAM / 24 GB   Fit
Command R 35B              command     Q4      22.0 GB             tight
Aya 23 35B                 aya         Q4      22.0 GB             tight
CodeLlama 34B              codellama   Q4      21.4 GB             tight
Yi 1.5 34B                 yi          Q4      21.4 GB             tight
Dolphin 2.9.1 Yi 1.5 34B   yi          Q4      21.4 GB             tight
Qwen 2.5 32B               qwen        Q4      20.1 GB             comfortable
Qwen 2.5 Coder 32B         qwen        Q4      20.1 GB             comfortable
Qwen 3 32B                 qwen        Q4      20.1 GB             comfortable
QwQ 32B                    qwq         Q4      20.1 GB             comfortable
DeepSeek R1 Distill 32B    deepseek    Q4      20.1 GB             comfortable
Qwen 2.5 VL 32B            qwen        Q4      20.1 GB             comfortable
Granite 4 H-Small 32B-A9B  granite     Q4      20.1 GB             comfortable
GLM-4.6                    glm         Q4      20.1 GB             comfortable
GLM-4.7                    glm         Q4      20.1 GB             comfortable
GLM-5                      glm         Q4      20.1 GB             comfortable
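Note that two cards do not quite give you the full 24 GB: each GPU loses a slice to the CUDA context, display output, and fragmentation. A small sketch of the fit check, with an assumed per-card reserve (the exact reserve varies by driver and desktop environment):

```python
def fits_on_rig(model_gb: float, cards: int, vram_per_card_gb: float = 12.0,
                reserve_gb: float = 0.6) -> bool:
    """Check whether a model's estimated footprint fits across a rig.

    Layer splitting lets each card hold a slice of the weights, but a
    per-card reserve (assumed ~0.6 GB here) is lost to the CUDA
    context and other runtime overhead.
    """
    usable = cards * (vram_per_card_gb - reserve_gb)
    return model_gb <= usable

print(fits_on_rig(20.1, 1))  # a Q4 32B on one 4070 -> False
print(fits_on_rig(20.1, 2))  # the same model on a 2x rig -> True
```

This is why the 35B-class entries above are flagged "tight": at 22 GB they leave only a sliver for the KV cache even across two cards.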

Unlocked in a 4x rig

A server-style configuration (48 GB total) for the largest open-weight models.

Model                     Family      Quant   Est. VRAM / 48 GB   Fit
Qwen 2.5 72B              qwen        Q4      45.3 GB             tight
Qwen 2.5 VL 72B           qwen        Q4      45.3 GB             tight
Qwen2.5 72B Instruct      qwen        Q4      45.3 GB             tight
Llama 2 70B               llama       Q4      44.0 GB             tight
Llama 3 70B               llama       Q4      44.0 GB             tight
Llama 3.1 70B             llama       Q4      44.0 GB             tight
Llama 3.3 70B             llama       Q4      44.0 GB             tight
CodeLlama 70B             codellama   Q4      44.0 GB             tight
DeepSeek R1 Distill 70B   deepseek    Q4      44.0 GB             tight
Hermes 3 70B              hermes      Q4      44.0 GB             tight
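Spreading a 70B-class model across four cards is handled by the inference runtime rather than the driver. As one hedged example, llama.cpp's server can split layers across GPUs with `--tensor-split`; the model filename below is a placeholder, and the context size is deliberately small because the KV cache competes with weights for the last few gigabytes:

```shell
# Sketch: serving a ~44 GB Q4 model across four 12 GB cards with llama.cpp.
# -ngl 99 offloads all layers to GPU; --tensor-split 1,1,1,1 spreads the
# weights evenly over the four cards; a modest -c keeps the KV cache small.
# The .gguf path is a placeholder for whichever quant file you downloaded.
llama-server -m ./llama-3.3-70b-q4_k_m.gguf -ngl 99 --tensor-split 1,1,1,1 -c 4096
```

Uneven splits (e.g. `--tensor-split 3,3,3,2`) are useful when one card also drives a display and has less free VRAM than the others.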


VRAM estimates updated 2026-05-12.