
RTX 4070 Ti Super for local AI

The RTX 4070 Ti Super provides 16 GB of VRAM, enough for a wide range of local AI models. In the LocalIA catalog, 170 of 242 models run comfortably on a single card.

VRAM: 16 GB
Category: Consumer
Series: RTX 40
Vendor: NVIDIA

Models that run comfortably

These models fit in 16 GB with room for context and stable inference.

Model | Family | Quant | Est. VRAM (of 16 GB)
Reka Flash 3 21B | reka | Q4 | 13.2 GB
InternLM 2.5 20B | internlm | Q4 | 12.6 GB
DeepSeek V2 Lite | deepseek | Q5 | 12.3 GB
DeepSeek Coder V2 Lite | deepseek | Q5 | 12.3 GB
StarCoder 2 15B | starcoder | Q5 | 11.5 GB
Phi-4 Reasoning Vision 15B | phi | Q5 | 11.5 GB
Qwen 2.5 14B | qwen | Q5 | 10.8 GB
Qwen 2.5 Coder 14B | qwen | Q5 | 10.8 GB
Qwen 3 14B | qwen | Q5 | 10.8 GB
DeepSeek R1 Distill 14B | deepseek | Q5 | 10.8 GB
Phi-3 Medium 14B | phi | Q5 | 10.8 GB
Phi-4 14B | phi | Q5 | 10.8 GB
GLM-4.5 Air | glm | Q5 | 10.8 GB
Qwen2.5 14B Instruct | qwen | Q5 | 10.8 GB
Qwen3 14B | qwen | Q5 | 10.8 GB
Qwen2.5 Coder 14B Instruct | qwen | Q5 | 10.8 GB
DeepSeek R1 Distill Qwen 14B | qwen | Q5 | 10.8 GB
Llama 2 13B | llama | Q5 | 10.0 GB
CodeLlama 13B | codellama | Q5 | 10.0 GB
OLMo 2 13B | olmo | Q5 | 10.0 GB
Vicuna 13B | vicuna | Q5 | 10.0 GB
Mistral Nemo 12B | mistral | Q8 | 13.4 GB
Gemma 3 12B | gemma | Q8 | 13.4 GB
StableLM 2 12B | stable | Q8 | 13.4 GB
Solar 10.7B | solar | Q8 | 12.0 GB
Falcon 3 10B | falcon | Q8 | 11.2 GB
Gemma 2 9B | gemma | Q8 | 10.1 GB
Yi 1.5 9B | yi | Q8 | 10.1 GB
Qwen 3.5 9B | qwen | Q8 | 10.1 GB
GLM-4 9B | glm | Q8 | 10.1 GB
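The per-model figures above track a simple rule of thumb: parameter count times bits per weight, plus roughly 12% overhead for the KV cache and runtime buffers. A minimal sketch of that estimate — the bits-per-weight values and the 12% factor are assumptions inferred from the listings, not the catalog's documented method:

```python
# Rough VRAM estimate for a quantized model: weight bytes plus ~12% overhead.
# Bits-per-weight values approximate common GGUF quant levels (assumed here,
# not taken from the catalog's exact formula).
BITS_PER_WEIGHT = {"Q4": 4.5, "Q5": 5.5, "Q8": 8.0}
OVERHEAD = 1.12  # assumed headroom for KV cache and runtime buffers

def estimate_vram_gb(params_billion: float, quant: str) -> float:
    """Estimated VRAM footprint in GB for a model with params_billion parameters."""
    bits = BITS_PER_WEIGHT[quant]
    return round(params_billion * bits / 8 * OVERHEAD, 1)
```

With these assumed constants the sketch reproduces the table closely: a 14B model at Q5 comes out at about 10.8 GB, and a 12B model at Q8 at about 13.4 GB.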

Tight models

These models barely fit: they can run, but with limited context and reduced speed.

Model | Family | Quant | Est. VRAM (of 16 GB)
Mistral Small 3 24B | mistral | Q4 | 15.1 GB
Mistral Small 3.1 24B | mistral | Q4 | 15.1 GB
Mistral Small 3.2 24B | mistral | Q4 | 15.1 GB
Devstral Small 2 24B | devstral | Q4 | 15.1 GB
Mistral Small 22B | mistral | Q4 | 13.8 GB
Codestral 22B | codestral | Q4 | 13.8 GB
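The comfortable/tight split follows a simple ratio: entries at or below roughly 85% of available VRAM are labeled comfortable, anything above that tight. A sketch of that rule — the 85% threshold is an assumption inferred from the listings, not documented by the catalog:

```python
def classify_fit(est_gb: float, vram_gb: float, threshold: float = 0.85) -> str:
    """Label a model's fit for a given VRAM budget.

    The 0.85 comfort threshold is inferred from the catalog's listings
    (assumed, not official); tune it to your tolerance for tight fits.
    """
    if est_gb > vram_gb:
        return "does not fit"
    return "comfortable" if est_gb / vram_gb <= threshold else "tight"
```

For example, 13.4 GB on a 16 GB card sits at 84% and reads as comfortable, while 13.8 GB crosses 86% and reads as tight — matching how the tables above split the 12B Q8 and 22B Q4 entries.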

Unlocked in a 2x rig

With two cards in parallel (32 GB total), larger models become reachable.

Model | Family | Quant | Est. VRAM (of 32 GB) | Fit
Mixtral 8x7B | mistral | Q4 | 29.5 GB | tight
Falcon 40B | falcon | Q4 | 25.1 GB | comfortable
Command R 35B | command | Q5 | 26.9 GB | comfortable
Aya 23 35B | aya | Q5 | 26.9 GB | comfortable
CodeLlama 34B | codellama | Q5 | 26.1 GB | comfortable
Yi 1.5 34B | yi | Q5 | 26.1 GB | comfortable
Dolphin 2.9.1 Yi 1.5 34B | yi | Q5 | 26.1 GB | comfortable
Qwen 2.5 32B | qwen | Q5 | 24.6 GB | comfortable
Qwen 2.5 Coder 32B | qwen | Q5 | 24.6 GB | comfortable
Qwen 3 32B | qwen | Q5 | 24.6 GB | comfortable
QwQ 32B | qwq | Q5 | 24.6 GB | comfortable
DeepSeek R1 Distill 32B | deepseek | Q5 | 24.6 GB | comfortable
Qwen 2.5 VL 32B | qwen | Q5 | 24.6 GB | comfortable
Granite 4 H-Small 32B-A9B | granite | Q5 | 24.6 GB | comfortable
GLM-4.6 | glm | Q5 | 24.6 GB | comfortable

Unlocked in a 4x rig

A server-style configuration (64 GB total) unlocks the largest open-weight models.

Model | Family | Quant | Est. VRAM (of 64 GB) | Fit
Qwen3 Next 80B A3B Instruct | qwen | Q4 | 50.3 GB | comfortable
Qwen 2.5 72B | qwen | Q4 | 45.3 GB | comfortable
Qwen 2.5 VL 72B | qwen | Q4 | 45.3 GB | comfortable
Qwen2.5 72B Instruct | qwen | Q4 | 45.3 GB | comfortable
Llama 2 70B | llama | Q5 | 53.8 GB | comfortable
Llama 3 70B | llama | Q5 | 53.8 GB | comfortable
Llama 3.1 70B | llama | Q5 | 53.8 GB | comfortable
Llama 3.3 70B | llama | Q5 | 53.8 GB | comfortable
CodeLlama 70B | codellama | Q5 | 53.8 GB | comfortable
DeepSeek R1 Distill 70B | deepseek | Q5 | 53.8 GB | comfortable
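The pattern across rig sizes — Q8 for the smallest models, falling back to Q5 and then Q4 as parameter counts grow — can be reproduced by picking the highest quant level whose estimated footprint stays under about 85% of available VRAM. A self-contained sketch; the bits-per-weight values, the 12% overhead, and the 85% threshold are all assumptions inferred from the listings, not the catalog's documented method:

```python
# Pick the highest quant level that still fits comfortably in a VRAM budget.
# Constants are assumptions inferred from the listings, not official values.
BITS_PER_WEIGHT = {"Q8": 8.0, "Q5": 5.5, "Q4": 4.5}  # ordered highest first

def best_quant(params_billion, vram_gb, overhead=1.12, threshold=0.85):
    """Return (quant, est_gb) for the largest comfortable quant, else None."""
    for quant, bits in BITS_PER_WEIGHT.items():
        est = params_billion * bits / 8 * overhead
        if est / vram_gb <= threshold:
            return quant, round(est, 1)
    return None  # nothing fits comfortably at this budget
```

Under these assumptions the heuristic matches the tables: a 14B model on 16 GB lands at Q5, a 12B model at Q8, and a 72B model on a 64 GB rig drops to Q4 because Q5 would cross the comfort threshold.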


VRAM estimates updated 2026-05-12.