
GTX 1650 for local AI

The GTX 1650 provides 4 GB of VRAM for local AI inference. Of the 242 models in the LocalIA catalog, 76 run comfortably on a single card.

VRAM: 4 GB
Category: Consumer
Series: GTX 16
Vendor: NVIDIA

Models that run comfortably

These models fit in 4 GB with room for context and stable inference.

Model | Family | Size | Fit | Quant / Budget
Qwen 3 4B | qwen | 3.1 GB | comfortable | Q5 / 4 GB
Gemma 3 4B | gemma | 3.1 GB | comfortable | Q5 / 4 GB
Nemotron Mini 4B | nemotron | 3.1 GB | comfortable | Q5 / 4 GB
Gemma 4 E4B (Efficient) | gemma | 3.1 GB | comfortable | Q5 / 4 GB
Qwen3 4B Instruct 2507 | qwen | 3.1 GB | comfortable | Q5 / 4 GB
Qwen3 4B | qwen | 3.1 GB | comfortable | Q5 / 4 GB
Qwen3 4B Base | qwen | 3.1 GB | comfortable | Q5 / 4 GB
Qwen3 4B Thinking 2507 | qwen | 3.1 GB | comfortable | Q5 / 4 GB
Phi-3 Mini 3.8B | phi | 2.9 GB | comfortable | Q5 / 4 GB
Phi-3.5 Mini 3.8B | phi | 2.9 GB | comfortable | Q5 / 4 GB
Phi-4 Mini 3.8B | phi | 2.9 GB | comfortable | Q5 / 4 GB
Phi-4 Mini Instruct 3.8B | phi | 2.9 GB | comfortable | Q5 / 4 GB
Phi Tiny MoE 3.8B | phi | 2.9 GB | comfortable | Q5 / 4 GB
Granite 3 3B A800M | granite | 2.5 GB | comfortable | Q5 / 4 GB
Llama 3.2 3B | llama | 3.4 GB | comfortable | Q8 / 4 GB
Ministral 3B | mistral | 3.4 GB | comfortable | Q8 / 4 GB
Qwen 2.5 3B | qwen | 3.4 GB | comfortable | Q8 / 4 GB
Falcon 3 3B | falcon | 3.4 GB | comfortable | Q8 / 4 GB
StarCoder 2 3B | starcoder | 3.4 GB | comfortable | Q8 / 4 GB
Qwen 2.5 VL 3B | qwen | 3.4 GB | comfortable | Q8 / 4 GB
SmolLM 3 3B | smollm | 3.4 GB | comfortable | Q8 / 4 GB
Granite 4 Micro 3B | granite | 3.4 GB | comfortable | Q8 / 4 GB
Qwen2.5 3B Instruct | qwen | 3.4 GB | comfortable | Q8 / 4 GB
Llama 3.2 3B Instruct | llama | 3.4 GB | comfortable | Q8 / 4 GB
Qwen2.5 3B | qwen | 3.4 GB | comfortable | Q8 / 4 GB
Qwen2.5 Coder 3B Instruct | qwen | 3.4 GB | comfortable | Q8 / 4 GB
Qwen2.5 Coder 3B | qwen | 3.4 GB | comfortable | Q8 / 4 GB
Gemma 2 2B | gemma | 2.2 GB | comfortable | Q8 / 4 GB
CodeGemma 2B | gemma | 2.2 GB | comfortable | Q8 / 4 GB
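The fit labels above can be approximated with a back-of-the-envelope VRAM estimate. This is a hedged sketch: the bits-per-weight values and the 20% headroom threshold are illustrative assumptions, not the exact formula the LocalIA catalog uses.

```python
# Sketch: estimate whether a quantized model fits a VRAM budget.
# Bits-per-weight values approximate typical GGUF quant averages
# (assumptions, not LocalIA's exact numbers).
BITS_PER_WEIGHT = {"Q4": 4.5, "Q5": 5.5, "Q8": 8.5}

def est_size_gb(params_b: float, quant: str) -> float:
    """Rough VRAM footprint of the quantized weights, in GB."""
    return params_b * BITS_PER_WEIGHT[quant] / 8

def classify_fit(params_b: float, quant: str, vram_gb: float) -> str:
    """'comfortable' leaves ~20% headroom for KV cache and context;
    'tight' fits, but with little room for either."""
    size = est_size_gb(params_b, quant)
    if size <= 0.8 * vram_gb:
        return "comfortable"
    if size <= vram_gb:
        return "tight"
    return "does not fit"

print(classify_fit(4.0, "Q5", 4.0))   # 4B at Q5 on 4 GB -> comfortable
print(classify_fit(6.0, "Q4", 4.0))   # 6B at Q4 on 4 GB -> tight
```

Under these assumptions the estimates line up with the table: a 4B model at Q5 (~2.8 GB) sits comfortably in 4 GB, while a 6B model at Q4 (~3.4 GB) only just squeezes in.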

Tight models

These models barely fit. They can run, but context and speed will be limited.

Model | Family | Size | Fit | Quant / Budget
Yi 1.5 6B | yi | 3.8 GB | tight | Q4 / 4 GB
Phi-4 Multimodal 5.6B | phi | 3.5 GB | tight | Q4 / 4 GB

Unlocked in a 2x rig

With two cards in parallel (8 GB total), larger models become reachable.

Model | Family | Size | Fit | Quant / Budget
Mistral Nemo 12B | mistral | 7.5 GB | tight | Q4 / 8 GB
Gemma 3 12B | gemma | 7.5 GB | tight | Q4 / 8 GB
StableLM 2 12B | stable | 7.5 GB | tight | Q4 / 8 GB
Solar 10.7B | solar | 6.7 GB | comfortable | Q4 / 8 GB
Falcon 3 10B | falcon | 6.3 GB | comfortable | Q4 / 8 GB
Gemma 2 9B | gemma | 5.7 GB | comfortable | Q4 / 8 GB
Yi 1.5 9B | yi | 5.7 GB | comfortable | Q4 / 8 GB
Qwen 3.5 9B | qwen | 5.7 GB | comfortable | Q4 / 8 GB
GLM-4 9B | glm | 5.7 GB | comfortable | Q4 / 8 GB
GLM-4.7 Flash | glm | 5.7 GB | comfortable | Q4 / 8 GB
GLM-4.1V 9B Thinking | glm | 5.7 GB | comfortable | Q4 / 8 GB
NVIDIA Nemotron Nano 9B | nemotron | 5.7 GB | comfortable | Q4 / 8 GB
Gemma 2 9B IT | gemma | 5.7 GB | comfortable | Q4 / 8 GB
Llama 3 8B | llama | 6.1 GB | comfortable | Q5 / 8 GB
Llama 3.1 8B | llama | 6.1 GB | comfortable | Q5 / 8 GB
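The pooled-budget arithmetic can be sketched the same way. This assumes near-ideal layer splitting across identical cards (as llama.cpp's tensor-split option does); in practice usable VRAM is slightly lower because each card keeps its own activation and KV buffers. The thresholds are illustrative, not LocalIA's formula.

```python
# Sketch: how pooling VRAM across identical cards raises the budget.
# Assumes near-ideal layer splitting; per-card buffer overhead is
# ignored for simplicity (an assumption, not a measurement).
def est_size_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * bits_per_weight / 8  # weights only, illustrative

def fits(params_b: float, bits: float, cards: int,
         per_card_gb: float = 4.0) -> str:
    budget = cards * per_card_gb
    size = est_size_gb(params_b, bits)
    if size <= 0.8 * budget:       # ~20% headroom -> comfortable
        return "comfortable"
    return "tight" if size <= budget else "does not fit"

# Two GTX 1650s (8 GB pooled): a 12B model at ~Q4 squeezes in,
# while a 9B model runs comfortably -- matching the table above.
print(fits(12.0, 4.5, cards=2))  # tight
print(fits(9.0, 4.5, cards=2))   # comfortable
```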

Unlocked in a 4x rig

A server-style four-card configuration (16 GB of VRAM total) reaches the largest open-weight models in the catalog.

Model | Family | Size | Fit | Quant / Budget
Mistral Small 3 24B | mistral | 15.1 GB | tight | Q4 / 16 GB
Mistral Small 3.1 24B | mistral | 15.1 GB | tight | Q4 / 16 GB
Mistral Small 3.2 24B | mistral | 15.1 GB | tight | Q4 / 16 GB
Devstral Small 2 24B | devstral | 15.1 GB | tight | Q4 / 16 GB
Mistral Small 22B | mistral | 13.8 GB | tight | Q4 / 16 GB
Codestral 22B | codestral | 13.8 GB | tight | Q4 / 16 GB
Reka Flash 3 21B | reka | 13.2 GB | comfortable | Q4 / 16 GB
InternLM 2.5 20B | internlm | 12.6 GB | comfortable | Q4 / 16 GB
DeepSeek V2 Lite | deepseek | 12.3 GB | comfortable | Q5 / 16 GB
DeepSeek Coder V2 Lite | deepseek | 12.3 GB | comfortable | Q5 / 16 GB
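Working backwards from a model's quantized size, the minimum rig size follows directly. A minimal sketch, assuming the weights only need to fit within the pooled total (a "tight" fit, ignoring per-card overhead):

```python
import math

def min_cards(size_gb: float, per_card_gb: float = 4.0) -> int:
    """Smallest number of pooled GTX 1650s whose combined VRAM holds
    the quantized weights at all (a 'tight' fit); illustrative only."""
    return math.ceil(size_gb / per_card_gb)

print(min_cards(15.1))  # Mistral Small 3 24B at Q4 -> 4 cards
print(min_cards(7.5))   # Mistral Nemo 12B at Q4 -> 2 cards
print(min_cards(3.1))   # Qwen 3 4B at Q5 -> 1 card
```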

VRAM estimates updated 2026-05-12.