
Instinct MI300X for local AI

The AMD Instinct MI300X provides 192 GB of HBM3 VRAM, enough to run most open-weight models on a single card. In the LocalIA catalog, 233 of 242 models run comfortably without quantizing below Q4.

VRAM: 192 GB
Category: Datacenter
Series: Instinct CDNA 3+
Vendor: AMD

Models that run comfortably

These models fit in 192 GB with room for context and stable inference.

| Model | Family | Quant | Est. size / VRAM | Fit |
|---|---|---|---|---|
| DeepSeek V2 | deepseek | Q4 | 148.4 / 192 GB | comfortable |
| DeepSeek Coder V2 | deepseek | Q4 | 148.4 / 192 GB | comfortable |
| Qwen3 235B A22B | qwen | Q4 | 147.7 / 192 GB | comfortable |
| Falcon 180B | falcon | Q5 | 138.3 / 192 GB | comfortable |
| Mixtral 8x22B | mistral | Q8 | 157.6 / 192 GB | comfortable |
| Mistral Large 123B | mistral | Q8 | 137.5 / 192 GB | comfortable |
| NVIDIA Nemotron 3 Super 120B A12B BF16 | nemotron | Q8 | 134.1 / 192 GB | comfortable |
| Llama 4 Scout 17Bx16 | llama | Q8 | 121.8 / 192 GB | comfortable |
| Command R+ 104B | command | Q8 | 116.2 / 192 GB | comfortable |
| Qwen3 Next 80B A3B Instruct | qwen | Q8 | 89.4 / 192 GB | comfortable |
| Qwen 2.5 72B | qwen | FP16 | 160.9 / 192 GB | comfortable |
| Qwen 2.5 VL 72B | qwen | FP16 | 160.9 / 192 GB | comfortable |
| Qwen2.5 72B Instruct | qwen | FP16 | 160.9 / 192 GB | comfortable |
| Llama 2 70B | llama | FP16 | 156.5 / 192 GB | comfortable |
| Llama 3 70B | llama | FP16 | 156.5 / 192 GB | comfortable |
| Llama 3.1 70B | llama | FP16 | 156.5 / 192 GB | comfortable |
| Llama 3.3 70B | llama | FP16 | 156.5 / 192 GB | comfortable |
| CodeLlama 70B | codellama | FP16 | 156.5 / 192 GB | comfortable |
| DeepSeek R1 Distill 70B | deepseek | FP16 | 156.5 / 192 GB | comfortable |
| Hermes 3 70B | hermes | FP16 | 156.5 / 192 GB | comfortable |
| Llama 3.1 Nemotron 70B | nemotron | FP16 | 156.5 / 192 GB | comfortable |
| Athene 70B | athene | FP16 | 156.5 / 192 GB | comfortable |
| Llama 3.3 70B Instruct | llama | FP16 | 156.5 / 192 GB | comfortable |
| Llama 3.1 70B Instruct | llama | FP16 | 156.5 / 192 GB | comfortable |
| Mixtral 8x7B | mistral | FP16 | 105.1 / 192 GB | comfortable |
| Falcon 40B | falcon | FP16 | 89.4 / 192 GB | comfortable |
| Command R 35B | command | FP16 | 78.2 / 192 GB | comfortable |
| Aya 23 35B | aya | FP16 | 78.2 / 192 GB | comfortable |
| CodeLlama 34B | codellama | FP16 | 76.0 / 192 GB | comfortable |
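The estimated sizes above roughly track parameter count times bits per weight, plus headroom for KV cache and activations. A minimal sketch of that kind of estimate (the flat 20% overhead factor is an assumption for illustration, not the catalog's exact method):

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Rough VRAM footprint in GB: weights plus ~20% headroom for
    KV cache and activations (the overhead factor is an assumption)."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return weight_gb * overhead

def fits_on_card(params_billion: float, bits_per_weight: float,
                 vram_gb: float = 192.0) -> bool:
    """True if the estimated footprint fits in a single card's VRAM."""
    return estimate_vram_gb(params_billion, bits_per_weight) <= vram_gb

# A 70B model at FP16 (~168 GB estimated) fits in 192 GB;
# the same model would overflow a 24 GB or 80 GB consumer/datacenter card.
print(fits_on_card(70, 16), fits_on_card(180, 16))  # → True False
```

This is why the FP16 rows top out around 72B on a single MI300X: anything much larger at 16 bits per weight needs quantization (Q8 and below) to stay under the 192 GB budget.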

Unlocked in a 2x rig

With two cards in parallel (384 GB total), larger models become reachable.

| Model | Family | Quant | Est. size / VRAM | Fit |
|---|---|---|---|---|
| Llama 3.1 405B | llama | Q5 | 311.2 / 384 GB | comfortable |
| Hermes 3 405B | hermes | Q5 | 311.2 / 384 GB | comfortable |
| Llama 4 Maverick 17Bx128 | llama | Q5 | 307.3 / 384 GB | comfortable |
| Nemotron 340B | nemotron | Q5 | 261.2 / 384 GB | comfortable |

Unlocked in a 4x rig

A four-card, server-style configuration (768 GB total) reaches the largest open-weight models.

| Model | Family | Quant | Est. size / VRAM | Fit |
|---|---|---|---|---|
| DeepSeek V3.2 | deepseek | Q5 | 526.3 / 768 GB | comfortable |
| DeepSeek V4 Pro | deepseek | Q5 | 526.3 / 768 GB | comfortable |
| DeepSeek R1 | deepseek | Q5 | 515.6 / 768 GB | comfortable |
| DeepSeek V3 | deepseek | Q5 | 515.6 / 768 GB | comfortable |
| DeepSeek R1 (0528 snapshot) | deepseek | Q5 | 515.6 / 768 GB | comfortable |
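The rig tiers are consistent with simple pooled-VRAM math: divide the model's estimated footprint by per-card VRAM and round up to a power-of-two card count. (Tensor-parallel runtimes commonly shard across power-of-two GPU counts; treating that as a hard constraint here is an assumption, not something the catalog states.)

```python
import math

def min_rig_size(model_gb: float, card_gb: float = 192.0) -> int:
    """Smallest power-of-two number of cards whose pooled VRAM holds the model.
    Assumes near-ideal sharding with no per-card replication overhead."""
    cards = math.ceil(model_gb / card_gb)
    return 1 << (cards - 1).bit_length()  # round up to the next power of two

# DeepSeek V2 at Q4 fits one card; Llama 3.1 405B at Q5 needs the 2x rig;
# DeepSeek R1 at Q5 (515.6 GB) needs three cards' worth, so it lands in the 4x tier.
print(min_rig_size(148.4), min_rig_size(311.2), min_rig_size(515.6))  # → 1 2 4
```

Note the power-of-two rounding is why the ~516-526 GB DeepSeek models appear under the 4x rig even though three cards' pooled 576 GB would nominally hold them.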

VRAM estimates updated 2026-05-12.