
Mac Studio M4 Max (36GB) for local AI

The Mac Studio M4 Max (36GB) provides 36 GB of unified memory usable as VRAM for local AI. In the LocalIA catalog, 208 of 242 models run comfortably on a single machine.
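The per-model sizes below follow roughly from parameter count times bits per weight, plus overhead for the KV cache and runtime buffers. A minimal sketch, assuming llama.cpp-style K-quant bit widths and a flat 2 GB overhead (both are approximations, not the catalog's exact formula):

```python
# Rough on-device footprint for a quantized model.
# Bits-per-weight values approximate llama.cpp K-quants (assumption).
BITS_PER_WEIGHT = {"Q4": 4.8, "Q5": 5.5, "Q8": 8.5}

def estimate_gb(params_billions: float, quant: str, overhead_gb: float = 2.0) -> float:
    """Weights in GB (params * bits / 8) plus a flat overhead for KV cache/buffers."""
    weights_gb = params_billions * BITS_PER_WEIGHT[quant] / 8
    return weights_gb + overhead_gb

# A 32B model at Q5 lands near the catalog's 24.6 GB figure.
print(round(estimate_gb(32, "Q5"), 1))  # → 24.0
```

The estimate tracks the listed figures within about a gigabyte; actual usage depends on context length and the exact quant variant.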

VRAM: 36 GB
Category: Apple
Series: Mac Studio
Vendor: Apple

Models that run comfortably

These models fit in 36 GB with room for context and stable inference.

Mixtral 8x7B (mistral) · Q4 · 29.5 / 36 GB · comfortable
Falcon 40B (falcon) · Q4 · 25.1 / 36 GB · comfortable
Command R 35B (command) · Q5 · 26.9 / 36 GB · comfortable
Aya 23 35B (aya) · Q5 · 26.9 / 36 GB · comfortable
CodeLlama 34B (codellama) · Q5 · 26.1 / 36 GB · comfortable
Yi 1.5 34B (yi) · Q5 · 26.1 / 36 GB · comfortable
Dolphin 2.9.1 Yi 1.5 34B (yi) · Q5 · 26.1 / 36 GB · comfortable
Qwen 2.5 32B (qwen) · Q5 · 24.6 / 36 GB · comfortable
Qwen 2.5 Coder 32B (qwen) · Q5 · 24.6 / 36 GB · comfortable
Qwen 3 32B (qwen) · Q5 · 24.6 / 36 GB · comfortable
QwQ 32B (qwq) · Q5 · 24.6 / 36 GB · comfortable
DeepSeek R1 Distill 32B (deepseek) · Q5 · 24.6 / 36 GB · comfortable
Qwen 2.5 VL 32B (qwen) · Q5 · 24.6 / 36 GB · comfortable
Granite 4 H-Small 32B-A9B (granite) · Q5 · 24.6 / 36 GB · comfortable
GLM-4.6 (glm) · Q5 · 24.6 / 36 GB · comfortable
GLM-4.7 (glm) · Q5 · 24.6 / 36 GB · comfortable
GLM-5 (glm) · Q5 · 24.6 / 36 GB · comfortable
GLM-5.1 (glm) · Q5 · 24.6 / 36 GB · comfortable
Qwen3 32B (qwen) · Q5 · 24.6 / 36 GB · comfortable
Qwen2.5 Coder 32B Instruct (qwen) · Q5 · 24.6 / 36 GB · comfortable
DeepSeek R1 Distill Qwen 32B (qwen) · Q5 · 24.6 / 36 GB · comfortable
Qwen2.5 32B Instruct (qwen) · Q5 · 24.6 / 36 GB · comfortable
Gemma 4 31B (gemma) · Q5 · 23.8 / 36 GB · comfortable
Qwen 3 30B A3B (qwen) · Q5 · 23.1 / 36 GB · comfortable
MPT 30B (mpt) · Q5 · 23.1 / 36 GB · comfortable
Qwen3 Coder 30B A3B Instruct (qwen) · Q5 · 23.1 / 36 GB · comfortable
Qwen3 30B A3B (qwen) · Q5 · 23.1 / 36 GB · comfortable
Qwen3 30B A3B Instruct 2507 (qwen) · Q5 · 23.1 / 36 GB · comfortable
NVIDIA Nemotron 3 Nano 30B A3B BF16 (nemotron) · Q5 · 23.1 / 36 GB · comfortable
Gemma 2 27B (gemma) · Q8 · 30.2 / 36 GB · comfortable
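The comfortable/tight labels in these lists can be sketched as a simple utilization threshold. The ~87% cutoff below is inferred from the listed entries (e.g. 30.2 / 36 GB is comfortable while 68.5 / 72 GB is tight), not a documented rule:

```python
# Classify a model's fit against available VRAM.
# The 0.87 threshold is an assumption inferred from the catalog entries.
def fit_label(model_gb: float, vram_gb: float, threshold: float = 0.87) -> str:
    if model_gb > vram_gb:
        return "does not fit"
    return "comfortable" if model_gb / vram_gb <= threshold else "tight"

print(fit_label(30.2, 36))   # Gemma 2 27B at Q8 → "comfortable"
print(fit_label(68.5, 72))   # Llama 4 Scout at Q4 on 2x → "tight"
```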

Unlocked in a 2x rig

With two units pooled (72 GB combined), larger models become reachable.

Llama 4 Scout 17Bx16 (llama) · Q4 · 68.5 / 72 GB · tight
Command R+ 104B (command) · Q4 · 65.4 / 72 GB · tight
Qwen3 Next 80B A3B Instruct (qwen) · Q4 · 50.3 / 72 GB · comfortable
Qwen 2.5 72B (qwen) · Q5 · 55.3 / 72 GB · comfortable
Qwen 2.5 VL 72B (qwen) · Q5 · 55.3 / 72 GB · comfortable
Qwen2.5 72B Instruct (qwen) · Q5 · 55.3 / 72 GB · comfortable
Llama 2 70B (llama) · Q5 · 53.8 / 72 GB · comfortable
Llama 3 70B (llama) · Q5 · 53.8 / 72 GB · comfortable
Llama 3.1 70B (llama) · Q5 · 53.8 / 72 GB · comfortable
Llama 3.3 70B (llama) · Q5 · 53.8 / 72 GB · comfortable
CodeLlama 70B (codellama) · Q5 · 53.8 / 72 GB · comfortable
DeepSeek R1 Distill 70B (deepseek) · Q5 · 53.8 / 72 GB · comfortable
Hermes 3 70B (hermes) · Q5 · 53.8 / 72 GB · comfortable
Llama 3.1 Nemotron 70B (nemotron) · Q5 · 53.8 / 72 GB · comfortable
Athene 70B (athene) · Q5 · 53.8 / 72 GB · comfortable

Unlocked in a 4x rig

A server-style configuration (144 GB combined) reaches the largest open-weight models.

Falcon 180B (falcon) · Q4 · 113.2 / 144 GB · comfortable
Mixtral 8x22B (mistral) · Q5 · 108.3 / 144 GB · comfortable
Mistral Large 123B (mistral) · Q5 · 94.5 / 144 GB · comfortable
NVIDIA Nemotron 3 Super 120B A12B BF16 (nemotron) · Q5 · 92.2 / 144 GB · comfortable


VRAM estimates updated 2026-05-12. On Apple Silicon, part of the unified memory remains reserved for the system, so not all 36 GB is available to the GPU.
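A rough way to budget for that reservation, assuming the GPU can safely wire about 75% of unified memory by default (an approximation of Metal's recommended working-set behavior, not an exact figure):

```python
# Usable GPU memory on Apple Silicon: a fraction of total unified memory.
# The 0.75 default is an assumption, not a documented constant.
def usable_vram_gb(total_gb: float, usable_fraction: float = 0.75) -> float:
    return total_gb * usable_fraction

print(usable_vram_gb(36))  # roughly 27 GB of 36 GB safely usable by default
```

On recent macOS versions the wired limit can reportedly be raised via the `iogpu.wired_limit_mb` sysctl, which is how 30+ GB models remain usable on a 36 GB machine; treat that as an at-your-own-risk tweak.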