
MacBook Pro 16 M2 Max (96GB) for local AI

The MacBook Pro 16 M2 Max (96GB) makes its 96 GB of unified memory available as GPU memory for local AI. In the LocalIA catalog, 227 out of 242 models run comfortably on a single machine.

VRAM: 96 GB
Category: Apple
Series: MacBook Pro 16
Vendor: Apple

Models that run comfortably

These models fit in 96 GB with room for context and stable inference.

| Model | Family | Quant | Est. size | Fit |
|---|---|---|---|---|
| Mistral Large 123B | mistral | Q4 | 77.3 GB | comfortable |
| NVIDIA Nemotron 3 Super 120B A12B BF16 | nemotron | Q4 | 75.4 GB | comfortable |
| Llama 4 Scout 17Bx16 | llama | Q4 | 68.5 GB | comfortable |
| Command R+ 104B | command | Q5 | 79.9 GB | comfortable |
| Qwen3 Next 80B A3B Instruct | qwen | Q5 | 61.5 GB | comfortable |
| Qwen 2.5 72B | qwen | Q8 | 80.5 GB | comfortable |
| Qwen 2.5 VL 72B | qwen | Q8 | 80.5 GB | comfortable |
| Qwen2.5 72B Instruct | qwen | Q8 | 80.5 GB | comfortable |
| Llama 2 70B | llama | Q8 | 78.2 GB | comfortable |
| Llama 3 70B | llama | Q8 | 78.2 GB | comfortable |
| Llama 3.1 70B | llama | Q8 | 78.2 GB | comfortable |
| Llama 3.3 70B | llama | Q8 | 78.2 GB | comfortable |
| CodeLlama 70B | codellama | Q8 | 78.2 GB | comfortable |
| DeepSeek R1 Distill 70B | deepseek | Q8 | 78.2 GB | comfortable |
| Hermes 3 70B | hermes | Q8 | 78.2 GB | comfortable |
| Llama 3.1 Nemotron 70B | nemotron | Q8 | 78.2 GB | comfortable |
| Athene 70B | athene | Q8 | 78.2 GB | comfortable |
| Llama 3.3 70B Instruct | llama | Q8 | 78.2 GB | comfortable |
| Llama 3.1 70B Instruct | llama | Q8 | 78.2 GB | comfortable |
| Mixtral 8x7B | mistral | Q8 | 52.5 GB | comfortable |
| Falcon 40B | falcon | Q8 | 44.7 GB | comfortable |
| Command R 35B | command | FP16 | 78.2 GB | comfortable |
| Aya 23 35B | aya | FP16 | 78.2 GB | comfortable |
| CodeLlama 34B | codellama | FP16 | 76.0 GB | comfortable |
| Yi 1.5 34B | yi | FP16 | 76.0 GB | comfortable |
| Dolphin 2.9.1 Yi 1.5 34B | yi | FP16 | 76.0 GB | comfortable |
| Qwen 2.5 32B | qwen | FP16 | 71.5 GB | comfortable |
| Qwen 2.5 Coder 32B | qwen | FP16 | 71.5 GB | comfortable |
| Qwen 3 32B | qwen | FP16 | 71.5 GB | comfortable |
| QwQ 32B | qwq | FP16 | 71.5 GB | comfortable |
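The listed sizes are consistent with a simple back-of-the-envelope estimate: parameter count times effective bits per weight, scaled by a runtime overhead factor. A minimal sketch follows, assuming effective bit-widths of roughly 4.5 (Q4), 5.5 (Q5), 8 (Q8) and 16 (FP16) and an overhead factor of about 1.117; these constants are assumptions reverse-engineered from the catalog figures, not LocalIA's published method:

```python
# Rough memory estimate for a quantized model.
# Bit-widths and the 1.117 overhead factor are assumptions inferred from
# the catalog's figures, not an official formula.
BITS_PER_WEIGHT = {"Q4": 4.5, "Q5": 5.5, "Q8": 8.0, "FP16": 16.0}
OVERHEAD = 1.117  # weights alone underestimate; runtime buffers add ~12%

def estimate_gb(params_billion: float, quant: str) -> float:
    """Estimated memory footprint in GB at a given quantization level."""
    weights_gb = params_billion * BITS_PER_WEIGHT[quant] / 8
    return round(weights_gb * OVERHEAD, 1)

print(estimate_gb(70, "Q8"))   # Llama 3 70B at Q8  → 78.2
print(estimate_gb(123, "Q4"))  # Mistral Large 123B at Q4 → 77.3
```

The same formula reproduces the FP16 rows as well, e.g. a 32B model at FP16 comes out to 71.5 GB.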

Tight models

These models barely fit. They can run, but context and speed will be limited.

| Model | Family | Quant | Est. size | Fit |
|---|---|---|---|---|
| Mixtral 8x22B | mistral | Q4 | 88.6 GB | tight |
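The comfortable/tight split can be sketched as a simple headroom check. The 85% threshold below is an assumption that matches the catalog's split (Qwen 2.5 72B at 80.5 of 96 GB is comfortable, Mixtral 8x22B at 88.6 of 96 GB is tight), not a documented rule:

```python
def fit_class(est_gb: float, budget_gb: float, headroom: float = 0.85) -> str:
    """Classify fit: 'comfortable' up to ~85% of memory (assumed threshold),
    'tight' when it still squeezes under the budget, otherwise 'no fit'."""
    ratio = est_gb / budget_gb
    if ratio <= headroom:
        return "comfortable"
    if ratio <= 1.0:
        return "tight"
    return "no fit"

print(fit_class(80.5, 96))   # → comfortable
print(fit_class(88.6, 96))   # → tight
```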

Unlocked in a 2x rig

With two such machines pooled (192 GB combined), larger models become reachable.

| Model | Family | Quant | Est. size | Fit |
|---|---|---|---|---|
| DeepSeek V2 | deepseek | Q4 | 148.4 GB | comfortable |
| DeepSeek Coder V2 | deepseek | Q4 | 148.4 GB | comfortable |
| Qwen 3 235B A22B | qwen | Q4 | 147.7 GB | comfortable |
| Qwen3 235B A22B | qwen | Q4 | 147.7 GB | comfortable |
| Falcon 180B | falcon | Q5 | 138.3 GB | comfortable |

Unlocked in a 4x rig

A server-style configuration of four machines (384 GB combined) for the largest open-weight models.

| Model | Family | Quant | Est. size | Fit |
|---|---|---|---|---|
| Llama 3.1 405B | llama | Q5 | 311.2 GB | comfortable |
| Hermes 3 405B | hermes | Q5 | 311.2 GB | comfortable |
| Llama 4 Maverick 17Bx128 | llama | Q5 | 307.3 GB | comfortable |
| Nemotron 340B | nemotron | Q5 | 261.2 GB | comfortable |
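Putting the tiers together, the smallest rig that fits a given model follows directly from the pooled budget. This assumes 96 GB per machine and that memory pools additively across machines, which is a simplification of real multi-node inference:

```python
# Smallest rig tier (1x, 2x, 4x at 96 GB each) that fits a size estimate.
# Assumes the pooled budget is simply additive across machines.
TIERS = [(1, 96), (2, 192), (4, 384)]

def smallest_rig(est_gb: float):
    for count, budget_gb in TIERS:
        if est_gb <= budget_gb:
            return f"{count}x"
    return None  # does not fit even in a 4x rig

print(smallest_rig(88.6))    # Mixtral 8x22B  → 1x
print(smallest_rig(147.7))   # Qwen 3 235B A22B → 2x
print(smallest_rig(311.2))   # Llama 3.1 405B → 4x
```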


VRAM estimates updated 2026-05-12. Apple Silicon: part of unified memory remains reserved for the system.