
NVIDIA H100 NVL for local AI

NVIDIA H100 NVL provides 94 GB of VRAM for local AI. In the LocalIA catalog, 227 out of 242 models run comfortably on a single card.

| Spec | Value |
|------|-------|
| VRAM | 94 GB |
| Category | Datacenter |
| Series | Hopper |
| Vendor | NVIDIA |

Models that run comfortably

These models fit in 94 GB with room for context and stable inference.

| Model | Family | Quant | VRAM used | Fit |
|-------|--------|-------|-----------|-----|
| Mistral Large 123B | mistral | Q4 | 77.3 / 94 GB | comfortable |
| NVIDIA Nemotron 3 Super 120B A12B BF16 | nemotron | Q4 | 75.4 / 94 GB | comfortable |
| Llama 4 Scout 17Bx16 | llama | Q4 | 68.5 / 94 GB | comfortable |
| Command R+ 104B | command | Q4 | 65.4 / 94 GB | comfortable |
| Qwen3 Next 80B A3B Instruct | qwen | Q5 | 61.5 / 94 GB | comfortable |
| Qwen 2.5 72B | qwen | Q5 | 55.3 / 94 GB | comfortable |
| Qwen 2.5 VL 72B | qwen | Q5 | 55.3 / 94 GB | comfortable |
| Qwen2.5 72B Instruct | qwen | Q5 | 55.3 / 94 GB | comfortable |
| Llama 2 70B | llama | Q8 | 78.2 / 94 GB | comfortable |
| Llama 3 70B | llama | Q8 | 78.2 / 94 GB | comfortable |
| Llama 3.1 70B | llama | Q8 | 78.2 / 94 GB | comfortable |
| Llama 3.3 70B | llama | Q8 | 78.2 / 94 GB | comfortable |
| CodeLlama 70B | codellama | Q8 | 78.2 / 94 GB | comfortable |
| DeepSeek R1 Distill 70B | deepseek | Q8 | 78.2 / 94 GB | comfortable |
| Hermes 3 70B | hermes | Q8 | 78.2 / 94 GB | comfortable |
| Llama 3.1 Nemotron 70B | nemotron | Q8 | 78.2 / 94 GB | comfortable |
| Athene 70B | athene | Q8 | 78.2 / 94 GB | comfortable |
| Llama 3.3 70B Instruct | llama | Q8 | 78.2 / 94 GB | comfortable |
| Llama 3.1 70B Instruct | llama | Q8 | 78.2 / 94 GB | comfortable |
| Mixtral 8x7B | mistral | Q8 | 52.5 / 94 GB | comfortable |
| Falcon 40B | falcon | Q8 | 44.7 / 94 GB | comfortable |
| Command R 35B | command | FP16 | 78.2 / 94 GB | comfortable |
| Aya 23 35B | aya | FP16 | 78.2 / 94 GB | comfortable |
| CodeLlama 34B | codellama | FP16 | 76.0 / 94 GB | comfortable |
| Yi 1.5 34B | yi | FP16 | 76.0 / 94 GB | comfortable |
| dolphin 2.9.1 yi 1.5 34b | yi | FP16 | 76.0 / 94 GB | comfortable |
| Qwen 2.5 32B | qwen | FP16 | 71.5 / 94 GB | comfortable |
| Qwen 2.5 Coder 32B | qwen | FP16 | 71.5 / 94 GB | comfortable |
| Qwen 3 32B | qwen | FP16 | 71.5 / 94 GB | comfortable |
| QwQ 32B | qwq | FP16 | 71.5 / 94 GB | comfortable |
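The sizes above follow the usual pattern: weight memory scales with parameter count times bits per weight, plus headroom for KV cache and activations. As a rough sketch (the bits-per-weight values and the 10% overhead fraction are illustrative assumptions, not the catalog's exact formula):

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float,
                     overhead_frac: float = 0.10) -> float:
    """Rough VRAM estimate for inference: weights at the given
    quantization plus a flat overhead fraction for KV cache and
    activations. Illustrative only; real usage depends on context
    length, batch size, and runtime."""
    weights_gb = params_b * bits_per_weight / 8
    return weights_gb * (1 + overhead_frac)

# Example: a 70B model at Q8 (~8 bits/weight)
print(round(estimate_vram_gb(70, 8), 1))  # → 77.0
```

The 70B-at-Q8 estimate lands near the ~78 GB figures in the table, which is why that class of model sits comfortably inside 94 GB on a single H100 NVL.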

Tight models

These models barely fit. They can run, but context and speed will be limited.

| Model | Family | Quant | VRAM used | Fit |
|-------|--------|-------|-----------|-----|
| Mixtral 8x22B | mistral | Q4 | 88.6 / 94 GB | tight |

Unlocked in a 2x rig

With two cards in parallel (188 GB total), larger models become reachable.

| Model | Family | Quant | VRAM used | Fit |
|-------|--------|-------|-----------|-----|
| DeepSeek V2 | deepseek | Q4 | 148.4 / 188 GB | comfortable |
| DeepSeek Coder V2 | deepseek | Q4 | 148.4 / 188 GB | comfortable |
| Qwen 3 235B A22B | qwen | Q4 | 147.7 / 188 GB | comfortable |
| Qwen3 235B A22B | qwen | Q4 | 147.7 / 188 GB | comfortable |
| Falcon 180B | falcon | Q5 | 138.3 / 188 GB | comfortable |

Unlocked in a 4x rig

Server-style configuration (376 GB total) for the largest open-weight models.

| Model | Family | Quant | VRAM used | Fit |
|-------|--------|-------|-----------|-----|
| Llama 3.1 405B | llama | Q5 | 311.2 / 376 GB | comfortable |
| Hermes 3 405B | hermes | Q5 | 311.2 / 376 GB | comfortable |
| Llama 4 Maverick 17Bx128 | llama | Q5 | 307.3 / 376 GB | comfortable |
| Nemotron 340B | nemotron | Q5 | 261.2 / 376 GB | comfortable |
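The comfortable/tight split used throughout these lists can be sketched as a simple fit check against total VRAM; the 90% headroom cutoff below is an assumption for illustration, not the catalog's published rule:

```python
def classify_fit(model_gb: float, vram_gb: float,
                 headroom: float = 0.90) -> str:
    """Classify whether a model fits in the given total VRAM.
    'comfortable' if it stays under ~90% of capacity (hypothetical
    cutoff), 'tight' if it just fits, 'no' otherwise."""
    if model_gb <= vram_gb * headroom:
        return "comfortable"
    if model_gb <= vram_gb:
        return "tight"
    return "no"

H100_NVL_GB = 94  # per card

print(classify_fit(88.6, H100_NVL_GB))       # Mixtral 8x22B, 1 card → tight
print(classify_fit(148.4, 2 * H100_NVL_GB))  # DeepSeek V2, 2x rig → comfortable
print(classify_fit(311.2, 4 * H100_NVL_GB))  # Llama 3.1 405B, 4x rig → comfortable
```

Under this cutoff the sketch reproduces the catalog's labels for the three examples above: Mixtral 8x22B lands in the tight band on a single card, while the larger models only become comfortable once VRAM is pooled across a 2x or 4x rig.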


VRAM estimates updated 2026-05-12.