
RTX A2000 for local AI

The RTX A2000 offers 12 GB of VRAM for local AI. Of the 242 models in the LocalIA catalog, 168 run comfortably on a single card.

VRAM: 12 GB
Category: Workstation
Series: RTX A (Ampere)
Vendor: NVIDIA

Models that run comfortably

These models fit in 12 GB with headroom for context and stable inference.

| Model | Family | Quant | VRAM needed | Fit |
|---|---|---|---|---|
| DeepSeek V2 Lite | deepseek | Q4 | 10.1 GB / 12 GB | comfortable |
| DeepSeek Coder V2 Lite | deepseek | Q4 | 10.1 GB / 12 GB | comfortable |
| StarCoder 2 15B | starcoder | Q4 | 9.4 GB / 12 GB | comfortable |
| Phi-4 Reasoning Vision 15B | phi | Q4 | 9.4 GB / 12 GB | comfortable |
| Qwen 2.5 14B | qwen | Q4 | 8.8 GB / 12 GB | comfortable |
| Qwen 2.5 Coder 14B | qwen | Q4 | 8.8 GB / 12 GB | comfortable |
| Qwen 3 14B | qwen | Q4 | 8.8 GB / 12 GB | comfortable |
| DeepSeek R1 Distill 14B | deepseek | Q4 | 8.8 GB / 12 GB | comfortable |
| Phi-3 Medium 14B | phi | Q4 | 8.8 GB / 12 GB | comfortable |
| Phi-4 14B | phi | Q4 | 8.8 GB / 12 GB | comfortable |
| GLM-4.5 Air | glm | Q4 | 8.8 GB / 12 GB | comfortable |
| Qwen2.5 14B Instruct | qwen | Q4 | 8.8 GB / 12 GB | comfortable |
| Qwen3 14B | qwen | Q4 | 8.8 GB / 12 GB | comfortable |
| Qwen2.5 Coder 14B Instruct | qwen | Q4 | 8.8 GB / 12 GB | comfortable |
| DeepSeek R1 Distill Qwen 14B | qwen | Q4 | 8.8 GB / 12 GB | comfortable |
| Llama 2 13B | llama | Q5 | 10.0 GB / 12 GB | comfortable |
| CodeLlama 13B | codellama | Q5 | 10.0 GB / 12 GB | comfortable |
| OLMo 2 13B | olmo | Q5 | 10.0 GB / 12 GB | comfortable |
| Vicuna 13B | vicuna | Q5 | 10.0 GB / 12 GB | comfortable |
| Mistral Nemo 12B | mistral | Q5 | 9.2 GB / 12 GB | comfortable |
| Gemma 3 12B | gemma | Q5 | 9.2 GB / 12 GB | comfortable |
| StableLM 2 12B | stable | Q5 | 9.2 GB / 12 GB | comfortable |
| Solar 10.7B | solar | Q5 | 8.2 GB / 12 GB | comfortable |
| Falcon 3 10B | falcon | Q5 | 7.7 GB / 12 GB | comfortable |
| Gemma 2 9B | gemma | Q8 | 10.1 GB / 12 GB | comfortable |
| Yi 1.5 9B | yi | Q8 | 10.1 GB / 12 GB | comfortable |
| Qwen 3.5 9B | qwen | Q8 | 10.1 GB / 12 GB | comfortable |
| GLM-4 9B | glm | Q8 | 10.1 GB / 12 GB | comfortable |
| GLM-4.7 Flash | glm | Q8 | 10.1 GB / 12 GB | comfortable |
| GLM-4.1V 9B Thinking | glm | Q8 | 10.1 GB / 12 GB | comfortable |
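The fit figures above follow a simple back-of-the-envelope rule: weight footprint ≈ parameters × effective bits per weight ÷ 8, and "comfortable" means that footprint still leaves headroom for the KV cache and runtime buffers. A minimal sketch of that check, where the ~5.0 effective bits/weight (typical of Q4_K_M-class GGUF files) and the 1.5 GB margin are illustrative assumptions, not LocalIA's exact method:

```python
def weights_gb(params_b: float, bits_per_weight: float) -> float:
    """Estimated weight footprint in GB (weights only, no KV cache)."""
    return params_b * bits_per_weight / 8

def fits_comfortably(params_b: float, bits_per_weight: float,
                     vram_gb: float, margin_gb: float = 1.5) -> bool:
    """'Comfortable': weights plus a context margin still fit in VRAM.
    The 1.5 GB margin is an assumed value for illustration."""
    return weights_gb(params_b, bits_per_weight) + margin_gb <= vram_gb

# A 14B model at an assumed ~5.0 effective bits/weight:
print(round(weights_gb(14, 5.0), 2))   # 8.75
print(fits_comfortably(14, 5.0, 12))   # True
```

With these assumptions a 14B model lands around 8.75 GB, close to the 8.8 GB the catalog lists for the Q4 14B entries, while a 32B model (~20 GB) clearly does not fit in 12 GB.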

Unlocked in a 2x rig

With two cards in parallel (24 GB total), larger models come within reach.

| Model | Family | Quant | VRAM needed | Fit |
|---|---|---|---|---|
| Command R 35B | command | Q4 | 22.0 GB / 24 GB | tight |
| Aya 23 35B | aya | Q4 | 22.0 GB / 24 GB | tight |
| CodeLlama 34B | codellama | Q4 | 21.4 GB / 24 GB | tight |
| Yi 1.5 34B | yi | Q4 | 21.4 GB / 24 GB | tight |
| dolphin 2.9.1 yi 1.5 34b | yi | Q4 | 21.4 GB / 24 GB | tight |
| Qwen 2.5 32B | qwen | Q4 | 20.1 GB / 24 GB | comfortable |
| Qwen 2.5 Coder 32B | qwen | Q4 | 20.1 GB / 24 GB | comfortable |
| Qwen 3 32B | qwen | Q4 | 20.1 GB / 24 GB | comfortable |
| QwQ 32B | qwq | Q4 | 20.1 GB / 24 GB | comfortable |
| DeepSeek R1 Distill 32B | deepseek | Q4 | 20.1 GB / 24 GB | comfortable |
| Qwen 2.5 VL 32B | qwen | Q4 | 20.1 GB / 24 GB | comfortable |
| Granite 4 H-Small 32B-A9B | granite | Q4 | 20.1 GB / 24 GB | comfortable |
| GLM-4.6 | glm | Q4 | 20.1 GB / 24 GB | comfortable |
| GLM-4.7 | glm | Q4 | 20.1 GB / 24 GB | comfortable |
| GLM-5 | glm | Q4 | 20.1 GB / 24 GB | comfortable |
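In a multi-card rig the weights are split across GPUs, but each card still needs room for its own slice of the KV cache. A rough per-card check, assuming an even tensor split and an illustrative 1.0 GB per-card reserve (runtimes such as llama.cpp expose the split ratio via a tensor-split option):

```python
def per_gpu_share(model_gb: float, n_gpus: int) -> float:
    """Even tensor split: each card holds model_gb / n_gpus of the weights."""
    return model_gb / n_gpus

def rig_fits(model_gb: float, n_gpus: int, card_vram_gb: float,
             reserve_gb: float = 1.0) -> bool:
    """Each card needs its weight share plus a KV-cache/buffer reserve.
    The 1.0 GB reserve is an assumed value for illustration."""
    return per_gpu_share(model_gb, n_gpus) + reserve_gb <= card_vram_gb

# A 32B model at Q4 (~20.1 GB) over two 12 GB A2000s:
print(round(per_gpu_share(20.1, 2), 2))   # 10.05
print(rig_fits(20.1, 2, 12))              # True
```

The same arithmetic explains why the big models stay "tight": a 44 GB 70B split over four 12 GB cards leaves each card with exactly 11 GB of weights, so the context budget per card is borderline.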

Unlocked in a 4x rig

A server configuration (48 GB total) for the largest open-weight models.

| Model | Family | Quant | VRAM needed | Fit |
|---|---|---|---|---|
| Qwen 2.5 72B | qwen | Q4 | 45.3 GB / 48 GB | tight |
| Qwen 2.5 VL 72B | qwen | Q4 | 45.3 GB / 48 GB | tight |
| Qwen2.5 72B Instruct | qwen | Q4 | 45.3 GB / 48 GB | tight |
| Llama 2 70B | llama | Q4 | 44.0 GB / 48 GB | tight |
| Llama 3 70B | llama | Q4 | 44.0 GB / 48 GB | tight |
| Llama 3.1 70B | llama | Q4 | 44.0 GB / 48 GB | tight |
| Llama 3.3 70B | llama | Q4 | 44.0 GB / 48 GB | tight |
| CodeLlama 70B | codellama | Q4 | 44.0 GB / 48 GB | tight |
| DeepSeek R1 Distill 70B | deepseek | Q4 | 44.0 GB / 48 GB | tight |
| Hermes 3 70B | hermes | Q4 | 44.0 GB / 48 GB | tight |


VRAM estimates updated 2026-05-12.