MAZDA COMPUTING: SERVER GPU HPC SPECIALISTS
  • NVIDIA
    • PNY NVIDIA Professional GPUs
    • PNY NVIDIA Data Center GPUs
    • DEEP LEARNING & AI
    • DGX Spark
    • HPC
    • Cloud & DataCenter
    • Autonomous Machines
  • SERVERS
    • Intel Xeon
    • AMD EPYC
    • Penguin Computing
    • Supermicro
    • Gov Servers
  • SOLUTIONS
    • Network, Security and Electrical Solutions
    • Graid
    • OEM Solutions
    • Services
  • Fed
    • CDW-G ICPT
    • Sandia JIT Contract
    • NASA BPA
    • Capabilities
    • NLIT 2022
  • Company
    • About
    • Careers
    • Contact
    • Terms and Conditions

NVIDIA Data Center GPUs

Explore NVIDIA data center GPUs supplied by Mazda Computing in partnership with PNY—H200 NVL, H100 NVL, RTX PRO 6000 Blackwell Server Edition, L40S, L40, L4, A16, A10, A2—for AI, VDI, edge, and universal acceleration.

Accelerate AI, VDI, and Visualization with PNY NVIDIA Data Center GPUs

Mazda Computing integrates and delivers NVIDIA data center GPUs through our partner PNY, helping teams deploy high-performance inference, training, visualization, and virtual GPU (vGPU) stacks—on time and to spec.

NVIDIA H200 NVL GPU

Breakthrough AI and HPC performance with HBM3e memory
  • Architecture: Hopper H200 NVL (HBM3e)
  • GPU Memory: 141 GB HBM3e per GPU (282 GB total per NVL pair)
  • Bandwidth: 4.8 TB/s — roughly 1.4× H100 HBM3
  • Form Factor: Dual PCIe cards linked via NVLink Bridge (900 GB/s)
  • Peak Compute: ~30 TFLOPS FP64 / ~120 TFLOPS FP16 / ~1.4 PFLOPS FP8
  • Ideal For: Large-scale LLM training, simulation, and HPC modeling
The H200 NVL offers unprecedented bandwidth and capacity for generative AI and scientific research—perfect for labs and enterprises scaling GPU clusters.
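As a rough sizing sketch against the memory figures above, the snippet below estimates model-weight footprints; the 70B-parameter model and byte-per-parameter choices are illustrative assumptions, not benchmarks.

```python
def weights_gib(params_billion: float, bytes_per_param: float) -> float:
    """Approximate model-weight footprint in GiB (weights only;
    ignores KV cache, activations, and framework overhead)."""
    return params_billion * 1e9 * bytes_per_param / 2**30

# H200 NVL figures from above: 141 GB per GPU, 282 GB per NVLink pair.
pair_gib = 282 * 1e9 / 2**30  # vendor "GB" is decimal; convert to GiB

print(weights_gib(70, 1))             # 70B model in FP8: ~65 GiB, one card
print(weights_gib(70, 2))             # same model in FP16: ~130 GiB
print(weights_gib(70, 2) < pair_gib)  # True: FP16 70B fits on an NVL pair
```

The same arithmetic scales to any parameter count; only the precision and the per-card capacity change.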

NVIDIA H100 NVL GPU

Enterprise-grade AI and deep learning acceleration
  • Architecture: Hopper H100 NVL
  • GPU Memory: 94 GB HBM3 per GPU (188 GB total per NVL pair)
  • Bandwidth: 3.35 TB/s
  • Tensor Performance: Up to 1 PFLOPS FP8 AI throughput
  • Connectivity: NVLink (900 GB/s) and PCIe Gen5
  • Ideal For: AI inference services, LLM deployment, and scientific simulation
H100 NVL remains the cornerstone of enterprise AI clusters, offering strong mixed-precision performance and unified memory for inference at scale.​
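A common back-of-envelope check for inference hardware is the bandwidth-bound decode ceiling: single-stream generation must stream all weights from memory once per token. The sketch below applies it to the bandwidth figure above; the 70B FP16 model is an assumed example, and real throughput is lower.

```python
def decode_tokens_per_s(bandwidth_tb_s: float, params_b: float,
                        bytes_per_param: float) -> float:
    """Rough upper bound for single-stream decode: rate ~= memory
    bandwidth / model size. Ignores KV-cache traffic, kernel launch
    overhead, and the large gains available from batching."""
    return bandwidth_tb_s * 1e12 / (params_b * 1e9 * bytes_per_param)

# Page figure: 3.35 TB/s per GPU; 70B parameters at FP16 (2 bytes) assumed.
print(round(decode_tokens_per_s(3.35, 70, 2), 1))  # ~23.9 tokens/s ceiling
```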

NVIDIA RTX PRO 6000 Blackwell Server Edition

Universal GPU for AI + Visual Computing
  • Architecture: NVIDIA Blackwell (GB202 GPU)
  • Memory: 96 GB GDDR7 ECC
  • Cores: 24,064 CUDA | 752 Tensor | 188 RT
  • Power: 400–600 W configurable TDP
  • Interface: PCIe Gen5 x16 / Passive Cooling / 1 × 16-pin (CEM5)
  • Display Outputs: 4 × DisplayPort 2.1
  • Key Features: MIG support (4 × 24 GB partitions), Confidential Compute, NVENC/NVDEC acceleration
Ideal for enterprises that combine AI training, inference, and real-time visualization in one GPU fleet.
Mazda Computing can build validated multi-GPU servers with proper power and cooling designs for this model.
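The 4 × 24 GB MIG split noted above can be sanity-checked with simple arithmetic; the tenant workload names and memory sizes below are hypothetical, for illustration only.

```python
# MIG on this card: 96 GB framebuffer split into 4 equal instances.
TOTAL_GB = 96
PARTITIONS = 4
slice_gb = TOTAL_GB // PARTITIONS  # 24 GB per MIG instance

# Hypothetical tenant workloads (names and sizes are illustrative).
workloads = {"inference-a": 20, "inference-b": 18, "render": 22, "batch": 30}
fits = {name: gb <= slice_gb for name, gb in workloads.items()}
print(fits)  # "batch" at 30 GB exceeds a 24 GB slice and needs a full GPU
```

In practice MIG instances are created and listed with `nvidia-smi mig`; the arithmetic above only shows which workloads a 24 GB partition can hold.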

NVIDIA L40S

High-density multi-workload GPU for AI Inference and Graphics
  • Architecture: Ada Lovelace (L40S)
  • Memory: 48 GB GDDR6 ECC
  • Performance: Up to 1.45 PFLOPS Tensor / 183 TFLOPS FP16
  • Power: 350 W Passive Cooling (PCIe)
  • Ideal For: AI inference, rendering, simulation, and VDI environments
  • Bonus: Supports vGPU and NVIDIA Omniverse for collaborative visualization
The L40S combines server-class graphics capability with AI acceleration, bridging data centers and creative pipelines.
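For VDI sizing, per-user framebuffer profiles divide the 48 GB card; the profile sizes below are assumptions for illustration, since actual NVIDIA vGPU profiles depend on the licensed software release.

```python
# vGPU sizing sketch for a 48 GB L40S (profile sizes are assumptions).
card_gb = 48
profiles_gb = [4, 8, 12, 24]  # hypothetical per-user framebuffer choices

# Users per card at each profile: the framebuffer divides evenly.
users = {p: card_gb // p for p in profiles_gb}
print(users)  # {4: 12, 8: 6, 12: 4, 24: 2} concurrent users per card
```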

The Ultimate AI Compute Server — AMD EPYC 9004 + NVIDIA Hopper & Blackwell GPUs

The Mazda Computing NVIDIA MGX 43N8XRG-2TA AI Server is a high-density 4U GPU server designed for AI training, large-scale LLMs, simulation, and visualization.
Engineered for performance, scalability, and reliability, it supports up to 8 dual-slot NVIDIA® GPUs and the latest AMD EPYC™ 9004 series "Genoa" CPUs, delivering HPC-class throughput for DOE laboratories.
🟩 AI Configuration | 8× NVIDIA H200 NVL GPUs
  • GPU Architecture: Hopper H200 HBM3e
  • Memory: 141 GB HBM3e per GPU (1.13 TB total)
  • Memory Bandwidth: 4.8 TB/s per GPU — among the highest of any shipping data center GPU
  • Performance: ~1.4 PFLOPS FP8 AI throughput per GPU
  • Ideal For: Generative AI, LLM training, scientific modeling, CFD, and climate simulation
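The chassis totals above follow directly from the per-GPU figures; a quick check (vendor-style decimal GB/TB assumed):

```python
gpus = 8
mem_gb_per_gpu = 141   # per-GPU HBM3e capacity from the configuration above
bw_tb_s_per_gpu = 4.8  # per-GPU memory bandwidth from the configuration above

total_mem_tb = gpus * mem_gb_per_gpu / 1000  # 1.128 TB, the "1.13 TB total"
total_bw = gpus * bw_tb_s_per_gpu            # aggregate HBM bandwidth
print(total_mem_tb)
print(total_bw)  # 38.4 TB/s across the chassis
```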

🟦 Hybrid Visualization + AI Configuration | 8× NVIDIA RTX PRO™ 6000 Blackwell Server GPUs
  • Architecture: NVIDIA Blackwell (GB202)
  • Memory: 96 GB GDDR7 ECC per GPU (768 GB total)
  • Cores: 24,064 CUDA | 752 Tensor | 188 RT
  • Memory Bandwidth: Up to 1.6 TB/s | Interface: PCIe Gen 5 x16
  • Ideal For: AI inference, rendering, simulation, and hybrid compute pipelines
🧠 CPU & Memory Options
  • Processors: Up to 2 × AMD EPYC™ 9654 (96 cores / 192 threads each)
    • Total of 192 CPU cores | 384 threads
    • 12-channel DDR5 support per CPU @ 4800 MT/s
  • Memory Capacity: Up to 6 TB DDR5 ECC RDIMM (48 DIMM slots)
    • Supports 2 DPC configurations for maximum bandwidth
    • Optional CXL memory expansion for in-memory databases
  • Interconnect: PCIe Gen 5 x16 lanes for each GPU and NVLink Bridge support
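The DIMM count and the 2 DPC figure above imply the per-socket channel count and a theoretical peak memory bandwidth; a quick derivation using standard DDR5 peak math (real-world efficiency is lower):

```python
dimm_slots = 48        # total DIMM slots from the configuration above
sockets = 2
dimms_per_channel = 2  # "2 DPC" from the configuration above

channels_per_socket = dimm_slots // sockets // dimms_per_channel  # 12
mt_s = 4800              # DDR5-4800 transfer rate
bytes_per_transfer = 8   # 64-bit DDR5 channel data path

gb_s_per_socket = channels_per_socket * mt_s * bytes_per_transfer / 1000
print(channels_per_socket)  # 12 DDR5 channels per EPYC 9004 socket
print(gb_s_per_socket)      # 460.8 GB/s theoretical peak per socket
```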
⚙️ System Highlights
  • Form Factor: 4U rackmount
  • Storage: Up to 16 × 2.5" NVMe/SATA hot-swap bays + 2 M.2 NVMe boot drives
  • Networking: Dual 10/25 Gb Ethernet standard + optional 100 Gb InfiniBand (NDR / HDR)
  • Power Supply: 2 × 3000 W 80 PLUS Titanium redundant PSUs
  • Cooling: Optimized front-to-rear airflow with zoned fans for multi-GPU operation
  • Manageability: ASUS ASMB11-iKVM / BMC remote management

Copyright © 2025 Mazda Computing.  All Rights Reserved.