MAZDA COMPUTING: SERVER GPU HPC SPECIALISTS
  • NVIDIA
    • Vera Rubin
    • PNY NVIDIA Professional GPUs
    • PNY NVIDIA Data Center GPUs
    • DEEP LEARNING & AI
    • DGX Spark
    • HPC
    • Cloud & DataCenter
    • Autonomous Machines
  • SERVERS
    • Intel Xeon
    • AMD EPYC
    • Penguin Computing
    • Supermicro
    • Gov Servers
  • SOLUTIONS
    • Network, Security and Electrical Solutions
    • Graid
    • OEM Solutions
    • Services
  • Fed
    • CDW-G ICPT
    • Sandia JIT Contract
    • NASA BPA
    • Capabilities
    • NLIT 2022
  • Company
    • About
    • Careers
    • Contact
    • Terms and Conditions

Six New Chips.
One AI Supercomputer.

Announced by NVIDIA, with rack-scale systems in development at Supermicro, the Vera Rubin platform integrates next-generation CPU, GPU, interconnect, and networking technologies to overcome scaling, bandwidth, and data-movement challenges in large-scale AI and HPC environments.
The Vera Rubin platform represents a system-level approach to AI infrastructure, combining new CPU and GPU architectures with high-bandwidth interconnects and advanced networking components. Key technologies include the Vera CPU, Rubin GPU, NVLink 6, ConnectX-9, BlueField-4, and Spectrum-6 Ethernet, all designed to address scaling challenges associated with modern AI training, inference, and hybrid HPC workloads. 

Complementing NVIDIA’s announcement, Supermicro confirmed that NVIDIA Vera Rubin NVL72 and HGX Rubin NVL8 systems are in full development. These designs provide early insight into expected compute density, GPU scaling models, and fabric-integration approaches for future deployments.

NVIDIA Rubin GPU

Rubin GPUs pair HBM4 memory with a 50 PF NVFP4 Transformer Engine, built for the next generation of AI.

NVIDIA Vera CPU

Vera CPUs are purpose-built for data movement and agentic reasoning, delivering high-bandwidth, energy-efficient compute with deterministic performance.

NVIDIA NVLink 6 Switch

NVLink 6 switches feature 3.6 terabytes per second (TB/s) of all-to-all, scale-up bandwidth per GPU, enabling high-speed GPU-to-GPU communications for AI.
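For rough capacity planning, the per-GPU figure above can be turned into an aggregate rack number. The sketch below is a back-of-envelope calculation, not an NVIDIA specification: it assumes 72 GPUs per NVL72-class rack and simply multiplies, ignoring topology and protocol overhead.

```python
# Back-of-envelope aggregate scale-up bandwidth for an NVL72-class rack.
# Assumptions (hypothetical sizing, not vendor figures): 72 GPUs per rack,
# each with the 3.6 TB/s all-to-all NVLink 6 bandwidth quoted above.
GPUS_PER_RACK = 72
NVLINK6_TBPS_PER_GPU = 3.6  # terabytes per second, per GPU

aggregate_tbps = GPUS_PER_RACK * NVLINK6_TBPS_PER_GPU
print(f"Aggregate scale-up bandwidth: {aggregate_tbps:.1f} TB/s")
# → 259.2 TB/s for a 72-GPU rack
```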

NVIDIA ConnectX-9 SuperNIC

ConnectX-9 SuperNICs deliver 1.6 terabits per second (Tb/s) of per-GPU bandwidth, with programmable remote direct memory access (RDMA) for low-latency, GPU-direct networking at massive scale.
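Note the units on this page: NVLink 6 is quoted in terabytes per second (TB/s), while the SuperNIC is quoted in terabits per second (Tb/s). Mixing the two is a common slip, so a small converter helps when comparing figures; this is an illustrative unit-conversion sketch using decimal prefixes.

```python
def tbps_bits_to_gbps_bytes(tb_per_s: float) -> float:
    """Convert terabits/s to gigabytes/s (decimal: 1 Tb = 1000 Gb, 8 bits = 1 byte)."""
    return tb_per_s * 1000 / 8

# The 1.6 Tb/s per-GPU figure quoted above, expressed in bytes:
print(tbps_bits_to_gbps_bytes(1.6))  # 1.6 Tb/s = 200.0 GB/s
```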

NVIDIA BlueField-4 DPU

BlueField-4 DPUs accelerate data processing across storage, networking, cybersecurity, and elastic scaling in AI factories.

NVIDIA Spectrum-X Ethernet Co-Packaged Optics

Spectrum-X Ethernet scale-out switches with integrated silicon photonics deliver 5x better power efficiency, 10x higher network resiliency, and up to 5x more uptime than traditional networking with pluggable transceivers.
Copyright © 2025 Mazda Computing.  All Rights Reserved.