MAZDA COMPUTING: SERVER GPU HPC SPECIALISTS

AI Infrastructure Solutions

Total AI Solutions for Training, Inference, Metaverse, Media and Graphics

Supermicro’s NVIDIA MGX™ Systems

Infinite Possibilities in a Modular Building Block Platform Supporting Today’s and Future GPUs, CPUs, and DPUs
  • GPU: NVIDIA H100 PCIe, H100 NVL PCIe, L40, and more
  • NVIDIA MGX Reference Design: Enables building a wide array of platforms supporting both Arm®- and x86-based servers, compatible with current and future generations of GPUs, CPUs, and DPUs
  • CPU: NVIDIA Grace™ CPU Superchip or 4th Gen Intel® Xeon® Scalable processor
  • Memory: Up to 480GB On-board LPDDR5X DRAM (with Grace CPU Superchip) or up to 2TB 4800MT/s ECC DDR5 DRAM (with Intel CPU)
  • Drives: Up to 16 Hot-Swap E1.S NVMe
  • Networking: Supports NVIDIA BlueField-3 Data Processing Unit
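
As a quick sanity check on a configuration like the one above, the short Python sketch below lists the installed GPUs and their memory via NVML. It is illustrative only and not part of the MGX reference design; it assumes the nvidia-ml-py (pynvml) package and NVIDIA drivers are installed.

# Illustrative sketch: enumerate GPUs and their memory on an MGX-class node.
# Assumes `pip install nvidia-ml-py` and a working NVIDIA driver stack.
from pynvml import (
    nvmlInit,
    nvmlShutdown,
    nvmlDeviceGetCount,
    nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetName,
    nvmlDeviceGetMemoryInfo,
)

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        name = nvmlDeviceGetName(handle)       # e.g. "NVIDIA H100 PCIe" or "NVIDIA L40"
        if isinstance(name, bytes):            # older bindings return bytes
            name = name.decode()
        mem = nvmlDeviceGetMemoryInfo(handle)  # total/free/used in bytes
        print(f"GPU {i}: {name}, {mem.total / 2**30:.0f} GiB")
finally:
    nvmlShutdown()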

GPU SuperServer SYS-221GE-NR

Key Applications
  • High Performance Computing
  • AI/Deep Learning Training
  • Large Language Model (LLM) Natural Language Processing
Key Features
  1. High density 2U GPU system with up to 4 NVIDIA® H100 PCIe GPUs
    • Highest GPU-to-GPU communication using NVIDIA® NVLink™
    • PCIe-based H100 NVL with NVLink support
  2. 32 DIMM slots; up to 8TB (32x 256GB) 4800MT/s ECC DDR5 DRAM
  3. 7 PCIe 5.0 x16 FHFL slots
  4. NVIDIA BlueField-3 Data Processing Unit support for the most demanding accelerated computing workloads
  5. E1.S NVMe storage support
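
To illustrate how the up-to-four H100 PCIe GPUs in a system like this are commonly driven for AI/LLM training, here is a minimal PyTorch DistributedDataParallel sketch. It is illustrative only: the model and data are placeholders, not a benchmarked workload, and the torchrun launch line is just one example.

# Minimal multi-GPU training sketch (illustrative only), launched with e.g.:
#   torchrun --standalone --nproc_per_node=4 train.py
# With NVLink-bridged H100 NVL pairs, the NCCL backend can route all-reduce traffic over NVLink.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")           # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])        # set by torchrun
    torch.cuda.set_device(local_rank)

    # Placeholder model and data; a real LLM workload would go here.
    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 4096, device=f"cuda:{local_rank}")
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()                               # gradients all-reduced across GPUs
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()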

GPU ARS-221GL-NR

Key Applications
  • High Performance Computing
  • AI/Deep Learning Training
  • Large Language Model (LLM) Natural Language Processing
  • General purpose CPU workloads, including analytics, data science, simulation, HPC, application servers, and more
Key Features
  1. High density 2U GPU system with up to 4 NVIDIA® H100 PCIe GPUs
    • Highest GPU-to-GPU communication using NVIDIA® NVLink™
    • PCIe-based H100 NVL with NVLink support
  2. Energy-efficient NVIDIA Grace™ CPU Superchip with 144 cores
  3. 480GB or 240GB LPDDR5X onboard memory options for minimum latency and maximum power efficiency
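
Because this is an Arm-based (Grace) platform rather than x86, software stacks are often gated on CPU architecture. The small sketch below is purely illustrative (it assumes Python with the psutil package) and simply reports the architecture and visible system memory, for example before pulling aarch64 container images or wheels.

# Illustrative check only: report CPU architecture and total system memory.
# On a Grace CPU Superchip node this is expected to show "aarch64" and the
# LPDDR5X capacity (240GB or 480GB options); on an Intel node, "x86_64".
import platform
import psutil

arch = platform.machine()
total_gib = psutil.virtual_memory().total / 2**30
print(f"CPU architecture: {arch}")
print(f"Total system memory: {total_gib:.0f} GiB")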

Turn-Key Total Solutions

Accelerate and simplify your AI deployment with AI-ready infrastructure solutions. As a leading supplier of on-prem AI infrastructure, Supermicro offers turn-key reference designs that leverage its vast experience building some of the world’s largest AI clusters. These solutions span from large-scale training clusters to intelligent edge inferencing.

Simplify and Accelerate Deployment

Scale with Building Block Design

Reduce Costs and Environmental Impact 

Large-Scale NVIDIA H100 AI Training Solution with Liquid Cooling

Embrace an Order-of-Magnitude Leap in Performance with Supermicro Rack Scale AI Solutions
  • Supreme AI Cluster for Exascale Computing
  • Scalable Design Achieving Unprecedented Peak Performance
  • Most Advanced Processors & Networking
  • Flexible and Superior Cooling Options
  • Representative Performance Benchmarks
  • Supermicro Advantages with Plug-and-Play Rack Scale AI Solutions
Read the Solution Brief

Develop and Execute Advanced AI and HPC Applications in Your Office

Advanced System Reduces Power Consumption and Noise Levels While Delivering Massive AI and HPC Compute Performance
  • AI and HPC Use Cases
  • AI Development and Execution Locations
  • NVIDIA AI Enterprise Development Platform
  • AI Development System Hardware/Software Components
  • Liquid Cooled AI Development System
  • Supermicro AI Product Line
Read the Solution Brief

Create an Efficient and Scalable On-Prem AI Cloud Using NVIDIA AI Enterprise and Red Hat OpenShift

Supermicro NVIDIA-Certified Systems with AMD EPYC Processors
  • Red Hat OpenShift
  • NVIDIA AI Enterprise Software Suite
  • AI Software Stack, Enterprise Support Services
  • Management & Security
  • Supermicro Reference Architecture
  • Example Applications
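
On a Kubernetes/OpenShift-based AI cloud, GPUs are typically exposed to workloads by the NVIDIA device plugin as the nvidia.com/gpu resource. The sketch below is illustrative only, using the Kubernetes Python client; the namespace, pod name, and container image are placeholders rather than values from the Supermicro reference architecture.

# Illustrative only: schedule a single-GPU pod on a cluster where the NVIDIA
# GPU Operator / device plugin exposes GPUs as the "nvidia.com/gpu" resource.
# Namespace, pod name, and image tag are placeholders, not reference values.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test", namespace="ai-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda-check",
                image="nvcr.io/nvidia/cuda:12.2.0-base-ubuntu22.04",  # placeholder image
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"},  # request one GPU for this container
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ai-demo", body=pod)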
Read the Solution Brief
Copyright © 2025 Mazda Computing.  All Rights Reserved.