MAZDA COMPUTING: SERVER GPU HPC SPECIALISTS

NVIDIA DGX Spark

NVIDIA DGX Spark is a powerful, compact desktop AI system built on the Grace Blackwell Superchip, delivering up to 1000 TOPS of AI performance. With 128GB of unified memory, it supports local prototyping, fine-tuning, and inferencing of models up to 200B parameters—and up to 405B parameters when connecting two systems with NVIDIA ConnectX. The system mirrors NVIDIA’s data center software stack, making deployment seamless across desktop, cloud, or HPC environments.
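
As a concrete illustration of local prototyping, a model can be pulled down and run entirely at the desktop. The sketch below is a minimal example assuming the Hugging Face Transformers and bitsandbytes packages; the model ID and quantization settings are placeholders, not a list of officially supported configurations.

  # Minimal local-inference sketch (assumes transformers + bitsandbytes;
  # the model ID below is a placeholder, not an endorsement).
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

  model_id = "meta-llama/Llama-3.1-70B-Instruct"   # placeholder model ID
  quant = BitsAndBytesConfig(load_in_4bit=True,
                             bnb_4bit_compute_dtype=torch.bfloat16)

  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id,
      quantization_config=quant,
      device_map="auto",        # let Accelerate place weights in unified memory
  )

  prompt = "Summarize the benefits of local AI prototyping in two sentences."
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
  output = model.generate(**inputs, max_new_tokens=128)
  print(tokenizer.decode(output[0], skip_special_tokens=True))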

NVIDIA GPU, CPU, Networking, and AI Software Technologies

As deep learning neural networks grow more complex, training times have increased dramatically, lowering productivity and raising costs. NVIDIA's deep learning technology and complete solution stack significantly accelerate AI training, delivering deeper insights in less time, substantial cost savings, and a faster time to ROI.

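As one hedged sketch of what accelerated training looks like in practice on Tensor Core hardware, the standard PyTorch mixed-precision pattern is shown below with a toy model and random data; it is illustrative only, not a tuned DGX Spark recipe.

  # Mixed-precision training sketch with PyTorch AMP (toy model and data,
  # shown only to illustrate the Tensor Core-friendly training pattern).
  import torch
  import torch.nn as nn

  device = "cuda"
  model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(),
                        nn.Linear(4096, 10)).to(device)
  optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
  scaler = torch.cuda.amp.GradScaler()             # loss scaling for FP16 training
  loss_fn = nn.CrossEntropyLoss()

  for step in range(100):
      x = torch.randn(64, 1024, device=device)          # toy input batch
      y = torch.randint(0, 10, (64,), device=device)    # toy labels

      optimizer.zero_grad(set_to_none=True)
      with torch.autocast(device_type="cuda", dtype=torch.float16):
          loss = loss_fn(model(x), y)              # matmuls run in half precision

      scaler.scale(loss).backward()
      scaler.step(optimizer)
      scaler.update()
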
NVIDIA GB10 Superchip

Experience up to 1000 AI TOPS of performance at FP4 precision with the NVIDIA Grace Blackwell architecture.

128 GB of Coherent Unified System Memory

Run AI development and testing workloads on AI models of up to 200 billion parameters right at your desktop, backed by the system's large, unified memory.
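
A rough back-of-the-envelope check of why a model that size fits, counting weights only and ignoring activation and KV-cache overhead:

  # FP4 stores 4 bits, i.e. 0.5 bytes, per parameter.
  params = 200e9                 # 200 billion parameters
  bytes_per_param = 0.5
  weights_gb = params * bytes_per_param / 1e9
  print(f"~{weights_gb:.0f} GB of weights")   # ~100 GB, inside the 128 GB pool
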
NVIDIA ConnectX Networking

High-performance NVIDIA ConnectX networking lets you link two NVIDIA DGX Spark systems to work with AI models of up to 405 billion parameters.
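
How the two linked systems cooperate depends on the framework, but a minimal connectivity check with PyTorch's NCCL backend might look like the sketch below; the launch command, addresses, and script name are assumptions, not a documented DGX Spark pairing procedure.

  # Launch one copy of this script on each system with torchrun, e.g.:
  #   torchrun --nnodes=2 --nproc_per_node=1 --node_rank=<0|1> \
  #            --rdzv_backend=c10d --rdzv_endpoint=<first-node-address>:29500 check.py
  import torch
  import torch.distributed as dist

  dist.init_process_group(backend="nccl")      # NCCL traffic rides the ConnectX link
  rank = dist.get_rank()
  world = dist.get_world_size()

  # A real 405B-parameter job would shard the model across both nodes with
  # tensor or pipeline parallelism; here we only confirm the nodes can talk.
  t = torch.ones(1, device="cuda") * rank
  dist.all_reduce(t)                           # sums the per-rank values
  print(f"rank {rank}/{world}: all_reduce -> {t.item()}")
  dist.destroy_process_group()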

NVIDIA AI Software Stack

Use a full-stack solution for generative AI workloads, including tools, frameworks, libraries, and pretrained models.
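
In practice, that stack is often consumed through a containerized, OpenAI-compatible inference endpoint (an NVIDIA NIM, for example). The sketch below assumes such a server is already running locally; the URL and model name are placeholders.

  # Query a locally hosted, OpenAI-compatible inference endpoint.
  from openai import OpenAI

  client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

  response = client.chat.completions.create(
      model="meta/llama-3.1-8b-instruct",      # placeholder model name
      messages=[{"role": "user",
                 "content": "Draft a one-paragraph project status summary."}],
      max_tokens=200,
  )
  print(response.choices[0].message.content)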

Key Specifications

  • Built on NVIDIA GB10 Grace Blackwell Superchip
  • NVIDIA Blackwell GPU with fifth-generation Tensor Core technology
  • NVIDIA Grace CPU with 20-core high-performance Arm architecture
  • Up to 1000 TOPS of AI performance using FP4
  • 128 GB of coherent, unified system memory
  • Support for up to 200 billion parameter models
  • NVIDIA ConnectX™ networking to link two systems to work with models up to 405 billion parameters
  • Up to 4 TB of NVMe storage