Mazda Computing Systems Featuring NVIDIA Ampere GPUs Provide State-of-the-Art Performance
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and high-performance computing (HPC) to tackle the world’s toughest computing challenges.
Third-Generation Tensor Cores
Multi-Instance GPU (MIG)
Smarter and Faster Memory
Converged Acceleration at the Edge
Faster Deep Learning with Sparsity Support
New sparsity support in A100 Tensor Cores exploits fine-grained structured sparsity in deep learning networks to double the throughput of Tensor Core operations.
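The fine-grained structured sparsity the A100 exploits is the 2:4 pattern: in every group of four consecutive weights, at most two are nonzero. The hypothetical NumPy sketch below (function name `prune_2_4` is my own, not an NVIDIA API) illustrates how a dense weight tensor can be pruned to that pattern by keeping the two largest-magnitude values per group of four.

```python
import numpy as np

def prune_2_4(weights):
    """Enforce 2:4 structured sparsity: in every group of four
    consecutive weights, zero the two smallest-magnitude entries,
    keeping the two largest. (Illustrative only; real workflows use
    NVIDIA's pruning tools and then fine-tune to recover accuracy.)"""
    w = np.asarray(weights, dtype=np.float32).copy()
    groups = w.reshape(-1, 4)                         # groups of 4
    # column indices of the two smallest-magnitude entries per group
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]
    np.put_along_axis(groups, drop, 0.0, axis=1)
    return groups.reshape(w.shape)

w = np.array([0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.4, 0.01],
             dtype=np.float32)
print(prune_2_4(w))  # → [ 0.9  0.   0.  -0.7  0.   0.3 -0.4  0. ]
```

Hardware then stores only the nonzero values plus a small index, which is what allows the Tensor Cores to skip the zeroed multiplies and double effective throughput.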
The larger and faster combined L1 cache and shared memory unit in A100 provides 1.5x the aggregate capacity per streaming multiprocessor (SM) compared to V100 (192 KB vs. 128 KB per SM), delivering additional acceleration for many HPC and AI workloads.
Several other new SM features improve efficiency and programmability and reduce software complexity.
HPC with NVIDIA A100
To unlock next-generation discoveries, scientists look to simulations to better understand complex molecules for drug discovery, physics for potential new sources of energy, and atmospheric data to better predict and prepare for extreme weather patterns.
A100 introduces double-precision Tensor Cores, providing the biggest milestone since the introduction of double-precision computing in GPUs for HPC. This enables researchers to reduce a 10-hour, double-precision simulation running on NVIDIA V100 Tensor Core GPUs to just four hours on A100. HPC applications can also leverage TF32 precision in A100’s Tensor Cores to achieve up to 10x higher throughput for single-precision dense matrix multiply operations.
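TF32 achieves its speedup by keeping FP32's 8-bit exponent range while reducing mantissa precision to FP16's 10 bits, so existing single-precision code runs unchanged at reduced precision. As a rough illustration (my own sketch, not NVIDIA's implementation, which rounds rather than truncates), TF32's precision loss can be simulated by truncating a float32 mantissa from 23 bits to 10:

```python
import numpy as np

def to_tf32(x):
    """Approximate TF32 precision by truncating a float32 mantissa to
    10 bits (masking off the low 13 mantissa bits). TF32 keeps FP32's
    full exponent range, so only precision, not range, is reduced.
    Illustrative truncation; actual hardware rounds."""
    bits = np.float32(x).view(np.uint32)
    return (bits & np.uint32(0xFFFFE000)).view(np.float32)

print(to_tf32(1.0))                 # exactly representable -> 1.0
print(to_tf32(np.float32(np.pi)))  # pi keeps ~3 decimal digits
```

Because inputs are reduced to TF32 only inside the Tensor Core matrix-multiply units (accumulation stays in FP32), many HPC and AI workloads see the throughput gain with little accuracy impact.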