
Radeon Instinct MI25 Launches + specifications


AMD has officially introduced the Radeon Instinct family of accelerators for deep learning. Leading the lineup is the Radeon Instinct MI25, based on a Vega 10 GPU with 4096 stream processors, 16GB of HBM2 memory and a 300W TDP.

The GPU appears to be clocked lower than the Radeon Vega Frontier Edition, delivering ‘only’ 12.3 TFLOPS of FP32 compute (an implied peak clock of roughly 1500 MHz). The memory offers the same 484 GB/s of bandwidth. AMD also confirmed that the MI6 and MI8 are based on older architectures: Fiji powers the Radeon Instinct MI8 with all 4096 stream processors and 4GB of HBM1 memory, while the Instinct MI6 is based on Polaris 10 with 2304 shader cores and 16GB of GDDR5 memory.
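
As a quick sanity check on that clock estimate: peak FP32 throughput on a GCN-style GPU works out to two FLOPs (one fused multiply-add) per stream processor per cycle, so the implied clock follows directly from the quoted 12.3 TFLOPS. A minimal sketch of that arithmetic in Python, using only the numbers quoted in this article:

    # Back-of-the-envelope check of the MI25 figures quoted above.
    # Peak FP32 = stream processors x 2 FLOPs per cycle (FMA) x clock.
    STREAM_PROCESSORS = 4096
    PEAK_FP32_TFLOPS = 12.3  # figure quoted by AMD

    # Solve for the implied peak engine clock in MHz.
    implied_clock_mhz = PEAK_FP32_TFLOPS * 1e12 / (STREAM_PROCESSORS * 2) / 1e6
    print(f"Implied peak clock: {implied_clock_mhz:.0f} MHz")  # ~1501 MHz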

  • Vega 10 Architecture
  • 4096 Stream Processors
  • 24.6 TFLOPS Half Precision (FP16)
  • 12.3 TFLOPS Single Precision (FP32)
  • 768 GFLOPS Double Precision (FP64)
  • 16GB HBM2 Memory
  • 484GB/sec Memory Bandwidth
  • 300W TDP
  • PCIe Form Factor
  • Full Height Dual Slot
  • Passive Cooling
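
Note how the three throughput figures in that list relate: Vega 10 executes packed FP16 at twice the FP32 rate and FP64 at 1/16 of the FP32 rate, which is where the 24.6 TFLOPS and 768 GFLOPS numbers come from. A small sketch of those ratios, derived purely from the quoted figures:

    # Precision ratios implied by the MI25 spec list above.
    peak_fp32_tflops = 12.3
    peak_fp16_tflops = peak_fp32_tflops * 2         # packed FP16 runs at 2x the FP32 rate
    peak_fp64_gflops = peak_fp32_tflops / 16 * 1e3  # FP64 runs at 1/16 the FP32 rate
    print(peak_fp16_tflops)  # 24.6 TFLOPS
    print(peak_fp64_gflops)  # ~769 GFLOPS (AMD quotes 768)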

Radeon Instinct’s three initial accelerators are designed to address a wide range of machine intelligence applications:

  • The Radeon Instinct™ MI25 accelerator, based on the “Vega” GPU architecture with a 14nm FinFET process, will be the world’s ultimate training accelerator for large-scale machine intelligence and deep learning datacenter applications. The MI25 delivers superior FP16 and FP32 performance in a passively-cooled single GPU server card with 24.6 TFLOPS of FP16 or 12.3 TFLOPS of FP32 peak performance through its 64 compute units (4,096 stream processors). With 16GB of ultra-high bandwidth HBM2 ECC GPU memory and up to 484 GB/s of memory bandwidth, the Radeon Instinct MI25’s design is optimized for massively parallel applications with large datasets for Machine Intelligence and HPC-class system workloads.
  • The Radeon Instinct™ MI8 accelerator, harnessing the high-performance, energy-efficient “Fiji” GPU architecture, is a small form factor HPC and inference accelerator with 8.2 TFLOPS of peak FP16|FP32 performance at less than 175W board power and 4GB of High-Bandwidth Memory (HBM) on a 512-bit memory interface. The MI8 is well suited for machine learning inference and HPC applications.
  • The Radeon Instinct™ MI6 accelerator, based on the acclaimed “Polaris” GPU architecture, is a passively cooled inference accelerator with 5.7 TFLOPS of peak FP16|FP32 performance at 150W peak board power and 16GB of ultra-fast GDDR5 GPU memory on a 256-bit memory interface. The MI6 is a versatile accelerator ideal for HPC and machine learning inference and edge-training deployments.
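
For a rough sense of how the three accelerators compare on paper, dividing each card’s quoted peak FP16 throughput by its quoted board power gives a simple peak-efficiency figure. A minimal sketch using only the numbers above (the MI8’s power is quoted as “less than 175W”, so its result is a lower bound):

    # Rough peak-FP16-per-watt comparison from the figures quoted above.
    cards = {
        # name: (peak FP16 TFLOPS, board power in watts)
        "MI25": (24.6, 300),
        "MI8":  (8.2, 175),   # quoted as "less than 175W", so this is a lower bound
        "MI6":  (5.7, 150),
    }
    for name, (tflops, watts) in cards.items():
        print(f"{name}: {tflops / watts * 1000:.0f} GFLOPS per watt (peak FP16)")
    # MI25: ~82, MI8: ~47, MI6: ~38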

