
Samsung Puts Intelligence into High-Bandwidth Memory | eWEEK

Samsung, more widely known for making television screens, smartphones and other popular consumer devices, is also a world leader in producing computer memory. The South Korean IT giant announced that it has developed the industry's first high-bandwidth memory (HBM) chip integrated with artificial intelligence processing power: the HBM-PIM.

Just as Intel, AMD, NVIDIA and others are baking security, networking and other functionality into their processors, Samsung is doing the same, only with AI. The new processing-in-memory (PIM) architecture brings real-time AI computing capabilities inside high-performance memory in order to accelerate large-scale processing in data centers, high-performance computing (HPC) systems and AI-enabled mobile applications.

The pioneering HBM-PIM is the industry's first programmable PIM solution tailored for diverse AI-driven workloads such as HPC, training and inference, Samsung said. The company plans to build on this by further collaborating with AI solution providers on even more advanced PIM-powered applications.

The HBM-PIM design has demonstrated "impressive performance and power gains on important classes of AI applications," Rick Stevens of Argonne National Laboratory said in a media advisory.

Most of today's computing systems are based on the von Neumann architecture, which uses separate processor and memory units to carry out millions of intricate data-processing tasks. This sequential approach requires data to move back and forth constantly, creating a system-slowing bottleneck, especially when handling ever-growing volumes of data.

Instead, the HBM-PIM brings processing power directly to where the data is stored by placing a DRAM-optimized AI engine inside each memory bank (a storage sub-unit), enabling parallel processing and minimizing data movement. When applied to Samsung's existing HBM2 Aquabolt solution, the new architecture can deliver more than twice the system performance while reducing energy consumption by more than 70%, the company claimed. The HBM-PIM also requires no hardware or software changes, allowing faster integration into existing systems, Samsung said.
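The traffic savings can be illustrated with a toy model. The sketch below is purely conceptual and assumes invented bank counts and workloads; it is not Samsung's actual design, but it shows why reducing data inside each bank and shipping only partial results cuts bus traffic dramatically compared with hauling every word to a central processor.

```python
# Toy model: von Neumann-style processing vs. processing-in-memory (PIM).
# Bank counts and data sizes are invented for illustration only.

def von_neumann_sum(banks):
    """Host fetches every element over the bus, then computes the total."""
    words_moved = sum(len(bank) for bank in banks)  # all data crosses the bus
    total = sum(x for bank in banks for x in bank)
    return total, words_moved

def pim_sum(banks):
    """Each bank reduces its own data locally; only partial sums cross the bus."""
    partials = [sum(bank) for bank in banks]  # computed inside each bank
    words_moved = len(partials)               # one result word per bank
    return sum(partials), words_moved

# 16 hypothetical memory banks holding 1,000 words each
banks = [list(range(1000)) for _ in range(16)]

vn_total, vn_moved = von_neumann_sum(banks)
pim_total, pim_moved = pim_sum(banks)

assert vn_total == pim_total          # same answer either way
print(vn_moved, pim_moved)            # 16000 vs. 16 words over the bus
```

The result is identical in both cases; what changes is how many words cross the processor-memory boundary, which is where the real design spends both time and energy.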

Samsung's paper on the HBM-PIM was selected for presentation at the renowned International Solid-State Circuits Virtual Conference (ISSCC), which ended Feb. 22. Samsung's HBM-PIM is now being tested inside AI accelerators by leading AI solution partners, with all validations expected to be completed within the first half of this year, the company said.
