HBM is a high-speed memory interface used with high-performance graphics accelerators and network devices. In October 2021, SK hynix developed HBM3, the fourth generation of HBM technology, which vertically stacks multiple dynamic random-access memory (DRAM) chips to sharply raise the data-processing rate.
SK hynix described HBM3 as the fastest DRAM in the world, with the largest capacity and a significantly improved level of quality. HBM3 can process up to 819 gigabytes (GB) per second, meaning that 163 full-HD movies can be transmitted in a single second, a 78 percent increase in data-processing speed over the previous generation, HBM2E.
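The figures above can be sanity-checked with simple arithmetic. This is a sketch; the article does not state HBM2E's bandwidth, so the 460 GB/s per-stack figure used here is an assumption based on published HBM2E specifications.

```python
# Sanity-check of the bandwidth figures quoted in the article.
# Assumption: 460 GB/s per-stack bandwidth for HBM2E (not stated in the article).
HBM3_BANDWIDTH_GBPS = 819   # per-stack bandwidth quoted for HBM3
HBM2E_BANDWIDTH_GBPS = 460  # assumed per-stack bandwidth for HBM2E
MOVIES_PER_SECOND = 163     # full-HD movies per second, per the article

# Implied size of one full-HD movie
movie_size_gb = HBM3_BANDWIDTH_GBPS / MOVIES_PER_SECOND
print(f"Implied movie size: {movie_size_gb:.2f} GB")  # ~5 GB per movie

# Implied speed-up over HBM2E
increase_pct = (HBM3_BANDWIDTH_GBPS / HBM2E_BANDWIDTH_GBPS - 1) * 100
print(f"Increase over HBM2E: {increase_pct:.0f}%")    # ~78%
```

Both quoted numbers are consistent: 819 GB/s divided by 163 movies implies roughly 5 GB per full-HD movie, and 819 GB/s is about 78 percent above the assumed 460 GB/s of HBM2E.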
"HBM3 has opened a new market for superfast AI semiconductors," an unnamed SK hynix official said in a statement on June 9. Depending on Nvidia's schedule, SK Hynix will increase the production of HBM3. Nvidia designs graphics processing units, systems on chip (SoCs) and mobile processors for smartphones and tablets as well as vehicle navigation and entertainment systems.
HBM3 improves the accelerated computing performance of "H100," Nvidia's hardware accelerator, which will be used in various AI-based high-tech sectors. Accelerated computing speeds up workloads using specialized hardware, often with parallel-processing methods that bundle frequently repeated tasks.
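As a loose illustration of the parallel-processing idea described above, the sketch below fans the same small, frequently repeated task out across a pool of workers using Python's standard library. This is a generic concurrency example, not Nvidia's CUDA stack or any SK hynix software.

```python
# Generic illustration of bundling a repeated task and running the
# copies concurrently (standard-library Python, not Nvidia's stack).
from concurrent.futures import ThreadPoolExecutor

def repeated_task(x: int) -> int:
    """A stand-in for a small, frequently repeated computation."""
    return x * x

if __name__ == "__main__":
    inputs = range(8)
    # Serial version: one result at a time
    serial = [repeated_task(x) for x in inputs]
    # "Accelerated" version: the same task fanned out across workers
    with ThreadPoolExecutor() as pool:
        parallel = list(pool.map(repeated_task, inputs))
    assert serial == parallel
    print(parallel)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Real accelerators apply the same idea at vastly larger scale, with thousands of hardware lanes executing the repeated operation simultaneously, which is why memory bandwidth like HBM3's becomes the limiting factor.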
© Aju Business Daily & www.ajunews.com Copyright: All materials on this site may not be reproduced, distributed, transmitted, displayed, published or broadcast without the authorization from the Aju News Corporation.