Authors: J. Langguth and L. Burchard
Title: ML Accelerator Hardware: A Model for Parallel Sparse Computations?
Affiliation: Scientific Computing
Project(s): Department of High Performance Computing
Status: Published
Publication Type: Talks, invited
Year of Publication: 2022
Location of Talk: University of Vienna, Austria
Abstract

Recently, dedicated accelerator hardware for machine learning applications, such as the Graphcore IPU and the Cerebras WSE, has evolved from experimental prototypes into market-ready products, and it has the potential to constitute the next major architectural shift after GPUs saw widespread adoption a decade ago. In this talk we present the new hardware along with implementations of basic graph and matrix algorithms, show early results on the attainable performance, and discuss the difficulties of establishing fair comparisons with other architectures. We follow up by discussing the implications of the architecture for algorithm design and programming, along with the wider implications of adopting such hardware.
Citation Key: 42830