Title: Hardware for ML and ML for Hardware
Speaker: Prof. Aman Arora
Date and Time: Monday, April 1st, 11:00am
Location: EH 2430
Abstract:
This talk delves into the intersection of hardware design and machine learning, showcasing how the two fields benefit each other. The presentation will feature cutting-edge research projects that exemplify this dynamic. In the first half, I will discuss two research projects focused on optimizing hardware for machine learning (ML): the first involves architecting ML-specialized Field Programmable Gate Arrays (FPGAs), while the second focuses on designing a processing-in-memory (PIM) accelerator for ML. In the second half, I will share two projects in which ML is being used to transform hardware design: one employing machine learning for power, performance, and area (PPA) prediction for FPGAs, and another utilizing large language models (LLMs) for hardware design and verification. I will present both published results and ongoing work from these projects. This talk highlights how the interplay between the two fields can be leveraged to advance both, promising a future of intelligent computing.
Biography:
Aman Arora is an assistant professor in the School of Computing and Augmented Intelligence at Arizona State University. He directs the Advent lab, where he and his students work on developing computer systems that can efficiently process modern workloads such as machine learning, with a special focus on reconfigurable systems. He received his Ph.D. from the University of Texas at Austin in 2023 and has over 10 years of experience in the semiconductor industry in design, verification, testing, and architecture roles.
Hosted By: Prof. Sitao Huang