Friday, May 3, 2024
12:00 – 1:00 pm EST
Presenter: Bing Li
The NSF AI Institute for Edge Computing (Athena) is pleased to present the next talk in its Seminar Series, “Design Methodology for Computing-in-memory-based Neural Network Accelerator” by Bing Li, on Friday, May 3, 2024, from 12-1pm EST, in person at Duke University, Wilkinson Building Room 221, and via Zoom: https://duke.zoom.us/j/93773313579?pwd=b3VWVjUxS0FuejFwL2RpcHY0c3FVZz09.
Meeting ID: 966 4710 5443
Passcode: 761854
Abstract: Traditional von Neumann architectures struggle with the "memory wall," a significant bottleneck due to the high computational and memory demands of neural networks. To address this challenge, researchers are adopting computing-in-memory (CiM) architectures, which process data directly within memory, drastically cutting memory access and computational costs while allowing for greater parallelism. This transition to CiM is pivotal in advancing computer technology and is essential for enabling more robust deployment of artificial intelligence applications.
However, CiM architectures present unique challenges compared to traditional neural network accelerators. Firstly, the hardware perspective reveals considerable diversity in terms of computing granularity and hardware resources. This diversity often does not align with the design of neural network operators and data precision, leading to inefficient resource utilization and impacting the energy efficiency of the CiM architecture. Secondly, the intricate coupling between memory devices in CiM crossbars complicates the mapping of pruned models, limiting the ability to deploy larger neural networks.
Given these challenges, our focus is on addressing the hardware-related constraints of CiM architectures to enable more efficient neural network deployment. Our approach involves developing methods that account for the hardware's diversity and optimize neural network operations through improved scheduling and mapping techniques. In this talk, I will introduce our recent progress on software-hardware co-design for highly efficient CiM neural network accelerators.
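To give a flavor of the mapping problem the abstract refers to, below is a minimal, hypothetical sketch (not the speaker's method): it tiles a dense weight matrix onto fixed-size CiM crossbars and reports how much of the allocated crossbar area is actually used. The crossbar dimensions, layer shape, and function names are illustrative assumptions only.

```python
# Minimal sketch, assuming 128x128 crossbars and a toy fully connected layer.
# It illustrates how hardware granularity drives mapping and how edge tiles
# cause the resource under-utilization mentioned in the abstract.
import numpy as np

XBAR_ROWS, XBAR_COLS = 128, 128  # assumed crossbar dimensions (illustrative)

def map_to_crossbars(weights: np.ndarray):
    """Split an (in_features x out_features) weight matrix into crossbar tiles.

    Returns a dict keyed by tile index (i, j) -> weight sub-block. Tiles at
    the matrix edges may be only partially filled.
    """
    rows, cols = weights.shape
    tiles = {}
    for i in range(0, rows, XBAR_ROWS):
        for j in range(0, cols, XBAR_COLS):
            tiles[(i // XBAR_ROWS, j // XBAR_COLS)] = \
                weights[i:i + XBAR_ROWS, j:j + XBAR_COLS]
    return tiles

def utilization(tiles):
    """Fraction of allocated crossbar cells that actually hold weights."""
    used = sum(t.size for t in tiles.values())
    allocated = len(tiles) * XBAR_ROWS * XBAR_COLS
    return used / allocated

if __name__ == "__main__":
    layer = np.random.randn(300, 200)  # example layer shape (assumed)
    tiles = map_to_crossbars(layer)
    print(f"{len(tiles)} crossbars, utilization = {utilization(tiles):.2%}")
```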
Bio: Bing Li is a tenured associate professor at Capital Normal University in Beijing. She was a postdoctoral research fellow with the Department of Electrical and Computer Engineering at Duke University, Durham, NC, USA, from 2017 to 2019. She received her Ph.D. from the University of Chinese Academy of Sciences in 2016.
She is committed to research on processing-in-memory architectures and system design. She has won the IEEE GLSVLSI'21 Best Paper Award (second place), the National Academy of Sciences Postdoctoral Scholarship, and the Beijing Youth Project Fund. She holds seven granted Chinese and US invention patents, four of which have been used in industrial products.
Dr. Bing Li has served on the technical program committees of the four top electronic design automation conferences (ASPDAC, ICCAD, DATE, and DAC), as session chair at DAC 2021, ASPDAC 2020, GLSVLSI'20 & GLSVLSI'21, ITC-Asia 2021, and other international conferences, and as a reviewer for IEEE TCAD, TCAS-I, TVLSI, TC, and other journals.
Host: Yiran Chen
Please contact info-athena@duke.edu if you have any questions or would like to learn more about the Athena Institute. Connect with us @TheAthenaInst and on LinkedIn https://www.linkedin.com/in/the-athena-ai-institute/