Athena Seminar Series: Computer Architectures for Mind-Machine Teaming

Friday, February 3, 2023 - 12:00 to 13:00


Bio: Abhishek Bhattacharjee is an Associate Professor of Computer Science at Yale University. His work spans computer architecture, operating systems, and compilers, and has been recognized multiple times by IEEE Micro's Top Picks in Computer Architecture. Abhishek's work on hardware optimizations for memory translation was integrated into AMD's flagship Zen architecture in 2017. By 2020, AMD had shipped over 260 million Zen CPUs incorporating Abhishek's memory translation optimizations, with that number significantly higher today and continuing to grow. Abhishek's work on OS optimizations for memory translation has been integrated into Linux and has influenced NVIDIA's memory translation hardware. More recently, Abhishek has been building flexible, low-power architectures for brain-computer interfacing -- the topic of this talk.

Abstract: Direct brain-computer communication promises to treat neurological disorders, explain brain function, and augment all aspects of human cognition and decision-making. Delivering on the promise of this form of mind-machine teaming requires the design of computer systems that delicately balance the tight power, latency, and bandwidth trade-offs needed to decode brain activity, stimulate biological neurons, and control assistive devices most effectively.

This talk presents the design of two systems that enable a range of brain-computer interactions while navigating the power and performance trade-offs posed by brain interfacing. The first system, HALO, is an accelerator-rich processing fabric that enables flexible single-brain-site interfacing at high data rates using only tens of milliwatts of power. The second system, Hull, realizes multi-brain-site interfacing using a distributed system of networked accelerator-rich processing fabrics built atop HALO. Driven by important brain-computer interface applications (e.g., epilepsy, movement disorders, paralysis), we explore systems research questions pertaining to the design and integration of hardware accelerators, the co-design of hardware accelerators with networking and storage stacks, and clean interfaces and abstractions for programmability. Key insights are validated by two chip tape-outs, the second in a 12 nm CMOS process.

Participating Universities

Duke University
MIT
NC A&T
Princeton University
University of Michigan
University of Wisconsin
Yale University