Organizers
General Chair:
Bruce Jacob, United States Naval Academy
Program Chairs:
Matthias Jung, University of Würzburg
Hameed Badawy, New Mexico State University
Publication Chair:
Wendy Elsasser, Rambus
Publicity Chair:
Chen Ding, University of Rochester
Web Chair:
Matthias Jung, University of Würzburg
Program Committee
- Abdel-Hameed Badawy, New Mexico State University
- Jonathan Beard, Google
- Yitzhak Birk, Technion
- Bruce Christenson, Intel
- Chen Ding, University of Rochester
- David Donofrio, Tactical Computing Laboratories
- Ronald Dreslinski, University of Michigan
- Wendy Elsasser, Rambus
- Dietmar Fey, University Erlangen-Nuremberg
- Maya Gokhale, LLNL
- Simon Hammond, US Dept of Energy / National Nuclear Security Administration
- Bruce Jacob, United States Naval Academy
- Michael Jantz, University of Tennessee
- Matthias Jung, University of Würzburg and Fraunhofer IESE
- John Leidel, Texas Tech University
- Petar Radojković, Barcelona Supercomputing Center
- Marc Reichenbach, U. Rostock
- Arun Rodrigues, Samsung
- Abhishek Singh, Samsung
- Chirag Sudarshan, FZ Jülich
- Robert Trout, Sadram Inc.
- Thomas Vogelsang, Rambus, Inc.
- Norbert Wehn, RPTU Kaiserslautern
- Ke Zhang, Institute of Computing Technology of Chinese Academy of Sciences; University of Chinese Academy of Sciences
Sponsors
Keynotes
Keynote 1: Candidates for New Chapters in Future Revisions of “Memory Systems”: Economics and Trends, “Future” Memory, and the Ever-Increasing Importance of Mathematics to Memory Systems
David Wang is a memory system architect who co-authored the book “Memory Systems: Cache, DRAM, Disk” with Bruce Jacob and Spencer Ng. In addition to his role as a working memory systems architect, David fancies himself a historian of memory device and system development over the course of the last 40 years. He has worked on various memory interface devices and memory system architectures at MetaRAM and Inphi, and was most recently employed by Samsung Semiconductor Inc. as a director of product planning in the memory division. David left Samsung in 2022 and is currently developing a new concept for memory system repair.
Keynote 2: Challenges and Opportunities in Memory Systems for AI Accelerators
Demand for processors with very high-bandwidth memory systems has exploded in concert with the rapid advances in deep learning and artificial intelligence. Within a decade, we can expect processors that require a memory system capable of delivering 100 terabytes per second from over 1 terabyte of capacity in less than 1 kilowatt. This simultaneous need to push the envelope for very high bandwidth at very low per-access energy to a large pool of data creates many challenges. This talk will detail some of these difficulties and discuss some of the approaches architects and memory designers might take to address them.
Mike O’Connor manages the Memory Architecture Research Group at NVIDIA. His group is responsible for future DRAM and memory system architecture research. In a prior role at NVIDIA, he was the memory system architecture lead for several generations of NVIDIA GPUs. Mike’s career has also included positions at AMD, Texas Instruments, Silicon Access Networks (a network-processor startup), Sun Microsystems, and IBM. At AMD, he drove much of the architectural definition for the High-Bandwidth Memory (HBM) specification. Mike has a BSEE from Rice University and an MSEE & PhD from the University of Texas at Austin.
The final program is shown below: