ReRAM-based Machine Learning
The transition towards exascale computing has driven major transformations in computing paradigms. The need to analyze and respond to the resulting large volumes of data has led to the adoption of machine learning (ML) and deep learning (DL) methods in a wide range of applications.
One of the major challenges is fetching data from memory and writing results back without running into the memory-wall bottleneck. To address this concern, in-memory computing (IMC) and its supporting frameworks have been introduced, offering ultra-low-power, high-density embedded storage. Resistive Random-Access Memory (ReRAM) is among the most promising IMC technologies thanks to its low leakage power, reduced power consumption and small hardware footprint, as well as its compatibility with CMOS technology, which is widely used in industry.
In this book, the authors introduce ReRAM techniques for performing distributed computing with IMC accelerators, present ReRAM-based IMC architectures that carry out the computations of ML and other data-intensive applications, and describe strategies for mapping ML designs onto these hardware accelerators.
The book serves as a bridge between researchers in the computing domain (ML and DL algorithm designers) and computing hardware designers.
About the Authors
Hao Yu is a professor in the School of Microelectronics at Southern University of Science and Technology (SUSTech), China. His main research interests cover energy-efficient IC design and mm-wave IC design. He is a senior member of IEEE and a member of ACM. He has written several books and holds 20 granted patents. He is a Distinguished Lecturer of the IEEE Circuits and Systems Society and an associate editor of Elsevier Integration, the VLSI Journal, Elsevier Microelectronics Journal, Nature Scientific Reports, ACM Transactions on Embedded Computing Systems and IEEE Transactions on Biomedical Circuits and Systems. He is also a technical program committee member of several IC conferences, including IEEE CICC, BioCAS, A-SSCC, ACM DAC, DATE and ICCAD. He obtained his Ph.D. degree from the Department of Electrical Engineering at UCLA, USA.
Leibin Ni is a principal engineer at Huawei Technologies, Shenzhen, China. His research interests include emerging nonvolatile memory platforms, computing in-memory architecture, machine learning applications and low-power designs. He is a member of IEEE. He received his Ph.D. from Nanyang Technological University, Singapore.
Sai Manoj Pudukotai Dinakarrao is an assistant professor in the Department of Electrical and Computer Engineering at George Mason University (GMU), USA. His current research interests include hardware security, adversarial machine learning, Internet of Things (IoT) networks, deep learning in resource-constrained environments, in-memory computing, accelerator design, algorithms, and the design and resource management of self-aware many-core microprocessors. He is a member of IEEE and ACM. He has served as a guest editor for IEEE Design & Test magazine and as a reviewer for multiple IEEE and ACM journals. He is also a technical program committee member of several CAD conferences, including ACM DAC, DATE, ICCAD, ASP-DAC and ESWEEK, among others. He received his Ph.D. degree in Electrical and Electronic Engineering from Nanyang Technological University, Singapore.
Publication Year: 2021
Pages: 261
ISBN-13: 978-1-83953-081-4
Format: Hardback (HBK)