Publication Details

Adaptive and Energy-Efficient Architectures for Machine Learning: Challenges, Opportunities, and Research Roadmap

SHAFIQUE Muhammad, HAFIZ Rehan, JAVED Muhammad Usama, ABBAS Sarmad, SEKANINA Lukáš, VAŠÍČEK Zdeněk and MRÁZEK Vojtěch. Adaptive and Energy-Efficient Architectures for Machine Learning: Challenges, Opportunities, and Research Roadmap. In: 2017 IEEE Computer Society Annual Symposium on VLSI. Los Alamitos: IEEE Computer Society Press, 2017, pp. 627-632. ISBN 978-1-5090-6762-6.
Czech title
Adaptivní a energeticky účinné architektury pro strojové učení: Výzvy, příležitosti a další výzkum
Type
conference paper
Language
english
Authors
Shafique Muhammad (TU-Wien)
Hafiz Rehan (ITU-Lahore)
Javed Muhammad Usama (ITU-Lahore)
Abbas Sarmad (ITU-Lahore)
Sekanina Lukáš, prof. Ing., Ph.D. (DCSY FIT BUT)
Vašíček Zdeněk, doc. Ing., Ph.D. (DCSY FIT BUT)
Mrázek Vojtěch, Ing., Ph.D. (DCSY FIT BUT)
Keywords
machine learning, approximate computing, deep learning, neural networks, energy efficiency
Abstract
Gigantic rates of data production in the era of Big Data, the Internet of Things (IoT) / Internet of Everything (IoE), and Cyber-Physical Systems (CPS) pose incessantly escalating demands for massive data processing, storage, and transmission, while such systems continuously interact with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Therefore, these systems need not only to deliver high performance within a tight power/energy envelope, but also to be intelligent/cognitive, self-learning, and robust. As a result, a surge of artificial intelligence research (e.g., deep learning and other machine learning techniques) has emerged in numerous communities. This paper discusses the challenges and opportunities in building energy-efficient and adaptive architectures for machine learning. In particular, we focus on brain-inspired and emerging computing paradigms, such as approximate computing, that can further reduce the energy requirements of the system. First, we walk through an approximate-computing-based methodology for developing energy-efficient accelerators, specifically for convolutional Deep Neural Networks (DNNs). We show that an in-depth analysis of DNN datapaths enables a better selection of approximate computing modules for energy-efficient accelerators. Further, we show that a multi-objective evolutionary algorithm can be used to develop an adaptive machine learning system in hardware. Finally, we summarize the challenges and the associated research roadmap that can aid in developing energy-efficient and adaptable hardware accelerators for machine learning.
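The methodology outlined in the abstract rests on probing how inexact arithmetic units affect the accuracy of a convolutional datapath. The following minimal Python sketch is an illustration only, not the paper's method: the truncation-based multiplier approx_mul8 and all constants are assumptions made here to show the kind of error analysis such a module-selection step builds on.

import random

def approx_mul8(a, b, dropped_bits=4):
    # Hypothetical approximate 8-bit multiplier: truncate the low-order result bits,
    # mimicking a cheaper datapath that omits part of the partial-product logic.
    return (a * b) & ~((1 << dropped_bits) - 1)

def conv1d(signal, kernel, mul=lambda a, b: a * b):
    # 1-D convolution (valid mode) with a pluggable multiplier, so exact and
    # approximate datapaths can be compared on the same inputs.
    n = len(signal) - len(kernel) + 1
    return [sum(mul(signal[i + j], k) for j, k in enumerate(kernel)) for i in range(n)]

random.seed(0)
signal = [random.randint(0, 255) for _ in range(64)]  # 8-bit activations
kernel = [random.randint(0, 15) for _ in range(3)]    # small positive weights

exact = conv1d(signal, kernel)
approx = conv1d(signal, kernel, mul=approx_mul8)
mean_abs_err = sum(abs(e - a) for e, a in zip(exact, approx)) / len(exact)
print(f"mean absolute error of the approximate datapath: {mean_abs_err:.1f}")

In a full accelerator-design flow, such per-layer error figures would be weighed against hardware cost estimates, e.g. by a multi-objective search over candidate approximate modules, which is the kind of trade-off exploration the paper targets.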

Published
2017
Pages
627-632
Proceedings
2017 IEEE Computer Society Annual Symposium on VLSI
Conference
IEEE Computer Society Annual Symposium on VLSI, Bochum, DE
ISBN
978-1-5090-6762-6
Publisher
IEEE Computer Society Press
Place
Los Alamitos, US
DOI
10.1109/ISVLSI.2017.124
BibTeX
@INPROCEEDINGS{FITPUB11474,
   author = "Muhammad Shafique and Rehan Hafiz and Usama Muhammad Javed and Sarmad Abbas and Luk\'{a}\v{s} Sekanina and Zden\v{e}k Va\v{s}\'{i}\v{c}ek and Vojt\v{e}ch Mr\'{a}zek",
   title = "Adaptive and Energy-Efficient Architectures for Machine Learning: Challenges, Opportunities, and Research Roadmap",
   pages = "627--632",
   booktitle = "2017 IEEE Computer Society Annual Symposium on VLSI",
   year = 2017,
   location = "Los Alamitos, US",
   publisher = "IEEE Computer Society Press",
   ISBN = "978-1-5090-6762-6",
   doi = "10.1109/ISVLSI.2017.124",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/11474"
}