Prof. Ing. Lukáš Sekanina, Ph.D.

SHAFIQUE Muhammad, HAFIZ Rehan, JAVED Muhammad Usama, ABBAS Sarmad, SEKANINA Lukáš, VAŠÍČEK Zdeněk and MRÁZEK Vojtěch. Adaptive and Energy-Efficient Architectures for Machine Learning: Challenges, Opportunities, and Research Roadmap. In: 2017 IEEE Computer Society Annual Symposium on VLSI. Los Alamitos: IEEE Computer Society Press, 2017, pp. 627-632. ISBN 978-1-5090-6762-6.
Publication language: English
Original title: Adaptive and Energy-Efficient Architectures for Machine Learning: Challenges, Opportunities, and Research Roadmap
Title (cs): Adaptivní a energeticky účinné architektury pro strojové učení: Výzvy, příležitosti a další výzkum
Pages: 627-632
Proceedings: 2017 IEEE Computer Society Annual Symposium on VLSI
Conference: IEEE Computer Society Annual Symposium on VLSI
Place: Los Alamitos, US
Year: 2017
ISBN: 978-1-5090-6762-6
Publisher: IEEE Computer Society Press
Files: isvlsi17.pdf (1.37 MB, last modified 2017-07-28 22:38:50)
Keywords
machine learning, approximate computing, deep learning, neural networks, energy efficiency
Annotation
Gigantic rates of data production in the era of Big Data, the Internet of Things (IoT) / Internet of Everything (IoE), and Cyber-Physical Systems (CPS) pose incessantly escalating demands for massive data processing, storage, and transmission, while such systems continuously interact with the physical world under unpredictable, harsh, and energy-/power-constrained conditions. These systems therefore need to deliver not only high performance within a tight power/energy envelope, but also to be intelligent/cognitive, self-learning, and robust. As a result, a surge of artificial intelligence research (e.g., deep learning and other machine learning techniques) has emerged in numerous communities. This paper discusses the challenges and opportunities in building energy-efficient and adaptive architectures for machine learning. In particular, we focus on brain-inspired emerging computing paradigms, such as approximate computing, that can further reduce the energy requirements of the system. First, we walk through an approximate-computing-based methodology for developing energy-efficient accelerators, specifically for convolutional Deep Neural Networks (DNNs). We show that an in-depth analysis of a DNN's datapaths enables a better selection of approximate computing modules for energy-efficient accelerators. Further, we show that a multi-objective evolutionary algorithm can be used to develop an adaptive machine learning system in hardware. Finally, we summarize the challenges and the associated research roadmap that can aid in developing energy-efficient and adaptable hardware accelerators for machine learning.
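
The following is a minimal, illustrative Python sketch of the two ideas summarized in the annotation: replacing exact multipliers in a convolutional datapath with approximate ones, and keeping only the configurations that are Pareto-optimal on an accuracy/energy trade-off (the kind of trade-off a multi-objective evolutionary algorithm explores). The truncated multiplier, the energy proxy, and the random test data are assumptions made for this example; they are not the approximate modules, error metrics, or toolflow described in the paper.

import numpy as np

def truncated_mult(a, b, drop_bits):
    """Hypothetical approximate 8-bit multiplier: zero the lowest `drop_bits`
    bits of each operand before multiplying."""
    mask = ~((1 << drop_bits) - 1) & 0xFF
    return (a & mask) * (b & mask)

def conv2d(image, kernel, mult):
    """Direct (valid) 2-D convolution that uses the supplied multiplier."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.int64)
    for i in range(oh):
        for j in range(ow):
            acc = 0
            for u in range(kh):
                for v in range(kw):
                    acc += mult(int(image[i + u, j + v]), int(kernel[u, v]))
            out[i, j] = acc
    return out

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
kernel = rng.integers(0, 256, size=(3, 3), dtype=np.uint8)
exact = conv2d(image, kernel, lambda a, b: a * b)

# Evaluate each approximation level: mean relative error of the feature map
# versus a crude energy proxy (fraction of operand bits kept) -- both chosen
# for illustration only, not taken from the paper.
candidates = []
for drop_bits in range(0, 7):
    approx = conv2d(image, kernel, lambda a, b: truncated_mult(a, b, drop_bits))
    error = float(np.mean(np.abs(approx - exact) / np.maximum(np.abs(exact), 1)))
    energy = (8 - drop_bits) / 8.0
    candidates.append((drop_bits, error, energy))

# Keep the configurations that no other candidate matches or beats in both
# error and energy at the same time.
pareto = [c for c in candidates
          if not any(o[1] <= c[1] and o[2] <= c[2] and o != c for o in candidates)]

for drop_bits, error, energy in pareto:
    print(f"drop_bits={drop_bits}  mean_rel_error={error:.4f}  energy_proxy={energy:.2f}")

Running the script prints the non-dominated truncation levels, i.e., those for which no other setting is simultaneously more accurate and cheaper under the assumed energy proxy; a real design flow would replace the proxy with synthesized-circuit energy figures and drive the search with a multi-objective evolutionary algorithm.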
BibTeX:
@INPROCEEDINGS{shafique2017adaptive,
   author = {Muhammad Shafique and Rehan Hafiz and Muhammad Usama Javed and
	Sarmad Abbas and Luk{\'{a}}{\v{s}} Sekanina and
	Zden{\v{e}}k Va{\v{s}}{\'{i}}{\v{c}}ek and Vojt{\v{e}}ch Mr{\'{a}}zek},
   title = {Adaptive and Energy-Efficient Architectures for Machine
	Learning: Challenges, Opportunities, and Research Roadmap},
   pages = {627--632},
   booktitle = {2017 IEEE Computer Society Annual Symposium on VLSI},
   year = {2017},
   location = {Los Alamitos, US},
   publisher = {IEEE Computer Society Press},
   ISBN = {978-1-5090-6762-6},
   language = {english},
   url = {http://www.fit.vutbr.cz/research/view_pub.php?id=11474}
}
