Introduction to Hardware-Aware Neural Architecture Search
Tutorial at IEEE World Congress on Computational Intelligence 2022, Padua (IT) July 18-23, 2022
- Prof. Lukas Sekanina, Faculty of Information Technology, Brno University of Technology, Czech Republic
As deep neural networks (DNNs) can have complex architectures with millions of trainable parameters, their design and training are difficult even for highly qualified experts. To reduce human effort, neural architecture search (NAS) methods have been developed to automate the entire design process. NAS methods typically combine a search in the space of candidate architectures with optimization (learning) of the weights using a gradient method. In recent years, specialized hardware accelerators for DNN inference have been developed to meet latency targets and provide high energy efficiency in cutting-edge applications running on resource-constrained devices. Building on this, hardware-aware NAS methods have been adopted to optimize the DNN architecture (and weights) for a given hardware platform. In this tutorial, we survey the critical elements of NAS methods that -- to various extents -- consider the hardware implementation of the resulting DNNs. We classify these methods into three major classes: single-objective NAS (no hardware is considered), hardware-aware NAS (the DNN is optimized for a particular hardware platform), and NAS with hardware co-optimization (the hardware is co-optimized with the DNN as part of NAS). We emphasize the multi-objective design approach that must be adopted in NAS and focus on co-design algorithms developed for the concurrent optimization of DNN architectures and hardware platforms. As most research in this area deals with NAS for image classification using convolutional neural networks, our case studies will be devoted to this application. After attending the tutorial, the participants will understand why and how NAS and hardware co-optimization are currently used to build cutting-edge implementations of DNNs.
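To make the idea of hardware-aware NAS concrete, the following is a minimal, self-contained sketch (not the tutorial's actual algorithm): a random search over a toy architecture space in which each candidate is scored by a proxy accuracy and rejected if a crude analytical latency model exceeds a hardware budget. All names, the search space, the accuracy proxy, and the throughput constant are illustrative assumptions.

```python
import random

# Toy search space: each candidate architecture is a (depth, width, kernel) choice.
SEARCH_SPACE = {
    "depth": [2, 4, 6, 8],    # number of conv blocks (assumed)
    "width": [16, 32, 64],    # channels per block (assumed)
    "kernel": [3, 5],         # kernel size (assumed)
}

def sample_architecture(rng):
    """Draw one random candidate from the search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def proxy_accuracy(arch):
    # Placeholder for trained accuracy: larger models score higher.
    # In a real NAS loop this would come from (partial) training.
    return 0.5 + 0.04 * arch["depth"] + 0.001 * arch["width"]

def estimated_latency_ms(arch, macs_per_ms=5e6):
    # Crude analytical latency model: total MACs divided by an assumed
    # accelerator throughput. Real hardware-aware NAS uses measured
    # latency or a learned latency predictor for the target platform.
    macs = arch["depth"] * arch["width"] ** 2 * arch["kernel"] ** 2 * 32 * 32
    return macs / macs_per_ms

def search(budget=200, latency_limit_ms=20.0, seed=0):
    """Random search: maximize proxy accuracy subject to a latency constraint."""
    rng = random.Random(seed)
    best, best_acc = None, -1.0
    for _ in range(budget):
        arch = sample_architecture(rng)
        if estimated_latency_ms(arch) > latency_limit_ms:
            continue  # hardware constraint: reject too-slow candidates
        acc = proxy_accuracy(arch)
        if acc > best_acc:
            best, best_acc = arch, acc
    return best, best_acc

best, acc = search()
print(best, round(acc, 3))
```

The constraint-handling here (reject infeasible candidates) is the simplest option; the multi-objective methods discussed in the tutorial instead keep a Pareto front of accuracy/latency/energy trade-offs rather than a single latency cutoff.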
Literature on Neural Architecture Search
- L. Sekanina, "Neural Architecture Search and Hardware Accelerator Co-Search: A Survey," IEEE Access, vol. 9, pp. 151337-151362, 2021.
- X. Zhou, A. K. Qin, M. Gong, and K. C. Tan, "A Survey on Evolutionary Construction of Deep Neural Networks," IEEE Transactions on Evolutionary Computation, vol. 25, no. 5, pp. 894-912, Oct. 2021.
- P. Ren, Y. Xiao, X. Chang, P.-Y. Huang, Z. Li, X. Chen, et al., "A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions," ACM Computing Surveys, vol. 54, no. 4, pp. 1-34, May 2021.
- K. O. Stanley, J. Clune, J. Lehman, and R. Miikkulainen, "Designing Neural Networks Through Neuroevolution," Nature Machine Intelligence, vol. 1, pp. 24-35, Jan. 2019.
- V. Sze, Y.-H. Chen, T.-J. Yang, and J. S. Emer, "Efficient Processing of Deep Neural Networks: A Tutorial and Survey," Proceedings of the IEEE, vol. 105, no. 12, pp. 2295-2329, Dec. 2017.
- S. Mittal, "A Survey of FPGA-Based Accelerators for Convolutional Neural Networks," Neural Computing and Applications, vol. 32, no. 4, pp. 1109-1139, Feb. 2020.
About the speakers
Prof. Lukas Sekanina received all his degrees from the Brno University of Technology, Czech Republic (Ing. in 1999, Ph.D. in 2002), where he is currently a full professor and Head of the Department of Computer Systems. In his research, he combines computational intelligence (evolutionary design, neural networks, cellular automata) with hardware design methods to automatically produce complex digital circuits and hardware accelerators with high-quality properties such as high performance and low power. He was awarded a Fulbright scholarship and worked on evolutionary circuit design at the NASA Jet Propulsion Laboratory in Pasadena in 2004. He was a visiting lecturer at Pennsylvania State University (2001) and Universidad Politécnica de Madrid (2012), and a visiting researcher at the University of Oslo (2001). Awards: Czech Science Foundation President Award (2017); Gold (2015), Silver (2011, 2008), and Bronze (2018) medals from the Human-Competitive Awards in genetic and evolutionary computation at GECCO; Siemens Award for outstanding PhD thesis (2003); Siemens Award for outstanding research monograph (2005); best paper/poster awards (e.g., DATE 2017, NASA/ESA AHS 2013, EvoHOT 2005, DDECS 2002); keynote conference speaker (e.g., IEEE SSCI-ICES 2015, DCIS 2014, ARCS 2013, UC 2009). He has served as a program committee member of many conferences (e.g., DATE, FPL, ReConFig, DDECS, GECCO, IEEE CEC, ICES, AHS, EuroGP), as Associate Editor of IEEE Transactions on Evolutionary Computation (2011-2014), and as an editorial board member of the Genetic Programming and Evolvable Machines journal and the International Journal of Innovative Computing and Applications. He served as General Chair of the 16th IEEE Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS 2013), Program Co-Chair of DDECS 2021, EuroGP 2018-2019, DTIS 2016, and ICES 2008, and Topic Chair of DATE 2020 and 2021 (D10 – Approximate Computing).
Prof. Sekanina is the author of Evolvable Components (a monograph published by Springer-Verlag in 2004). He has co-authored over 200 papers, mainly on evolvable hardware, approximate circuits, neural hardware, and genetic programming. He is a Senior Member of the IEEE.
Last update: January 3, 2022