Conference paper

KOCOUR Martin, LUQUE Jordi and SEGURA PERALES Carlos. A Reality Check on Inference at Mobile Networks Edge. In: Proceedings of the 2nd International Workshop on Edge Systems, Analytics and Networking. Dresden: Association for Computing Machinery, 2019, pp. 54-59. ISBN 978-1-4503-6275-7. Available from: https://dl.acm.org/citation.cfm?doid=3301418.3313946
Publication language: English
Original title: A Reality Check on Inference at Mobile Networks Edge
Title (cs): Ověření inference na okraji mobilní sítě v reálných podmínkách
Pages: 54-59
Proceedings: Proceedings of the 2nd International Workshop on Edge Systems, Analytics and Networking
Conference: The 2nd International Workshop on Edge Systems, Analytics and Networking (EdgeSys 2019)
Place: Dresden, DE
Year: 2019
URL: https://dl.acm.org/citation.cfm?doid=3301418.3313946
ISBN: 978-1-4503-6275-7
DOI: 10.1145/3301418.3313946
Publisher: Association for Computing Machinery
URL: http://www.fit.vutbr.cz/research/groups/speech/publi/2019/cartas_edgesys19_p54.pdf [PDF]
Keywords
Edge computing, Artificial Intelligence
Abstract
Edge computing is considered a key enabler for deploying Artificial Intelligence platforms that provide real-time applications such as AR/VR or cognitive assistance. Previous works show that computing capabilities deployed very close to the user can actually reduce the end-to-end latency of such interactive applications. Nonetheless, the main performance bottleneck remains the machine learning inference operation. In this paper, we question some assumptions of these works, such as the network location where edge computing is deployed and the software architectures considered, within the framework of two popular machine learning tasks. Our experimental evaluation shows that after performance tuning that leverages recent advances in deep learning algorithms and hardware, network latency is now the main bottleneck in end-to-end application performance. We also report that deploying computing capabilities at the first network node still provides a latency reduction but, overall, is not required by all applications. Based on our findings, we overview the requirements and sketch the design of an adaptive architecture for general machine learning inference across edge locations.
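
The abstract's key claim is that network latency, rather than model inference, now dominates end-to-end performance; this rests on splitting a request's total latency into a network part and a compute part. As a purely illustrative sketch (not code from the paper), the Python snippet below times one remote inference request. The endpoint URL and the X-Inference-Ms response header are hypothetical placeholders for whatever server-side instrumentation is actually available.

import time
import urllib.request

# Hypothetical edge inference endpoint; a stand-in, not from the paper.
ENDPOINT = "http://edge-node.example/infer"

def measure_end_to_end(payload: bytes) -> dict:
    # Time the full request/response round trip.
    t0 = time.perf_counter()
    req = urllib.request.Request(
        ENDPOINT, data=payload,
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req) as resp:
        resp.read()
        # Assumed convention: the server reports its own compute time in a
        # custom header, so the network share can be estimated by subtraction.
        inference_ms = float(resp.headers.get("X-Inference-Ms", "nan"))
    total_ms = (time.perf_counter() - t0) * 1000.0
    return {"total_ms": total_ms,
            "inference_ms": inference_ms,
            "network_ms": total_ms - inference_ms}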
BibTeX:
@INPROCEEDINGS{kocour2019reality,
   author = {Martin Kocour and Jordi Luque and Carlos Segura Perales},
   title = {A Reality Check on Inference at Mobile Networks Edge},
   pages = {54--59},
   booktitle = {Proceedings of the 2nd International Workshop on Edge Systems, Analytics and Networking},
   year = 2019,
   location = {Dresden, DE},
   publisher = {Association for Computing Machinery},
   ISBN = {978-1-4503-6275-7},
   doi = {10.1145/3301418.3313946},
   language = {english},
   url = {http://www.fit.vutbr.cz/research/view_pub.php?id=11956}
}
