Embedded Tutorials

Prof. Mehdi Tahoori (Karlsruhe Institute of Technology, Germany)
Cross-layer Resilient System Design

Improvements in chip manufacturing technology have propelled an astonishing growth of computing systems that are integrated into our daily lives. However, this trend faces serious challenges at both the device and system levels. At the device level, as the minimum feature size continues to shrink, a host of vulnerabilities affects the robustness, reliability, and availability of embedded and critical systems. Some of these factors are caused by the stochastic nature of the nanoscale manufacturing process (e.g., process variability and sub-wavelength lithographic inaccuracies), while others arise from high frequencies and nanoscale features (e.g., RLC noise, on-chip temperature variation, and increased sensitivity to radiation and transistor aging). At the other end of the spectrum, these systems are seeing a tremendous increase in software content. Whereas traditional software design paradigms have assumed that the underlying hardware is fully predictable and error-free, there is now a critical need to build a software stack that is responsive to variations and resilient against emerging vulnerabilities in the underlying hardware. To tackle resiliency challenges cost-efficiently, a new "cross-layer" trend has emerged in which different levels of the design stack, in hardware and software, work together to find a globally optimal solution. The interdisciplinary topic of cross-layer resiliency spans various disciplines and requires the collaboration and cooperation of several communities, such as design automation, testing and design for testability, computer architecture, embedded systems and software, validation and verification, fabrication, devices, circuits, and systems. Such a cross-layer approach may lead to a paradigm shift in which reliability is considered throughout the design flow, from devices to systems and applications.

Prof. Ilia Polian, Prof. Martin Kreuzer (University of Passau, Germany)
Fault-based Attacks on Cryptographic Hardware

Mobile and embedded systems increasingly process sensitive data, ranging from personal information such as health records or financial transactions to parameters of technical systems such as car engines. Cryptographic circuits are employed to protect these data from unauthorized access and manipulation. Fault-based attacks are a relatively new threat to system integrity. They circumvent this protection by inducing faults into the hardware implementation of cryptographic functions, thus affecting encryption and/or decryption in a controlled way. By doing so, the attacker obtains supplementary information that she can utilize during cryptanalysis to derive protected data, such as secret keys. In recent years, a large number of fault-based attacks, along with countermeasures to protect cryptographic circuits against them, have been developed. However, isolated techniques for each individual attack are no longer sufficient, and a generic protective strategy is lacking.
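
The principle can be illustrated with a toy differential fault analysis (DFA) sketch. It assumes a hypothetical one-round cipher c = S(m) XOR k built from the 4-bit PRESENT S-box, where a single fault with a known input difference lets the attacker narrow the key to a few candidates; this is an illustration of the general idea, not any specific published attack, and all names are made up for the example.

```python
# Toy DFA sketch (illustrative only, not a published attack).
# Toy cipher: c = S(m) ^ k, with the 4-bit PRESENT S-box.
# The attacker observes a correct ciphertext c and a faulty one c_f,
# where the fault XORs a known difference `delta` into the S-box input.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT S-box

def encrypt(m, k, fault=0):
    return SBOX[(m ^ fault) & 0xF] ^ k

def dfa_candidates(c, c_faulty, delta):
    """All keys consistent with the observed correct/faulty pair."""
    keys = set()
    for m in range(16):                          # guess the S-box input
        if SBOX[m] ^ SBOX[m ^ delta] == c ^ c_faulty:
            keys.add(c ^ SBOX[m])                # then k = c ^ S(m)
    return keys

secret_key, message, delta = 0x7, 0xA, 0x1       # attacker knows delta only
c   = encrypt(message, secret_key)
c_f = encrypt(message, secret_key, fault=delta)
print(dfa_candidates(c, c_f, delta))             # small set containing 0x7
```

Because the S-box is nonlinear, only a few of the 16 possible keys survive a single correct/faulty pair; intersecting the candidate sets from faults on different messages isolates the key, which is the essence of why uncontrolled faults leak secrets.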

Dr. Jan Kořenek (CESNET, z.s.p.o., Czech Republic)
Hardware Acceleration of Algorithms in Computer Networks using FPGA

With the growing speed of computer networks, network devices need more processing power to achieve wire-speed throughput. As current processors have limited performance, routers and other network devices use hardware acceleration to achieve wire-speed throughput with reasonable power consumption. Usually, the throughput is limited by time-critical operations that have to be performed for every packet or every byte of network traffic. The presentation will focus on hardware acceleration of time-critical operations in networking using FPGAs and will present results of recent research on longest-prefix matching (IP lookup), packet classification, and regular-expression matching. The end of the presentation will be devoted to the rapid development of hardware-accelerated network applications.
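
To illustrate the kind of per-packet operation being accelerated, here is a minimal software sketch of longest-prefix matching with a binary trie; FPGA implementations pipeline such a structure across on-chip memories, but the matching logic is the same. Class and method names are illustrative, not taken from any particular tool.

```python
# Longest-prefix matching (LPM) with a binary trie — a software
# sketch of the IP-lookup operation performed for every packet.
import ipaddress

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]   # one child per address bit
        self.next_hop = None           # set if a prefix ends here

class LPMTable:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix, next_hop):
        net = ipaddress.ip_network(prefix)
        bits = f"{int(net.network_address):032b}"[:net.prefixlen]
        node = self.root
        for b in bits:
            i = int(b)
            if node.children[i] is None:
                node.children[i] = TrieNode()
            node = node.children[i]
        node.next_hop = next_hop

    def lookup(self, addr):
        bits = f"{int(ipaddress.ip_address(addr)):032b}"
        node, best = self.root, None
        for b in bits:                 # walk; remember the deepest match
            if node.next_hop is not None:
                best = node.next_hop
            node = node.children[int(b)]
            if node is None:
                return best
        return node.next_hop if node.next_hop is not None else best

table = LPMTable()
table.insert("10.0.0.0/8", "if0")
table.insert("10.1.0.0/16", "if1")
print(table.lookup("10.1.2.3"))        # if1 — the longer prefix wins
print(table.lookup("10.2.0.0"))        # if0
```

A lookup costs at most one trie step per address bit, which is why hardware designs can pipeline it into one lookup per clock cycle regardless of table size.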

Prof. Natasha Sharygina, Grigory Fedyukovich, Antti E. J. Hyvärinen (University of Lugano, Switzerland)
Interpolation-Based Model Checking for Efficient Incremental Analysis of Software

Verification based on model checking has recently obtained an important role in certain software engineering tasks, such as developing operating system device drivers. This extended abstract discusses how model checking can be made more efficient by exploiting the structure of program function calls. We use this idea in two orthogonal ways, both of which fundamentally depend on automatically summarizing the relevant behavior of the function calls based on an earlier verification. The first approach assumes that a piece of software needs to be verified with respect to a set of properties, whereas the second considers the case where an earlier version of a program has been verified but needs to be re-verified after an upgrade. These techniques have been implemented in the tools FunFrog and eVolCheck for verifying C programs. Both have been tested on a range of academic and industrial benchmarks and in many cases provide an order-of-magnitude speed-up over the baseline. They seem to scale to programs with thousands of lines of code.
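
The role of function summaries in the upgrade-checking scenario can be sketched in miniature. The sketch below assumes a toy setting where a summary is an executable predicate checked exhaustively over a small bounded domain; the actual tools instead derive summaries as Craig interpolants over SMT formulas, so all names and the checking mechanism here are simplified illustrations.

```python
# Toy sketch of summary-based incremental checking (illustrative only;
# the real approach computes Craig interpolants, not exhaustive checks).
# A "summary" over-approximates a function's input/output relation.
# After an upgrade, if the new body still satisfies the old summary,
# the earlier verification result can be reused without re-verifying
# the callers.

DOMAIN = range(-8, 8)          # small bounded domain for the toy check

def abs_v1(x):                 # originally verified implementation
    return x if x >= 0 else -x

def abs_v2(x):                 # upgraded implementation
    return max(x, -x)

def summary(x, y):
    # Interpolant-like over-approximation recorded during the
    # first verification: "the result is non-negative and >= input".
    return y >= 0 and y >= x

def summary_still_valid(fn):
    """Re-check only the changed function against the stored summary."""
    return all(summary(x, fn(x)) for x in DOMAIN)

print(summary_still_valid(abs_v2))   # True: summary holds, proof is reused
```

If the check fails, only then does the verifier fall back to re-analyzing the callers of the changed function, which is where the reported order-of-magnitude savings come from.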