Computation Systems Architectures
|Language of Instruction:|Czech|
|Guarantor:|Jaroš Jiří, doc. Ing., Ph.D. (DCSY)|
|Deputy guarantor:|Sekanina Lukáš, prof. Ing., Ph.D. (DCSY)|
|Lecturer:|Jaroš Jiří, doc. Ing., Ph.D. (DCSY)|
|Instructor:|Bordovský Gabriel, Ing. (DCSY); Jaroš Marta, Ing. (DCSY)|
|Faculty:|Faculty of Information Technology BUT|
|Department:|Department of Computer Systems FIT BUT|
To familiarize students with the architecture of modern computational systems based on x86, ARM and RISC-V multicore processors in configurations with uniform (UMA) and non-uniform (NUMA) shared memory, often accompanied by a GPU accelerator. To understand the hardware aspects of computational systems that have a significant impact on application performance and power consumption. To be able to assess the computing capabilities of a particular architecture and predict application performance. To clarify the role of the compiler and its cooperation with the processor. To be able to navigate the computational-system market and to evaluate and compare various systems.
The course covers the architecture of modern computational systems composed of general-purpose as well as special-purpose processors and their memory subsystems. Instruction-level parallelism is studied on scalar, superscalar and VLIW processors. Processors with thread-level parallelism are discussed next. Data parallelism is illustrated on SIMD streaming instructions and on graphics processors. Programming for shared-memory systems in OpenMP follows, and then the most widespread multi-core multiprocessors and advanced NUMA systems are described. Finally, the generic architecture of graphics processing units and basic techniques for programming them with OpenMP are covered, as are techniques of low-power processors.
|Knowledge and skills required for the course:|
Von Neumann computer architecture, the computer memory hierarchy, cache memories and their organization, programming in assembly and in C/C++, and the tasks and functions of a compiler.
|Subject specific learning outcomes and competencies:|
An overview of the architecture of modern computational systems, their capabilities, limits and future trends. The ability to estimate the performance of software applications on a given computer system, identify performance issues and propose their rectification. Practical user experience with supercomputers.
|Generic learning outcomes and competencies:|
Understanding of the hardware limitations that affect the efficiency of software solutions.
|Why is the course taught:|
There is a large range of problems and programming languages where the performance of the final application, the amount of memory consumed or the electric power drawn is not significant. But what should we do in situations where these aspects are of critical importance?
The purpose of the AVS course is to examine and analyze the architecture of current multi-core superscalar processors, memory subsystems and accelerator cards such as GPUs in order to understand their potential and limits. The practical part of the course is devoted to OpenMP, a programming model that enables efficient parallelization and vectorization on both CPUs and GPUs.
|Syllabus of lectures:|
- Scalar processors, pipelined instruction processing and compiler assistance.
- Superscalar processors, dynamic instruction scheduling.
- Data flow through the hierarchy of cache memories.
- Branch prediction, optimization of instruction and data fetching.
- Processors with data level parallelism.
- Multi-threaded and multi-core processors.
- Loop parallelism and code vectorization.
- Functional parallelism and acceleration of recursive algorithms.
- Synchronization on systems with shared memory.
- Algorithms for cache coherency.
- Architectures with distributed shared memory.
- Architecture and programming of graphics processing units.
- Low power processors and techniques.
|Syllabus of computer exercises:|
- Introduction to the Anselm and Salomon supercomputers.
- Performance measurement for sequential codes, the Roofline model and Amdahl's law.
- Problem decomposition and cache blocking.
- Vectorization using OpenMP.
- Loops and tasks using OpenMP.
- Functional parallelism and synchronization using OpenMP.
|Syllabus - others, projects and individual work of students:|
- Performance evaluation and code optimization using OpenMP.
- Development of an application in OpenMP on a NUMA node.
- Baer, J.L.: Microprocessor Architecture. Cambridge University Press, 2010, 367 pp., ISBN 978-0-521-76992-1.
- Hennessy, J.L., Patterson, D.A.: Computer Architecture - A Quantitative Approach. 5th edition, Morgan Kaufmann Publishers, Inc., 2012, 1136 pp., ISBN 1-55860-596-7.
- van der Pas, R., Stotzer, E., Terboven, C.: Using OpenMP - The Next Step. MIT Press, 2017, ISBN 978-0-262-53478-9.
- Missed labs can be made up at alternative dates.
- A slot for making up missed labs will be available in the last week of the semester.
Assessment of two projects (14 hours in total), the computer laboratories, and a midterm examination.
Obtaining at least 20 out of 40 points for the projects and the midterm examination.