Parallel System Architecture and Programming
|Language of Instruction:||Czech|
|Guarantor:||Dvořák Václav, prof. Ing., DrSc. (DCSY)|
|Lecturer:||Bidlo Michal, Ing., Ph.D. (DCSY), Dvořák Václav, prof. Ing., DrSc. (DCSY)|
|Instructor:||Dvořák Václav, prof. Ing., DrSc. (DCSY), Pospíchal Petr, Ing. (DCSY)|
|Faculty:||Faculty of Information Technology BUT|
|Department:||Department of Computer Systems FIT BUT|
|Learning objectives:|
To get oriented in parallel systems on the market, to be able to assess the communication and computing capabilities of a particular architecture, and to predict the performance of parallel applications. To get acquainted with the most important parallel programming tools (MPI, OpenMP), and to learn their practical use in solving problems in parallel.
|Annotation:|
The course covers the architecture and programming of parallel systems with functional and data parallelism. First, parallel system theory and program parallelization are discussed. Programming for shared-memory systems in OpenMP follows, and then the most widespread multi-core multiprocessors (SMP) and the advanced DSM NUMA systems are described. The course continues with message-passing programming in the standardized MPI interface. Interconnection networks are dealt with separately, and then their role in clusters, many-core chips, and the most powerful systems is examined. In conclusion, SIMD accelerators and GPGPU are covered.
|Knowledge and skills required for the course:|
Von Neumann computer architecture, computer memory hierarchy, cache memories and their organization, programming in assembly and in C/C++.
|Subject specific learning outcomes and competencies:|
Overview of the principles of parallel system design and of interconnection networks, communication techniques, and algorithms. Survey of parallelization techniques for fundamental scientific problems, knowledge of parallel programming in MPI and OpenMP. The use of SIMD accelerators and GPGPU.
|Generic learning outcomes and competencies:|
Knowledge of the capabilities and limitations of parallel processing, ability to estimate the performance of parallel applications. Language means for process/thread communication and synchronization. Competence in hardware-software platforms for high-performance computing and simulations.
|Syllabus of lectures:|
1. Introduction to parallel processing
2. Patterns for parallel programming
3. Shared memory programming - Introduction into OpenMP
4. Synchronization and performance awareness in OpenMP
5. Shared memory and cache coherency
6. Components of symmetrical multiprocessors
7. CC NUMA DSM architectures
8. Message passing interface
9. Collective communications
10. Interconnection networks: topology and routing algorithms
11. Interconnection networks: switching, flow control, message processing and performance
12. Distributed memory architectures, data-parallel architectures
13. Case studies of parallel applications
|Syllabus of numerical exercises:|
Tutorials are not scheduled for this course.
|Syllabus - others, projects and individual work of students:|
- Performance prediction of the given parallel application on a compute cluster.
- Development of an application on SMP in OpenMP.
- A parallel program in MPI on the blade cluster.
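For the MPI project, the flavor of message-passing code covered in lectures 8-9 can be sketched as follows. This illustrative sum of ranks is not the actual assignment; it must be compiled with `mpicc` and run under `mpirun`:

```c
#include <mpi.h>
#include <stdio.h>

/* Each process contributes its rank; MPI_Reduce combines the
 * contributions into a single sum on process 0 -- a collective
 * communication of the kind treated in lecture 9. */
int main(int argc, char **argv)
{
    int rank, size, total = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Reduce(&rank, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, total);

    MPI_Finalize();
    return 0;
}
```

Run, for example, as `mpirun -np 4 ./a.out`; on the blade cluster the same program scales across nodes without source changes, which is the point of the standardized interface.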
|Literature:|
- Hennessy, J.L., Patterson, D.A.: Computer Architecture - A Quantitative Approach. 4th Edition, Morgan Kaufman Publishers, Inc., 2007, 1136 p., ISBN 1-55860-596-7.
- Quinn, M.J.: Parallel Programming in C with MPI and OpenMP. McGraw Hill, 2004, 529 p., ISBN 0072822562.
- Dvořák, V.: Parallel systems architecture and programming. Study text in English, FIT BUT, Brno, 2008.
Three small projects of 5, 4, and 4 hours' duration; a midterm examination.
To successfully complete the semester work and be allowed to sit the examination, one has to earn at least 20 points out of a maximum of 40.