Parallel System Architecture and Programming
Language of Instruction: Czech
Guarantor: Dvořák Václav, prof. Ing., DrSc. (DCSY)
Lecturers: Dvořák Václav, prof. Ing., DrSc. (DCSY); Jaroš Jiří, doc. Ing., Ph.D. (DCSY)
Instructors: Jaroš Jiří, doc. Ing., Ph.D. (DCSY); Nikl Vojtěch, Ing. (DCSY); Vaverka Filip, Ing. (DCSY)
Faculty: Faculty of Information Technology BUT
Department: Department of Computer Systems FIT BUT
To gain an overview of parallel systems on the market, to be able to assess the communication and computing capabilities of a particular architecture, and to predict the performance of parallel applications. To become acquainted with the most important parallel programming tools (MPI, OpenMP) and to learn their practical use in solving problems in parallel.
The course covers the architecture and programming of parallel systems with functional and data parallelism. First, parallel system theory and program parallelization are discussed. Programming for shared-memory systems in OpenMP follows, and then the most widespread multi-core multiprocessors (SMP) and the advanced DSM NUMA systems are described. The course continues with message-passing programming in the standardized MPI interface. Interconnection networks are treated separately, and their role in clusters, many-core chips, and the most powerful systems is then examined. Finally, SIMD accelerators and GPGPU computing are covered.
Knowledge and skills required for the course:
Von Neumann computer architecture, the computer memory hierarchy, cache memories and their organization, and programming in assembly and in C/C++.
Subject-specific learning outcomes and competencies:
Overview of the principles of parallel system design and of interconnection networks, communication techniques, and algorithms. Survey of parallelization techniques for fundamental scientific problems, knowledge of parallel programming in MPI and OpenMP, and the use of SIMD accelerators and GPGPU.
Generic learning outcomes and competencies:
Knowledge of the capabilities and limitations of parallel processing and the ability to estimate the performance of parallel applications. Language means for process/thread communication and synchronization. Competence in hardware-software platforms for high-performance computing and simulations.
Syllabus of lectures:
- Introduction to parallel processing.
- Patterns for parallel programming.
- Shared memory programming - Introduction to OpenMP.
- Synchronization and performance awareness in OpenMP.
- Shared memory and cache coherency.
- Components of symmetric multiprocessors.
- CC NUMA DSM architectures.
- Message passing interface.
- Collective communications, communicators, and disk operations.
- Interconnection networks: topology and routing algorithms.
- Interconnection networks: switching, flow control, message processing and performance.
- Message-passing architectures, current supercomputer systems. Distributed file systems.
- Data-parallel architectures and programming.
Syllabus of numerical exercises:
Tutorials are not scheduled for this course.
Syllabus - others, projects and individual work of students:
- Development of an application on SMP in OpenMP.
- A parallel program in MPI on the blade cluster.
Literature:
- Pacheco, P.: An Introduction to Parallel Programming. Morgan Kaufmann Publishers, 2011, 392 p., ISBN 9780123742605
- Hennessy, J.L., Patterson, D.A.: Computer Architecture - A Quantitative Approach. 5th ed., Morgan Kaufmann Publishers, 2012, 856 p., ISBN 9780123838728
Two projects, 26 hours in total; a midterm examination.
To complete the term work successfully and qualify for the final examination, a student has to earn at least 20 points out of the maximum of 40.