Parallel System Architecture and Programming
Language of Instruction: Czech
Guarantor: Jaroš Jiří, doc. Ing., Ph.D. (DCSY)
Lecturer: Jaroš Jiří, doc. Ing., Ph.D. (DCSY)
Instructors: Jaroš Marta, Ing. (DCSY); Kukliš Filip, Ing. (DCSY)
Faculty: Faculty of Information Technology BUT
Department: Department of Computer Systems FIT BUT
To gain an overview of parallel systems on the market, to be able to assess the communication and computing capabilities of a particular architecture, and to predict the performance of parallel applications. To become acquainted with the most important parallel programming tools (MPI, OpenMP) and to learn to use them in practice to solve problems in parallel.
The course covers the architecture and programming of parallel systems with functional and data parallelism. First, parallel system theory and program parallelization are discussed. Programming for shared-memory systems in OpenMP follows; then the most widespread multi-core multiprocessors (SMP) and the advanced DSM NUMA systems are described. The course continues with message-passing programming in the standardized MPI interface. Interconnection networks are treated separately, and their role in clusters, many-core chips, and the most powerful systems is then explained.
Knowledge and skills required for the course:
Von Neumann computer architecture, computer memory hierarchy, cache memories and their organization, programming in assembly and in C/C++.
Subject-specific learning outcomes and competencies:
An overview of the principles of parallel system design and of interconnection networks, communication techniques, and algorithms. A survey of parallelization techniques for fundamental scientific problems, knowledge of parallel programming in MPI and OpenMP. Practical experience with working on the Anselm and Salomon supercomputers.
Generic learning outcomes and competencies:
Knowledge of the capabilities and limitations of parallel processing, the ability to estimate the performance of parallel applications. Language means for process/thread communication and synchronization. Competence in hardware-software platforms for high-performance computing and simulations.
Syllabus of lectures:
- Introduction to parallel processing.
- Patterns for parallel programming.
- Shared memory programming - Introduction into OpenMP.
- Synchronization and performance awareness in OpenMP.
- Shared memory and cache coherency.
- Components of symmetrical multiprocessors.
- CC NUMA DSM architectures.
- Message passing interface.
- Collective communications, communicators, and disk operations.
- Hybrid OpenMP/MPI programming.
- Interconnection networks: topology and routing algorithms.
- Interconnection networks: switching, flow control, message processing and performance.
- Message-passing architectures, current supercomputer systems. Distributed file systems.
Syllabus of computer exercises:
- Introduction to the Anselm and Salomon supercomputers
- OpenMP: Loops and sections
- OpenMP: Tasks and synchronization
- MPI: Point-to-point communications
- MPI: Collective communications
- MPI: I/O, debuggers, profilers and traces
Syllabus - others, projects and individual work of students:
- Development of an application on SMP in OpenMP on a NUMA node.
- A parallel program in MPI on the supercomputer.
Literature:
- Pacheco, P.: An Introduction to Parallel Programming. Morgan Kaufmann Publishers, 2011, 392 p., ISBN 9780123742605.
- Hennessy, J. L., Patterson, D. A.: Computer Architecture: A Quantitative Approach. 5th edition, Morgan Kaufmann Publishers, 2012, 856 p., ISBN 9780123838728.
- Missed labs can be made up on alternative dates (Monday or Friday).
- Additional slots for missed labs will be available in the last week of the semester.
Assessment of two projects (13 hours in total), computer laboratories, and a midterm examination.
To obtain at least 20 of the 40 points awarded for the projects and the midterm examination.