Parallel System Architecture and Programming
|Guarantee:||Jaroš Jiří, doc. Ing., Ph.D., DCSY|
|Lecturer:||Jaroš Jiří, doc. Ing., Ph.D., DCSY|
|Instructor:||Čudová Marta, Ing., DCSY|
| ||Jaroš Jiří, doc. Ing., Ph.D., DCSY|
| ||Nikl Vojtěch, Ing., DCSY|
|Faculty:||Faculty of Information Technology BUT|
|Department:||Department of Computer Systems FIT BUT|
|Mon||lecture||D0207||08:00-09:50||1MIT||17 MPV|
|Mon||lecture||D0207||08:00-09:50||1MIT||18 MSK|
| || ||To gain an overview of parallel systems on the market, to be able to assess the communication and computing capabilities of a particular architecture, and to predict the performance of parallel applications. To become acquainted with the most important parallel programming tools (MPI, OpenMP) and learn to use them in practice to solve problems in parallel.|
| || ||The course covers the architecture and programming of parallel systems with functional and data parallelism. First, parallel system theory and program parallelization are discussed. Programming for shared-memory systems in OpenMP follows, and then the most widespread multi-core multiprocessors (SMP) and the advanced DSM NUMA systems are described. The course continues with message-passing programming in the standardized MPI interface. Interconnection networks are treated separately, and their role in clusters, many-core chips, and the most powerful systems is then examined.|
|Knowledge and skills required for the course:|
| || ||Von Neumann computer architecture, computer memory hierarchy, cache memories and their organization, programming in assembly and in C/C++.|
|Subject specific learning outcomes and competences:|
| || ||Overview of the principles of parallel system design and of interconnection networks, communication techniques and algorithms. Survey of parallelization techniques for fundamental scientific problems, knowledge of parallel programming in MPI and OpenMP. Practical experience with working on the Anselm and Salomon supercomputers.|
|Generic learning outcomes and competences:|
| || ||Knowledge of the capabilities and limitations of parallel processing, ability to estimate the performance of parallel applications. Knowledge of language constructs for process/thread communication and synchronization. Competence in hardware-software platforms for high-performance computing and simulations.|
|Syllabus of lectures:|
- Introduction to parallel processing.
- Patterns for parallel programming.
- Shared memory programming - Introduction into OpenMP.
- Synchronization and performance awareness in OpenMP.
- Shared memory and cache coherency.
- Components of symmetrical multiprocessors.
- CC NUMA DSM architectures.
- Message passing interface.
- Collective communications, communicators, and disk operations.
- Interconnection networks: topology and routing algorithms.
- Interconnection networks: switching, flow control, message processing and performance.
- Message-passing architectures, current supercomputer systems. Distributed file systems.
- Data-parallel architectures and programming.
|Syllabus of computer exercises:|
- Introduction to the Anselm and Salomon supercomputers
- Parallel debuggers, profilers and tracing tools
- OpenMP: Loops and sections
- OpenMP: Tasks and synchronization
- MPI: Point-to-point communications
- MPI: Collective communications
|Syllabus - others, projects and individual work of students:|
- Development of an application on SMP in OpenMP on a NUMA node.
- A parallel program in MPI on the supercomputer.
|Recommended literature:|
- Pacheco, P.: An Introduction to Parallel Programming. Morgan Kaufmann Publishers, 2011, 392 p., ISBN 9780123742605
- Hennessy, J.L., Patterson, D.A.: Computer Architecture: A Quantitative Approach. 5th edition, Morgan Kaufmann Publishers, 2012, 856 p., ISBN 9780123838728
| || ||Two projects requiring 26 hours of work in total; a midterm examination.|
| || ||To successfully complete the term work and qualify for the examination, a student must earn at least 20 points out of a maximum of 40.|