Computational Modelling Group

Workshop: 11th April 2018, 9 a.m., room 176L/1125

MPI course delivered by EPCC

David Henty

ARCHER, C, Fortran, HPC, Iridis, Linux, MPI, Multi-core, NGCM, Scientific Computing
Denis Kramer

Parallel programming by definition involves co-operation between processes to solve a common task. The programmer has to define the tasks that will be executed by the processors, and also how these tasks are to synchronise and exchange data with one another. In the message-passing model the tasks are separate processes that communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via calls to some message-passing interface that is entirely responsible for interfacing with the physical communication network linking the actual processors together. This course uses the de facto standard for message passing, the Message Passing Interface (MPI). It covers point-to-point communication, non-blocking operations, derived datatypes, virtual topologies, collective communication and general design issues.
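As a flavour of the topics listed above, a minimal MPI program in C might look like the following. This is only an illustrative sketch, not course material: it shows one point-to-point exchange (`MPI_Send`/`MPI_Recv`) and one collective operation (`MPI_Reduce`), and assumes an MPI implementation such as MPICH or Open MPI is installed.

```c
/* Illustrates point-to-point and collective MPI communication.
   Compile with an MPI wrapper compiler, e.g.
       mpicc example.c -o example
   and run across several processes, e.g.
       mpirun -np 4 ./example              */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */

    /* Point-to-point: rank 0 explicitly sends a message to rank 1. */
    if (size > 1) {
        if (rank == 0) {
            int payload = 42;
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", payload);
        }
    }

    /* Collective: sum a value contributed by every rank onto rank 0. */
    int mine = rank, total = 0;
    MPI_Reduce(&mine, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Sum of ranks 0..%d is %d\n", size - 1, total);

    MPI_Finalize();
    return 0;
}
```

Note that all communication goes through MPI library calls; the program never touches the physical network directly, which is exactly the separation of concerns the message-passing model provides.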

The course is normally delivered in an intensive two-day format or, as in this case, over three days. It is taught using a variety of methods, including formal lectures, practical exercises, programming examples and informal tutorial discussions, so that the lecture material is reinforced by tutored practical sessions.

The course is free, but registration at the ARCHER web page is required.