Course overview

The overall aim of this course is to give attendees a strong background in programming techniques suitable for general scientific programming and high-performance computing (HPC). By the end of the course they should be able to write a range of simple algorithms in C++ or Fortran, understand what issues affect the performance of their code, and be familiar with methods of utilising multiple CPU cores. They will also have been introduced to a range of topics relevant to high-performance software development, including command-line Linux, version control, data structures, and supercomputer cluster queueing systems.

There will also be opportunities to attend seminars on a range of academic and industrial applications of these techniques and to learn how they are used in practice. Attendees will also work together on small projects, with opportunities to network with their peers and with leaders in the HPC field.

A draft timetable is available, but is subject to change. A list of the main lecturers and staff for the Academy is also available.

The Academy will include the following courses:

Introduction to Linux

The main operating system used within HPC is Linux, and the course will assume its use throughout. A brief introduction to some of its main features and functionality will be given.

Scientific Programming

Two of the main languages used in Scientific Computing are C++ and Fortran. These are relatively high-level languages that still permit sufficiently low-level constructs to obtain the best performance from the computer hardware. Participants will choose one of these languages to learn during the course. The lectures will consist of an introduction to the chosen language, suitable for newcomers provided they have experience with some existing programming language.

Numerical Application

Two common application areas for scientific computing are Computational Fluid Dynamics and Electronic (Atomistic) Structure simulation. Students will choose one area and be given an introduction to the numerical methods and approximations required to solve some simple test problems. Lectures will cover the accuracy and robustness of these numerical methods from a mathematical standpoint, and practicals will guide students through their implementation and testing in their chosen language.

Performance Programming

This course is designed to teach students to think about, and explore, the factors that affect the performance of their code. Relevant factors include the compiler, the operating system, the hardware architecture, and the interplay between them. The emphasis will be on the general principles of performance measurement rather than on the specific packages being used. The course includes familiarisation with basic serial debugging and profiling tools.

Parallel Architectures

This course will introduce shared-memory and distributed-memory HPC architectures, together with their programming models and algorithms, basic parallel performance measurement, and Amdahl's law. It will also cover the concept of communicating parallel processes: synchronous and asynchronous; blocking and non-blocking. Basic concepts regarding data dependencies and data sharing between threads and processes will be introduced using pseudocode exercises and thought experiments.

Introduction to HPC Clusters

Participants will be introduced to the concept of supercomputers and the various ways of accessing their resources. As part of the practicals, access will be given to the University of Cambridge's CSD3 cluster.
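Access to such clusters typically goes through a batch queueing system such as Slurm (which CSD3 uses). A minimal illustrative submission script is sketched below; the partition name and program are placeholders, and real scripts need site-specific account and partition settings:

```shell
#!/bin/bash
#SBATCH --job-name=hello-hpc      # name shown in the queue
#SBATCH --nodes=1                 # number of nodes requested
#SBATCH --ntasks=4                # number of parallel tasks
#SBATCH --time=00:10:00           # wall-clock time limit
#SBATCH --partition=example       # placeholder partition name

# Launch the program under the scheduler's parallel launcher
srun ./my_program
```

The script is submitted with `sbatch`, and the scheduler decides when and where the job runs.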

OpenMP

OpenMP is an application programming interface for shared-memory parallel programming. The course will cover the OpenMP model; initialisation; parallel regions and parallel loops; shared and private data; and loop scheduling and synchronisation. By the end, students should be able to modify and run example programs on a multi-core system, and understand the performance characteristics of the different loop-scheduling options.

MPI (Message Passing Interface)

MPI is used to run large scientific programs on clusters of computers. This course covers almost all of the MPI features used in scientific applications, for C, C++ and Fortran. Topics include message-passing and MPI concepts, the use of MPI, MPI's datatypes, collective operations, blocking and non-blocking point-to-point messages, subsets of processes, practical guidelines, and problem decomposition. By the end, students should be able to modify and run an example program on a distributed-memory system and understand its basic performance characteristics.

Software Development and Core algorithms

This course will review the material from the entire programme, compare and contrast different programming approaches, and place the course in the wider context of computational science as a research discipline. It will also outline other important areas in parallel software design and development that are beyond the scope of this initial Academy. The course will include code maintainability, libraries, version control, and the choice and design of suitable algorithms.

Group Projects

As part of the course, students will work in groups to explore problems related to the topics they have covered in lectures. They will then present their results to the other students, as part of the overall aim of preparing them for an academic environment.

Industrial Seminars

Throughout the course, there will be occasional seminars by HPC experts, as well as industrial researchers who use Scientific Computing techniques as part of their daily work.