Introduction to Code Parallelization using MPI


Instructor: Ping Luo

Time: Friday, November 2, 2:30-5:00PM

Location: SCC 102.B

Prerequisites: Working knowledge of C/C++ or FORTRAN; Ada account.

This course covers basic MPI concepts and code parallelization using MPI. The presentation is largely example-driven.

Course Materials

Presentation Slides

The presentation slides are available in PDF format.

  • Introduction to Code Parallelization with MPI (Spring 2018) slides: PDF
  • Introduction to Code Parallelization with MPI (Fall 2017 Part I) slides: PDF
  • Introduction to Code Parallelization with MPI (Fall 2017 Part II) slides: PDF


Among other things, this course covers the following topics:

  • Layout of an MPI program
  • Compilation and running of MPI programs
  • Message passing concepts
  • Point-to-point communication
  • Collective communications
  • Blocking and non-blocking communication
  • Self-scheduling
  • Matrix-vector multiplication
  • Solving the Poisson problem
  • MPI/OMP hybrid programming
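
As a taste of the first two topics, the layout of an MPI program can be sketched with a minimal C example (the exact compiler wrapper and launcher available on Ada may differ from the generic names shown in the usage note below):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;

    /* Initialize the MPI environment */
    MPI_Init(&argc, &argv);

    /* Query this process's rank and the total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    /* Shut down the MPI environment */
    MPI_Finalize();
    return 0;
}
```

On most clusters such a program is compiled with an MPI compiler wrapper and launched with an MPI launcher, for example `mpicc hello.c -o hello` followed by `mpirun -np 4 ./hello`; the course covers the specific commands used on Ada.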

Note: During the class sessions, many aspects of the material will be illustrated live via a login to Ada. Attendees are welcome to follow along on their own laptops, which they will need to configure to use the TAMULink wireless network. Relevant details on this can be found at:

You are encouraged to contact the HPRC helpdesk with any questions regarding Ada.