Curie:Getting Started

Hardware summary

The Curie cluster is located in the Wehner building.

  • Compute: 48 nodes of IBM Power 7R2, with 16 cores, 4.2 GHz POWER7 processors, 256 GB memory, 10 Gbps Ethernet for interconnect
  • Login: 2 nodes of IBM Power 7R2, with 16 cores, 4.2 GHz POWER7 processors, 256 GB memory, 10 Gbps Ethernet for interconnect, and 1 Gbps Ethernet to campus network
  • Interconnect: 10 Gbps Ethernet
  • Network to Ada cluster (Teague building): two bonded 40 Gbps Ethernet links

Apply for an account and obtain an allocation

The Curie cluster uses the same account application process as the Ada cluster. To apply for an account on Ada/Curie, please visit the account application page (http://sc.tamu.edu/apply/).

No allocation is required for Curie (as of March 2015). The allocation application process and policy will be announced when the new account management system is online.

Logging in

To log in to the Curie cluster, use any SSH client program and connect to curie.tamu.edu with your NetID and NetID password. You will be placed on one of the two login nodes (curie1.tamu.edu and curie2.tamu.edu). If you are off campus, you will need to use the VPN to connect to the campus network first (see the TAMU ServiceNow Knowledge Base page on VPN).
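
For example, from a Linux or MacOS terminal you can log in with (substituting your own NetID):

ssh NetID@curie.tamu.edu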

File system access

Home and scratch directory

The Curie cluster has its home and scratch directories on GPFS, a parallel file system. The Ada and Curie clusters share the same home and scratch directories. By default, quotas are 10 GB for the home directory and 1 TB for the scratch directory. Additional scratch space and dedicated project space are available upon request.

Local disk on compute node

Compute nodes have access to a 600 GB local disk (mounted under /work).

Quota

You can query your file system quotas by running the "showquota" command.

Additional info

Additional information on the file systems can be found on the Ada Filesystems and Files page (https://sc.tamu.edu/wiki/index.php/Ada:Filesystems_and_Files).

Transfer files and data

To transfer data to/from the Curie cluster, you can use any SCP/SFTP client program, such as WinSCP or FileZilla under Windows, FileZilla under MacOS, and scp/sftp under Linux. See the Ada Data Transfer page (https://sc.tamu.edu/wiki/index.php/Ada:Fast_Data_Transfer) for additional instructions.
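
For example, to copy a file from a local Linux/MacOS machine to your home directory on Curie (the file name and NetID below are placeholders):

scp mydata.txt NetID@curie.tamu.edu:~/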

Setting up environment

The module system is used to manage the software environment. Common module commands are:

  • module spider : see all available software
  • module load PkgName : load software PkgName
  • module list : list loaded modules
  • module purge : remove all loaded modules
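
For example, a typical sequence of these commands might look like this (the intel module is the one used in the job file example later on this page):

module purge        # remove all loaded modules to start from a clean environment
module spider       # browse the available software
module load intel   # load the Intel compiler module
module list         # verify which modules are loaded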

Additional information on modules can be found on the Ada Computing Environment page (https://sc.tamu.edu/wiki/index.php/Ada:Computing_Environment#Modules).

Building executables

The xlc, xlf, and gcc compilers are available. MPICH, MPICH2, MVAPICH2, and OpenMPI are available for MPI applications.

Running batch jobs

Jobs are submitted to the compute nodes on Curie using the LSF workload manager. In fact, the same LSF instance manages jobs for both Curie and Ada.

Below is a sample batch job that requests 12 cpus/slots on 1 node with a walltime of 20 minutes.

Job file example

#BSUB -n 12 -R 'rusage[mem=600] span[ptile=12]' -M 150 -W 20
#BSUB -J mpi_helloWorld -o mpi_helloWorld.%J
#BSUB -L /bin/bash
#BSUB -q curie

module load intel
mpiexec.hydra -np 12 ./hello.x 


NOTE: the "#BSUB -q curie" flag will make sure the job will run on curie. If this flag is not added to your batch script, the job will probably run on ada (and will fail because it's a different architecture)


To check the status of your job, use the "bjobs" command.

For a more detailed discussion of the batch system (including more batch job examples), please see the Ada Batch page (http://sc.tamu.edu/wiki/index.php/Ada:Batch).

Submit a job

To submit your job, use the bsub command: "bsub < myjob" (where myjob is the name of your batch job file).
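
For example, if the job file shown earlier was saved as mpi_helloWorld.job (an illustrative file name), you could submit it and then check on it with:

bsub < mpi_helloWorld.job    # submit the batch job to LSF
bjobs                        # check the status of the job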

Queues

Four queues have been set up; jobs are redirected to one of them according to the requested wall time.

  • curie_devel
  • curie_medium
  • curie_long
  • curie_general
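
To inspect the current status of these queues you can use the standard LSF "bqueues" command, for example:

bqueues curie_devel curie_medium curie_long curie_general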

Other common commands for tracking your jobs

  • bjobs : check your job
  • bkill jobid : delete a job

... more to be added ...

Backup data

... to be added ...

Compiling/Running

The preferred compilers on Curie are the IBM XL compilers. The list below shows the available compilers:

  • Fortran compilers: xlf/xlf_r
  • C compilers: xlc/xlc_r
  • C++ compilers: xlc++/xlc++_r

The compilers with the "_r" suffix generate thread-safe code and are used to compile threaded programs (e.g. programs containing OpenMP directives/constructs).

By default, the XL compilers generate 32-bit code. To generate 64-bit code you can use the -q64 compiler flag or set the environment variable OBJECT_MODE to 64 (NOTE: when you load any of the XL compiler modules, OBJECT_MODE is set to 64 automatically).
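
For example, either of the following produces a 64-bit executable (hello.c is a placeholder source file):

# pass the flag explicitly on the compile line
xlc -q64 -o hello.x hello.c

# or set the environment variable once for the shell session
export OBJECT_MODE=64
xlc -o hello.x hello.c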

Serial Programs

Below are some of the most common compiler flags (these flags are accepted by all XL compilers, with some notable exceptions); a short example combining several of them follows the lists:

basic flags:

-o <file>   : name of output file
-c          : only compile, skip linking
-L <dir>    : include <dir> in library search path
-l<name>    : searches file libname.so or libname.a for linking
-q64 (-q32) : instruct the compiler to generate 64 (or 32) bit code

optimization flags:

-O[n]       : level of optimization; ranges from -O0 to -O5 (-O5 is the highest level)
-qarch=auto : specifies architecture for which the code will be generated
-qtune=auto : specifies architecture for which the program is OPTIMIZED
-qsmp=auto  : enables automatic parallelization (very conservative)
-qhot       : perform certain high-order transformations during optimization
-qessl      : use ESSL library instead of Fortran intrinsics (FORTRAN ONLY)

debug flags:

-g         : generate debugging info for use by a symbolic debugger
-qcheck    : add runtime checks for array bounds, uninitialized variables, etc.
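
For example, an optimized 64-bit build of a serial Fortran program might combine several of these flags (hello.f90 is a placeholder source file):

xlf -O3 -qarch=auto -qtune=auto -q64 -o hello.x hello.f90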

OpenMP Programs

To compile programs containing OpenMP parallel directives/pragmas, the following flag needs to be used:

openmp flags:

-qsmp=omp

NOTE: when compiling OpenMP programs, the appropriate compiler with the "_r" suffix must be used; otherwise the program will not work and will most likely crash.

Example: compile the OpenMP program hello_omp.c and name the executable hello_omp.x

xlc_r -qsmp=omp -o hello_omp.x hello_omp.c

Running OpenMP programs:

OpenMP programs are run exactly the same way as serial programs. Use the OpenMP environment variables to control OpenMP runtime behavior.

Example: run the program hello_omp.x and set the number of threads to 8

export OMP_NUM_THREADS=8
./hello_omp.x

MPI programs

Curie has multiple MPI stacks installed (mpich, mpich2, openmpi). To see a complete list, type "module spider mpi". On this page we will use OpenMPI, but the instructions also apply to the other MPI stacks (with a few possible exceptions). To load the latest OpenMPI stack (including the xlf compilers), type "module load xlompi".

To compile you will need to use the appropriate wrapper:

  • Fortran wrapper  : mpifort
  • C compilers  : mpicc
  • C++ compilers  : mpic++

NOTE: these wrappers use the non-thread-safe compilers underneath (i.e. the compilers without the "_r" suffix). We will provide thread-safe wrappers as soon as possible.

To see the full compiler command for any of the MPI wrappers, use the "-show" flag (e.g. mpifort -show). This prints a full listing, including compiler flags, library paths, and include paths, and can be useful for debugging purposes.

Example: compile the MPI program hello_mpi.c and name the executable hello_mpi.x

mpicc -o hello_mpi.x hello_mpi.c

Running MPI code: unlike serial or OpenMP code, running MPI code requires a launcher. The launcher sets up the environment and starts the requested number of tasks. On Curie the launcher is named "mpirun" and has the following syntax:

mpirun [mpi_flags] <executable> [exec params]

The following table shows some of the more common flags (for a full listing type "man mpirun")

-np                : number of tasks to start
-hosts <list>      : comma-separated list of hostnames to start tasks on
-hostfile <file>   : specifies hostfile (containing list of hosts to use)
-map-by ppr:N:node : places N processes on each node.

Example: run the program hello_mpi.x using 8 tasks, 4 on curie1, 4 on curie2

mpirun -np 8 -hosts curie1,curie2 -map-by ppr:4:node ./hello_mpi.x

Libraries

MASS Library

The Mathematical Acceleration SubSystem (MASS) is a collection of optimized mathematical functions. These functions are thread-safe and can be used with C, C++, and Fortran programs. To load the MASS library module, type "module load xlmass".

MASS scalar library:

The MASS scalar library contains a collection of scalar mathematical functions (e.g. sin, cos, erfc). To link against the MASS scalar library, add the "-lmass_64" flag ("-lmass" for 32-bit applications) to your compile/link command.

Example: compile program hello_mass.c and link with MASS scalar library

xlc -lmass_64 hello_mass.c

MASS vector library:

The MASS vector library contains a collection of vector mathematical functions (e.g. vsin, vcos, verfc). To link against the MASS vector library, add the "-lmassv_64" flag ("-lmassv" for 32-bit applications) to your compile/link command.

Example: compile program hello_mass.c and link with the MASS vector library

xlc -lmassv_64 hello_mass.c


NOTE: if you compile your program with "-qhot -O3", "-O4", or "-O5", the compiler will try to vectorize calls to system math functions by replacing them with the equivalent MASS vector functions where possible. Otherwise, the compiler will replace system math functions with the scalar MASS functions.
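
For example, compiling at a high optimization level allows the compiler to make these substitutions automatically (hello_mass.c as in the examples above):

xlc -O3 -qhot hello_mass.c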


ESSL Library:

The Engineering and Scientific Subroutine Library (ESSL) contains a collection of highly optimized mathematical subroutines, including BLAS, LAPACK, and FFTW functionality. The ESSL libraries are thread-safe and work with C, C++, and Fortran programs. ESSL includes serial as well as SMP versions.

Compiling/linking programs containing ESSL routines depends on a number of factors: language, serial vs. SMP, and integer size. Below are the compiler commands for the various cases (for illustration purposes all examples compile a program named hello_essl):

Fortran programs, 32bit integers :

Serial:  xlf_r -O -qnosave  hello_essl.f90 -lessl
SMP   :  xlf_r -O -qnosave -qsmp hello_essl.f90 -lesslsmp

Fortran programs, 64bit integers :

Serial:  xlf_r -O -qnosave -qintsize=8 hello_essl.f90 -lessl6464
SMP:     xlf_r -O -qnosave -qsmp  -qintsize=8 hello_essl.f90 -lesslsmp6464

C programs, 32bit integers :

Serial:  xlc_r -O hello_essl.c -lessl -lxlf90_r -lxlomp_ser
SMP   :  xlc_r -O -qsmp hello_essl.c -lesslsmp  -lxlf90_r -lxlsmp

C programs, 64bit integers :

Serial:  xlc_r -O -D_ESV6464 hello_essl.c -lessl6464 -lxlf90_r -lxlsmp
SMP:     xlc_r -O -D_ESV6464 hello_essl.c -lesslsmp6464 -lxlf90_r -lxlsmp

NOTE: for C++ programs the compiler commands are very similar to those used to compile C programs.

NOTE: for C/C++ programs you will also need to include "<essl.h>" in your source code.

For more information about the ESSL library, please visit http://publib.boulder.ibm.com/epubs/pdf/a2322684.pdf


NAG Fortran Library:

The NAG Libraries contain an extensive set of numerical and statistical algorithms (including full BLAS and LAPACK functionality). On Curie the only NAG library currently available is the Fortran Library (serial version). To load the NAG Fortran module, type "module load NAG/Fortran-Library". This will define the following two environment variables:

NAG_FORTRANLIB_INCLUDE    : location of include files
NAG_FORTRANLIB_LIB        : location of libraries

To compile/link a program that contains calls to NAG functions, you have to add the include/library paths to your compile command.

Example: compile program hello_nag.f90 and link with NAG library

xlf -I$NAG_FORTRANLIB_INCLUDE hello_nag.f90 -L$NAG_FORTRANLIB_LIB -lnag_nag