
Ada:Batch Job Submission


Job Submission: the bsub command

bsub < jobfile                  # Submits specified job for processing by LSF

Here is an illustration:

[userx@login4]$ bsub < sample1.job
Verifying job submission parameters...
Job <224139> is submitted to default queue <devel>.
[userx@login4]$

The first thing LSF does upon submission is to tag your job with a numeric identifier, a job id. Above, that identifier is 224139. You will need it to track or manage (kill or modify) the job. Next, note that the job's default current working directory is the directory you submitted it from. If that is not what you need, you must change it explicitly inside the job file, for example by cd-ing into the desired directory. On job completion, LSF places a file such as stdout1.224139 in the submission directory. It contains a log of job events and other data directed to standard output. Always inspect this file for useful information.
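
For reference, here is a hedged sketch of what a minimal job file such as sample1.job might contain; the job name, resource values, directory, and program name below are illustrative assumptions, not the actual contents of that file.

#BSUB -J sample1                # job name (illustrative)
#BSUB -o stdout1.%J             # standard output file; %J expands to the job id
#BSUB -L /bin/bash              # start the job in a new login shell
#BSUB -W 30                     # wall-clock limit ([HH:]MM)
#BSUB -n 20                     # total number of cpus/cores for the job
#BSUB -R "span[ptile=20]"       # cores per node
#
cd $SCRATCH/net-id/sample_dir   # run from a specific directory instead of the submission directory
./my_prog.exe                   # program to run (illustrative)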

Three important job parameters:

#BSUB -n NNN                    # NNN: total number of cpus to allocate for the job
#BSUB -R "span[ptile=XX]"       # XX:  number of cores/cpus per node to use
#BSUB -R "select[node-type]"    # node-type: nxt, mem256gb, gpu, phi, mem1t, mem2t ...

We list these together because in many jobs they are closely related and therefore must be set consistently. We recommend specifying all three in every job, whether serial, single-node, or multi-node. The following examples, with some commentary, illustrate their use.

#BSUB -n 900                    # 900: number of cpus to allocate for the job
#BSUB -R "span[ptile=20]"       # 20:  number of cores/cpus per node to use
#BSUB -R "select[nxt]"          # Allocates NeXtScale nodes

The above specifications will allocate 45 (=900/20) whole nodes. In many parallel jobs the selection of NeXtScale nodes at 20 cores per node is the best choice.

#BSUB -n 900                    # 900: total number of cpus to allocate for the job
#BSUB -R "span[ptile=16]"       # 16:  number of cores/cpus per node to use
#BSUB -R "select[nxt]" -x       # Allocates exclusively whole NeXtScale nodes

The above specifications will allocate 57 (= ceiling(900/16)) nodes. The exclusive (-x) node allocation requested here can be important for multi-node parallel jobs that need it. It prevents LSF from scheduling other jobs, even ones that would fit in the 4 unused cores, on the allocated nodes. Without -x, one or more of the 57 nodes may end up hosting additional jobs, which can drastically reduce the performance of the 900-core job. "Wasting" 4 cores per node can be justified by specific program behavior, such as memory use or communication traffic; in any case, the decision to use 16 cores per node or fewer should be made only after careful experimentation. Note that applying the -x option will cost you, in terms of SUs, the same as using all 20 cores per node, not 16, so use it sensibly.
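
As a rough illustration of that cost, and assuming SUs are charged per core-hour (an assumption about the local charging policy), a 24-hour run of the 57-node job above would be billed for all 20 cores of each node because of -x: roughly 57 * 20 * 24 = 27,360 SUs, even though at most 912 (= 57 * 16) cores run application processes.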


#BSUB -n 1                    # Allocate a total of 1 cpu/core for the job, appropriate for serial processing.
#BSUB -R "span[ptile=1]"      # Allocate 1 cpu per node.
#BSUB -R "select[gpu]"        # Make the allocated node have gpus, of 64GB or 256GB memory. A "select[phi]"
                              # specification would allocate a node with phi coprocessors.

Omitting the last two options above will cause LSF to place the job on any conveniently available core, on any node, idle or busy, of any type, except those with 1TB or 2TB memory.

It is worth emphasizing that, under the current LSF setup, only the -x option, or a ptile value equal to the node's core count, will prevent LSF from scheduling other jobs on any cores left unreserved on your nodes.
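
Putting these pieces together, here is a hedged sketch of a complete single-core job file that requests a gpu node; the job name, time limit, directory, module, and program name are illustrative assumptions.

#BSUB -J serial_gpu             # job name (illustrative)
#BSUB -o serial_gpu.%J          # standard output file
#BSUB -L /bin/bash              # start the job in a new login shell
#BSUB -W 2:00                   # wall-clock limit ([HH:]MM)
#BSUB -n 1                      # one core in total
#BSUB -R "span[ptile=1]"        # one core per node
#BSUB -R "select[gpu]"          # place the job on a gpu node
#
cd $SCRATCH/net-id/gpu_dir      # illustrative working directory
module load CUDA                # assumed module name; check module avail on Ada
./my_gpu_prog.exe               # program to run (illustrative)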


Common BSUB Options

... pending ...

More Examples

Example 2

## A sample OpenMP job. See subsection on Running OpenMP code for details.
##
#BSUB -J myomp1 -o myomp1.%J -L /bin/bash -W 200 -M 300 -n 20 -R 'span[ptile=20]'
## will run on a NeXtScale or iDataPlex node

cd $SCRATCH/net-id/omp_dir

module load ictce         # load intel toolchain

export OMP_NUM_THREADS=20; export OMP_STACKSIZE=200M
#
./myomp_prog.exe

Example 3

## A multi-node (10) MPI job 
#BSUB -J mpitest -o mpitest.%J -L /bin/bash -W 30 -n 200 -R 'span[ptile=20]'
#
cd $SCRATCH/net-id/mpi_dir
#
module load ictce         # load intel toolchain. Provides for needed MPI runtime libs

export I_MPI_HYDRA_BOOTSTRAP=lsf
# tells Intel MPI to launch MPI processes using LSF's blaunch
#
export I_MPI_LSF_USE_COLLECTIVE_LAUNCH=1
# tell Intel MPI to launch only one blaunch instance (for scalability and stability)
#
export I_MPI_HYDRA_BRANCH_COUNT=10
# set this variable to the number of nodes (= 200/20) involved in computation

# launch MPI program using the "hydra" launcher on 200 cores across 10 nodes
mpiexec.hydra ./hw.mpi.C.exe
....
....

Environment Variables

When LSF selects and activates a node to run your job, by default it duplicates the environment the job was submitted from. In the course of your work, that environment may have been altered (e.g., by loading modules or by setting or changing common environment variables) so that it differs from the one the login process created, and the next job you submit may require a different execution environment. Hence the recommendation: when submitting jobs, specify the creation of a new login shell and explicitly customize the environment inside the job as needed. A new login shell per job is requested with the #BSUB -L /bin/bash option.
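
Below is a hedged sketch of a job header that follows this recommendation: -L /bin/bash requests a fresh login shell, and the environment is then rebuilt explicitly inside the job. The job name, module choices, and program name are illustrative assumptions.

#BSUB -J clean_env -o clean_env.%J -L /bin/bash -W 60 -n 20 -R 'span[ptile=20]'
#
module purge                    # start from a clean module state
module load ictce               # explicitly load the toolchain this job needs
#
./my_prog.exe                   # program to run (illustrative)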

All the nodes enlisted for the execution of a job carry most of the environment variables the login process created: HOME, PWD, PATH, USER, etc. In addition, LSF defines new ones in the environment of an executing job. Below, we show an abbreviated list.

LSB_QUEUE:     The name of the queue the job is dispatched from.
LSB_JOBNAME:   Name of the job.
LSB_JOBID:     Batch job ID assigned by LSF.
LSB_ERRORFILE: Name of the error file specified with a bsub -e.
LSB_HOSTS:     The list of nodes (their LSF symbolic names) used to run the batch job. A node name is repeated
               as many times as the specified or default ptile value. The size of LSB_HOSTS is limited to 4096 bytes.
LSB_MCPU_HOSTS: The list of nodes (their LSF symbolic names) and the specified or default ptile value for each, used to run the batch job.
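
As a further illustration, here is a hedged sketch that uses some of these variables inside a job script to keep each run's results separate; the directory layout and program name are illustrative assumptions.

# Create a per-job results directory named after the job name and id (e.g., myrun.224139)
# and record the node list the job ran on.
cd $SCRATCH/net-id/work_dir
mkdir -p ${LSB_JOBNAME}.${LSB_JOBID}
echo "$LSB_MCPU_HOSTS" > ${LSB_JOBNAME}.${LSB_JOBID}/nodes.txt
./my_prog.exe > ${LSB_JOBNAME}.${LSB_JOBID}/output.txt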

Example. The following is a Linux script to be used within a job to periodically track the load level on each of the allocated nodes.

#!/bin/bash
echo 'HOST_NAME       status  r15s   r1m  r15m   ut    pg  ls    it   tmp  swp   mem'
echo $LSB_MCPU_HOSTS | sed 's/ [1-4]./\n/g' | \
while read node_id
do
   lsload $node_id | sed '/^HOST/d'
done

Job tracking and control commands

bjobs [-u all or user_name] [[-l] job_id]    # displays job information per user(s) or job_id, in summary or detail (-l) form.
bpeek [-f] job_id                            # displays the stdout and stderr output of an unfinished job.
bkill job_id                                 # kills, suspends, or resumes unfinished jobs. See man bkill for details.
bmod [bsub_options]   job_id                 # Modifies job submission options of a job. See man bmod for details.
lsload [node_name]                           # Lists on std out a node's utilization. Use bjobs -l jobid
                                             # to get the names of nodes associated with a jobid. See man lsload for details.

Examples

[userx@login4]$ bjobs -u all
JOBID      STAT  USER             QUEUE      JOB_NAME             NEXEC_HOST SLOTS RUN_TIME        TIME_LEFT
223537     RUN   adinar           long       NOR_Q                1          20    400404 second(s) 8:46 L
223547     RUN   adinar           long       NOR_Q                1          20    399830 second(s) 8:56 L
223182     RUN   tengxj1025       long       pro_at16_lowc        10         280   325922 second(s) 5:27 L
229307     RUN   natalieg         long       LES_MORE             3          900   225972 second(s) 25:13 L
229309     RUN   tengxj1025       long       pro_atat_lowc        7          280   223276 second(s) 33:58 L
229310     RUN   tengxj1025       long       cg16_lowc            5          280   223228 second(s) 33:59 L
. . .             . . .     . . .

[userx@login4]$ bjobs -l 229309

Job <229309>, Job Name <pro_atat_lowc>, User <tengxj1025>, Project <default>, M
                          ail <czjnbb@gmail.com>, Status <RUN>, Queue <long>, J
                          ob Priority <250000>, Command <## job name;#BSUB -J p
                          ro_atat_lowc; ## send stderr and stdout to the same f
                          ile ;#BSUB -o info.%J; ## login shell to avoid copyin
                          g env from login session;## also helps the module fun
                          ction work in batch jobs;#BSUB -L /bin/bash; ## 30 mi
                          nutes of walltime ([HH:]MM);#BSUB -W 96:00; ## numpro
                          cs;#BSUB -n 280; . . .
                          . . .

 RUNLIMIT
 5760.0 min of nxt1449
Tue Nov  4 21:34:43 2014: Started on 280 Hosts/Processors <nxt1449> <nxt1449> <
                          nxt1449> <nxt1449> <nxt1449> <nxt1449>  ...
                          . . .

Execution
                          CWD </scratch/user/tengxj1025/EXTD/pro_atat/lowc/md>;
Fri Nov  7 12:05:55 2014: Resource usage collected.
                          The CPU time used is 67536997 seconds.
                          MEM: 44.4 Gbytes;  SWAP: 0 Mbytes;  NTHREAD: 862

                          HOST: nxt1449
                          MEM: 3.2 Gbytes;  SWAP: 0 Mbytes; CPU_TIME: 9004415 s
                          econds . . .
                          . . .
                          . . .


[userx@login4]$ bmod -W 46:00 229309            # resets wall-clock time to 46 hrs for job 229309



The lsload command & node utilization. It may happen that a job uses its allocated nodes inefficiently. Sometimes this is unavoidable, but often it can be avoided. It is unavoidable, for instance, when the memory used per node is a large fraction of that node's total while only 1 cpu is in use; in that case, cpu utilization on a regular (20-core) node will be at best 5% (1/20). The main tool for tracking node utilization is the lsload command.

lsload [node_name]                # Lists on std out a node's utilization. Use bjobs -l jobid
                                  # to get the names of nodes associated with a jobid.

Below we list the output from the homemade shell script node_use. The 6 nodes attached to job 260291 exhibit fairly uneven usage (ut column): the first three nodes run at 100% utilization, while the bottom three run at only 40%.

./node_use 260291
HOST_NAME        status  r15s   r1m  r15m  ut    pg   ls  it    tmp  swp   mem
nxt1739              ok  20.4  21.1  20.9 100%   0.0   0  6224 491M  4.7G 43.2G
nxt2130              ok  20.3  20.1  19.8 100%   0.0   0  2920 495M  4.5G   43G
nxt2131              ok  20.2  20.6  20.4 100%   0.0   0  2920 495M  4.7G 42.8G
nxt2137              ok   8.0   8.4   8.4  40%   0.0   0  2920 495M  4.7G 52.7G
nxt1220              ok   8.0   8.0   8.0  40%   0.0   0  1959 497M  4.7G 52.7G
nxt1221              ok   8.0   8.0   8.1  40%   0.0   0  1959 497M  4.7G 52.7G

The above imbalance may be there by design, or it may be due to poor programming. If it is not by design, you should investigate further, enlisting the help of our Helpdesk staff if need be. Jobs that use their nodes effectively finish sooner, which improves the efficiency of the whole system.

Below we list the homemade script, node_use, for tracking a job's node usage.

#!/bin/bash
# usage: node_use jobid; Nov 2014. For use on Ada only.
echo 'HOST_NAME       status  r15s   r1m  r15m   ut    pg  ls    it   tmp  swp   mem'
#
bjobs -l $1 | egrep -i "HOST: " | sed '1,$s/HOST: //g' | \
 while read node_ID
 do
    lsload $node_ID | sed '/^HOST/d'
 done