
Ada:Batch Job Files

Job files

A user's request to do processing via the batch system is commonly, though not exclusively, expressed in a text file, referred to from here on as a job file or simply a job. This file contains LSF directives and user-specified commands. The directives, one per line, are all prefaced by the #BSUB string. The rest of each directive, on the same line, can specify any number of job parameters/options, many of which take associated values. The user-specified commands can be any combination of user-supplied executables and UNIX shell commands. Below, using two job files, we illustrate, first, a recommended general job format and, second, a concrete job with comment lines. (In the second, we use one option per line so that we can also provide some explanation.)

#BSUB -option1 value1 -option2 value2 ...  
#BSUB  ... more options ...                 
# ...  a blank or                           
# ...  a comment line                       
#                                           
cmd1                                        
cmd2                                        
...                                         

Example 1

#BSUB -J myjob1           # sets the job name to myjob1.
#BSUB -L /bin/bash        # uses the bash login shell to initialize
#                           the job's execution environment.
#BSUB -W 12:30            # sets to 12.5 hours the job's runtime wall-clock limit.
#BSUB -M 15000            # sets to 15,000 MB (15 GB) the per-process memory limit.
#BSUB -n 3                # assigns 3 cpus (cores) on one node for execution.
#BSUB -x                  # assigns a whole node (same node as above) exclusively for the job.
#BSUB -o stdout1.%J       # directs the job's standard output to stdout1.jobid
#BSUB -P project1         # charges the consumed service units (SUs) to project1.
#BSUB -u e-mail_address   # sends email to the specified address (e.g., netid@tamu.edu,
#                           myname@gmail.com) with information about main job events.
##
cd $SCRATCH/myjob1        # makes $SCRATCH/myjob1 the job's current working directory where all
#                           the needed files (e.g., prog.exe, input1, data_out1) are placed.
module load ictce         # loads the INTEL tool chain to provide, among other things,
#                           needed runtime libraries for the execution of prog.exe below.
#                           (assumes prog.exe was compiled using INTEL compilers.)
#
# The next 3 lines concurrently execute 3 instances of the same program, prog.exe, with
# standard input and output data streams assigned to different files in each case.
#
(prog.exe < input1 >& data_out1 ) &
(prog.exe < input2 >& data_out2 ) &
(prog.exe < input3 >& data_out3 )
wait
##
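
The wait command at the end keeps the shell from exiting until the two backgrounded instances of prog.exe have finished; without it, the job could end as soon as the third (foreground) instance completed.

Once the job file is saved (below we assume the hypothetical file name myjob1.job), it can be submitted and monitored with the standard LSF commands. Note the input redirection on bsub, which is what allows LSF to read the embedded #BSUB directives:

bsub < myjob1.job         # submits the job; LSF reads the #BSUB directives from the file
bjobs                     # lists your pending and running jobs, with their job ids
bkill jobid               # cancels a job, if needed, using the job id reported by bsub/bjobs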

Alternatively, and more compactly, you can specify multiple #BSUB options per line:

#BSUB -J myjob1 -W 12:30 -n 3 -x -o stdout1.%J -P project1
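
For instance, Example 1 above could be written in this compact style as the following sketch (same directives and commands as before, just collapsed onto two #BSUB lines):

#BSUB -J myjob1 -L /bin/bash -W 12:30 -M 15000 -n 3 -x
#BSUB -o stdout1.%J -P project1 -u e-mail_address
#
cd $SCRATCH/myjob1
module load ictce
(prog.exe < input1 >& data_out1 ) &
(prog.exe < input2 >& data_out2 ) &
(prog.exe < input3 >& data_out3 )
wait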