
Test


Test page to try different wiki formatting.



Sample job file (job script #1):

#BSUB -J myjob1           # sets the job name to myjob1.
#BSUB -L /bin/bash        # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 12:30            # sets to 12.5 hours the job's runtime wall-clock limit.
#BSUB -n 3                # assigns 3 CPU cores for execution.
#BSUB -R "span[ptile=3]"  # assigns 3 cores per node.
#BSUB -R "rusage[mem=15000]"  # schedules & selects a node/host that has 15000 MB avail minimum.
#BSUB -M 5000             # sets to 5,000MB (~5GB) the per process enforceable memory limit.
#BSUB -o stdout1.%J       # directs the job's standard output to stdout1.<jobID> (%J expands to the job's ID).
#BSUB -P project1         # charges the consumed service units (SUs) to project1.
#BSUB -u e-mail_address   # sends email to the specified address (e.g., netid@tamu.edu,
#                           myname@gmail.com) with information about main job events.
##
cd $SCRATCH/myjob1        # makes $SCRATCH/myjob1 the job's current working directory where all
#                           the needed files (e.g., prog.exe, input1, data_out1) are placed.
module load intel         # loads the Intel software toolchain to provide, among other things,
#                           needed runtime libraries for the execution of prog.exe below.
#                           (assumes prog.exe was compiled using the Intel compilers.)
#
# The next 3 lines concurrently execute 3 instances of the same program, prog.exe, with
# standard input and output data streams assigned to different files in each case. This style
# of concurrent execution can be extended up to 20-way or 40-way on nodes with 20 cores
# and 40 cores, respectively.
#
(prog.exe < input1 >& data_out1 ) &   # instance 1, in the background (&); >& sends both stdout
#                                       and stderr to data_out1.
(prog.exe < input2 >& data_out2 ) &   # instance 2, in the background.
(prog.exe < input3 >& data_out3 )     # instance 3, in the foreground.
wait                      # pauses here until all background instances finish, so the job does not exit early.
##
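
To run the script, submit it to the LSF batch system with bsub, which reads the #BSUB directives from standard input. A minimal sketch, assuming the script above is saved as myjob1.job (a hypothetical filename):

bsub < myjob1.job         # submits the job; bsub parses the #BSUB directives from stdin.
bjobs                     # lists your pending and running jobs along with their job IDs.
bkill 123456              # cancels a job by its job ID (123456 is a placeholder), if needed.

Note the input redirection (<): bsub reads the job script on standard input, so the #BSUB directives embedded in it take effect at submission time.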