
SW:WRF


Revision as of 14:08, 29 June 2016



The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs.

Homepage: http://www.wrf-model.org


WRF is open to all HPRC users.

To see all versions of WRF available on Ada:

 module spider WRF

Loading the WRF module will also load all other modules necessary to run WRF. To load the default WRF module on Ada:

 module load WRF

To load a particular version of WRF on Ada (Example: 3.8.0):

 module load WRF/3.8.0-intel-2016a-dmpar
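After loading, you can confirm what ended up in your environment. The commands below are standard Lmod commands available on HPRC clusters; the exact set of modules reported will depend on the toolchain the WRF module pulls in:

```shell
# Load a specific WRF build.
module load WRF/3.8.0-intel-2016a-dmpar

# List all currently loaded modules: WRF itself plus the
# dependency modules (e.g. the intel-2016a toolchain) that
# the WRF module loaded automatically.
module list
```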

Usage on the Login Nodes

Please limit interactive processing to short, non-intensive usage. Use non-interactive batch jobs for resource-intensive and/or multiple-core processing. You are expected to be responsible and courteous to other users when using software on the login nodes.

Usage on the Compute Nodes

Non-interactive batch jobs on the compute nodes allow for resource-demanding processing. Non-interactive jobs have higher limits on the number of cores, amount of memory, and runtime length.

For instructions on how to create and submit a batch job, please see the appropriate wiki page for each respective cluster:

Ada Example

#BSUB -J wrftest          # sets the job name to wrftest.
#BSUB -L /bin/bash        # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 10:00            # sets to 10 hours the job's runtime wall-clock limit.
#BSUB -n 40               # assigns 40 cores for execution.
#BSUB -R "span[ptile=20]"   # assigns 20 cores per node.
#BSUB -R "rusage[mem=2700]" # reserves 2700MB per process/CPU for the job
#BSUB -M 2700               # sets to 2,700MB the per process enforceable memory limit.
#BSUB -o wrf.%J             # directs the job's standard output to wrf.jobid

# Load the modules
module load WRF/3.8.0-intel-2016a-dmpar

# Use mpirun to start wrf
mpirun wrf.exe
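A script like the one above is submitted to the LSF scheduler with bsub; the input redirection is required so that bsub can parse the #BSUB directives embedded in the script. The file name wrf_ada.job below is illustrative:

```shell
# Submit the job script to LSF (bsub reads the #BSUB
# directives from standard input).
bsub < wrf_ada.job

# Check the status of your pending and running jobs.
bjobs
```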

Terra Example