WRF

THIS PAGE IS UNDER CONSTRUCTION

The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs.

There are modules containing precompiled versions of WRF available, but we recommend that users compile WRF themselves due to the large number of available compilation options. The precompiled versions can be useful for new users learning to run the available test and benchmark simulations, but anyone planning to run more realistic simulations will inevitably have to compile WRF themselves to tailor it to their specific situation.

Compiling WRF

The official instructions for compiling WRF are at:

https://www2.mmm.ucar.edu/wrf/users/wrf_users_guide/build/html/compiling.html

and provide guidance on how to compile both WRF and its various prerequisites. At TAMU HPRC these prerequisites can be satisfied by loading modules, so most of those instructions can be skipped; it is still worth reading through them all at least once.

The compilation procedure we will use consists of downloading and unpacking the WRF source code, loading the appropriate modules, setting the appropriate environment variables, running a script to set compilation options, and finally compiling the code.

Downloading and Unpacking

We will download the source code from the GitHub WRF repository, although it would be a good idea to first register as a WRF user at:

https://www2.mmm.ucar.edu/wrf/users/download/wrf-regist.php

to enable the developers to better determine how to support and develop the model. The better the feedback they get, the better the needs of all users will be served. Now, on to the downloading and unpacking procedure. Locate the Releases section of the WRF repository on GitHub at:

https://github.com/wrf-model/WRF

and find the latest release (or whichever release you want) and copy the URL of its source tarball. Then download the tarball into an appropriate location in your scratch directory, uncompress and unpack it, and enter the top-level directory.

cd $SCRATCH
mkdir WRF
cd WRF
wget https://github.com/wrf-model/WRF/releases/download/v4.5.1/v4.5.1.tar.gz
tar xzvf v4.5.1.tar.gz
cd WRFV4.5.1

Loading the Modules

You will need at least a compiler toolchain with MPI, plus modules for HDF5 and netCDF. A recent version of the foss toolchain contains just about everything you'll need except HDF5 and netCDF. Load a netCDF-combined module rather than separate modules for the C and Fortran netCDF libraries, since WRF expects to find both libraries in the same location.

module load foss/2021b
module load netCDF-combined/4.8.1

Setting Environment Variables

The WRF configure script expects to find the HDF5 and netCDF libraries in standard system locations, which is not where they end up when you load them as modules. As such, you need to set a couple of environment variables that specify their paths. It's also a good idea to set the NETCDF_classic variable at this point.

export NETCDF=/sw/hprc/sw/netCDF-combined/4.8.1-gompi-2021b
export HDF5=/sw/eb/sw/HDF5/1.10.8-gompi-2021b
export NETCDF_classic=1
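
The paths above are specific to one cluster and toolchain. If you are unsure what to use on yours, the module show command prints the paths a module adds to the environment; a quick check, assuming the module names used above:

module show netCDF-combined/4.8.1
module show HDF5/1.10.8-gompi-2021b

Configuring and Compiling

With the modules loaded and the environment variables set, the remaining steps are to run the configure script and then compile a case, as described in the official instructions linked above. A minimal sketch follows; the option numbers that configure presents vary by system, so choose the compiler (GCC for the foss toolchain) and parallelism (dmpar for MPI runs) that match your setup.

# choose a compiler/parallelism option and a nesting option when prompted
./configure
# compile the real-data case; this can take a while, so run it in the background and log the output
./compile em_real >& compile.log &

Check compile.log for errors when the build finishes; a successful build leaves executables such as wrf.exe and real.exe in the main/ directory.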

Access

WRF is open to all HPRC users.

Loading the Module

To see all available versions of WRF:

[NetID@cluster ~]$ module spider WRF

To load a particular version of WRF (for example, WRF/4.1.3-intel-2019b-dmpar):

[NetID@cluster ~]$ module load WRF/4.1.3-intel-2019b-dmpar

Usage on the Login Nodes

Please limit interactive processing to short, non-intensive usage. Use non-interactive batch jobs for resource-intensive and/or multiple-core processing. Users are requested to be responsible and courteous to other users when using software on the login nodes.

The most important processing limits here are:

* ONE HOUR of PROCESSING TIME per login session.
* EIGHT CORES per login session on the same node or (cumulatively) across all login nodes.

Anyone found violating the processing limits will have their processes killed without warning. Repeated violation of these limits will result in account suspension.
Note: Your login session will disconnect after one hour of inactivity.

Usage on the Compute Nodes

Non-interactive batch jobs on the compute nodes allow for resource-demanding processing. Non-interactive jobs have higher limits on the number of cores, the amount of memory, and the runtime length.

For instructions on how to create and submit a batch job, please see the batch processing wiki page for your cluster.
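
As a sketch, a minimal Slurm batch script for an MPI (dmpar) WRF run might look like the following. The module name matches the example above, while the job name, core count, memory, and walltime are placeholder assumptions to adjust for your cluster and case.

#!/bin/bash
#SBATCH --job-name=wrf          # job name shown in the queue
#SBATCH --time=02:00:00         # walltime limit (HH:MM:SS)
#SBATCH --ntasks=48             # total number of MPI tasks
#SBATCH --mem=56G               # memory per node
#SBATCH --output=wrf.%j.out     # stdout/stderr file (%j = job ID)

# load the same module used interactively
module load WRF/4.1.3-intel-2019b-dmpar

# run from the directory containing namelist.input and the model input files
mpirun wrf.exe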
