
MOOSE

MOOSE (Multiphysics Object-Oriented Simulation Environment) is an open-source, parallel, finite element, multiphysics framework developed by Idaho National Laboratory. It provides a high-level interface for nonlinear solver technology (https://github.com/idaholab/moose).

Environment Set-Up

Load the MOOSE module. This configures your shell environment for building an application with the MOOSE framework to run your workload. This can be performed on a login node because it is not resource intensive and fits within the one-hour session limit for login nodes.

module purge       # ensure your working environment is clean
module load MOOSE  # load the latest MOOSE version installed on the clusters
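
To confirm that the module loaded correctly, list your currently loaded modules:

module list        # the MOOSE module should appear in the output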

Additional MOOSE versions installed (if available) can be viewed by running the following:

mla MOOSE

or

module spider MOOSE

Building an Application

MOOSE itself is only the framework; the actual solving is done by applications built on top of it. The following describes the process for configuring and building an application executable.

Generate Configuration Files

cd $SCRATCH            # navigate to your scratch directory
stork.sh my_app_name   # creates a directory containing configuration files for building an application
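
The generated directory holds the application skeleton, including the Makefile edited in the next step; you can inspect it before continuing (the exact layout may vary by MOOSE version):

ls my_app_name     # expect a Makefile along with directories such as src, include, and test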

Configure Application Modules

The features of an application executable created through the MOOSE framework are configured through a 'Makefile'. This file is generated when the 'stork.sh' script is run and is placed in the directory named after the selected application.

cd my_app_name     # change into the directory containing application configuration files
vim Makefile       # any text editor can be used to edit this file

The section that needs attention is 'MODULES'. Enable or disable features as needed, or set the 'ALL_MODULES' option to 'yes' to enable all available physics features:

###############################################################################
################### MOOSE Application Standard Makefile #######################
###############################################################################
#
# Optional Environment variables
# MOOSE_DIR        - Root directory of the MOOSE project
#
###############################################################################
# Use the MOOSE submodule if it exists and MOOSE_DIR is not set
MOOSE_SUBMODULE    := $(CURDIR)/moose
ifneq ($(wildcard $(MOOSE_SUBMODULE)/framework/Makefile),)
  MOOSE_DIR        ?= $(MOOSE_SUBMODULE)
else
  MOOSE_DIR        ?= $(shell dirname `pwd`)/moose
endif

# framework
FRAMEWORK_DIR      := $(MOOSE_DIR)/framework
include $(FRAMEWORK_DIR)/build.mk
include $(FRAMEWORK_DIR)/moose.mk

################################## MODULES ####################################
# To use certain physics included with MOOSE, set variables below to
# yes as needed.  Or set ALL_MODULES to yes to turn on everything (overrides
# other set variables).

ALL_MODULES                 := no

CHEMICAL_REACTIONS          := no
CONTACT                     := no
EXTERNAL_PETSC_SOLVER       := no
FLUID_PROPERTIES            := no
FSI                         := no
FUNCTIONAL_EXPANSION_TOOLS  := no
GEOCHEMISTRY                := no
HEAT_CONDUCTION             := no
LEVEL_SET                   := no
MISC                        := no
NAVIER_STOKES               := no
PHASE_FIELD                 := no
POROUS_FLOW                 := no
RAY_TRACING                 := no
RDG                         := no
RICHARDS                    := no
STOCHASTIC_TOOLS            := no
TENSOR_MECHANICS            := no
XFEM                        := no

include $(MOOSE_DIR)/modules/modules.mk
###############################################################################

# dep apps
APPLICATION_DIR    := $(CURDIR)
APPLICATION_NAME   := my_app_name
BUILD_EXEC         := yes
GEN_REVISION       := no
include            $(FRAMEWORK_DIR)/app.mk

###############################################################################
# Additional special case targets should be added here
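
For example, to build an application with only the heat conduction and tensor mechanics physics enabled, change just those two variables in the MODULES section of the Makefile shown above:

HEAT_CONDUCTION             := yes
TENSOR_MECHANICS            := yes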

Create Application Executable

make -j 8              # reads the edited Makefile and generates an application executable, my_app_name-opt
./my_app_name-opt -h   # run the help option for the application executable

Testing

This section provides instructions for testing the newly built application executable.

cd $SCRATCH
cd my_app_name
./run_tests

Example output:

[netid@login my_app_name]$ ./run_tests
test:kernels/simple_diffusion.test ................................................................... RUNNING
test:kernels/simple_diffusion.test ............................................................. [FINISHED] OK
--------------------------------------------------------------------------------------------------------------
Ran 1 tests in 28.3 seconds. Average test time 28.0 seconds, maximum test time 28.0 seconds.
1 passed, 0 skipped, 0 pending, 0 failed
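
If your application accumulates many tests, the test harness can usually run them concurrently; check the options supported by your MOOSE version first (the -j flag below is an assumption to verify via --help):

./run_tests --help     # list the options supported by the test harness
./run_tests -j 4       # run up to four tests concurrently, if -j is supported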

Application Usage

This section shows how to use an application executable built through MOOSE to run solvers on workloads. The examples included with the MOOSE installation are used below and can be copied into your application directory with the following commands:

cd $SCRATCH
cd my_app_name
cp -R $MOOSE_EXAMPLES .    # copy MOOSE examples to current directory
cd examples
make -j 4                  # compile all examples

Interactive

This method is recommended for quick tests that require only a small amount of computational resources, as it runs on the login nodes. Please see our policy regarding login node usage for additional information.

Each of the following examples has its own "*-opt" application executable (for example, ex01_inputfile has ex01-opt), and each input file in a given directory is passed to that directory's executable.

Example 1: inputfile

cd ex01_inputfile
./ex01-opt -i ex01.i   # pass ex01.i as an input into the application executable

Example Output

Framework Information:
MOOSE Version:           git commit 114b3de on 2021-10-22
LibMesh Version:         aebb5a5c0e1f6d8cf523a720e19f70a6d17c0236
PETSc Version:           3.15.1
SLEPc Version:           3.15.1
Current Time:            Tue Aug 23 10:42:22 2022
Executable Timestamp:    Tue Aug 23 09:52:25 2022

Parallelism:
  Num Processors:          1
  Num Threads:             1

Mesh:
  Parallel Type:           replicated
  Mesh Dimension:          3
  Spatial Dimension:       3
  Nodes:                   3774
  Elems:                   2476
  Num Subdomains:          1

Nonlinear System:
  Num DOFs:                3774
  Num Local DOFs:          3774
  Variables:               "diffused"
  Finite Element Types:    "LAGRANGE"
  Approximation Orders:    "FIRST"

Execution Information:
  Executioner:             Steady
  Solver Mode:             Preconditioned JFNK
 
 0 Nonlinear |R| = 6.105359e+00
      0 Linear |R| = 6.105359e+00
      1 Linear |R| = 7.953078e-01
      2 Linear |R| = 2.907082e-01
      3 Linear |R| = 1.499648e-01
      4 Linear |R| = 8.817703e-02
      5 Linear |R| = 6.169067e-02
      6 Linear |R| = 4.457036e-02
      7 Linear |R| = 3.512192e-02
      8 Linear |R| = 2.726412e-02
      9 Linear |R| = 1.898046e-02
     10 Linear |R| = 8.790202e-03
     11 Linear |R| = 2.739170e-03
     12 Linear |R| = 5.174430e-04
     13 Linear |R| = 1.531603e-04
     14 Linear |R| = 1.112251e-04
     15 Linear |R| = 7.528159e-05
     16 Linear |R| = 5.091118e-05
 1 Nonlinear |R| = 5.091329e-05
      0 Linear |R| = 5.091329e-05
      1 Linear |R| = 4.108788e-05
      2 Linear |R| = 2.790390e-05
      3 Linear |R| = 1.973113e-05
      4 Linear |R| = 9.917339e-06
      5 Linear |R| = 5.460132e-06
      6 Linear |R| = 2.598431e-06
      7 Linear |R| = 1.160227e-06
      8 Linear |R| = 5.413173e-07
      9 Linear |R| = 2.704343e-07
     10 Linear |R| = 1.411023e-07
     11 Linear |R| = 7.671469e-08
     12 Linear |R| = 6.251824e-08
     13 Linear |R| = 5.206276e-08
     14 Linear |R| = 3.648918e-08
     15 Linear |R| = 1.706070e-08
     16 Linear |R| = 6.136957e-09
     17 Linear |R| = 2.917065e-09
     18 Linear |R| = 1.896775e-09
     19 Linear |R| = 9.173625e-10
     20 Linear |R| = 3.720842e-10
 2 Nonlinear |R| = 3.802911e-10
 Solve Converged!

Additional examples can be found in the copied 'examples' directory.

Batch/Job Submission

This method is recommended for workloads that require a large amount of computational resources; it runs on the compute nodes through the batch job submission system.

Example 1: General Usage

Grace/FASTER

#!/bin/bash
#SBATCH -J moose-sample-1-grace    # set the job name to "moose-sample-1-grace"
#SBATCH -t 1:00:00                 # set the wall clock limit to 1hr
#SBATCH -N 20                      # request 20 nodes
#SBATCH --ntasks-per-node=48       # request 48 tasks per node
#SBATCH --mem=96G                  # request 96G per node
#SBATCH -o %x.%j                   # send stdout/err to "moose-sample-1-grace.[jobID]"

# environment set-up
module purge                       # ensure your working environment is clean
module load MOOSE                  # load the latest MOOSE version installed on the clusters

# run MOOSE example 1
cd $SCRATCH/my_app_name/examples   # navigate to the copied MOOSE examples directory
cd ex01_inputfile
./ex01-opt -i ex01.i

Terra

#!/bin/bash
#SBATCH -J moose-sample-1-terra    # set the job name to "moose-sample-1-terra"
#SBATCH -t 1:00:00                 # set the wall clock limit to 1hr
#SBATCH -N 20                      # request 20 nodes
#SBATCH --ntasks-per-node=28       # request 28 tasks per node
#SBATCH --mem=56G                  # request 56G per node
#SBATCH -o %x.%j                   # send stdout/err to "moose-sample-1-terra.[jobID]"

# environment set-up
module purge                       # ensure your working environment is clean
module load MOOSE                  # load the latest MOOSE version installed on the clusters

# run MOOSE example 1
cd $SCRATCH/my_app_name/examples   # navigate to the copied MOOSE examples directory
cd ex01_inputfile
./ex01-opt -i ex01.i
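
Save the script to a file (the name below is arbitrary) and submit it to the batch system:

sbatch moose-sample1.slurm     # submit the job; stdout/err goes to the file named by the -o option
squeue -u $USER                # check the status of your queued and running jobs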

Example 2: Large Memory

Some models may require more memory to complete successfully. The following examples use half the number of cores per node (--ntasks-per-node), so that each core gets 4G of the per-node memory (--mem): Grace/FASTER: 96G / 24 = 4G; Terra: 56G / 14 = 4G.

Grace/FASTER

#!/bin/bash
#SBATCH -J moose-sample-2-grace    # set the job name to "moose-sample-2-grace"
#SBATCH -t 1:00:00                 # set the wall clock limit to 1hr
#SBATCH -N 20                      # request 20 nodes
#SBATCH --ntasks-per-node=24       # request 24 tasks per node
#SBATCH --mem=96G                  # request 96G per node
#SBATCH -o %x.%j                   # send stdout/err to "moose-sample-2-grace.[jobID]"

# environment set-up
module purge                       # ensure your working environment is clean
module load MOOSE                  # load the latest MOOSE version installed on the clusters

# run MOOSE example 1
cd $SCRATCH/my_app_name/examples   # navigate to the copied MOOSE examples directory
cd ex01_inputfile
./ex01-opt -i ex01.i

Terra

#!/bin/bash
#SBATCH -J moose-sample-2-terra    # set the job name to "moose-sample-2-terra"
#SBATCH -t 1:00:00                 # set the wall clock limit to 1hr
#SBATCH -N 20                      # request 20 nodes
#SBATCH --ntasks-per-node=14       # request 14 tasks per node
#SBATCH --mem=56G                  # request 56G per node
#SBATCH -o %x.%j                   # send stdout/err to "moose-sample-2-terra.[jobID]"

# environment set-up
module purge                       # ensure your working environment is clean
module load MOOSE                  # load the latest MOOSE version installed on the clusters

# run MOOSE example 1
cd $SCRATCH/my_app_name/examples   # navigate to the copied MOOSE examples directory
cd ex01_inputfile
./ex01-opt -i ex01.i
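
Note that the scripts above launch the executable as a single process; to actually use all of the requested tasks, launch the application through MPI instead. A minimal sketch, assuming an MPI-enabled build and that mpirun picks up the Slurm allocation:

mpirun ./ex01-opt -i ex01.i    # run the solver across the allocated MPI tasks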