MOOSE

MOOSE (Multiphysics Object-Oriented Simulation Environment) is an open-source, parallel finite element, multiphysics framework developed by Idaho National Laboratory. It provides a high-level interface for nonlinear solver technology [https://github.com/idaholab/moose].

Environment Set-Up

Load the MOOSE module. This configures your shell environment for building and running applications with the MOOSE framework. Loading the module can be done on a login node because it is not resource intensive and fits within the one (1) hour session limit for login nodes.

module purge       # ensure your working environment is clean
module load MOOSE  # load the latest MOOSE version installed on the clusters

Additional installed MOOSE versions (if any) can be listed by running either of the following:

mla MOOSE

or

module spider MOOSE
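
To confirm the environment is ready, list the loaded modules. As a quick sketch (the exact set of variables the module defines may vary), the MOOSE module also sets helper variables such as $MOOSE_EXAMPLES, which is used later on this page:

module list               # verify that MOOSE appears among the loaded modules
echo $MOOSE_EXAMPLES      # path to the bundled MOOSE examples, set by the module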

Building an Application

MOOSE itself is only the framework; the actual solving is performed by application executables built on top of it. The following describes the process for configuring and building an application executable.

Generate Configuration Files

cd $SCRATCH            # navigate to your scratch directory
stork.sh my_app_name   # creates a directory containing configuration files for building an application
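
As a sketch of what to expect (the exact layout can vary by MOOSE version), the generated directory contains the build configuration and the test harness used in the remaining steps:

ls my_app_name    # typically includes: Makefile, src/, include/, test/, run_tests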

Configure Application Modules

The features of an application executable created through the MOOSE framework are configured through a 'Makefile'. This file is generated when the 'stork.sh' script is run and is located in the directory named after the selected application.

cd my_app_name     # change into the directory containing application configuration files
vim Makefile       # any text editor can be used to edit this file

The part that needs attention is the 'MODULES' section: enable or disable individual physics modules as needed, or set the 'ALL_MODULES' option to 'yes' to enable all available physics features:

###############################################################################
################### MOOSE Application Standard Makefile #######################
###############################################################################
#
# Optional Environment variables
# MOOSE_DIR        - Root directory of the MOOSE project
#
###############################################################################
# Use the MOOSE submodule if it exists and MOOSE_DIR is not set
MOOSE_SUBMODULE    := $(CURDIR)/moose
ifneq ($(wildcard $(MOOSE_SUBMODULE)/framework/Makefile),)
  MOOSE_DIR        ?= $(MOOSE_SUBMODULE)
else
  MOOSE_DIR        ?= $(shell dirname `pwd`)/moose
endif

# framework
FRAMEWORK_DIR      := $(MOOSE_DIR)/framework
include $(FRAMEWORK_DIR)/build.mk
include $(FRAMEWORK_DIR)/moose.mk

################################## MODULES ####################################
# To use certain physics included with MOOSE, set variables below to
# yes as needed.  Or set ALL_MODULES to yes to turn on everything (overrides
# other set variables).

ALL_MODULES                 := no

CHEMICAL_REACTIONS          := no
CONTACT                     := no
EXTERNAL_PETSC_SOLVER       := no
FLUID_PROPERTIES            := no
FSI                         := no
FUNCTIONAL_EXPANSION_TOOLS  := no
GEOCHEMISTRY                := no
HEAT_CONDUCTION             := no
LEVEL_SET                   := no
MISC                        := no
NAVIER_STOKES               := no
PHASE_FIELD                 := no
POROUS_FLOW                 := no
RAY_TRACING                 := no
RDG                         := no
RICHARDS                    := no
STOCHASTIC_TOOLS            := no
TENSOR_MECHANICS            := no
XFEM                        := no

include $(MOOSE_DIR)/modules/modules.mk
###############################################################################

# dep apps
APPLICATION_DIR    := $(CURDIR)
APPLICATION_NAME   := my_app_name
BUILD_EXEC         := yes
GEN_REVISION       := no
include            $(FRAMEWORK_DIR)/app.mk

###############################################################################
# Additional special case targets should be added here
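
For example, to enable a single module without opening an editor, a one-line edit of the generated Makefile works (a sketch using HEAT_CONDUCTION; any of the module variables above can be substituted):

sed -i '/^HEAT_CONDUCTION/s/:= no/:= yes/' Makefile    # flip HEAT_CONDUCTION from no to yes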

Create Application Executable

make -j 8              # reads the edited Makefile and generates an application executable, my_app_name-opt
./my_app_name-opt -h   # run the help option for the application executable

Testing

This section provides instructions for testing the newly built application executable.

cd $SCRATCH
cd my_app_name
./run_tests

Example output:

[netid@login my_app_name]$ ./run_tests
test:kernels/simple_diffusion.test ................................................................... RUNNING
test:kernels/simple_diffusion.test ............................................................. [FINISHED] OK
--------------------------------------------------------------------------------------------------------------
Ran 1 tests in 28.3 seconds. Average test time 28.0 seconds, maximum test time 28.0 seconds.
1 passed, 0 skipped, 0 pending, 0 failed
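
The test harness can also run tests concurrently; the -j flag mirrors make:

./run_tests -j 8    # run up to 8 tests in parallel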

Application Usage

This section presents examples of using an application executable built through MOOSE to run solvers on workloads. The examples bundled with the MOOSE installation are used throughout; they can be copied into your application directory with the following commands:

cd $SCRATCH
cd my_app_name
cp -R $MOOSE_EXAMPLES .    # copy MOOSE examples to current directory
cd examples
make -j 4                  # compile all examples
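
Each ex* subdirectory is a self-contained example. Listing them shows what is available (directory names come from the MOOSE distribution and may vary by version):

ls -d ex*    # one directory per example, e.g., ex01_inputfile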

Interactive

This method is recommended for quick testing that requires only a small amount of computational resources, as it takes place on the login nodes. Please see our policy regarding login node usage for additional information.

Each of the following examples builds its own "*-opt" application executable (for example, ex01_inputfile builds ex01-opt), and each input file in a given directory is passed to that directory's executable.

Example 1: inputfile

cd ex01_inputfile
./ex01-opt -i ex01.i   # pass ex01.i as an input into the application executable

Example output:

Framework Information:
MOOSE Version:           git commit 114b3de on 2021-10-22
LibMesh Version:         aebb5a5c0e1f6d8cf523a720e19f70a6d17c0236
PETSc Version:           3.15.1
SLEPc Version:           3.15.1
Current Time:            Tue Aug 23 10:42:22 2022
Executable Timestamp:    Tue Aug 23 09:52:25 2022

Parallelism:
  Num Processors:          1
  Num Threads:             1

Mesh:
  Parallel Type:           replicated
  Mesh Dimension:          3
  Spatial Dimension:       3
  Nodes:                   3774
  Elems:                   2476
  Num Subdomains:          1

Nonlinear System:
  Num DOFs:                3774
  Num Local DOFs:          3774
  Variables:               "diffused"
  Finite Element Types:    "LAGRANGE"
  Approximation Orders:    "FIRST"

Execution Information:
  Executioner:             Steady
  Solver Mode:             Preconditioned JFNK
 
 0 Nonlinear |R| = 6.105359e+00
      0 Linear |R| = 6.105359e+00
      1 Linear |R| = 7.953078e-01
      2 Linear |R| = 2.907082e-01
      3 Linear |R| = 1.499648e-01
      4 Linear |R| = 8.817703e-02
      5 Linear |R| = 6.169067e-02
      6 Linear |R| = 4.457036e-02
      7 Linear |R| = 3.512192e-02
      8 Linear |R| = 2.726412e-02
      9 Linear |R| = 1.898046e-02
     10 Linear |R| = 8.790202e-03
     11 Linear |R| = 2.739170e-03
     12 Linear |R| = 5.174430e-04
     13 Linear |R| = 1.531603e-04
     14 Linear |R| = 1.112251e-04
     15 Linear |R| = 7.528159e-05
     16 Linear |R| = 5.091118e-05
 1 Nonlinear |R| = 5.091329e-05
      0 Linear |R| = 5.091329e-05
      1 Linear |R| = 4.108788e-05
      2 Linear |R| = 2.790390e-05
      3 Linear |R| = 1.973113e-05
      4 Linear |R| = 9.917339e-06
      5 Linear |R| = 5.460132e-06
      6 Linear |R| = 2.598431e-06
      7 Linear |R| = 1.160227e-06
      8 Linear |R| = 5.413173e-07
      9 Linear |R| = 2.704343e-07
     10 Linear |R| = 1.411023e-07
     11 Linear |R| = 7.671469e-08
     12 Linear |R| = 6.251824e-08
     13 Linear |R| = 5.206276e-08
     14 Linear |R| = 3.648918e-08
     15 Linear |R| = 1.706070e-08
     16 Linear |R| = 6.136957e-09
     17 Linear |R| = 2.917065e-09
     18 Linear |R| = 1.896775e-09
     19 Linear |R| = 9.173625e-10
     20 Linear |R| = 3.720842e-10
 2 Nonlinear |R| = 3.802911e-10
 Solve Converged!

Additional examples can be found in the copied 'examples' directory.
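
For instance, assuming the other examples follow the same naming pattern (a hypothetical sketch; check each directory for its actual input files):

cd ../ex02_kernel       # hypothetical: the next example directory
./ex02-opt -i ex02.i    # executable and input assumed to follow the exNN pattern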

Batch/Job Submission

This method is recommended for workloads that require a large amount of computational resources; it runs on the compute nodes via the batch job submission system.

Example 1: General Usage

Grace/FASTER

#!/bin/bash
#SBATCH -J moose-sample-1-grace    # set the job name to "moose-sample-1-grace"
#SBATCH -t 1:00:00                 # set the wall clock limit to 1hr
#SBATCH -N 20                      # request 20 nodes
#SBATCH --ntasks-per-node=48       # request 48 tasks per node
#SBATCH --mem=96G                  # request 96G per node
#SBATCH -o %x.%j                   # send stdout/err to "moose-sample-1-grace.[jobID]"

# environment set-up
module purge                       # ensure your working environment is clean
module load MOOSE                  # load the latest MOOSE version installed on the clusters

# run MOOSE example 1
cd $SCRATCH/my_app_name/examples   # navigate to the copied MOOSE examples directory
cd ex01_inputfile
mpirun ./ex01-opt -i ex01.i        # launch across the allocated MPI tasks

Terra

#!/bin/bash
#SBATCH -J moose-sample-1-terra    # set the job name to "moose-sample-1-terra"
#SBATCH -t 1:00:00                 # set the wall clock limit to 1hr
#SBATCH -N 20                      # request 20 nodes
#SBATCH --ntasks-per-node=28       # request 28 tasks per node
#SBATCH --mem=56G                  # request 56G per node
#SBATCH -o %x.%j                   # send stdout/err to "moose-sample-1-terra.[jobID]"

# environment set-up
module purge                       # ensure your working environment is clean
module load MOOSE                  # load the latest MOOSE version installed on the clusters

# run MOOSE example 1
cd $SCRATCH/my_app_name/examples   # navigate to the copied MOOSE examples directory
cd ex01_inputfile
mpirun ./ex01-opt -i ex01.i        # launch across the allocated MPI tasks
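
Assuming the script is saved as, say, moose-ex01.slurm (a hypothetical filename), it can be submitted and monitored with:

sbatch moose-ex01.slurm    # submit the job script to the scheduler
squeue -u $USER            # check the status of your queued and running jobs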

Example 2: Large Memory

Some models may require more memory to complete successfully. The following examples use half the cores per node (--ntasks-per-node) so that each task receives 4G of memory from the per-node allocation (--mem): Grace/FASTER: 96G / 24 = 4G; Terra: 56G / 14 = 4G.
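
Equivalently, Slurm can express this as a per-task memory request rather than a per-node total (a sketch; --mem and --mem-per-cpu are mutually exclusive):

#SBATCH --ntasks-per-node=24    # half the cores on a Grace node
#SBATCH --mem-per-cpu=4G        # request 4G per task instead of 96G per node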

Grace/FASTER

#!/bin/bash
#SBATCH -J moose-sample-2-grace    # set the job name to "moose-sample-2-grace"
#SBATCH -t 1:00:00                 # set the wall clock limit to 1hr
#SBATCH -N 20                      # request 20 nodes
#SBATCH --ntasks-per-node=24       # request 24 tasks per node (half the cores)
#SBATCH --mem=96G                  # request 96G per node
#SBATCH -o %x.%j                   # send stdout/err to "moose-sample-2-grace.[jobID]"

# environment set-up
module purge                       # ensure your working environment is clean
module load MOOSE                  # load the latest MOOSE version installed on the clusters

# run MOOSE example 1
cd $SCRATCH/my_app_name/examples   # navigate to the copied MOOSE examples directory
cd ex01_inputfile
mpirun ./ex01-opt -i ex01.i        # launch across the allocated MPI tasks

Terra

#!/bin/bash
#SBATCH -J moose-sample-2-terra    # set the job name to "moose-sample-2-terra"
#SBATCH -t 1:00:00                 # set the wall clock limit to 1hr
#SBATCH -N 20                      # request 20 nodes
#SBATCH --ntasks-per-node=14       # request 14 tasks per node (half the cores)
#SBATCH --mem=56G                  # request 56G per node
#SBATCH -o %x.%j                   # send stdout/err to "moose-sample-2-terra.[jobID]"

# environment set-up
module purge                       # ensure your working environment is clean
module load MOOSE                  # load the latest MOOSE version installed on the clusters

# run MOOSE example 1
cd $SCRATCH/my_app_name/examples   # navigate to the copied MOOSE examples directory
cd ex01_inputfile
mpirun ./ex01-opt -i ex01.i        # launch across the allocated MPI tasks