
SW:ANSYS:CFX


CFX

Description

ANSYS CFX is a high-performance computational fluid dynamics (CFD) software tool that delivers reliable and accurate solutions quickly and robustly across a wide range of CFD and multi-physics applications.

Homepage: http://www.ansys.com/Products/Fluids/ANSYS-CFX

Help

  • Student Forum. Register here and discuss simulation with students worldwide
  • Access to advanced material and video - see instructions here

Documentation

The PDF documentation for ANSYS release 18.2 is available for our users and can be found at: https://hprc.tamu.edu/softwareDocs/ansys/

Access

ANSYS is open to all HPRC users when used within the terms of our ANSYS license agreement.

IMPORTANT NOTE REGARDING THE ANSYS LICENSE: (July 12, 2017)

   Use of ANSYS is only permitted for users who are affiliated with Texas A&M at 
   College Station.  Users meeting this criterion are permitted to use ANSYS on HPRC 
   systems from anywhere in the United States (including Alaska and Hawaii).  Use 
   of this software outside the designated area represents a breach of the license, 
   and any users caught doing so may be subject to account suspension and/or other action.

If you have particular concerns about whether specific usage falls within the TAMU HPRC license, please send an email to the HPRC Helpdesk. Usage of ANSYS is restricted by the number of available tokens. To see the number of available tokens, use the License Checker Tool.

Loading the Module

To see all versions of ANSYS available on our systems:

[NetID@cluster ~]$ module spider ANSYS

To load a particular version of ANSYS (for example, 19.3):

[NetID@cluster ~]$ module load ANSYS/19.3
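
To confirm which modules are currently loaded (including ANSYS and its dependencies), you can use the standard module list command:

[NetID@cluster ~]$ module list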

Known Issues

There is a known issue that affects the ANSYS GUI in every version of ANSYS: the Geometry Editor in ANSYS Workbench will not run properly unless an NVIDIA OpenGL module is also loaded.
To work around this bug, load the OpenGL/NVIDIA module before running ANSYS Workbench:

[NetID@cluster ~]$ module load OpenGL/NVIDIA
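
For example, a typical Workbench session from a GUI-capable session might look like the following; runwb2 is the standard ANSYS Workbench launcher, and the module version shown is only illustrative:

[NetID@cluster ~]$ module load ANSYS/19.3
[NetID@cluster ~]$ module load OpenGL/NVIDIA
[NetID@cluster ~]$ runwb2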

There is a known issue with all ANSYS versions on Terra that causes jobs to fail unless a Slurm environment variable is unset. To prevent this error, make sure your job script contains the following line before the launch command.

unset SLURM_GTIDS

The same error occurs when using the ANSYS GUI in a VNC job on Terra. To prevent it, run the following command in your VNC job:

[NetID@terra ~]$ unset SLURM_GTIDS
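
Putting these pieces together, a CFX GUI session inside a Terra VNC job might look like the following sketch (cfx5 starts the CFX Launcher; the ANSYS version shown is only illustrative):

[NetID@terra ~]$ unset SLURM_GTIDS
[NetID@terra ~]$ module load ANSYS/18.2
[NetID@terra ~]$ module load OpenGL/NVIDIA
[NetID@terra ~]$ cfx5 &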

Usage on the Login Nodes

Please limit interactive processing to short, non-intensive usage. Use non-interactive batch jobs for resource-intensive and/or multiple-core processing. Users are requested to be responsible and courteous to other users when using software on the login nodes.

The most important processing limits here are:

  • ONE HOUR of PROCESSING TIME per login session.
  • EIGHT CORES per login session on the same node or (cumulatively) across all login nodes.

Anyone found violating the processing limits will have their processes killed without warning. Repeated violation of these limits will result in account suspension.
Note: Your login session will disconnect after one hour of inactivity.

Usage on the Compute Nodes

Non-interactive batch jobs on the compute nodes allow for resource-demanding processing. Non-interactive jobs have higher limits on the number of cores, amount of memory, and runtime length.

For instructions on how to create and submit a batch job, please see the batch processing wiki page for the respective cluster (Ada or Terra).

Ada Examples

Example 1: A serial (single-core) CFX job:

#BSUB -J CFXJob1	     # sets the job name to CFXJob1.
#BSUB -L /bin/bash           # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 5:00                # sets to 5 hours the job's runtime wall-clock limit.
#BSUB -n 1                   # assigns 1 core for execution.
#BSUB -R "span[ptile=1]"     # assigns 1 core per node.
#BSUB -R "rusage[mem=5000]"  # reserves ~5GB per process/CPU for the job 
#BSUB -M 5000		     # sets to ~5GB the per process enforceable memory limit.
#BSUB -o stdout1.%J          # directs the job's standard output to stdout1.jobid 


## Load the necessary modules
module purge
module load ANSYS/18.2

## Launch the CFX Solver with proper parameters
cfx5solve -batch -def FileName.def 
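
The solver writes its text output to a run-numbered file derived from the definition file name (for example FileName_001.out; depending on the CFX release, the in-progress output may live under a temporary FileName_001.dir directory and only be copied to the .out file when the run finishes). A quick way to inspect the output from a login node:

[NetID@ada1 ~]$ tail -n 50 FileName_001.out     # file name shown is illustrative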

Example 2: A parallel (multiple cores, single node) CFX job:

#BSUB -J CFXJob1	     # sets the job name to CFXJob1.
#BSUB -L /bin/bash           # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 5:00                # sets to 5 hours the job's runtime wall-clock limit.
#BSUB -n 10                  # assigns 10 cores for execution.
#BSUB -R "span[ptile=10]"    # assigns 10 cores per node.
#BSUB -R "rusage[mem=5000]"  # reserves ~5GB per process/CPU for the job (5GB * 10 Cores = 50GB per node) 
#BSUB -M 5000		     # sets to ~5GB the per process enforceable memory limit.
#BSUB -o stdout1.%J          # directs the job's standard output to stdout1.jobid


## Load the necessary modules
module purge
module load ANSYS/18.2

## Launch the CFX Solver with proper parameters
cfx5solve -batch -def FileName.def -start-method "Intel MPI Local Parallel" -part 10 
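
To keep the partition count in step with the #BSUB -n request, one option is to use LSF's LSB_DJOB_NUMPROC environment variable (the number of cores allocated to the job) instead of a hard-coded value; a minimal sketch of the launch line:

## Use the core count provided by LSF rather than hard-coding it
cfx5solve -batch -def FileName.def -start-method "Intel MPI Local Parallel" -part $LSB_DJOB_NUMPROC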

Example 3: A parallel (multiple cores, multiple nodes) CFX job:

#BSUB -J CFXJob1	     # sets the job name to CFXJob1.
#BSUB -L /bin/bash           # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 5:00                # sets to 5 hours the job's runtime wall-clock limit.
#BSUB -n 40                  # assigns 40 cores for execution.
#BSUB -R "span[ptile=10]"    # assigns 10 cores per node.
#BSUB -R "rusage[mem=5000]"  # reserves ~5GB per process/CPU for the job (5GB * 10 Cores = 50GB per node) 
#BSUB -M 5000		     # sets to ~5GB the per process enforceable memory limit.
#BSUB -o stdout1.%J          # directs the job's standard output to stdout1.jobid


## Load the necessary modules
module purge
module load ANSYS/18.2
 
## sourcing this script sets the $CFX_DIST_LIST variable to the compute nodes and the number of cores to use on each node
source /sw/local/bin/cfx_dist_list.sh

## Launch the CFX Solver with proper parameters (this example uses the solver in partitioning mode via -part option)
cfx5solve -batch -def FileName.def -start-method "Intel MPI Distributed Parallel" -part 40 -par -par-dist $CFX_DIST_LIST
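
If a multi-node run fails to start, it can help to verify what cfx_dist_list.sh produced; an optional debugging line you could add right after the source command (its output appears in the job's stdout file):

## (optional) print the generated host/core list for debugging
echo "CFX_DIST_LIST = $CFX_DIST_LIST"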

To submit the batch job, run the following command (where jobscript is a file containing one of the examples above):

[NetID@ada1 ~]$ bsub < jobscript
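
After submission, standard LSF commands can be used to check on the job (jobid below is the ID reported by bsub):

[NetID@ada1 ~]$ bjobs              # list your pending and running jobs
[NetID@ada1 ~]$ bpeek jobid        # peek at the stdout of a running job
[NetID@ada1 ~]$ bkill jobid        # cancel a job if needed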

Terra Examples

Example 1: A serial (single-core) CFX job (last updated February 12, 2018):

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE
#SBATCH --get-user-env=L

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=CFXJob       # Sets the job name to CFXJob
#SBATCH --time=5:00:00          # Sets the runtime limit to 5 hr
#SBATCH --ntasks=1              # Requests 1 core
#SBATCH --ntasks-per-node=1     # Requests 1 core per node (1 node)
#SBATCH --mem=5G                # Requests 5GB of memory per node
#SBATCH --output=stdout1.o%J    # Sends stdout and stderr to stdout1.o[jobID]

## Load the necessary modules
module purge
module load ANSYS/18.0

## Known error fix
unset SLURM_GTIDS 

## Launch the CFX Solver with proper parameters
cfx5solve -batch -def FileName.def 

Example 2: A parallel (multiple cores, single node) CFX job (last updated February 12, 2018):

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE
#SBATCH --get-user-env=L

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=CFXJob       # Sets the job name to CFXJob
#SBATCH --time=5:00:00          # Sets the runtime limit to 5 hr
#SBATCH --ntasks=10             # Requests 10 cores
#SBATCH --ntasks-per-node=10    # Requests 10 cores per node (1 node)
#SBATCH --mem=50G               # Requests 50GB of memory per node
#SBATCH --output=stdout1.o%J    # Sends stdout and stderr to stdout1.o[jobID]

## Load the necessary modules
module purge
module load ANSYS/18.0
  
## Known error fix
unset SLURM_GTIDS 
  
## Launch the CFX Solver with proper parameters
cfx5solve -batch -def FileName.def -start-method "Intel MPI Local Parallel" -part 10 
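
As with the Ada example, the partition count can be taken from the scheduler instead of being hard-coded; Slurm exports the requested task count as SLURM_NTASKS, so a sketch of the launch line is:

## Use the task count provided by Slurm rather than hard-coding it
cfx5solve -batch -def FileName.def -start-method "Intel MPI Local Parallel" -part $SLURM_NTASKS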

Example 3: A parallel (multiple cores, multiple nodes) CFX job (last updated February 12, 2018):

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE
#SBATCH --get-user-env=L

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=CFXJob       # Sets the job name to CFXJob
#SBATCH --time=5:00:00          # Sets the runtime limit to 5 hr
#SBATCH --ntasks=20             # Requests 20 cores
#SBATCH --ntasks-per-node=10    # Requests 10 cores per node (2 nodes)
#SBATCH --mem=50G               # Requests 50GB of memory per node
#SBATCH --output=stdout1.o%J    # Sends stdout and stderr to stdout1.o[jobID]

## Load the necessary modules
module purge
module load ANSYS/18.2
 
## Known error fix
unset SLURM_GTIDS 

## sourcing this script sets the $CFX_DIST_LIST variable to the compute nodes and the number of cores to use on each node 
source /sw/local/bin/cfx_dist_list.sh
 
## Launch the CFX Solver with proper parameters
cfx5solve -batch -def FileName.def -start-method "Intel MPI Distributed Parallel" -par-dist $CFX_DIST_LIST

To submit the batch job, run the following command (where jobscript is a file containing one of the examples above):

[NetID@terra ~]$ sbatch jobscript
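
After submission, standard Slurm commands can be used to check on the job (jobid below is the ID reported by sbatch):

[NetID@terra ~]$ squeue -u $USER      # list your pending and running jobs
[NetID@terra ~]$ scancel jobid        # cancel a job if needed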

Usage on the VNC Nodes

The VNC nodes allow for use of a graphical user interface (GUI) without disrupting other users.

VNC jobs and GUI usage do come with restrictions. All VNC jobs are limited to a single node (20 cores, 64 GB or 256 GB of memory). There are fewer VNC nodes than comparable compute nodes.

For more information, including instructions, on using software on the VNC nodes, please visit our Ada Remote Visualization page.
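
Inside an Ada VNC session, a CFX GUI workflow usually amounts to loading the ANSYS and NVIDIA OpenGL modules and then starting the CFX Launcher (cfx5) or Workbench (runwb2); the version shown below is only illustrative:

[NetID@ada1 ~]$ module load ANSYS/18.2
[NetID@ada1 ~]$ module load OpenGL/NVIDIA
[NetID@ada1 ~]$ cfx5 &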