TurboGrid

ANSYS TurboGrid automates the production of high-quality hexahedral meshes needed for blade passages in rotating machinery. As a result, TurboGrid minimizes mesh dependencies when assessing differences in performance predictions between designs. Homepage: http://www.ANSYS.com/Products/Fluids/ANSYS-TurboGrid

Access

ANSYS TurboGrid is open to all HPRC users when used within the terms of our ANSYS license agreement.

IMPORTANT NOTE REGARDING THE ANSYS LICENSE:

   Terms of the ANSYS Academic license require that all users be within 50 miles 
       of the TAMU main campus in College Station, TX. Use of this software outside
       the designated area represents a breach of the license and any users caught
       doing so may be subject to account suspension and/or other action.

If you have particular concerns about whether specific usage falls within the TAMU HPRC license, please send an email to the HPRC Helpdesk. Usage of ANSYS TurboGrid is restricted by the number of available tokens. To see the number of available tokens, use the License Checker Tool.

Loading the Module

To see all versions of ANSYS available on Ada:

[NetID@cluster ~]$ module spider ANSYS

To load the default ANSYS module on Ada:

[NetID@cluster ~]$ module load ANSYS

To load a particular version of ANSYS on Ada (Example: 17.1):

[NetID@cluster ~]$ module load ANSYS/17.1
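
To verify which ANSYS module (if any) is currently loaded, the standard Lmod listing command can be used; this is a general convenience, not an HPRC-specific tool:

[NetID@cluster ~]$ module list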

Usage on the Login Nodes

Please limit interactive processing to short, non-intensive usage. Use non-interactive batch jobs for resource-intensive and/or multiple-core processing. Users are requested to be responsible and courteous to other users when using software on the login nodes.

The most important processing limits here are:

  • ONE HOUR of PROCESSING TIME per login session.
  • EIGHT CORES per login session on the same node or (cumulatively) across all login nodes.

Anyone found violating the processing limits will have their processes killed without warning. Repeated violation of these limits will result in account suspension.
Note: Your login session will disconnect after one hour of inactivity.
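
One simple way to keep track of your own footprint on a login node is to list your running processes. The command below is only one possible invocation of the standard Linux ps utility, shown here as an illustration:

[NetID@cluster ~]$ ps -u $USER -o pid,etime,pcpu,comm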

Usage on the Compute Nodes

Non-interactive batch jobs on the compute nodes allow for resource-demanding processing. Non-interactive jobs have higher limits on the number of cores, amount of memory, and runtime length.

For instructions on how to create and submit a batch job, please see the batch processing wiki page for the respective cluster.

Ada Example

#BSUB -J TurboGridJob1	  # sets the job name to TurboGridJob1.
#BSUB -L /bin/bash        # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 5:00             # sets to 5 hours the job's runtime wall-clock limit.
#BSUB -n 1                # assigns 1 core for execution.
#BSUB -R "span[ptile=1]"  # assigns 1 core per node.
#BSUB -R "rusage[mem=5000]"  # reserves 5000MB per process/CPU for the job (5GB * 2 Cores = 10GB per node) 
#BSUB -M 5000		  # sets to 5,000MB (~5GB) the per process enforceable memory limit.
#BSUB -o stdout1.%J       # directs the job's standard output to stdout1.jobid


## Load the necessary modules
module load ANSYS/17.1

## Launch TurboGrid in batch mode with a session (.tse) file
cfxtg -batch FileName.tse

To submit the batch job, run:

[NetID@ada1 ~]$ bsub < jobscript
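
Once submitted, the job can be monitored with standard LSF commands; for example, bjobs lists your pending and running jobs:

[NetID@ada1 ~]$ bjobs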

Terra Example

   COMING SOON

Usage on the VNC Nodes

The VNC nodes allow for use of a graphical user interface (GUI) without disrupting other users.

VNC jobs and GUI usage do come with restrictions. All VNC jobs are limited to a single node (Terra: 28 cores/64GB). There are fewer VNC nodes than comparable compute nodes.

For more information, including instructions, on using software on the VNC nodes, please visit our Terra Remote Visualization page.
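
Within a VNC session, the TurboGrid GUI can typically be started from a terminal once the module is loaded. The commands below are a sketch, assuming the cfxtg launcher used in the batch example above also starts the interactive interface when run without the -batch option:

[NetID@cluster ~]$ module load ANSYS/17.1
[NetID@cluster ~]$ cfxtg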