
Abaqus

Finite Element Analysis software for modeling, visualization, and best-in-class implicit and explicit dynamics FEA. - Homepage: http://www.3ds.com/products-services/simulia/products/abaqus/

Access

Abaqus is open to all HPRC users when used within the terms of our license agreement. If you have particular concerns about whether specific usage falls within the TAMU HPRC license, please send an email to the HPRC Helpdesk. Usage of Abaqus is restricted by the number of available tokens. To see the number of available tokens, use the License Checker Tool.
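Token usage can also be queried directly from the FlexNet license server that serves the Abaqus tokens. A minimal sketch, assuming the FlexNet lmutil utility is available on your path and that port@licserver is replaced with the actual HPRC license server address:

 lmutil lmstat -a -c port@licserver | grep -i abaqus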

Loading the Module

To see all versions of Abaqus available on Ada:

 module spider ABAQUS

To load the default Abaqus module on Ada:

 module load ABAQUS

To load a particular version of Abaqus on Ada (Example: 6.13.5):

 module load ABAQUS/6.13.5-linux-x86_64
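To confirm which modules are currently loaded:

 module list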

Usage on the Login Nodes

Please limit interactive processing to short, non-intensive usage. Use non-interactive batch jobs for resource-intensive and/or multiple-core processing. Users are requested to be responsible and courteous to other users when using software on the login nodes.

The most important processing limits here are:

  • ONE HOUR of PROCESSING TIME per login session.
  • EIGHT CORES per login session on the same node or (cumulatively) across all login nodes.

Anyone found violating the processing limits will have their processes killed without warning. Repeated violation of these limits will result in account suspension.
Note: Your login session will disconnect after one hour of inactivity.
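For example, a quick data check of an input file fits comfortably within these limits. A minimal sketch, assuming a hypothetical input file Model.inp; the datacheck option only verifies the model definition without running the full analysis:

 module load ABAQUS
 abaqus job=ModelCheck input=Model.inp datacheck interactive cpus=1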

Usage on the Compute Nodes

Non-interactive batch jobs on the compute nodes allow for resource-demanding processing. Non-interactive jobs have higher limits on the number of cores, amount of memory, and runtime length.

For instructions on how to create and submit a batch job, please see the appropriate wiki page for the respective cluster (Ada or Terra).

Ada Example

A single core example, no user subroutine:

#BSUB -J AbaqusJob        # sets the job name to AbaqusJob.
#BSUB -L /bin/bash        # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 2:00             # sets to 2 hours the job's runtime wall-clock limit.
#BSUB -n 1                # assigns 1 core for execution.
#BSUB -R "span[ptile=1]"  # assigns 1 core per node.
#BSUB -R "rusage[mem=5000]"  # reserves 5000MB per process/CPU for the job (5GB * 1 Core = 5GB per node) 
#BSUB -M 5000             # sets to 5,000MB (~5GB) the per process enforceable memory limit.
#BSUB -o Abaqus.%J        # directs the job's standard output to Abaqus.jobid


## Load the module
module load ABAQUS 

## Launch Abaqus with proper parameters 
abaqus memory="5GB" cpus=1 job=JobName input=InputFile.inp 
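Once saved to a file (for example, a hypothetical abaqus_single.job), the script above can be submitted and monitored with the standard LSF commands on Ada:

 bsub < abaqus_single.job
 bjobs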

A multicore (10 core) example, with a user subroutine:

#BSUB -J AbaqusJob        # sets the job name to AbaqusJob.
#BSUB -L /bin/bash        # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 2:00             # sets to 2 hours the job's runtime wall-clock limit.
#BSUB -n 10               # assigns 10 cores for execution.
#BSUB -R "span[ptile=10]" # assigns 10 cores per node.
#BSUB -R "rusage[mem=5000]"  # reserves 5000MB per process/CPU for the job (5GB * 10 Core = 50GB per node) 
#BSUB -M 5000             # sets to 5,000MB (~5GB) the per process enforceable memory limit.
#BSUB -o Abaqus.%J        # directs the job's standard output to Abaqus.jobid


## Load the modules
module load intel
module load ABAQUS 

## Launch Abaqus with proper parameters 
abaqus memory="50GB" cpus=10 job=JobName input=InputFile.inp mp_mode=mpi user=FileName.for
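While a job is running, its standard output can be inspected before the Abaqus.%J file is finalized, and the job can be cancelled if necessary; the job ID below is illustrative:

 bpeek 123456    # show the running job's current standard output
 bkill 123456    # cancel the job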

Terra Example
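A minimal single-core sketch for Terra, assuming its Slurm scheduler and the same ABAQUS module name; the directives and file names below are illustrative, not the official HPRC example:

#!/bin/bash
#SBATCH --job-name=AbaqusJob      # sets the job name to AbaqusJob
#SBATCH --time=2:00:00            # sets to 2 hours the job's runtime wall-clock limit
#SBATCH --ntasks=1                # requests 1 core for execution
#SBATCH --mem=5G                  # requests 5GB of memory for the job
#SBATCH --output=Abaqus.%j        # directs the job's standard output to Abaqus.jobid

## Load the module
module load ABAQUS

## Launch Abaqus with proper parameters
abaqus memory="5GB" cpus=1 job=JobName input=InputFile.inp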

Usage on the VNC Nodes

The VNC nodes allow for usage of a graphical user interface (GUI) without disrupting other users.

VNC jobs and GUI usage do come with restrictions. All VNC jobs are limited to a single node (Terra: 28 cores/64GB). There are fewer VNC nodes than comparable compute nodes.

For more information, including instructions, on using software on the VNC nodes, please visit our Terra Remote Visualization page.
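Within a VNC session, the Abaqus/CAE GUI can be started from a terminal once the module is loaded. A minimal sketch; the -mesa option forces software OpenGL rendering and may not be required on every VNC node:

 module load ABAQUS
 abaqus cae -mesa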