
THIS PAGE IS UNDER CONSTRUCTION

STAR-CCM+

Description

Much more than just a CFD solver, STAR-CCM+ is an entire engineering process for solving problems involving flow (of fluids or solids), heat transfer, and stress.

  • Homepage: http://www.cd-adapco.com/products/star-ccm

Access

STAR-CCM+ is open to all HPRC users when used within the terms of our license agreement.

If you have particular concerns about whether specific usage falls within the TAMU HPRC license, please send an email to the HPRC Helpdesk. Usage of STAR-CCM+ is restricted by the number of available license tokens; to check current availability, use the License Checker Tool.
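
STAR-CCM+ licenses are typically served by a FlexNet (FlexLM) license manager, so token availability can in principle also be inspected with the standard lmutil utility. This is a hedged sketch: the port@host value shown is a placeholder, not the actual HPRC license server, and lmutil may not be on your default PATH.

[NetID@cluster ~]$ lmutil lmstat -a -c 1999@licenseserver    # 1999@licenseserver is a placeholder for the real license server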

Loading the Module

To see all versions of STAR-CCM+ available on Ada:

[NetID@cluster ~]$ module spider STAR-CCM+

To load the default STAR-CCM+ module on Ada:

[NetID@cluster ~]$ module load STAR-CCM+ 

To load a particular version of STAR-CCM+ on Ada (Example: 10.06.009):

[NetID@cluster ~]$ module load STAR-CCM+/10.06.009
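
To confirm which version was actually loaded (for example, after relying on the default), the standard module commands can be used; this is ordinary module-system behavior, not STAR-CCM+-specific:

[NetID@cluster ~]$ module list          # lists currently loaded modules, including the STAR-CCM+ version
[NetID@cluster ~]$ which starccm+       # shows the path of the starccm+ executable now on your PATH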

Usage on the Login Nodes

Please limit interactive processing to short, non-intensive usage. Use non-interactive batch jobs for resource-intensive and/or multi-core processing. Users are requested to be responsible and courteous to other users when using software on the login nodes.

The most important processing limits here are:

  • ONE HOUR of PROCESSING TIME per login session.
  • EIGHT CORES per login session on the same node or (cumulatively) across all login nodes.

Anyone found violating the processing limits will have their processes killed without warning. Repeated violation of these limits will result in account suspension.
Note: Your login session will disconnect after one hour of inactivity.
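
As an illustration of a run that stays within these limits, a short batch-mode execution on a login node might look like the following; simfile.sim is a placeholder for your own simulation file, and the -np/-batch flags mirror the job-script usage shown below:

[NetID@cluster ~]$ module load STAR-CCM+
[NetID@cluster ~]$ starccm+ -np 4 -batch simfile.sim    # 4 cores, under the 8-core login-node limit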

Usage on the Compute Nodes

Non-interactive batch jobs on the compute nodes allow for resource-demanding processing. Non-interactive jobs have higher limits on the number of cores, the amount of memory, and the runtime.

For instructions on how to create and submit a batch job, please see the appropriate wiki page for your respective cluster; example job scripts are provided below.

Ada Example Job Script

Updated: June 26, 2016

#BSUB -J StarJob             # sets the job name to StarJob.
#BSUB -L /bin/bash           # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 2:00                # sets to 2 hours the job's runtime wall-clock limit.
#BSUB -n 1                   # assigns 1 core for execution.
#BSUB -R "span[ptile=1]"     # assigns 1 core per node.
#BSUB -R "rusage[mem=5000]"  # reserves 5000MB per process/CPU for the job (5GB * 1 Core = 5GB per node) 
#BSUB -M 5000                # sets to 5,000MB (~5GB) the per process enforceable memory limit.
#BSUB -o StarCCM.%J          # directs the job's standard output to StarCCM.jobid


# Load the modules
module load STAR-CCM+ 

# Launch STAR-CCM+ with proper parameters 
starccm+ -np 1 -batchsystem lsf -rsh blaunch -batch simfile

To submit the batch job, run:

[NetID@ada1 ~]$ bsub < jobscript
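
Once submitted, the job can be monitored with standard LSF commands (replace jobid with the ID that bsub reports):

[NetID@ada1 ~]$ bjobs                # list your pending and running jobs
[NetID@ada1 ~]$ bpeek jobid          # view the standard output of a running job
[NetID@ada1 ~]$ bkill jobid          # kill the job if something goes wrong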

Terra Example
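
Terra uses the SLURM batch scheduler. The following is a minimal sketch of a comparable job script, assuming the same STAR-CCM+ module name on Terra; all directive values are illustrative, and launcher flags beyond -np/-batch are assumptions, so consult the Terra batch documentation for the exact launch procedure.

#!/bin/bash
#SBATCH --job-name=StarJob         # sets the job name to StarJob
#SBATCH --time=2:00:00             # sets the job's runtime wall-clock limit to 2 hours
#SBATCH --ntasks=1                 # assigns 1 core for execution
#SBATCH --ntasks-per-node=1        # assigns 1 core per node
#SBATCH --mem=5000M                # reserves 5,000MB of memory per node
#SBATCH --output=StarCCM.%j        # directs the job's standard output to StarCCM.jobid

# Load the modules
module load STAR-CCM+

# Launch STAR-CCM+ in batch mode
starccm+ -np 1 -batch simfile

To submit the batch job, run:

[NetID@terra1 ~]$ sbatch jobscript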

Usage on the VNC Nodes

The VNC nodes allow for usage of a graphical user interface (GUI) without disrupting other users.

VNC jobs and GUI usage do come with restrictions. All VNC jobs are limited to a single node (Terra: 28 cores/64GB). There are fewer VNC nodes than comparable compute nodes.

For more information, including instructions, on using software on the VNC nodes, please visit our Terra Remote Visualization page.
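
For example, once inside a VNC session, the GUI can be started by loading the module and invoking starccm+ without the -batch flag; the node name and simulation file below are placeholders:

[NetID@vnc-node ~]$ module load STAR-CCM+
[NetID@vnc-node ~]$ starccm+ simfile.sim    # launches the STAR-CCM+ GUI and opens simfile.sim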