STAR-CCM+
Description
Much more than just a CFD solver, STAR-CCM+ is an entire engineering process for solving problems involving flow (of fluids or solids), heat transfer and stress.
Access
STAR-CCM+ is available ONLY for HPRC users at the Texas A&M College Station and Galveston campuses. Our license does NOT permit use of STAR-CCM+ by any HPRC users from the TAMU Qatar branch campus.
If you have particular concerns about whether specific usage falls within the TAMU HPRC license, please send an email to the HPRC Helpdesk. Usage of STAR-CCM+ is restricted by the number of available tokens. To see the number of available tokens, use the License Checker Tool.
Loading the Module
To see all versions of STAR-CCM+ available on the cluster:
[NetID@cluster ~]$
module spider STAR-CCM+
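Once you have identified a version from the module spider output, load it with module load. The version shown here is the one used in the Grace example later on this page; substitute whichever version is listed for your cluster:
[NetID@cluster ~]$
module load STAR-CCM+/18.02.010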
Usage on the Login Nodes
Please limit interactive processing to short, non-intensive usage. Use non-interactive batch jobs for resource-intensive and/or multiple-core processing. Users are requested to be responsible and courteous to other users when using software on the login nodes.
The most important processing limits here are:
- ONE HOUR of PROCESSING TIME per login session.
- EIGHT CORES per login session on the same node or (cumulatively) across all login nodes.
Anyone found violating the processing limits will have their processes killed without warning. Repeated violation of these limits will result in account suspension.
Note: Your login session will disconnect after one hour of inactivity.
Usage on the Compute Nodes
Non-interactive batch jobs on the compute nodes allow for resource-demanding processing. Non-interactive jobs have higher limits on the number of cores, amount of memory, and runtime length.
For instructions on how to create and submit a batch job, please see the appropriate wiki page for each respective cluster:
- ACES: Batch Processing
- FASTER: Batch Processing
- Grace: Batch Processing
Recommendations for Parallel Processing
A good general rule for parallel processing is to use no fewer than 50,000 cells per core. Below that point, the reduction in computational time no longer scales linearly with the number of cores; the most efficient use of compute time, in terms of linear scaling, occurs at 50,000 cells per core or more.
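As a quick sizing check, divide the total cell count by 50,000 to get an upper bound on the recommended core count. The mesh size below is a made-up example chosen to match the 96-core request in the Grace example that follows:
CELLS=4800000                          # hypothetical mesh size
MIN_CELLS_PER_CORE=50000               # recommended minimum from above
echo $(( CELLS / MIN_CELLS_PER_CORE )) # prints 96, the maximum recommended core count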
Grace Example
Updated: July 1, 2024
Note: A wrapper script (starccm_wrapper) is provided to apply the correct settings for running STAR-CCM+ in a batch job on Grace.
#!/bin/bash
##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=starccm # set job name to starccm
#SBATCH --time=1:30:00 # set time to 1.5 hours
#SBATCH --ntasks=96 # assign 96 total cores
#SBATCH --ntasks-per-node=48 # assign 48 cores per node
#SBATCH --mem=28G # request 28 GB of memory per node.
#SBATCH --output=starccm.out.%j # set output file to starccm.out.$SLURM_JOB_ID
module load STAR-CCM+/18.02.010
starccm_wrapper -batch simfile
To submit the batch job, run:
[NetID@grace1 ~]$
sbatch jobscript
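Once the job is submitted, Slurm prints the job ID. To check the job's status, you can use the standard Slurm squeue command (generic Slurm usage, not specific to STAR-CCM+):
[NetID@grace1 ~]$
squeue -u $USER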
Using the Graphical User Interface
To use the graphical user interface (GUI), please submit STAR-CCM+ as an interactive app using our OpenOnDemand portal.
Please see the Interactive Apps page for more information on how to submit interactive apps.