From TAMU High Performance Research Computing
Getting Started: Understanding HPC
New to High Performance Computing (HPC)? The HPC Introduction Page explains the "why" and "how" of high performance computing. Also see the Policies Page to better understand the rules and etiquette of cluster usage.
Getting Started: Your First Batch Job
Getting an Account
All computer systems managed by the HPRC are available to TAMU faculty, staff, and students who require large-scale computing capabilities. The HPRC hosts the Ada, Curie, and Terra clusters at TAMU. To apply for or renew an HPRC account, please visit the Account Applications page. For information on obtaining an allocation to run jobs on one of our clusters, please visit the Allocations Policy page. All accounts expire in September of each year and must be renewed.
Quick Start Guides
The Quick Start Guides for the Ada and Terra clusters are available to help new users learn to submit their first jobs in a single session.
Customizing your Batch Jobs
Many experienced users need to customize their batch job scripts to better suit their workflows.
The Ada Batch Pages provide information on LSF job scripts.
The Terra Batch Pages provide information on SLURM job scripts.
Finally, the Batch Translation Page provides a reference for translating between PBS (Eos), LSF (Ada), and SLURM (Terra).
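To illustrate the kind of mapping the Batch Translation Page covers, the sketch below shows roughly equivalent resource requests under each scheduler. The queue names, job names, and resource sizes here are illustrative assumptions, not site defaults; consult the translation page for the authoritative mapping.

```shell
#!/bin/bash
# Roughly equivalent single-node, 4-core, 1-hour requests in each scheduler.
# A real script would contain only ONE of these directive sets.

# PBS (Eos) style:
#PBS -l nodes=1:ppn=4,walltime=01:00:00
#PBS -N my_job

# LSF (Ada) style:
#BSUB -n 4
#BSUB -R "span[ptile=4]"
#BSUB -W 01:00
#BSUB -J my_job

# Slurm (Terra) style:
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00
#SBATCH --job-name=my_job

echo "Job started"
# ./my_program   # replace with your actual application
```

Because scheduler directives are shell comments, the same script body runs unchanged; only the directive prefix (`#PBS`, `#BSUB`, `#SBATCH`) and option syntax differ between systems.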
Problems and Solutions
While we cannot predict all bugs and errors, some issues are extremely common on our clusters.
See the Common Problems and Quick Solutions Page for a small collection of the most prevalent issues.
Newest Resource: Terra
Our newest cluster, Terra, supplements Ada in serving the general computing needs of our users.
Terra features 304 compute nodes, each with 28 cores. Compute nodes come in 64GB and 128GB memory variants, and some nodes also include a dual-GPU NVIDIA K80 card.
The Slurm scheduler manages the Terra batch system. The Batch Translation Guide is available for converting between Slurm, PBS, and LSF job-submission scripts.
More information on Terra, the hardware configuration, and the Slurm batch system can be found within the Terra User Guide.
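A minimal Slurm job script for a cluster like Terra might look like the following sketch. The memory request, GPU request syntax, and output-file pattern are assumptions for illustration; check the Terra User Guide for the actual limits and options.

```shell
#!/bin/bash
#SBATCH --job-name=demo-job      # name shown in the queue listing
#SBATCH --nodes=1                # one 28-core Terra node
#SBATCH --ntasks=28              # one task per core
#SBATCH --mem=56G                # assumed request fitting the 64GB node variant
#SBATCH --time=02:00:00          # two-hour wall-clock limit
#SBATCH --gres=gpu:2             # assumed syntax to request both GPUs of a K80 node
#SBATCH --output=demo.%j.out     # %j expands to the job ID

echo "Job started"
# srun ./my_application          # replace with your actual application
```

Such a script would typically be submitted with `sbatch` and monitored with `squeue -u $USER`; `#SBATCH` lines are comments to the shell, so the script body itself is ordinary bash.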
Upcoming Event: Scaling to Petascale Institute
Dates: June 26-30, 2017
Description: Call for Participation
Do you have a computational problem that would benefit from using a large scale computing system?
Do you need to scale your simulation or data analysis to a petascale system?
This institute is for people developing, modifying, and supporting research projects who seek to enhance their knowledge and skills in scaling software to petascale and emerging extreme-scale computing systems. Participants should be familiar with Linux; with programming in Fortran, C, C++, Python, or a similar language; and with MPI (the Message Passing Interface). Many of the sessions will include hands-on activities.
This event is open to anyone interested.
For more information, consult the Petascale Institute page or the institute web site: https://bluewaters.ncsa.illinois.edu/petascale-summer-institute