Welcome to the TAMU HPRC Wiki

Getting Started: Understanding HPC

New to High Performance Computing (HPC)? This HPC Introduction Page explains the "why" and "how" of high performance computing. Also see the Policies Page to better understand the rules and etiquette of cluster usage.

Getting Started: Your First Batch Job

Getting an Account
All computer systems managed by the HPRC are available for use by TAMU faculty, staff, and students who require large-scale computing capabilities. The HPRC hosts the Ada, Curie, and Terra clusters at TAMU. To apply for or renew an HPRC account, please visit the Account Applications page. For information on how to obtain an allocation to run jobs on one of our clusters, please visit the Allocations Policy page. All accounts expire and must be renewed in September of each year.

Quick Start Guides
The Quick Start Guides for the Ada and Terra clusters help new users learn to submit their first jobs in a single session.

Customizing your Batch Jobs
Many experienced users need to customize their batch job scripts to better suit their workflows.

Ada Batch Pages provide information on LSF job scripts.
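
For illustration, a minimal LSF job script on Ada might look like the following sketch; the job name, walltime, and resource values here are placeholders rather than recommendations:

#!/bin/bash
#BSUB -J my_job              # job name (placeholder)
#BSUB -L /bin/bash           # use bash as the login shell for the job
#BSUB -W 1:00                # walltime limit of 1 hour
#BSUB -n 1                   # request a single core
#BSUB -M 2560                # memory limit per process, in MB
#BSUB -o my_job.%J.out       # standard output file; %J expands to the job ID

# Commands to run go below the directives.
echo "Running on $HOSTNAME"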

Terra Batch Pages provide information on SLURM job scripts.
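
Likewise, a minimal SLURM job script on Terra might look like this sketch, again with placeholder values:

#!/bin/bash
#SBATCH --job-name=my_job        # job name (placeholder)
#SBATCH --time=01:00:00          # walltime limit of 1 hour
#SBATCH --ntasks=1               # request a single task
#SBATCH --mem=2560M              # memory per node
#SBATCH --output=my_job.%j.out   # standard output file; %j expands to the job ID

# Commands to run go below the directives.
echo "Running on $HOSTNAME"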

Finally, the tamubatch Page provides information on how to use tamubatch to create and submit jobs easily.
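
As a sketch, and assuming the simplest invocation described on that page, tamubatch takes a file of commands and submits it as a job with default directives for the cluster you are on; consult the tamubatch Page for the actual options:

# Hypothetical usage: submit the commands in MyJob.sh as a batch job,
# letting tamubatch supply default job directives for this cluster.
tamubatch MyJob.sh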

Problems and Solutions
While we cannot predict all bugs and errors, some issues are extremely common on our clusters.

See the Common Problems and Quick Solutions Page for a small collection of the most prevalent issues.

Recent Change: Two-Factor Authentication for SSH

Beginning on November 4, 2019, Duo Two-Factor Authentication is required to access the HPRC clusters.

The Two Factor Authentication Page provides guidance on using two-factor authentication for various SSH clients.
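
For example, with a command-line SSH client a login might look like the sketch below; the hostname and NetID are illustrative, and the Duo prompt appears after your password is accepted:

# Connect to a cluster login node (hostname shown is an example).
ssh NetID@terra.tamu.edu
# After your password is accepted, Duo asks you to choose a second
# factor, such as a push notification or a passcode.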

New GPU nodes in the Ada cluster

Four new GPU nodes are now available in the Ada cluster. Each GPU node has two Intel Skylake Xeon Gold 5118 20-core processors, 192 GB of memory, and two NVIDIA V100 GPUs with 32 GB of memory each.

To use these new GPU nodes, please submit jobs to the v100 queue on Ada by including the following job directive in your job scripts:

#BSUB -q v100

You do NOT need to include any -R "select[gpu]" directives to submit jobs to the v100 queue.
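
Put together, the start of a job script targeting these nodes might look like this sketch; everything except the -q v100 directive is a placeholder:

#!/bin/bash
#BSUB -J gpu_job             # job name (placeholder)
#BSUB -q v100                # submit to the v100 GPU queue
#BSUB -W 2:00                # walltime limit of 2 hours
#BSUB -n 20                  # request 20 cores (placeholder)
#BSUB -o gpu_job.%J.out      # standard output file

# GPU-enabled commands go here; nvidia-smi lists the visible GPUs.
nvidia-smi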

Proudly Serving Members of the Texas A&M University System

(Member logos: Texas A&M University, Texas A&M at Qatar, Health Science Center, Texas A&M Transportation Institute, Texas A&M AgriLife, TEES, TEEX, TVMDL)