Main Page

From TAMU HPRC
Revision as of 14:41, 17 December 2020

Welcome to the TAMU HPRC Wiki


Announcements

  • New GPU nodes in the Ada cluster: Four new GPU nodes are now available in the Ada cluster. Each GPU node has two Intel Skylake Xeon Gold 5118 12-core processors, 192 GB of memory, and two NVIDIA 32 GB V100 GPUs. To use these new GPU nodes, please submit jobs to the v100 queue on Ada by including the following job directive in your job scripts:
#BSUB -q v100
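In context, the directive above goes at the top of an LSF job script. The sketch below shows one minimal way such a script could look; the job name, task count, wall-clock limit, and output filename are illustrative assumptions, not site defaults.

```shell
#!/bin/bash
# Minimal LSF job script sketch targeting Ada's v100 GPU queue.
# All resource values below are illustrative assumptions.
#BSUB -J gpu_test            # job name (assumed)
#BSUB -q v100                # submit to the V100 GPU queue
#BSUB -n 1                   # one task (assumed)
#BSUB -W 0:10                # 10-minute wall-clock limit (assumed)
#BSUB -o gpu_test.%J.out     # stdout file; %J expands to the job ID

echo "running on host: $(hostname)"
```

A script like this would typically be submitted with `bsub < jobfile.sh`; LSF reads the `#BSUB` lines as directives even though the shell treats them as comments.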

Getting an Account

  • Understanding HPRC: For a brief overview of what services HPRC offers, see this video in our getting started series on YouTube.
  • New to HPRC's resources? This page explains the HPRC resources available to the TAMU community. Also see the Policies Page to better understand the rules and etiquette of cluster usage.
  • Accessing the clusters: All computer systems managed by the HPRC are available to TAMU faculty, staff, and students who require large-scale computing capabilities. The HPRC hosts the Ada, Terra, and Grace clusters at TAMU. To apply for or renew an HPRC account, please visit the Account Applications page. For information on how to obtain an allocation to run jobs on one of our clusters, please visit the Allocations Policy page. All accounts expire and must be renewed in September of each year.

Using the Clusters

  • QuickStart Guides: For just the "need-to-know" information on getting started with our clusters, visit our QuickStart pages. Topics discussed include cluster access, file management, the batch system, setting up a software environment using modules, creating your own job files, and project account management. Ada QuickStart Guide, Terra QuickStart Guide
  • Batch Jobs: As a resource shared among many users, each cluster must employ a batch system to schedule a time for each user's job to run. Without such a system, one user could consume a disproportionate amount of resources and cause other users' work to stall. Ada's batch system is called LSF, and Terra's batch system is called SLURM. While similar in function, they differ in their finer details, such as job file syntax. Information relevant to each system can be found below.
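To make the syntax difference concrete, the sketch below pairs a few common LSF directives with their SLURM counterparts. The directive mapping reflects standard LSF/SLURM usage, but the specific job name, task count, and time limit are made-up illustrative values.

```shell
#!/bin/bash
# Side-by-side sketch of equivalent batch directives (values are assumptions):
#
#   LSF (Ada)                  SLURM (Terra)
#   #BSUB -J myjob             #SBATCH --job-name=myjob
#   #BSUB -n 4                 #SBATCH --ntasks=4
#   #BSUB -W 1:00              #SBATCH --time=01:00:00
#   #BSUB -o out.%J            #SBATCH --output=out.%j
#
# Submission also differs:
#   LSF:   bsub < myjob.sh     (script read from stdin)
#   SLURM: sbatch myjob.sh     (script passed as an argument)

echo "directive mapping sketch"
```

Note that `%J` (LSF) and `%j` (SLURM) both expand to the job ID in output filenames, and LSF's `-W` takes hours:minutes while SLURM's `--time` takes hours:minutes:seconds.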

  • Creating your own batch jobs: The tamubatch page provides information on how to use tamubatch to create and submit jobs easily.

  • Troubleshooting: While we cannot predict all bugs and errors, some issues on our clusters are common enough to catalog. See the Common Problems and Quick Solutions Page for a small collection of the most prevalent issues. For further assistance, users can contact help@hprc.tamu.edu to open a support ticket.

HPRC's YouTube Channel

  • Prefer visual learning? HPRC has launched its official YouTube channel where you can find video versions of our help guides, recordings of our short courses, and more! Subscribe here.
Proudly Serving Members of the Texas A&M University System
