- Terra Usage Policies
- Accessing Terra
- Navigating Terra & Quotas
- The Batch System
- Writing a Job
- Submitting and Monitoring Jobs
- Additional Topic: Finding Output
- Additional Topic: Finding Software
- Additional Topic: Transferring Files
- Additional Topic: Graphic User Interfaces
- Additional Topic: Translating Ada/LSF jobs to Terra/Slurm
Terra Usage Policies
Access to Terra is granted with the condition that you will understand and adhere to all TAMU HPRC and Terra-specific policies.
General policies can be found on the HPRC Policies page.
Terra-specific policies, which are similar to those for Ada, can be found on the Terra Policies page.
Most access to Terra is done via a secure shell session.
Users on Mac and Linux/Unix should use whatever SSH-capable terminal is available on their system.
The command to connect to Terra is as follows. Be sure to replace [NetID] with your TAMU NetID.
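A minimal sketch of the connection command, assuming the standard Terra login hostname (terra.tamu.edu):

```shell
# Connect to Terra over SSH (replace [NetID] with your TAMU NetID):
ssh [NetID]@terra.tamu.edu
```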
Your login password is the same one you use for Howdy. You will not see your password as you type it into the login prompt.
When you first access Terra, you will be placed in your home directory. This directory has smaller quotas and should not be used for general-purpose storage.
The home directory is located at:
You will want to navigate to your scratch directory. This directory is located at:
You can navigate to scratch easily with the following command:
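A sketch of the directory layout and the navigation command, assuming the default HPRC account setup (exact paths may vary per account):

```shell
# Home directory (small, non-extensible quota):
#   /home/[NetID]
# Scratch directory (larger, extensible quota):
#   /scratch/user/[NetID]

# The $SCRATCH environment variable points at your scratch directory:
cd $SCRATCH
```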
Your scratch directory has a default (extensible) quota of 1TB/50,000 files.
Your home directory has a default (non-extensible) quota of 10GB/10,000 files.
You can see the current status of your quota with:
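On HPRC clusters this is typically the showquota utility (the command name is an assumption here):

```shell
# Display current disk-space and file-count usage against your quotas:
showquota
```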
If you need a quota increase, please contact us with justification and the expected length of time that you will need the extended quota.
The Batch System
The batch system is a load distribution implementation that ensures convenient and fair use of a shared resource. Submitting jobs to a batch system allows a user to reserve specific resources with minimal interference to other users. All users are required to submit resource-intensive processing to the compute nodes through the batch system; attempting to circumvent the batch system is not allowed.
On Terra, Slurm is the batch system that provides job management. More information on Slurm can be found on the Terra Batch page.
Writing a Job
In order to properly run a program on Terra, you will need to submit a job.
The simple example below requests 1 core on 1 node with 5GB of RAM for 1 hour. "ExampleModule" should be replaced or omitted based on your software needs.
```bash
#!/bin/bash
#SBATCH --export=NONE
#SBATCH --get-user-env=L
#SBATCH -t 01:00:00
#SBATCH -J ExampleJob
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --mem=5G
#SBATCH -o ExJobOut.%j

module load ExampleModule
./myprogram
```
This job file should be written on a Linux/Unix environment.
If written on an older Mac or DOS workstation, you will need to use "dos2unix" to remove certain characters that interfere with parsing the script.
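A sketch of the conversion (the script filename is hypothetical):

```shell
# Rewrite the file in place, converting DOS (CRLF) line endings to Unix (LF):
dos2unix MyJob.slurm
```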
Submitting and Monitoring Jobs
After submitting a job, you can monitor it in several ways.
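A sketch of the common Slurm commands for submission and monitoring (the script filename is hypothetical; replace the bracketed placeholders):

```shell
# Submit the job script; Slurm prints the assigned job ID:
sbatch MyJob.slurm

# List your queued and running jobs:
squeue -u [NetID]

# Show detailed information for a single job:
scontrol show job [JobID]
```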
Additional Topic: Finding Output
Additional Topic: Finding Software
There is a set of pre-installed software on Terra that is hidden within the module system. By hiding individual software suites and dependencies within modules, we are able to support multiple versions of each software suite with minimal clashing.
You can see the most popular software on the HPRC Available Software page.
You can find most available software on Terra with the following command:
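A sketch, assuming the standard Lmod module tool (the module name/version below is hypothetical):

```shell
# List software modules currently available to load:
module avail

# Load a specific module, e.g.:
module load ExampleModule
```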
You can search for particular software by keyword using:
```shell
module spider keyword
```
If you need new software or an update, please contact us with your request.
There are restrictions on what software we can install, and there is often a queue of pending installation requests, so allow for possible delays when planning around an installation request.
Additional Topic: Transferring Files
Additional Topic: Graphic User Interfaces
The use of GUIs on Terra is a more complicated process than running non-interactive jobs or doing resource-light interactive processing.
You have two options for using GUIs on Terra.
The first option is to run on the login node. When doing this, you must observe the fair-use policy of login node usage. Users commonly violate these policies by accident, resulting in terminated processes, confusion, and warnings from our admins.
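For the login-node option, GUI windows are typically displayed on your local machine through X11 forwarding; a sketch, assuming an X server is running locally (replace [NetID], and note that xclock is just a hypothetical lightweight example):

```shell
# -X enables X11 forwarding so GUI windows display on your local screen:
ssh -X [NetID]@terra.tamu.edu

# Then run a lightweight GUI program on the login node, e.g.:
xclock
```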
The second option is to use a VNC job. This method is outside the scope of this guide. See the Terra Remote Visualization page for more information.
Additional Topic: Translating Ada/LSF jobs to Terra/Slurm
The HPRC Batch Translation page contains information on converting between LSF, PBS, and Slurm.
Some LSF-Slurm translation examples have also been made available on the Ada-Terra Jobs Translation page. Our staff has also written example jobs for specific software. These software-specific examples can be found on the individual software pages where available.