The HPRC group provides its users with access to several specially configured "HPRC Lab" Linux workstations at two campus locations (see below). These workstations serve as stand-alone platforms for interactive pre- and post-processing tasks, including visualization, that are directly related to larger-scale computation on the clusters. Handling such tasks on a dedicated workstation is more convenient and responsive than performing them directly on the clusters.
Access to the HPRC Lab workstations is available to any user of the HPRC clusters. We are working on automating the account process; for now, if you would like to use these systems, please contact us (email preferred).
Locations and availability
Four Linux workstations (hostnames: hprclab2.tamu.edu, hprclab3.tamu.edu, hprclab4.tamu.edu, hprclab5.tamu.edu) have been installed in room B013 in the basement of Teague. This lab is generally not accessible in person, but the systems there can be used remotely (e.g. for LaTeX work). [NOTE: we are trying to find a better home for these.]
Three additional workstations (hostnames: hprclab1.tamu.edu, hprclab6.tamu.edu, hprclab7.tamu.edu) have also been installed at the Student Computing Center (SCC). This lab is typically open 24 hours on weekdays and for more limited hours on weekends.
The daily schedule for the SCC can be found here.
The final workstation, hprclab0.tamu.edu, is available at our help desk at 114B Henderson Hall on Jones Street (across from the Northside dorms) from 9-5, M-F (when open; please email HPRC help to check on availability).
The support staff at both labs mentioned above should be the initial point of contact for obvious hardware problems (e.g. smoke coming out of the system). All other issues (including hardware problems) should be reported to the HPRC Helpdesk by calling 845-0219 or by sending email to email@example.com. When sending email, be sure to follow these recommendations so we can help you in a prompt and efficient manner.
Usage policy and guidelines
The HPRC Lab Linux workstation accounts are not to be used for work that is unrelated to the purpose for which a user was granted an HPRC account. These machines are meant primarily for interactive tasks (post-processing or visualization work). Users who need to work primarily through the batch systems on the clusters (submitting batch jobs and running programs on the command line) may do so using any of the Windows machines in the Open Access Labs; they simply need a secure shell window to log in to one or more of the clusters. When both types of users wish to use the Linux workstations, those with non-interactive needs must yield the machines to those who have interactive/graphical/visualization tasks to perform. Furthermore, no user may occupy a workstation for more than 2 hours in a single sitting, unless there is no one else waiting to use a workstation.
Users may login to these workstations either at the console (while sitting in front of the workstation), or remotely over the network using a secure shell connection (SSH). In either case, the user must use their NetID and password.
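For example, a remote login looks like the following (the NetID and hostname here are placeholders; substitute your own NetID and any of the workstation hostnames listed above):

```shell
# open a secure shell session on one of the HPRC Lab workstations;
# replace "aggie" with your NetID and authenticate with your NetID password
ssh aggie@hprclab1.tamu.edu
```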
Disk space management
At present, there are no quotas on users' /home directories. Users are expected to move their files off the system when not using them.
- Directories which are not shared, like /home/aggie for instance, represent separate and independent disk storage that is local to each workstation. Saving a file in /home/aggie on the workstation hprclab2.tamu.edu means that you cannot access it in /home/aggie on any workstation other than hprclab2.tamu.edu.
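One simple way to move files off a workstation, per the policy above, is scp. A sketch, assuming you have an account on the Ada cluster and that $MYNETID holds your NetID (the directory name is illustrative):

```shell
# copy a results directory from the local home directory to Ada scratch,
# then remove the local copy to free up workstation disk space
scp -r ~/results $MYNETID@ada.tamu.edu:/scratch/user/$MYNETID/
rm -rf ~/results
```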
- If you like, you can use sshfs to access your data on HPRC clusters. For example:
cd                                                               # change to home directory
mkdir ~/ada                                                      # make a directory for ada mounts
mkdir ~/ada/home                                                 # for ada:/home
mkdir ~/ada/scratch                                              # for ada:/scratch
sshfs $MYNETID@ada.tamu.edu:/home/$MYNETID ~/ada/home            # mount ada:/home/$MYNETID
sshfs $MYNETID@ada.tamu.edu:/scratch/user/$MYNETID ~/ada/scratch # mount ada:/scratch/user/$MYNETID
Then you can access your ada files with something like ls ~/ada/scratch.
These mounts will automatically disconnect when you log out.
NOTE: This is provided only as a convenient way to copy files/data back and forth between the clusters and the workstations. IT IS REALLY SLOW. Don't try to run models directly from the ada filesystems; copy any data files, models, etc. to the local system first (and remember to copy results back as needed).
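Following the sshfs mounts above, a copy-then-run workflow might look like this (the directory names are illustrative):

```shell
# copy input data from the (slow) sshfs mount to fast local disk
cp -r ~/ada/scratch/mymodel ~/mymodel
cd ~/mymodel
# ... run your post-processing/visualization here, against the local copy ...
# copy any new results back to ada when finished
cp -r ~/mymodel/output ~/ada/scratch/mymodel/
```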
Locally installed software
The following software packages have been installed on each of the Linux workstations to enable users to offload postprocessing and visualization tasks from the heavily loaded (and therefore interactively less responsive) clusters.
The following commercial (licensed) applications are presently installed.
- Abaqus 6.12 CAE - via abq6121 cae (1)
- Matlab 2013a - via matlab
- NOTE: for users who are running Abaqus CAE and Viewer on an HPRC Lab workstation and using X11 forwarding to display the GUI on their Windows PC, the -mesa command line option should be used if your X11 server does not support OpenGL (e.g. Xming).
In addition to the commercial applications, a wide range of open-source software is installed on these systems, including:
- Grace - xmgrace
- GROMACS - (see: rpm -ql gromacs | grep bin)
- LAMMPS - lmp_g++
- LibreOffice - ooffice
- NAMD - namd
- Ncview - ncview
- OpenFOAM - (see: rpm -ql OpenFOAM | grep /bin/)
- Paraview - paraview
- RStudio - rstudio
- TexLive - latex, dvips, bibtex etc.
Launching remote software
If there is a need to launch software that resides on one of the clusters but have it display its graphical interface on the Linux workstation where the user is sitting, the following procedure can be used. Assume the program name is "xemacs" and that it resides on and will be executed on the machine "eos". At the prompt in a Unix shell window (on your workstation), type:
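A minimal invocation, assuming xemacs is on the default PATH on eos and that your NetID is the same on both machines:

```shell
# run xemacs on eos, displaying its window on the local workstation;
# -X requests X11 forwarding (shown explicitly, though it is the default here)
ssh -X eos xemacs &
```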
NOTE: By default, ssh on the HPRC Lab workstations includes '-X' (forward X11)
Hardware configuration
- System: Dell OptiPlex 9020s (in Teague B013) and 9030s (SCC)
- Processor: Intel(R) Core(TM) i7-4770S @ 3.10GHz (B013) or i7-4790S CPU @ 3.20GHz (SCC)
- RAM: 8GB
- Operating System: Fedora 23 x86_64
- Hard Disk: 465GB
- Network: 1000/100Mbps Ethernet
Photos: Student Computing Center (SCC, next to the Evans Library, behind the CCG); Teague B013 (basement).