SW:Portal

Revision as of 20:04, 28 July 2020

What is the TAMU HPRC OnDemand Portal?

The TAMU HPRC OnDemand portal is based on Open OnDemand (https://openondemand.org/), an open-source web platform through which users can access HPC clusters and services with a web browser. The portal provides an intuitive, easy-to-use interface that lets new users become productive immediately, while giving experienced users a convenient alternative way to access HPC resources. Its flexible and extensible design makes it easy to deploy new services as needed.

Services Provided

  • Job submission and monitoring
  • File transfer and management
  • File editing
  • Shell access
  • Interactive applications
    • Abaqus
    • Ansys
    • IGV
    • LS-PREPOST
    • Matlab
    • Jupyter
    • ParaView
    • VNC
    • Rstudio
    • JupyterLab
    • JBrowse

How to Access

We recommend you access the Ada or Terra portal through their landing page at

      https://portal.hprc.tamu.edu

Click the portal you want to connect to. The portals are CAS-authenticated; all active HPRC users can access both portals with their NetID and password. You are authenticated only once: until your session expires, you can freely access both portals without further authentication.

If accessing from off campus, the TAMU VPN is needed.

You can go directly to the Ada or Terra portal using one of the following URLs:

https://portal-terra.hprc.tamu.edu
https://portal-ada.hprc.tamu.edu


Two-Factor Authentication Requirement

Starting October 1, 2018, the Division of Information Technology will require use of Duo NetID Two Factor Authentication on its Virtual Private Network (VPN) (connect.tamu.edu) service.

Duo provides a second layer of security to Texas A&M accounts.


If you are not already enrolled in Duo and plan to use VPN, you can enroll now at duo.tamu.edu. Enrolling is as easy as 1-2-3:

1. Choose your device and download the Duo Mobile app. (We strongly recommend the mobile app as the most user-friendly option.)

2. Start your enrollment at https://gateway.tamu.edu/duo-enroll/.

3. Remember: Once you sign up, you will need your Duo-enrolled device when you log in to most Texas A&M resources.

For more information, consult IT's knowledge base article for Duo: https://u.tamu.edu/KB0012105

Using the Portal

Each service provided by the portal is available from the navigation bar at the top of the page.

Navigation-bar.png

Files

File-explorer.png

The first option in the navigation bar is the "Files" drop-down menu. From this menu, a user can open a file explorer at either their home directory or scratch directory.

Some users may find the visual interface of the file explorer more intuitive than shell-based file browsing. All files in the current directory are shown on screen, along with the file tree or hierarchy.

Normal file management commands are available with the click of a button. These include:

  • Viewing files
  • Text editing
  • Copy/Paste
  • Renaming files
  • Creating files
  • Creating directories
  • Deleting files
  • File upload/download

The 'View' button displays the highlighted file in the browser, as long as the browser supports the file type. Modern browsers support many kinds of files, from simple text files to images to complex multimedia. This feature is convenient when you want to quickly review a file, since you don't have to download it to your local machine first, as you would if you had connected to a cluster with PuTTY or MobaXterm.

File Editor

The file editor allows you to edit a selected file. It cannot be accessed from the main menu, but is available through the Files app or the Job Composer. In the Files app, first select a file, then click 'Edit'; a new tab will open where you can edit the file. In the Job Composer, you can edit the job script by clicking 'Open Editor' at the bottom of the Job Composer.

Cluster Shell Access

Shell access to any of the three clusters is available from this drop-down menu with one click. The shell access app is similar to an SSH client such as PuTTY or MobaXterm; it allows users to log in to a cluster with their NetID and password.

Copy/paste can be done with hotkeys. To copy text from the shell access terminal, highlight it with the mouse; the highlighted text is copied to the clipboard. To paste text from the clipboard into the terminal, press 'Ctrl+v'.

Shell access works with Firefox and Chrome only.

Jobs

From the Jobs drop-down menu, a user can view their active jobs, or compose and submit jobs using the job composer.

Active Jobs

The active jobs menu provides information about running jobs on the cluster, including JobID, name, user, account, time used, queue, and status. Clicking the arrow to the left of a job reveals more details, such as where it was submitted from, which node it is running on, when it was submitted, process IDs, memory, and CPU time.

Activejobs.png

Job Composer

Jobcomposer.png

When first launched, the job composer will walk the user through each of its features, covering the whole process of creating, editing, and submitting a job.

The job composer provides some template job scripts the user can choose from. More details on job parameters can be found here. Once a template is selected, you need to edit the template to provide customized job content. This can be done by clicking 'Open Editor' underneath the job script contents.

The job composer keeps a dedicated directory in the user's scratch space to store the jobs it creates; we call this the job composer's root directory. Each new job gets a sub-directory under the root directory, named after the job's index, an integer maintained by the job composer: the first job has index 1, the second index 2, and so on. Knowing this helps you use the job composer more effectively.
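As an illustration, assuming the stock Open OnDemand layout (the exact root path on your cluster may differ), the root directory looks like this:

   $SCRATCH/ondemand/data/sys/myjobs/projects/default/    (job composer root; path assumed)
   ├── 1/    job with index 1: its job script and input/output files
   ├── 2/    job with index 2
   └── 3/    job with index 3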

There are two ways to work with the default directory created by the job composer.


Method 1: use the default directory as the working directory of your job. This means you need to upload all input files to that directory before clicking the submit button. This is easily done by clicking 'Open Dir' right beneath the job script contents; a file explorer will open the job directory in a new tab where you can transfer files.

Method 2: if the input files are already stored somewhere on the cluster and you don't want to move them around, or you prefer to organize directories yourself, simply add one line to the job script before any other command, where /path/to/job_working_dir is the directory in which you want all commands executed:

cd /path/to/job_working_dir
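For example, a minimal Slurm job script for Terra using Method 2 might look like the sketch below; the job name, resource values, and program name are placeholders for illustration, not a tested recipe:

   #!/bin/bash
   #SBATCH --job-name=my_job          # placeholder job name
   #SBATCH --time=01:00:00            # 1 hour wall time
   #SBATCH --ntasks=1
   #SBATCH --mem=2G

   cd /path/to/job_working_dir        # run everything from your own directory
   ./my_program                       # placeholder for your actual command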

Common Problems

1. The session starts and quits immediately.

Check your quota in your home and scratch directories. If usage is full or close to full, clean up your disk space and try again.
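For example, assuming the showquota utility available on HPRC clusters, you can check your disk and file usage from a shell before launching a session:

   showquota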

2. In ANSYS Workbench, not all windows are available in the foreground.

Right-click the bottom panel title bar "Unsaved Project - Workbench" and select maximize.

Log out

To properly log out of the portal, you must do two things: (1) log out by clicking 'Log out' in the top navigation bar; (2) close the browser to completely terminate the session.

Be aware that logging out of the portal alone is not enough. You must also close the entire browser (not just the tab), a side effect of CAS. This is especially important if you are using a public computer.

Cleanup

The portal stores temporary files for interactive apps in $SCRATCH/ondemand/data/sys/dashboard/. Although the disk space used by these files accumulates slowly, it is a good habit to clean this directory periodically:

   rm -rf $SCRATCH/ondemand/data/sys/dashboard/batch_connect/sys/*

Interactive Apps

Each piece of software listed above in the "Services Provided" section is directly available to launch from this menu. When a piece of software is selected, you will see a form for job parameters such as the number of cores, wall time, memory, and type of node. If you are not sure what to change, the default values work fine. Once you fill out the form, click 'Launch' and the app will be launched as a job. It first enters a queue; when ready, a button becomes available to open the interface of the chosen software.

Interactive sessions can be managed via the "My Interactive Sessions" button on the navigation bar.

We have tried to provide the most commonly used GUI software packages on the Interactive Apps drop-down menu. If a software package is not available there, you can always run it within VNC, which is also provided on the drop-down menu. To run a GUI application in a VNC session on the portal, follow these steps.

1. Click 'VNC' under 'Interactive Apps' and start a VNC session.
2. In the terminal within the new tab, load the module for the software you want to run.
3. If you have chosen a GPU node, run

   vglrun app_name

Otherwise, type the application name on the command line directly.
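For example, to run MATLAB in the VNC session (MATLAB is used here only as an illustration; check module avail for the exact module name on your cluster):

   module load MATLAB       # illustrative module name; verify with 'module avail'
   vglrun matlab            # on a GPU node
   matlab                   # on a regular node, launch directly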

RStudio

To install CRAN packages, start RStudio with enough memory for the install process: for example, 10 cores and 2 GB per core.

Then install CRAN packages using the following command at the RStudio R prompt:

Install Ada CRAN packages:

   install.packages('package_name', repos='http://10.70.4.4/cran/')

Install Terra CRAN packages:

   install.packages('package_name', repos='http://10.76.5.24/cran/')

A local CRAN repository is used since RStudio runs on the compute nodes which do not have internet access.

If you need Bioconductor or github R packages installed, contact the HPRC helpdesk to request installation.

JupyterLab

You can create your own JupyterLab conda environment using either Anaconda or Miniconda for use on the HPRC portal, but you must use one of the Anaconda versions listed on the JupyterLab HPRC portal web page.

Note that you will need to make sure you have enough available file quota (~30,000), since conda creates thousands of files.

An Anaconda install of JupyterLab creates about the same number of files as Miniconda3.

Anaconda

To create an Anaconda conda environment called jupyterlab_1.2.2, do the following on the command line:

module purge
module load Anaconda/3-5.0.0.1
conda create -n jupyterlab_1.2.2


After your jupyterlab_1.2.2 environment is created, you will see output on how to activate and use it:

#
# To activate this environment, use:
# > source activate jupyterlab_1.2.2
#
# To deactivate an active environment, use:
# > source deactivate
#

Then you can install JupyterLab (specifying a version if needed) and add packages to your jupyterlab_1.2.2 environment:

source activate jupyterlab_1.2.2
conda install -c conda-forge jupyterlab=1.2.2
conda install -c conda-forge package-name

To remove downloads and unused files after packages are installed:

conda clean --all

Miniconda

JupyterLab v1.2.2 installed via Miniconda3 will install Python v3.6.7, while Anaconda installs Python 3.8.0.

Anaconda/3-5.0.0.1 and Miniconda3/4.7.10 both use Python v3.6.7 with JupyterLab v1.2.0, but with Anaconda, JupyterLab v1.2.2 installs Python 3.8.0. So if you want JupyterLab v1.2.2 instead of v1.2.0, it is currently best to use Anaconda.

To create a Miniconda conda environment called jupyterlab_1.2.0, do the following on the command line:

module purge
module load Miniconda3/4.7.10
conda create -p /scratch/user/your_netid/.conda/envs/jupyterlab_1.2.0 jupyterlab=1.2.0

After your jupyterlab_1.2.0 environment is created, you will see output on how to activate and use your jupyterlab_1.2.0 environment:

#
# To activate this environment, use
#
#     $ conda activate /scratch/user/your_netid/.conda/envs/jupyterlab_1.2.0
#
# To deactivate an active environment, use
#
#     $ conda deactivate

You can add packages to your Miniconda3 environment using either Anaconda/3-5.0.0.1 or Miniconda3/4.7.10, both of which use Python v3.6.7.

When activating the conda environment using the Miniconda3 module, you must specify the full path. When using the Anaconda module you only need to specify the environment name.
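Using the environments created above, the two activation forms look like this:

   # Miniconda3 module: the full path is required
   conda activate /scratch/user/your_netid/.conda/envs/jupyterlab_1.2.0

   # Anaconda module: the environment name alone is enough
   source activate jupyterlab_1.2.2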

In this example, JupyterLab should be run using the portal JupyterLab app. You can use your Miniconda3/4.7.10 environment in the JupyterLab portal app by selecting the Anaconda/3-5.0.0.1 module on the portal app page and providing the name, including the full path, of your Miniconda3/4.7.10 environment in the "JupyterLab Environment to be activated" box.

Jupyter Notebook

You can create your own Jupyter Notebook environment using Python or Anaconda for use on the HPRC portal, but you must use one of the module versions listed on the Jupyter Notebook HPRC portal web page.

Note that you will need to make sure you have enough available file quota (~10,000), since conda and pip create thousands of files.

Python

A Python module can be used to create a virtual environment to be used in the portal Jupyter Notebook app when all you need is Python packages.

You can use a default Python virtual environment in the Jupyter Notebook portal app by leaving the "Optional Environment to be activated" field blank.

To create a Python virtual environment called my_notebook-python-3.6.6-foss-2018b (you can name it whatever you like), do the following on the command line. You can save your virtual environments in any $SCRATCH directory; in this example a directory called /scratch/user/mynetid/pip_envs is used, but you can use another name.

mkdir -p /scratch/user/mynetid/pip_envs

A good practice is to name your environment so you can identify which Python version it uses, and therefore which module to load.

The next three lines will create your virtual environment.

module purge
module load Python/3.6.6-foss-2018b
virtualenv /scratch/user/mynetid/pip_envs/my_notebook-python-3.6.6-foss-2018b

Then activate the virtual environment using the full path to the activate script inside it, and install Python packages:

source /scratch/user/mynetid/pip_envs/my_notebook-python-3.6.6-foss-2018b/bin/activate
pip install notebook
pip install python_package_name

You can use your Python/3.6.6-foss-2018b environment in the Jupyter Notebook portal app by selecting the Python/3.6.6-foss-2018b module on the portal app page and providing the full path to the activate command of your environment in the "Optional Conda Environment to be activated" box. The activate command is found inside the bin directory of your virtual environment; an example of what to put in the box is the full path used in the source command above.

Anaconda

Anaconda differs from Python's virtualenv in that you can also install other types of software, such as R and R packages, in your environment. To create an Anaconda conda environment called my_notebook (you can name it whatever you like), do the following on the command line:

module purge
module load Anaconda/3-5.0.0.1
conda create -n my_notebook


After your my_notebook environment is created, you will see output on how to activate and use it:

#
# To activate this environment, use:
# > source activate my_notebook
#
# To deactivate an active environment, use:
# > source deactivate
#

Then install notebook; afterwards you can add optional packages to your my_notebook environment:

source activate my_notebook
conda install -c conda-forge notebook
conda install -c conda-forge package-name

Additional Information

The Ohio Supercomputing Center has a video about Open OnDemand at:

 https://youtu.be/DfK7CppI-IU