R
Description
R is a free software environment for statistical computing and graphics.
Homepage: http://www.r-project.org/
Access
R is open to all HPRC users.
Loading the Module
To see all versions of R available:
[NetID@cluster ~]$ module spider R
To load a particular version of R (Example: 3.3.1 with the iomkl toolchain):
[NetID@cluster ~]$ module load R/3.3.1-iomkl-2015B-default-mt
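To confirm that the intended version was picked up, one option is to start R and print the version string, for example:
> R.version.string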
Note: Loading an R module loads only the base installation of R. A separate module, R_tamu, is also available and comes with many commonly used packages pre-installed.
To see all versions of R_tamu available:
[NetID@cluster ~]$ module spider R_tamu
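If you are not sure whether a package you need is already included in R_tamu, a quick check from within R is, for example (the package name ggplot2 below is only an illustration):
> requireNamespace("ggplot2", quietly = TRUE)  # TRUE if the package can be loaded
> rownames(installed.packages())               # lists every package R can currently find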
Installing Packages
While many packages are available with the R_tamu module, you may find that a package you need is not installed. Packages can be installed in a personal directory. It is recommended to make a subdirectory in either $HOME or $SCRATCH for your R packages.
To install a package into an existing directory such as ~/R/My_Libs, use the following command inside R:
> install.packages("package_name", lib="~/R/My_Libs")
Important Note: Any path given to R must be a full path. R does NOT recognize the $HOME or $SCRATCH environment variables. In the above example, "~/" is a shortcut for "/home/NetID/". If the installation directory is located in $SCRATCH, the path would need to be "/scratch/user/NetID/...". If you are unsure of the full path of the installation directory, navigate to the directory outside of R and use "pwd" to print the full path to that directory.
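Once a package is installed in a personal directory, R also needs to know where to find it when the package is loaded. A minimal sketch, using the ~/R/My_Libs directory from the example above and a placeholder package name:
> .libPaths(c("/home/NetID/R/My_Libs", .libPaths()))  # add the personal library to R's search path
> library("package_name")                             # R now also searches the personal library
Alternatively, library("package_name", lib.loc="/home/NetID/R/My_Libs") loads directly from that directory, and setting the R_LIBS_USER environment variable before starting R makes the personal library available by default.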
If you have trouble installing packages for yourself, you can contact us with any concerns. Similarly, if you think a package would be particularly useful to other users, you can contact us with a request to have it added to R_tamu.
Usage on the Login Nodes
Please limit interactive processing to short, non-intensive usage. Use non-interactive batch jobs for resource-intensive and/or multiple-core processing. Users are requested to be responsible and courteous to other users when using software on the login nodes.
The most important processing limits here are:
- ONE HOUR of PROCESSING TIME per login session.
- EIGHT CORES per login session on the same node or (cumulatively) across all login nodes.
Anyone found violating the processing limits will have their processes killed without warning. Repeated violation of these limits will result in account suspension.
Note: Your login session will disconnect after one hour of inactivity.
Usage on the Compute Nodes
Non-interactive batch jobs on the compute nodes allow for resource-demanding processing. Non-interactive jobs have higher limits on the number of cores, amount of memory, and runtime length.
For instructions on how to create and submit a batch job, please see the appropriate wiki page for each respective cluster:
- Terra: About Terra Batch Processing
- Grace: About Grace Batch Processing
Ada Example Job Scripts
Example 1: A serial (single-core) R job (last updated: March 24, 2017):
#BSUB -J R_Job1              # sets the job name to R_Job1.
#BSUB -L /bin/bash           # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 5:00                # sets to 5 hours the job's runtime wall-clock limit.
#BSUB -n 1                   # assigns 1 core for execution.
#BSUB -R "span[ptile=1]"     # assigns 1 core per node.
#BSUB -R "rusage[mem=5000]"  # reserves ~5GB per process/CPU for the job.
#BSUB -M 5000                # sets to ~5GB the per process enforceable memory limit.
#BSUB -o stdout1.%J          # directs the job's standard output to stdout1.jobid

## Load the necessary modules
module purge
module load R_tamu/3.3.1-iomkl-2016.07-default-mt

## Launch R with proper parameters
Rscript myScript.R
Example 2: A parallel (multi-core) R job, where myScript.R is a script that requests 10 slaves (a sketch of such a script is shown at the end of this section). Note: The number of cores requested should match the number of slaves requested.
#BSUB -J R_Job1              # sets the job name to R_Job1.
#BSUB -L /bin/bash           # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 5:00                # sets to 5 hours the job's runtime wall-clock limit.
#BSUB -n 10                  # assigns 10 cores for execution.
#BSUB -R "span[ptile=10]"    # assigns 10 cores per node.
#BSUB -R "rusage[mem=5000]"  # reserves ~5GB per process/CPU for the job (5GB * 10 cores = 50GB per node).
#BSUB -M 5000                # sets to ~5GB the per process enforceable memory limit.
#BSUB -o stdout1.%J          # directs the job's standard output to stdout1.jobid

## Load the necessary modules
module purge
module load R_tamu/3.3.1-iomkl-2016.07-default-mt

## Launch R with proper parameters
mpirun -np 1 Rscript myScript.R
To submit the batch job (where jobscript is a file containing one of the examples above), run:
[NetID@ada1 ~]$ bsub < jobscript
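The job files above do not show what myScript.R itself contains. A minimal sketch of a script that spawns 10 slaves, assuming the Rmpi and snow packages are available (for example through R_tamu) and using a trivial computation purely as an illustration:

library(Rmpi)
library(snow)

cl <- makeCluster(10, type = "MPI")                  # spawn 10 MPI slaves (matches the 10 cores requested)

results <- clusterApply(cl, 1:10, function(i) i^2)   # run a small task on each slave
print(unlist(results))

stopCluster(cl)                                      # shut the slaves down cleanly
mpi.quit()                                           # exit via MPI instead of the usual q()

With this pattern, mpirun starts only the master R process and the script spawns the slaves itself, which is why the job file uses mpirun -np 1 even though 10 cores are requested.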
Terra Example Job Scripts
Example 1: A serial (single-core) R job (last updated: March 24, 2017):
#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE
#SBATCH --get-user-env=L

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=R_Job        # Sets the job name to R_Job
#SBATCH --time=5:00:00          # Sets the runtime limit to 5 hr
#SBATCH --ntasks=1              # Requests 1 core
#SBATCH --ntasks-per-node=1     # Requests 1 core per node (1 node)
#SBATCH --mem=5G                # Requests 5GB of memory per node
#SBATCH --output=stdout1.o%J    # Sends stdout and stderr to stdout1.o[jobID]

## Load the necessary modules
module purge
module load R_tamu/3.3.2-iomkl-2017A-Python-2.7.12-default-mt

## Launch R with proper parameters
Rscript myScript.R
Example 2: A parallel (multi-core) R job, where myScript.R is a script that requests 10 slaves, as in the sketch shown in the Ada section above (last updated: March 24, 2017).
Note: The number of cores requested should match the number of slaves requested.
#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE
#SBATCH --get-user-env=L

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=R_Job        # Sets the job name to R_Job
#SBATCH --time=5:00:00          # Sets the runtime limit to 5 hr
#SBATCH --ntasks=10             # Requests 10 cores
#SBATCH --ntasks-per-node=10    # Requests 10 cores per node (1 node)
#SBATCH --mem=50G               # Requests 50GB of memory per node
#SBATCH --output=stdout1.o%J    # Sends stdout and stderr to stdout1.o[jobID]

## Load the necessary modules
module purge
module load R_tamu/3.3.2-iomkl-2017A-Python-2.7.12-default-mt

## Launch R with proper parameters
mpirun -np 1 Rscript myScript.R
To submit the batch job (where jobscript is a file containing one of the examples above), run:
[NetID@terra ~]$ sbatch jobscript
Usage on the VNC Nodes
The VNC nodes allow for usage of a graphical user interface (GUI) without disrupting other users.
VNC jobs and GUI usage do come with restrictions. All VNC jobs are limited to a single node (Terra: 28 cores/64GB). There are fewer VNC nodes than comparable compute nodes.
For more information, including instructions, on using software on the VNC nodes, please visit our Terra Remote Visualization page.