
R

Description

R is a free software environment for statistical computing and graphics.
Homepage: http://www.r-project.org/

Access

R is open to all HPRC users.

Loading the Module

To see all versions of R available:

[NetID@cluster ~]$ module spider R

To load the default R module (not recommended):

[NetID@cluster ~]$ module load R 

To load a particular version of R (Example: 3.3.1 with the iomkl toolchain):

[NetID@cluster ~]$ module load R/3.3.1-iomkl-2015B-default-mt

Note: The R modules load only the base installation of R. A separate module, R_tamu, comes with a number of packages pre-installed.

To see all versions of R_tamu available:

[NetID@cluster ~]$ module spider R_tamu

Installing Packages

While there are many packages available with the R_tamu module, you may find that we do not have a package installed that you need. You can install packages for yourself in your $HOME directory.
This can be done in R:

> install.packages("package_name")

Note: If this is the first time you are installing packages for yourself, R will report that the system-wide library is not writable and offer to create a personal library in your $HOME directory. Answer y to both prompts:

Warning in install.packages("package_name") :
'lib ="/general/software/x86_64/easybuild/software/R/3.3.1-iomkl-2016.07-default-mt/lib64/R/library"' is not writable
Would you like to use a personal library instead? (y/n) y
Would you like to create a personal library ~/R/x86_64-pc-linux-gnu-library/3.3 to install packages into? (y/n) y
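
When installing from a batch job or other non-interactive session, the prompts above are unavailable, so the library location and CRAN mirror must be given explicitly. A minimal sketch (the mirror URL and the use of R_LIBS_USER here are illustrative assumptions, not site requirements):

# Create the personal library if it does not exist yet, then install into it.
# R_LIBS_USER expands to the per-user library, e.g. ~/R/x86_64-pc-linux-gnu-library/3.3
dir.create(Sys.getenv("R_LIBS_USER"), recursive = TRUE, showWarnings = FALSE)
install.packages("package_name",
                 lib   = Sys.getenv("R_LIBS_USER"),
                 repos = "https://cran.r-project.org")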

If you have trouble installing packages for yourself, contact us for assistance.

Usage on the Login Nodes

Please limit interactive processing to short, non-intensive usage. Use non-interactive batch jobs for resource-intensive and/or multiple-core processing. Users are requested to be responsible and courteous to other users when using software on the login nodes.

The most important processing limits here are:

  • ONE HOUR of PROCESSING TIME per login session.
  • EIGHT CORES per login session on the same node or (cumulatively) across all login nodes.

Anyone found violating the processing limits will have their processes killed without warning. Repeated violation of these limits will result in account suspension.
Note: Your login session will disconnect after one hour of inactivity.

Usage on the Compute Nodes

Non-interactive batch jobs on the compute nodes allow for resource-demanding processing. Batch jobs have higher limits on the number of cores, amount of memory, and runtime length.

For instructions on how to create and submit a batch job, please see the batch processing wiki page for the respective cluster (Ada or Terra).

Ada Example Job Script

Updated: February 17, 2017

Example 1: A serial (single core) example:

#BSUB -J R_Job               # sets the job name to R_Job.
#BSUB -L /bin/bash           # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 2:00                # sets to 2 hours the job's runtime wall-clock limit.
#BSUB -n 1                   # assigns 1 core for execution.
#BSUB -R "span[ptile=1]"     # assigns 1 core per node.
#BSUB -R "rusage[mem=5000]"  # reserves 5000MB per process/CPU for the job (5GB * 1 Core = 5GB per node) 
#BSUB -M 5000		     # sets to 5,000MB (~5GB) the per process enforceable memory limit.
#BSUB -o R_Job.o%J           # directs the job's standard output to R_Job.o[jobid]


# Load the modules
module load R_tamu/3.3.1-iomkl-2016.07-default-mt

# Launch R with proper parameters 
Rscript myScript.R
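
In Example 1, myScript.R stands for your own R code; the file name and contents below are hypothetical placeholders, not part of the Ada setup. A minimal serial script could look like:

# myScript.R -- hypothetical serial example
x <- rnorm(1000)                                             # draw 1000 random values
print(summary(x))                                            # print summary statistics to the job's output file
write.csv(data.frame(x), "output.csv", row.names = FALSE)    # save the results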

Example 2: A parallel example, where myScript.R is a script that spawns 16 slaves (a sketch of such a script follows the job file below). Note: The number of cores requested should match the number of slaves spawned.

#BSUB -J R_Job              # sets the job name to R_Job.
#BSUB -L /bin/bash          # uses the bash login shell to initialize the job's execution environment.
#BSUB -W 2:00               # sets to 2 hours the job's runtime wall-clock limit.
#BSUB -n 16                 # assigns 16 cores for execution.
#BSUB -R "span[ptile=16]"   # assigns 16 cores per node.
#BSUB -R "rusage[mem=2500]" # reserves 2500MB per process/CPU for the job (2.5GB * 16 Core = 40GB per node) 
#BSUB -M 2500		    # sets to 2,500MB (~2.5GB) the per process enforceable memory limit.
#BSUB -o R_Job.o%J          # directs the job's standard output to R_Job.o[jobid]


# Load the modules
module load R_tamu/3.3.1-iomkl-2016.07-default-mt

# Launch R with proper parameters 
mpirun -np 1 Rscript myScript.R
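
The contents of myScript.R are not shown on this page. As a sketch, a script that spawns 16 slaves with the Rmpi package (an assumption; Rmpi would need to be available through R_tamu or your personal library) could look like:

# myScript.R -- hypothetical Rmpi example; spawns 16 slaves to match #BSUB -n 16
library(Rmpi)
mpi.spawn.Rslaves(nslaves = 16)               # start 16 slave processes under the MPI master
result <- mpi.remote.exec(mpi.comm.rank())    # run a command on every slave
print(result)                                 # collect and print each slave's rank
mpi.close.Rslaves()                           # shut the slaves down cleanly
mpi.quit()                                    # exit R without saving the workspace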

To submit the batch job, run the following, where jobscript is a file containing one of the examples above:

[NetID@ada1 ~]$ bsub < jobscript

Terra Example

COMING SOON

Usage on the VNC Nodes

The VNC nodes allow for the use of a graphical user interface (GUI) without disrupting other users.

VNC jobs and GUI usage do come with restrictions. All VNC jobs are limited to a single node (Terra: 28 cores/64GB). There are fewer VNC nodes than comparable compute nodes.

For more information, including instructions, on using software on the VNC nodes, please visit our Terra Remote Visualization page.