Running Matlab interactively
Matlab is accessible to all HPRC users within the terms of our license agreement. If you have particular concerns about whether specific usage falls within the TAMU HPRC license, please send an email to the HPRC Helpdesk. You can start a Matlab session either directly on a login node or through our portal.
Running Matlab on a login node
To be able to use Matlab, the Matlab module needs to be loaded first. This can be done using the following command:
[ netID@cluster ~]$ module load Matlab/R2019a
This will set up the environment for Matlab version R2019a. To see a list of all installed versions, use the following command:
[ netID@cluster ~]$ module spider Matlab
Note: New versions of software become available periodically. Version numbers may change.
To start Matlab, use the following command:
[ netID@cluster ~]$ matlab
Depending on your X server settings, this will start either the Matlab GUI or the Matlab command-line interface. To start Matlab in command-line interface mode, use the following command with the appropriate flags:
[ netID@cluster ~]$ matlab -nosplash -nodisplay
By default, Matlab executes a large number of built-in operators and functions multi-threaded and will use as many threads (i.e. cores) as are available on the node. Since login nodes are shared among all users, HPRC restricts the number of computational threads to 8. This should suffice for most cases; the speedup achieved through multi-threading depends on many factors, and in certain cases adding threads yields little or no additional benefit. To explicitly change the number of computational threads, use the following Matlab command:
>>feature('NumThreads',4);
This will set the number of computational threads to 4.
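To verify the current setting, you can call Matlab's built-in maxNumCompThreads function, which should reflect the same limit that feature('NumThreads',...) sets:

>> maxNumCompThreads    % displays the current computational thread limit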
To completely disable multi-threading, use the -singleCompThread option when starting Matlab:
[ netID@cluster ~]$ matlab -singleCompThread
Usage on the Login Nodes
Please limit interactive processing to short, non-intensive usage. Use non-interactive batch jobs for resource-intensive and/or multiple-core processing. Users are requested to be responsible and courteous to other users when using software on the login nodes.
The most important processing limits here are:
- ONE HOUR of PROCESSING TIME per login session.
- EIGHT CORES per login session on the same node or (cumulatively) across all login nodes.
Anyone found violating the processing limits will have their processes killed without warning. Repeated violation of these limits will result in account suspension.
Note: Your login session will disconnect after one hour of inactivity.
Running Matlab through the HPRC portal
HPRC provides a portal through which users can start an interactive Matlab GUI session inside a web browser. For more information on how to use the portal, see our HPRC OnDemand Portal section.
Running Matlab through the batch system
HPRC developed a tool named matlabsubmit to run Matlab simulations on the HPRC compute nodes without the need to create your own batch script and without the need to start a Matlab session. matlabsubmit will automatically generate a batch script with the correct requirements. In addition, matlabsubmit will also generate boilerplate Matlab code to set up the environment (e.g. the number of computational threads) and, if needed, will start a parpool using the correct Cluster Profile (local if all workers fit on a single node, and a cluster profile when workers are distributed over multiple nodes).
To submit your Matlab script, use the following command:
[ netID@cluster ~]$ matlabsubmit myscript.m
In the above example, matlabsubmit will use all default values for runtime, memory requirements, the number of workers, etc. To specify resources, you can use the command-line options of matlabsubmit. For example:
[ netID@cluster ~]$ matlabsubmit -t 07:00 -s 4 myscript.m
will set the wall-time to 7 hours and make sure Matlab uses 4 computational threads for its run (matlabsubmit will also request 4 cores).
To see all options for matlabsubmit, use the -h flag:
[ netID@cluster ~]$ matlabsubmit -h
Usage: /sw/hprc/sw/Matlab/bin/matlabsubmit [options] SCRIPTNAME

This tool automates the process of running Matlab codes on the compute nodes.

OPTIONS:
  -h  Shows this message
  -m  set the amount of requested memory in MEGA bytes (e.g. -m 20000)
  -t  sets the walltime; form hh:mm (e.g. -t 03:27)
  -w  sets the number of ADDITIONAL workers
  -g  indicates script needs GPU (no value needed)
  -b  sets the billing account to use
  -s  set number of threads for multithreading (default: 8; 1 when -w > 0)
  -p  set number of workers per node
  -f  run function call instead of script
  -x  add explicit batch scheduler option

DEFAULT VALUES:
  memory   : 2000 per core
  time     : 02:00
  workers  : 0
  gpu      : no gpu
  threading: on, 8 threads
NOTE: when using the -f flag to execute a function instead of a script, the function call must be enclosed in double quotes when it contains parentheses. For example: matlabsubmit -f "myfunc(21)"
When executing, matlabsubmit will do the following:
- generate boilerplate Matlab code to set up the Matlab environment (e.g. #threads, #workers)
- generate a batch script with all resources set correctly and the command to run Matlab
- submit the generated batch script to the batch scheduler and return control back to the user
For detailed examples on using matlabsubmit see the examples section.
Using Matlab Parallel Toolbox on HPRC Resources
THIS SECTION IS UNDER CONSTRUCTION
In this section, we will focus on utilizing the Parallel Toolbox on the HPRC clusters. For a general introduction to the Parallel Toolbox, see the parallel toolbox section on the Mathworks website. Here we will discuss how to use Matlab Cluster Profiles to distribute workers over multiple nodes.
The central concept in most of the discussion below is the TAMUClusterProperties object, which we will discuss in more detail in the next section.
Cluster Profiles
Matlab Cluster Profiles provide an interface to define how and where Matlab workers are started. There are two kinds of profiles:
- local profiles: parallel processing is limited to the node the Matlab client is running on.
- cluster profiles: parallel processing can span multiple nodes; profile interacts with a batch scheduler (e.g. SLURM on terra).
NOTE: we will not discuss local profiles any further here. Processing using a local profile is exactly the same as processing using cluster profiles.
Importing Cluster Profile
For your convenience, HPRC already created a custom Cluster Profile. Using the profile, you can define how many workers you want, how you want to distribute the workers over the nodes, how many computational threads to use, how long to run, etc. Before you can use this profile, you need to import it first. This can be done by calling the following Matlab function:
>> tamuprofile.importProfile()

or, equivalently, with the older convenience function:

>> tamu_import_TAMU_clusterprofile()
This function imports the cluster profile and creates a directory structure in your scratch directory where Matlab will store meta-information during parallel processing. The default location is /scratch/$USER/MatlabJobs/TAMU<VERSION>, where <VERSION> represents the Matlab version. For example, for Matlab R2019b it will be /scratch/$USER/MatlabJobs/TAMU2019b.
NOTE: the function tamuprofile.clusterprofile is a wrapper around the Matlab function parallel.importProfile.
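To check that the import succeeded, you can list the profiles Matlab currently knows about using the standard profile API (the exact name under which the TAMU profile appears may differ):

>> parallel.clusterProfiles()   % lists all available cluster profiles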
Retrieving fully populated Cluster Profile Object
To return a fully populated cluster object (i.e. with attached resource information), HPRC created the tamu_set_profile_properties convenience function. There are two steps to follow:
- define the properties using the TAMUClusterProperties class
- call tamu_set_profile_properties using the created TAMUClusterProperties object.
For example, suppose you have Matlab code and want to use 4 workers for parallel processing.
>> tp = TAMUClusterProperties;
>> tp.workers(4);
>> clusterObject = tamu_set_profile_properties(tp);
Variable clusterObject is a fully populated cluster object that can be used for parallel processing.
NOTE: convenience function tamu_set_profile_properties is a wrapper around Matlab function parcluster. It also uses HPRC convenience function tamu_import_TAMU_clusterprofile to check if the TAMU profile has been imported already.
Starting a Parallel Pool
To start a parallel pool you can use the HPRC convenience function tamu_parpool. It takes as its argument a TAMUClusterProperties object that specifies all the resources that are requested.
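For example, a minimal sketch (assuming tamu_parpool returns a pool handle analogous to Matlab's parpool; the return value is an assumption, not confirmed on this page):

>> tp = TAMUClusterProperties;
>> tp.workers(4);                % request 4 workers
>> mypool = tamu_parpool(tp);    % start the pool on the requested resources
>> % ... parallel code (parfor/spmd) ...
>> delete(mypool)                % shut down the pool when done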
The parpool function enables the full functionality of the parallel language features (parfor and spmd, discussed below). A parpool creates a special job on a pool of workers and connects the pool to the Matlab client. For example:
mypool = parpool(4);
...
delete(mypool)
This code starts a worker pool using the default cluster profile, with 4 additional workers.
NOTE: only instructions within parfor and spmd blocks are executed on the workers. All other instructions are executed on the client.
NOTE: all variables declared inside the parpool block will be destroyed once the block is finished.
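As a small illustration of work that actually runs on the workers (standard Matlab parfor; nothing here is HPRC-specific):

mypool = parpool(4);      % start a pool with 4 workers
parfor i = 1:100
    a(i) = i^2;           % iterations are distributed over the workers
end
delete(mypool)            % a is available on the client; worker state is gone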
Using GPU
Normally, all variables reside in the client workspace and Matlab operations are executed on the client machine. However, Matlab also provides options to utilize available GPUs to run code faster. Running code on the GPU is actually very straightforward. Matlab provides GPU versions of many built-in operations. These operations are executed on the GPU automatically when the variables involved reside on the GPU, and the results of these operations will also reside on the GPU. To see which functions can be run on the GPU, type:
methods('gpuArray')

This will show a list of all available functions that can be run on the GPU, as well as a list of available static functions to create data on the GPU directly (will be discussed later).
NOTE: there is significant overhead when executing code on the GPU because of memory transfers.
Another useful function is:

gpuDevice

This function shows all the properties of the GPU. When this function is called from the client (or a node without a GPU), it will just print an error message.
To copy variables from the client workspace to the GPU, you can use the gpuArray command. For example:
carr = ones(1000);
garr = gpuArray(carr);
will copy the variable carr to the GPU with the name garr.
In the example above the 1000x1000 matrix needs to be copied from the client workspace to the GPU. There is a significant overhead involved in doing this.
To create the variables directly on the GPU, Matlab provides a number of convenience functions. For example:
garr = gpuArray.ones(1000)
This will create a 1000x1000 matrix directly on the GPU consisting of all ones.
To copy data back to the client workspace Matlab provides the gather operation.
carr2 = gather(garr)
This will copy the array garr on the GPU back to variable carr2 in the client workspace.
The next example performs a matrix multiplication on the client and a matrix multiplication on the GPU, and prints out the elapsed times for both. The core GPU matrix-multiplication code can be written as:
ag = gpuArray.rand(1000);
bg = ag*ag;
c = gather(bg);
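The timing code itself is not shown on this page; a minimal sketch using tic/toc could look like the following (wait(gpuDevice) makes sure the asynchronous GPU computation has finished before the timer stops):

a = rand(1000);                                 % matrix in the client workspace
tic; b = a*a; tcpu = toc;                       % multiply on the client (CPU)
ag = gpuArray.rand(1000);                       % create a matrix directly on the GPU
tic; bg = ag*ag; wait(gpuDevice); tgpu = toc;   % multiply on the GPU
fprintf('CPU: %8.4f s\nGPU: %8.4f s\n', tcpu, tgpu);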
Running (parallel) Matlab Scripts on HPRC compute nodes
NOTE: Due to the new 2-factor authentication mechanism, this method does not work at the moment. We will update this wiki page when this is fixed.
For detailed information on how to submit Matlab scripts remotely, click here.
Submit Matlab Scripts Remotely or Locally From the Matlab Command Line
NOTE: Due to the new 2-factor authentication mechanism, the remote submission method does not work at the moment. We will update this wiki page when this is fixed.
Instead of using the App, you can also call Matlab functions (developed by HPRC) directly to run your Matlab script on HPRC compute nodes. There are two steps involved in submitting your Matlab script:
- Define the properties for your Matlab script (e.g. #workers). HPRC created a class named TAMUClusterProperties for this.
- Submit the Matlab script to run on HPRC compute nodes. HPRC created a function named tamu_run_batch for this.
For example, suppose you have a script named mysimulation.m, want to use 4 workers, and estimate it will need less than 7 hours of computing time:
>> tp = TAMUClusterProperties();
>> tp.workers(4);
>> tp.walltime('07:00');
>> myjob = tamu_run_batch(tp,'mysimulation.m');
NOTE: TAMUClusterProperties will use all default values for any of the properties that have not been set explicitly.
In case you want to submit your Matlab script remotely from your local Matlab GUI, you also have to specify the HPRC cluster name you want to run on and your username. For example, suppose you have a script that uses Matlab GPU functions and you want to run it on terra:
>> tp = TAMUClusterProperties();
>> tp.gpu(1);
>> tp.hostname('terra.tamu.edu');
>> tp.user('<USERNAME>');
>> myjob = tamu_run_batch(tp,'mysimulation.m');
To see all available methods on objects of type TAMUClusterProperties, you can use the Matlab help or doc functions. For example:
>> help TAMUClusterProperties
>> doc TAMUClusterProperties
To see help page for tamu_run_batch, use:
>> help tamu_run_batch
 tamu_run_batch runs a Matlab script on worker(s).

   j = TAMU_RUN_BATCH(tp,'script') runs the script script.m on the
   worker(s) using the TAMUClusterProperties object tp.

   Returns j, a handle to the job object that runs the script.
tamu_run_batch returns a variable of type Job. See the section "Retrieve results and information from Submitted Job" for how to get results and information from the submitted job.
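If the returned Job behaves like a standard Matlab batch job object (an assumption; see the section referenced above for the authoritative workflow), a minimal sketch for collecting the results would be:

wait(myjob);    % block until the job has finished
load(myjob);    % load the script's workspace variables into the client
diary(myjob)    % display the job's command-line output, if desired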