Matlab
Matlab is accessible to all HPRC users within the terms of our license agreement. If you have particular concerns about whether specific usage falls within the TAMU HPRC license, please send an email to the HPRC Helpdesk. You can start a Matlab session directly on a login node, through our portal, or through the batch system.
NOTE: For ACES users
Although Matlab falls under restricted software on ACES, most academic researchers are allowed to access it. Please contact us at help@hprc.tamu.edu or create a help request from the ACES dashboard to request access.
Running Matlab on a login node
To be able to use Matlab, the Matlab module needs to be loaded first. This can be done using the following command:
[ NetID@cluster ~]$
module load Matlab/R2020b
This will set up the environment for Matlab version R2020b. To see a list of all installed versions, use the following command:
[ NetID@cluster ~]$
module spider Matlab
Note: New versions of software become available periodically. Version numbers may change.
To start Matlab, use the following command:
[ NetID@cluster ~]$
matlab
Depending on your X server settings, this will start either the Matlab GUI or the Matlab command-line interface. To force Matlab to start in command-line mode, use the following flags:
[ NetID@cluster ~]$
matlab -nosplash -nodisplay
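The command-line interface is also the way to run a script non-interactively. A minimal sketch, assuming a hypothetical script mysim.m on your Matlab path; the -batch flag (available since R2019a) implies -nosplash and -nodisplay and exits Matlab when the script finishes:
[ NetID@cluster ~]$
matlab -batch "mysim"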
By default, Matlab executes many built-in operators and functions multi-threaded and will use as many threads (i.e. cores) as are available on the node. Since login nodes are shared among all users, HPRC restricts the number of computational threads to 8. This should suffice for most cases. Note that the speedup achieved through multi-threading depends on many factors, and in certain cases multi-threading may provide little or no benefit. To explicitly change the number of computational threads, use the following Matlab command:
>> feature('NumThreads',4);
This will set the number of computational threads to 4.
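Alternatively, you can use Matlab's documented maxNumCompThreads function to query or set the limit; a short sketch:
>> n = maxNumCompThreads     % query the current maximum number of computational threads
>> maxNumCompThreads(4);     % set the maximum to 4 threads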
To completely disable multi-threading, use the -singleCompThread option when starting Matlab:
[ NetID@cluster ~]$
matlab -singleCompThread
Usage on the Login Nodes
Please limit interactive processing to short, non-intensive usage. Use non-interactive batch jobs for resource-intensive and/or multiple-core processing. Users are requested to be responsible and courteous to other users when using software on the login nodes.
The most important processing limits here are:
- ONE HOUR of PROCESSING TIME per login session.
- EIGHT CORES per login session on the same node or (cumulatively) across all login nodes.
Anyone found violating the processing limits will have their processes killed without warning. Repeated violation of these limits will result in account suspension. Note: Your login session will disconnect after one hour of inactivity.
Using Drona Composer to run Matlab jobs
Drona Composer provides a 100% graphical interface to create and submit Matlab jobs without the need to write a Slurm script or even be aware of Slurm syntax. It guides you in providing the relevant information to generate and submit your Matlab job.
Accessing Drona Composer
Drona is available on all HPRC Portals. Once you log in to the portal of your choice, select Drona Composer from the Jobs tab. This will open a new window showing the Drona Composer interface.
Drona Environments
You can find the matlab environment in the environments dropdown. The image below shows a screenshot of the Drona Composer interface with the dropdown menu of all available environments. NOTE: if you don't see the matlab environment, you need to import it first; you only need to do this once. See the import section for more information.
Here, you will select the matlab environment. Once you select it, the form will expand with several additional fields to guide you in providing all the relevant information. The screenshot below shows the extra fields.
Hover over the little question mark next to any input field to see further information and help.
Once you have filled in all the fields, click the "preview" button. This will show the fully editable preview screen with the generated job script based on the provided input. You are welcome to inspect the generated files and make edits.
To submit the job, click on the submit button, and Drona Composer will submit the generated job on your behalf.
For detailed information about Drona Composer, check out the Drona Composer Guide.
Running Matlab through the HPRC Portal
HPRC provides a portal through which users can start an interactive Matlab GUI session inside a web browser. For more information on how to use the portal, see our HPRC OnDemand Portal section.
Running Matlab through the batch system
To run Matlab simulations on the compute nodes, HPRC recommends matlabsubmit.
For detailed information on using matlabsubmit, see the matlabsubmit section.
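For a first impression, a minimal sketch of a typical invocation, assuming a hypothetical script mysim.m in the current directory (see the matlabsubmit section for the actual flags and defaults):
[ NetID@cluster ~]$
matlabsubmit mysim.m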
Using Matlab Parallel Toolbox on HPRC Resources
In this section, we will focus on utilizing the Parallel Toolbox on the HPRC clusters. For a general introduction to the Parallel Toolbox, see the parallel toolbox section on the Mathworks website. Here we will discuss how to use Matlab cluster profiles to distribute workers over multiple nodes.
Cluster Profiles
Matlab uses the concept of Cluster Profiles to create parallel pools. When Matlab creates a parallel pool, it uses the cluster profile to determine how many workers to use, how many threads every worker can use, where to store meta-data, and various other settings. There are two kinds of profiles:
- local profiles: parallel processing is limited to the node the Matlab client is running on.
- cluster profiles: parallel processing can span multiple nodes; the profile interacts with a batch scheduler (e.g. Slurm on Grace).
NOTE: we will not discuss local profiles any further here. Processing using a local profile is exactly the same as processing using cluster profiles.
TAMU HPRC provides a framework to easily manage and update cluster profiles. The central concept in most of the discussion below is the TAMUClusterProperties object, which keeps track of all the properties needed to successfully create a parallel pool. That includes typical Matlab properties, such as the number of Matlab workers requested, as well as batch scheduler properties such as wall-time and memory.
Importing Cluster Profile
For your convenience, HPRC has already created a custom cluster profile. Using this profile, you can define how many workers you want, how to distribute the workers over the nodes, how many computational threads to use, how long to run, etc. Before you can use this profile, you need to import it first. This can be done by calling the following Matlab function:
>> tamuprofile.importProfile()
This function imports the cluster profile and creates a directory structure in your scratch directory where Matlab will store meta-information during parallel processing. The default location is /scratch/$USER/MatlabJobs/TAMU<VERSION>, where <VERSION> is the Matlab version you are using (e.g. /scratch/$USER/MatlabJobs/TAMU2021b for Matlab R2021b).
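To verify the import succeeded, you can list the cluster profiles Matlab knows about; a short sketch using the documented parallel.clusterProfiles function:
>> parallel.clusterProfiles()   % the imported TAMU profile should appear in this list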
Getting the Cluster Profile Object
To get a TAMUClusterProperties object you can do the following:
>> tp=TAMUClusterProperties;
tp is an object of type TAMUClusterProperties with default values for all the properties. To see all the properties, you can just print the value of tp. You can easily change the values using the convenience methods of TAMUClusterProperties.
For example, suppose you have Matlab code and want to use 4 workers for parallel processing:
>> tp=TAMUClusterProperties;
>> tp.workers(4);
Creating a Parallel Pool
To start a parallel pool, you can use the HPRC convenience function tamuprofile.parpool. It takes as argument a TAMUClusterProperties object that specifies all the resources being requested.
For example:
mypool = tamuprofile.parpool(tp);
% ... code that uses the parallel pool ...
delete(mypool)
This code starts a worker pool with the 4 workers requested in the TAMUClusterProperties object tp.
NOTE: only instructions within parfor and spmd blocks are executed on the workers. All other instructions are executed on the client.
NOTE: all variables declared inside the parpool block will be destroyed once the block is finished.
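To illustrate, here is a minimal sketch of a complete parallel run, assuming the TAMU cluster profile has been imported as described above; only the parfor iterations execute on the workers:
% request 4 workers and start a pool
tp = TAMUClusterProperties;
tp.workers(4);
mypool = tamuprofile.parpool(tp);

% the parfor iterations are distributed over the workers
n = 100;
results = zeros(1,n);
parfor i = 1:n
    results(i) = sum(sin(i*(1:1000)));  % stand-in for real work
end

% shut down the pool when done
delete(mypool)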
Alternative approach to creating a parallel pool
Matlab itself already provides functions to create parallel pools, namely parcluster and parpool. For example, using the imported TAMU profile:
cp = parcluster('TAMU2021b');
% set the number of workers manually via the second argument to parpool
mypool = parpool(cp, 4);
% ... code that uses the parallel pool ...
delete(mypool)
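Alternatively, you can set the worker count on the cluster object itself before creating the pool; a short sketch, assuming NumWorkers is settable on this profile (as it is for standard generic cluster profiles):
cp = parcluster('TAMU2021b');
cp.NumWorkers = 4;      % assumption: NumWorkers can be set directly on this profile
mypool = parpool(cp);
delete(mypool)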
TAMU HPRC also provides a convenience function, tamuprofile.parcluster, that returns a fully populated parcluster object which can be passed to the Matlab parpool function. See below for an example that creates a pool with 4 workers:
tp = TAMUClusterProperties();
tp.workers(4);
cp = tamuprofile.parcluster(tp);   % populate the cluster object from tp
mypool = parpool(cp);
% ... code that uses the parallel pool ...
delete(mypool)