SW:Comsol



Description

COMSOL Multiphysics is a cross-platform finite element analysis, solver, and multiphysics simulation package. It offers conventional physics-based user interfaces and supports coupled systems of partial differential equations (PDEs). COMSOL provides an IDE and unified workflow for electrical, mechanical, fluid, and chemical applications. An API for Java and LiveLink for MATLAB can be used to control the software externally, and the same API is also used via the Method Editor.

Once a model is built in the Comsol GUI, the next step is to compute the model for a solution, which is often time-consuming. A job script must be created to run the model in batch mode so that you can control wall time, memory, and other cluster resources for your simulation. This tutorial illustrates how to create Comsol LSF batch scripts on Ada.

All solvers in Comsol can run in parallel in one of three modes: shared memory mode, distributed mode, or hybrid mode. By default, a Comsol solver runs in shared memory mode. This is similar to OpenMP, where the parallelism is limited by the total number of CPU cores available on a single compute node in the cluster.

Access

Comsol is restricted software; access is limited to users and groups who have a license. You can choose one of the following to get access:

  1. Purchase your own license. (If you choose this route, you can either ask us to host the license server or host the server yourself.)
  2. Ask for permission to use the license server maintained by the School of Engineering.


The contact person is:

Mitch Wittneben
979-845-5235
mwittneben@tamu.edu

A Complete Example

Example 1: Solving a model in shared memory mode using 20 cores on one cluster node.

 #BSUB -J comsoltest                   # job name
 #BSUB -n 20 -R "span[ptile=20]"       # 20 cores, all on one node
 #BSUB -M 2800 -R "rusage[mem=2800]"   # per-core memory limit in MB
 #BSUB -o output.%J                    # output file (%J expands to the job ID)
 #BSUB -L /bin/bash                    # login shell for the job
 #BSUB -W 2:00                         # wall time limit (hh:mm)
 
 module load Comsol/xxx
 export LM_LICENSE_FILE=port@license-server   # replace with your license server
 comsol -np 20 batch -inputfile in.mph -outputfile out.mph

Note: xxx represents a Comsol version; replace it with the version you need.
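
Assuming the script above is saved under the illustrative name comsol_job.lsf, submit it and monitor it with the standard LSF commands:

 bsub < comsol_job.lsf   # submit the job script
 bjobs                   # check the status of your jobs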

Running Comsol in Different Parallel Modes

Assuming everything else is the same as in Example 1, the following examples show how to run in different parallel modes by changing the number of cores and the Comsol command-line parameters.

Shared Memory Mode

Example 2: Solving a model in shared memory mode using 10 cores on one cluster node. This is similar to Example 1.

 #BSUB -n 10 -R "span[ptile=10]"
 comsol -np 10 batch -inputfile input.mph -outputfile output.mph

Distributed Mode

Comsol solvers can also run in distributed mode by checking the "distributed computing" checkbox of the solver when building the model. In this mode, the solver runs on multiple nodes and uses MPI for communication. All solvers except PARDISO support distributed mode. However, PARDISO also has a checkbox for distributed computing; if it is selected, the solver actually used is MUMPS.

Example 3: Solving a model in distributed mode on two cluster nodes with a total of 40 cores.

 #BSUB -n 40 -R "span[ptile=20]"
 comsol -simplecluster -inputfile input.mph -outputfile output.mph

This is the same as:

 #BSUB -n 40 -R "span[ptile=20]"
 cat $LSB_DJOB_HOSTFILE > hostfile.$LSB_JOBID   # one line per allocated core
 comsol -f ./hostfile.$LSB_JOBID -nn 40 batch -inputfile input.mph -outputfile output.mph

Hybrid Mode

Each mode has its pros and cons. Shared memory mode utilizes CPU cores better than distributed mode but can only run on one cluster node, while distributed mode can use more than one physical cluster node. It is usually best to run a solver in a way that takes advantage of both modes. This can be done easily at the command line by fine-tuning the options -nn, -nnhost, and -np.
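
As a quick reference (a sketch based on the examples below, with placeholder values in angle brackets), the three options determine the job geometry as follows:

 # -nn     : total number of MPI processes ("compute nodes" in Comsol terms)
 # -nnhost : MPI processes per physical node
 # -np     : threads (cores) per MPI process
 # cores used = nn x np;  physical nodes needed = nn / nnhost
 comsol batch -f ./hostfile -nn <nn> -nnhost <nnhost> -np <np> -inputfile input.mph -outputfile output.mph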

Example 4: Solving a model in hybrid mode on 2 cluster nodes with 40 cores. In this example, Comsol will spawn 2 MPI tasks in total (one on each cluster node). Each MPI task will run with 20 threads on 20 cores.

 #BSUB -n 40 -R "span[ptile=20]"
 cat $LSB_DJOB_HOSTFILE | uniq > hostfile.$LSB_JOBID   # uniq keeps one line per node
 comsol batch -f ./hostfile.$LSB_JOBID -nn 2 -nnhost 1 -np 20 -inputfile input.mph -outputfile output.mph

Example 5: Solving a model in hybrid mode on 2 cluster nodes with 40 cores. In this example, Comsol will spawn 4 MPI tasks in total (two on each cluster node). Each MPI task will run with 10 threads on 10 cores.

 #BSUB -n 40 -R "span[ptile=20]"
 cat $LSB_DJOB_HOSTFILE | uniq > hostfile.$LSB_JOBID
 comsol batch -f ./hostfile.$LSB_JOBID -nn 4 -nnhost 2 -np 10 -inputfile input.mph -outputfile output.mph

Parametric Sweep

Comsol models configured with a parametric sweep can also benefit from parallel computing in different ways. A model configured with a parametric sweep needs to run under a range of parameters or combinations of parameters, and each set of parameters can be computed independently. Once a model with a parametric sweep node is created in the Comsol GUI, it must also be configured with a cluster sweep node so that the parameter sets are distributed and processed in parallel.

Example 6: Run a parametric sweep model on 40 cores. In this example, 10 combinations of parameters will run concurrently on two cluster nodes, with 5 combinations on each node. Each combination will run with 4 threads on 4 cores.

 #BSUB -n 40 -R "span[ptile=20]"
 cat $LSB_DJOB_HOSTFILE | uniq > hostfile.$LSB_JOBID
 comsol batch -f ./hostfile.$LSB_JOBID -nn 10 -nnhost 5 -np 4 -inputfile input.mph -outputfile output.mph

If each combination of parameters requires a large amount of memory to solve, we can assign one combination per node so that the entire memory of the node is available for solving that combination.

Example 7: Run a parametric sweep model (with 10 parameter combinations) on 200 cores (10 nodes at 20 cores each), with each parameter combination taking an entire cluster node.

 #BSUB -n 200 -R "span[ptile=20]"
 cat $LSB_DJOB_HOSTFILE | uniq > hostfile.$LSB_JOBID
 comsol batch -f ./hostfile.$LSB_JOBID -nn 10 -nnhost 1 -np 20 -inputfile input.mph -outputfile output.mph

Common Problems

1. Disk quota exceeded in home directory

By default, Comsol stores all temporary files in your home directory. For large models, you are likely to get a "Disk quota exceeded" error due to the huge amount of temporary files dumped into your home directory. To resolve this issue, redirect the temporary files to your scratch directory:

  comsol -tmpdir /scratch/user/username/comsol/tmp -recoverydir /scratch/user/username/comsol/recovery -np 20 batch -inputfile input.mph -outputfile output.mph
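
Note that the target directories must exist before the job runs. A minimal way to create them, using the standard $USER environment variable in place of your username:

  mkdir -p /scratch/user/$USER/comsol/tmp /scratch/user/$USER/comsol/recovery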

2. Out of Memory

If you receive an "Out of Memory" error, the Java heap size is most likely inadequate. The default Java heap size for Comsol is 2G. You can change it by following these four steps:

  • 1) Load the Comsol module:

 ml Comsol/version

  • 2) Copy the Comsol setup file to your home directory.

If you are running a one-core Comsol job, copy comsolbatch.ini:

 cp $EBROOTCOMSOL/bin/glnxa64/comsolbatch.ini $HOME/comsol.ini

If you are running a cluster Comsol job, copy comsolclusterbatch.ini:

 cp $EBROOTCOMSOL/bin/glnxa64/comsolclusterbatch.ini $HOME/comsol.ini

  • 3) Edit the local setup file and increase Xmx (here we increase the heap size to 8G):

 sed -i "s/-Xmx.*/-Xmx8192m/" $HOME/comsol.ini

  • 4) Add '-comsolinifile $HOME/comsol.ini' to the command line (... represents the other command line options):

 comsol -comsolinifile $HOME/comsol.ini ...
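
Putting the four steps together for a one-node batch job (the module version, core count, and file names are illustrative):

 ml Comsol/version
 cp $EBROOTCOMSOL/bin/glnxa64/comsolbatch.ini $HOME/comsol.ini
 sed -i "s/-Xmx.*/-Xmx8192m/" $HOME/comsol.ini
 comsol -comsolinifile $HOME/comsol.ini -np 20 batch -inputfile input.mph -outputfile output.mph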