

From TAMU HPRC

LS-DYNA

LS-DYNA is a general-purpose finite element program capable of simulating complex real-world problems. - Homepage: http://www.lstc.com/products/ls-dyna

Documentation

Note: You must be within the TAMU firewall to view these documents; either be on campus or connect using the TAMU VPN. Information on the TAMU VPN can be found here.

Online Resources

Access

LS-DYNA is ONLY available to users within an academic department that has purchased its own license. We host the license on behalf of the licensed department(s).

Loading the Module

To see all versions of LS-DYNA available on Ada:

[NetID@cluster ~]$ module spider LS-DYNA

To load a particular version of LS-DYNA on Ada (Example: R9.1.0):

[NetID@cluster ~]$ module load LS-DYNA/R9.1.0

To show the program names for a particular version of LS-DYNA:

[NetID@cluster ~]$ module help LS-DYNA/R9.1.0

----------------------------------------------------- Module Specific Help for "LS-DYNA/R9.1.0" -----------------------------------------------------
  LS-DYNA is a general-purpose finite element program capable of simulating complex real world problems. - Homepage: http://www.lstc.com/products/ls-dyna/
    
  Sets up environment for LS-DYNA 
  Version                        Precision      Command  
  OpenMP OR Serial               Single         ls-dyna_smp_s 
  OpenMP OR Serial               Double         ls-dyna_smp_d 
  MPP (Intel MPI)                Single         ls-dyna_mpp_s 
  MPP (Intel MPI)                Double         ls-dyna_mpp_d 
  Test inputs for LS-DYNA are available in the following directory: /software/tamusc/LS-DYNA/examples 
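The test-input directory listed in the help text can be copied into your own space before you experiment, so that runs never write into the shared software tree. A minimal sketch (the destination under $HOME is an arbitrary choice; the source path is the one shown in the module help above):

```shell
# Stage a private copy of the bundled LS-DYNA example decks.
# $HOME/lsdyna-examples is an arbitrary destination; adjust to taste.
src=/software/tamusc/LS-DYNA/examples
dest="$HOME/lsdyna-examples"
if [ -d "$src" ]; then          # the source path only exists on the cluster
    mkdir -p "$dest"
    cp -r "$src/." "$dest/"
fi
```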

Usage on the Login Nodes

Please limit interactive processing to short, non-intensive usage. Use non-interactive batch jobs for resource-intensive and/or multi-core processing. Please be responsible and courteous to other users when using software on the login nodes.

The most important processing limits here are:

  • ONE HOUR of PROCESSING TIME per login session.
  • EIGHT CORES per login session on the same node or (cumulatively) across all login nodes.

Anyone found violating the processing limits will have their processes killed without warning. Repeated violation of these limits will result in account suspension.
Note: Your login session will disconnect after one hour of inactivity.

Usage on the Compute Nodes

Non-interactive batch jobs on the compute nodes allow for resource-demanding processing, with higher limits on the number of cores, amount of memory, and runtime length.

For instructions on how to create and submit a batch job, please see the appropriate wiki page for your cluster.

Ada Example

A double precision SMP example with 20 cores on a single node:

#BSUB -J LSDYNAJob1        # sets the job name to LSDYNAJob1
#BSUB -L /bin/bash         # uses the bash login shell to initialize the job's execution environment
#BSUB -W 48:00             # sets to 48 hours the job's runtime wall-clock limit
#BSUB -n 20                # assigns 20 cores for execution.
#BSUB -R "span[ptile=20]"  # assigns 20 cores per node.
#BSUB -R "rusage[mem=1500]"  # reserves 1500MB per process/CPU for the job
#BSUB -M 1500              # sets to 1500MB (~1.5GB) the per process enforceable memory limit.
#BSUB -o stdout1.%J        # directs the job's standard output to stdout1.jobid


## Load the necessary modules
module load LS-DYNA/R9.1.0

## Run LS-DYNA with the proper parameters.  
## The number of threads MUST be specified using both the OMP_NUM_THREADS variable and the ncpu option.
export OMP_NUM_THREADS=20
ls-dyna_smp_d ncpu=20 I=InputFile O=OutputFile MEMORY=300m D=d3dump

A double precision MPP example with 40 cores across two nodes:

#BSUB -J LSDYNAJob1        # sets the job name to LSDYNAJob1
#BSUB -L /bin/bash         # uses the bash login shell to initialize the job's execution environment
#BSUB -W 48:00             # sets to 48 hours the job's runtime wall-clock limit
#BSUB -n 40                # assigns 40 cores for execution.
#BSUB -R "span[ptile=20]"  # assigns 20 cores per node.
#BSUB -R "rusage[mem=2500]"  # reserves 2500MB per process/CPU for the job
#BSUB -M 2500              # sets to 2500MB (~2.5GB) the per process enforceable memory limit.
#BSUB -o stdout1.%J        # directs the job's standard output to stdout1.jobid


## Load the necessary modules
module load LS-DYNA/R9.1.0

## Run LS-DYNA with the proper parameters
mpirun ls-dyna_mpp_d I=InputFile O=OutputFile MEMORY=300m D=d3dump


To submit the batch job, run:

[NetID@ada1 ~]$ bsub < jobscript

Terra Example

A double precision SMP example using 28 cores on a single node:

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE          #Do not propagate environment
#SBATCH --get-user-env=L       #Replicate login environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=lsdyna      #Set the job name to "lsdyna"
#SBATCH --time=01:30:00        #Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks-per-node=28   #Request 28 tasks per node
#SBATCH --ntasks=28            #Request 28 tasks/cores (total)
#SBATCH --mem=50000M           #Request 50GB per node
#SBATCH --output=stdout.%j     #Send stdout/err to "stdout.[jobID]"

module load LS-DYNA/R9.1.0

## The number of threads MUST be specified using both the OMP_NUM_THREADS variable and the ncpu option.
export OMP_NUM_THREADS=28
ls-dyna_smp_d i=main.k ncpu=28 memory=500m memory2=200m

A double precision MPP example using 56 cores across two nodes:

#!/bin/bash
##ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE          #Do not propagate environment
#SBATCH --get-user-env=L       #Replicate login environment

##NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=lsdyna      #Set the job name to "lsdyna"
#SBATCH --time=01:30:00        #Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks-per-node=28   #Request 28 tasks per node
#SBATCH --ntasks=56            #Request 56 tasks/cores (total)
#SBATCH --mem=50000M           #Request 50GB per node
#SBATCH --output=stdout.%j     #Send stdout/err to "stdout.[jobID]"

module load LS-DYNA/R9.1.0

# launch MPI program using the hydra launcher
mpirun -n 56  ls-dyna_mpp_d i=main.k memory=500m memory2=200m
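Terra uses Slurm rather than LSF, so the scripts above are submitted with sbatch instead of bsub ("jobscript" stands for whatever filename you saved the script under):

```
[NetID@terra1 ~]$ sbatch jobscript
```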

Memory Specification

The LS-DYNA command line option MEMORY specifies memory per node, with a base unit of words. The argument can be given in words or in megawords (denoted by a trailing m). For single-precision LS-DYNA, a word is 4 bytes and a megaword is 4 MB; for double precision, a word is 8 bytes and a megaword is 8 MB.

Example: (Single Precision)

MEMORY=300m     #300 megawords = 300 megawords * 4 MB = 1200 MB (~1.2GB)
MEMORY=600      #600 words = 600 words * 4 B = 2400 B (~2.4 KB)

Example: (Double Precision)

MEMORY=300m     #300 megawords = 300 megawords * 8 MB = 2400 MB (~2.4GB)
MEMORY=600      #600 words = 600 words * 8 B = 4800 B (~4.8 KB)
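These conversions are plain arithmetic and can be sanity-checked in the shell (word sizes as defined above: 4 bytes single precision, 8 bytes double precision):

```shell
# MEMORY=300m: megawords -> megabytes for each precision
megawords=300
echo "single: $(( megawords * 4 )) MB"   # 1200 MB
echo "double: $(( megawords * 8 )) MB"   # 2400 MB

# MEMORY=600: plain words -> bytes
words=600
echo "single: $(( words * 4 )) B"        # 2400 B
echo "double: $(( words * 8 )) B"        # 4800 B
```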

Checking Input Before Running

If no licenses are currently available, or if you would rather not check out a license, you can initialize your model and check it for errors without consuming any licenses. This is done with the mcheck command option.

Example:

#BSUB -J LSDYNAJob1        # sets the job name to LSDYNAJob1
#BSUB -L /bin/bash         # uses the bash login shell to initialize the job's execution environment
#BSUB -W 48:00             # sets to 48 hours the job's runtime wall-clock limit
#BSUB -n 40                # assigns 40 cores for execution.
#BSUB -R "span[ptile=20]"  # assigns 20 cores per node.
#BSUB -R "rusage[mem=2500]"  # reserves 2500MB per process/CPU for the job
#BSUB -M 2500              # sets to 2500MB (~2.5GB) the per process enforceable memory limit.
#BSUB -o stdout1.%J        # directs the job's standard output to stdout1.jobid


## Load the necessary modules
module load LS-DYNA/R9.1.0

## Run LS-DYNA with the proper parameters
mpirun ls-dyna_mpp_d I=InputFile O=OutputFile MEMORY=300m mcheck=1

Related Software

The following software is also available on our clusters for use with LS-DYNA:

  • LS-OPT: A standalone Design Optimization and Probabilistic Analysis package with an interface to LS-DYNA.
  • LS-PREPOST: An advanced pre- and post-processor for LS-DYNA.
  • LS-TaSC: A Topology and Shape Computation tool. Developed for engineering analysts who need to optimize structures, LS-TaSC works with both the implicit and explicit solvers of LS-DYNA. LS-TaSC handles topology optimization of large non-linear problems involving dynamic loads and contact conditions.

More Information

To find more information on LS-DYNA, please consult the LS-DYNA manuals located at http://www.dynasupport.com/manuals.