
Amber

Description

Amber is a suite of biomolecular simulation programs.

Homepage: http://ambermd.org/

Documentation: http://ambermd.org/Manuals.php

Access

Amber versions prior to Amber24 are licensed software. Please send an e-mail to help@hprc.tamu.edu with an access request.

Amber24 is free for academic, non-profit, and government use. Access on HPRC clusters is granted to users who have registered to download Amber24:

  1. Please visit this page and read the AMBER24 Software License.
  2. Input your name and institution on the page, and click the 'Accept non-commercial license and download' button to register.
  3. After registering, send an email to help@hprc.tamu.edu stating that you have registered for Amber24 and agreed to the terms and conditions, so that access can be granted on the HPRC clusters.

Loading the Module

To find what Amber versions are available, use module spider:

module spider Amber

To learn how to load a specific module version, use module spider:

module spider Amber/22-Python-3.9.6

You will need to load all module(s) on any one of the lines below before the "Amber/22-Python-3.9.6" module is available to load.

GCC/11.2.0 OpenMPI/4.1.1

Read more about toolchains.

Finally, to load Amber:

module load GCC/11.2.0 OpenMPI/4.1.1 Amber/22-Python-3.9.6
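
After loading, you can quickly confirm that the environment is set up as expected. A minimal check (module list and which are standard commands; the exact output depends on the cluster):

module list                      # show the currently loaded modules
which pmemd.MPI                  # confirm the Amber MPI engine is on your PATH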

Similarly, to learn how to load Amber24, use module spider:

module spider Amber/24-NCCL-2.20.5-CUDA-12.4.1

You will need to load all module(s) on any one of the lines below before the "Amber/24-NCCL-2.20.5-CUDA-12.4.1" module is available to load.

GCC/13.2.0 OpenMPI/4.1.6

Read more about toolchains.

Finally, to load Amber:

module load GCC/13.2.0 OpenMPI/4.1.6 Amber/24-NCCL-2.20.5-CUDA-12.4.1
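
Once Amber24 is loaded, the Amber programs (pmemd, sander, cpptraj, tleap, and so on) should be on your PATH. As a quick sketch, assuming the module sets the standard AMBERHOME environment variable, you can list the installed executables:

echo $AMBERHOME                  # root of the Amber installation (if the module sets it)
ls $AMBERHOME/bin                # Amber programs provided by this build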

Running Amber

Grace MPI Example for Amber22

#!/bin/bash

## NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=amber         # Set the job name to "amber"
#SBATCH --time=01:30:00          # Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks=48              # Request 48 tasks; in Grace max is 48
#SBATCH --ntasks-per-node=48     # Request 48 tasks per node
#SBATCH --mem=360G               # Request 360GB (up to 360GB in Grace) per node
#SBATCH --output=amber.%j        # Send stdout/err to "amber.[jobID]"

module purge                     # purge all modules
module load GCC/11.2.0 OpenMPI/4.1.1 Amber/22-Python-3.9.6

# See the Amber documentation (https://ambermd.org/Manuals.php) for a full list of options for pmemd.MPI and the other programs available in the Amber suite.
mpirun -np $SLURM_NPROCS pmemd.MPI -O -i amber.in -o amber.out -p amber.prmtop -c amber.inpcrd

exit
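
To run this example, save the script to a file (the name amber_mpi.slurm below is only an illustration) and submit it with sbatch; squeue shows the job's state while it is pending or running:

sbatch amber_mpi.slurm           # submit the job script above
squeue -u $USER                  # check the state of your jobs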

FASTER MPI Example for Amber24

#!/bin/bash

## NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=amber         # Set the job name to "amber"
#SBATCH --time=01:30:00          # Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks=64              # Request 64 tasks; in FASTER max is 64
#SBATCH --ntasks-per-node=64     # Request 64 tasks per node
#SBATCH --mem=240G               # Request 240GB (up to 240GB in FASTER) per node
#SBATCH --output=amber.%j        # Send stdout/err to "amber.[jobID]"

module purge                     # purge all modules
module load GCC/13.2.0 OpenMPI/4.1.6 Amber/24-NCCL-2.20.5-CUDA-12.4.1

# See the Amber documentation (https://ambermd.org/Manuals.php) for a full list of options for pmemd.MPI and the other programs available in the Amber suite.
mpirun -np $SLURM_NPROCS pmemd.MPI -O -i amber.in -o amber.out -p amber.prmtop -c amber.inpcrd

exit
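
The scripts above expect amber.in, amber.prmtop, and amber.inpcrd to exist in the submission directory. As a rough, illustrative sketch only (not an HPRC-provided input), a minimal pmemd input file for a short constant-volume MD run could look like the following; see the Amber manual for the meaning of each &cntrl flag:

Short MD run (illustrative amber.in)
 &cntrl
  imin=0, irest=0, ntx=1,        ! new MD run from initial coordinates
  nstlim=10000, dt=0.002,        ! 10,000 steps of 2 fs
  ntc=2, ntf=2, cut=9.0,         ! SHAKE on bonds to H, 9 Angstrom cutoff
  ntb=1, ntt=3, gamma_ln=2.0,    ! constant volume, Langevin thermostat
  tempi=300.0, temp0=300.0,
  ntpr=500, ntwx=500,            ! energy and trajectory output frequency
 /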

ACES MPI Example for Amber24

#!/bin/bash

## NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=amber         # Set the job name to "amber"
#SBATCH --time=01:30:00          # Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks=96              # Request 96 tasks; in ACES max is 96
#SBATCH --ntasks-per-node=96     # Request 96 tasks per node
#SBATCH --mem=480G               # Request 480GB (up to 480GB in ACES) per node
#SBATCH --output=amber.%j        # Send stdout/err to "amber.[jobID]"

module purge                     # purge all modules
module load GCC/13.2.0 OpenMPI/4.1.6 Amber/24-NCCL-2.20.5-CUDA-12.4.1

# See the Amber documentation (https://ambermd.org/Manuals.php) for a full list of options for pmemd.MPI and the other programs available in the Amber suite.
mpirun -np $SLURM_NPROCS pmemd.MPI -O -i amber.in -o amber.out -p amber.prmtop -c amber.inpcrd

exit
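
On any of the clusters, pmemd.MPI writes its results to the file given with -o (amber.out here), while Slurm's stdout/stderr goes to amber.[jobID]. A quick way to check on a running or finished job:

tail -n 20 amber.out             # most recent lines of the Amber output file
ls -lh amber.*                   # input, output, and Slurm log files in the run directory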

GPU-Accelerated Simulations

Although Amber can spread a single calculation across multiple GPUs, you are unlikely to see a significant speedup beyond one GPU, so running Amber on a single GPU is recommended. For more information about single- and multi-GPU simulations, please refer to Amber's webpage.
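
If you do want to try a multi-GPU run, the CUDA builds of Amber typically also provide an MPI-enabled GPU engine, pmemd.cuda.MPI, launched with one MPI task per GPU. A minimal sketch, assuming a node with two GPUs allocated (for example via --gres=gpu:2) and one of the CUDA-enabled Amber modules below loaded:

# one MPI task per GPU; benchmark against a single-GPU run before scaling up
mpirun -np 2 pmemd.cuda.MPI -O -i amber.in -o amber.out -p amber.prmtop -c amber.inpcrd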

Grace GPU Example for Amber22

#!/bin/bash

## NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=amber         # Set the job name to "amber"
#SBATCH --time=01:30:00          # Set the wall clock limit to 1hr and 30min
#SBATCH --nodes=1                # Request 1 node
#SBATCH --ntasks=1               # Request 1 task
#SBATCH --ntasks-per-node=1      # Request 1 task per node
#SBATCH --mem=8G                 # Request 8GB for the task
#SBATCH --partition=gpu          # Request gpu partition
#SBATCH --gres=gpu:rtx:1         # Request gpu type; in Grace it can be t4/rtx/a40/a100
#SBATCH --output=amber.%j        # Send stdout/err to "amber.[jobID]"

module purge                     # purge all modules
module load GCC/11.2.0 OpenMPI/4.1.1 Amber/22-CUDA-11.5-Python-3.9.6

# See the Amber documentation (https://ambermd.org/Manuals.php) for a full list of options for pmemd.cuda and the other programs available in the Amber suite.
pmemd.cuda -O -i amber.in -o amber.out -p amber.prmtop -c amber.inpcrd

exit
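
pmemd.cuda runs on the single GPU that Slurm assigns to the job (exposed through CUDA_VISIBLE_DEVICES). If you would like the job log to record which device was used, a line such as the following can be added before the pmemd.cuda command:

nvidia-smi --query-gpu=name,memory.total --format=csv,noheader   # record the allocated GPU in the job log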

FASTER GPU Example for Amber24

#!/bin/bash

## NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=amber         # Set the job name to "amber"
#SBATCH --time=01:30:00          # Set the wall clock limit to 1hr and 30min
#SBATCH --nodes=1                # Request 1 node
#SBATCH --ntasks=1               # Request 1 task
#SBATCH --ntasks-per-node=1      # Request 1 task per node
#SBATCH --mem=4G                 # Request 4GB for the task
#SBATCH --partition=gpu          # Request gpu partition
#SBATCH --gres=gpu:a100:1        # Request gpu type; in FASTER it can be t4/a10/a30/a40/a100
#SBATCH --output=amber.%j        # Send stdout/err to "amber.[jobID]"

module purge                     # purge all modules
module load GCC/13.2.0 OpenMPI/4.1.6 Amber/24-NCCL-2.20.5-CUDA-12.4.1

# See the Amber documentation (https://ambermd.org/Manuals.php) for a full list of options for pmemd.cuda and the other programs available in the Amber suite.
pmemd.cuda -O -i amber.in -o amber.out -p amber.prmtop -c amber.inpcrd

exit
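
During a run, pmemd also updates a small mdinfo file in the working directory with recent energies and timing estimates, which can be a convenient way to compare performance (for example, across GPU types):

cat mdinfo                       # recent step, energies, and timing estimate for the current run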

ACES GPU Example for Amber24

#!/bin/bash

## NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=amber         # Set the job name to "amber"
#SBATCH --time=01:30:00          # Set the wall clock limit to 1hr and 30min
#SBATCH --nodes=1                # Request 1 node
#SBATCH --ntasks=1               # Request 1 task
#SBATCH --ntasks-per-node=1      # Request 1 task per node
#SBATCH --mem=5G                 # Request 5GB for the task
#SBATCH --partition=gpu          # Request gpu partition
#SBATCH --gres=gpu:h100:1        # Request gpu type; in ACES it can be a30/h100
#SBATCH --output=amber.%j        # Send stdout/err to "amber.[jobID]"

module purge                     # purge all modules
module load GCC/13.2.0 OpenMPI/4.1.6 Amber/24-NCCL-2.20.5-CUDA-12.4.1

# See the Amber documentation (https://ambermd.org/Manuals.php) for a full list of options for pmemd.cuda and the other programs available in the Amber suite.
pmemd.cuda -O -i amber.in -o amber.out -p amber.prmtop -c amber.inpcrd

exit
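
As with the CPU examples, submit the GPU script with sbatch. scontrol can then confirm that the requested GPU was actually allocated (the job ID 123456 below is only a placeholder):

sbatch amber_gpu.slurm                   # submit the GPU job script above (illustrative file name)
scontrol show job 123456 | grep -i tres  # verify the allocated resources, including the GPU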