Amber

Description

Amber is a suite of biomolecular simulation programs. Homepage: http://ambermd.org/

Documentation: http://ambermd.org/Manuals.php

Access

Amber is only accessible to subscribers of the Laboratory for Molecular Simulation. Please visit https://lms.hprc.tamu.edu/ for subscription rates.

Loading the Module

To see all versions of Amber available:

[ netID@cluster ~]$ module spider amber
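
To see how a specific version can be loaded, including any modules that must be loaded first, run module spider with the full module name. The version shown here is only an example; substitute one listed for your cluster:

[ netID@cluster ~]$ module spider Amber/18-intel-2017b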

To load a particular version of Amber (Example: 18):

[ netID@cluster ~]$ module load Amber/18-intel-2017b
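
To verify that the module loaded and that the Amber executables are on your PATH, you can, for example, run:

[ netID@cluster ~]$ module list
[ netID@cluster ~]$ which pmemd.MPI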

Usage on the Login Nodes

Please limit interactive processing to short, non-intensive usage. Use non-interactive batch jobs for resource-intensive and/or multiple-core processing. Users are requested to be responsible and courteous to other users when using software on the login nodes.

The most important processing limits here are:

  • ONE HOUR of PROCESSING TIME per login session.
  • EIGHT CORES per login session on the same node or (cumulatively) across all login nodes.

Anyone found violating the processing limits will have their processes killed without warning. Repeated violation of these limits will result in account suspension.
Note: Your login session will disconnect after one hour of inactivity.
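
One way to check what you are currently running on a login node (for example, before starting another task) is to list your own processes with ps:

[ netID@cluster ~]$ ps -u $USER -o pid,etime,%cpu,%mem,comm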

Usage on the Compute Nodes

Non-interactive batch jobs on the compute nodes allow for resource-demanding processing. Non-interactive jobs have higher limits on the number of cores, amount of memory, and runtime length.

For instructions on how to create and submit a batch job, please see the batch processing wiki page for your cluster.


Terra Examples

On Terra, to submit a batch job, run:

[NetID@terra1 ~]$ sbatch jobscript

MPI Job Example

#!/bin/bash
## ENVIRONMENT SETTINGS; CHANGE WITH CAUTION
#SBATCH --export=NONE            # Do not propagate environment
#SBATCH --get-user-env=L         # Replicate login environment

## NECESSARY JOB SPECIFICATIONS
#SBATCH --job-name=amber         # Set the job name to "amber"
#SBATCH --time=01:30:00          # Set the wall clock limit to 1hr and 30min
#SBATCH --ntasks=56              # Request 56 tasks
#SBATCH --ntasks-per-node=28     # Request 28 tasks per node
#SBATCH --mem=56G                # Request 56GB per node
#SBATCH --output=amber.%j        # Send stdout/err to "amber.[jobID]"

module purge
ml Amber/18-intel-2017b

# Please visit the Amber documentation for a full list of options for pmemd.MPI and the other programs available in the Amber suite.
mpirun -np $SLURM_NPROCS pmemd.MPI -O -i amber.in -o amber.out -p amber.prmtop -c amber.rst -r amber.rst -x amber.nc

exit
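
Assuming the script above is saved as amber_mpi.slurm (the filename is only an example), it can be submitted and monitored as follows; the output file name matches the --output line above, with %j replaced by the job ID reported by sbatch:

[NetID@terra1 ~]$ sbatch amber_mpi.slurm
[NetID@terra1 ~]$ squeue -u $USER
[NetID@terra1 ~]$ tail amber.<jobID>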