VASP

The Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles.

VASP homepage

Access

VASP is restricted software.

Usage of this software is restricted to registered VASP license members. If you believe you are eligible to use VASP on our clusters, please email the HPRC Help Desk (help@hprc.tamu.edu) to request access, and include your VASP license number.

VASP is installed on Grace, FASTER, and ACES.

To list the installed versions of VASP, use the command:

mla vasp

To list the required module dependencies for a specific VASP version, use the command:

ml spider vasp/version

Finally, load the required dependencies and the desired VASP version to set up your environment to run VASP:

ml dependencies vasp/version

Here dependencies and version are placeholders for the modules reported by ml spider and the VASP version to load.
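
For example, to set up vasp/6.3.2 with its Intel toolchain dependency on Grace (a hedged example; module names and versions vary by cluster and release, so check the ml spider output first):

ml spider vasp/6.3.2
ml purge
ml intel/2022a vasp/6.3.2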

Sample Job Files

Sample VASP6 CPU job on Grace:

#!/bin/bash
#SBATCH -J vasp                  # Job Name
#SBATCH -t 1:00:00               # Wall time h:m:s
#SBATCH -N 2                     # Request 2 nodes
#SBATCH --ntasks-per-node=48     # 48 cores per node
#SBATCH --mem=360G               # memory per node

ml purge
ml intel/2022a vasp/6.3.2

mpirun -np $SLURM_NTASKS vasp_std
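
This and the following sample scripts are all submitted the same way. Assuming the script is saved as run_vasp.slurm (a hypothetical filename), submit it with sbatch and check its status with squeue:

sbatch run_vasp.slurm
squeue -u $USER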

Sample VASP6 GPU job on Grace:

#!/bin/bash
#SBATCH -J vasp                  # Job Name
#SBATCH -t 1:00:00               # Wall time h:m:s
#SBATCH -N 1                     # Number of nodes
#SBATCH --partition=gpu          # specify gpu partition
#SBATCH --gres=gpu:a100:1        # select an a100 gpu
#SBATCH --ntasks-per-node=1      # one MPI rank per GPU
#SBATCH --mem=120G               # memory per node

ml purge
ml NVHPC/21.9 vasp/6.3.2

mpirun -np $SLURM_NTASKS vasp_std

Sample VASP6 CPU job on FASTER:

#!/bin/bash
#SBATCH -J vasp                  # Job Name
#SBATCH -t 1:00:00               # Wall time h:m:s
#SBATCH -N 1                     # Request 1 node
#SBATCH --ntasks-per-node=64     # cores per node; max 64 on FASTER
#SBATCH --mem=240G               # memory per node; max 240GB on FASTER

ml purge
ml intel/2022a vasp/6.3.2

mpirun -np $SLURM_NTASKS vasp_std

Sample VASP6 GPU job on FASTER:

#!/bin/bash
#SBATCH -J vasp                  # Job Name
#SBATCH -t 1:00:00               # Wall time h:m:s
#SBATCH -N 1                     # Number of nodes
#SBATCH --partition=gpu          # specify gpu partition
#SBATCH --gres=gpu:a100:1        # select an a100 gpu
#SBATCH --ntasks-per-node=1      # one MPI rank per GPU
#SBATCH --mem=120G               # memory per node

ml purge
ml NVHPC/21.9 vasp/6.3.2

mpirun -np $SLURM_NTASKS vasp_std

Sample VASP6 CPU job on ACES:

#!/bin/bash
#SBATCH -J vasp                  # Job Name
#SBATCH -t 1:00:00               # Wall time h:m:s
#SBATCH -N 1                     # Request 1 node
#SBATCH --ntasks-per-node=96     # cores per node; max 96 on ACES
#SBATCH --mem=480G               # memory per node; max 480GB on ACES

ml purge
ml intel/2023a
ml vasp/6.3.2

mpirun -np $SLURM_NTASKS vasp_std
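
All of the sample scripts assume the standard VASP input files (INCAR, POSCAR, KPOINTS, POTCAR) are present in the job's working directory, since vasp_std reads them from the current directory. A minimal pre-flight check, as a hypothetical sketch:

# hypothetical sanity check: run in the job's working directory before submitting
for f in INCAR POSCAR KPOINTS POTCAR; do
    [ -f "$f" ] || echo "missing VASP input file: $f"
done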

Potentials

VASP proprietary potentials are located in

/sw/restricted/vasp/sw/potentials

and are only accessible to registered VASP license members. Per the VASP license, the potential files must be protected so that only the registered VASP license member can read them (for example, mode -r--------).
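
For example, to restrict a potential file copied into your own directory to owner read-only (the -r-------- mode above):

chmod 400 POTCAR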

Information about the potentials and download date can be found in

/sw/restricted/vasp/sw/potentials/README.txt

AFLOW

Automatic FLOW for Materials Discovery

To load the AFLOW module on Grace:

    ml GCC/10.2.0 AFLOW/3.2.11
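
After loading, the aflow binary should be on your PATH; a quick sanity check:

    which aflow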