GROMACS is a molecular dynamics package.
Documentation: https://manual.gromacs.org/documentation/current/install-guide/index.html
Picotte
CPU-only (non-CUDA) version with MPI
N.B. DO NOT follow the "quick and dirty cluster installation" instructions.
Modules:
- cmake
- intel/composerxe/2020u4
- picotte-openmpi/intel/2020/4.1.0
- hwloc/2.4.1
You may need to install Sphinx to generate documentation:
$ pip install --user sphinx
Run cmake in a single pass with these options, or use ccmake to avoid editing long command lines (the assembled command appears after the list):
- REGRESSIONTEST_DOWNLOAD=ON
- CMAKE_INSTALL_PREFIX=/ifs/somewhere/appropriate/2021.1
- CMAKE_BUILD_TYPE=Release
- GMX_FFT_LIBRARY=mkl
- CMAKE_CXX_COMPILER=/ifs/opt/openmpi/intel/2020/4.1.0/bin/mpic++
- CMAKE_C_COMPILER=/ifs/opt/openmpi/intel/2020/4.1.0/bin/mpicc
- GMX_MPI=ON
- GMX_THREAD_MPI=OFF
- GMX_OPENMP=ON
- GMX_HWLOC=ON
- MPIEXEC_MAX_NUMPROCS=2048
- HWLOC_DIR=/ifs/opt/hwloc/2.4.1
- HWLOC_INFO=/ifs/opt/hwloc/2.4.1/bin/hwloc-info
- HWLOC_hwloc.h_DIRS=/ifs/opt/hwloc/2.4.1/include
- HWLOC_hwloc_LIBRARY=/ifs/opt/hwloc/2.4.1/lib/libhwloc.so
- PYTHON_EXECUTABLE=/ifs/opt/intel/2020/intelpython3/bin/python3.7 (Python is part of intel/composerxe/2020u4 and should be picked up automatically, without having to specify it to cmake)
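For reference, these options assembled into a single cmake invocation might look like the following (a sketch, assuming a GROMACS 2021.1 source tree to match the install prefix; run from a build directory inside it):
$ mkdir BUILD && cd BUILD
$ cmake .. \
    -DREGRESSIONTEST_DOWNLOAD=ON \
    -DCMAKE_INSTALL_PREFIX=/ifs/somewhere/appropriate/2021.1 \
    -DCMAKE_BUILD_TYPE=Release \
    -DGMX_FFT_LIBRARY=mkl \
    -DCMAKE_CXX_COMPILER=/ifs/opt/openmpi/intel/2020/4.1.0/bin/mpic++ \
    -DCMAKE_C_COMPILER=/ifs/opt/openmpi/intel/2020/4.1.0/bin/mpicc \
    -DGMX_MPI=ON \
    -DGMX_THREAD_MPI=OFF \
    -DGMX_OPENMP=ON \
    -DGMX_HWLOC=ON \
    -DMPIEXEC_MAX_NUMPROCS=2048 \
    -DHWLOC_DIR=/ifs/opt/hwloc/2.4.1 \
    -DHWLOC_INFO=/ifs/opt/hwloc/2.4.1/bin/hwloc-info \
    -DHWLOC_hwloc.h_DIRS=/ifs/opt/hwloc/2.4.1/include \
    -DHWLOC_hwloc_LIBRARY=/ifs/opt/hwloc/2.4.1/lib/libhwloc.so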
CUDA version (update 2023 in progress)
The C++ compiler needs C++17 support; both compilers available on Picotte (GCC 9.2.0, Intel Composer XE 2020) work.
Start an interactive session on a GPU node:
srun -p gpu --gpus=1 --cpus-per-gpu=12 --mem=40G --time=8:00:00 --pty /bin/bash
Modules
First, ensure Picotte CUDA modulefiles are available:
[juser@gpu003 ~]$ module use /ifs/opt_cuda/modulefiles
Load these modulefiles (combined into a single command after the list):
- cmake
- hwloc/cuda11.4
- picotte-openmpi/cuda11.4
- cuda11.4/blas
- cuda11.4/fft
- intel/mkl (possibly needed; unverified)
- python/gcc/3.10
- perl-threaded
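Combined into a single load command (intel/mkl is left out here, pending the note above):
[juser@gpu003 ~]$ module load cmake hwloc/cuda11.4 picotte-openmpi/cuda11.4 \
    cuda11.4/blas cuda11.4/fft python/gcc/3.10 perl-threaded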
Configure with CMake
Download the tar file, gromacs-2023.tar.gz, and expand it:
[juser@gpu003 ~]$ cd /ifs/groups/myrsrchGrp/Software/Src
[juser@gpu003 Src]$ wget https://ftp.gromacs.org/gromacs/gromacs-2023.tar.gz
[juser@gpu003 Src]$ tar -xf gromacs-2023.tar.gz
Make a build directory and configure. We use the terminal interface, ccmake, rather than writing a long cmake command with many options:
[juser@gpu003 Src]$ cd gromacs-2023
[juser@gpu003 gromacs-2023]$ mkdir BUILD
[juser@gpu003 gromacs-2023]$ cd BUILD
[juser@gpu003 BUILD]$ ccmake -DCMAKE_C_COMPILER=`which gcc` -DCMAKE_CXX_COMPILER=`which g++` -DPERL_EXECUTABLE=`which perl` ..
Then, hit “t” to toggle on advanced mode to see all variables/options.
- Use the arrow keys to move around.
- Hit “Enter” to start editing the values.
- Some options are multiple choice; hit “Enter” or “Space” to switch between available options.
Set these; if not specified, leave at defaults:
- CMAKE_INSTALL_PREFIX = /ifs/groups/myrsrchGrp/Software/Gromacs/2023
- GMX_FFT_LIBRARY = fftpack (NOTE this will change when we turn on CUDA)
- GMX_GPU = CUDA
- GMX_HWLOC = ON (NOTE needs hwloc >= ??)
- GMX_MPI = ON
- GMX_THREAD_MPI = OFF
- MPIEXEC_MAX_NUMPROCS = 2048
Then, hit “c” to configure. Selecting “GMX_GPU=CUDA” above brings in more options. Check that these are set (a non-interactive equivalent appears after the list):
- GMX_GPU_FFT_LIBRARY = cuFFT
- HWLOC_hwloc.h_DIRS = /ifs/opt_cuda/hwloc/cuda11.4/2.9.0/include
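For a non-interactive configure, a single cmake command covering the same settings might look like this (a sketch; it mirrors the ccmake selections above):
[juser@gpu003 BUILD]$ cmake .. \
    -DCMAKE_C_COMPILER=`which gcc` \
    -DCMAKE_CXX_COMPILER=`which g++` \
    -DPERL_EXECUTABLE=`which perl` \
    -DCMAKE_INSTALL_PREFIX=/ifs/groups/myrsrchGrp/Software/Gromacs/2023 \
    -DGMX_FFT_LIBRARY=fftpack \
    -DGMX_GPU=CUDA \
    -DGMX_GPU_FFT_LIBRARY=cuFFT \
    -DGMX_HWLOC=ON \
    -DGMX_MPI=ON \
    -DGMX_THREAD_MPI=OFF \
    -DMPIEXEC_MAX_NUMPROCS=2048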
Build and Install
make -j 12
and then
make install
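A quick sanity check after installation (with GMX_MPI=ON the installed binary is named gmx_mpi; sourcing GMXRC from the install tree sets up PATH and related variables):
[juser@gpu003 BUILD]$ source /ifs/groups/myrsrchGrp/Software/Gromacs/2023/bin/GMXRC
[juser@gpu003 BUILD]$ gmx_mpi --version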
Proteus
PROTEUS HAS BEEN DECOMMISSIONED
Preliminary
Read the official installation/compilation guide.[1]
Configure & Build CPU-only Version (GROMACS 2016.2)
This is for GROMACS 2016.2. The process is the same for later versions, but check the details in case anything has changed.
Modules
- gcc/4.8.1
- cmake/gcc/3.6.1
- intel/composerxe/2015.1.133
- proteus-openmpi/intel/2015/1.8.1-mlnx-ofed
- python/intelpython
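The modules above can be loaded in one command:
$ module load gcc/4.8.1 cmake/gcc/3.6.1 intel/composerxe/2015.1.133 \
    proteus-openmpi/intel/2015/1.8.1-mlnx-ofed python/intelpython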
CMake
$ cd gromacs-2016.2
$ mkdir BUILD
$ cd BUILD
$ cmake -DGMX_MPI=ON -DGMX_FFT_LIBRARY=mkl -DREGRESSIONTEST_PATH=/mnt/HA/opt/src/Gromacs-2016.2/regressiontests-2016.2 \
-DCMAKE_INSTALL_PREFIX=/mnt/HA/opt/gromacs/intel/2015/2016.2 ..
The interactive interface for cmake, ccmake, can also be used.
Make
make -j 4 >& Make.out &
Regression Tests
make check >& Make.check.out &
All tests should pass:
Test project /mnt/HA/opt/src/Gromacs-2016.2/gromacs-2016.2/BUILD
Start 1: TestUtilsUnitTests
1/27 Test #1: TestUtilsUnitTests ............... Passed 2.92 sec
Start 2: MdlibUnitTest
...
26/27 Test #26: regressiontests/pdb2gmx .......... Passed 184.81 sec
Start 27: regressiontests/rotation
27/27 Test #27: regressiontests/rotation ......... Passed 18.67 sec
100% tests passed, 0 tests failed out of 27
Label Time Summary:
GTest = 15.17 sec (18 tests)
IntegrationTest = 7.01 sec (2 tests)
MpiIntegrationTest = 2.93 sec (1 test)
UnitTest = 15.17 sec (18 tests)
Total Test time (real) = 509.31 sec
VMD Plugin Directory
/mnt/HA/opt/gromacs/vmd/plugins
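GROMACS can read additional trajectory formats via VMD molfile plugins. A sketch of pointing GROMACS at this directory, assuming the GMX_VMD_PLUGIN_PATH CMake variable and the VMD_PLUGIN_PATH runtime environment variable (verify both against the GROMACS docs for your version):
# at configure time
$ cmake .. -DGMX_VMD_PLUGIN_PATH=/mnt/HA/opt/gromacs/vmd/plugins
# at run time
$ export VMD_PLUGIN_PATH=/mnt/HA/opt/gromacs/vmd/plugins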
Configure & Build CPU-only Version (GROMACS 2019.1)
GROMACS 2019.1 using GCC with MKL from Intel Composer XE 2019u1
Requires a newer g++ for C++11 standard support and its Standard Template Library (STL).
Modules
For C++11 support, use the Red Hat Software Collections devtoolset:
scl enable devtoolset-6 /bin/bash
Confirm that gcc is the appropriate version:
$ gcc --version
gcc (GCC) 6.2.1 20160916 (Red Hat 6.2.1-3)
Alternatively, use GCC 7.4.0
module load gcc/7.4.0
Modules
- intel/composerxe/2019u1
- hpcx/stack
- proteus-openmpi/intel/2019/3.1.1
- cmake/3.14
CMake
$ cd gromacs-2019.1
$ mkdir BUILD
$ cd BUILD
$ cmake -DGMX_MPI=ON -DGMX_FFT_LIBRARY=mkl -DREGRESSIONTEST_PATH=/mnt/HA/opt/src/Gromacs-2019.1/regressiontests-2019.1 \
-DCMAKE_INSTALL_PREFIX=/mnt/HA/opt_rh68/gromacs/intel/2019/2019.1 ..
Configure & Build CPU-only Version (GROMACS 2019.2)
- Intel Composer XE 2019u1 + GCC 7.4.0
- hwloc 1.11.12 (Open MPI only supports hwloc-1.x)
- ucx
- cmake/3.14
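A configure patterned on the 2019.1 recipe might look like the following (a sketch; the module load line and the 2019.2 install prefix are hypothetical extrapolations from the components listed above):
$ module load gcc/7.4.0 intel/composerxe/2019u1 cmake/3.14
$ cd gromacs-2019.2 && mkdir BUILD && cd BUILD
$ cmake -DGMX_MPI=ON -DGMX_FFT_LIBRARY=mkl \
    -DCMAKE_INSTALL_PREFIX=/mnt/HA/opt_rh68/gromacs/intel/2019/2019.2 ..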
Configure & Build GPU-enabled Version
- Instructions here are for version 2018.2 but should work for 2016.2 and later.
- This uses CUDA 9.0 that is installed on the GPU nodes.
- This also uses the Open MPI bundled with the Mellanox HPC-X Software Toolkit[2]
Download source code and regression tests from: http://manual.gromacs.org/documentation/2018.2/download.html
Modules
- shared
- proteus
- proteus-gpu
- gcc/4.8.1
- sge/univa
- git/2.18.0
- hpcx
- cmake/3.11.4
- proteus-fftw3/gcc/3.3.8
OpenBLAS is installed by default in /usr/local on the GPU nodes, and the CMake step below will find it.
CMake
There are many options to specify, so the terminal user interface, ccmake, allows for incremental changes:
cd gromacs-2018.2
mkdir BUILD
cd BUILD
ccmake ..
Things to set (hit "t" to toggle on advanced options; a non-interactive equivalent appears after the list):
- BLAS_openblas_LIBRARY = /usr/local/lib/libopenblas.so
- CMAKE_BUILD_TYPE = Release
- CMAKE_INSTALL_PREFIX = /mnt/HA/opt_cuda90/gromacs/2018.2 (or wherever you prefer)
- CMAKE_CXX_COMPILER = /cm/shared/apps/gcc/4.8.1/bin/g++
- CMAKE_C_COMPILER = /cm/shared/apps/gcc/4.8.1/bin/gcc
- FFTWF_INCLUDE_DIR = /mnt/HA/opt_rh68/fftw3/gcc/3.3.8/include
- FFTWF_LIBRARY = /mnt/HA/opt_rh68/fftw3/gcc/3.3.8/lib/libfftw3f.so
- GMX_GPU = ON
- GMX_MPI = ON
- GMX_MPI_IN_PLACE = ON
- GMX_THREAD_MPI = OFF (important: this must be OFF if GMX_MPI is to be ON)
- GMX_USE_NVML = ON
- GMX_X11 = ON
- REGRESSIONTEST_DOWNLOAD = ON
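For reference, the equivalent non-interactive invocation (same settings as the list above):
cmake .. \
    -DBLAS_openblas_LIBRARY=/usr/local/lib/libopenblas.so \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_INSTALL_PREFIX=/mnt/HA/opt_cuda90/gromacs/2018.2 \
    -DCMAKE_CXX_COMPILER=/cm/shared/apps/gcc/4.8.1/bin/g++ \
    -DCMAKE_C_COMPILER=/cm/shared/apps/gcc/4.8.1/bin/gcc \
    -DFFTWF_INCLUDE_DIR=/mnt/HA/opt_rh68/fftw3/gcc/3.3.8/include \
    -DFFTWF_LIBRARY=/mnt/HA/opt_rh68/fftw3/gcc/3.3.8/lib/libfftw3f.so \
    -DGMX_GPU=ON \
    -DGMX_MPI=ON \
    -DGMX_MPI_IN_PLACE=ON \
    -DGMX_THREAD_MPI=OFF \
    -DGMX_USE_NVML=ON \
    -DGMX_X11=ON \
    -DREGRESSIONTEST_DOWNLOAD=ON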
Double-check that CUDA_VERSION is 9.0.
Make
make -j 4 >& Make.out
Regression Tests
make check >& Make.check.out
Install
make install