Compiling VOTCA
VOTCA [1] is a software package which focuses on the analysis of molecular dynamics data, the development of systematic coarse-graining techniques, and methods for simulating microscopic charge (and exciton) transport in disordered semiconductors.
Installed Versions
An unnumbered version checked out from the master branch, compiled with Intel Composer XE 2015.1.133. Use the following module file:
votca/intel/2015/master
N.B. This also loads dependent modules, including
python/intel/2015/3.6-current
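For example, to make VOTCA available in a shell session or job script, load the module by the name given above (module avail votca will list any other installed versions):
module load votca/intel/2015/master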
Environment Settings
All environment settings are set by the modulefile. DO NOT use the VOTCARC.bash or VOTCARC.csh scripts.
Usage
Some of the CSG utilities run multithreaded. However, the default scripts are not aware of the cluster environment, and may run too many threads.
VOTCA and the GROMACS installation it depends on are compiled for Intel CPUs only, so Intel nodes must be requested.
For instance, csg_stat has a "--nt" option which sets the number of threads. This should use the NSLOTS environment variable:
#$ -pe shm 16
#$ -l ua=sandybridge
...
csg_stat --nt $NSLOTS ...
The run scripts in the tutorial distribution determine the thread count by reading the special file /proc/cpuinfo, which is not compatible with Proteus's environment: it reports every core on the node rather than the slots granted to the job. A sketch of the required change follows.
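For example, if a tutorial's run.sh computes the thread count with a line such as the hypothetical one below (the tutorials' actual detection code varies), replace it with the slot count that Grid Engine grants to the job:
# hypothetical original line; the actual detection code varies by tutorial
# nthreads=$(grep -c processor /proc/cpuinfo)
# use the Grid Engine slot count instead:
nthreads=$NSLOTS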
Using GROMACS MPI across multiple nodes
You will need a script to modify the settings.xml dynamically for each job.
You will also need to change the command line options for csg_stat (and other csg programs), since they must run on fewer slots than what you request for GROMACS. For example, if your job requested 128 slots in order to run GROMACS on all 128, but csg_stat should run on only 16:
csg_stat --nt 16
(i.e. do not use the NSLOTS environment variable here).
The following Python 3 script will modify settings.xml:[2]
#!/usr/bin/env python3
import os
from pathlib import Path
import xml.dom.minidom

### README
### * Save this file as fix_settings.py in the same directory as your job script
### * Make it executable: chmod +x fix_settings.py

def generate_hostfile(pe_hostfile):
    '''Convert Univa Grid Engine hostfile to Open MPI hostfile'''
    ompi_hostfile = Path('./hostfile.{}'.format(os.getenv('JOB_ID'))).resolve()
    with open(pe_hostfile, 'r') as f, open(ompi_hostfile, 'w') as g:
        for line in f:
            ### each PE_HOSTFILE line begins with "hostname nslots ..."
            hostname, nslots = line.strip().split()[:2]
            g.write('{} slots={} max-slots={}\n'.format(hostname, nslots, nslots))
    return ompi_hostfile

def fix_settings_xml(ompi_hostfile):
    '''Fix VOTCA CSG settings.xml file'''
    settings = xml.dom.minidom.parse('settings.xml')
    ### read environment variable MPI_RUN for full path to mpirun command
    command = '{} -x LD_LIBRARY_PATH -x BASH_ENV --hostfile {} gmx_mpi mdrun'.format(
        os.getenv('MPI_RUN'), ompi_hostfile)
    settings.getElementsByTagName('command')[0].childNodes[0].data = command
    ### XXX caution - this overwrites the settings.xml file
    with open('settings.xml', 'w') as f:
        f.write(settings.toxml())

if __name__ == '__main__':
    pe_hostfile = Path(os.getenv('PE_HOSTFILE'))
    ompi_hostfile = generate_hostfile(pe_hostfile)
    fix_settings_xml(ompi_hostfile)
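For reference, the script rewrites the first <command> element it finds in settings.xml. The fragment below is a rough before/after sketch: the element's position in the file's hierarchy is elided, and the original command, mpirun path, working directory, and job ID are illustrative assumptions.
<!-- before (assumed: a serial mdrun invocation) -->
<command>gmx mdrun</command>
<!-- after: mpirun path comes from $MPI_RUN, hostfile name from $JOB_ID -->
<command>/usr/bin/mpirun -x LD_LIBRARY_PATH -x BASH_ENV --hostfile /path/to/workdir/hostfile.12345 gmx_mpi mdrun</command>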
Example Job Script
#!/bin/bash
#$ -S /bin/bash
#$ -P FIXME
#$ -M FIXME
#$ -j y
#$ -cwd
#$ -pe fixed16 128
#$ -l h_rt=0:30:00
#$ -l m_mem_free=3G
#$ -l h_vmem=4G
#$ -l ua=sandybridge
. /etc/profile.d/modules.sh
module load shared
module load proteus
module load gcc
module load sge/univa
module load votca/intel/2015/master
./fix_settings.py
csg_inverse --options settings.xml
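Note that fix_settings.py reads two environment variables: PE_HOSTFILE, which Grid Engine sets automatically for parallel jobs, and MPI_RUN, which is assumed here to be set by the Open MPI module loaded as a dependency of the votca module. If MPI_RUN is not set in your job environment, export it before the script runs, e.g. export MPI_RUN=$(which mpirun).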
Build
See instructions: https://github.com/votca/votca
CFLAGS = -O3 -xHost -mieee-fp -Wpointer-arith -fno-strict-aliasing
Modules
1) shared
2) proteus
3) gcc/4.8.1
4) sge/univa
5) git/2.16.1
6) texlive/2016
7) doxygen/1.8.12
8) boost/openmpi/intel/2015/1.65.1
9) gsl/intel/2015/2.4
10) proteus-fftw3/intel/2015/3.3.7
11) hwloc/1.11.7
12) cmake/gcc/3.9.4
13) intel/composerxe/2015.1.133
14) proteus-openmpi/intel/2015/1.8.1-mlnx-ofed
15) zlib/cloudflare/intel/2015/1.2.8
16) szip/intel/2015/2.1
17) hdf5_18/intel/2015/1.8.17-serial
18) gromacs/intel/2015/2018.1
19) sqlite/intel/2015/3.16.2
GROMACS
- Requires GROMACS with a shared library. The only one that works is gromacs/intel/2015/2018.1
- Since our install of GROMACS is MPI-enabled by default, the executable name and library file name differ from what VOTCA's build configuration expects. Modify the file votca/csg/CMakeModules/FindGROMACS.cmake, adding a line after line 42:
pkg_check_modules(PC_GROMACS libgromacs_mpi)
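In context, the modified region of FindGROMACS.cmake would look roughly as follows; the first line is an assumption about the stock file's contents, and only the second line is the addition described above:
pkg_check_modules(PC_GROMACS libgromacs)       # existing check for the serial library (assumed)
pkg_check_modules(PC_GROMACS libgromacs_mpi)   # added: also look for the MPI-enabled library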
Use CMake
cd votca
mkdir BUILD
cd BUILD
ccmake ..
In ccmake's terminal interface, press the "t" key to toggle on "advanced" settings. Look through and modify as necessary, including, but not limited to, the name of the GROMACS executable, which is "gmx_mpi".
- Disable FFTW
MANY SETTINGS ARE NOT SHOWN unless advanced mode is toggled on (see above).
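Alternatively, settings can be passed non-interactively on the cmake command line. In this sketch, CMAKE_INSTALL_PREFIX is a standard CMake variable, while GMX_EXECUTABLE and CMAKE_DISABLE_FIND_PACKAGE_FFTW3 are assumptions for the executable-name and FFTW options; verify the actual variable names in ccmake's advanced view:
# variable names after -D are assumptions; confirm them in ccmake first
cmake -DCMAKE_INSTALL_PREFIX=$HOME/opt/votca \
      -DGMX_EXECUTABLE=gmx_mpi \
      -DCMAKE_DISABLE_FIND_PACKAGE_FFTW3=ON ..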
Make
Once the Makefile is generated, type "make". If that completes successfully, install with "make install". There are no provided tests/checks.
Test
Run through some of the tutorials[3] to test. Most will do a multi-hour GROMACS run, and then post-process with VOTCA.
Things to note if you run into errors:
- The tutorials come with a shell script named run.sh in each tutorial exercise directory. This needs to be modified: the invocation of csg_stat determines the number of threads by a method incompatible with Proteus's environment. See the Usage section above for details.
- The tutorials provide a shell script Extract_Energies.sh which may not inherit the calling environment properly.
References
[1] VOTCA web site: https://www.votca.org/