Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.[1]
Source Code
Download the QE source code from: https://www.quantum-espresso.org/download-page/
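Release tarballs are also mirrored on GitHub; a minimal download-and-unpack sketch follows (the 7.2 version and the GitHub URL are assumptions, check the download page for the current release):
# Hypothetical example: substitute the release you actually need
wget https://github.com/QEF/q-e/archive/refs/tags/qe-7.2.tar.gz
tar xf qe-7.2.tar.gz
cd q-e-qe-7.2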
Picotte
Standard Nodes
Modules:
- intel/composerxe/2020u4
- picotte-openmpi/intel/2020/4.1.4
- szip/intel/2020/2.1.1
- libaec/intel/2020/1.0.6
- hdf5/intel/2020/1.14.0-serial
- N.B. the MPI-enabled HDF5 build cannot be used because it does not provide the C++ interface
- libxc/intel/2020/6.1.0
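A minimal sketch of loading these modules in the build shell, in the order listed above:
module load intel/composerxe/2020u4
module load picotte-openmpi/intel/2020/4.1.4
module load szip/intel/2020/2.1.1
module load libaec/intel/2020/1.0.6
module load hdf5/intel/2020/1.14.0-serial
module load libxc/intel/2020/6.1.0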
Configure with CMake (an example command is sketched after the notes below).
Configuration notes:
- QE_ENABLE_HDF5 = ON
- QE_ENABLE_LIBXC = ON
- QE_ENABLE_MPI = ON
- QE_ENABLE_OPENMP = ON (may require a threaded UCX)
- QE_ENABLE_SCALAPACK = OFF (ON is also possible; have not done comparison benchmarks)
- QE_FFTW_VENDOR = Intel_FFTW3
- HDF5_PREFER_PARALLEL = OFF
- MPIEXEC_MAX_NUMPROCS = 2048 (value unverified)
- TESTCODE_NPROCS = 8
- TESTCODE_NTHREADS = 2
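Putting the notes above together, a configure-and-build sketch (the MPI compiler wrappers, build directory, and install prefix are assumptions; adjust as needed):
mkdir build && cd build
cmake -DCMAKE_C_COMPILER=mpicc -DCMAKE_Fortran_COMPILER=mpif90 \
    -DCMAKE_INSTALL_PREFIX=$HOME/quantum-espresso \
    -DQE_ENABLE_HDF5=ON -DQE_ENABLE_LIBXC=ON \
    -DQE_ENABLE_MPI=ON -DQE_ENABLE_OPENMP=ON \
    -DQE_ENABLE_SCALAPACK=OFF -DQE_FFTW_VENDOR=Intel_FFTW3 \
    -DHDF5_PREFER_PARALLEL=OFF \
    -DMPIEXEC_MAX_NUMPROCS=2048 -DTESTCODE_NPROCS=8 -DTESTCODE_NTHREADS=2 \
    ..
make -j 8
make install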
GPU Nodes
Needs to be compiled with the NVHPC[2] compilers.
Modules:
- cuda11.4/toolkit/11.4.2
- picotte-openmpi/cuda11.4/4.1.4
- intel/composerxe/2020u4
- intel/mkl/2020
- Possibly omit in favor of the CUDA BLAS/FFT modules listed below?
- hdf5/intel/2020/1.14.0-serial
- libxc/intel/2020/5.1.2
- szip/intel/2020/2.1.1
- libaec/intel/2020/1.0.6
- cuda11.4/blas/11.4.2
- cuda11.4/fft/11.4.2
Configure with CMake (an example command is sketched after the options below).
Configure options:
- QE_ENABLE_CUDA = ON
- QE_ENABLE_HDF5 = ON
- QE_ENABLE_LIBXC = ON
- QE_ENABLE_MPI = ON
- QE_ENABLE_OPENMP = OFF (should this be on?)
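Putting these options together, a configure sketch for the GPU build (assumes the modules listed above are loaded, that the mpicc/mpif90 wrappers invoke the NVHPC compilers, and that an out-of-tree build directory is used):
mkdir build && cd build
cmake -DCMAKE_C_COMPILER=mpicc -DCMAKE_Fortran_COMPILER=mpif90 \
    -DQE_ENABLE_CUDA=ON -DQE_ENABLE_HDF5=ON -DQE_ENABLE_LIBXC=ON \
    -DQE_ENABLE_MPI=ON -DQE_ENABLE_OPENMP=OFF \
    ..
make -j 8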
See Also
OBSOLETE for Proteus
Standard Proteus nodes
- intel/composerxe/2015.1.133
- proteus-openmpi/intel/2015/1.8.1-mlnx-ofed
New Proteus (Intel Sky Lake) nodes
- intel/composerxe/2019u1
- proteus-openmpi/intel/2019/3.1.4
Compilation Options
- -O3 -xHost
- use MKL; see Compiling for Intel with Intel Composer XE, MKL, and Intel MPI#MKL Link Line Advisor (an illustrative link line is sketched after this list)
- if the authors of the code you are compiling have set up the makefiles or build scripts properly, you should not have to set the MKL-related options
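For reference, a typical dynamically linked, threaded MKL link line for the Intel compilers (LP64 interface) looks like the following; this is illustrative only, so generate the exact line for your MKL version with the Link Line Advisor:
-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl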
Running
- Use the fixed16 or fixed40 PEs, e.g. the job-script line sketched below
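A minimal example of requesting one of these PEs in the job script header (the slot count of 32 is illustrative; fixed16 nodes presumably have 16 cores each, so request a multiple of 16):
#$ -pe fixed16 32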
OBSOLETE
Compiled using Intel Compilers. See Compiling for Intel with Intel Composer XE, MKL, and Intel MPI.
Environment
Modules needed:
shared
proteus
sge/univa
gcc/4.8.1
intel/compiler
intel/mkl
intel-mpi/64
Environment variables:
export CC=icc
export F77=ifort
export F90=ifort
export MPIF90=mpiifort
Download
Individually from http://www.qe-forge.org/gf/project/q-e/frs/?action=FrsReleaseBrowse&frs_package_id=18
[juser@proteusi01 ~]$ mkdir src
[juser@proteusi01 ~]$ cd src
[juser@proteusi01 src]$ wget ....../espresso-X.Y.tar.gz
[juser@proteusi01 src]$ tar xf espresso-X.Y.tar.gz
Configure
[juser@proteusa01 espresso-X.Y]$ ./configure --prefix=$HOME/espresso --enable-parallel --enable-openmp --enable-shared --with-scalapack=intel | tee Configure.out
Check the file Configure.out to see that all the appropriate libraries were picked up:
The following libraries have been found:
BLAS_LIBS= -lmkl_gf_lp64 -lmkl_gnu_thread -lmkl_core
LAPACK_LIBS=
SCALAPACK_LIBS=-lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64
FFT_LIBS=-lfftw3xf_intel
...
Parallel environment detected successfully.
Configured for compilation of parallel executables.
For more info, read the ESPRESSO User's Guide (Doc/users-guide.tex).
--------------------------------------------------------------------
configure: success
Modify make.sys
CFLAGS = -O3 -xAVX -ipo $(DFLAGS) $(IFLAGS)
...
FFLAGS = -O3 -xAVX -ipo -assume byterecl -g -traceback -par-report0 -vec-report0 -openmp
...
AR = xiar
Build
[juser@proteusi01 espresso-X.Y]$ make -j 16 pw | tee Make.pw.out
Running
Since this uses Intel MPI, the intelmpi parallel environment (PE) must be requested in the job script, e.g.
#$ -pe intelmpi 128
...
### "-rmk sge" specifies Grid Engine integration
### -np not necessary since GE integration allows pw.x to read the number of slots from the environment
mpirun -rmk sge pw.x ...
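A fuller job-script sketch tying these pieces together (assumes the modules and the $HOME/espresso install prefix from above; the input file name, runtime, and memory requests are illustrative):
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -j y
#$ -pe intelmpi 128
#$ -l h_rt=24:00:00
#$ -l h_vmem=4G
module load shared proteus sge/univa gcc/4.8.1 intel/compiler intel/mkl intel-mpi/64
### Grid Engine integration supplies the slot count, so -np is omitted
mpirun -rmk sge $HOME/espresso/bin/pw.x -input scf.in > scf.out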
See Also
References
[1] Quantum ESPRESSO: https://www.quantum-espresso.org/
[2] NVIDIA HPC SDK: https://developer.nvidia.com/hpc-sdk