
PETSc is the Portable, Extensible Toolkit for Scientific Computation[1] from the Mathematics and Computer Science Division of Argonne National Lab.

Using PETSc

Build Documentation

http://www.mcs.anl.gov/petsc/documentation/installation.html

PETSc can be built with many components (about 140). This article describes only a fairly basic installation with minimal components.
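Optional components are typically enabled at configure time with --download-<package> flags, which tell PETSc's configure to fetch and build them itself. A minimal sketch (these particular packages are illustrative and not part of the basic installation described here):

$ # Ask configure to download and build a few optional packages
$ ./configure --download-hypre --download-metis --download-parmetis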

Picotte

These are merely recommendations.

Base settings:

  • Compiler: Intel Composer XE
  • MPI: Open MPI
  • BLAS+LAPACK: MKL

Variant:

  • CUDA

Environment

Modules:

  • intel/composerxe/2020u4
  • picotte-openmpi/intel/2020/4.1.0
  • hwloc/2.4.1
  • cmake/3.19.5
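A sketch of preparing this environment and configuring against MKL (the MKLROOT variable and the CUDA options are assumptions to adapt, not a tested recipe):

$ module load intel/composerxe/2020u4 picotte-openmpi/intel/2020/4.1.0 hwloc/2.4.1 cmake/3.19.5
$ # CUDA variant only: also load a CUDA module (name is site-specific) and add --with-cuda=1 below
$ ./configure --with-blaslapack-dir=$MKLROOT --with-debugging=0 \
    COPTFLAGS="-O3 -xHost" FOPTFLAGS="-O3 -xHost"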

Proteus

Environment

$ module list
Currently Loaded Modulefiles:
  1) shared                                  6) intel/ipp/64/8.1/2013_sp1.3.174
  2) proteus                                 7) intel/mkl/64/11.1/2013_sp1.3.174
  3) gcc/4.8.1                               8) intel/tbb/64/4.2/2013_sp1.3.174
  4) sge/univa                               9) intel-mpi/64/4.1.1/036
  5) intel/compiler/64/14.0/2013_sp1.3.174  10) intel-tbb-oss/intel64/42_20140601oss
$ unsetenv CXX CC FC F77   # maybe optional

Configure and Make

This section is incomplete. In particular, it does not show how to build MVAPICH2 (or any other MPI implementation).
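If no suitable MPI is available, PETSc's configure can also download and build one itself (MPICH in this sketch); this is an alternative, not the configuration used below:

$ # Let configure fetch and build MPICH rather than using a system MPI
$ ./configure --download-mpich --with-debugging=0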

$ export PETSC_DIR=/mnt/HA/opt/src/petsc-3.5.3
$ export PETSC_ARCH=linux-gnu-intel
$ ./configure COPTFLAGS="-O3 -xHost" FOPTFLAGS="-O3 -xHost" \
    --prefix=/mnt/HA/groups/myresearchGrp/petsc/3.5.3 \
    --with-blas-lapack-dir=$MKLROOT --with-debugging=0 >& Configure.out &
$ make MAKE_NP=12 all >& Make.out &
$ make install >& Make.install.out &
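Because configure and make are backgrounded with their output redirected, progress can be followed with tail:

$ tail -f Configure.out   # while configure is running
$ tail -f Make.out        # while the build is running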

Test

Simple 1- and 2-process MPI test:

$ make test

More complicated test:

$ cd ./src/benchmarks/streams
$ make MPIVersion

Create a test script; this runs the test in the source/build directory:

#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -j y
#$ -M fixme@drexel.edu
#$ -P fixmePrj
#$ -l h_rt=0:5:0
#$ -l h_vmem=2g
#$ -l vendor=intel
#$ -pe fixed16 32
#$ -q all.q

. /etc/profile.d/modules.sh
module load shared
module load proteus
module load gcc
module load sge/univa
module load intel/compiler/64
module load intel/mkl/64
module load intel/tbb/64
module load intel/ipp/64
module load proteus-openmpi/intel/64/1.8.1-mlnx-ofed

export PETSC_DIR=/mnt/HA/opt/src/petsc-3.5.3
export PETSC_ARCH=linux-gnu-intel

$MPI_RUN src/benchmarks/streams/MPIVersion
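Submit the script with qsub (the script name here is illustrative):

$ qsub petsc_streams_test.sh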

Output should look like:

Number of MPI processes 32
Process 0 ic15n02
Process 1 ic15n02
Process 2 ic15n02
Process 3 ic15n02
Process 4 ic15n02
Process 5 ic15n02
Process 6 ic15n02
Process 7 ic15n02
Process 8 ic15n02
Process 9 ic15n02
Process 10 ic15n02
Process 11 ic15n02
Process 12 ic15n02
Process 13 ic15n02
Process 14 ic15n02
Process 15 ic15n02
Process 16 ic06n02
Process 17 ic06n02
Process 18 ic06n02
Process 19 ic06n02
Process 20 ic06n02
Process 21 ic06n02
Process 22 ic06n02
Process 23 ic06n02
Process 24 ic06n02
Process 25 ic06n02
Process 26 ic06n02
Process 27 ic06n02
Process 28 ic06n02
Process 29 ic06n02
Process 30 ic06n02
Process 31 ic06n02
Function      Rate (MB/s)
Copy:      119140.5811
Scale:     142912.1167
Add:       145337.7745
Triad:     145683.7082
error: commlib error: got read error (closing "ic06n02.cm.cluster/shepherd_ijs/1")

The "commlib error" may be ignored: it occurs after the actual job terminates.

After installation ("make install"), modify the test script and qsub again:

#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -j y
#$ -M fixme@drexel.edu
#$ -l h_rt=0:5:0
#$ -l h_vmem=2g
#$ -l vendor=intel
#$ -pe fixed16 32
#$ -q all.q

. /etc/profile.d/modules.sh
module load shared
module load proteus
module load gcc
module load sge/univa
module load petsc/intel/2013/3.5.3

mpirun src/benchmarks/streams/MPIVersion

The output should be similar to that of the previous run.

PETSc 3.12.1 w/ Intel Composer XE 2019u1 + Open MPI

IN PROGRESS

Prerequisites

Currently Loaded Modulefiles:
  1) shared                             4) git/2.18.0                         7) intel/composerxe/2019u1           10) proteus-openmpi/intel/2019/3.1.4
  2) proteus-rh68                       5) texlive/2019                       8) hwloc/1.11.12                     11) binutils/2.32
  3) sge/univa                          6) doxygen/1.8.14                     9) ucx/intel/2019/1.3.0
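To reproduce this environment in a fresh shell, load the modules listed above, e.g.:

$ module load shared proteus-rh68 sge/univa git/2.18.0 texlive/2019 doxygen/1.8.14 \
    intel/composerxe/2019u1 hwloc/1.11.12 ucx/intel/2019/1.3.0 \
    proteus-openmpi/intel/2019/3.1.4 binutils/2.32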

Configure

Superseded configure line (see the update below):

./configure --prefix=/mnt/HA/opt_rh68/petsc/intel/2019/3.11.1 --with-mpi-dir=${MPI_HOME} --with-debugging=0 --COPTFLAGS="-O3 -xHost" --CXXOPTFLAGS="-O3 -xHost" --FOPTFLAGS="-O3 -xHost" --with-blaslapack-dir=$MKLROOT/lib/intel64_lin --with-mkl_sparse-dir=$MKLROOT/lib/intel64_lin --with-mkl_sparse_optimize-dir=$MKLROOT/lib/intel64_lin

UPDATE 2019-11-04: The full path to the MKL LAPACK library must be specified, not just its directory. See: https://gitlab.com/petsc/petsc/merge_requests/2226

./configure --prefix=/mnt/HA/opt_rh68/petsc/intel/2019/3.12.1 --with-mpi-dir=${MPI_HOME} \
    --with-debugging=0 --COPTFLAGS="-O3 -xHost -mkl=parallel" \
    --CXXOPTFLAGS="-O3 -xHost -mkl=parallel" --FOPTFLAGS="-O3 -xHost -mkl=parallel" \
    --with-mkl_sparse-dir=$MKLROOT/lib/intel64_lin \
    --with-mkl_sparse_optimize-dir=$MKLROOT/lib/intel64_lin \
    --with-blaslapack-lib=$MKLROOT/lib/intel64_lin/libmkl_rt.so >& Configure.out &

Make

make PETSC_DIR=/mnt/HA/opt_rh68/src/petsc-3.12.1 PETSC_ARCH=arch-linux-c-opt all
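After the build completes, install into the prefix given at configure time and run the built-in checks. A sketch assuming PETSc's usual post-build instructions (the exact commands are printed at the end of make):

make PETSC_DIR=/mnt/HA/opt_rh68/src/petsc-3.12.1 PETSC_ARCH=arch-linux-c-opt install
make PETSC_DIR=/mnt/HA/opt_rh68/petsc/intel/2019/3.12.1 PETSC_ARCH="" check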

References

[1] PETSc official website: https://www.mcs.anl.gov/petsc/