PETSc


PETSc is the Portable, Extensible Toolkit for Scientific Computation.

Version 3.1-p8

(March 25, 2010)

This is the version needed by belt. If you let the configure script download packages, the versions are:

  • openmpi-1.4.1.tar.gz
  • mpich2-1.0.8.tar.gz
  • hypre-2.6.0b.tar.gz (uses the deprecated MPI_Address(), MPI_Type_struct())

Problems when using the system's (more modern) MPI:

  • PETSc 3.1-p8 uses deprecated objects: MPI_Attr_get, MPI_Attr_put, MPI_Attr_delete, MPI_Keyval_create, MPI_Keyval_free, OMPI_C_MPI_NULL_COPY_FN, OMPI_C_MPI_NULL_DELETE_FN (deprecated since MPI-2.0) and MPI_Errhandler_create, MPI_Errhandler_set, MPI_Type_struct (removed in MPI-3.0)
  • Pluto4.1 itself uses deprecated objects (in Parallel/al_subarray_.c): MPI_Type_extent, MPI_Type_hvector, MPI_Type_struct (removed in MPI-3.0)
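
To locate such calls in a source tree, a grep along the following lines may help (a sketch using GNU grep; run it from the top-level source directory, the pattern covers the names listed above):

grep -rnE 'MPI_(Attr_(get|put|delete)|Keyval_(create|free)|Errhandler_(create|set)|Type_(struct|extent|hvector)|Address)\b' .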

OpenMPI is (at least on Debian) hardwired to reject the removed functions (#define OMPI_ENABLE_MPI1_COMPAT 0 in /usr/lib/x86_64-linux-gnu/openmpi/include/mpi.h and /usr/lib/x86_64-linux-gnu/fortran/gfortran-mod-15/openmpi/mpi.h). Changing 0 → 1 (by the admin) works; the compiler flag -Wno-deprecated-declarations is then recommended. A script like the following for configuring PETSc is advisable:
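
Whether the admin has already enabled the compatibility shims can be checked quickly (a sketch, assuming the Debian header path from above):

grep OMPI_ENABLE_MPI1_COMPAT /usr/lib/x86_64-linux-gnu/openmpi/include/mpi.h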

Debian (like merope, asterope, and electra)

Necessary packages:

  • make, python2-minimal
  • libopenmpi-dev or libmpich-dev
  • libblas-dev, liblapack-dev
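
These can be installed, for example, like this (a sketch; pick libmpich-dev instead of libopenmpi-dev for MPICH):

sudo apt-get install make python2-minimal libopenmpi-dev libblas-dev liblapack-dev
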
#!/bin/bash

# L.B. 2024

# Bash associative-array entries are separated by whitespace, not commas.
declare -A inc=(["openmpi"]="/usr/lib/x86_64-linux-gnu/openmpi/include"
                ["mpich"]="/usr/include/x86_64-linux-gnu/mpich")

declare -A lib=(["openmpi"]="[/usr/lib/x86_64-linux-gnu/libmpi.so,/usr/lib/x86_64-linux-gnu/libmpi_mpifh.so]"
                ["mpich"]="[/usr/lib/x86_64-linux-gnu/libmpich.so,/usr/lib/x86_64-linux-gnu/libmpichfort.so]")

# On Debian 12, the MPICH libraries are actually called
# "libmpich.so.12" and "libmpichfort.so.12", and the standard links
# .so -> .so.12 are missing. Ask your admin to fix that. Specifying
# the suffix ".12" doesn't work; it confuses configure.py.
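#
# A possible admin-side fix (a sketch, assuming the Debian 12 names
# above; to be run by root):
#   cd /usr/lib/x86_64-linux-gnu
#   ln -s libmpich.so.12     libmpich.so
#   ln -s libmpichfort.so.12 libmpichfort.so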

export PETSC_DIR=$PWD

opt=-O3  # Some prefer -O2.
mpi=openmpi

python2 ./config/configure.py PETSC_ARCH=debian_$mpi \
        --with-cc=mpicc.$mpi --with-cxx=mpicxx.$mpi --with-fc=mpif90.$mpi --with-mpiexec=mpirun.$mpi \
        --CFLAGS=-Wno-deprecated-declarations --COPTFLAGS=$opt --FOPTFLAGS=$opt \
        --with-x=0 --with-debugging=0 --download-hypre=1 \
        --with-mpi-include="${inc[$mpi]}" --with-mpi-lib="${lib[$mpi]}"
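
After configure finishes, run the build command it prints in its last line of output; for PETSc 3.1 that looks something like this (a sketch, with the openmpi arch from above, run from the PETSc directory):

make PETSC_DIR=$PWD PETSC_ARCH=debian_openmpi all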

magnitude

#!/bin/bash

# L.B. 2024

module -s load compiler/latest
module -s load mkl/latest
module -s load mpi/latest

export PETSC_DIR=$PWD

python2 ./config/configure.py PETSC_ARCH=magnitude_intel \
        --CFLAGS= --COPTFLAGS=-O3 --FOPTFLAGS=-O3 \
        --with-blas-lib=[libmkl_intel_lp64.a,libmkl_sequential.a,libmkl_core.a] --with-lapack-lib=libmkl_core.a \
        --with-x=0 --with-debugging=0 --download-hypre=1

# The Intel® oneAPI Math Kernel Library ILP64 libraries use the 64-bit
# integer type (necessary for indexing large arrays, with more than
# 2^31-1 elements), whereas the LP64 libraries index arrays with the
# 32-bit integer type.
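#
# (An untested sketch: if such large arrays were ever needed, one
# would link the ILP64 variants, e.g. libmkl_intel_ilp64.a, instead
# of the LP64 ones, and configure PETSc with --with-64-bit-indices.)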

Attention:

  • In the script or shell where you run the actual make (the command given in the last line of the configure script's output), you first need to load the above modules, too; see the sketch below.
  • On magnitude, downloading extra packages (like hypre) is not possible; you have to copy the sources from a computer in our local network to magnitude.
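
The build step could then look like this (a minimal sketch, assuming the configure run above succeeded and is run from the PETSc directory):

module -s load compiler/latest
module -s load mkl/latest
module -s load mpi/latest
make PETSC_DIR=$PWD PETSC_ARCH=magnitude_intel all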