PETSc

From Arbeitsgruppe Kuiper
Revision as of 16:21, 13 June 2025

PETSc is the Portable, Extensible Toolkit for Scientific Computation.

Version 3.1-p8

(March 25, 2010)

That's the version needed by belt. When choosing to have packages downloaded during configuration, the versions are:

  • openmpi-1.4.1.tar.gz
  • mpich2-1.0.8.tar.gz
  • hypre-2.6.0b.tar.gz (uses deprecated MPI_Address(), MPI_Type_struct)

Problems when using the system's (more modern) MPI:

  • PETSc 3.1-p8 uses deprecated objects: MPI_Attr_get, MPI_Attr_put, MPI_Attr_delete, MPI_Keyval_create, MPI_Keyval_free, OMPI_C_MPI_NULL_COPY_FN, OMPI_C_MPI_NULL_DELETE_FN (deprecated since MPI-2.0) and MPI_Errhandler_create, MPI_Errhandler_set, MPI_Type_struct (removed in MPI-3.0)
  • Pluto4.1 itself uses deprecated objects (in Parallel/al_subarray_.c): MPI_Type_extent, MPI_Type_hvector, MPI_Type_struct (removed in MPI-3.0)
  • The configurator checks the existence of the function MPI_Comm_f2c() by calling it as MPI_Comm_f2c(MPI_COMM_WORLD) (in function configureConversion() in petsc-3.1-p8/config/BuildSystem/config/packages/MPI.py), but MPI_COMM_WORLD is not an integer. The failing check may finally result in "mpi.h" allegedly not being found. Patching MPI.py works.
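The broken probe from the last point can be reproduced outside the configure script. The following sketch assumes an MPI compiler wrapper `mpicc` on the PATH; it shows the corrected call sequence: <code>MPI_Comm_f2c()</code> expects an <code>MPI_Fint</code>, not an <code>MPI_Comm</code>, so a valid Fortran handle has to be obtained with <code>MPI_Comm_c2f()</code> first.

```shell
# Standalone version of the corrected MPI_Comm_f2c() probe
# (assumption: "mpicc" is on the PATH).
cat > /tmp/f2c_check.c <<'EOF'
#include <mpi.h>
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    MPI_Fint f = MPI_Comm_c2f(MPI_COMM_WORLD);  /* C -> Fortran handle */
    MPI_Comm  c = MPI_Comm_f2c(f);              /* Fortran -> C handle */
    int ok = (c == MPI_COMM_WORLD);             /* round-trip must agree */
    MPI_Finalize();
    return ok ? 0 : 1;
}
EOF
mpicc /tmp/f2c_check.c -o /tmp/f2c_check && /tmp/f2c_check && echo "MPI_Comm_f2c OK"
```

With OpenMPI, the original probe <code>MPI_Comm_f2c(MPI_COMM_WORLD)</code> fails because there <code>MPI_Comm</code> is a pointer type; with MPICH (<code>MPI_Comm</code> is an <code>int</code>) it happens to compile.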

OpenMPI is (at least on Debian) hardwired to reject removed functions (#define OMPI_ENABLE_MPI1_COMPAT 0 in /usr/lib/x86_64-linux-gnu/openmpi/include/mpi.h and /usr/lib/x86_64-linux-gnu/fortran/gfortran-mod-15/openmpi/mpi.h). Changing 0 → 1 (by the admin) works; the compiler flag -Wno-deprecated-declarations is then recommended. A script like the following for configuring PETSc is advisable:
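The admin-side change can be scripted; a sketch, using the two header paths named above (run as root):

```shell
# Flip the MPI-1 compatibility switch in both Debian OpenMPI headers
# so that the removed MPI-1 functions compile again.
for f in /usr/lib/x86_64-linux-gnu/openmpi/include/mpi.h \
         /usr/lib/x86_64-linux-gnu/fortran/gfortran-mod-15/openmpi/mpi.h
do
    sed -i 's/#define OMPI_ENABLE_MPI1_COMPAT 0/#define OMPI_ENABLE_MPI1_COMPAT 1/' "$f"
done
```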

Debian (like merope, asterope, and electra)

Necessary packages:

  • make, python2-minimal
  • libopenmpi-dev or libmpich-dev
  • libblas-dev, liblapack-dev
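The list above can be installed in one go (Debian, as root; pick one of the two MPI implementations):

```shell
# Install the build prerequisites; libopenmpi-dev or libmpich-dev.
apt-get install make python2-minimal libblas-dev liblapack-dev libopenmpi-dev
# For MPICH instead of (or in addition to) OpenMPI:
# apt-get install libmpich-dev
```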
#!/bin/bash

# L.B. 2024

declare -A inc=(["openmpi"]="/usr/lib/x86_64-linux-gnu/openmpi/include"
                ["mpich"]="/usr/include/x86_64-linux-gnu/mpich")

declare -A lib=(["openmpi"]="[/usr/lib/x86_64-linux-gnu/libmpi.so,/usr/lib/x86_64-linux-gnu/libmpi_mpifh.so]"
                ["mpich"]="[/usr/lib/x86_64-linux-gnu/libmpich.so,/usr/lib/x86_64-linux-gnu/libmpichfort.so]")

# Note: bash associative-array elements are separated by whitespace,
# not commas; a stray "," between the entries is a syntax error.

# On Debian 12, the MPICH libraries are actually called
# "libmpich.so.12" and "libmpichfort.so.12", and the standard links
# .so -> .so.12 are missing. Ask your admin to fix that. Specifying
# the suffix ".12" doesn't work; it confuses configure.py.

export PETSC_DIR=$PWD

opt=-O3  # Some prefer -O2.
mpi=openmpi

python2 ./config/configure.py PETSC_ARCH=debian_$mpi \
        --with-cc=mpicc.$mpi --with-cxx=mpicxx.$mpi --with-fc=mpif90.$mpi --with-mpiexec=mpirun.$mpi \
        --CFLAGS=-Wno-deprecated-declarations --COPTFLAGS=$opt --FOPTFLAGS=$opt \
        --with-x=0 --with-debugging=0 --download-hypre=1 \
        --with-mpi-include="${inc[$mpi]}" --with-mpi-lib="${lib[$mpi]}"
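The missing MPICH links mentioned in the comment inside the script can be created by the admin; a sketch, assuming the Debian 12 paths from that comment (run as root):

```shell
# Create the conventional .so -> .so.12 development links for MPICH
# (adjust the directory if the libraries live elsewhere).
cd /usr/lib/x86_64-linux-gnu
ln -s libmpich.so.12     libmpich.so
ln -s libmpichfort.so.12 libmpichfort.so
```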

magnitude

#!/bin/bash

# L.B. 2024

module -s load compiler/latest
module -s load mkl/latest
module -s load mpi/latest

export PETSC_DIR=$PWD

python2 ./config/configure.py PETSC_ARCH=magnitude_intel \
        --CFLAGS= --COPTFLAGS=-O3 --FOPTFLAGS=-O3 \
        --with-blas-lib=[libmkl_intel_lp64.a,libmkl_sequential.a,libmkl_core.a] --with-lapack-lib=libmkl_core.a \
        --with-x=0 --with-debugging=0 --download-hypre=1

# The Intel® oneAPI Math Kernel Library ILP64 libraries use the 64-bit
# integer type (necessary for indexing large arrays, with more than
# 2^31-1 elements), whereas the LP64 libraries index arrays with the
# 32-bit integer type.

Attention:

  • In your script or shell where you do the actual make (as given in the last line of output of the configure script), you first need to load the above modules, too.
  • On magnitude, downloading extra packages (like hypre) is not possible; you have to copy the sources from a computer in our local network to magnitude.
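Putting the first point into practice, a sketch of a build script for magnitude: it loads the same modules as the configure script above and then runs the make line that configure prints (the PETSC_ARCH below matches the configure script; check the actual line in your configure output):

```shell
#!/bin/bash

# Load the same toolchain as used for configuring.
module -s load compiler/latest
module -s load mkl/latest
module -s load mpi/latest

export PETSC_DIR=$PWD

# The make invocation printed at the end of configure.
make PETSC_DIR=$PETSC_DIR PETSC_ARCH=magnitude_intel all
```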