Compile-belt

From Arbeitsgruppe Kuiper

Latest revision as of 18:17, 24 March 2026

Overview

Compiling belt (from scratch) takes place in 5 steps:

  1. installing packages needed by PETSc (the Portable, Extensible Toolkit for Scientific Computation used by belt)
  2. configuring PETSc, which may include selecting a compiler and an MPI implementation
  3. making PETSc
  4. adapting makefile_user_machinedefs
  5. making belt

prerequisites for PETSc

MPI and linear algebra

If you're on a compute cluster or server, these packages will usually already be installed; there will probably be several versions, and you'll have to pick the right ones later. If you're on your own computer, you need

  • an implementation of the Message Passing Interface, namely OpenMPI (package libopenmpi-dev or openmpi-devel on Debian-like or Redhat-like distros, respectively) or MPICH (libmpich-dev, mpich-devel)
  • the linear algebra libraries BLAS ([open?] libblas-dev, blas-devel) and LAPACK (liblapack-dev, lapack-devel)

Python 2

belt needs an old PETSc version, namely 3.1-p8 from March 2011. That version needs Python < 3 for its configuration; the last Python 2 release (2.7.18) was discontinued in 2020 and dropped from the major Linux distros around 2024. If there is really no Python 2 on the machine, two possible solutions[1] are:

  1. Download pypy2.7, extract it to a suitable location in your $HOME (it will consume about 130MB) and use pypy2.7-v*-linux64/bin/python2 as the Python interpreter.
  2. As an alternative, install miniconda into your $HOME and create a Python 2.7 environment: conda create -n py27 python=2.7 (it will consume about 1GB). Don't install anything else into this environment, because the package binutils may be pulled in; it provides miniconda's version of the linker ld, which may later confuse make.


  1. The most lightweight variant would be to get a statically built python2.7 binary to work, or at least a portable one.

configuring PETSc

extracting and patching PETSc

  1. Extract petsc-3.1-p8 into a suitable location in your $HOME.
  2. Since it's old, PETSc's C-code and the configuration scripts need a bit of patching: Save the petsc.patch into the PETSc folder.

setting up the configuration

It is advisable to have a small bash script which carries out the configuration. Here's a minimal template (to be saved into the PETSc folder); adjust the path of pypy2.7 or miniconda if you haven't placed them directly in your home directory:

#!/bin/bash

[ -r include/newMPInames.h ] || patch -p0 < petsc.patch  # patch if not done, yet

python=python2  # probably not available right away, hence:
if [ -d $HOME/pypy2.7-v*-linux64 ]; then
    python=$HOME/pypy2.7-v*-linux64/bin/$python
elif [ -d $HOME/miniconda3 ]; then
    source $HOME/miniconda3/bin/activate py27
fi

opt=-O3  # Some prefer -O2.
export PETSC_DIR=$PWD
$python config/configure.py PETSC_ARCH=my_arch \
        --CFLAGS='-Wno-deprecated-declarations -Wno-unused-result -Wno-format-overflow' \
        --COPTFLAGS=$opt --FOPTFLAGS=$opt \
        --with-x=0 --with-debugging=0

The value for PETSC_ARCH can be chosen freely; you'll probably pick something more meaningful than my_arch, e.g. Debian_MPICH if that is your distro and chosen MPI. Several versions of PETSc with different settings can be kept simultaneously and chosen later in the makefile_user_machinedefs. Running the script will find BLAS/LAPACK and the chosen MPI[1] and will finish by printing a line make PETSC_DIR=... all.

  1. If not, it will suggest to add options to download them, but those will be outdated versions and will be compiled from source.

making PETSc

Copying and executing the make line from above will make PETSc, i.e. compile the library (which takes some time). A successful make will terminate by printing the same line, except all→test. Copying and executing that line will perform three tests.

makefile_user_machinedefs

In each of your run folders where a belt executable is to be created and run, a makefile_user_machinedefs specific to the computer in use is needed. You'll adapt it once and then copy it into your next run folders. But first you need the actual source code: download and extract belt from gitlab (or clone it) into a suitable location in your $HOME. This path will be needed now when editing makefile_user_machinedefs:

  • CC           = mpicc
  • FC           = mpifort
  • BELT_DIR     = $(HOME)/belt [1]
  • PETSC_DIR    = $(HOME)/petsc-3.1-p8 [1]
  • PETSC_ARCH   = your choice from PETSc's configuration step above
  • PETSC_LAPACK = lapack
  • PETSC_BLAS   = blas
  • PETSC_HYPRE  = NO
  • PETSC_ML     = NO
  • GFORTRAN     = NO
  • GSL          = NO


  1. Adjust to your chosen location.

making belt

Now it's just make (or make -j to speed things up by parallel compilation). After any changes to makefile_user files, .c files, or .h files, you'll repeat this step.

adjustments on specific computers

(to be continued)


almost obsolete

(and to be converted from Markdown to Wikitext)

## Belt initialization

1. Clone the belt repository from the Gitlab of the university `git clone https://git.uni-due.de/agkuiper/belt` using your university credentials.

2. Enter dependencies folder and unpack petsc-3.1-p8.tar with `tar -xvf petsc-3.1-p8.tar`.

3. Open the file PETSc compile.sh and copy the command for "Rolfs MacBook Pro". Enter petsc-3.1-p8 and execute the copied statement with python2 in the command line to compile PETSc.

4. Follow the instructions in the terminal to complete the setup.

5. Exit the dependencies folder, enter the tests44 folder, and open the `makefile_user_machinedefs` file. In this file we need to adjust the paths so that the code knows where the dependencies are. You find an example of the needed settings here:

  ```plaintext
   ################
   ##            ##
   ##  Compiler  ##
   ##            ##
   ################
   CC           = /home/your_name/Documents/belt/dependencies/petsc-3.1-p8/externalpackages/mpich2-1.0.8/bin/mpicc
   FC           = gfortran

   ############
   ##        ##
   ##  belt  ##
   ##        ##
   ############
   BELT_DIR     = /home/your_name/Documents/belt

   #############
   ##         ##
   ##  PETSc  ##
   ##         ##
   #############
   PETSC_DIR    = /home/your_name/Documents/belt/dependencies/petsc-3.1-p8
   PETSC_ARCH   = linux-gnu-c-opt
   PETSC_HYPRE  = YES
   PETSC_ML     = NO
   PETSC_LAPACK = flapack
   PETSC_BLAS   = fblas

   ################
   ##            ##
   ##  gfortran  ##
   ##            ##
   ################
   GFORTRAN         = YES
   GFORTRAN_LIB_DIR = /usr/lib64
   GFORTRAN_LIB     = gfortran

   ###########
   ##       ##
   ##  GSL  ##
   ##       ##
   ###########
   GSL          = NO
   GSL_DIR      = /Applications/Science/gsl/gsl-2.1-install
   GSL_LIB      = gsl
  ```

These may differ on your machine; please double-check! Make sure that there are no spaces at the end of any line; the code won't compile if there are. GSL is not needed, and you don't have to change anything there.

6. Choose a test setup, for example Grids/Spherical3D; enter the folder and create the folders `bin` and `data`.

7. Open the file `makefile_user_CFLAGS_FFLAGS_LDFLAGS` and comment out the LDFLAGS line: `#LDFLAGS +="-Wl,-no_compact_unwind"`. Instead write `LDFLAGS += -ldl`. Mind the spaces.

8. Now we are ready to run the first test. Type `make -j 8`, where `-j 8` sets the number of parallel compile jobs; it should not exceed the physical number of cores. Note that some individual setups (BonnerEbert-Sphere 1D) don't work in parallel yet.

9. Run the code with `./belt` on a single core, or in parallel with e.g. `/home/your_name/Documents/belt/dependencies/petsc-3.1-p8/externalpackages/mpich2-1.0.8/bin/mpiexec -n 8 ./belt`, adjusting the path and the number of cores.

### Additional stuff:

- Check the file in the log folder to see the progress of your code. There you find the current step of the calculation, as well as the normalization units to convert the outputs from code units to cgs. When the run is finished, the file also contains the total runtime at the bottom.

- If you stop the code during execution, you can restart it from the last output, e.g. `./belt -restart 191`.