A-Cluster
Linux cluster, currently with 13 compute nodes (CPUs: 416 cores; GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein
Login
The external hostname is a-cluster.physik.uni-due.de (134.91.59.16); the internal hostname is stor2.
Queueing system: Slurm
- There are two queues (partitions in Slurm terminology):
  - CPUs, the default
  - GPUs, to be selected via `-p GPUs` for jobs which involve a GPU
- `sinfo` displays the cluster's total load. `squeue` shows running jobs. You can modify its output via the option `-o`. To make that permanent, put something like `alias squeue='squeue -o "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o"'` into your `.bashrc`.
- In the simplest cases, jobs are submitted via `sbatch -n n script-name`. The number n of CPUs is available within the script as `$SLURM_NTASKS`. It is not necessary to pass it on to `mpirun`, since the latter evaluates it on its own anyway (see the sketch after this list).
- To allocate GPUs as well, add `-G n` or `--gpus=n` with n ∈ {1,2}. You can specify the type as well by prepending `rtx2080:` or `rtx3090:` to n.
- Don't use background jobs (`&`), unless you `wait` for them before the end of the script.
- `srun` is intended for interactive jobs (stdin, stdout, and stderr stay attached to the terminal), and its `-n` doesn't only reserve n cores but starts n jobs. (Those shouldn't contain `mpirun`, otherwise you'd end up with n² busy cores.)
- For an interactive shell with n reserved cores on a compute node: `srun --pty -c n bash`
- The assignment of cores can be non-trivial (cf. also task affinity); some rules:
  - gromacs: Don't use its `-pin` options.
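As an illustration of the points above, here is a minimal sketch of a job script for an MPI program; the program name my_mpi_program is a placeholder, not software that is necessarily installed:

#!/bin/bash
# Hypothetical sketch: submit e.g. via  sbatch -n 8 this-script
# $SLURM_NTASKS holds the value given with -n; mpirun picks it up on its own,
# so it is only echoed here for the job log.
echo "Running on $SLURM_NTASKS cores"
mpirun ./my_mpi_program

For a job that additionally needs a GPU, the same script would be submitted with, e.g., `sbatch -n 4 -G rtx3090:1 this-script`.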
Scientific Software
The following scientific software is installed (on the compute nodes):
AMBER
The module system is not involved. Instead, scripts provided by the software set the environment.
- /usr/local/amber18
- /usr/local/amber20 (provides parmed as well)
Script to source therein (assuming bash): amber.sh
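For example, a minimal sketch of an AMBER 20 job script might look as follows; the choice of sander and the input/output file names are placeholders, not a documented workflow on this cluster:

#!/bin/bash
# Hypothetical sketch: set up the AMBER 20 environment, then run sander.
source /usr/local/amber20/amber.sh
sander -O -i md.in -o md.out -p prmtop -c inpcrd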
GROMACS
The module system is not involved. Instead, scripts provided by the software set the environment.
Versions (not all tested):
- /usr/local/gromacs-2018.3
- /usr/local/gromacs-2020.4
- /usr/local/gromacs-3.3.4
- /usr/local/gromacs-4.6.4
- /usr/local/gromacs-5.0.1
- /usr/local/gromacs-5.1.1
Script to source therein (assuming bash): bin/GMXRC.bash
Ana provided an example script to be submitted via sbatch.
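That script is not reproduced here; purely as an illustration, a minimal sketch of a GROMACS job script (assuming version 2020.4 with a thread-MPI build; the run name md is a placeholder) could look like this:

#!/bin/bash
# Hypothetical sketch: set up the GROMACS 2020.4 environment, then run mdrun.
source /usr/local/gromacs-2020.4/bin/GMXRC.bash
# Don't use mdrun's -pin options on this cluster (see the rules above).
gmx mdrun -deffnm md -nt ${SLURM_NTASKS:-1}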
OpenMolcas
(compiled with Intel compiler and MKL)
Minimal example script to be sbatched:
#!/bin/bash
export MOLCAS=/usr/local/openmolcas
export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID
mkdir $MOLCAS_WORKDIR
export PATH=$PATH:$MOLCAS
export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64
export OMP_NUM_THREADS=${SLURM_NTASKS:-1}
pymolcas the_input.inp
# Emptying/removing $MOLCAS_WORKDIR in the end is recommended.
If you want/need to use the module system instead of setting LD_LIBRARY_PATH manually:
shopt -s expand_aliases
source /etc/profile.d/modules.sh
module use /opt/intel/oneapi/modulefiles
module -s load compiler/latest
module -s load mkl/latest
Intel Compiler & Co.
- is located in `/opt/intel/oneapi`
- must be made available via `module use /opt/intel/oneapi/modulefiles` (unless you include `/opt/intel/oneapi/modulefiles` in your `MODULEPATH`), then `module avail` lists the available modules.
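A minimal sketch of how one might use it, e.g. in a job script or interactive shell; the file hello.c, the choice of the LLVM-based icx compiler, and the -qmkl linking flag are assumptions, not a documented workflow on this cluster:

#!/bin/bash
# Hypothetical sketch: make the oneAPI modules visible, load compiler and MKL,
# then compile and run a small test program.
source /etc/profile.d/modules.sh
module use /opt/intel/oneapi/modulefiles
module load compiler/latest mkl/latest
icx -qmkl -o hello hello.c
./hello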