A-Cluster
Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein
Login
External hostname is <code>a-cluster.physik.uni-due.de</code> (134.91.59.16), the internal hostname is <code>stor2</code>.
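For example, to log in from an external machine via SSH (<code>username</code> is a placeholder for your cluster account):

<pre>
ssh username@a-cluster.physik.uni-due.de
</pre>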
Queueing system: Slurm
- <code>sinfo</code> displays the cluster's total load; <code>squeue</code> shows running jobs.
- Currently, there's just one partition: "a-cluster".
- In the simplest cases, jobs are submitted via <code>sbatch -n n script-name</code> (see the sketch after this list). The number n of CPUs is available within the script as <code>$SLURM_NTASKS</code>. It's not necessary to pass it on to <code>mpirun</code>, since the latter evaluates it on its own anyway.
- <code>srun</code> is intended for interactive jobs (stdin, stdout, and stderr stay attached to the terminal), and its <code>-n</code> doesn't just reserve n cores but starts n jobs. (Those shouldn't contain <code>mpirun</code>, otherwise you'd end up with n² busy cores.)
- The assignment of cores can be non-trivial (cf. also task affinity); some rules:
  - gromacs: Don't use its <code>-pin</code> options.
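A minimal batch-script sketch; the script name <code>run_md.sh</code> and the executable name are placeholders:

<pre>
#!/bin/bash
# Submit with:  sbatch -n 16 run_md.sh
# Slurm sets $SLURM_NTASKS to the number of CPUs requested via -n.
echo "Job runs on $SLURM_NTASKS cores"

# No core count needs to be passed: mpirun evaluates the Slurm allocation itself.
mpirun my_mpi_program
</pre>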
Simulation Software
... installed (on the compute nodes)
The module system is not involved. Instead, scripts provided by the software set the environment.
AMBER
/usr/local/amber18
/usr/local/amber20 (provides <code>parmed</code> as well)
Script to source therein (assuming bash): <code>amber.sh</code>
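A short sketch of how a job script would set up AMBER, here using version 20 (the version choice is only an example):

<pre>
# Set up the AMBER 20 environment in a bash (job) script:
source /usr/local/amber20/amber.sh
# AMBER tools (e.g. parmed) are then found via the environment set by that script.
</pre>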
GROMACS
(not all tested)
/usr/local/gromacs-2018.3
/usr/local/gromacs-2020.4
/usr/local/gromacs-3.3.4
/usr/local/gromacs-4.6.4
/usr/local/gromacs-5.0.1
/usr/local/gromacs-5.1.1
Script to source therein (assuming bash): <code>bin/GMXRC.bash</code>
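Analogously for GROMACS, e.g. version 2020.4 (the version choice is only an example):

<pre>
# Set up the GROMACS 2020.4 environment in a bash (job) script:
source /usr/local/gromacs-2020.4/bin/GMXRC.bash
# The gmx binary is then on $PATH; remember not to use its -pin options (see above).
</pre>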
Intel Compiler & Co.
- is located in <code>/opt/intel/oneapi</code>
- must be made available via <code>module use /opt/intel/oneapi/modulefiles</code> (unless you include <code>/opt/intel/oneapi/modulefiles</code> in your <code>MODULEPATH</code>); then <code>module avail</code> lists the available modules.
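A short shell sketch (the module name <code>compiler</code> is only an example; check <code>module avail</code> for what is actually installed):

<pre>
# Make the oneAPI modulefiles visible to the module command and list them:
module use /opt/intel/oneapi/modulefiles
module avail

# Load one of the listed modules, e.g. the compiler:
module load compiler
</pre>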