A-Cluster
A Linux cluster with currently 13 compute nodes (CPUs: 416 cores; GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein.
Login
The external hostname is `a-cluster.physik.uni-due.de` (134.91.59.16); the internal hostname is `stor2`.
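For example (the user name `jdoe` is a placeholder for your own account):

```bash
ssh jdoe@a-cluster.physik.uni-due.de
```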
Queueing system: Slurm
- There are two queues (partitions in Slurm terminology):
  - `CPUs`, being the default
  - `GPUs`, to be selected via `-p GPUs` for jobs which involve a GPU
- `sinfo` displays the cluster's total load. `squeue` shows running jobs; you can modify its output via the option `-o`. To make that permanent, put something like `alias squeue='squeue -o "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o"'` into your `.bashrc`.
- In the simplest cases, jobs are submitted via `sbatch -n n script-name`. The number n of CPUs is available within the script as `$SLURM_NTASKS`. It's not necessary to pass it on to `mpirun`, since the latter evaluates it on its own anyway.
- To allocate GPUs as well, add `-G n` or `--gpus=n` with n ∈ {1,2}. You can specify the type as well by prepending `rtx2080:` or `rtx3090:` to n (see the example script after this list).
- Don't use background jobs (`&`), unless you `wait` for them before the end of the script.
- `srun` is intended for interactive jobs (stdin, stdout and stderr stay attached to the terminal), and its `-n` doesn't only reserve n cores but starts n jobs. (Those shouldn't contain `mpirun`, otherwise you'd end up with n² busy cores.)
- For an interactive shell with n reserved cores on a compute node: `srun --pty -c n bash`
- The assignment of cores can be non-trivial (cf. also task affinity); some rules:
  - GROMACS: Don't use its `-pin` options.
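To illustrate the points above, here is a minimal sketch of a batch script (the program name is a placeholder) together with a possible submission command:

```bash
#!/bin/bash
# job.sh -- minimal sketch; ./my_mpi_program is a placeholder
echo "Running with $SLURM_NTASKS tasks"   # n from sbatch -n
mpirun ./my_mpi_program                   # no -np needed: mpirun reads the task count from Slurm
```

Submitted, e.g., with 16 cores and one RTX 3090 on the `GPUs` partition:

```bash
sbatch -p GPUs -n 16 --gpus=rtx3090:1 job.sh
```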
Simulation Software
... installed (on the compute nodes)
The module system is not involved. Instead, scripts provided by the software set the environment.
AMBER
- /usr/local/amber18
- /usr/local/amber20 (provides `parmed` as well)
Script to source therein (assuming bash): amber.sh
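For example, to set up AMBER 20 in a bash session:

```bash
# Load the AMBER 20 environment (bash assumed)
source /usr/local/amber20/amber.sh
```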
GROMACS
(not all tested)
- /usr/local/gromacs-2018.3
- /usr/local/gromacs-2020.4
- /usr/local/gromacs-3.3.4
- /usr/local/gromacs-4.6.4
- /usr/local/gromacs-5.0.1
- /usr/local/gromacs-5.1.1
Script to source therein (assuming bash): bin/GMXRC.bash
Ana provided an example script to be submitted via sbatch.
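Ana's script is not reproduced here; the following is merely a rough sketch of such a submission script, assuming GROMACS 2020.4, a thread-MPI build, and a prepared run input `topol.tpr` (a hypothetical file name):

```bash
#!/bin/bash
# gmx-job.sh -- sketch of a GROMACS batch script; submit e.g. via: sbatch -n 16 gmx-job.sh
source /usr/local/gromacs-2020.4/bin/GMXRC.bash   # set up the GROMACS environment
# Per the rules above: no -pin options; just hand the allocated core count to mdrun.
gmx mdrun -nt "$SLURM_NTASKS" -deffnm topol       # -nt assumes a thread-MPI build
```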
Intel Compiler & Co.
- is located in `/opt/intel/oneapi`
- must be made available via `module use /opt/intel/oneapi/modulefiles` (unless you include `/opt/intel/oneapi/modulefiles` in your `MODULEPATH`); then `module avail` lists the available modules.
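For example:

```bash
module use /opt/intel/oneapi/modulefiles   # register the oneAPI modulefiles
module avail                               # list the available modules
module load compiler                       # 'compiler' is an assumed module name; pick one from the list
```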