A-Cluster

From IT Physics

Version as of 30 September 2021, 18:25

Linux cluster, currently with 13 compute nodes (CPUs: 416 cores; GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein.

Login

The external address is 134.91.59.31 (it will change soon and then get a hostname); the internal hostname is stor2.

Queueing system: Slurm

  • sinfo displays the cluster's total load.
  • squeue shows running jobs.
  • Currently, there is just one partition: "a-cluster".
  • In the simplest cases, jobs are submitted via sbatch -n n script-name. The number n of CPUs is available within the script as $SLURM_NTASKS. It need not be passed on to mpirun, which evaluates it on its own anyway.
  • srun is intended for interactive jobs (stdin, stdout, and stderr stay attached to the terminal); its -n not only reserves n cores but starts n tasks. (Those should not contain mpirun, otherwise you would end up with n² busy cores.)
  • The assignment of cores can be non-trivial (cf. also task affinity); some rules:
    • GROMACS: don't use its -pin options.
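The submission rules above might be combined into a minimal batch script, for example (a sketch: the job name, program name, and task count are placeholders, not from this page; the partition name "a-cluster" is):

```shell
#!/bin/bash
# example-job.sh -- minimal sketch of a batch job for the "a-cluster" partition.
#SBATCH --partition=a-cluster
#SBATCH --job-name=example

# $SLURM_NTASKS holds the n given via `sbatch -n n`.
echo "running with $SLURM_NTASKS tasks"

# mpirun picks the task count up from the Slurm environment on its own,
# so no explicit -np is needed here.
mpirun ./my_program
```

Submitted, e.g., with sbatch -n 8 example-job.sh.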

Simulation Software

... installed (on the compute nodes)

The module system is not involved. Instead, scripts provided by the software set the environment.

AMBER

  • /usr/local/amber18
  • /usr/local/amber20 (provides parmed as well)

Script to source therein (assuming bash): amber.sh
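Putting this together, an AMBER job script could look like the following sketch (the install path and amber.sh are from this page; the MD engine invocation and all input/output file names are placeholders):

```shell
#!/bin/bash
#SBATCH --partition=a-cluster
# Sketch of an AMBER 20 job: sourcing amber.sh sets AMBERHOME and adjusts
# PATH so the AMBER binaries (including parmed) are found.
source /usr/local/amber20/amber.sh

# Placeholder MD run; file names (md.in, prmtop, inpcrd, md.out) are examples.
mpirun pmemd.MPI -O -i md.in -p prmtop -c inpcrd -o md.out
```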

GROMACS

(not all tested)

  • /usr/local/gromacs-2018.3
  • /usr/local/gromacs-2020.4
  • /usr/local/gromacs-3.3.4
  • /usr/local/gromacs-4.6.4
  • /usr/local/gromacs-5.0.1
  • /usr/local/gromacs-5.1.1

Script to source therein (assuming bash): bin/GMXRC.bash
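Analogously, a GROMACS job script could look like this sketch (install path and bin/GMXRC.bash are from this page; whether the MPI binary is called gmx_mpi or gmx depends on the build, and the run name is a placeholder):

```shell
#!/bin/bash
#SBATCH --partition=a-cluster
# Sketch of a GROMACS 2020.4 job: GMXRC.bash sets up PATH and the
# GROMACS environment variables.
source /usr/local/gromacs-2020.4/bin/GMXRC.bash

# Per the rule above, do NOT use mdrun's -pin options on this cluster.
mpirun gmx_mpi mdrun -deffnm topol   # "topol" is a placeholder run name
```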

Intel Compiler & Co.

  • is located in /opt/intel/oneapi
  • must be made available via module use /opt/intel/oneapi/modulefiles (unless /opt/intel/oneapi/modulefiles is already in your MODULEPATH); afterwards, module avail lists the available modules.
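A typical session might look like the following sketch (the modulefiles path is from this page; the exact module names and the icx compiler driver depend on the installed oneAPI version and are assumptions here):

```shell
# Make the Intel oneAPI modulefiles visible to the module system.
module use /opt/intel/oneapi/modulefiles

# List what is available, then load a module, e.g. the compiler.
module avail
module load compiler    # module name may differ per oneAPI version

# Check that the compiler is on PATH (icx is oneAPI's C compiler driver).
icx --version
```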