A-Cluster

A Linux cluster, currently with 13 compute nodes (416 CPU cores in total; GPUs: 8x RTX 2080 and 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein.

Login

The external address is 134.91.59.31 (it will change soon and will then get a hostname); the internal hostname is stor2.
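
For example, to log in via SSH (the username is a placeholder):

ssh yourname@134.91.59.31   # from outside
ssh yourname@stor2          # from within the internal network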

Queueing system: Slurm

  • sinfo displays the cluster's overall load.
  • squeue shows running jobs.
  • Currently there is just one partition: "a-cluster".
  • In the simplest cases, jobs are submitted via sbatch -n n script-name. The number n of CPUs is available within the script as $SLURM_NTASKS. It's not necessary to pass it on to mpirun, since the latter picks it up from Slurm on its own anyway; see the example script after this list.
  • srun is intended for interactive jobs (stdin, stdout, and stderr stay attached to the terminal), and its -n not only reserves n cores but starts n tasks. (Those shouldn't invoke mpirun themselves, otherwise you'd end up with n² busy cores.)
  • The assignment of cores can be non-trivial (cf. also task affinity); some rules:
    • GROMACS: Don't use its -pin options.
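
A minimal job-script sketch (the MPI program name is a placeholder), submitted with sbatch -n 8 run.sh, so that $SLURM_NTASKS equals 8:

#!/bin/bash
echo "Running on $SLURM_NTASKS cores"
# No -np needed: mpirun picks up the core count from Slurm on its own.
mpirun my_mpi_program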

Simulation Software

The following packages are installed on the compute nodes.

Since only a single path has to be adjusted, the module system is not involved. Instead, activate the desired version in your job script via:

export PATH=/usr/local/version/bin:$PATH
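
For example, to select one of the GROMACS versions listed below:

export PATH=/usr/local/gromacs-2020.4/bin:$PATH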

AMBER

  • /usr/local/amber18
  • /usr/local/amber20 (provides parmed as well)
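
With amber20 on the PATH, parmed can be invoked directly; a minimal sketch (the topology and input file names are placeholders):

export PATH=/usr/local/amber20/bin:$PATH
parmed -p system.prmtop -i edit.in   # parmed's usual -p/-i options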

GROMACS

(not all tested)

  • /usr/local/gromacs-2018.3
  • /usr/local/gromacs-2020.4
  • /usr/local/gromacs-3.3.4
  • /usr/local/gromacs-4.6.4
  • /usr/local/gromacs-5.0.1
  • /usr/local/gromacs-5.1.1
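
A sketch of a corresponding job-script snippet (the binary name gmx_mpi is an assumption and depends on how the version was built); per the rule above, no -pin options are passed:

export PATH=/usr/local/gromacs-2020.4/bin:$PATH
mpirun gmx_mpi mdrun -deffnm topol   # core pinning is left to Slurm/mpirun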

Intel Compiler & Co.

  • It is located in /opt/intel/oneapi.
  • It must be made available via module use /opt/intel/oneapi/modulefiles (unless you add /opt/intel/oneapi/modulefiles to your MODULEPATH); module avail then lists the available modules. A short sketch follows this list.
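
A minimal sketch (the module name in the last line is an assumption; check the output of module avail for the actual names):

module use /opt/intel/oneapi/modulefiles   # or add this path to your MODULEPATH
module avail                               # lists the available modules
module load compiler                       # hypothetical module name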