<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="de">
	<id>https://wiki.uni-due.de/ittp/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Brendel</id>
	<title>IT Physics - User contributions [de]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.uni-due.de/ittp/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Brendel"/>
	<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=Spezial:Beitr%C3%A4ge/Brendel"/>
	<updated>2026-04-26T17:59:55Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.10</generator>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=128</id>
		<title>Zeon</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=128"/>
		<updated>2026-04-16T13:35:35Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Intel oneAPI */ /latest&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= New Opterox-Login =&lt;br /&gt;
&lt;br /&gt;
Server is &amp;lt;code&amp;gt;zeon.physik.uni-due.de&amp;lt;/code&amp;gt;, home directories are the same.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; shows overall load and available queues (&#039;&#039;partitions&#039;&#039;). The default one is marked with &amp;lt;code&amp;gt;*&amp;lt;/code&amp;gt;.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* simplest submission: &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039; (see the example below)&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; &#039;&#039;job-id&#039;&#039; kills job.&lt;br /&gt;
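&lt;br /&gt;
A typical round trip (the core count, script name and job ID below are just placeholders):&lt;br /&gt;
&lt;br /&gt;
  sbatch -n 4 run.sh     # submit run.sh on 4 cores&lt;br /&gt;
  squeue                 # check its state&lt;br /&gt;
  scancel 12345          # cancel it, using the job ID printed by sbatch&lt;br /&gt;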
&lt;br /&gt;
= Intel oneAPI =&lt;br /&gt;
&lt;br /&gt;
Intel compilers, MKL and MPI are installed, must be selected via the [https://modules.readthedocs.io/en/latest module system]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include that directory in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the installed modules, relevant are:&lt;br /&gt;
** compiler/latest → &amp;lt;code&amp;gt;icx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;ifx&amp;lt;/code&amp;gt;, ... (the [https://www.intel.com/content/www/us/en/developer/articles/guide/porting-guide-for-icc-users-to-dpcpp-or-icx.html new Intel compilers])&lt;br /&gt;
** icc/latest → &amp;lt;code&amp;gt;icc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpc&amp;lt;/code&amp;gt; (the [https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/DEPRECATION-NOTICE-Intel-C-Compiler-Classic/m-p/1506693 deprecated C/C++ compilers])&lt;br /&gt;
** ifort/latest → &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt; (the [https://community.intel.com/t5/Blogs/Tech-Innovation/Tools/A-Historic-Moment-for-The-Intel-Fortran-Compiler-Classic-ifort/post/1614625 deprecated Fortran compiler])&lt;br /&gt;
** mpi/latest → &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifx&amp;lt;/code&amp;gt;,  &amp;lt;code&amp;gt;mpiicc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;, ... (Attention: The &amp;lt;code&amp;gt;mpii&amp;lt;/code&amp;gt;*s are only wrappers; the actual compiler module needs to be loaded, too.)&lt;br /&gt;
** mkl/latest&lt;br /&gt;
* &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; ... loads a module.&lt;br /&gt;
&lt;br /&gt;
(You can actually omit &amp;lt;code&amp;gt;/latest&amp;lt;/code&amp;gt; for the choices above.)&lt;br /&gt;
&lt;br /&gt;
Inside a shell script (bash), you have to do &amp;lt;code&amp;gt;shopt -s expand_aliases&amp;lt;/code&amp;gt; in order to be able to use the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
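&lt;br /&gt;
For example, a minimal job script (to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed) that loads the Intel toolchain and compiles and runs an MPI program could look like this sketch (&amp;lt;code&amp;gt;hello.c&amp;lt;/code&amp;gt; is just a placeholder):&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module load compiler mpi mkl&lt;br /&gt;
  &lt;br /&gt;
  # compile with the Intel MPI wrapper around icx, then run on the allocated cores&lt;br /&gt;
  mpiicx -O2 -o hello hello.c&lt;br /&gt;
  mpirun ./hello&lt;br /&gt;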
&lt;br /&gt;
== Pitfalls ==&lt;br /&gt;
&lt;br /&gt;
* The &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; system does not put some Intel libraries (like libcilkrts.so.5&amp;lt;sup&amp;gt;*&amp;lt;/sup&amp;gt;) on the library path. If needed, locate the library and append its directory:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/oneapi/&amp;lt;/code&amp;gt;...&lt;br /&gt;
* &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;: When compiling with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt;, you also have to link with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt; (cf. also [https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/command-reference/compiler-commands/compilation-command-options.html manual]).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;*&amp;lt;/sup&amp;gt;: currently in /opt/intel/oneapi/compiler/2023.2.4/linux/bin/intel64&lt;br /&gt;
&lt;br /&gt;
= [https://www.open-mpi.org/doc/current OpenMPI] =&lt;br /&gt;
&lt;br /&gt;
* The setting &amp;lt;code&amp;gt;export OMPI_MCA_pml=ucx&amp;lt;/code&amp;gt; is necessary to prevent a &amp;quot;failed OFI Libfabric library call&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=127</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=127"/>
		<updated>2025-12-18T16:40:52Z</updated>

		<summary type="html">&lt;p&gt;Brendel: +g4pu18, +g5pu19, /scratch&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 17 compute nodes (CPUs: 800 cores, GPUs: 8× RTX 2080 + 23× RTX 3090 + 6× RTX 4090 + 2× RTX 5090) and 2×316TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are four queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default. The nodes have two Intel CPUs each, Xeon Gold 6226R or 6346R. To ensure running on a node with the latter, you need to specify the option &amp;lt;code&amp;gt;-C X6346R&amp;lt;/code&amp;gt;.&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
** &#039;&#039;AMD&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p AMD&amp;lt;/code&amp;gt;, which contains only the nodes g4pu17, g4pu18 and g5pu19 (having AMD CPUs [https://www.amd.com/de/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9354p.html EPYC 9354P],  [https://www.amd.com/de/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9554.html EPYC 9554] and [https://www.amd.com/de/products/processors/server/epyc/9005-series/amd-epyc-9745.html EPYC 9745], respectively, instead of Intel CPUs).&lt;br /&gt;
** &#039;&#039;Test&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p Test&amp;lt;/code&amp;gt; for test jobs with a maximum running time of 10 minutes. (Compute node &#039;&#039;gpu01&#039;&#039; is reserved exclusively for this queue.)&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores remain reserved on each node for GPU jobs, resulting in 30 available cores per node.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039; (see the example script after this list). The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NPROCS&amp;lt;/code&amp;gt; and (equivalently) &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;rtx4090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; can be used as a substitute for &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; or for interactive jobs (stdin+stdout+stderr stay attached to the terminal); its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; does not just reserve &#039;&#039;n&#039;&#039; cores but actually starts &#039;&#039;n&#039;&#039; tasks. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; again, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* If you launch the actual &#039;&#039;n&#039;&#039; workers yourself (instead of using &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt;), read &#039;&#039;n&#039;&#039; from the environment variables  &amp;lt;code&amp;gt;SLURM_NPROCS&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;SLURM_NTASKS&amp;lt;/code&amp;gt;.&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* If you want to avoid certain nodes, you can specify their names to the option &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
* There are restrictions per user:&lt;br /&gt;
** You cannot use more than 384 CPU cores simultaneously.&lt;br /&gt;
** You cannot have more than 128 submitted jobs. If you have many, many runs with just varying parameters, consider using [https://slurm.schedmd.com/job_array.html job arrays].&lt;br /&gt;
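&lt;br /&gt;
Putting the options above together, a minimal job script and its submission might look like the following sketch (the program name and the resource numbers are just placeholders):&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  # job.sh -- runs an MPI program on the cores granted by sbatch&lt;br /&gt;
  echo running with $SLURM_NTASKS tasks&lt;br /&gt;
  mpirun ./my_mpi_prog    # mpirun picks up the task count on its own&lt;br /&gt;
&lt;br /&gt;
Submission with e.g. 16 cores on the default &#039;&#039;CPUs&#039;&#039; partition, or with one RTX 3090 on the &#039;&#039;GPUs&#039;&#039; partition:&lt;br /&gt;
&lt;br /&gt;
  sbatch -n 16 job.sh&lt;br /&gt;
  sbatch -p GPUs -n 2 -G rtx3090:1 job.sh&lt;br /&gt;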
&lt;br /&gt;
= GPUs = &lt;br /&gt;
&lt;br /&gt;
There are GPUs on each node (2× RTX2080 on gpu01-04, 2× RTX3090 on g3pu05-16, 4× RTX4090 on g4pu17, 2× RTX4090 on g4pu18, 2× RTX5090 on g5pu19). After having requested GPUs (cf. above), you&#039;ll find the ID(s) \(\in\{0,...,3\}\) of the GPU(s) assigned to your job in the environment variable &amp;lt;code&amp;gt;SLURM_STEP_GPUS&amp;lt;/code&amp;gt; as well as in &amp;lt;code&amp;gt;GPU_DEVICE_ORDINAL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The command &amp;lt;code&amp;gt;sgpus&amp;lt;/code&amp;gt; (no manpage) displays the number of unallocated GPUs on each node.&lt;br /&gt;
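&lt;br /&gt;
For example, a job script could log which device(s) it was given (just a sketch):&lt;br /&gt;
&lt;br /&gt;
  echo GPU device ID(s) for this job: $SLURM_STEP_GPUS&lt;br /&gt;
  echo same information: $GPU_DEVICE_ORDINAL&lt;br /&gt;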
&lt;br /&gt;
= Scratch space =&lt;br /&gt;
&lt;br /&gt;
If your job makes heavy use of temporary files, you shouldn&#039;t have them in your home directory (to avoid too much network traffic). Each node has at least 375GiB disk space available in &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt;, where you should create &amp;lt;code&amp;gt;/scratch/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt; (to avoid cluttering) and wipe it at the end of your job.&lt;br /&gt;
&lt;br /&gt;
Four nodes (g3pu07-10) have a larger scratch directory &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; of 3.4TiB capacity. To use it, you have to specify &amp;lt;code&amp;gt;--gres=scratch:&amp;lt;/code&amp;gt;&#039;&#039;X&#039;&#039; upon submission, where &#039;&#039;X&#039;&#039; is the amount of scratch space you intend to use, in GiB (max 3480). (This amount is not checked during the job&#039;s runtime.)&lt;br /&gt;
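&lt;br /&gt;
A corresponding pattern inside a job script (a sketch):&lt;br /&gt;
&lt;br /&gt;
  SCRATCHDIR=/scratch/$USER/$SLURM_JOBID&lt;br /&gt;
  mkdir -p $SCRATCHDIR&lt;br /&gt;
  &lt;br /&gt;
  # ... run the actual computation with its temporary files placed in $SCRATCHDIR ...&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $SCRATCHDIR    # wipe the scratch directory at the end of the job&lt;br /&gt;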
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
== On the Login Node (&#039;&#039;stor2&#039;&#039;) ==&lt;br /&gt;
&lt;br /&gt;
* VMD 1.9.3 (which can [[VMD/create movies|create movies]])&lt;br /&gt;
&lt;br /&gt;
== On the Compute Nodes ==&lt;br /&gt;
&lt;br /&gt;
=== AMBER ===&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
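&lt;br /&gt;
For example, to use AMBER 20 in a job script (a sketch):&lt;br /&gt;
&lt;br /&gt;
  source /usr/local/amber20/amber.sh&lt;br /&gt;
  # amber.sh sets $AMBERHOME and puts the AMBER executables on $PATH&lt;br /&gt;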
&lt;br /&gt;
=== GROMACS ===&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
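&lt;br /&gt;
In the simplest case, sourcing the GMXRC file is all that is needed, e.g. for the 2020.4 installation (a sketch; the run name is a placeholder):&lt;br /&gt;
&lt;br /&gt;
  source /usr/local/gromacs-2020.4/bin/GMXRC.bash&lt;br /&gt;
  gmx mdrun -deffnm md    # note: do not use the -pin options (see above)&lt;br /&gt;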
&lt;br /&gt;
=== OpenMM + open forcefield ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;conda activate openforcefield&amp;lt;/code&amp;gt;&lt;br /&gt;
* installed openff components: forceBalance, geomeTRIC, openFF toolkit, openFF evaluator, TorsionDrive, pyMBAR&lt;br /&gt;
* also installed: jupyterlab&lt;br /&gt;
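&lt;br /&gt;
In a job script, the two activation steps above come first (a sketch; the Python script is a placeholder):&lt;br /&gt;
&lt;br /&gt;
  source /usr/local/miniconda3/bin/activate&lt;br /&gt;
  conda activate openforcefield&lt;br /&gt;
  python my_openmm_run.py&lt;br /&gt;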
&lt;br /&gt;
=== OpenMolcas ===&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Compilers &amp;amp; Interpreters =&lt;br /&gt;
&lt;br /&gt;
== Intel Compiler &amp;amp; Co. ==&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; also contains FFT routines.&lt;br /&gt;
&lt;br /&gt;
== Python ==&lt;br /&gt;
&lt;br /&gt;
The Python version provided by the system is currently 3.9. If you need a newer version, you can make 3.12 available by doing (in bash):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate py312&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv1&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv1/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv1/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=126</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=126"/>
		<updated>2025-02-05T21:49:01Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Queueing system: Slurm */ NPROCS = NTASKS&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 17 compute nodes (CPUs: 544 cores, GPUs: 8× RTX 2080 + 24× RTX 3090 + 4× RTX 4090) and 2×316TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are four queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default. The nodes have two Intel CPUs each, Xeon Gold 6226R or 6346R. To ensure running on a node with the latter, you need to specify the option &amp;lt;code&amp;gt;-C X6346R&amp;lt;/code&amp;gt;.&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
** &#039;&#039;AMD&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p AMD&amp;lt;/code&amp;gt;, which contains only the nodes g4pu17 and g4pu18 (having AMD CPUs [https://www.amd.com/de/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9354p.html EPYC 9354P] and [https://www.amd.com/de/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9554.html EPYC 9554], respectively, instead of Intel CPUs).&lt;br /&gt;
** &#039;&#039;Test&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p Test&amp;lt;/code&amp;gt; for test jobs with a maximum running time of 10 minutes. (Compute node &#039;&#039;gpu01&#039;&#039; is reserved exclusively for this queue.)&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores remain reserved on each node for GPU jobs, resulting in 30 available cores per node.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NPROCS&amp;lt;/code&amp;gt; and (equivalently) &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;rtx4090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; can be used as a substitute for &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; or for interactive jobs (stdin+stdout+stderr stay attached to the terminal); its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; does not just reserve &#039;&#039;n&#039;&#039; cores but actually starts &#039;&#039;n&#039;&#039; tasks. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; again, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* If you launch the actual &#039;&#039;n&#039;&#039; workers yourself (instead of using &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt;), read &#039;&#039;n&#039;&#039; from the environment variables  &amp;lt;code&amp;gt;SLURM_NPROCS&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;SLURM_NTASKS&amp;lt;/code&amp;gt;.&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* If you want to avoid certain nodes, you can specify their names to the option &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
* There are restrictions per user:&lt;br /&gt;
** You cannot use more than 384 CPU cores simultaneously.&lt;br /&gt;
** You cannot have more than 128 submitted jobs. If you have many, many runs with just varying parameters, consider using [https://slurm.schedmd.com/job_array.html job arrays].&lt;br /&gt;
&lt;br /&gt;
= GPUs = &lt;br /&gt;
&lt;br /&gt;
There are GPUs on each node (2× RTX2080 on gpu01-04, 2× RTX3090 on g3pu05-16, 4× RTX4090 on g4pu17). After having requested GPUs (cf. above), you&#039;ll find the ID(s) \(\in\{0,...,3\}\) of the GPU(s) assigned to your job in the environment variable &amp;lt;code&amp;gt;SLURM_STEP_GPUS&amp;lt;/code&amp;gt; as well as in &amp;lt;code&amp;gt;GPU_DEVICE_ORDINAL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The command &amp;lt;code&amp;gt;sgpus&amp;lt;/code&amp;gt; (no manpage) displays the number of unallocated GPUs on each node.&lt;br /&gt;
&lt;br /&gt;
= Scratch space =&lt;br /&gt;
&lt;br /&gt;
If your job makes heavy use of temporary files, you shouldn&#039;t have them in your home directory (to avoid too much network traffic). Each node has about 400GiB disk space available in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;, where you should create &amp;lt;code&amp;gt;/tmp/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt; (to avoid cluttering) and wipe it at the end of your job.&lt;br /&gt;
&lt;br /&gt;
Four nodes (g3pu07-10) have a dedicated scratch directory &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; of 3.4TiB capacity, where you should create (and later wipe) &amp;lt;code&amp;gt;/scratch/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt;. To use it, you have to specify &amp;lt;code&amp;gt;--gres=scratch:&amp;lt;/code&amp;gt;&#039;&#039;X&#039;&#039; upon submission, where &#039;&#039;X&#039;&#039; is the amount of scratch space you intend to use, in GiB (max 3480). (This amount is not checked during the job&#039;s runtime.)&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
== On the Login Node (&#039;&#039;stor2&#039;&#039;) ==&lt;br /&gt;
&lt;br /&gt;
* VMD 1.9.3 (which can [[VMD/create movies|create movies]])&lt;br /&gt;
&lt;br /&gt;
== On the Compute Nodes ==&lt;br /&gt;
&lt;br /&gt;
=== AMBER ===&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GROMACS ===&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== OpenMM + open forcefield ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;conda activate openforcefield&amp;lt;/code&amp;gt;&lt;br /&gt;
* installed openff components: forceBalance, geomeTRIC, openFF toolkit, openFF evaluator, TorsionDrive, pyMBAR&lt;br /&gt;
* also installed: jupyterlab&lt;br /&gt;
&lt;br /&gt;
=== OpenMolcas ===&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Compilers &amp;amp; Interpreters =&lt;br /&gt;
&lt;br /&gt;
== Intel Compiler &amp;amp; Co. ==&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; also contains FFT routines.&lt;br /&gt;
&lt;br /&gt;
== Python ==&lt;br /&gt;
&lt;br /&gt;
The Python version provided by the system is currently 3.9. If you need a newer version, you can make 3.12 available by doing (in bash):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate py312&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv1&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv1/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv1/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=125</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=125"/>
		<updated>2025-02-05T20:33:12Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Queueing system: Slurm */ srun, SLURM_NPROCS&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 17 compute nodes (CPUs: 544 cores, GPUs: 8× RTX 2080 + 24× RTX 3090 + 4× RTX 4090) and 2×316TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are four queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default. The nodes have two Intel CPUs each, Xeon Gold 6226R or 6346R. To ensure running on a node with the latter, you need to specify the option &amp;lt;code&amp;gt;-C X6346R&amp;lt;/code&amp;gt;.&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
** &#039;&#039;AMD&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p AMD&amp;lt;/code&amp;gt;, which contains only the nodes g4pu17 and g4pu18 (having AMD CPUs [https://www.amd.com/de/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9354p.html EPYC 9354P] and [https://www.amd.com/de/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9554.html EPYC 9554], respectively, instead of Intel CPUs).&lt;br /&gt;
** &#039;&#039;Test&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p Test&amp;lt;/code&amp;gt; for test jobs with a maximum running time of 10 minutes. (Compute node &#039;&#039;gpu01&#039;&#039; is reserved exclusively for this queue.)&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores remain reserved on each node for GPU jobs, resulting in 30 available cores per node.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;rtx4090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; can be used as a substitute for &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; or for interactive jobs (stdin+stdout+stderr stay attached to the terminal); its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; does not just reserve &#039;&#039;n&#039;&#039; cores but actually starts &#039;&#039;n&#039;&#039; tasks. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; again, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* If you launch the actual &#039;&#039;n&#039;&#039; workers yourself (instead of using &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt;), read &#039;&#039;n&#039;&#039; from the environment variable &amp;lt;code&amp;gt;SLURM_NPROCS&amp;lt;/code&amp;gt;.&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* If you want to avoid certain nodes, you can specify their names to the option &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
* There are restrictions per user:&lt;br /&gt;
** You cannot use more than 384 CPU cores simultaneously.&lt;br /&gt;
** You cannot have more than 128 submitted jobs. If you have many, many runs with just varying parameters, consider using [https://slurm.schedmd.com/job_array.html job arrays].&lt;br /&gt;
&lt;br /&gt;
= GPUs = &lt;br /&gt;
&lt;br /&gt;
There are GPUs on each node (2× RTX2080 on gpu01-04, 2× RTX3090 on g3pu05-16, 4× RTX4090 on g4pu17). After having requested GPUs (cf. above), you&#039;ll find the ID(s) \(\in\{0,...,3\}\) of the GPU(s) assigned to your job in the environment variable &amp;lt;code&amp;gt;SLURM_STEP_GPUS&amp;lt;/code&amp;gt; as well as in &amp;lt;code&amp;gt;GPU_DEVICE_ORDINAL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The command &amp;lt;code&amp;gt;sgpus&amp;lt;/code&amp;gt; (no manpage) displays the number of unallocated GPUs on each node.&lt;br /&gt;
&lt;br /&gt;
= Scratch space =&lt;br /&gt;
&lt;br /&gt;
If your job makes heavy use of temporary files, you shouldn&#039;t have them in your home directory (to avoid too much network traffic). Each node has about 400GiB disk space available in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;, where you should create &amp;lt;code&amp;gt;/tmp/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt; (to avoid cluttering) and wipe it at the end of your job.&lt;br /&gt;
&lt;br /&gt;
Four nodes (g3pu07-10) have a dedicated scratch directory &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; of 3.4TiB capacity, where you should create (and later wipe) &amp;lt;code&amp;gt;/scratch/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt;. To use it, you have to specify &amp;lt;code&amp;gt;--gres=scratch:&amp;lt;/code&amp;gt;&#039;&#039;X&#039;&#039; upon submission, where &#039;&#039;X&#039;&#039; is the amount of scratch space you intend to use, in GiB (max 3480). (This amount is not checked during the job&#039;s runtime.)&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
== On the Login Node (&#039;&#039;stor2&#039;&#039;) ==&lt;br /&gt;
&lt;br /&gt;
* VMD 1.9.3 (which can [[VMD/create movies|create movies]])&lt;br /&gt;
&lt;br /&gt;
== On the Compute Nodes ==&lt;br /&gt;
&lt;br /&gt;
=== AMBER ===&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GROMACS ===&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== OpenMM + open forcefield ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;conda activate openforcefield&amp;lt;/code&amp;gt;&lt;br /&gt;
* installed openff components: forceBalance, geomeTRIC, openFF toolkit, openFF evaluator, TorsionDrive, pyMBAR&lt;br /&gt;
* also installed: jupyterlab&lt;br /&gt;
&lt;br /&gt;
=== OpenMolcas ===&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Compilers &amp;amp; Interpreters =&lt;br /&gt;
&lt;br /&gt;
== Intel Compiler &amp;amp; Co. ==&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; also contains FFT routines.&lt;br /&gt;
&lt;br /&gt;
== Python ==&lt;br /&gt;
&lt;br /&gt;
The Python version provided by the system is currently 3.9. If you need a newer version, you can make 3.12 available by doing (in bash):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate py312&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv1&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv1/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv1/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=124</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=124"/>
		<updated>2025-02-05T12:51:32Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Queueing system: Slurm */ g4pu18, AMD-Links&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 17 compute nodes (CPUs: 544 cores, GPUs: 8× RTX 2080 + 24× RTX 3090 + 4× RTX 4090) and 2×316TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are four queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default. The nodes have two Intel CPUs each, Xeon Gold 6226R or 6346R. To ensure running on a node with the latter, you need to specify the option &amp;lt;code&amp;gt;-C X6346R&amp;lt;/code&amp;gt;.&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
** &#039;&#039;AMD&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p AMD&amp;lt;/code&amp;gt;, which contains only the nodes g4pu17 and g4pu18 (having AMD CPUs [https://www.amd.com/de/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9354p.html EPYC 9354P] and [https://www.amd.com/de/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9554.html EPYC 9554], respectively, instead of Intel CPUs).&lt;br /&gt;
** &#039;&#039;Test&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p Test&amp;lt;/code&amp;gt; for test jobs with a maximum running time of 10 minutes. (Compute node &#039;&#039;gpu01&#039;&#039; is reserved exclusively for this queue.)&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores remain reserved on each node for GPU jobs, resulting in 30 available cores per node.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;rtx4090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal), and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; does not just reserve &#039;&#039;n&#039;&#039; cores but actually starts &#039;&#039;n&#039;&#039; tasks. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* If you want to avoid certain nodes, you can specify their names to the option &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
* There are restrictions per user:&lt;br /&gt;
** You cannot use more than 384 CPU cores simultaneously.&lt;br /&gt;
** You cannot have more than 128 submitted jobs. If you have many, many runs with just varying parameters, consider using [https://slurm.schedmd.com/job_array.html job arrays].&lt;br /&gt;
&lt;br /&gt;
= GPUs = &lt;br /&gt;
&lt;br /&gt;
There are GPUs on each node (2× RTX2080 on gpu01-04, 2× RTX3090 on g3pu05-16, 4× RTX4090 on g4pu17). After having requested GPUs (cf. above), you&#039;ll find the ID(s) \(\in\{0,...,3\}\) of the GPU(s) assigned to your job in the environment variable &amp;lt;code&amp;gt;SLURM_STEP_GPUS&amp;lt;/code&amp;gt; as well as in &amp;lt;code&amp;gt;GPU_DEVICE_ORDINAL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The command &amp;lt;code&amp;gt;sgpus&amp;lt;/code&amp;gt; (no manpage) displays the number of unallocated GPUs on each node.&lt;br /&gt;
&lt;br /&gt;
= Scratch space =&lt;br /&gt;
&lt;br /&gt;
If your job makes heavy use of temporary files, you shouldn&#039;t have them in your home directory (to avoid too much network traffic). Each node has about 400GiB disk space available in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;, where you should create &amp;lt;code&amp;gt;/tmp/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt; (to avoid cluttering) and wipe it at the end of your job.&lt;br /&gt;
&lt;br /&gt;
Four nodes (g3pu07-10) have a dedicated scratch directory &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; of 3.4TiB capacity, where you should create (and later wipe) &amp;lt;code&amp;gt;/scratch/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt;. To use it, you have to specify &amp;lt;code&amp;gt;--gres=scratch:&amp;lt;/code&amp;gt;&#039;&#039;X&#039;&#039; upon submission, where &#039;&#039;X&#039;&#039; is the amount of scratch space you intend to use, in GiB (max 3480). (This amount is not checked during the job&#039;s runtime.)&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
== On the Login Node (&#039;&#039;stor2&#039;&#039;) ==&lt;br /&gt;
&lt;br /&gt;
* VMD 1.9.3 (which can [[VMD/create movies|create movies]])&lt;br /&gt;
&lt;br /&gt;
== On the Compute Nodes ==&lt;br /&gt;
&lt;br /&gt;
=== AMBER ===&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GROMACS ===&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== OpenMM + open forcefield ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;conda activate openforcefield&amp;lt;/code&amp;gt;&lt;br /&gt;
* installed openff components: forceBalance, geomeTRIC, openFF toolkit, openFF evaluator, TorsionDrive, pyMBAR&lt;br /&gt;
* also installed: jupyterlab&lt;br /&gt;
&lt;br /&gt;
=== OpenMolcas ===&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Compilers &amp;amp; Interpreters =&lt;br /&gt;
&lt;br /&gt;
== Intel Compiler &amp;amp; Co. ==&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; also contains FFT routines.&lt;br /&gt;
&lt;br /&gt;
== Python ==&lt;br /&gt;
&lt;br /&gt;
The Python version provided by the system is currently 3.9. If you need a newer version, you can make 3.12 available by doing (in bash):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate py312&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv1&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv1/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv1/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=123</id>
		<title>Zeon</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=123"/>
		<updated>2024-12-17T12:34:00Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Pitfalls */ path libcilkrts&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= New Opterox-Login =&lt;br /&gt;
&lt;br /&gt;
Server is &amp;lt;code&amp;gt;zeon.physik.uni-due.de&amp;lt;/code&amp;gt;, home directories are the same.&lt;br /&gt;
&lt;br /&gt;
= Queueing system = [https://slurm.schedmd.com Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; shows overall load and available queues (&#039;&#039;partitions&#039;&#039;). The default one is marked with &amp;lt;code&amp;gt;*&amp;lt;/code&amp;gt;.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* simplest submission: &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; &#039;&#039;job-id&#039;&#039; kills job.&lt;br /&gt;
&lt;br /&gt;
= Intel oneAPI =&lt;br /&gt;
&lt;br /&gt;
Intel compilers, MKL and MPI are installed; they must be selected via the [https://modules.readthedocs.io/en/latest module system]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include that directory in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the installed modules, relevant are:&lt;br /&gt;
** compiler/latest → &amp;lt;code&amp;gt;icx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;ifx&amp;lt;/code&amp;gt;, ... (the [https://www.intel.com/content/www/us/en/developer/articles/guide/porting-guide-for-icc-users-to-dpcpp-or-icx.html new Intel compilers])&lt;br /&gt;
** icc/latest → &amp;lt;code&amp;gt;icc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpc&amp;lt;/code&amp;gt; (the [https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/DEPRECATION-NOTICE-Intel-C-Compiler-Classic/m-p/1506693 deprecated C/C++ compilers])&lt;br /&gt;
** ifort/latest → &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt; (the [https://community.intel.com/t5/Blogs/Tech-Innovation/Tools/A-Historic-Moment-for-The-Intel-Fortran-Compiler-Classic-ifort/post/1614625 deprecated Fortran compiler])&lt;br /&gt;
** mpi/latest → &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifx&amp;lt;/code&amp;gt;,  &amp;lt;code&amp;gt;mpiicc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;, ... (Attention: The &amp;lt;code&amp;gt;mpii&amp;lt;/code&amp;gt;*s are only wrappers; the actual compiler module needs to be loaded, too.)&lt;br /&gt;
** mkl/latest&lt;br /&gt;
* &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; ... loads a module.&lt;br /&gt;
&lt;br /&gt;
Inside a shell script (bash), you have to do &amp;lt;code&amp;gt;shopt -s expand_aliases&amp;lt;/code&amp;gt; in order to be able to use the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
== Pitfalls ==&lt;br /&gt;
&lt;br /&gt;
* The &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; system does not put some Intel libraries (like libcilkrts.so.5&amp;lt;sup&amp;gt;*&amp;lt;/sup&amp;gt;) on the library search path. If needed, you have to locate their directory yourself and append it (sketched below):&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/oneapi/&amp;lt;/code&amp;gt;...&lt;br /&gt;
* &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;: When compiling with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt;, you also have to link with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt; (cf. also [https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/command-reference/compiler-commands/compilation-command-options.html manual]).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;*&amp;lt;/sup&amp;gt;: currently in /opt/intel/oneapi/compiler/2023.2.4/linux/bin/intel64&lt;br /&gt;
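&lt;br /&gt;
One way to locate such a library and its directory (a sketch, using the library named above):&lt;br /&gt;
&lt;br /&gt;
  find /opt/intel/oneapi -name libcilkrts.so.5&lt;br /&gt;
  # then append the directory that is printed, e.g.&lt;br /&gt;
  # export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/oneapi/...&lt;br /&gt;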
&lt;br /&gt;
= [https://www.open-mpi.org/doc/current OpenMPI] =&lt;br /&gt;
&lt;br /&gt;
* The setting &amp;lt;code&amp;gt;export OMPI_MCA_pml=ucx&amp;lt;/code&amp;gt; is necessary to prevent a &amp;quot;failed OFI Libfabric library call&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=122</id>
		<title>Zeon</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=122"/>
		<updated>2024-12-17T07:03:24Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Pitfalls */ LD_LIBRARY_PATH&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= New Opterox-Login =&lt;br /&gt;
&lt;br /&gt;
Server is &amp;lt;code&amp;gt;zeon.physik.uni-due.de&amp;lt;/code&amp;gt;, home directories are the same.&lt;br /&gt;
&lt;br /&gt;
= Queueing system = [https://slurm.schedmd.com Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; shows overall load and available queues (&#039;&#039;partitions&#039;&#039;). The default one is marked with &amp;lt;code&amp;gt;*&amp;lt;/code&amp;gt;.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* simplest submission: &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; &#039;&#039;job-id&#039;&#039; kills job.&lt;br /&gt;
&lt;br /&gt;
= Intel oneAPI =&lt;br /&gt;
&lt;br /&gt;
Intel compilers, MKL and MPI are installed; they must be selected via the [https://modules.readthedocs.io/en/latest module system]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include that directory in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the installed modules, relevant are:&lt;br /&gt;
** compiler/latest → &amp;lt;code&amp;gt;icx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;ifx&amp;lt;/code&amp;gt;, ... (the [https://www.intel.com/content/www/us/en/developer/articles/guide/porting-guide-for-icc-users-to-dpcpp-or-icx.html new Intel compilers])&lt;br /&gt;
** icc/latest → &amp;lt;code&amp;gt;icc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpc&amp;lt;/code&amp;gt; (the [https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/DEPRECATION-NOTICE-Intel-C-Compiler-Classic/m-p/1506693 deprecated C/C++ compilers])&lt;br /&gt;
** ifort/latest → &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt; (the [https://community.intel.com/t5/Blogs/Tech-Innovation/Tools/A-Historic-Moment-for-The-Intel-Fortran-Compiler-Classic-ifort/post/1614625 deprecated Fortran compiler])&lt;br /&gt;
** mpi/latest → &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifx&amp;lt;/code&amp;gt;,  &amp;lt;code&amp;gt;mpiicc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;, ... (Attention: The &amp;lt;code&amp;gt;mpii&amp;lt;/code&amp;gt;*s are only wrappers; the actual compiler module needs to be loaded, too.)&lt;br /&gt;
** mkl/latest&lt;br /&gt;
* &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; ... loads a module.&lt;br /&gt;
&lt;br /&gt;
Inside a shell script (bash), you have to do &amp;lt;code&amp;gt;shopt -s expand_aliases&amp;lt;/code&amp;gt; in order to be able to use the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
== Pitfalls ==&lt;br /&gt;
&lt;br /&gt;
* The &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; system does not put some Intel libraries (like libcilkrts.so.5) on the library search path. If needed, you have to locate their directory yourself and append it:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/oneapi/&amp;lt;/code&amp;gt;...&lt;br /&gt;
* &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;: When compiling with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt;, you also have to link with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt; (cf. also [https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/command-reference/compiler-commands/compilation-command-options.html manual]).&lt;br /&gt;
&lt;br /&gt;
= [https://www.open-mpi.org/doc/current OpenMPI] =&lt;br /&gt;
&lt;br /&gt;
* The setting &amp;lt;code&amp;gt;export OMPI_MCA_pml=ucx&amp;lt;/code&amp;gt; is necessary to prevent a &amp;quot;failed OFI Libfabric library call&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=121</id>
		<title>Zeon</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=121"/>
		<updated>2024-12-17T06:39:42Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Intel oneAPI */ mpii* = wrappers&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= New Opterox-Login =&lt;br /&gt;
&lt;br /&gt;
Server is &amp;lt;code&amp;gt;zeon.physik.uni-due.de&amp;lt;/code&amp;gt;, home directories are the same.&lt;br /&gt;
&lt;br /&gt;
= Queueing system = [https://slurm.schedmd.com Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; shows overall load and available queues (&#039;&#039;partitions&#039;&#039;). The default one is marked with &amp;lt;code&amp;gt;*&amp;lt;/code&amp;gt;.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* simplest submission: &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; &#039;&#039;job-id&#039;&#039; kills job.&lt;br /&gt;
&lt;br /&gt;
= Intel oneAPI =&lt;br /&gt;
&lt;br /&gt;
Intel compilers, MKL and MPI are installed; they must be selected via the [https://modules.readthedocs.io/en/latest module system]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include that directory in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the installed modules, relevant are:&lt;br /&gt;
** compiler/latest → &amp;lt;code&amp;gt;icx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;ifx&amp;lt;/code&amp;gt;, ... (the [https://www.intel.com/content/www/us/en/developer/articles/guide/porting-guide-for-icc-users-to-dpcpp-or-icx.html new Intel compilers])&lt;br /&gt;
** icc/latest → &amp;lt;code&amp;gt;icc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpc&amp;lt;/code&amp;gt; (the [https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/DEPRECATION-NOTICE-Intel-C-Compiler-Classic/m-p/1506693 deprecated C/C++ compilers])&lt;br /&gt;
** ifort/latest → &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt; (the [https://community.intel.com/t5/Blogs/Tech-Innovation/Tools/A-Historic-Moment-for-The-Intel-Fortran-Compiler-Classic-ifort/post/1614625 deprecated Fortran compiler])&lt;br /&gt;
** mpi/latest → &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifx&amp;lt;/code&amp;gt;,  &amp;lt;code&amp;gt;mpiicc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;, ... (Attention: The &amp;lt;code&amp;gt;mpii&amp;lt;/code&amp;gt;*s are only wrappers; the actual compiler module needs to be loaded, too.)&lt;br /&gt;
** mkl/latest&lt;br /&gt;
* &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; ... loads a module.&lt;br /&gt;
&lt;br /&gt;
Inside a shell script (bash), you have to do &amp;lt;code&amp;gt;shopt -s expand_aliases&amp;lt;/code&amp;gt; in order to be able to use the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
== Pitfalls ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;: When compiling with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt;, you also have to link with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt; (cf. also [https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/command-reference/compiler-commands/compilation-command-options.html manual]).&lt;br /&gt;
&lt;br /&gt;
= [https://www.open-mpi.org/doc/current OpenMPI] =&lt;br /&gt;
&lt;br /&gt;
* The setting &amp;lt;code&amp;gt;export OMPI_MCA_pml=ucx&amp;lt;/code&amp;gt; is necessary to prevent a &amp;quot;failed OFI Libfabric library call&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=120</id>
		<title>Zeon</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=120"/>
		<updated>2024-12-14T22:24:08Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Intel oneAPI */ new compilers&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= New Opterox-Login =&lt;br /&gt;
&lt;br /&gt;
Server is &amp;lt;code&amp;gt;zeon.physik.uni-due.de&amp;lt;/code&amp;gt;, home directories are the same.&lt;br /&gt;
&lt;br /&gt;
= Queueing system = [https://slurm.schedmd.com Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; shows overall load and available queues (&#039;&#039;partitions&#039;&#039;). The default one is marked with &amp;lt;code&amp;gt;*&amp;lt;/code&amp;gt;.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* simplest submission: &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; &#039;&#039;job-id&#039;&#039; kills job.&lt;br /&gt;
&lt;br /&gt;
= Intel oneAPI =&lt;br /&gt;
&lt;br /&gt;
Intel compilers, MKL and MPI are installed; they must be selected via the [https://modules.readthedocs.io/en/latest module system]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include that directory in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the installed modules, relevant are:&lt;br /&gt;
** compiler/latest → &amp;lt;code&amp;gt;icx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;ifx&amp;lt;/code&amp;gt;, ... (the [https://www.intel.com/content/www/us/en/developer/articles/guide/porting-guide-for-icc-users-to-dpcpp-or-icx.html new Intel compilers])&lt;br /&gt;
** icc/latest → &amp;lt;code&amp;gt;icc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpc&amp;lt;/code&amp;gt; (the [https://community.intel.com/t5/Intel-oneAPI-DPC-C-Compiler/DEPRECATION-NOTICE-Intel-C-Compiler-Classic/m-p/1506693 deprecated C/C++ compilers])&lt;br /&gt;
** ifort/latest → &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt; (the [https://community.intel.com/t5/Blogs/Tech-Innovation/Tools/A-Historic-Moment-for-The-Intel-Fortran-Compiler-Classic-ifort/post/1614625 deprecated Fortran compiler])&lt;br /&gt;
** mpi/latest → &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpx&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifx&amp;lt;/code&amp;gt;,  &amp;lt;code&amp;gt;mpiicc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;, ...&lt;br /&gt;
** mkl/latest&lt;br /&gt;
* &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; ... loads a module.&lt;br /&gt;
&lt;br /&gt;
Inside a shell script (bash), you have to do &amp;lt;code&amp;gt;shopt -s expand_aliases&amp;lt;/code&amp;gt; in order to be able to use the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
== Pitfalls ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;: When compiling with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt;, you also have to link with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt; (cf. also [https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/command-reference/compiler-commands/compilation-command-options.html manual]).&lt;br /&gt;
&lt;br /&gt;
= [https://www.open-mpi.org/doc/current OpenMPI] =&lt;br /&gt;
&lt;br /&gt;
* The setting &amp;lt;code&amp;gt;export OMPI_MCA_pml=ucx&amp;lt;/code&amp;gt; is necessary to prevent a &amp;quot;failed OFI Libfabric library call&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=119</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=119"/>
		<updated>2024-12-03T14:49:35Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* On the Login Node (stor2) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 17 compute nodes (CPUs: 544 cores, GPUs: 8× RTX 2080 + 24× RTX 3090 + 4× RTX 4090) and 2×316TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are four queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default. The nodes have two Intel CPUs each, Xeon Gold 6226R or 6346R. To ensure running on a node with the latter, you need to specify the option &amp;lt;code&amp;gt;-C X6346R&amp;lt;/code&amp;gt;.&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
** &#039;&#039;AMD&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p AMD&amp;lt;/code&amp;gt;, which contains only the node g3pu17 (having an [https://www.amd.com/de/products/cpu/amd-epyc-9354p#product-specs AMD EPYC 9354P] instead of an Intel CPU)&lt;br /&gt;
** &#039;&#039;Test&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p Test&amp;lt;/code&amp;gt; for test jobs of maximal 10 minutes running time (Compute node &#039;&#039;gpu01&#039;&#039; is reserved exclusively for this queue.)&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores stay reserved on each node for GPU jobs, resulting in 30 available cores.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;; a minimal example is sketched after this list. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter picks it up on its own.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;rtx4090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* If you want to avoid certain nodes, you can specify their names to the option &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
* There are restrictions per user:&lt;br /&gt;
** You cannot use more than 384 CPU cores simultaneously.&lt;br /&gt;
** You cannot have more than 128 submitted jobs. If you have many, many runs with just varying parameters, consider using [https://slurm.schedmd.com/job_array.html job arrays].&lt;br /&gt;
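&lt;br /&gt;
As a minimal illustration of the options above: a CPU job script &amp;lt;code&amp;gt;my_job.sh&amp;lt;/code&amp;gt; (all names and the program are placeholders) could look like the sketch below and would be submitted via &amp;lt;code&amp;gt;sbatch -n 8 my_job.sh&amp;lt;/code&amp;gt;; for a GPU job one would add e.g. &amp;lt;code&amp;gt;-p GPUs -G rtx3090:1&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  # my_job.sh -- submitted with: sbatch -n 8 my_job.sh&lt;br /&gt;
  echo running on $SLURM_NTASKS cores&lt;br /&gt;
  mpirun ./my_program   # mpirun picks the core count up from Slurm on its own&lt;br /&gt;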
&lt;br /&gt;
= GPUs = &lt;br /&gt;
&lt;br /&gt;
There are GPUs on each node (2× RTX2080 on gpu01-04, 2× RTX3090 on g3pu05-16, 4× RTX4090 on g4pu17). After having requested GPUs (cf. above), you&#039;ll find the ID(s) \(\in\{0,...,3\}\) of the GPU(s) assigned to your job in the environment variable &amp;lt;code&amp;gt;SLURM_STEP_GPUS&amp;lt;/code&amp;gt; as well as in &amp;lt;code&amp;gt;GPU_DEVICE_ORDINAL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The command &amp;lt;code&amp;gt;sgpus&amp;lt;/code&amp;gt; (no manpage) displays the number of unallocated GPUs on each node.&lt;br /&gt;
&lt;br /&gt;
= Scratch space =&lt;br /&gt;
&lt;br /&gt;
If your job makes heavy use of temporary files, you shouldn&#039;t have them in your home directory (to avoid too much network traffic). Each node has about 400GiB disk space available in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;, where you should create &amp;lt;code&amp;gt;/tmp/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt; (to avoid cluttering) and wipe it at the end of your job.&lt;br /&gt;
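&lt;br /&gt;
A sketch of that pattern inside a job script (only the directory layout follows the recommendation above; everything else is a placeholder):&lt;br /&gt;
&lt;br /&gt;
  SCRATCHDIR=/tmp/$USER/$SLURM_JOBID&lt;br /&gt;
  mkdir -p $SCRATCHDIR&lt;br /&gt;
  cd $SCRATCHDIR&lt;br /&gt;
  # ... run the program here, keeping its temporary files in this directory ...&lt;br /&gt;
  cd&lt;br /&gt;
  rm -rf $SCRATCHDIR&lt;br /&gt;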
&lt;br /&gt;
Four nodes (g3pu07-10) have a dedicated scratch directory &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; of 3.4TiB capacity, where you should create (and later wipe) &amp;lt;code&amp;gt;/scratch/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt;. To use it, you have to specify &amp;lt;code&amp;gt;--gres=scratch:&amp;lt;/code&amp;gt;&#039;&#039;X&#039;&#039; upon submission, where &#039;&#039;X&#039;&#039; is the amount of scratch space you intend to use, in GiB (max 3480). (This amount is not checked during the job&#039;s runtime.)&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
== On the Login Node (&#039;&#039;stor2&#039;&#039;) ==&lt;br /&gt;
&lt;br /&gt;
* VMD 1.9.3 (which can [[VMD/create movies|create movies]])&lt;br /&gt;
&lt;br /&gt;
== On the Compute Nodes ==&lt;br /&gt;
&lt;br /&gt;
=== AMBER ===&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
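&lt;br /&gt;
For example, to set up AMBER 20 in a job script (a sketch):&lt;br /&gt;
&lt;br /&gt;
  source /usr/local/amber20/amber.sh&lt;br /&gt;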
&lt;br /&gt;
=== GROMACS ===&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
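&lt;br /&gt;
For example, for the 2020.4 installation (a sketch):&lt;br /&gt;
&lt;br /&gt;
  source /usr/local/gromacs-2020.4/bin/GMXRC.bash&lt;br /&gt;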
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== OpenMM + open forcefield ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;conda activate openforcefield&amp;lt;/code&amp;gt;&lt;br /&gt;
* installed openff components: forceBalance, geomeTRIC, openFF toolkit, openFF evaluator, TorsionDrive, pyMBAR&lt;br /&gt;
* also installed: jupyterlab&lt;br /&gt;
&lt;br /&gt;
=== OpenMolcas ===&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Compilers &amp;amp; Interpreters =&lt;br /&gt;
&lt;br /&gt;
== Intel Compiler &amp;amp; Co. ==&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; also contains FFT routines.&lt;br /&gt;
&lt;br /&gt;
== Python ==&lt;br /&gt;
&lt;br /&gt;
The Python version provided by the system is currently 3.9. If you need a newer version, you can make 3.12 available by doing (in bash):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate py312&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv1&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv1/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv1/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=VMC/create_movies&amp;diff=118</id>
		<title>VMC/create movies</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=VMC/create_movies&amp;diff=118"/>
		<updated>2024-12-03T14:48:53Z</updated>

		<summary type="html">&lt;p&gt;Brendel: Brendel moved the page VMC/create movies to VMD/create movies: typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#WEITERLEITUNG [[VMD/create movies]]&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=VMD/create_movies&amp;diff=117</id>
		<title>VMD/create movies</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=VMD/create_movies&amp;diff=117"/>
		<updated>2024-12-03T14:48:53Z</updated>

		<summary type="html">&lt;p&gt;Brendel: Brendel moved the page VMC/create movies to VMD/create movies: typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Making movies in VMD ===&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;General stuff:&#039;&#039;&#039;&lt;br /&gt;
* Animated GIFs can be dropped into PowerPoint or into a browser to play. I normally used this format.&lt;br /&gt;
* mp4 movies are also OK&lt;br /&gt;
* Unusual (and very beautiful) representations can be created in VMD using renderers other than Snapshot, but they take longer to produce, so I don&#039;t mention them here.&lt;br /&gt;
* In all cases, load the relevant trajectories and adjust how the visualization looks.&lt;br /&gt;
* In VMD Main→Display→Render Mode, choose GLSL to enable transparent materials.&lt;br /&gt;
* Save the visualization state as a *.vmd file, in case you want to make modifications in the future.&lt;br /&gt;
* There are specialized tutorials to learn how to use the visualization capabilities of VMD.  Search for them... Here I only summarize a few topics for which guidelines appear to be harder to find.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Drawing an arrow at the pulled end (for pulling simulations)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In pulling simulations, it is sometimes useful to draw an arrow at the pulled end for visualization purposes.  Steps to do this in VMD:&lt;br /&gt;
&lt;br /&gt;
# Modify the script script_drawArrow.tcl to your needs, to specify the atom where the arrow starts and the direction of the arrow.&lt;br /&gt;
# Run the script by typing &amp;quot;source script_drawArrow.tcl&amp;quot; at the VMD TkConsole (the quotation marks are not typed in).&lt;br /&gt;
# Move the bar in VMD main from the beginning to the end of the trajectory, so the script executes for each frame&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Update the secondary structure at every frame&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Some representations (e.g. NewCartoon) depend on the secondary structure of the amino acids, but by default VMD does not update it every frame to keep things light.  If you need it updated at every frame:&lt;br /&gt;
&lt;br /&gt;
# Save the script [https://www.ks.uiuc.edu/Research/vmd/script_library/scripts/sscache/sscache.tcl sscache.tcl] somewhere on your machine, and then source it within VMD (by typing &amp;quot;source sscache.tcl&amp;quot; at the VMD TkConsole)&lt;br /&gt;
# Source the script named [https://www.ks.uiuc.edu/Research/vmd/script_library/scripts/sscache/stride.tcl stride.tcl] (in the same way as in 1)&lt;br /&gt;
# Type &amp;quot;start_sscache&amp;quot; at the TkConsole&lt;br /&gt;
# Move the bar in VMD Main from the beginning to the end of the trajectory, so the secondary structure is cached. You should now be able to see that it is updated at every frame.&lt;br /&gt;
&lt;br /&gt;
To turn off the trace, use the command stop_sscache, which also takes the molecule number. There must be one stop_sscache for each start_sscache. The command clear_sscache resets the saved secondary structure data for all the molecules and all the frames. More info at https://www.ks.uiuc.edu/Research/vmd/vmd-1.3/ug/node256.html&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Making movies (A)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This option is OK for trajectories that are not too jumpy to begin with.&lt;br /&gt;
&lt;br /&gt;
In VMD Main → Extensions → Movie Maker:&lt;br /&gt;
# renderer snapshot&lt;br /&gt;
# movie settings: trajectory ; leave option to delete all files on&lt;br /&gt;
# format: Animated GIF (image magick)&lt;br /&gt;
#  Select an empty working directory&lt;br /&gt;
# Click on &amp;quot;Make Movie&amp;quot;, and wait until it completes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Making movies (B)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This option is better if your trajectory is jumpy.&lt;br /&gt;
&lt;br /&gt;
Step 1: In VMD Main → Extensions → Movie Maker:&lt;br /&gt;
# renderer snapshot&lt;br /&gt;
# movie settings: trajectory ; deselect option to delete all files&lt;br /&gt;
# format: jpeg (image magick)&lt;br /&gt;
# Select an empty working directory&lt;br /&gt;
# Click on &amp;quot;Make Movie&amp;quot;, and wait until it completes&lt;br /&gt;
&lt;br /&gt;
Step 2: On a Linux machine, in the directory with the JPEG images, run the command&lt;br /&gt;
convert -delay 30 myImageName.*.jpg myMovie-delay-30.gif&lt;br /&gt;
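&lt;br /&gt;
If you want to scan several delay values in one go (cf. the next paragraph), a small loop can help; the file names are the ones above, the list of delays is just an example:&lt;br /&gt;
&lt;br /&gt;
  for d in 5 10 20 30 50; do convert -delay $d myImageName.*.jpg myMovie-delay-$d.gif; done&lt;br /&gt;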
&lt;br /&gt;
Try several delay values (5 to 50 is a good range) to find something that works. Remember to rename the movies according to the delay so that at the end you can choose the best one.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=VMD/create_movies&amp;diff=116</id>
		<title>VMD/create movies</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=VMD/create_movies&amp;diff=116"/>
		<updated>2024-12-03T14:47:54Z</updated>

		<summary type="html">&lt;p&gt;Brendel: new, by Ana&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Making movies in VMD ===&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;General stuff:&#039;&#039;&#039;&lt;br /&gt;
* Animated gifs can be dropped into Power Point or into a browser to play. I normally used this format.&lt;br /&gt;
* mp4 movies are also OK&lt;br /&gt;
* Unusual (and very beautiful) representations can be created in VMD using renderers other than Snapshot, but they take longer to produce, so I don&#039;t mention them here.&lt;br /&gt;
* In all cases, load the relevant trajectories and format how it looks.&lt;br /&gt;
* In VMD Main→Display→Render Mode, choose GLSL to enable transparent materials.&lt;br /&gt;
* Save the visualization state as a *vmd file, in case you want to make modifications in the future.&lt;br /&gt;
* There are specialized tutorials to learn how to use the visualization capabilities of VMD.  Search for them... Here I only summarize a few topics for which guidelines appear to be harder to find.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Drawing an arrow at the pulled end (for pulling simulations)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In pulling simulations, it is sometimes useful to draw an arrow at the pulled end for visualization purposes.  Steps to do this in VMD:&lt;br /&gt;
&lt;br /&gt;
# Modify  script script_drawArrow.tcl to your needs, to specify the atom where the arrow starts and direction of arrow.&lt;br /&gt;
# Run script (by typing at the VMD TkConsole &amp;quot;source script_drawArrow.tcl &amp;quot;; the &amp;quot;&amp;quot; are not typed in...)&lt;br /&gt;
# Move the bar in VMD main from the beginning to the end of the trajectory, so the script executes for each frame&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Update the secondary structure at every frame&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Some representations (e.g. NewCartoon) depend on the secondary structure of the amino acids, but by default VMD does not update it every frame to keep things light.  If you need it updated at every frame:&lt;br /&gt;
&lt;br /&gt;
# Save the script  [https://www.ks.uiuc.edu/Research/vmd/script_library/scripts/sscache/sscache.tcl sscache.tcl ] somewhere in your machine, and then source it within VMD (by typing at the VMD Tkconsole &amp;quot;source sscache.tcl&amp;quot;)&lt;br /&gt;
# Source the script named [https://www.ks.uiuc.edu/Research/vmd/script_library/scripts/sscache/stride.tcl stride.tcl] (in the same way as in 1)&lt;br /&gt;
# Type at the TkConsole    &amp;quot;start_sscache&amp;quot;&lt;br /&gt;
# Move the bar in VMD main from the beginning to the end of the trajectory, so the SS is cached.  You should be able to see now that it is updated at every frame.&lt;br /&gt;
&lt;br /&gt;
To turn off the trace, use the command stop_sscache, which also takes the molecule number. There must be one stop_sscache for each start_sscache. The command clear_sscache resets the saved secondary structure data for all the molecules and all the frames. More info in https://www.ks.uiuc.edu/Research/vmd/vmd-1.3/ug/node256.html&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Making movies (A)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This option is OK for trajectories that are not too jumpy to begin with.&lt;br /&gt;
&lt;br /&gt;
In VMD Main → Extensions → Movie Maker:&lt;br /&gt;
# renderer snapshot&lt;br /&gt;
# movie settings: trajectory ; leave option to delete all files on&lt;br /&gt;
# format: Animated GIF (image magick)&lt;br /&gt;
#  Select an empty working directory&lt;br /&gt;
# Click on &amp;quot;Make Movie&amp;quot;, and wait until it completes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Making movies (B)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This option is better if your trajectory is jumpy.&lt;br /&gt;
&lt;br /&gt;
Step 1: In VMD Main → Extensions → Movie Maker:&lt;br /&gt;
# renderer snapshot&lt;br /&gt;
# movie settings: trajectory ; deselect option to delete all files&lt;br /&gt;
# format: jpeg (image magick)&lt;br /&gt;
# Select an empty working directory&lt;br /&gt;
# Click on &amp;quot;Make Movie&amp;quot;, and wait until it completes&lt;br /&gt;
&lt;br /&gt;
Step 2: In a Linux machine, in the directory with the jpeg images, run command&lt;br /&gt;
convert -delay 30 myImageName.*.jpg myMovie-delay-30.gif&lt;br /&gt;
&lt;br /&gt;
Try several delay values (5 to 50 is a good range) to find something that works. Remember to rename the movies according to the delay so that at the end you can choose the best one.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=115</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=115"/>
		<updated>2024-12-03T14:44:30Z</updated>

		<summary type="html">&lt;p&gt;Brendel: VMD on stor2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 17 compute nodes (CPUs: 544 cores, GPUs: 8× RTX 2080 + 24× RTX 3090 + 4× RTX 4090) and 2×316TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are four queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default. The nodes have two Intel CPUs each, Xeon Gold 6226R or 6346R. To assure running on a node with the latter, you need to specify the option &amp;lt;code&amp;gt;-C X6346R&amp;lt;/code&amp;gt;.&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
** &#039;&#039;AMD&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p AMD&amp;lt;/code&amp;gt;, which contains only the node g3pu17 (having an [https://www.amd.com/de/products/cpu/amd-epyc-9354p#product-specs AMD EPYC 9354P] instead of an Intel CPU)&lt;br /&gt;
** &#039;&#039;Test&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p Test&amp;lt;/code&amp;gt; for test jobs of maximal 10 minutes running time (Compute node &#039;&#039;gpu01&#039;&#039; is reserved exclusively for this queue.)&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores stay reserved on each node for GPU jobs, resulting in 30 available cores.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;rtx4090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* If you want to avoid certain nodes, you can specify their names to the option &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
* There are restrictions per user:&lt;br /&gt;
** You cannot use more than 384 CPU cores simultaneously.&lt;br /&gt;
** You cannot have more than 128 submitted jobs. If you have many, many runs with just varying parameters, consider using [https://slurm.schedmd.com/job_array.html job arrays].&lt;br /&gt;
&lt;br /&gt;
= GPUs = &lt;br /&gt;
&lt;br /&gt;
There are GPUs on each node (2× RTX2080 on gpu01-04, 2× RTX3090 on g3pu05-16, 4× RTX4090 on g4pu17). After having requested GPUs (cf. above), you&#039;ll find the ID(s) \(\in\{0,...,3\}\) of the GPU(s) assigned to your job in the environment variable &amp;lt;code&amp;gt;SLURM_STEP_GPUS&amp;lt;/code&amp;gt; as well as in &amp;lt;code&amp;gt;GPU_DEVICE_ORDINAL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The command &amp;lt;code&amp;gt;sgpus&amp;lt;/code&amp;gt; (no manpage) displays the number of unallocated GPUs on each node.&lt;br /&gt;
&lt;br /&gt;
= Scratch space =&lt;br /&gt;
&lt;br /&gt;
If your job makes heavy use of temporary files, you shouldn&#039;t have them in your home directory (to avoid too much network traffic). Each node has about 400GiB disk space available in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;, where you should create &amp;lt;code&amp;gt;/tmp/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt; (to avoid cluttering) and wipe it at the end of your job.&lt;br /&gt;
&lt;br /&gt;
Four nodes (g3pu07-10) have a dedicated scratch directory &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; of 3.4TiB capacity, where you should create (and later wipe) &amp;lt;code&amp;gt;/scratch/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt;. To use it, you have to specify &amp;lt;code&amp;gt;--gres=scratch:&amp;lt;/code&amp;gt;&#039;&#039;X&#039;&#039; upon submission, where &#039;&#039;X&#039;&#039; is the amount of scratch space intended to use in GiB (max 3480). (This amount is not checked during the job&#039;s runtime.)&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
== On the Login Node (&#039;&#039;stor2&#039;&#039;) ==&lt;br /&gt;
&lt;br /&gt;
* VMD 1.9.3 (which can [[VMC/create movies|create movies]])&lt;br /&gt;
&lt;br /&gt;
== On the Compute Nodes ==&lt;br /&gt;
&lt;br /&gt;
=== AMBER ===&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GROMACS ===&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== OpenMM + open forcefield ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;conda activate openforcefield&amp;lt;/code&amp;gt;&lt;br /&gt;
* installed openff components: forceBalance, geomeTRIC, openFF toolkit, openFF evaluator, TorsionDrive, pyMBAR&lt;br /&gt;
* also installed: jupyterlab&lt;br /&gt;
&lt;br /&gt;
=== OpenMolcas ===&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Compilers &amp;amp; Interpreters =&lt;br /&gt;
&lt;br /&gt;
== Intel Compiler &amp;amp; Co. ==&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; also contains FFT routines.&lt;br /&gt;
&lt;br /&gt;
== Python ==&lt;br /&gt;
&lt;br /&gt;
The Python version provided by the system is currently 3.9. If you need a newer version, you can make 3.12 available by doing (in bash):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate py312&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv1&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv1/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv1/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=114</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=114"/>
		<updated>2024-03-11T07:12:37Z</updated>

		<summary type="html">&lt;p&gt;Brendel: +Python 3.12&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 17 compute nodes (CPUs: 544 cores, GPUs: 8× RTX 2080 + 24× RTX 3090 + 4× RTX 4090) and 2×316TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are four queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default. The nodes have two Intel CPUs each, Xeon Gold 6226R or 6346R. To assure running on a node with the latter, you need to specify the option &amp;lt;code&amp;gt;-C X6346R&amp;lt;/code&amp;gt;.&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
** &#039;&#039;AMD&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p AMD&amp;lt;/code&amp;gt;, which contains only the node g3pu17 (having an [https://www.amd.com/de/products/cpu/amd-epyc-9354p#product-specs AMD EPYC 9354P] instead of an Intel CPU)&lt;br /&gt;
** &#039;&#039;Test&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p Test&amp;lt;/code&amp;gt; for test jobs of maximal 10 minutes running time (Compute node &#039;&#039;gpu01&#039;&#039; is reserved exclusively for this queue.)&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores stay reserved on each node for GPU jobs, resulting in 30 available cores.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;rtx4090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* If you want to avoid certain nodes, you can specify their names to the option &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
* There are restrictions per user:&lt;br /&gt;
** You cannot use more than 384 CPU cores simultaneously.&lt;br /&gt;
** You cannot have more than 128 submitted jobs. If you have many, many runs with just varying parameters, consider using [https://slurm.schedmd.com/job_array.html job arrays].&lt;br /&gt;
&lt;br /&gt;
= GPUs = &lt;br /&gt;
&lt;br /&gt;
There are GPUs on each node (2× RTX2080 on gpu01-04, 2× RTX3090 on g3pu05-16, 4× RTX4090 on g4pu17). After having requested GPUs (cf. above), you&#039;ll find the ID(s) \(\in\{0,...,3\}\) of the GPU(s) assigned to your job in the environment variable &amp;lt;code&amp;gt;SLURM_STEP_GPUS&amp;lt;/code&amp;gt; as well as in &amp;lt;code&amp;gt;GPU_DEVICE_ORDINAL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The command &amp;lt;code&amp;gt;sgpus&amp;lt;/code&amp;gt; (no manpage) displays the number of unallocated GPUs on each node.&lt;br /&gt;
&lt;br /&gt;
= Scratch space =&lt;br /&gt;
&lt;br /&gt;
If your job makes heavy use of temporary files, don&#039;t keep them in your home directory (to avoid excessive network traffic). Each node has about 400 GiB of disk space available in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;, where you should create &amp;lt;code&amp;gt;/tmp/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt; (to avoid clutter) and wipe it at the end of your job.&lt;br /&gt;
&lt;br /&gt;
Four nodes (g3pu07-10) have a dedicated scratch directory &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; of 3.4TiB capacity, where you should create (and later wipe) &amp;lt;code&amp;gt;/scratch/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt;. To use it, you have to specify &amp;lt;code&amp;gt;--gres=scratch:&amp;lt;/code&amp;gt;&#039;&#039;X&#039;&#039; upon submission, where &#039;&#039;X&#039;&#039; is the amount of scratch space you intend to use, in GiB (max 3480). (This amount is not checked during the job&#039;s runtime.)&lt;br /&gt;
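&lt;br /&gt;
A sketch of the corresponding housekeeping inside a job script (&amp;lt;code&amp;gt;my_simulation&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;input.dat&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;results.dat&amp;lt;/code&amp;gt; are placeholders):&lt;br /&gt;
&lt;br /&gt;
  # per-job work directory on the node-local disk (or /scratch/... if requested)&lt;br /&gt;
  WORKDIR=/tmp/$USER/$SLURM_JOBID&lt;br /&gt;
  mkdir -p $WORKDIR&lt;br /&gt;
  cd $WORKDIR&lt;br /&gt;
  cp $SLURM_SUBMIT_DIR/input.dat .        # hypothetical input file&lt;br /&gt;
  $HOME/bin/my_simulation input.dat       # placeholder for the actual program&lt;br /&gt;
  cp results.dat $SLURM_SUBMIT_DIR/       # hypothetical output file&lt;br /&gt;
  # clean up before the job ends&lt;br /&gt;
  cd&lt;br /&gt;
  rm -rf $WORKDIR&lt;br /&gt;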
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
The following software is installed (on the &#039;&#039;compute nodes&#039;&#039;):&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
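&lt;br /&gt;
For example, to set up AMBER 20 in a job script or interactive shell:&lt;br /&gt;
&lt;br /&gt;
  source /usr/local/amber20/amber.sh&lt;br /&gt;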
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
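&lt;br /&gt;
For example, to set up GROMACS 2020.4:&lt;br /&gt;
&lt;br /&gt;
  source /usr/local/gromacs-2020.4/bin/GMXRC.bash&lt;br /&gt;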
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMM + open forcefield ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;conda activate openforcefield&amp;lt;/code&amp;gt;&lt;br /&gt;
* installed openff components: forceBalance, geomeTRIC, openFF toolkit, openFF evaluator, TorsionDrive, pyMBAR&lt;br /&gt;
* also installed: jupyterlab&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Compilers &amp;amp; Interpreters =&lt;br /&gt;
&lt;br /&gt;
== Intel Compiler &amp;amp; Co. ==&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; also contains FFT routines.&lt;br /&gt;
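&lt;br /&gt;
Putting these together, a compile-and-link against MKL might look like this (a sketch; &amp;lt;code&amp;gt;my_code.c&amp;lt;/code&amp;gt; is a placeholder):&lt;br /&gt;
&lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module load compiler/latest mkl/latest&lt;br /&gt;
  icx -O2 -qmkl my_code.c -o my_code&lt;br /&gt;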
&lt;br /&gt;
== Python ==&lt;br /&gt;
&lt;br /&gt;
The Python version provided by the system is currently 3.9. If you need a newer version, you can make 3.12 available by doing (in bash):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate py312&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv1&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv1/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv1/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=113</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=113"/>
		<updated>2024-03-10T20:30:09Z</updated>

		<summary type="html">&lt;p&gt;Brendel: 316TB&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 17 compute nodes (CPUs: 544 cores, GPUs: 8× RTX 2080 + 24× RTX 3090+ 4× RTX 4090) and 2×316TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are four queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default. The nodes have two Intel CPUs each, Xeon Gold 6226R or 6346R. To assure running on a node with the latter, you need to specify the option &amp;lt;code&amp;gt;-C X6346R&amp;lt;/code&amp;gt;.&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
** &#039;&#039;AMD&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p AMD&amp;lt;/code&amp;gt;, which contains only the node g3pu17 (having an [https://www.amd.com/de/products/cpu/amd-epyc-9354p#product-specs AMD EPYC 9354P] instead of an Intel CPU)&lt;br /&gt;
** &#039;&#039;Test&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p Test&amp;lt;/code&amp;gt; for test jobs of maximal 10 minutes running time (Compute node &#039;&#039;gpu01&#039;&#039; is reserved exclusively for this queue.)&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores stay reserved on each node for GPU jobs, resulting in 30 available cores.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;rtx4090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* If you want to avoid certain nodes, you can specify their names to the option &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
* There are restrictions per user:&lt;br /&gt;
** You cannot use more than 384 CPU cores simultaneously.&lt;br /&gt;
** You cannot have more than 128 submitted jobs. If you have many, many runs with just varying parameters, consider using [https://slurm.schedmd.com/job_array.html job arrays].&lt;br /&gt;
&lt;br /&gt;
= GPUs = &lt;br /&gt;
&lt;br /&gt;
There are GPUs on each node (2× RTX2080 on gpu01-04, 2× RTX3090 on g3pu05-16, 4× RTX4090 on g4pu17). After requesting GPUs (cf. above), you&#039;ll find the ID(s) \(\in\{0,...,3\}\) of the GPU(s) assigned to your job in the environment variable &amp;lt;code&amp;gt;SLURM_STEP_GPUS&amp;lt;/code&amp;gt; as well as in &amp;lt;code&amp;gt;GPU_DEVICE_ORDINAL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The command &amp;lt;code&amp;gt;sgpus&amp;lt;/code&amp;gt; (no manpage) displays the number of unallocated GPUs on each node.&lt;br /&gt;
&lt;br /&gt;
= Scratch space =&lt;br /&gt;
&lt;br /&gt;
If your job makes heavy use of temporary files, you shouldn&#039;t have them in your home directory (to avoid too much network traffic). Each node has about 400GiB disk space available in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;, where you should create &amp;lt;code&amp;gt;/tmp/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt; (to avoid cluttering) and wipe it at the end of your job.&lt;br /&gt;
&lt;br /&gt;
Four nodes (g3pu07-10) have a dedicated scratch directory &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; of 3.4TiB capacity, where you should create (and later wipe) &amp;lt;code&amp;gt;/scratch/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt;. To use it, you have to specify &amp;lt;code&amp;gt;--gres=scratch:&amp;lt;/code&amp;gt;&#039;&#039;X&#039;&#039; upon submission, where &#039;&#039;X&#039;&#039; is the amount of scratch space you intend to use, in GiB (max 3480). (This amount is not checked during the job&#039;s runtime.)&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMM + open forcefield ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;conda activate openforcefield&amp;lt;/code&amp;gt;&lt;br /&gt;
* installed openff components: forceBalance, geomeTRIC, openFF toolkit, openFF evaluator, TorsionDrive, pyMBAR&lt;br /&gt;
* also installed: jupyterlab&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; also contains FFT routines.&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv1&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv1/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv1/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=112</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=112"/>
		<updated>2024-03-06T07:13:30Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Backups */ lv1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 17 compute nodes (CPUs: 544 cores, GPUs: 8× RTX 2080 + 24× RTX 3090+ 4× RTX 4090) and 2×251TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are four queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default. The nodes have two Intel CPUs each, Xeon Gold 6226R or 6346R. To assure running on a node with the latter, you need to specify the option &amp;lt;code&amp;gt;-C X6346R&amp;lt;/code&amp;gt;.&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
** &#039;&#039;AMD&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p AMD&amp;lt;/code&amp;gt;, which contains only the node g3pu17 (having an [https://www.amd.com/de/products/cpu/amd-epyc-9354p#product-specs AMD EPYC 9354P] instead of an Intel CPU)&lt;br /&gt;
** &#039;&#039;Test&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p Test&amp;lt;/code&amp;gt; for test jobs of maximal 10 minutes running time (Compute node &#039;&#039;gpu01&#039;&#039; is reserved exclusively for this queue.)&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores stay reserved on each node for GPU jobs, resulting in 30 available cores.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;rtx4090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* If you want to avoid certain nodes, you can specify their names to the option &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
* There are restrictions per user:&lt;br /&gt;
** You cannot use more than 384 CPU cores simultaneously.&lt;br /&gt;
** You cannot have more than 128 submitted jobs. If you have many, many runs with just varying parameters, consider using [https://slurm.schedmd.com/job_array.html job arrays].&lt;br /&gt;
&lt;br /&gt;
= GPUs = &lt;br /&gt;
&lt;br /&gt;
There are GPUs on each node (2× RTX2080 on gpu01-04, 2× RTX3090 on g3pu05-16, 4× RTX4090 on g4pu17). After requesting GPUs (cf. above), you&#039;ll find the ID(s) \(\in\{0,...,3\}\) of the GPU(s) assigned to your job in the environment variable &amp;lt;code&amp;gt;SLURM_STEP_GPUS&amp;lt;/code&amp;gt; as well as in &amp;lt;code&amp;gt;GPU_DEVICE_ORDINAL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The command &amp;lt;code&amp;gt;sgpus&amp;lt;/code&amp;gt; (no manpage) displays the number of unallocated GPUs on each node.&lt;br /&gt;
&lt;br /&gt;
= Scratch space =&lt;br /&gt;
&lt;br /&gt;
If your job makes heavy use of temporary files, you shouldn&#039;t have them in your home directory (to avoid too much network traffic). Each node has about 400GiB disk space available in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;, where you should create &amp;lt;code&amp;gt;/tmp/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt; (to avoid cluttering) and wipe it at the end of your job.&lt;br /&gt;
&lt;br /&gt;
Four nodes (g3pu07-10) have a dedicated scratch directory &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; of 3.4TiB capacity, where you should create (and later wipe) &amp;lt;code&amp;gt;/scratch/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt;. To use it, you have to specify &amp;lt;code&amp;gt;--gres=scratch:&amp;lt;/code&amp;gt;&#039;&#039;X&#039;&#039; upon submission, where &#039;&#039;X&#039;&#039; is the amount of scratch space you intend to use, in GiB (max 3480). (This amount is not checked during the job&#039;s runtime.)&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMM + open forcefield ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;conda activate openforcefield&amp;lt;/code&amp;gt;&lt;br /&gt;
* installed openff components: forceBalance, geomeTRIC, openFF toolkit, openFF evaluator, TorsionDrive, pyMBAR&lt;br /&gt;
* also installed: jupyterlab&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; also contains FFT routines.&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv1&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv1/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv1/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=111</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=111"/>
		<updated>2024-03-06T07:12:38Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* GPUs */ four GPUs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 17 compute nodes (CPUs: 544 cores, GPUs: 8× RTX 2080 + 24× RTX 3090+ 4× RTX 4090) and 2×251TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are four queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default. The nodes have two Intel CPUs each, Xeon Gold 6226R or 6346R. To assure running on a node with the latter, you need to specify the option &amp;lt;code&amp;gt;-C X6346R&amp;lt;/code&amp;gt;.&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
** &#039;&#039;AMD&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p AMD&amp;lt;/code&amp;gt;, which contains only the node g3pu17 (having an [https://www.amd.com/de/products/cpu/amd-epyc-9354p#product-specs AMD EPYC 9354P] instead of an Intel CPU)&lt;br /&gt;
** &#039;&#039;Test&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p Test&amp;lt;/code&amp;gt; for test jobs of maximal 10 minutes running time (Compute node &#039;&#039;gpu01&#039;&#039; is reserved exclusively for this queue.)&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores stay reserved on each node for GPU jobs, resulting in 30 available cores.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;rtx4090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* If you want to avoid certain nodes, you can specify their names to the option &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
* There are restrictions per user:&lt;br /&gt;
** You cannot use more than 384 CPU cores simultaneously.&lt;br /&gt;
** You cannot have more than 128 submitted jobs. If you have many, many runs with just varying parameters, consider using [https://slurm.schedmd.com/job_array.html job arrays].&lt;br /&gt;
&lt;br /&gt;
= GPUs = &lt;br /&gt;
&lt;br /&gt;
There are GPUs on each node (2× RTX2080 on gpu01-04, 2× RTX3090 on g3pu05-16, 4× RTX4090 on g4pu17). After requesting GPUs (cf. above), you&#039;ll find the ID(s) \(\in\{0,...,3\}\) of the GPU(s) assigned to your job in the environment variable &amp;lt;code&amp;gt;SLURM_STEP_GPUS&amp;lt;/code&amp;gt; as well as in &amp;lt;code&amp;gt;GPU_DEVICE_ORDINAL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The command &amp;lt;code&amp;gt;sgpus&amp;lt;/code&amp;gt; (no manpage) displays the number of unallocated GPUs on each node.&lt;br /&gt;
&lt;br /&gt;
= Scratch space =&lt;br /&gt;
&lt;br /&gt;
If your job makes heavy use of temporary files, you shouldn&#039;t have them in your home directory (to avoid too much network traffic). Each node has about 400GiB disk space available in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;, where you should create &amp;lt;code&amp;gt;/tmp/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt; (to avoid cluttering) and wipe it at the end of your job.&lt;br /&gt;
&lt;br /&gt;
Four nodes (g3pu07-10) have a dedicated scratch directory &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; of 3.4TiB capacity, where you should create (and later wipe) &amp;lt;code&amp;gt;/scratch/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt;. To use it, you have to specify &amp;lt;code&amp;gt;--gres=scratch:&amp;lt;/code&amp;gt;&#039;&#039;X&#039;&#039; upon submission, where &#039;&#039;X&#039;&#039; is the amount of scratch space you intend to use, in GiB (max 3480). (This amount is not checked during the job&#039;s runtime.)&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMM + open forcefield ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;conda activate openforcefield&amp;lt;/code&amp;gt;&lt;br /&gt;
* installed openff components: forceBalance, geomeTRIC, openFF toolkit, openFF evaluator, TorsionDrive, pyMBAR&lt;br /&gt;
* also installed: jupyterlab&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; also contains FFT routines.&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv0&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv1/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv0/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=110</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=110"/>
		<updated>2024-03-06T07:10:56Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Queueing system: Slurm */ +assure&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 17 compute nodes (CPUs: 544 cores, GPUs: 8× RTX 2080 + 24× RTX 3090+ 4× RTX 4090) and 2×251TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are four queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default. The nodes have two Intel CPUs each, Xeon Gold 6226R or 6346R. To assure running on a node with the latter, you need to specify the option &amp;lt;code&amp;gt;-C X6346R&amp;lt;/code&amp;gt;.&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
** &#039;&#039;AMD&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p AMD&amp;lt;/code&amp;gt;, which contains only the node g3pu17 (having an [https://www.amd.com/de/products/cpu/amd-epyc-9354p#product-specs AMD EPYC 9354P] instead of an Intel CPU)&lt;br /&gt;
** &#039;&#039;Test&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p Test&amp;lt;/code&amp;gt; for test jobs of maximal 10 minutes running time (Compute node &#039;&#039;gpu01&#039;&#039; is reserved exclusively for this queue.)&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores stay reserved on each node for GPU jobs, resulting in 30 available cores.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;rtx4090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* If you want to avoid certain nodes, you can specify their names to the option &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
* There are restrictions per user:&lt;br /&gt;
** You cannot use more than 384 CPU cores simultaneously.&lt;br /&gt;
** You cannot have more than 128 submitted jobs. If you have many, many runs with just varying parameters, consider using [https://slurm.schedmd.com/job_array.html job arrays].&lt;br /&gt;
&lt;br /&gt;
= GPUs = &lt;br /&gt;
&lt;br /&gt;
There are GPUs on each node (2× RTX2080 on gpu01-04, 2× RTX3090 on g3pu05-16, 4× RTX4090 on g4pu17). After requesting GPUs (cf. above), you&#039;ll find the ID(s) \(\in\{0,1\}\) of the GPU(s) assigned to your job in the environment variable &amp;lt;code&amp;gt;SLURM_STEP_GPUS&amp;lt;/code&amp;gt; as well as in &amp;lt;code&amp;gt;GPU_DEVICE_ORDINAL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The command &amp;lt;code&amp;gt;sgpus&amp;lt;/code&amp;gt; (no manpage) displays the number of unallocated GPUs on each node.&lt;br /&gt;
&lt;br /&gt;
= Scratch space =&lt;br /&gt;
&lt;br /&gt;
If your job makes heavy use of temporary files, you shouldn&#039;t have them in your home directory (to avoid too much network traffic). Each node has about 400GiB disk space available in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;, where you should create &amp;lt;code&amp;gt;/tmp/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt; (to avoid cluttering) and wipe it at the end of your job.&lt;br /&gt;
&lt;br /&gt;
Four nodes (g3pu07-10) have a dedicated scratch directory &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; of 3.4TiB capacity, where you should create (and later wipe) &amp;lt;code&amp;gt;/scratch/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt;. To use it, you have to specify &amp;lt;code&amp;gt;--gres=scratch:&amp;lt;/code&amp;gt;&#039;&#039;X&#039;&#039; upon submission, where &#039;&#039;X&#039;&#039; is the amount of scratch space you intend to use, in GiB (max 3480). (This amount is not checked during the job&#039;s runtime.)&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMM + open forcefield ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;conda activate openforcefield&amp;lt;/code&amp;gt;&lt;br /&gt;
* installed openff components: forceBalance, geomeTRIC, openFF toolkit, openFF evaluator, TorsionDrive, pyMBAR&lt;br /&gt;
* also installed: jupyterlab&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; also contains FFT routines.&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv0&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv1/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv0/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=109</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=109"/>
		<updated>2024-03-05T08:12:15Z</updated>

		<summary type="html">&lt;p&gt;Brendel: +AMD&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 17 compute nodes (CPUs: 544 cores, GPUs: 8× RTX 2080 + 24× RTX 3090+ 4× RTX 4090) and 2×251TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are four queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default. The nodes have two Intel CPUs, Xeon Gold 6226R or 6346R. To run on a node with the latter, you need to specify the option &amp;lt;code&amp;gt;-C X6346R&amp;lt;/code&amp;gt;.&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
** &#039;&#039;AMD&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p AMD&amp;lt;/code&amp;gt;, which contains only the node g3pu17 (having an [https://www.amd.com/de/products/cpu/amd-epyc-9354p#product-specs AMD EPYC 9354P] instead of an Intel CPU)&lt;br /&gt;
** &#039;&#039;Test&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p Test&amp;lt;/code&amp;gt; for test jobs of maximal 10 minutes running time (Compute node &#039;&#039;gpu01&#039;&#039; is reserved exclusively for this queue.)&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores stay reserved on each node for GPU jobs, resulting in 30 available cores.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;rtx4090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* If you want to avoid certain nodes, you can specify their names to the option &amp;lt;code&amp;gt;-x&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
* There are restrictions per user:&lt;br /&gt;
** You cannot use more than 384 CPU cores simultaneously.&lt;br /&gt;
** You cannot have more than 128 submitted jobs. If you have many runs that differ only in their parameters, consider using [https://slurm.schedmd.com/job_array.html job arrays].&lt;br /&gt;
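&lt;br /&gt;
A minimal batch-script sketch combining the options above (job name, task count, GPU request, and program name are placeholders to adapt):&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  # submitted e.g. via:  sbatch -n 8 -p GPUs -G rtx3090:1 this_script&lt;br /&gt;
  # (8 tasks in the GPUs partition plus one RTX3090; adjust or drop as needed)&lt;br /&gt;
  &lt;br /&gt;
  # mpirun picks up $SLURM_NTASKS on its own&lt;br /&gt;
  mpirun ./my_program&lt;br /&gt;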
&lt;br /&gt;
= GPUs = &lt;br /&gt;
&lt;br /&gt;
There are GPUs on each node (2× RTX2080 on gpu01-04, 2× RTX3090 on g3pu05-16, 4× RTX4090 on g4pu17). After requesting GPUs (cf. above), you&#039;ll find the ID(s) (counted from 0) of the GPU(s) assigned to your job in the environment variable &amp;lt;code&amp;gt;SLURM_STEP_GPUS&amp;lt;/code&amp;gt; as well as in &amp;lt;code&amp;gt;GPU_DEVICE_ORDINAL&amp;lt;/code&amp;gt;.&lt;br /&gt;
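&lt;br /&gt;
For illustration, a job script could simply log its assignment using only those variables:&lt;br /&gt;
&lt;br /&gt;
  echo &amp;quot;assigned GPU ID(s): $SLURM_STEP_GPUS / $GPU_DEVICE_ORDINAL&amp;quot;&lt;br /&gt;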
&lt;br /&gt;
The command &amp;lt;code&amp;gt;sgpus&amp;lt;/code&amp;gt; (no manpage) displays the number of unallocated GPUs on each node.&lt;br /&gt;
&lt;br /&gt;
= Scratch space =&lt;br /&gt;
&lt;br /&gt;
If your job makes heavy use of temporary files, don&#039;t keep them in your home directory, to avoid excessive network traffic. Each node has about 400 GiB of disk space available in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;, where you should create &amp;lt;code&amp;gt;/tmp/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt; (to avoid clutter) and wipe it at the end of your job.&lt;br /&gt;
&lt;br /&gt;
Four nodes (g3pu07-10) have a dedicated scratch directory &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; of 3.4 TiB capacity, where you should create (and later wipe) &amp;lt;code&amp;gt;/scratch/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt;. To use it, you have to specify &amp;lt;code&amp;gt;--gres=scratch:&amp;lt;/code&amp;gt;&#039;&#039;X&#039;&#039; upon submission, where &#039;&#039;X&#039;&#039; is the amount of scratch space you intend to use in GiB (max 3480). (This amount is not checked during the job&#039;s runtime.)&lt;br /&gt;
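&lt;br /&gt;
A sketch of the create-and-wipe pattern for &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; (the 100 GiB request is just an example; for &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; drop the &amp;lt;code&amp;gt;--gres&amp;lt;/code&amp;gt; option and adjust the path):&lt;br /&gt;
&lt;br /&gt;
  # submitted e.g. via:  sbatch --gres=scratch:100 this_script&lt;br /&gt;
  SCRATCHDIR=/scratch/$USER/$SLURM_JOBID&lt;br /&gt;
  mkdir -p $SCRATCHDIR&lt;br /&gt;
  # ... run the program with its temporary files in $SCRATCHDIR ...&lt;br /&gt;
  rm -rf $SCRATCHDIR&lt;br /&gt;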
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
The following packages are installed (on the &#039;&#039;compute nodes&#039;&#039;):&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
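&lt;br /&gt;
For example, to use the amber20 installation:&lt;br /&gt;
&lt;br /&gt;
  source /usr/local/amber20/amber.sh&lt;br /&gt;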
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
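&lt;br /&gt;
For example, to use the 2020.4 build:&lt;br /&gt;
&lt;br /&gt;
  source /usr/local/gromacs-2020.4/bin/GMXRC.bash&lt;br /&gt;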
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMM + open forcefield ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;source /usr/local/miniconda3/bin/activate&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;conda activate openforcefield&amp;lt;/code&amp;gt;&lt;br /&gt;
* installed openff components: forceBalance, geomeTRIC, openFF toolkit, openFF evaluator, TorsionDrive, pyMBAR&lt;br /&gt;
* also installed: jupyterlab&lt;br /&gt;
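&lt;br /&gt;
A sketch of a batch script using this environment (the Python script name is a placeholder):&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  source /usr/local/miniconda3/bin/activate&lt;br /&gt;
  conda activate openforcefield&lt;br /&gt;
  python my_openmm_run.py&lt;br /&gt;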
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; also contains FFT routines.&lt;br /&gt;
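&lt;br /&gt;
For example, to compile a small C program against MKL (a sketch; the file name is a placeholder):&lt;br /&gt;
&lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module load compiler/latest mkl/latest&lt;br /&gt;
  icx -qmkl my_prog.c -o my_prog&lt;br /&gt;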
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv0&amp;lt;/code&amp;gt;: You actually have seven backups, one for each of the last seven days, in &amp;lt;code&amp;gt;/exports/lv1/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv0/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=Mathematica&amp;diff=100</id>
		<title>Mathematica</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=Mathematica&amp;diff=100"/>
		<updated>2022-09-26T11:29:09Z</updated>

		<summary type="html">&lt;p&gt;Brendel: neu&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;As a student of the Faculty of Physics, you may also use the licensed software [https://www.wolfram.com/mathematica Mathematica] by &#039;&#039;Wolfram Research, Inc.&#039;&#039; on your own computer. For this you need the software itself as well as an activation key. You have to [https://user.wolfram.com/portal/requestAK/dd51dc8b8e063a436d7fe6b4be7256173ac4b3dc request the latter from the vendor], registering with Wolfram in the process (your &#039;&#039;Wolfram ID&#039;&#039; is created). Please make sure to use your university e-mail address (...&amp;lt;code&amp;gt;@stud.uni-due.de&amp;lt;/code&amp;gt;) when doing so. The [https://user.wolfram.com Wolfram portal] then also provides a download area where you can download the installation files for Linux, Windows, or Mac.&lt;br /&gt;
&lt;br /&gt;
Please direct questions to &amp;lt;code&amp;gt;lothar.brendel@uni-due.de&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=99</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=99"/>
		<updated>2022-08-23T10:09:00Z</updated>

		<summary type="html">&lt;p&gt;Brendel: GPUs, /scratch&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090) and 2×251TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores stay reserved on each node for GPU jobs, resulting in 30 available cores.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;; you don&#039;t need to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, which evaluates it on its own.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= GPUs = &lt;br /&gt;
&lt;br /&gt;
There are two GPUs on each node (RTX2080 on gpu01-04, RTX3090 on g3pu05-13). After requesting GPUs (cf. above), you&#039;ll find the ID(s) \(\in\{0,1\}\) of the GPU(s) assigned to your job in the environment variable &amp;lt;code&amp;gt;SLURM_STEP_GPUS&amp;lt;/code&amp;gt; as well as in &amp;lt;code&amp;gt;GPU_DEVICE_ORDINAL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The command &amp;lt;code&amp;gt;sgpus&amp;lt;/code&amp;gt; (no manpage) displays the number of unallocated GPUs on each node.&lt;br /&gt;
&lt;br /&gt;
= Scratch space =&lt;br /&gt;
&lt;br /&gt;
If your job makes heavy use of temporary files, you shouldn&#039;t have them in your home directory (to avoid too much network traffic). Each node has about 400GiB disk space available in &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt;, where you should create &amp;lt;code&amp;gt;/tmp/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt; (to avoid cluttering) and wipe it at the end of your job.&lt;br /&gt;
&lt;br /&gt;
Four nodes (g3pu07-10) have a dedicated scratch directory &amp;lt;code&amp;gt;/scratch&amp;lt;/code&amp;gt; of 3.4 TiB capacity, where you should create (and later wipe) &amp;lt;code&amp;gt;/scratch/$USER/$SLURM_JOBID&amp;lt;/code&amp;gt;. To use it, you have to specify &amp;lt;code&amp;gt;--gres=scratch:&amp;lt;/code&amp;gt;&#039;&#039;X&#039;&#039; upon submission, where &#039;&#039;X&#039;&#039; is the amount of scratch space you intend to use in GiB (max 3480). (This amount is not checked during the job&#039;s runtime.)&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
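&lt;br /&gt;
For example, near the top of a bash job script (the chosen version is arbitrary):&lt;br /&gt;
&lt;br /&gt;
  # set up the AMBER 20 environment&lt;br /&gt;
  source /usr/local/amber20/amber.sh&lt;br /&gt;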
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
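&lt;br /&gt;
If you write your own script, a minimal sketch (version, thread count and file names are placeholders; note the advice above not to use the &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options):&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  # set up the GROMACS environment&lt;br /&gt;
  source /usr/local/gromacs-2020.4/bin/GMXRC.bash&lt;br /&gt;
  &lt;br /&gt;
  # run on the allocated cores; -deffnm sets the common name of the input/output files&lt;br /&gt;
  gmx mdrun -ntomp ${SLURM_NTASKS:-1} -deffnm my_run&lt;br /&gt;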
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; also contains FFT routines (a brief usage sketch follows this list).&lt;br /&gt;
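&lt;br /&gt;
A brief sketch of building against MKL after loading the modules; the &amp;lt;code&amp;gt;-qmkl&amp;lt;/code&amp;gt; link shortcut and the file names are assumptions, not prescribed by this page:&lt;br /&gt;
&lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module load compiler/latest mkl/latest&lt;br /&gt;
  &lt;br /&gt;
  # e.g. build a Fortran program that uses MKL (including its FFT routines)&lt;br /&gt;
  ifx -qmkl -o my_prog my_prog.f90&lt;br /&gt;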
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv0&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv0/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv0/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=88</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=88"/>
		<updated>2022-07-05T14:03:58Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Queueing system: Slurm */ limit of 30 cores only on CPUs queue&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090) and 2×251TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* In the &#039;&#039;CPUs&#039;&#039; queue, 2 cores stay reserved on each node for GPU jobs, resulting in 30 available cores per node.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; contains also FFT routines.&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv0&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv0/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv0/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=87</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=87"/>
		<updated>2022-05-17T12:44:29Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Intel Compiler &amp;amp; Co. */ Typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090) and 2×251TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* On each node, 2 cores stay reserved for GPU jobs, resulting in 30 available cores.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; contains also FFT routines.&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv0&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv0/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv0/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=86</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=86"/>
		<updated>2022-05-17T12:44:09Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Intel Compiler &amp;amp; Co. */ FFT&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090) and 2×251TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* On each node, 2 cores stay reserved for GPU jobs, resulting in 30 available cores.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
* Module &#039;&#039;mkl/latest&#039;&#039; contains also FFT routines.&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv0&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv0/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv0/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=75</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=75"/>
		<updated>2022-03-25T17:22:54Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Queueing system: Slurm */ 30 cores&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090) and 2×251TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* On each node, 2 cores stay reserved for GPU jobs, resulting in 30 available cores.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv0&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv0/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv0/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=36</id>
		<title>Zeon</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=36"/>
		<updated>2022-03-13T08:47:43Z</updated>

		<summary type="html">&lt;p&gt;Brendel: +OpenMPI&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= New Opterox-Login =&lt;br /&gt;
&lt;br /&gt;
Server is &amp;lt;code&amp;gt;zeon.physik.uni-due.de&amp;lt;/code&amp;gt;, home directories are the same.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; shows overall load and available queues (&#039;&#039;partitions&#039;&#039;). The default one is marked with &amp;lt;code&amp;gt;*&amp;lt;/code&amp;gt;.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* simplest submission: &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; &#039;&#039;job-id&#039;&#039; kills job.&lt;br /&gt;
&lt;br /&gt;
= Intel oneAPI =&lt;br /&gt;
&lt;br /&gt;
Intel compilers, MKL and MPI are installed and must be selected via the [https://modules.readthedocs.io/en/latest module system]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include that directory in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the installed modules, relevant are:&lt;br /&gt;
** compiler/latest → &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icc&amp;lt;/code&amp;gt;, ...&lt;br /&gt;
** mpi/latest → &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicc&amp;lt;/code&amp;gt;, ...&lt;br /&gt;
** mkl/latest&lt;br /&gt;
* &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; ... loads a module.&lt;br /&gt;
&lt;br /&gt;
Inside a shell script (bash), you have to do &amp;lt;code&amp;gt;shopt -s expand_aliases&amp;lt;/code&amp;gt; before being able to use the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
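&lt;br /&gt;
A minimal sketch of doing that at the top of a job script (which modules you load depends on your program):&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  # needed before the module command works inside the script&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module load compiler/latest mpi/latest mkl/latest&lt;br /&gt;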
&lt;br /&gt;
== Pitfalls ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;: When compiling with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt;, you also have to link with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt; (cf. also [https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/command-reference/compiler-commands/compilation-command-options.html manual]).&lt;br /&gt;
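&lt;br /&gt;
For example (the file names are just placeholders):&lt;br /&gt;
&lt;br /&gt;
  # -i8 must be given both when compiling and when linking&lt;br /&gt;
  mpiifort -i8 -c my_prog.f90&lt;br /&gt;
  mpiifort -i8 -o my_prog my_prog.o&lt;br /&gt;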
&lt;br /&gt;
= [https://www.open-mpi.org/doc/current OpenMPI] =&lt;br /&gt;
&lt;br /&gt;
* The setting &amp;lt;code&amp;gt;export OMPI_MCA_pml=ucx&amp;lt;/code&amp;gt; is necessary to prevent a &amp;quot;failed OFI Libfabric library call&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=35</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=35"/>
		<updated>2022-03-09T10:21:03Z</updated>

		<summary type="html">&lt;p&gt;Brendel: storage&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090) and 2×251TiB disk storage, purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv0&amp;lt;/code&amp;gt;: You actually have seven backups corresponding to the last 7 days in  &amp;lt;code&amp;gt;/exports/lv0/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv0/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=34</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=34"/>
		<updated>2022-03-09T10:15:40Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Backups */ vd1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. To access the &lt;br /&gt;
backups, first log in to the cluster. Then:&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor.vd1&amp;lt;/code&amp;gt;: Last night&#039;s backup is in &amp;lt;code&amp;gt;/export/vd1/$USER&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Users in &amp;lt;code&amp;gt;/home/stor1.lv0&amp;lt;/code&amp;gt;: You have seven backups, one for each of the last seven days, in &amp;lt;code&amp;gt;/exports/lv0/snapshots/days.&#039;&#039;D&#039;&#039;/stor1/home/stor1.lv0/$USER&amp;lt;/code&amp;gt; with &#039;&#039;D&#039;&#039; \(\in\{0,\dots,6\}\); see the example below.&lt;br /&gt;
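&lt;br /&gt;
For instance, to copy a file (file name made up) back from the snapshot with &#039;&#039;D&#039;&#039; = 3:&lt;br /&gt;
&lt;br /&gt;
  cp /exports/lv0/snapshots/days.3/stor1/home/stor1.lv0/$USER/lost_file.txt ~/&lt;/div&gt;</summary>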
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=33</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=33"/>
		<updated>2022-03-09T09:01:02Z</updated>

		<summary type="html">&lt;p&gt;Brendel: Backups&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
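&lt;br /&gt;
E.g., from a machine where your local user name matches your cluster account (otherwise substitute it):&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@a-cluster.physik.uni-due.de&lt;br /&gt;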
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can also specify the GPU type by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039; (see the example script after this list).&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
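&lt;br /&gt;
To illustrate the options above, a minimal (hypothetical) batch script for a job using one GPU; the program name is made up:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #SBATCH -p GPUs&lt;br /&gt;
  #SBATCH -n 4&lt;br /&gt;
  #SBATCH --gpus=rtx3090:1&lt;br /&gt;
  &lt;br /&gt;
  # mpirun reads $SLURM_NTASKS (= 4 here) on its own, no -np needed&lt;br /&gt;
  mpirun ./my_program&lt;br /&gt;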
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;br /&gt;
&lt;br /&gt;
= Backups =&lt;br /&gt;
&lt;br /&gt;
A backup of the users&#039; home directories is taken nightly. Users residing on &amp;lt;code&amp;gt;/home/stor1.lv0&amp;lt;/code&amp;gt; have seven backups, one for each of the last seven days, in &amp;lt;code&amp;gt;/exports/lv0/snapshots/days.&amp;lt;/code&amp;gt;\(d\)&amp;lt;code&amp;gt;/stor1/home/stor1.lv0/$USER&amp;lt;/code&amp;gt; with \(d\in\{0,\dots,6\}\).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=32</id>
		<title>Zeon</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=32"/>
		<updated>2022-02-18T14:41:38Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Intel oneAPI */ +pitfalls&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= New Opterox-Login =&lt;br /&gt;
&lt;br /&gt;
Server is &amp;lt;code&amp;gt;zeon.physik.uni-due.de&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Queueing system = [https://slurm.schedmd.com/ Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; shows overall load and available queues (&#039;&#039;partitions&#039;&#039;).&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* simplest submission: &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039; (see the example below)&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; &#039;&#039;job-id&#039;&#039; kills job.&lt;br /&gt;
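&lt;br /&gt;
A made-up example of the full cycle (script name and job id are placeholders):&lt;br /&gt;
&lt;br /&gt;
  sbatch -n8 my_job.sh   # submit on 8 cores&lt;br /&gt;
  squeue                 # note the job id&lt;br /&gt;
  scancel 4711           # kill it again, if needed&lt;br /&gt;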
&lt;br /&gt;
= Intel oneAPI =&lt;br /&gt;
&lt;br /&gt;
Intel compilers, MKL and MPI are installed, must be selected via the [https://modules.readthedocs.io/en/latest module system]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include that directory in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the installed modules, relevant are:&lt;br /&gt;
** compiler/latest → &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icc&amp;lt;/code&amp;gt;, ...&lt;br /&gt;
** mpi/latest → &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicc&amp;lt;/code&amp;gt;, ...&lt;br /&gt;
** mkl/latest&lt;br /&gt;
* &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; ... loads a module.&lt;br /&gt;
&lt;br /&gt;
Inside a shell script (bash), you have to do &amp;lt;code&amp;gt;shopt -s expand_aliases&amp;lt;/code&amp;gt; before being able to use the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
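&lt;br /&gt;
A minimal sketch of a job script along these lines (the program name is made up):&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module load compiler/latest&lt;br /&gt;
  module load mpi/latest&lt;br /&gt;
  &lt;br /&gt;
  mpirun ./my_mpi_program&lt;br /&gt;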
&lt;br /&gt;
== Pitfalls ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;: When compiling with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt;, you also have to link with &amp;lt;code&amp;gt;-i8&amp;lt;/code&amp;gt; (cf. also [https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/command-reference/compiler-commands/compilation-command-options.html manual]).&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=31</id>
		<title>Zeon</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=31"/>
		<updated>2022-02-17T15:43:45Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Intel oneAPI */ load&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= New Opterox-Login =&lt;br /&gt;
&lt;br /&gt;
Server is &amp;lt;code&amp;gt;zeon.physik.uni-due.de&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Queueing system = [https://slurm.schedmd.com/ Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; shows overall load and available queues (&#039;&#039;partitions&#039;&#039;).&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* simplest submission: &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; &#039;&#039;job-id&#039;&#039; kills job.&lt;br /&gt;
&lt;br /&gt;
= Intel oneAPI =&lt;br /&gt;
&lt;br /&gt;
Intel compilers, MKL and MPI are installed, must be selected via the [https://modules.readthedocs.io/en/latest module system]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include that directory in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the installed modules, relevant are:&lt;br /&gt;
** compiler/latest → &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icc&amp;lt;/code&amp;gt;, ...&lt;br /&gt;
** mpi/latest → &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicc&amp;lt;/code&amp;gt;, ...&lt;br /&gt;
** mkl/latest&lt;br /&gt;
* &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; ... loads a module.&lt;br /&gt;
&lt;br /&gt;
Inside a shell script (bash), you have to do &amp;lt;code&amp;gt;shopt -s expand_aliases&amp;lt;/code&amp;gt; before being able to use the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=30</id>
		<title>Zeon</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=30"/>
		<updated>2022-02-17T15:39:14Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Intel oneAPI */ shopt&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= New Opterox-Login =&lt;br /&gt;
&lt;br /&gt;
Server is &amp;lt;code&amp;gt;zeon.physik.uni-due.de&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Queueing system = [https://slurm.schedmd.com/ Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; shows overall load and available queues (&#039;&#039;partitions&#039;&#039;).&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* simplest submission: &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; &#039;&#039;job-id&#039;&#039; kills job.&lt;br /&gt;
&lt;br /&gt;
= Intel oneAPI =&lt;br /&gt;
&lt;br /&gt;
Intel compilers, MKL and MPI are installed, must be selected via the [https://modules.readthedocs.io/en/latest module system]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include that directory in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the installed modules, relevant are:&lt;br /&gt;
** compiler/latest → &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icc&amp;lt;/code&amp;gt;, ...&lt;br /&gt;
** mpi/latest → &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicc&amp;lt;/code&amp;gt;, ...&lt;br /&gt;
** mkl/latest&lt;br /&gt;
&lt;br /&gt;
Inside a shell script (bash), you have to do &amp;lt;code&amp;gt;shopt -s expand_aliases&amp;lt;/code&amp;gt; before being able to use the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=29</id>
		<title>Zeon</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=Zeon&amp;diff=29"/>
		<updated>2022-02-17T15:30:56Z</updated>

		<summary type="html">&lt;p&gt;Brendel: new&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= New Opterox-Login =&lt;br /&gt;
&lt;br /&gt;
Server is &amp;lt;code&amp;gt;zeon.physik.uni-due.de&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Queueing system = [https://slurm.schedmd.com/ Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; shows overall load and available queues (&#039;&#039;partitions&#039;&#039;).&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* simplest submission: &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; &#039;&#039;job-id&#039;&#039; kills job.&lt;br /&gt;
&lt;br /&gt;
= Intel oneAPI =&lt;br /&gt;
&lt;br /&gt;
Intel compilers, MKL and MPI are installed, must be selected via the [https://modules.readthedocs.io/en/latest module system]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include that directory in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the installed modules, relevant are:&lt;br /&gt;
** compiler/latest → &amp;lt;code&amp;gt;ifort&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;icc&amp;lt;/code&amp;gt;, ...&lt;br /&gt;
** mpi/latest → &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiifort&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicpc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mpiicc&amp;lt;/code&amp;gt;, ...&lt;br /&gt;
** mkl/latest&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=28</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=28"/>
		<updated>2021-12-28T11:58:55Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* OpenMolcas */ rm-rf&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  rm -rf $MOLCAS_WORKDIR&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=27</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=27"/>
		<updated>2021-12-28T11:48:40Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* OpenMolcas */ MOLCAS_WORKDIR&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export MOLCAS_WORKDIR=/tmp/$USER-$SLURM_JOB_NAME-$SLURM_JOB_ID&lt;br /&gt;
  mkdir $MOLCAS_WORKDIR&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
  &lt;br /&gt;
  # Emptying/removing $MOLCAS_WORKDIR at the end is recommended.&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=26</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=26"/>
		<updated>2021-12-28T09:16:08Z</updated>

		<summary type="html">&lt;p&gt;Brendel: +OpenMolcas&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Scientific Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
Versions (not all tested):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== OpenMolcas ==&lt;br /&gt;
&lt;br /&gt;
(compiled with Intel compiler and MKL)&lt;br /&gt;
&lt;br /&gt;
Minimal example script to be &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;ed:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  &lt;br /&gt;
  export MOLCAS=/usr/local/openmolcas&lt;br /&gt;
  export PATH=$PATH:$MOLCAS&lt;br /&gt;
  export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/latest/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/mkl/latest/lib/intel64&lt;br /&gt;
  export OMP_NUM_THREADS=${SLURM_NTASKS:-1}&lt;br /&gt;
  &lt;br /&gt;
  pymolcas the_input.inp&lt;br /&gt;
&lt;br /&gt;
If you want/need to use the module system instead of setting &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; manually:&lt;br /&gt;
&lt;br /&gt;
  shopt -s expand_aliases&lt;br /&gt;
  source /etc/profile.d/modules.sh&lt;br /&gt;
  &lt;br /&gt;
  module use /opt/intel/oneapi/modulefiles&lt;br /&gt;
  module -s load compiler/latest&lt;br /&gt;
  module -s load mkl/latest&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=25</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=25"/>
		<updated>2021-12-21T10:22:35Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Queueing system: Slurm */ style&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039;, the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039;, to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Simulation Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
(not all tested)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=24</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=24"/>
		<updated>2021-12-21T10:22:08Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Queueing system: Slurm */ wording&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology) named:&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039; being the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039; to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Simulation Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
(not all tested)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=23</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=23"/>
		<updated>2021-12-21T09:05:29Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Queueing system: Slurm */ queues&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* There are two queues (&#039;&#039;partitions&#039;&#039; in Slurm terminology):&lt;br /&gt;
** &#039;&#039;CPUs&#039;&#039; being the default&lt;br /&gt;
** &#039;&#039;GPUs&#039;&#039; to be selected via &amp;lt;code&amp;gt;-p GPUs&amp;lt;/code&amp;gt; for jobs which involve a GPU&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* In the most simple cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter evaluates it on its own, anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Simulation Software =&lt;br /&gt;
&lt;br /&gt;
... installed (on the &#039;&#039;compute nodes&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
(not all tested)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
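A very rough sketch (the version, file names and the threading flag are assumptions; whether &amp;lt;code&amp;gt;-nt&amp;lt;/code&amp;gt; applies depends on how the build was configured):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# load the GROMACS 2020.4 environment&lt;br /&gt;
source /usr/local/gromacs-2020.4/bin/GMXRC.bash&lt;br /&gt;
# run on the reserved cores; per the rule above, no -pin options&lt;br /&gt;
gmx mdrun -deffnm md -nt $SLURM_NTASKS&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;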
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=22</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=22"/>
		<updated>2021-12-09T13:48:39Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Queueing system: Slurm */ doc-Links, squeue-alias, GPUs, bash auf Knoten&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sinfo.html sinfo]&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/squeue.html squeue]&amp;lt;/code&amp;gt; shows running jobs. You can modify its output via the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. To make that permanent put something like &amp;lt;code&amp;gt;alias squeue=&#039;squeue -o &amp;quot;%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %C %o&amp;quot;&#039;&amp;lt;/code&amp;gt; into your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Currently, there&#039;s just one &#039;&#039;partition&#039;&#039;: &amp;quot;a-cluster&amp;quot;&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;[https://slurm.schedmd.com/sbatch.html sbatch] -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter picks it up on its own anyway.&lt;br /&gt;
* To allocate GPUs as well, add &amp;lt;code&amp;gt;-G &amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; or &amp;lt;code&amp;gt;--gpus=&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039; with &#039;&#039;n&#039;&#039; ∈ {1,2}. You can specify the type as well by prepending &amp;lt;code&amp;gt;rtx2080:&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rtx3090:&amp;lt;/code&amp;gt; to &#039;&#039;n&#039;&#039;.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;[https://slurm.schedmd.com/srun.html srun]&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* For an interactive shell with &#039;&#039;n&#039;&#039; reserved cores on a compute node: &amp;lt;code&amp;gt;srun --pty -c&amp;lt;/code&amp;gt;&#039;&#039;n&#039;&#039;&amp;lt;code&amp;gt; bash&amp;lt;/code&amp;gt; (see the example below this list)&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
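For instance, combining the GPU and &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; points above (the numbers and the script name are only examples):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# interactive: 8 cores and one RTX 3090 on a compute node&lt;br /&gt;
srun --pty -c 8 -G rtx3090:1 bash&lt;br /&gt;
# the same resources for a batch job&lt;br /&gt;
sbatch -n 8 -G rtx3090:1 my_script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;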
= Simulation Software =&lt;br /&gt;
&lt;br /&gt;
The following packages are installed (on the &#039;&#039;compute nodes&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
(not all tested)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=21</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=21"/>
		<updated>2021-12-07T16:18:43Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* GROMACS */ Ana&amp;#039;s example&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* Currently, there&#039;s just one &#039;&#039;partition&#039;&#039;: &amp;quot;a-cluster&amp;quot;&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter picks it up on its own anyway.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script (see the sketch below this list).&lt;br /&gt;
* &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
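A sketch of what the background-job rule means (the task names are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./task_a &amp;amp;&lt;br /&gt;
./task_b &amp;amp;&lt;br /&gt;
wait    # without this, the script exits immediately and Slurm ends the job, killing both tasks&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;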
= Simulation Software =&lt;br /&gt;
&lt;br /&gt;
The following packages are installed (on the &#039;&#039;compute nodes&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
(not all tested)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ana provided an [https://wiki.uni-due.de/vilaverde/index.php/File:Gromacs_cpu.sh example script] to be submitted via &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=20</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=20"/>
		<updated>2021-10-15T21:01:33Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Queueing system: Slurm */ Mail IS configured.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* Currently, there&#039;s just one &#039;&#039;partition&#039;&#039;: &amp;quot;a-cluster&amp;quot;&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter picks it up on its own anyway.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Simulation Software =&lt;br /&gt;
&lt;br /&gt;
The following packages are installed (on the &#039;&#039;compute nodes&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
(not all tested)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=19</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=19"/>
		<updated>2021-10-14T12:14:14Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Queueing system: Slurm */ &amp;amp;-jobs, no mail&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* Currently, there&#039;s just one &#039;&#039;partition&#039;&#039;: &amp;quot;a-cluster&amp;quot;&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter picks it up on its own anyway.&lt;br /&gt;
* Don&#039;t use background jobs (&amp;lt;code&amp;gt;&amp;amp;&amp;lt;/code&amp;gt;), unless you &amp;lt;code&amp;gt;wait&amp;lt;/code&amp;gt; for them before the end of the script.&lt;br /&gt;
* Mail notification is not configured, &#039;&#039;&#039;yet&#039;&#039;&#039;.&lt;br /&gt;
* &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Simulation Software =&lt;br /&gt;
&lt;br /&gt;
The following packages are installed (on the &#039;&#039;compute nodes&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
(not all tested)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=18</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=18"/>
		<updated>2021-10-14T11:51:17Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Login */ external hostname&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External Hostname is &amp;lt;code&amp;gt;a-cluster.physik.uni-due.de&amp;lt;/code&amp;gt; (134.91.59.16), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* Currently, there&#039;s just one &#039;&#039;partition&#039;&#039;: &amp;quot;a-cluster&amp;quot;&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter picks it up on its own anyway.&lt;br /&gt;
* &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Simulation Software =&lt;br /&gt;
&lt;br /&gt;
The following packages are installed (on the &#039;&#039;compute nodes&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
(not all tested)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
	<entry>
		<id>https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=17</id>
		<title>A-Cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.uni-due.de/ittp/index.php?title=A-Cluster&amp;diff=17"/>
		<updated>2021-09-30T16:25:45Z</updated>

		<summary type="html">&lt;p&gt;Brendel: /* Simulation Software */ PATH -&amp;gt; scripts&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Linux cluster with currently 13 compute nodes (CPUs: 416 cores, GPUs: 8x RTX 2080 + 18x RTX 3090), purchased by Ana Vila Verde and Christopher Stein&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
External address is 134.91.59.31 (will change soon and then get a hostname), internal hostname is &amp;lt;code&amp;gt;stor2&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Queueing system: [https://slurm.schedmd.com/documentation.html Slurm] =&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; displays the cluster&#039;s total load.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; shows running jobs.&lt;br /&gt;
* Currently, there&#039;s just one &#039;&#039;partition&#039;&#039;: &amp;quot;a-cluster&amp;quot;&lt;br /&gt;
* In the simplest cases, jobs are submitted via &amp;lt;code&amp;gt;sbatch -n&amp;lt;/code&amp;gt; &#039;&#039;n&#039;&#039; &#039;&#039;script-name&#039;&#039;. The number &#039;&#039;n&#039;&#039; of CPUs is available within the script as &amp;lt;code&amp;gt;$SLURM_NTASKS&amp;lt;/code&amp;gt;. It&#039;s not necessary to pass it on to &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, since the latter picks it up on its own anyway.&lt;br /&gt;
* &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; is intended for interactive jobs (stdin+stdout+stderr stay attached to the terminal) and its &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; doesn&#039;t only reserve &#039;&#039;n&#039;&#039; cores but starts &#039;&#039;n&#039;&#039; jobs. (Those shouldn&#039;t contain &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;, otherwise you&#039;d end up with &#039;&#039;n&#039;&#039;² busy cores.)&lt;br /&gt;
* The assignment of cores can be non-trivial (cf. also [[Slurm/Task-Affinity|task affinity]]), some rules:&lt;br /&gt;
** gromacs: &#039;&#039;&#039;Don&#039;t&#039;&#039;&#039; use its &amp;lt;code&amp;gt;-pin&amp;lt;/code&amp;gt; options.&lt;br /&gt;
&lt;br /&gt;
= Simulation Software =&lt;br /&gt;
&lt;br /&gt;
The following packages are installed (on the &#039;&#039;compute nodes&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
The [https://modules.readthedocs.io/en/latest module system] is not involved. Instead, scripts provided by the software set the environment.&lt;br /&gt;
&lt;br /&gt;
== AMBER ==&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber18&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/amber20&amp;lt;/code&amp;gt; (provides &amp;lt;code&amp;gt;parmed&amp;lt;/code&amp;gt; as well)&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;amber.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GROMACS ==&lt;br /&gt;
&lt;br /&gt;
(not all tested)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2018.3&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-2020.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-3.3.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-4.6.4&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.0.1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/local/gromacs-5.1.1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script to source therein (assuming [https://en.wikipedia.org/wiki/Bash_(Unix_shell) bash]): &amp;lt;code&amp;gt;bin/GMXRC.bash&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler &amp;amp; Co. =&lt;br /&gt;
&lt;br /&gt;
* is located in &amp;lt;code&amp;gt;/opt/intel/oneapi&amp;lt;/code&amp;gt;&lt;br /&gt;
* must be made available via &amp;lt;code&amp;gt;module use /opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; (unless you include &amp;lt;code&amp;gt;/opt/intel/oneapi/modulefiles&amp;lt;/code&amp;gt; in your &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt;), then &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; lists the available modules.&lt;/div&gt;</summary>
		<author><name>Brendel</name></author>
	</entry>
</feed>