====== Available computing partitions ======
Preliminary Remark: The SLURM scheduler follows fair-share rules and assigns a priority score to each user account. Depending on the amount of resources (wall-clock and CPU time, main memory usage, generic resources) a user has allocated within the last seven days, the scheduler calculates the priority of that user's pending jobs.
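The current fair-share standing and the resulting job priorities can be inspected with the standard SLURM client tools; a minimal sketch (the exact columns shown depend on the site configuration):

<code bash>
# show the fair-share factor of your own user/association
sshare -u $USER

# show the priority factors (age, fair-share, partition, ...) of your pending jobs
sprio -u $USER
</code>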
\\
\\
* smp (3 nodes): 3 TB DDR3/DDR4 main memory and 64 cores Intel Xeon E7-4850 v4 at 2.10 GHz (AVX2); two nodes (smp002 and smp003) each with 2 Nvidia Tesla P100 with 16 GB GPU memory
* standard (159 nodes): 250 GB DDR4 main memory and 20 cores Intel Xeon E5-2630 v4 at 2.20 GHz (AVX2)
* gpu (22 nodes): 500 GB DDR4 main memory, 2 Nvidia Tesla P100 with 16 GB GPU memory and 20 cores Intel Xeon E5-2630 v4 at 2.20 GHz (AVX2)
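A minimal batch script requesting one of the partitions above could look as follows (a sketch only: the GRES name for the P100 cards, the resource values and the executable name are assumptions and may have to be adapted to the actual cluster configuration):

<code bash>
#!/bin/bash
#SBATCH --partition=gpu          # one of: smp, standard, gpu
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=10
#SBATCH --gres=gpu:1             # request one Tesla P100 (GRES name assumed)
#SBATCH --mem=64G
#SBATCH --time=02:00:00

srun ./my_gpu_application        # placeholder for the actual executable
</code>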
\\
Remarks:
==> SMP nodes allow massively parallel intra-node shared-memory calculations (see the example below)! <==
All nodes are interconnected by an Intel Omni-Path switched fabric (based on InfiniBand, but augmented by advanced QoS and packet handling).
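For the intra-node shared-memory case mentioned in the remark above, a job should request many cores of a single SMP node and run a threaded (e.g. OpenMP) program; a sketch with assumed core, memory and runtime values:

<code bash>
#!/bin/bash
#SBATCH --partition=smp
#SBATCH --nodes=1                  # shared-memory parallelism stays on one node
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=64         # up to 64 cores per SMP node
#SBATCH --mem=1T                   # part of the 3 TB main memory
#SBATCH --time=12:00:00

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./my_openmp_application       # placeholder for the actual executable
</code>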
\\
date of revision: 09-23-2019 © kraus