
Available computing partitions

Preliminary remark: The SLURM scheduler obeys fair-share rules and applies a score to each user account. Depending on the amount of resources (wall clock and CPU time, main memory usage, generic resources) allocated by a user within the last seven days, the scheduler calculates the priority of the jobs being scheduled.
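
Your current fair-share usage and the resulting priority of your pending jobs can be inspected with the standard SLURM client tools (a minimal sketch; the exact output columns depend on how SLURM is configured on this cluster):

    # show the fair-share factor and recent raw usage of your account
    sshare -u $USER

    # show how age, fair-share and other factors contribute to the priority of your pending jobs
    sprio -u $USER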

  • smp (3 nodes): 3 TB DDR3/DDR4 main memory and 64 cores Intel Xeon E7-4850 v4 at 2.10 GHz (AVX2); two nodes (smp002 and smp003) with 2 Nvidia Tesla P100 with 16 GB GPU memory
  • standard (159 nodes): 250 GB DDR4 main memory and 20 cores Intel Xeon E5-2630 v4 at 2.20 GHz (AVX2)
  • gpu (22 nodes): 500 GB DDR4 main memory, 2 Nvidia Tesla P100 with 16 GB GPU memory and 20 cores Intel Xeon E5-2630 v4 at 2.20 GHz (AVX2)
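
A minimal sbatch script sketch for selecting one of these partitions (the partition names and core/GPU counts follow the list above; the walltime and the application binary are placeholder assumptions):

    #!/bin/bash
    #SBATCH --partition=gpu           # one of: smp, standard, gpu
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=20      # standard and gpu nodes provide 20 cores
    #SBATCH --gres=gpu:2              # request both Tesla P100 GPUs (gpu partition, or smp002/smp003)
    #SBATCH --time=02:00:00           # wall clock limit, example value

    srun ./my_application             # placeholder for your own binary

Submit with "sbatch jobscript.sh"; the resources allocated to the job count towards the fair-share score described above.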


Remarks:

SMP nodes allow massively parallel intra-node shared-memory calculations. All nodes are interconnected by an Omni-Path switched fabric from Intel (based on InfiniBand, but augmented by advanced QoS and packet handling).
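
For the shared-memory case mentioned above, a whole smp node can be requested for a single multi-threaded (e.g. OpenMP) process. This is a sketch under the assumption that the application is the placeholder binary my_openmp_application:

    #!/bin/bash
    #SBATCH --partition=smp
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=64        # smp nodes provide 64 cores

    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    srun ./my_openmp_application      # placeholder for a shared-memory binary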


date of revision: 09-23-2019 © kraus
