Hardware inventory

 

General specs: 49 nodes, 1040 cores, InfiniBand network, 840 TB storage

 

node          | cpu                  | total cores/node | RAM (GB) | GPU model                          | gres/gpu spec     | features
chuck-[1-4]   | 2 x Xeon E5-2665     | 16               | 64       | -                                  | -                 | E5-2665, mpi3, sandybridge
chuck-[5-10]  | 2 x Xeon E5-2665     | 16               | 64       | -                                  | -                 | E5-2665, mpi3, sandybridge
chuck-[11-12] | 2 x Xeon E5-2680 v2  | 20               | 64       | -                                  | -                 | E5-2680v2, mpi2, ivybridge
chuck-[13-24] | 2 x Xeon E5-2680 v2  | 20               | 32       | -                                  | -                 | E5-2680v2, mpi2, ivybridge
chuck-[25-28] | 2 x Xeon E5-2640 v4  | 20               | 64       | -                                  | -                 | E5-2640v4, mpi1, broadwell
chuck-[29-32] | 2 x Xeon Silver 4316 | 40               | 128      | -                                  | -                 | Silver4316, icelake
chuck-[39-50] | 2 x Xeon E5-2640 v4  | 20               | 64       | -                                  | -                 | E5-2640v4, mpi1, broadwell
chuck-[51-53] | 1 x Xeon Silver 4216 | 16               | 384      | 2 x Nvidia Quadro RTX 6000         | gres/gpu:turing=2 | Silver4216, mpi4, cascadelake, gpu_QuadroRTX6000
chuck-54      | 2 x Xeon Silver 4316 | 40               | 1024     | 2 x Nvidia Tesla A100 80 GB        | gres/gpu:ampere=2 | Silver4316, icelake, gpu_A100
chuck-58 (*)  | 2 x Xeon Silver 4216 | 32               | 768      | 8 x Nvidia Tesla V100 32 GB NVLink | gres/gpu:volta:8  | Silver4216, mpi4, cascadelake, gpu_V100

(*) chuck-58 is reserved for the gravitational waves group.
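The current node configuration can also be queried directly from SLURM; the following is a generic sinfo call (standard SLURM format specifiers, nothing specific to this cluster):

    sinfo -N -o "%N %c %m %f %G"    # node name, CPUs, memory (MB), features, gres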

 

To use a GPU you have to submit to the gpu partition and specify the number of requested GPUs, e.g. "-p gpu --gres=gpu:ampere:1". The gres values in the table give the maximum number of GPUs of the given architecture available in a node; see the example batch script below.
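A minimal batch script requesting one A100 (ampere) GPU might look like the following sketch; the job name, CPU count, and memory request are illustrative assumptions, not site requirements, and you would substitute turing or volta to target the other GPU nodes:

    #!/bin/bash
    #SBATCH -J gpu-test               # illustrative job name
    #SBATCH -p gpu                    # GPU partition
    #SBATCH --gres=gpu:ampere:1       # one A100 (ampere) GPU; at most 2 per node (chuck-54)
    #SBATCH -c 4                      # assumed CPU count, adjust to your job
    #SBATCH --mem=32G                 # assumed memory request, adjust to your job

    nvidia-smi                        # placeholder payload: print the allocated GPU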

Features can be used to refine node selection. In particular, the feature names include the CPU architecture as defined in gcc: e.g., if you submit with the SLURM option "-C broadwell", you can optimize the code with the gcc option "-march=broadwell". Please note that the nodes are inhomogeneous: if you do not want to restrict your jobs to the newer nodes, the code has to be compiled with "-march=sandybridge" (the oldest architecture in the cluster), as in the sketch below.
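As a sketch of the two approaches (myprog.c is a hypothetical source file; flags other than -march are just examples):

    # optimize for a specific architecture and pin the job to matching nodes
    gcc -O2 -march=broadwell -o myprog myprog.c
    sbatch -C broadwell --wrap="./myprog"

    # portable build: runs on any node in the cluster, including the oldest ones
    gcc -O2 -march=sandybridge -o myprog myprog.c
    sbatch --wrap="./myprog"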