CHUCK cluster - knowledge base

 


 


 

Programming environment

 

Compilers

 

GNU compilers

 

Standard GNU compilers are installed: gcc/g++/gfortran; debugger: gdb; profiler: gprof.
Documentation: man pages, online documentation for all versions.
The default GCC version in Rocky 9 is 11.4, but newer versions are provided as Software Collections:


# list available collections:
scl list-collections
# load collection in current shell
scl enable gcc-toolset-13 bash
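
For example, after enabling the toolset you can verify the compiler version and use the standard debugging and profiling workflow. This is only a sketch; example.c and the program name are placeholders:

# start a shell with the newer GCC from the toolset
scl enable gcc-toolset-13 bash
gcc --version                 # should now report GCC 13, not the default 11.4
# build with debug symbols and profiling instrumentation
gcc -O2 -g -pg example.c -o example
./example                     # the instrumented run writes gmon.out
gdb ./example                 # interactive debugging
gprof ./example gmon.out      # text profile from the instrumented run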

 

Please note that the 'scl load' command, which loads a collection into the current shell, is not supported for the GCC toolsets. One has to use 'scl enable', which starts a new shell.

If you compile code using a software collection, please remember to load the same environment in the job script before execution.
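
A minimal sketch of a SLURM batch script that does this (the job parameters and the program name are placeholders, adjust them to your case):

#!/bin/bash
#SBATCH --job-name=gcc13-example
#SBATCH --ntasks=1
# run the binary inside the same gcc-toolset-13 environment used for compilation
scl enable gcc-toolset-13 -- ./example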

 

Intel toolkit - compilers and libraries

 

The Intel HPC toolkit is installed on the cluster. It contains the Intel oneAPI toolkit plus the Fortran compiler and Intel MPI.
Documentation: List of components, Documentation


The oneAPI components are loaded as environment modules:

 

# to list available modules:
module avail
# to load a module in current shell
module add compiler/latest
# to purge all loaded modules
module purge

 

If you compile code in a module environment, please remember to load the same modules in the job script before execution.
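
For example, a serial build with the oneAPI C compiler and the matching lines for the job script might look like this (example.c is a placeholder; icx is the oneAPI C/C++ compiler made available by compiler/latest):

# interactive session: load the compiler module and build
module add compiler/latest
icx -O2 example.c -o example

# job script: load the same module before running the binary
module purge
module add compiler/latest
./example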

 

MPI

 

MPI is a standard for parallel computations in the distributed-memory model. There are several MPI implementations on chuck: mvapich2, mpich, openmpi and Intel MPI. Each implementation can be activated using environment modules. Use `module av` to list all available modules.

The highly recommended implementation on chuck is mvapich2, since it can exploit our fast InfiniBand network interfaces.

In all implementations the MPI compiler wrappers are named mpicc, mpicxx and mpifort. In practice these are shell scripts which call the ordinary compilers with the proper paths and MPI libraries: mvapich2, mpich and openmpi call the GNU compilers, while Intel MPI calls the Intel compilers. This means that you can use, e.g., mvapich2 with gcc-toolset-13:

 

scl enable gcc-toolset-13 bash
module add mpi/mvapich2-x86_64
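
A short sketch of building and running an MPI program with mvapich2 (hello_mpi.c, the rank count and the launch command are illustrative; adapt them to your allocation and the queueing rules below):

# build with the mvapich2 wrapper, which calls gcc underneath
mpicc -O2 hello_mpi.c -o hello_mpi
# launch with the launcher shipped with the MPI implementation
mpiexec -np 4 ./hello_mpi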

 

Important! By default, the system packages for mvapich2, mpich and openmpi are configured with options which can sometimes substantially reduce performance or cause problems when linking with other libraries. These options are related to so-called code hardening (https://best.openssf.org/Compiler-Hardening-Guides/Compiler-Options-Hardening-Guide-for-C-and-C++.html , https://wiki.gentoo.org/wiki/GCC_optimization#Hardening_optimizations ). You can see the full list of gcc options used by mvapich2 with the command `mpicc -show`. If you do not want to use the -fpic and -fPIE options, the mvapich2 environment on chuck provides locally modified wrapper scripts: mpicc-nh, mpicxx-nh and mpifort-nh.
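
For instance, you can inspect the flags added by the standard wrapper and switch to the non-hardened one when they get in the way (hello_mpi.c is a placeholder):

# print the full gcc command line used by the standard wrapper
mpicc -show
# build with the locally provided non-hardened wrapper instead
mpicc-nh -O3 hello_mpi.c -o hello_mpi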

 

Libraries

 

Popular scientific libraries are installed on the cluster. Besides the standard packages, some were recompiled locally for better optimization and additional features. All locally compiled software can be found in the /opt directory. It currently contains:

 

  • fftw3 - compiled with mvapich2 (mpicc-nh, default gcc v11) and optimized for the Broadwell architecture (to exploit AVX2 instructions). Please remember to ask SLURM for Broadwell and newer nodes ("-C broadwell,cascadelake,icelake"). Both the float and double versions are compiled; a linking sketch follows after this list. Exact compilation options:
    CFLAGS="-O3 -fomit-frame-pointer -mtune=broadwell -malign-double -fstrict-aliasing -fno-schedule-insns" MPICC=mpicc-nh ./configure --prefix=/opt/fftw/3.3.10-gcc11-mvapich2-broadwell/ --enable-avx --enable-avx2 --enable-fma --enable-shared --enable-sse2 --enable-threads --enable-mpi --enable-openmp
  • gsl in versions 2.5 and 2.7 (system package provides v2.6)
  • Sleef v3.6 - a vectorized math library (https://sleef.org/)
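
A sketch of linking against the locally compiled FFTW from /opt and submitting to suitable nodes (the source file name and the library list are assumptions; verify the exact paths under /opt on the cluster):

# build with the non-hardened mvapich2 wrapper against the Broadwell-optimized FFTW
mpicc-nh -O3 -I/opt/fftw/3.3.10-gcc11-mvapich2-broadwell/include \
    my_fft.c -o my_fft \
    -L/opt/fftw/3.3.10-gcc11-mvapich2-broadwell/lib -lfftw3_mpi -lfftw3 -lm

# submit with the node constraint recommended above
sbatch -C broadwell,cascadelake,icelake job.sh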

 

HDF4, HDF5 - the distribution packages provide HDF4 v.4.2.15 and HDF5 v.1.12.1. There is no MPI support - please contact the administrators if you need it.
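
If the serial HDF5 from the distribution package is enough, the h5cc wrapper shipped with it can be used to build against it. This is a sketch under the assumption that the development package (and thus h5cc) is installed; my_h5.c is a placeholder:

# h5cc calls the system compiler with the HDF5 include and library paths
h5cc -O2 my_h5.c -o my_h5
# print the configuration of the installed HDF5 (confirms the serial-only build)
h5cc -showconfig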




Back to top

 


 

Queueing system

TBW
 

Introduction

 

Rules and limits

 

SLURM commands



Back to top