CHUCK cluster - knowledge base

 


Programming environment

 

Compilers

 

GNU compilers

 

Standard GNU compilers are installed: gcc/g++/gfortran; debugger: gdb; profiler: gprof.
Documentation: man pages, online documentation for all versions.
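
A minimal sketch of a typical compile/debug/profile cycle (the source and program names are placeholders):

# compile with debug symbols and profiling instrumentation
gcc -O2 -g -pg -o myprog myprog.c
# run under the debugger
gdb ./myprog
# run normally, then inspect the profile written to gmon.out
./myprog
gprof ./myprog gmon.out
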
The default GCC version in Rocky 9 is 11.4 but newer versions are provided as Software Collections:


# list available collections:
scl list-collections
# load a collection in a new bash shell
scl enable gcc-toolset-13 bash

 

Please note that the 'scl load' command, which loads a collection into the current shell, is unsupported for gcc toolsets. If you need the toolset in the current shell (which is probably what you want in a job script), use:

source scl_source enable gcc-toolset-13

If you compile code using a software collection, please remember to load the same environment in the job script before execution, for example as in the sketch below.
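
A minimal job script sketch that restores the gcc-toolset-13 environment before running (the SBATCH options and program name are placeholders, not cluster defaults):

#!/bin/bash
#SBATCH --job-name=my_job        # example job name
#SBATCH --ntasks=1               # example resource request
# load the same collection that was used at compile time
source scl_source enable gcc-toolset-13
# run the program built with this toolset (placeholder name)
./my_program
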

 

Intel toolkit - compilers and libraries

 

The Intel HPC toolkit is installed on the cluster. It contains the Intel oneAPI toolkit + Fortran + MPI.
Documentation: List of components, Documentation


The oneAPI components are loaded as environment modules:

 

# to list available modules:
module avail
# to load a module in the current shell
module add compiler/latest
# to purge all loaded modules
module purge

 

If you compile code in a module environment, please remember to load the same modules in the job script before execution, for example:
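
A minimal job script sketch, assuming the code was built with the oneAPI compilers (icx/ifx) after `module add compiler/latest`; the SBATCH options and program name are placeholders:

#!/bin/bash
#SBATCH --job-name=intel_job     # example job name
#SBATCH --ntasks=1               # example resource request
# restore the module environment used at compile time
module purge
module add compiler/latest
# run the program built with the oneAPI compilers (placeholder name)
./my_program
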

 

MPI

 

MPI is a standard for parallel computation in the distributed-memory model. There are various MPI implementations on chuck: mvapich2, mpich, openmpi and Intel MPI. Each implementation can be activated using environment modules.

Use `module av` to list all available modules and `module add` to add a module to your working shell environment.

The highly recommended implementation on chuck is mvapich2, since it can exploit our fast InfiniBand network interfaces.

In all implementations the MPI compilers are named mpicc, mpicxx and mpifort. In practice these are shell scripts which call the ordinary compilers with the proper paths and MPI libraries; mvapich2, mpich and openmpi call the GNU compilers, while Intel MPI calls the Intel compilers. This means that you can use, e.g., mvapich2 with gcc-toolset-13:

 

source scl_source enable gcc-toolset-13
module add mpi/mvapich2-x86-64
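
Building on the environment loaded above, a minimal sketch of compiling and launching an MPI program (the source file name hello_mpi.c and the use of srun are assumptions; adjust to your job setup):

# compile through the MPI wrapper, which adds the MPI paths and libraries
mpicc -O2 -o hello_mpi hello_mpi.c
# in a job script, launch the tasks allocated by SLURM
srun ./hello_mpi
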

 

 

Note: please be careful not to mix different MPI implementations when using other libraries/software. For example, if you use mvapich2, you have to use the version of the HDF5 library compiled with mvapich2.

 

Important! By default, the system packages for mvapich2, mpich and openmpi are configured with options which can sometimes substantially reduce performance or cause problems when linking with other libraries. These options are related to so-called code hardening (https://best.openssf.org/Compiler-Hardening-Guides/Compiler-Options-Hardening-Guide-for-C-and-C++.html , https://wiki.gentoo.org/wiki/GCC_optimization#Hardening_optimizations ). You can see the full list of gcc options used by mvapich2 with the command `mpicc -show`. If you do not want the -fpic and -fPIE options, the mvapich2 environment on chuck provides locally modified scripts: mpicc-nh, mpicxx-nh and mpifort-nh.
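
For example, to inspect the wrapper flags and build without the hardening options (the source and program names are placeholders):

module add mpi/mvapich2-x86-64
# print the underlying compiler invocation, including the hardening flags
mpicc -show
# compile with the locally modified wrapper that omits those flags
mpicc-nh -O3 -o my_program my_program.c
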

 

Libraries

 

Popular scientific libraries are installed on the cluster. Besides the standard packages, some were recompiled locally for better optimization and additional features. All locally compiled software can be found in the /opt directory.

Currently available there:

 

  • fftw3 - the distribution packages provide version 3.3.8 of fftw3. In addition to this, in /opt we provide version 3.3.10 compiled locally with mvapich2 support (mpicc-nh, default gcc v11) and optimized for the broadwell architecture (to exploit AVX2 instructions); see the linking sketch after this list. Please remember to ask SLURM for broadwell and newer nodes: "-C broadwell,cascadelake,icelake". Both float and double versions are compiled. Exact compilation options:
    CFLAGS="-O3 -fomit-frame-pointer -mtune=broadwell -malign-double -fstrict-aliasing -fno-schedule-insns" MPICC=mpicc-nh ./configure --prefix=/opt/fftw/3.3.10-gcc11-mvapich2-broadwell/ --enable-avx --enable-avx2 --enable-fma --enable-shared --enable-sse2 --enable-threads --enable-mpi --enable-openmp
  • gsl in versions 2.5 and 2.7 (system package provides v2.6)
  • Sleef v3.6 - a vectorized math library (https://sleef.org/)
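
A minimal sketch of compiling and linking against the locally built fftw3 (the source and program names, the lib subdirectory and the choice of double-precision MPI libraries are assumptions; adjust to your code):

module add mpi/mvapich2-x86-64
FFTW=/opt/fftw/3.3.10-gcc11-mvapich2-broadwell
# compile and link against the local build (placeholder file name)
mpicc-nh -O3 -I$FFTW/include -o my_fft my_fft.c -L$FFTW/lib -lfftw3_mpi -lfftw3 -lm
# make the library visible at run time
export LD_LIBRARY_PATH=$FFTW/lib:$LD_LIBRARY_PATH
# in the job script, remember to request AVX2-capable nodes, e.g.:
# sbatch -C broadwell,cascadelake,icelake ...
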

 

HDF4 & HDF5

 

The distribution packages provide HDF4 v.4.2.15 and HDF5 v.1.12.1, but without MPI support.

 

MPI support is provided by additional packages for the openmpi and mpich implementations. To use them you have to provide the proper paths both for compilation (include and library dirs) and at runtime (LD_LIBRARY_PATH):

  • for hdf5 with openmpi use: -I/usr/include/openmpi-x86_64/  and  -L/usr/lib64/openmpi/lib/
  • for hdf5 with mpich use: -I/usr/include/mpich-x86_64/  and   -L/usr/lib64/mpich/lib/

 

HDF5 built with the mvapich2 implementation (recommended on chuck) was compiled locally and can be found in /opt/hdf/hdf-1.14.5-mvapich2/:

  • for hdf5 with mvapich2 use: -I/opt/hdf/hdf-1.14.5-mvapich2/include and -L/opt/hdf/hdf-1.14.5-mvapich2/lib/
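
A minimal sketch of compiling against the local HDF5 build (the source and program names are placeholders; add -lhdf5_hl if your code uses the high-level API):

module add mpi/mvapich2-x86-64
HDF5=/opt/hdf/hdf-1.14.5-mvapich2
# compile and link against the local build (placeholder file name)
mpicc-nh -I$HDF5/include -o h5_test h5_test.c -L$HDF5/lib -lhdf5
# make the library visible at run time
export LD_LIBRARY_PATH=$HDF5/lib:$LD_LIBRARY_PATH
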





 


 

Queueing system

TBW
 

Introduction

 

Rules and limits

 

SLURM commands


