Installing and Loading HPC-X

To install HPC-X:

  1. Extract hpcx.tbz into your current working directory.


    tar -xvf hpcx.tbz


  2. Update the shell variable with the location of the HPC-X installation.


    $ cd hpcx
    $ export HPCX_HOME=$PWD

HPC-X includes Open MPI v4.1.x. Each Open MPI version has its own module file which can be used to load the desired version.

The symbolic links hpcx-init.sh and modulefiles/hpcx point to the default version (Open MPI v4.1.x).
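
For example, the module files shipped with the package can be listed as in the following minimal sketch; the exact module names depend on the Open MPI versions bundled in your HPC-X release:

% module use $HPCX_HOME/modulefiles
% module avail hpcx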

To load the Open MPI/OpenSHMEM v4.1.x based package:


% source $HPCX_HOME/hpcx-init.sh
% hpcx_load
% env | grep HPCX
% mpicc $HPCX_MPI_TESTS_DIR/examples/hello_c.c -o $HPCX_MPI_TESTS_DIR/examples/hello_c
% mpirun -np 2 $HPCX_MPI_TESTS_DIR/examples/hello_c
% oshcc $HPCX_MPI_TESTS_DIR/examples/hello_oshmem_c.c -o $HPCX_MPI_TESTS_DIR/examples/hello_oshmem_c
% oshrun -np 2 $HPCX_MPI_TESTS_DIR/examples/hello_oshmem_c
% hpcx_unload

As of version 1.7, Mellanox no longer distributes HPC-X builds based on the Intel compiler suite. However, after following the HPC-X deployment example above, HPC-X can subsequently be rebuilt from source with your Intel compiler suite as follows:


$ tar xfp ${HPCX_HOME}/sources/openmpi-gitclone.tar.gz
$ cd ${HPCX_HOME}/sources/openmpi-gitclone
$ ./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=${HPCX_HOME}/ompi-icc \
    --with-hcoll=${HPCX_HOME}/hcoll \
    --with-ucx=${HPCX_HOME}/ucx \
    --with-platform=contrib/platform/mellanox/optimized \
    2>&1 | tee config-icc-output.log
$ make -j32 all 2>&1 | tee build_icc.log && make -j24 install 2>&1 | tee install_icc.log

In the above example, four switches are used to specify the compiler suite:

  • CC: Specifies the C compiler

  • CXX: Specifies the C++ compiler

  • F77: Specifies the Fortran 77 compiler

  • FC: Specifies the Fortran 90 compiler

Warning

We strongly recommend using a single compiler suite whenever possible. Unexpected or undefined behavior can occur when you mix compiler suites in unsupported ways (e.g., mixing Fortran 77 and Fortran 90 compilers from different compiler suites is almost guaranteed not to work).

In all cases, the Intel compiler suite must be found in your PATH and must be able to successfully compile and link non-MPI applications before Open MPI can be built properly.
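
For example, a quick sanity check that the compilers are visible in PATH and can build a plain (non-MPI) program might look like the following sketch; hello.c here stands for any trivial standalone C source file you supply:

$ which icc icpc ifort
$ icc hello.c -o hello && ./hello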

For rebuilding HPC-X open-source components, please use the helper script as described in the "Rebuilding Open MPI Using a Helper Script" section.
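
Once the rebuild in the Intel compiler example above completes, the new installation can be used by pointing the environment at the prefix passed to configure. A minimal sketch, assuming the ${HPCX_HOME}/ompi-icc prefix and the default lib directory layout:

$ export PATH=${HPCX_HOME}/ompi-icc/bin:$PATH
$ export LD_LIBRARY_PATH=${HPCX_HOME}/ompi-icc/lib:$LD_LIBRARY_PATH
$ which mpicc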

To load the Open MPI/OpenSHMEM v4.1.x based package using environment modules:

% module use $HPCX_HOME/modulefiles
% module load hpcx
% mpicc $HPCX_MPI_TESTS_DIR/examples/hello_c.c -o $HPCX_MPI_TESTS_DIR/examples/hello_c
% mpirun -np 2 $HPCX_MPI_TESTS_DIR/examples/hello_c
% oshcc $HPCX_MPI_TESTS_DIR/examples/hello_oshmem_c.c -o $HPCX_MPI_TESTS_DIR/examples/hello_oshmem_c
% oshrun -np 2 $HPCX_MPI_TESTS_DIR/examples/hello_oshmem_c
% module unload hpcx

Starting from version 2.1, the HPC-X toolkit is provided with a set of environments. Select the environment that best meets your needs:

  • HPC-X with CUDA® support - hpcx

    Warning

    CUDA support is no longer available in SLES 11, RHEL 6, and RHEL OSs lower than 7.4 with the PPC architecture.

    This is the default option, optimized for best performance in single-thread mode. It supports both GPU and non-GPU setups.

    Warning

    Starting with CUDA 11.0, the minimum recommended GCC compiler is GCC 5, due to C++11 requirements in CUDA libraries such as cuFFT and CUB.

  • HPC-X with multi-threading support - hpcx-mt
    This option enables multi-threading support in all of the HPC-X components. Use this module to run multi-threaded applications (see the loading example after this list).

  • HPC-X for profiling - hpcx-prof
    This option enables UCX compiled with profiling information.

  • HPC-X for debug - hpcx-debug
    This option enables UCX/HCOLL/SHARP compiled in debug mode.

  • HPC-X stack - hpcx-stack
    This environment contains all of the libraries that the default ("vanilla") HPC-X environment has, except for Open MPI.
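
For example, loading the multi-threaded environment instead of the default one could look like the following minimal sketch; the hpcx-mt module name follows the modulefiles directory shipped in the package, and the HPC-X README remains the authoritative reference for the exact names:

% module use $HPCX_HOME/modulefiles
% module load hpcx-mt
% env | grep HPCX
% module unload hpcx-mt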

Warning

When HPC-X is launched with Open MPI without a resource manager job environment (Slurm, PBS, etc.), or when it is launched from a compute node, the default rsh/ssh-based launcher is used. This launcher does not propagate environment variables to the compute nodes, so it is important to ensure that the LD_LIBRARY_PATH variable from HPC-X is propagated as follows:


% mpirun -x LD_LIBRARY_PATH -np 2 -H host1,host2 $HPCX_MPI_TESTS_DIR/examples/hello_c

Warning

Note that only one of the environments can be loaded at a time.

For information on how to load and use the additional environments, please refer to the HPC-X README file (embedded in the HPC-X package).

HPC-X supports the Singularity containerization technology, which helps deploy and run distributed applications without launching an entire virtual machine (VM) for each application.
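
As an illustration only, launching one of the bundled MPI tests from such a container could look like the sketch below; the image name hpcx.sif and the path to the test binary inside the container are hypothetical, and the instructions referenced below describe how to actually build the image:

% mpirun -np 2 singularity exec hpcx.sif /opt/hpcx/tests/examples/hello_c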

For instructions on the technology and how to create a standalone Singularity container with MLNX_OFED and HPC-X inside, please visit:
