Installing HPC-X

To install HPC-X:

  1. Extract hpcx.tbz into your current working directory.

    tar -xvf hpcx.tbz

  2. Set a shell variable to the location of the HPC-X installation (a sketch for making it persistent follows these steps).

    $ cd hpcx
    $ export HPCX_HOME=$PWD
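
The export above only lasts for the current shell. A minimal sketch for making it persistent, assuming a bash login shell and that HPC-X was extracted under /opt (adjust the path to wherever you extracted hpcx.tbz):

    $ echo 'export HPCX_HOME=/opt/hpcx' >> ~/.bashrc    # path is an assumption
    $ source ~/.bashrc
    $ echo $HPCX_HOME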

Building and Running Applications with HPC-X

HPC-X includes Open MPI v4.1.x. Each Open MPI version has its own module file which can be used to load the desired version.

The symbolic links hpcx-init.sh and modulefiles/hpcx point to the default version (Open MPI v4.1.x).

To load the Open MPI/OpenSHMEM v4.1.x-based package:

% source $HPCX_HOME/hpcx-init.sh
% hpcx_load
% env | grep HPCX
% mpicc $HPCX_MPI_TESTS_DIR/examples/hello_c.c -o $HPCX_MPI_TESTS_DIR/examples/hello_c
% mpirun -np 2 $HPCX_MPI_TESTS_DIR/examples/hello_c
% oshcc $HPCX_MPI_TESTS_DIR/examples/hello_oshmem_c.c -o $HPCX_MPI_TESTS_DIR/examples/hello_oshmem_c
% oshrun -np 2 $HPCX_MPI_TESTS_DIR/examples/hello_oshmem_c
% hpcx_unload
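
The same load/run/unload pattern can be scripted for a batch scheduler. A minimal sketch of a Slurm job script, assuming Slurm is available and HPCX_HOME is set on the compute nodes (node and task counts are illustrative):

#!/bin/bash
#SBATCH -N 2
#SBATCH --ntasks-per-node=1
# Load the HPC-X environment inside the job, run the bundled test, then unload.
source $HPCX_HOME/hpcx-init.sh
hpcx_load
mpirun -np 2 $HPCX_MPI_TESTS_DIR/examples/hello_c
hpcx_unload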

Building HPC-X with the Intel Compiler Suite

As of version 1.7, Mellanox no longer distributes HPC-X builds based on the Intel compiler suite. However, after following the HPC-X deployment example below, HPC-X can subsequently be rebuilt from source with your Intel compiler suite as follows:

$ tar xfp ${HPCX_HOME}/sources/openmpi-gitclone.tar.gz
$ cd ${HPCX_HOME}/sources/openmpi-gitclone
$ ./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=${HPCX_HOME}/ompi-icc \
--with-hcoll=${HPCX_HOME}/hcoll \
--with-ucx=${HPCX_HOME}/ucx \
--with-platform=contrib/platform/mellanox/optimized \
2>&1 | tee config-icc-output.log 
$ make -j32 all 2>&1 | tee build_icc.log && make -j24 install 2>&1 | tee install_icc.log
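
Once the install step completes, a quick sanity check of the Intel-built Open MPI is to compile and run the hello_c example from the extracted source tree with the new wrappers. A hedged sketch, with paths following the --prefix and source locations used above:

$ export PATH=${HPCX_HOME}/ompi-icc/bin:$PATH
$ export LD_LIBRARY_PATH=${HPCX_HOME}/ompi-icc/lib:$LD_LIBRARY_PATH
$ mpicc --showme                      # confirm the wrapper now invokes icc
$ mpicc ${HPCX_HOME}/sources/openmpi-gitclone/examples/hello_c.c -o hello_c_icc
$ mpirun -np 2 ./hello_c_icc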

In the configure invocation above, four switches are used to specify the compiler suite:

  • CC=icc - specifies the C compiler
  • CXX=icpc - specifies the C++ compiler
  • F77=ifort - specifies the Fortran 77 compiler
  • FC=ifort - specifies the Fortran 90 compiler

We strongly recommend using a single compiler suite whenever possible. Unexpected or undefined behavior can occur when you mix compiler suites in unsupported ways (e.g., mixing Fortran 77 and Fortran 90 compilers from different compiler suites is almost guaranteed not to work).

In all cases, the Intel compiler suite must be found in your PATH and must be able to successfully compile and link non-MPI applications before Open MPI can be built properly.
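
A quick way to verify this before running configure is to compile and link a trivial non-MPI program with each Intel compiler. A minimal sketch:

$ which icc icpc ifort                            # all three must resolve from PATH
$ echo 'int main(void) { return 0; }' > conftest.c
$ icc conftest.c -o conftest_c && ./conftest_c && echo "icc OK"
$ printf 'program t\nend program t\n' > conftest.f90
$ ifort conftest.f90 -o conftest_f && ./conftest_f && echo "ifort OK"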

To rebuild HPC-X open-source components, use the helper script as described in the "Rebuilding Open MPI Using a Helper Script" section.

Loading HPC-X Environment from Modules

To load the Open MPI/OpenSHMEM v4.1.x-based package:

% module use $HPCX_HOME/modulefiles
% module load hpcx
% mpicc $HPCX_MPI_TESTS_DIR/examples/hello_c.c -o $HPCX_MPI_TESTS_DIR/examples/hello_c
% mpirun -np 2 $HPCX_MPI_TESTS_DIR/examples/hello_c
% oshcc $HPCX_MPI_TESTS_DIR/examples/hello_oshmem_c.c -o $HPCX_MPI_TESTS_DIR/examples/hello_oshmem_c
% oshrun -np 2 $HPCX_MPI_TESTS_DIR/examples/hello_oshmem_c
% module unload hpcx
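
The "module use" line only affects the current shell. A hedged sketch for making the HPC-X modulefiles visible in new sessions and inspecting the available variants (appending to ~/.bashrc is an assumption about your shell setup):

% echo "module use $HPCX_HOME/modulefiles" >> ~/.bashrc
% module avail hpcx          # list the HPC-X module variants shipped in the package
% module show hpcx           # show what loading the default module changes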

HPC-X Environments

Starting from version 2.1, the HPC-X toolkit is provided with a set of environments. Select the environment that best meets your needs.

  • HPC-X with CUDA® support - hpcx

    CUDA support in SLES 11, RHEL 6, and RHEL releases lower than 7.4 on the PPC architecture is no longer available.

    This is the default option, optimized for best performance in single-thread mode. It supports both GPU and non-GPU setups.

    HPC-X is compiled against CUDA version 11.2, which does not support GCC versions newer than v10. Therefore, HPC-X built on systems with GCC versions above v10 will not have CUDA support.
    Starting with CUDA 11.0, the minimum recommended GCC compiler is GCC 5, due to C++11 requirements in CUDA libraries (e.g., cuFFT and CUB).

  • HPC-X with multi-threading support - hpcx-mt
    This option enables multi-threading support in all of the HPC-X components. Use this module to run multi-threaded applications (see the sketch after this list).

  • HPC-X for profiling - hpcx-prof
    This option provides UCX compiled with profiling information.

  • HPC-X for debug - hpcx-debug
    This option provides UCX, HCOLL, and SHARP compiled in debug mode.

  • HPC-X stack - hpcx-stack
    This environment contains all of the libraries that the default ('vanilla') HPC-X environment has, except for Open MPI (OMPI).
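
Each environment above is exposed through its own modulefile. A minimal sketch of switching from the default environment to the multi-threaded one (module names follow the list above):

% module use $HPCX_HOME/modulefiles
% module load hpcx-mt
% mpirun -np 2 $HPCX_MPI_TESTS_DIR/examples/hello_c
% module unload hpcx-mt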

When HPC-X is launched with Open MPI in an environment where no resource manager (Slurm, PBS, etc.) is installed, or when it is launched from a compute node, Open MPI's default rsh/ssh-based launcher is used. This launcher does not propagate the library path to the compute nodes, so make sure to pass the LD_LIBRARY_PATH variable as follows:

% mpirun -x LD_LIBRARY_PATH -np 2 -H host1,host2 $HPCX_MPI_TESTS_DIR/examples/hello_c
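
The same applies when hosts are listed in a hostfile or when additional environment variables must reach the remote ranks. A hedged variant of the command above (the hostfile name and the extra UCX variable are illustrative):

% mpirun -x LD_LIBRARY_PATH -x UCX_LOG_LEVEL -np 2 -hostfile myhosts \
      $HPCX_MPI_TESTS_DIR/examples/hello_c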

Note that only one of these environments can be loaded and used at a time.

For information on how to load and use the additional environments, please refer to the HPC-X README file (embedded in the HPC-X package).

HPC-X and Singularity

HPC-X supports the Singularity containerization technology, which helps deploy and run distributed applications without launching an entire virtual machine (VM) for each application.

For instructions on the technology and how to create a standalone Singularity container with MLNX_OFED and HPC-X inside, please visit:
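
Once such a container image exists, a common launch pattern is to let the host-side Open MPI start the ranks while each rank executes inside the container. A minimal sketch, where the image name and the application path inside it are assumptions:

% mpirun -np 2 -H host1,host2 -x LD_LIBRARY_PATH \
      singularity exec hpcx_app.sif /opt/app/hello_c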