Migration to RDMA-Core

User-Mode Memory Registration (UMR)

UMR is a fast registration mode that uses the send queue. It enables RDMA operations to scatter data on the remote side through appropriately defined memory keys.

The following UMR and Non-Inline UMR experimental verbs are no longer the default verbs used to configure either feature. The deprecated experimental verbs are listed first, followed by their RDMA-Core replacements.

Experimental UMR Verbs:

  • ibv_exp_create_qp

    • exp_create_flags: IBV_EXP_QP_CREATE_UMR

    • max_inl_send_klms

  • ibv_exp_query_device

    • umr_cap

    • umr_fixed_size_caps

  • ibv_exp_post_send

    • exp_opcode: IBV_EXP_WR_UMR_FILL, IBV_EXP_WR_UMR_INVALIDATE

    • ext_op: umr (umr_type, memory_objects, exp_access, modified_mr, base_addr, num_mrs, mem_reg_list, mem_repeat_block_list, repeat_count, stride_dim)

  • ibv_exp_poll_cq

    • exp_opcode: IBV_EXP_WC_UMR

  • ibv_exp_create_mr

  • ibv_exp_query_mkey

Experimental Non-Inline UMR Verbs:

  • environment variable: MLX*_POST_SEND_PREFER_BF

  • ibv_exp_dealloc_mkey_list_memory

  • ibv_exp_alloc_mkey_list_memory

  • IBV_EXP_DEVICE_UMR, IBV_EXP_DEVICE_UMR_FIXED_SIZE, IBV_EXP_DEVICE_MR_ALLOCATE

  • IBV_EXP_DEVICE_ATTR_UMR, IBV_EXP_DEVICE_ATTR_UMR_FIXED_SIZE_CAPS

RDMA-Core UMR Verbs:

  • ibv_create_qp_ex

    • send_ops_flags: IBV_QP_EX_WITH_BIND_MW, IBV_QP_EX_WITH_LOCAL_INV

  • mlx5dv_create_qp

    • send_ops_flags: MLX5DV_QP_EX_WITH_MR_INTERLEAVED, MLX5DV_QP_EX_WITH_MR_LIST

  • mlx5dv_wr_post

    • mlx5dv_wr_mr_interleaved

    • mlx5dv_wr_mr_list

  • mlx5dv_wc_opcode

    • MLX5DV_WC_UMR
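
As a minimal sketch (not taken from this documentation), the flags listed above can be combined when the QP is created through mlx5dv_create_qp; context, pd, and cq stand for an already opened device context, protection domain, and completion queue:

#include <infiniband/verbs.h>
#include <infiniband/mlx5dv.h>

struct ibv_qp *create_umr_qp(struct ibv_context *context, struct ibv_pd *pd,
                             struct ibv_cq *cq)
{
    struct ibv_qp_init_attr_ex attr_ex = {};
    struct mlx5dv_qp_init_attr dv_attr = {};

    attr_ex.qp_type = IBV_QPT_RC;
    attr_ex.send_cq = cq;
    attr_ex.recv_cq = cq;
    attr_ex.cap.max_send_wr = 4;
    attr_ex.cap.max_recv_wr = 4;
    attr_ex.cap.max_send_sge = 1;
    attr_ex.cap.max_recv_sge = 1;
    attr_ex.pd = pd;
    attr_ex.comp_mask = IBV_QP_INIT_ATTR_PD | IBV_QP_INIT_ATTR_SEND_OPS_FLAGS;
    /* Standard work requests: memory-window bind and local invalidate */
    attr_ex.send_ops_flags = IBV_QP_EX_WITH_BIND_MW | IBV_QP_EX_WITH_LOCAL_INV;

    /* mlx5 provider work requests: interleaved and list memory registration */
    dv_attr.comp_mask = MLX5DV_QP_INIT_ATTR_MASK_SEND_OPS_FLAGS;
    dv_attr.send_ops_flags = MLX5DV_QP_EX_WITH_MR_INTERLEAVED |
                             MLX5DV_QP_EX_WITH_MR_LIST;

    return mlx5dv_create_qp(context, &attr_ex, &dv_attr);
}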

RDMA-Core Non-Inline UMR Verbs:

Warning

This feature is currently not supported by RDMA-Core.

For further information, please contact Support.
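
The following example registers an interleaved memory layout using the RDMA-Core UMR verbs listed above; context, pd, mr, and access_flags are assumed to be a previously opened device context, protection domain, registered memory region, and the desired mkey access flags.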

struct mlx5dv_qp_init_attr mlx5_qp_attr = {};
struct ibv_qp_init_attr_ex init_attr_ex = {};
struct ibv_qp *qp;
struct ibv_qp_ex *qpx;
struct mlx5dv_qp_ex *dv_qp;
struct mlx5dv_mkey *dv_mkey;
struct mlx5dv_mkey_init_attr mkey_init_attr = {};
struct mlx5dv_mr_interleaved *array_interleaved;
int num_interleaved = 1;
int repeat_count = 2;
int message_size = 4096;
int skip_bytes_interleaved = 20;
int ret;

/* context, pd, mr, and access_flags are assumed to be set up beforehand.
 * The remaining QP attributes (QP type, CQs, PD, and the corresponding
 * IBV_QP_INIT_ATTR_* comp_mask bits) are omitted here for brevity. */

/* Create a QP whose send queue supports interleaved-MR registration WRs */
mlx5_qp_attr.comp_mask = MLX5DV_QP_INIT_ATTR_MASK_SEND_OPS_FLAGS;
mlx5_qp_attr.send_ops_flags = MLX5DV_QP_EX_WITH_MR_INTERLEAVED;
init_attr_ex.cap.max_inline_data = 128;
init_attr_ex.send_ops_flags = IBV_QP_EX_WITH_SEND;

qp = mlx5dv_create_qp(context, &init_attr_ex, &mlx5_qp_attr);
qpx = ibv_qp_to_qp_ex(qp);
dv_qp = mlx5dv_qp_ex_from_ibv_qp_ex(qpx);

/* Allocate an indirect mkey to hold the interleaved layout */
mkey_init_attr.create_flags = MLX5DV_MKEY_INIT_ATTR_FLAGS_INDIRECT;
mkey_init_attr.max_entries = 4;
mkey_init_attr.pd = pd;
dv_mkey = mlx5dv_create_mkey(&mkey_init_attr);

/* Describe the interleaved layout: bytes_count valid bytes followed by
 * bytes_skip skipped bytes, repeated repeat_count times */
array_interleaved = calloc(num_interleaved,
                           sizeof(struct mlx5dv_mr_interleaved));
array_interleaved[0].addr = (uintptr_t)mr->addr;
array_interleaved[0].bytes_count = (message_size - skip_bytes_interleaved) / 2;
array_interleaved[0].bytes_skip = skip_bytes_interleaved / 2;
array_interleaved[0].lkey = mr->lkey;

/* Post the UMR registration work request */
qpx->wr_flags = IBV_SEND_INLINE | IBV_SEND_SIGNALED;
ibv_wr_start(qpx);
mlx5dv_wr_mr_interleaved(dv_qp, dv_mkey, access_flags, repeat_count,
                         num_interleaved, array_interleaved);
ret = ibv_wr_complete(qpx);
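
A similar sketch, not taken from this documentation, for the list variant: it assumes the same pd, mr, qpx, dv_qp, access_flags, and ret as in the example above, and a QP whose send_ops_flags also include MLX5DV_QP_EX_WITH_MR_LIST and IBV_QP_EX_WITH_LOCAL_INV. mlx5dv_wr_mr_list registers a gather list of regions behind a single indirect mkey, and a local-invalidate work request is used here as the counterpart of the deprecated IBV_EXP_WR_UMR_INVALIDATE opcode.

struct mlx5dv_mkey *list_mkey;
struct mlx5dv_mkey_init_attr list_mkey_attr = {};
struct ibv_sge sge_list[2];

/* Indirect mkey that will point at the gather list below */
list_mkey_attr.pd = pd;
list_mkey_attr.max_entries = 2;
list_mkey_attr.create_flags = MLX5DV_MKEY_INIT_ATTR_FLAGS_INDIRECT;
list_mkey = mlx5dv_create_mkey(&list_mkey_attr);

/* Two halves of the same MR, exposed back to back through the new mkey
 * (addresses and lengths are illustrative) */
sge_list[0].addr   = (uintptr_t)mr->addr;
sge_list[0].length = 2048;
sge_list[0].lkey   = mr->lkey;
sge_list[1].addr   = (uintptr_t)mr->addr + 2048;
sge_list[1].length = 2048;
sge_list[1].lkey   = mr->lkey;

/* Post the list-registration work request */
qpx->wr_flags = IBV_SEND_INLINE | IBV_SEND_SIGNALED;
ibv_wr_start(qpx);
mlx5dv_wr_mr_list(dv_qp, list_mkey, access_flags, 2, sge_list);
ret = ibv_wr_complete(qpx);

/* Invalidate the mkey once it is no longer needed; this requires
 * IBV_QP_EX_WITH_LOCAL_INV in ibv_qp_init_attr_ex.send_ops_flags */
qpx->wr_flags = IBV_SEND_SIGNALED;
ibv_wr_start(qpx);
ibv_wr_local_inv(qpx, list_mkey->rkey);
ret = ibv_wr_complete(qpx);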
