NVIDIA BlueField-2 SNAP for NVMe and Virtio-blk v3.8.0

Appendix – JSON File Format

This section is relevant only for the following cases:

  • Using legacy mode, in which the user prefers not to use the recommended SNAP commands and instead uses the JSON file format

  • NVMe-RDMA full offload mode, in which configuration is possible only via the JSON file format

The configuration parameters are divided into two categories: controller parameters and backend parameters.

Legacy Mode

Note

For non-full offload mode, it is recommended to use the SNAP RPC commands (described in SNAP Commands v3.5.0) rather than the legacy JSON file format described in this section.


{ "ctrl": { "func_num": 0, "rdma_device": "mlx5_2", "sqes": 0x6, "cqes": 0x4, "cq_period": 3, "cq_max_count": 6, "nr_io_queues": 32, "mn": "Mellanox BlueField NVMe SNAP Controller", "sn": "MNC12", "mdts": 4, "oncs": 0, "offload": false, "max_namespaces": 1024, "quirks": 0x0, "version": "1.3.0" }, "backends": [ { "type": "spdk_bdev", "paths": [ { } ] } ] }


NVMe-RDMA Full Offload Mode

Note

For NVMe-RDMA full offload mode, users can only use the JSON file format (and not the SNAP RPC commands).


{ "ctrl": { "func_num": 0, "rdma_device": "mlx5_2", "sqes": 0x6, "cqes": 0x4, "cq_period": 3, "cq_max_count": 6, "nr_io_queues": 32, "mn": "Mellanox BlueField NVMe SNAP Controller", "sn": "MNC12", "mdts": 4, "oncs": 0, "offload": true, "max_namespaces": 1024, "quirks": 0x0 "version": "1.3.0" }, "backends": [ { "type": "nvmf_rdma", "name": "testsubsystem", "paths": [ { "addr": "1.1.1.1", "port": 4420, "ka_timeout_ms": 15000, "hostnqn": "r-nvmx03" } ] } ] }


Controller Parameters

The following parameters are set in the SNAP JSON configuration file. The default file is located at /etc/mlnx_snap/mlnx_snap.json.

Parameter: rpc_server
Description: RPC server socket used to pass RPC commands through. Relevant only when using vendor-specific RPC commands from the host.
Legal values: Any
Default: ""

Parameter: offload
Description: Enables full-offload mode.
Legal values: true/false
Default: false

Parameter: nr_io_queues
Description: Maximum number of I/O queues. Note that the actual number of queues is also limited by the number of queues supported by the firmware.
Legal values: ≥0
Default: 32

Parameter: mn
Description: Model number.
Legal values: String (up to 40 characters)
Default: "MLX NVMe Ctrl"

Parameter: sn
Description: Serial number.
Legal values: String (up to 20 characters)
Default: "MNC12"

Parameter: nn
Description: Number of namespaces (NN). Indicates the maximum value of a valid NSID for the NVM subsystem. If the mnan field is cleared to 0h, this field also indicates the maximum number of namespaces supported by the NVM subsystem.
Legal values: 0-0xFFFFFFFE
Default: 0xFFFFFFFE

Parameter: mnan
Description: Maximum number of allowed namespaces (MNAN) supported by the NVM subsystem.
Legal values: 1-0xFFFFFFFE
Default: 1024

Parameter: mdts
Description: Maximum data transfer size. The value is in units of the minimum memory page size (CAP.MPSMIN) and is reported as a power of two (2^n). For example, with a 4 KiB minimum page size, mdts=4 allows transfers of up to 4 KiB × 2^4 = 64 KiB. A value of 0h indicates that there is no maximum data transfer size.
Legal values: 1-6
Default: 4

Parameter: quirks
Description: Bitmask enabling specific NVMe driver quirks to work with drivers that are not NVMe-spec compliant (see the example after this table):

  • Bit 0 – send namespace-change async events even if the driver does not explicitly request them via the SET_FTRS command. Enable this if the NVMe driver can handle namespace changes but does not use SET_FTRS; the CentOS 7.5 inbox driver behaves this way.

  • Bit 1 – send new namespace-change events even if previous ones have not yet been cleared by the driver. The CentOS 7.5 inbox driver requires this.

  • Bit 2 – force Number of Namespaces (NN) in Identify Controller to dynamically track and indicate both the maximum value of a valid NSID and the maximum number of namespaces supported by the controller. There is no limitation on namespace NSIDs at the controller level.

  • Bit 3 – force OACS to advertise the namespace management capability. The VMware driver requires this bit to be set.

Legal values: 0x0-0xF
Default: 0x0

Parameter: max_namespaces
Description: Limits the number of available namespaces.
Legal values: Any
Default: 1024
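
As a worked example of composing the quirks bitmask: per the notes above, the CentOS 7.5 inbox driver needs both bit 0 (0x1) and bit 1 (0x2), giving quirks = 0x1 | 0x2 = 0x3. The minimal sketch below is illustrative only; the remaining fields are copied from the legacy example above and should be adjusted to your own device and backend.

{
  "ctrl": {
    "func_num": 0,
    "rdma_device": "mlx5_2",
    "offload": false,
    "quirks": 0x3
  },
  "backends": [
    {
      "type": "spdk_bdev",
      "paths": [ {} ]
    }
  ]
}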


Backend Parameters

These parameters define the backend server.

Note

Even though a list of backends can be configured, currently only a single backend is supported.

Parameter: type
Description: Backend type:

  • "memdisk" – RAM-based local storage

  • "nvmf_rdma" – NVMe-oF over RDMA remote storage

  • "posix_io" – file-based storage

  • "spdk_bdev" – SPDK block devices

Legal values: nvmf_rdma, spdk_bdev
Default: "spdk_bdev"

Parameter: name
Description: Depends on the backend type:

  • "nvmf_rdma" – remote subsystem name

  • "memdisk"/"spdk_bdev" – unused

  • "posix_io" – backend filename

Legal values: Any
Default: Null

Parameter: size_mb
Description: Desired size (in MB) of the opened backend. Relevant only for the memdisk/posix_io backends.
Legal values: Any
Default: Unused

Parameter: block_order
Description: Desired block size of the opened backend, expressed as log2 of the block size in bytes. Relevant only for the memdisk/posix_io backends.
Legal values: 9 (512 B blocks), 12 (4 KiB blocks)
Default: Unused
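
To illustrate size_mb and block_order, the following is a minimal, unverified sketch of a RAM-backed memdisk backend of 1024 MB with 512-byte blocks (block_order 9). The empty paths entry mirrors the spdk_bdev example above and is an assumption for this backend type; verify that your build enables "memdisk" before relying on it.

{
  "backends": [
    {
      "type": "memdisk",
      "size_mb": 1024,
      "block_order": 9,
      "paths": [ {} ]
    }
  ]
}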

Path Section

This section is relevant only if backend type is set to "nvmf_rdma". For each backend, a list of paths can be specified using the following parameters:

Parameter: addr
Description: Target IPv4 address.
Legal values: String in x.x.x.x format
Default: "192.168.101.2"

Parameter: port
Description: Target port number.
Legal values: 1024-65534
Default: 4420

Parameter: ka_timeout_ms
Description: Keep-alive timeout in milliseconds.
Legal values: >0
Default: 15000

Parameter: hostnqn
Description: Host NQN.
Legal values: String up to 223 characters long
Default: "nqn.2014-08.org.nvmexpress:uuid:11111111-2222-3333-4444-555555555555"

© Copyright 2024, NVIDIA. Last updated on May 21, 2024.