NVIDIA Tegra
NVIDIA DRIVE OS 5.1 Linux SDK

Developer Guide
5.1.15.0 Release


 
Inter-Process Communication
 
Terminology
NvSciIpc Configuration Data
Adding a New Channel
NvSciIpc API Usage
The NvSciIpc library provides an API for any two (2) entities in a system to communicate with each other, irrespective of where they are placed. Entities can be:
in different threads in the same process.
in the same process.
in different processes in the same VM.
in different VMs on the same SoC.
or in different SoCs.
Each of these boundaries is abstracted by a library that provides a unified communication (read/write) API to entities.
Terminology
Channel: An NvSciIpc channel connection allows bidirectional exchange of fixed-length messages between exactly two NvSciIpc endpoints.
Endpoint: A software entity that uses NvSciIpc channel to communicate with another software entity.
Frame: A message that consists of a sequence of bytes that is sent along an NvSciIpc channel from one of the two NvSciIpc endpoints of the NvSciIpc channel to the other NvSciIpc endpoint.
Frame Size: The size, in bytes, of every message exchanged along an NvSciIpc channel. Each NvSciIpc channel may have a distinct frame size.
Frame Count: The maximum number of NvSciIpc frames that may simultaneously be queued for transfer in each direction along an NvSciIpc channel.
Backend: An NvSciIpc backend implements NvSciIpc functionality for a particular class of NvSciIpc channels. For example, there are different classes of NvSciIpc channels depending on the maximum level of separation between NvSciIpc endpoints that the channel supports:
INTER_THREAD: Handles communication between entities that may be in different threads in the same process.
INTER_PROCESS: Handles communication between entities that may be in different processes in the same VM.
INTER_VM: Handles communication between entities that may be in different VMs in the same SoC.
INTER_CHIP: Handles communication between entities that may be in different SoCs.
Endpoint Handle: An abstract data type that is passed to all NvSciIpc channel communication APIs.
Channel Reset: Defines the abrupt end of communication by one of the NvSciIpc endpoints. In case of reset, no communication is allowed over the NvSciIpc channel until both endpoints reset their internal state and are ready for communication. NvSciIpc relies on its backend to implement the channel reset mechanism. It may require cooperation from both endpoints, where each endpoint has to wait for confirmation that the other has reset its local state.
Notification: An asynchronous signal along an NvSciIpc channel through which one of the two endpoints of the NvSciIpc channel indicates to the other NvSciIpc endpoint that there may be an event for the latter NvSciIpc endpoint to process.
NvSciIpc Configuration Data
NvSciIpc configuration data is used to define all the channels present for a given Guest VM. Currently, these details are provided via the device tree (DT), where each entry describes a single NvSciIpc channel in the system.
Each channel entry is added to the DT property in string list form.
For the INTER_THREAD and INTER_PROCESS backends, the format is:
<Backend-Name>, <Endpoint-Name>, <Endpoint-Name>, <Backend-Specific-Data>,
For the INTER_VM and INTER_CHIP backends, the format is:
<Backend-Name>, <Endpoint-Name>, <Backend-Specific-Data>,
<Endpoint-Name> is a unique string that is used to tag/identify a single NvSciIpc endpoint in a system. Ideally, it should describe the purpose for which the NvSciIpc endpoint is created.
For INTER_THREAD and INTER_PROCESS backends, two endpoint names must be defined.
<Backend-Name> must be one of the following:
INTER_THREAD
INTER_PROCESS
INTER_VM
INTER_CHIP
<Backend-Specific-Data> may span multiple fields.
The INTER_THREAD and INTER_PROCESS backends contain two (2) integer fields that describe the <No-of-Frames, Frame-Size> tuple. <Frame-Size> must be a multiple of 64 bytes.
INTER_VM contains a single integer field that denotes the IVC queue ID.
INTER_CHIP contains a single integer field that denotes the inter-chip device number.
Note:
For Linux VMs, the configuration data above is passed as a plain text file in the root file system. It is located at: /etc/nvsciipc.cfg.
Example NvSciIpc DT node
/ {
    chosen {
        nvsciipc {
            /* NvSciIpc configuration format : string array
             *
             * INTER_THREAD/PROCESS backend case:
             * "BACKEND_TYPE" "ENDPOINT1_NAME" "ENDPOINT2_NAME" "BACKEND_INFO1" "BACKEND_INFO2",
             *
             * INTER_VM/CHIP backend case:
             * "BACKEND_TYPE" "ENDPOINT_NAME" "BACKEND_INFO1" "BACKEND_INFO2",
             *
             * BACKEND_TYPE : INTER_THREAD, INTER_PROCESS, INTER_VM, INTER_CHIP
             * For INTER_THREAD and INTER_PROCESS, the two endpoint names must be
             * different; for example, use the same base name with different suffixes.
             */
            compatible = "nvsciipc,channel-db";
            /* The IPC channels below are defined only for testing and debugging purposes.
             * They SHOULD be removed in production:
             * itc_test, ipc_test, ipc_test_a, ipc_test_b, ipc_test_c, ivc_test
             */
            nvsciipc,channel-db =
                "INTER_THREAD", "itc_test_0", "itc_test_1", "64", "1536",       /* itc_test */
                "INTER_PROCESS", "ipc_test_0", "ipc_test_1", "64", "1536",      /* ipc_test */
                "INTER_PROCESS", "ipc_test_a_0", "ipc_test_a_1", "64", "1536",  /* ipc_test_a */
                "INTER_PROCESS", "ipc_test_b_0", "ipc_test_b_1", "64", "1536",  /* ipc_test_b */
                "INTER_PROCESS", "ipc_test_c_0", "ipc_test_c_1", "64", "1536",  /* ipc_test_c */
                "INTER_VM", "ivc_test", "255", "0";                             /* ivc_test */
            status = "okay";
        };
    };
};
Example NvSciIpc config file format
# <Backend_name> <Endpoint-name1> <Endpoint-name2> <backend-specific-info>
INTER_PROCESS ipc_test_0 ipc_test_1 64 1536
INTER_PROCESS ipc_test_a_0 ipc_test_a_1 64 1536
INTER_PROCESS ipc_test_b_0 ipc_test_b_1 64 1536
INTER_PROCESS ipc_test_c_0 ipc_test_c_1 64 1536
INTER_THREAD itc_test_0 itc_test_1 64 1536
INTER_VM ivm_test 255
INTER_VM loopback_tx 256
INTER_VM loopback_rx 257
Adding a New Channel
This section describes how to add new channels.
Adding a New INTER_THREAD and INTER_PROCESS Channel
INTER_THREAD and INTER_PROCESS channels are implemented using POSIX shared memory, with POSIX message queues (mqueue) for notifications. You must add a new line to the /etc/nvsciipc.cfg file describing the new channel.
There is no need to reboot the Linux VM. Restart the NvSciIpc application once /etc/nvsciipc.cfg is updated.
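For example, to add a hypothetical INTER_PROCESS channel whose endpoints are named my_app_0 and my_app_1 (placeholder names), with 64 frames of 1536 bytes each, append a line such as the following to /etc/nvsciipc.cfg:
INTER_PROCESS my_app_0 my_app_1 64 1536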
Adding a New INTER_VM Channel
An INTER_VM channel relies on the Hypervisor to set up the shared memory area between the two (2) VMs. At present, this is done via IVC queues that are described in the PCT. For any new INTER_VM channel:
1. Add a new IVC queue between the two (2) VMs to the PCT file (platform_config.h) of the corresponding platform.
2. Update the NvSciIpc configuration data (DT in QNX and cfg file in Linux) in both VMs. This requires adding a new entry that describes the channel information in both VMs.
If an INTER_VM channel is defined in the configuration data but has no corresponding entry in the PCT, that channel is ignored.
IVC queue table format of PCT
.ivc = {
    .queue = {
        ... skipped ...
        [queue id] = { .peers = {VM1_ID, VM2_ID}, .nframes = ##, .frame_size = ## },
        ... skipped ...
    },
    ... skipped ...
}
/* example */
[255] = { .peers = {GID_GUEST_VM, GID_UPDATE}, .nframes = 64, .frame_size = 1536 },
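The queue ID used in the PCT must match the backend-specific integer field of the corresponding INTER_VM entry in each VM's NvSciIpc configuration data. For example, assuming queue ID 255 as above, each Linux VM's /etc/nvsciipc.cfg would contain an entry such as the following (the endpoint name is a placeholder):
INTER_VM ivm_test 255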
NvSciIpc API Usage
Each application must call NvSciIpcInit() before using any other NvSciIpc API. This initializes the NvSciIpc library instance for that application.
Note:
NvSciIpcInit() must be called by the application only once, at startup.
Initializing the NvSciIpc Library
NvSciError err;
err = NvSciIpcInit();
if (err != NvSciError_Success) {
    return err;
}
Prepare an NvSciIpc Endpoint for read/write
To enable read/write on an endpoint, the following steps must be completed.
1. The application must open the endpoint.
2. Get the endpoint information, such as the number of frames and the size of each frame. This is important because only a single frame can be read or written at a time.
3. Get the FD associated with the endpoint. This is required to handle event notifications.
4. Reset the endpoint. This is important as it ensures that the endpoint is not reading/writing any stale data (for example, from the previous start or instance).
Prepare an NvSciIpc Endpoint for read/write
NvSciIpcEndpoint ipcEndpoint;
struct NvSciIpcEndpointInfo info;
int32_t fd;
NvSciError err;
err = NvSciIpcOpenEndpoint("ipc_endpoint", &ipcEndpoint);
if (err != NvSciError_Success) {
    goto fail;
}
err = NvSciIpcGetLinuxEventFd(ipcEndpoint, &fd);
if (err != NvSciError_Success) {
    goto fail;
}
err = NvSciIpcGetEndpointInfo(ipcEndpoint, &info);
if (err != NvSciError_Success) {
    goto fail;
}
printf("Endpoint info: nframes = %d, frame_size = %d\n", info.nframes, info.frame_size);
NvSciIpcResetEndpoint(ipcEndpoint);
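After the reset, both endpoints must complete their reset handshake before frames can be transferred. The following is a minimal sketch of one way to wait for readiness; it reuses the variables from the example above and only the event flags used in the read/write examples below. The exact events reported by NvSciIpcGetEvent during connection establishment depend on the backend.
fd_set rfds;
uint32_t event = 0;
int retval;
/* Poll until the endpoint reports that it is readable or writable, which
 * indicates that the connection has been (re-)established; sleep on the
 * endpoint FD between polls.
 */
while (1) {
    err = NvSciIpcGetEvent(ipcEndpoint, &event);
    if (err != NvSciError_Success) {
        goto fail;
    }
    if (event & (NV_SCI_IPC_EVENT_READ | NV_SCI_IPC_EVENT_WRITE)) {
        break;
    }
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    retval = select(fd + 1, &rfds, NULL, NULL, NULL);
    if ((retval < 0) && (errno != EINTR)) {
        goto fail;
    }
}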
Writing to the NvSciIpc Endpoint
The following example shows how to write to the NvSciIpc endpoint.
Write to NvSciIpc channel
NvSciIpcEndpoint ipcEndpoint;
struct NvSciIpcEndpointInfo info;
int32_t fd;
fd_set rfds;
uint32_t event = 0;
void *buf;
int32_t buf_size, bytes;
int retval;
NvSciError err;
/* ipcEndpoint, fd, and info are assumed to have been initialized as shown
 * in "Prepare an NvSciIpc Endpoint for read/write" above.
 */
buf = malloc(info.frame_size);
if (buf == NULL) {
    goto fail;
}
while (1) {
    err = NvSciIpcGetEvent(ipcEndpoint, &event);
    if (err != NvSciError_Success) {
        goto fail;
    }
    if (event & NV_SCI_IPC_EVENT_WRITE) {
        /* Assuming buf points to the data to be written and buf_size
         * contains the size of that data. buf_size must not exceed the
         * endpoint frame size.
         */
        err = NvSciIpcWrite(ipcEndpoint, buf, buf_size, &bytes);
        if (err != NvSciError_Success) {
            printf("error in writing endpoint\n");
            goto fail;
        }
    } else {
        /* Channel is not writable yet: wait for a notification on the
         * endpoint FD before polling again.
         */
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        retval = select(fd + 1, &rfds, NULL, NULL, NULL);
        if ((retval < 0) && (errno != EINTR)) {
            exit(-1);
        }
    }
}
Reading from the NvSciIpc Endpoint
The following example shows how to read from an NvSciIpc endpoint.
Read from NvSciIpc channel
NvSciIpcEndpoint ipcEndpoint;
struct NvSciIpcEndpointInfo info;
int32_t fd;
fd_set rfds;
uint32_t event = 0;
void *buf;
int32_t buf_size, bytes;
int retval;
NvSciError err;
/* ipcEndpoint, fd, and info are assumed to have been initialized as shown
 * in "Prepare an NvSciIpc Endpoint for read/write" above.
 */
buf = malloc(info.frame_size);
if (buf == NULL) {
    goto fail;
}
while (1) {
    err = NvSciIpcGetEvent(ipcEndpoint, &event);
    if (err != NvSciError_Success) {
        goto fail;
    }
    if (event & NV_SCI_IPC_EVENT_READ) {
        /* Assuming buf points to the area into which the frame is read and
         * buf_size contains the size of that area. buf_size must not exceed
         * the endpoint frame size.
         */
        err = NvSciIpcRead(ipcEndpoint, buf, buf_size, &bytes);
        if (err != NvSciError_Success) {
            printf("error in reading endpoint\n");
            goto fail;
        }
    } else {
        /* No frame available yet: wait for a notification on the endpoint
         * FD before polling again.
         */
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        retval = select(fd + 1, &rfds, NULL, NULL, NULL);
        if ((retval < 0) && (errno != EINTR)) {
            exit(-1);
        }
    }
}
Cleaning Up an NvSciIpc Endpoint
Once read/write operations are complete, you must release the resources that were allocated for the NvSciIpc endpoint.
Clean up the Endpoint
NvSciIpcEndpoint ipcEndpoint;
 
NvSciIpcCloseEndpoint(ipcEndpoint);
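If the application also allocated a frame buffer with malloc(), as in the read/write examples above, that buffer should be freed as well:
free(buf); /* buf as allocated in the read/write examples */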
De-Initialize NvSciIpc Library
De-Initialize NvSciIpc Library
NvSciIpcDeInit();