Inter-Process Communication

The NvSciIpc library provides an API for any two (2) entities in a system to communicate with each other irrespective of where they are placed. Entities can be:
in different threads in the same process.
in different processes in the same VM.
in different VMs on the same SoC.
or in different SoCs.
Each of these boundaries is abstracted by a library that provides a unified communication (read/write) API to entities.

Terminology

Channel: An NvSciIpc channel connection allows bidirectional exchange of fixed-length messages between exactly two NvSciIpc endpoints.
Endpoint: A software entity that uses an NvSciIpc channel to communicate with another software entity.
Frame: A message that consists of a sequence of bytes that is sent along an NvSciIpc channel from one of its two NvSciIpc endpoints to the other.
Frame Size: The size, in bytes, of every message exchanged along an NvSciIpc channel. Each NvSciIpc channel may have a distinct frame size.
Frame Count: The maximum number of NvSciIpc frames that may simultaneously be queued for transfer in each direction along an NvSciIpc channel.
Backend: An NvSciIpc backend implements NvSciIpc functionality for a particular class of NvSciIpc channels. There are four different classes of NvSciIpc channels, depending on the maximum level of separation between NvSciIpc endpoints that is supported by the NvSciIpc channel.
INTER_THREAD: Handles communication between entities that may be in different threads in the same process.
INTER_PROCESS: Handles communication between entities that may be in different processes in the same VM.
INTER_VM: Handles communication between entities that may be in different VMs on the same SoC.
INTER_CHIP: Handles communication between entities that may be in different SoCs.
Endpoint Handle: An abstract data type that is passed to all NvSciIpc channel communication APIs.
Channel Reset: Defines the abrupt end of communication by one of the NvSciIpc endpoints. In case of reset, no communication is allowed over the NvSciIpc channel until both endpoints reset their internal state and are ready for communication. NvSciIpc relies on its backend to implement the channel reset mechanism. It may require cooperation from both endpoints, where each endpoint waits for confirmation that the other has reset its local state.
Notification: An asynchronous signal along an NvSciIpc channel through which one of the two endpoints of the NvSciIpc channel indicates to the other NvSciIpc endpoint that there may be an event for the latter NvSciIpc endpoint to process.

NvSciIpc Configuration Data

NvSciIpc configuration data defines all the channels present for a given Guest VM. Currently, these details are provided via the device tree (DT), where each entry contains details about a single NvSciIpc channel in the system.
Each channel entry is added to the DT property in string list form.
For the INTER_THREAD and INTER_PROCESS backends, the format is:
<Backend-Name>, <Endpoint-Name>, <Endpoint-Name>, <Backend-Specific-Data>,
For the INTER_VM and INTER_CHIP backends, the format is:
<Backend-Name>, <Endpoint-Name>, <Backend-Specific-Data>,
<Endpoint-Name> is a unique string that is used to tag/identify a single NvSciIpc endpoint in a system. Ideally, it should describe the purpose for which the NvSciIpc endpoint is created.
For INTER_THREAD and INTER_PROCESS backends, two endpoint names must be defined.
<Backend-Name> must be one of the following:
INTER_THREAD
INTER_PROCESS
INTER_VM
INTER_CHIP
<Backend-Specific-Data> may span multiple fields.
The INTER_THREAD and INTER_PROCESS backends contain two (2) integer fields that describe the <No-of-Frames Frame-Size> tuple. <Frame-Size> must be a multiple of 64 bytes.
INTER_VM contains a single integer field that denotes the IVC queue ID.
INTER_CHIP contains a single integer field that denotes the inter-chip device number.
Note:
For Linux VMs, the configuration data above is passed as a plain text file in the root file system. It is located at: /etc/nvsciipc.cfg.
Example NvSciIpc DT node
/ {
    chosen {
        nvsciipc {
            /* NvSciIpc configuration format : string array
             *
             * INTER_THREAD/PROCESS backend case:
             * "BACKEND_TYPE" "ENDPOINT1_NAME" "ENDPOINT2_NAME" "BACKEND_INFO1" "BACKEND_INFO2",
             *
             * INTER_VM/CHIP backend case:
             * "BACKEND_TYPE" "ENDPOINT_NAME" "BACKEND_INFO1" "BACKEND_INFO2",
             *
             * BACKEND_TYPE : INTER_THREAD, INTER_PROCESS, INTER_VM, INTER_CHIP
             * For INTER_THREAD and INTER_PROCESS, the two endpoint names must be
             * different; you can use different suffixes with the same base name.
             */
            compatible = "nvsciipc,channel-db";
            /* The IPC channels below are defined only for testing and debugging
             * purposes. They SHOULD be removed in production.
             * itc_test, ipc_test, ipc_test_a, ipc_test_b, ipc_test_c
             * ivc_test
             */
            nvsciipc,channel-db =
                "INTER_THREAD", "itc_test_0", "itc_test_1", "64", "1536",      /* itc_test */
                "INTER_PROCESS", "ipc_test_0", "ipc_test_1", "64", "1536",     /* ipc_test */
                "INTER_PROCESS", "ipc_test_a_0", "ipc_test_a_1", "64", "1536", /* ipc_test_a */
                "INTER_PROCESS", "ipc_test_b_0", "ipc_test_b_1", "64", "1536", /* ipc_test_b */
                "INTER_PROCESS", "ipc_test_c_0", "ipc_test_c_1", "64", "1536", /* ipc_test_c */
                "INTER_VM", "ivc_test", "255", "0";                            /* ivc_test */
            status = "okay";
        };
    };
};
Example NvSciIpc config file format
# <Backend_name> <Endpoint-name1> <Endpoint-name2> <backend-specific-info>
INTER_PROCESS ipc_test_0 ipc_test_1 64 1536
INTER_PROCESS ipc_test_a_0 ipc_test_a_1 64 1536
INTER_PROCESS ipc_test_b_0 ipc_test_b_1 64 1536
INTER_PROCESS ipc_test_c_0 ipc_test_c_1 64 1536
INTER_THREAD itc_test_0 itc_test_1 64 1536
INTER_VM ivm_test 255
INTER_VM loopback_tx 256
INTER_VM loopback_rx 257

Adding a New Channel

Adding a New INTER_THREAD and INTER_PROCESS Channel

INTER_THREAD and INTER_PROCESS channels are implemented using POSIX shared memory, with POSIX message queues (mqueues) for notifications. You must add a new line to the /etc/nvsciipc.cfg file describing the new channel.
There is no need to reboot the Linux VM. Restart the NvSciIpc application once nvsciipc.cfg is updated.
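For example, a hypothetical new channel named ipc_newchan, with 64 frames of 1536 bytes each, would add the following line (the channel and endpoint names here are illustrative):
INTER_PROCESS ipc_newchan_0 ipc_newchan_1 64 1536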

Adding a New INTER_VM Channel

An INTER_VM channel relies on the Hypervisor to set up the shared memory area between two (2) VMs. At present, this is done via IVC queues that are described in the PCT. For any new INTER_VM channel:
1. Add a new IVC queue between the two (2) VMs to the PCT file (platform_config.h) of the corresponding platform. The VM partition IDs are defined in the server-partitions.mk makefile. The frame_size value is a multiple of 64 bytes. The maximum number of IVC queue entries is 512 (a limit imposed by the DRIVE OS Hypervisor kernel).
2. Update the NvSciIpc configuration data (DT in QNX and cfg file in Linux) in both VMs. This requires adding a new entry that describes the channel information in both VMs.
3. If an INTER_VM channel is defined in the configuration data but has no entry in the PCT, that channel is ignored.
Example: IVC queue table format of PCT
.ivc = {
    .queue = {
        ... skipped ...
        [queue id] = { .peers = {VM1_ID, VM2_ID}, .nframes = ##, .frame_size = ## },
        ... skipped ...
    },
    ... skipped ...
}
/* example */
[255] = { .peers = {GID_GUEST_VM, GID_UPDATE}, .nframes = 64, .frame_size = 1536 },
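The queue ID from the PCT is then referenced by the configuration data in each VM. For example, pairing with the example queue [255] above (the endpoint name ivm_test is illustrative):
# Linux cfg entry (one line in /etc/nvsciipc.cfg)
INTER_VM ivm_test 255
/* QNX DT entry (one element of the nvsciipc,channel-db string list) */
"INTER_VM", "ivm_test", "255", "0";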

NvSciIpc API Usage

Each application first has to call NvSciIpcInit() before using any of the other NvSciIpc APIs. This initializes the NvSciIpc library instance for that application.
Note:
NvSciIpcInit() must be called by the application only once at startup.
Initializing the NvSciIpc Library
NvSciError err;

err = NvSciIpcInit();
if (err != NvSciError_Success) {
    return err;
}

Prepare an NvSciIpc Endpoint for read/write

To enable read/write on an endpoint, the following steps must be completed.
1. The application must open the endpoint.
2. Get the endpoint information, such as the number of frames and the size of each frame. This is important because only a single frame can be read or written at a time.
3. Get the FD associated with the endpoint. This is required to handle event notifications.
4. Reset the endpoint. This is important as it ensures that the endpoint is not reading/writing any stale data (for example, from the previous start or instance).
Prepare an NvSciIpc Endpoint for read/write
NvSciIpcEndpoint ipcEndpoint;
struct NvSciIpcEndpointInfo info;
int32_t fd;
NvSciError err;

err = NvSciIpcOpenEndpoint("ipc_endpoint", &ipcEndpoint);
if (err != NvSciError_Success) {
    goto fail;
}
err = NvSciIpcGetLinuxEventFd(ipcEndpoint, &fd);
if (err != NvSciError_Success) {
    goto fail;
}
err = NvSciIpcGetEndpointInfo(ipcEndpoint, &info);
if (err != NvSciError_Success) {
    goto fail;
}
printf("Endpoint info: nframes = %d, frame_size = %d\n", info.nframes, info.frame_size);
NvSciIpcResetEndpoint(ipcEndpoint);
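The snippets in this document jump to a fail label that is not shown. A minimal sketch of such a cleanup path, assuming ipcEndpoint was zero-initialized so that a nonzero value means the endpoint was opened (an assumption of this sketch, not a documented guarantee):

fail:
    /* Close the endpoint only if it was opened (assumes zero-initialized handle). */
    if (ipcEndpoint != 0U) {
        NvSciIpcCloseEndpoint(ipcEndpoint);
    }
    NvSciIpcDeinit();
    return err;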

Writing to the NvSciIpc Endpoint

The following example shows how to write to an NvSciIpc endpoint.
Write to NvSciIpc channel
NvSciIpcEndpoint ipcEndpoint;
struct NvSciIpcEndpointInfo info;
int32_t fd;
fd_set rfds;
uint32_t event = 0;
void *buf;
int32_t buf_size, bytes;
int retval;
NvSciError err;

buf = malloc(info.frame_size);
if (buf == NULL) {
    goto fail;
}
while (1) {
    err = NvSciIpcGetEvent(ipcEndpoint, &event);
    if (err != NvSciError_Success) {
        goto fail;
    }
    if (event & NV_SCI_IPC_EVENT_WRITE) {
        /* Assuming buf contains the pointer to the data to be written and
         * buf_size contains the size of the data. buf_size must not exceed
         * the endpoint frame size.
         */
        err = NvSciIpcWrite(ipcEndpoint, buf, buf_size, &bytes);
        if (err != NvSciError_Success) {
            printf("error in writing endpoint\n");
            goto fail;
        }
    } else {
        /* select() modifies the fd_set, so re-arm it before each wait. */
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        retval = select(fd + 1, &rfds, NULL, NULL, NULL);
        if ((retval < 0) && (errno != EINTR)) {
            exit(-1);
        }
    }
}

Reading from the NvSciIpc Endpoint

Read from NvSciIpc channel
NvSciIpcEndpoint ipcEndpoint;
struct NvSciIpcEndpointInfo info;
int32_t fd;
fd_set rfds;
uint32_t event = 0;
void *buf;
int32_t buf_size, bytes;
int retval;
NvSciError err;

buf = malloc(info.frame_size);
if (buf == NULL) {
    goto fail;
}
buf_size = info.frame_size;
while (1) {
    err = NvSciIpcGetEvent(ipcEndpoint, &event);
    if (err != NvSciError_Success) {
        goto fail;
    }
    if (event & NV_SCI_IPC_EVENT_READ) {
        /* buf points to the area where the frame is read. buf_size is the
         * number of bytes to read; it must not exceed the endpoint frame size.
         */
        err = NvSciIpcRead(ipcEndpoint, buf, buf_size, &bytes);
        if (err != NvSciError_Success) {
            printf("error in reading endpoint\n");
            goto fail;
        }
    } else {
        /* select() modifies the fd_set, so re-arm it before each wait. */
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        retval = select(fd + 1, &rfds, NULL, NULL, NULL);
        if ((retval < 0) && (errno != EINTR)) {
            exit(-1);
        }
    }
}

Cleaning-up an NvSciIpc Endpoint

Once read/write is completed, you must free and clean up the resources that were allocated by the NvSciIpc endpoints.
Clean up the Endpoint
NvSciIpcEndpoint ipcEndpoint;
 
NvSciIpcCloseEndpoint(ipcEndpoint);

De-Initialize NvSciIpc Library

De-Initialize NvSciIpc Library
NvSciIpcDeinit();

NvSciEventService API Usage

NvSciEventService is an event-driven framework that provides OS-agnostic APIs to send events and wait for events. The framework enables you to build portable event-driven applications and simplifies the steps required to prepare endpoint connections.
Initializing the NvSciEventService Library
Each application must call NvSciEventLoopServiceCreate() before using any of the other NvSciEventService and NvSciIpc APIs. This call initializes the NvSciEventService library instance for the application.
Note:
NvSciEventLoopServiceCreate() must be called by the application only once at startup. Only a single loop service is currently supported.
 
NvSciEventLoopService *eventLoopService;
NvSciError err;

err = NvSciEventLoopServiceCreate(1, &eventLoopService);
if (err != NvSciError_Success) {
    goto fail;
}
err = NvSciIpcInit();
if (err != NvSciError_Success) {
    return err;
}

Preparing an Endpoint with NvSciEventService for Read/Write

To enable read/write on an endpoint, the following steps must be completed.
1. Open the endpoint with an event service that is previously instantiated.
2. Get the event notifier associated with the endpoint that was created in Step 1.
3. Get the endpoint information, such as the number of frames and the size of each frame. This is important because only a single frame can be read or written at a time.
4. Reset the endpoint. This is important as it ensures that the endpoint is not reading/writing any stale data (for example, from the previous start or instance).
Note:
The event associated with an endpoint is called a native event. It is created internally when NvSciIpcGetEventNotifier() is called and is not visible to the application.
Prepare an endpoint and get an event notifier
 
NvSciEventLoopService *eventLoopService;
NvSciIpcEndpoint ipcEndpoint;
NvSciEventNotifier *eventNotifier;
struct NvSciIpcEndpointInfo info;
NvSciError err;

err = NvSciIpcOpenEndpointWithEventService("ipc_endpoint", &ipcEndpoint,
                                           &eventLoopService->EventService);
if (err != NvSciError_Success) {
    goto fail;
}
err = NvSciIpcGetEventNotifier(ipcEndpoint, &eventNotifier);
if (err != NvSciError_Success) {
    goto fail;
}
err = NvSciIpcGetEndpointInfo(ipcEndpoint, &info);
if (err != NvSciError_Success) {
    goto fail;
}
printf("Endpoint info: nframes = %d, frame_size = %d\n", info.nframes, info.frame_size);
NvSciIpcResetEndpoint(ipcEndpoint);
 

Waiting for a Single Event for Read/Write

Before reading data, a connection must be established between the two endpoint processes. This can be done by calling NvSciIpcGetEvent().
The basic event-handling mechanism is already described in the Writing to the NvSciIpc Endpoint section; the only difference is calling WaitForEvent() instead of select().
Note:
WaitForEvent() is an event-blocking call. It must be called from a single thread only.
To process the write event in a loop, add the WRITE event check routine and NvSciIpcWrite(), as sketched after the example below.
Wait for a single event for read
 
NvSciEventLoopService *eventLoopService;
NvSciIpcEndpoint ipcEndpoint;
NvSciEventNotifier *eventNotifier;
struct NvSciIpcEndpointInfo info;
int64_t timeout;
uint32_t event = 0;
void *buf;
int32_t buf_size, bytes;
NvSciError err;

/* Assuming ipcEndpoint, eventNotifier, and info were prepared as shown above. */
timeout = NV_SCI_EVENT_INFINITE_WAIT;
buf = malloc(info.frame_size);
if (buf == NULL) {
    goto fail;
}
buf_size = info.frame_size;
while (1) {
    err = NvSciIpcGetEvent(ipcEndpoint, &event);
    if (err != NvSciError_Success) {
        goto fail;
    }
    if (event & NV_SCI_IPC_EVENT_READ) {
        /* buf points to the area where the frame is read. buf_size must not
         * exceed the endpoint frame size.
         */
        err = NvSciIpcRead(ipcEndpoint, buf, buf_size, &bytes);
        if (err != NvSciError_Success) {
            printf("error in reading endpoint\n");
            goto fail;
        }
    } else {
        err = eventLoopService->WaitForEvent(eventNotifier, timeout);
        if (err != NvSciError_Success) {
            printf("error in waiting event\n");
            goto fail;
        }
    }
}
 
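To process the write direction instead, only the event check and the transfer call change. A minimal sketch of the corresponding branch, using the same variables as in the example above:

if (event & NV_SCI_IPC_EVENT_WRITE) {
    /* buf and buf_size describe the data to send; buf_size must not
     * exceed the endpoint frame size.
     */
    err = NvSciIpcWrite(ipcEndpoint, buf, buf_size, &bytes);
    if (err != NvSciError_Success) {
        printf("error in writing endpoint\n");
        goto fail;
    }
}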

Waiting for Multiple Events for Read/Write

In this scenario, multiple endpoints are opened, and multiple event notifiers are created.
A connection must be established between the two endpoint processes before reading the data. This can be done by calling NvSciIpcGetEvent().
The event handling mechanism is similar to waiting for a single event, but it can wait for multiple events.
Check the newEventArray boolean array returned by WaitForMultipleEvents() to determine which event was notified.
Note:
WaitForMultipleEvents() is an event-blocking call. It must be called from a single thread only.
To process the write event in a loop, add the WRITE event check routine and NvSciIpcWrite(), as in the write sketch shown earlier.
Here is a list of supported events:
Multiple native events
Multiple local events
Multiple native and local events
Wait for multiple events for read/write
 
#define NUM_OF_EVENTNOTIFIER 2

NvSciEventLoopService *eventLoopService;
NvSciIpcEndpoint ipcEndpointArray[NUM_OF_EVENTNOTIFIER];
NvSciEventNotifier *eventNotifierArray[NUM_OF_EVENTNOTIFIER];
bool newEventArray[NUM_OF_EVENTNOTIFIER];
struct NvSciIpcEndpointInfo info; /* Assuming both endpoints have the same info */
int64_t timeout;
uint32_t event = 0;
void *buf;
int32_t buf_size, bytes, inx;
NvSciError err;
bool gotEvent;

timeout = NV_SCI_EVENT_INFINITE_WAIT;
buf = malloc(info.frame_size);
if (buf == NULL) {
    goto fail;
}
buf_size = info.frame_size;
for (inx = 0; inx < NUM_OF_EVENTNOTIFIER; inx++) {
    newEventArray[inx] = true;
}
while (1) {
    gotEvent = false;
    for (inx = 0; inx < NUM_OF_EVENTNOTIFIER; inx++) {
        if (newEventArray[inx]) {
            err = NvSciIpcGetEvent(ipcEndpointArray[inx], &event);
            if (err != NvSciError_Success) {
                goto fail;
            }
            if (event & NV_SCI_IPC_EVENT_READ) {
                /* buf points to the area where the frame is read. buf_size
                 * must not exceed the endpoint frame size.
                 */
                err = NvSciIpcRead(ipcEndpointArray[inx], buf, buf_size, &bytes);
                if (err != NvSciError_Success) {
                    printf("error in reading endpoint\n");
                    goto fail;
                }
                gotEvent = true;
            }
        }
    }
    if (gotEvent) {
        continue;
    }
    err = eventLoopService->WaitForMultipleEvents(eventNotifierArray,
                                                  NUM_OF_EVENTNOTIFIER, timeout, newEventArray);
    if (err != NvSciError_Success) {
        printf("error in waiting event\n");
        goto fail;
    }
}
 

Creating a Local Event

A local event does not require an associated endpoint. It uses two threads of a process: one thread, called the sender, sends a signal, while the other, called the receiver, waits for the signal. When creating a local event, use NvSciEventLoopServiceCreate() instead of NvSciIpcInit().
The application must first create a local event by calling EventService.CreateLocalEvent(). This call also creates an event notifier and associates it with the local event.
Create local event
 
NvSciEventLoopService *eventLoopService;
NvSciLocalEvent *localEvent;
NvSciError err;

err = eventLoopService->EventService.CreateLocalEvent(
    &eventLoopService->EventService,
    &localEvent);
if (err != NvSciError_Success) {
    goto fail;
}
 

Sending a Signal with a Local Event

A sender in the same or a different thread can send a signal to a receiver by calling Signal().
Send a signal with local event
 
NvSciLocalEvent *localEvent;
NvSciError err;

err = localEvent->Signal(localEvent);
if (err != NvSciError_Success) {
    goto fail;
}
 

Waiting for a Local Event

A receiver in the same or a different thread can be notified of a signal sent by the sender.
The receiver uses WaitForEvent() for a single signal, or WaitForMultipleEvents() for multiple signals or mixed events associated with an endpoint.
Note:
WaitForEvent() is an event-blocking call. It must be called from a single thread only.
Here is a list of supported events:
Single native event
Single local event
Wait for a local event
 
NvSciEventLoopService *eventLoopService;
NvSciLocalEvent *localEvent;
NvSciError err;
int64_t timeout;

timeout = NV_SCI_EVENT_INFINITE_WAIT;
while (1) {
    err = eventLoopService->WaitForEvent(localEvent->eventNotifier, timeout);
    if (err != NvSciError_Success) {
        printf("error in waiting event\n");
        goto fail;
    }
    /* Do something with the notified local event */
}
 

Cleaning Up Event Notifier and Local Event

Event notifiers for local events and for native events must be deleted when they are no longer used. A local event is deleted explicitly by the application, while a native event is deleted implicitly by its event notifier.
When deleting, the application must delete the event notifier before the local event.
Clean up an event notifier and a local event
 
NvSciEventNotifier *nativeEventNotifier; /* Assume this is the event notifier for a native event */
NvSciLocalEvent *localEvent;

nativeEventNotifier->Delete(nativeEventNotifier);
localEvent->eventNotifier->Delete(localEvent->eventNotifier);
localEvent->Delete(localEvent);
 

De-Initializing NvSciEventService Library

The application must call EventService.Delete() after de-initializing the NvSciIpc library.
De-initialize NvSciEventService library
 
NvSciIpcEndpoint ipcEndpoint;
NvSciEventLoopService *eventLoopService;
 
NvSciIpcCloseEndpoint(ipcEndpoint);
NvSciIpcDeinit();
 
eventLoopService->EventService.Delete(&eventLoopService->EventService);