NvSIPL CoE Camera Application Developer Guide#
The nvsipl_coe_camera application demonstrates how to use the NVIDIA Sensor Input Processing Library (SIPL) with Camera Over Ethernet (CoE) cameras. CoE cameras transmit image data over Ethernet networks, providing flexibility in camera placement and simplified wiring compared to traditional camera interfaces.
Key Features#
CoE Camera Support: Direct integration with CoE cameras via Ethernet.
Multiple Output Formats: Support for ICP (raw) and ISP0/ISP1/ISP2 (processed YUV) outputs.
Dynamic Buffer Management: Automatic buffer size calculation for various sensor configurations.
Multi-threaded Processing: A separate thread for each pipeline stage.
Auto Control Plug-in: Support for NITO-based image processing tuning.
System Components#
+-----------------+ +--------------+ +-----------------+
| CoE Camera |--->| Ethernet |--->| Jetson Thor |
| | | Network | | Platform |
| • VB1940 Sensor | | | | |
| • Image Proc. | | | | • SIPL Library |
| • Network Stack | | | | • Application |
| | | | | • Buffer Mgmt. |
+-----------------+ +--------------+ +-----------------+
Pipeline Overview#
CoE Camera → Network → SIPL ICP → ISP Pipeline → Application Consumers
│
├── RAW Consumer (ICP output)
├── ISP0 Consumer (YUV output)
├── ISP1 Consumer (YUV output)
└── ISP2 Consumer (YUV output)
Application Setup and Configuration#
Prerequisites#
Hardware
NVIDIA Jetson Thor platform with CoE camera support.
CoE camera (such as VB1940-based).
Ethernet network connection.
Software
NVIDIA JetPack 7.0.
SIPL libraries.
NvSci libraries.
Required System Libraries
The following libraries must be present on the system for proper operation.
/usr/lib/aarch64-linux-gnu/nvidia/libnvsipl_control.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvsipl_devblk_cdi.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvsipl_devblk_crypto.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvsipl_devblk_ddi.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvsipl_devblk.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvsipl_query.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvsipl.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvsipl_pipeline.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvcamerahal.so
/usr/lib/aarch64-linux-gnu/nvidia/libnvfusacap.so
/usr/lib/nvsipl_uddf/libnvuddf_eagle_library.so
/usr/lib/nvsipl_drv/libnvsipl_qry_vb1940.so
These libraries are typically installed with NVIDIA JetPack.
Note
The /usr/lib/nvsipl_uddf/ path contains the UDDF Eagle driver libraries. The /usr/lib/nvsipl_drv/ path contains the query database libraries.
Installation#
Download the SDK and install JetPack.
If JetPack 7.0 with Jetson Linux 38.2 is installed on NVIDIA Jetson Thor, skip this step.
Download SDK Manager from NVIDIA JetPack SDK Downloads and Notes and follow the provided steps to install JetPack.
SDK Manager copies the SIPL package files to /usr/src/jetson_sipl_api/sipl.
Note
Back up existing driver libraries and applications before building and installing the new ones.
On the Jetson device, install the required build tools:
ubuntu@jetson:~$ sudo apt-get update && sudo apt-get install build-essential
ubuntu@jetson:~$ sudo apt-get install cmake
Build the application:
ubuntu@jetson:~$ cd /usr/src/jetson_sipl_api/sipl
ubuntu@jetson:/usr/src/jetson_sipl_api/sipl$ sudo mkdir build
ubuntu@jetson:/usr/src/jetson_sipl_api/sipl$ cd build/
ubuntu@jetson:/usr/src/jetson_sipl_api/sipl/build$ sudo cmake ../
ubuntu@jetson:/usr/src/jetson_sipl_api/sipl/build$ sudo make
ubuntu@jetson:/usr/src/jetson_sipl_api/sipl/build$ sudo make install
After successful installation, the application is available at the following location:
/usr/bin/nvsipl_coe_camera
Configuration Methods#
The application supports three configuration methods:
Platform configuration name: nvsipl_coe_camera -c <platform-config-name>.
JSON configuration file: nvsipl_coe_camera -t <path-to-config.json>.
CoE override file (optional): nvsipl_coe_camera -c <config> --coe-override <override.csv>.
Usage and Command-Line Options#
Basic Usage Examples#
# Run with default settings (5 seconds, no file output)
./nvsipl_coe_camera -c VB1940_Camera
# Run with custom duration
./nvsipl_coe_camera -c VB1940_Camera -r 10
# Enable RAW output capture
./nvsipl_coe_camera -c VB1940_Camera -R -W 10 -f test_capture
# Enable multiple outputs with custom NITO path
./nvsipl_coe_camera -c VB1940_Camera -R -0 -1 -W 5 -f capture -N /custom/nito/path
# Use JSON configuration file
./nvsipl_coe_camera -t custom_config.json -0 -W 3 -v 2
# Use CoE override file for network configuration
./nvsipl_coe_camera -c VB1940_Camera --coeConfigOverridePath override.csv -R -W 5
Command-Line Options#
| Option | Long Form | Argument | Description |
|---|---|---|---|
| `-h` | | None | Display usage information and available configurations. |
| `-r` | | `<n>` | Exit the application after n seconds (default: 5). |
| `-c` | | `<name>` | Platform configuration name. |
| `-t` | | `<file>` | Custom platform configuration JSON file. |
| `-R` | | None | Enable RAW (ICP) output capture. |
| `-0` | | None | Disable ISP0 output capture. |
| `-1` | | None | Disable ISP1 output capture. |
| `-2` | | None | Disable ISP2 output capture. |
| `-W` | | `<n>` | Number of frames to write to file (default: 0). |
| `-f` | | `<prefix>` | Filename prefix for dumped files. |
| `-v` | | `<level>` | Verbosity level (0–4; default: 1). |
| `-N` | | `<path>` | Path to the folder containing NITO files. |
| | `--coeConfigOverridePath` | `<file>` | Path to the CoE configuration override file. |
Configuration Sources#
The application accepts configuration from two sources:
Built-in Platform Configurations (-c option): Use predefined platform configurations from the SIPL database:
./nvsipl_coe_camera -c VB1940_Camera
To see the available configurations:
./nvsipl_coe_camera -h
JSON Configuration Files (-t option): Use custom JSON configuration files:
./nvsipl_coe_camera -t /path/to/config.json
Verbosity Levels#
The application supports five verbosity levels:
| Level | Description | Output |
|---|---|---|
| 0 | No logging | None. |
| 1 | Error only | Error messages (default). |
| 2 | Warning | Error and warning messages. |
| 3 | Info | Error, warning, and info messages. |
| 4 | Debug | All messages, including detailed debug info. |
Example with debug output:
./nvsipl_coe_camera -c VB1940_Camera -v 4
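The verbosity gate can be sketched as a simple threshold check. The `ShouldLog` helper below is purely illustrative (it is not part of the application or its logging macros), assuming level 0 silences everything and each message level at or below the configured `-v` value is emitted:

```cpp
// Illustrative sketch of the -v gate: a message at `level` (1 = error,
// 2 = warning, 3 = info, 4 = debug) is emitted only when its level is
// at or below the configured verbosity threshold. Hypothetical helper.
inline bool ShouldLog(int level, int verbosity) {
    return (level >= 1) && (level <= verbosity);
}
```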
Normal Streaming with CoE Cameras#
The following steps outline the normal streaming use case:
Call INvSIPLCamera::GetInstance: Use this pointer to call into all the SIPL APIs.
Call SetPlatformCfg: Set a valid platform configuration.
For each device block and sensor, call SetPipelineCfg: Set a valid pipeline configuration and NvSIPLPipelineQueue objects.
Call Init: Initialize the SIPL camera system.
Image Creation:
Create NvSciBufModule using NvSciBufModuleOpen.
For each ISP output:
Create NvSciBufAttrList using NvSciBufAttrListCreate.
Set any general attributes required using NvSciBufAttrListSetAttrs. Include attributes to set image formats for ISP.
Call GetImageAttributes to set default attributes or validate the attributes previously set.
Create a Reconciled Attribute List and a Conflict Attribute List.
Create and fill any other attribute lists for consumers of the ISP outputs.
Reconcile all lists using NvSciBufAttrListReconcile.
Create as many buffer objects as required to create a buffer pool using NvSciBufObjAlloc.
After the objects are allocated, the attribute lists are no longer needed. You can destroy them by calling NvSciBufAttrListFree.
Register the buffer objects with SIPL by calling RegisterImages.
Auto Control: For each sensor on which ISP needs to be enabled:
Load the NITO file into system memory.
Call RegisterAutoControlPlugin.
Synchronization:
Create NvSciSyncModule using NvSciSyncModuleOpen.
For each sensor that needs ISP:
Create an NvSciSyncAttrList using NvSciSyncAttrListCreate (producer list).
Call FillNvSciSyncAttrList for this attribute list with valid parameters:
Valid pipeline ID.
outType: ICP or ISP0. (ISP1 or ISP2 can be used, but only one is required per pipeline.)
clientType: SIPL_SIGNALER if it is to be used as an EOF fence; SIPL_WAITER if it is to be used as a pre-fence.
Create another NvSciSyncAttrList (consumer list):
If the producer list is a signaler, the consumer list should be a waiter.
If the producer list is a waiter, the consumer list should be a signaler.
Fill the attributes required for the consumer list:
If the CPU consumer is a waiter, set the following suggested attributes:
NvSciSyncAttrKey_NeedCpuAccess: true
NvSciSyncAttrKey_RequiredPerm: NvSciSyncAccessPerm_WaitOnly
If the consumer is some other engine, call the analogous FillNvSciSyncAttrList API for that engine.
Create a Reconciled Attribute List and a Conflict Attribute List.
Reconcile all the lists using NvSciSyncAttrListReconcile.
Create sync objects using the attribute lists with a call to NvSciSyncObjAlloc.
After the objects are allocated, destroy the attribute lists by using NvSciSyncAttrListFree.
Register NvSciSyncObj with SIPL by calling RegisterNvSciSyncObj with valid parameters:
Valid pipeline ID.
outType: ICP or ISP0. (ISP1 or ISP2 can be used, but only one is required per pipeline.)
clientType: NVSIPL_EOFSYNCOBJ if it is to be used as an EOF fence; NVSIPL_PRESYNCOBJ if it is to be used as a pre-fence.
Create Threads: Create one thread per sensor for each of the following queues to get the payload when available and perform the relevant actions:
Output queues of ICP and the ISPs that are enabled:
Action 1: Call GetImageData on the INvSIPLBuffer payload that was obtained.
Action 2: Check frameSeqNumValid and print frameSequenceNumber.
Optional for consumers of SIPL:
When a consumer is using the buffer, call AddRef on the buffer from the consumer end.
Wait on the EOF fence in the buffer, obtained by calling GetEOFNvSciSyncFence, using the CPU.
Ensure the consumer calls Release on the buffer only after it has finished processing.
Action 3: Call Release when done with the buffer to allow SIPL to access it.
Important
Release the buffers in a timely manner to avoid starving SIPL of buffers.
Notification queue of each enabled pipeline:
Action 1: Look for NOTIF_INFO* notifications if the user needs to keep track of them; otherwise, they can be ignored.
Action 2: Check for NOTIF_WARN_ICP_FRAME_DROP and NOTIF_WARN_ICP_FRAME_DISCONTINUITY warning notifications and keep a count.
Action 3: Check for any other notifications and report them to the user.
Important
Be sure to drain these notifications because the size of the queue is limited. If the queue is not cleared, SIPL fails to write new content to the queue.
Start the threads.
Waiter Context:
A CPU waiter context is helpful when you need to wait on the EOF fences of SIPL. You can use NvSciSyncModule to create a context using the NvSciSyncCpuWaitContextAlloc API.
Call Start: Begin streaming.
Queue threads should see incoming frame data.
Sleep for N seconds: N is passed on the command line. During this time, check for any errors in the notification queue and report them to the user.
Call Stop to stop streaming.
Optional: For each sensor, report the number of frame drops and discontinuities.
Call Deinit: Deinitialize the camera system.
Deinitialize buffer resources:
Free the NvSciBufObj objects by calling NvSciBufObjFree.
Free the NvSciBufModule by calling NvSciBufModuleClose.
Initialization Sequence for CoE-specific Implementation#
The application follows this initialization sequence:
1. SIPL Setup#
// Get SIPL Camera and Query instances
m_upCamera = INvSIPLCamera::GetInstance();
m_upQuery = INvSIPLCameraQuery::GetInstance();
// Parse database and configuration
status = m_upQuery->ParseDatabase();
if (m_cmdline.sTestConfigFile != "") {
status = m_upQuery->ParseJsonFile(m_cmdline.sTestConfigFile);
} else if (m_cmdline.sConfigName != "") {
status = m_upQuery->GetCameraSystemConfig(m_cmdline.sConfigName, camSysConfig);
}
2. Platform Configuration#
// Set platform configuration with CoE camera system
status = m_upCamera->SetPlatformCfg(cameraSystemConfig);
// Build module list from camera system config
m_coeModules.clear();
for (size_t i = 0; i < cameraSystemConfig.cameras.size(); i++) {
CoeModuleData module;
module.sensorId = cameraSystemConfig.cameras[i].sensorInfo.id;
module.index = static_cast<uint32_t>(i);
module.name = cameraSystemConfig.cameras[i].name;
module.platform = cameraSystemConfig.cameras[i].platform;
module.sensorName = cameraSystemConfig.cameras[i].sensorInfo.name;
m_coeModules.push_back(module);
}
3. Pipeline Configuration#
// Global CoE pipeline configuration
NvSIPLPipelineConfiguration g_coePipelineCfg = {
.captureOutputRequested = true, // ICP (raw) output
.isp0OutputRequested = true, // ISP0 (YUV) output
.isp1OutputRequested = false, // ISP1 (optional)
.isp2OutputRequested = false, // ISP2 (optional)
.disableSubframe = true,
.bufferCfg = {
.maxCaptureBufferCount = 4U, // Minimum required for CoE mode
.maxIsp0BufferCount = 64U, // Standard ISP buffer count
.maxIsp1BufferCount = 64U, // Standard ISP buffer count
.maxIsp2BufferCount = 64U, // Standard ISP buffer count
}
};
// Configure pipeline for each CoE module
for (const auto& module : m_coeModules) {
uint32_t sensorId = module.sensorId;
uint32_t index = module.index;
// Initialize pipeline queues for this module
m_queues[index] = {};
status = m_upCamera->SetPipelineCfg(sensorId, g_coePipelineCfg, m_queues[index]);
}
4. SIPL Initialization#
status = m_upCamera->Init();
Image Buffer Management#
Buffer Object Management#
The application uses a dedicated CoeBufObjManager class for managing NvSciBuf objects:
class CoeBufObjManager {
public:
std::vector<NvSciBufObj> m_sciBufObjs;
CoeBufObjManager(NvSciBufModule sciBufModule) : m_sciBufModule(sciBufModule) { }
SIPLStatus AllocateBufObjs(INvSIPLCamera *siplCamera,
uint32_t uSensor,
INvSIPLClient::ConsumerDesc::OutputType output,
uint32_t numObjects);
~CoeBufObjManager();
private:
NvSciBufModule m_sciBufModule {};
};
Buffer Registration Process#
Create buffer attributes:
NvSciBufAttrList attrList;
NvSciBufAttrListCreate(m_sciBufModule, &attrList);

// Set general attributes
NvSciBufAttrKeyValuePair attrKvp[] = {
    { NvSciBufGeneralAttrKey_Types, &bufType, sizeof(bufType) },
    { NvSciBufGeneralAttrKey_RequiredPerm, &accessPerm, sizeof(accessPerm) },
    { NvSciBufGeneralAttrKey_NeedCpuAccess, &isCpuAcccessReq, sizeof(isCpuAcccessReq) },
    { NvSciBufGeneralAttrKey_EnableCpuCache, &isCpuCacheEnabled, sizeof(isCpuCacheEnabled) }
};
Get image attributes from SIPL:
// Set general attributes first
NvSciBufAttrKeyValuePair attrKvp[] = {
    { NvSciBufGeneralAttrKey_Types, &bufType, sizeof(bufType) },
    { NvSciBufGeneralAttrKey_RequiredPerm, &accessPerm, sizeof(accessPerm) },
    { NvSciBufGeneralAttrKey_NeedCpuAccess, &isCpuAcccessReq, sizeof(isCpuAcccessReq) },
    { NvSciBufGeneralAttrKey_EnableCpuCache, &isCpuCacheEnabled, sizeof(isCpuCacheEnabled) }
};
size_t uNumAttrs = (output == INvSIPLClient::ConsumerDesc::OutputType::ICP) ? 2U : 4U;
err = NvSciBufAttrListSetAttrs(*(attrList.get()), attrKvp, uNumAttrs);

// Get image attributes from SIPL
status = siplCamera->GetImageAttributes(uSensor, output, *(attrList.get()));
Reconcile attributes:
std::unique_ptr<NvSciBufAttrList, CloseNvSciBufAttrList> reconciledAttrList;
std::unique_ptr<NvSciBufAttrList, CloseNvSciBufAttrList> conflictAttrList;
reconciledAttrList.reset(new NvSciBufAttrList());
conflictAttrList.reset(new NvSciBufAttrList());
err = NvSciBufAttrListReconcile(attrList.get(), 1U, reconciledAttrList.get(), conflictAttrList.get());
Allocate buffer objects:
// Pre-allocate vector capacity to avoid reallocations during frame capture
m_sciBufObjs.reserve(numObjects);
for (size_t i = 0U; i < numObjects; i++) {
    NvSciBufObj bufObj {};
    err = NvSciBufObjAlloc(*(reconciledAttrList.get()), &bufObj);
    CHK_NVSCISTATUS_AND_RETURN(err, "NvSciBufObjAlloc()");
    CHK_PTR_AND_RETURN(bufObj, "NvSciBufObjAlloc()");
    m_sciBufObjs.push_back(bufObj);
}
Register with SIPL:
status = m_upCamera->RegisterImages(uSensor, outputType, m_sciBufObjs);
Auto Control Plug-in Registration#
For ISP processing, register the auto control plug-in:
// Check whether any ISP outputs are enabled
bool hasISPOutputs = false;
for (auto& moduleInfo : m_coeModules) {
uint32_t index = moduleInfo.index;
if (m_queues[index].isp0CompletionQueue != nullptr ||
m_queues[index].isp1CompletionQueue != nullptr ||
m_queues[index].isp2CompletionQueue != nullptr) {
hasISPOutputs = true;
break;
}
}
if (!hasISPOutputs) {
LOG_INFO("No ISP outputs enabled (ICP-only configuration)");
return NVSIPL_STATUS_OK;
}
// Load NITO file for each sensor
struct CoeModuleData *module = CoeFirstModule();
while (module != nullptr) {
uint32_t uSensor = module->sensorId;
// Load NITO file
std::vector<uint8_t> blob;
SIPLStatus loadStatus = LoadNITOFile(m_cmdline.sNitoFolderPath, module->sensorName, blob);
// Register plug-in
ISiplControlAuto* autoControl = nullptr; // For NV_PLUGIN
status = m_upCamera->RegisterAutoControlPlugin(uSensor, NV_PLUGIN, autoControl, blob);
module = CoeNextModule();
}
Frame Processing and Consumer Threads#
Thread Architecture#
The application creates multiple threads per camera module:
ICP Thread: Processes raw image data from Image Capture Pipeline.
ISP0/ISP1/ISP2 Threads: Process YUV data from Image Signal Processor.
Event Thread: Handles pipeline notifications.
CPU Signal Thread: Manages CPU signaling.
Consumer Classes#
Base Consumer Class#
class CoeConsumerBase {
public:
virtual SIPLStatus ProcessBuffer(INvSIPLClient::INvSIPLNvMBuffer* pBuffer,
uint32_t frameNum, uint32_t cameraModule) = 0;
virtual ConsumerType GetType() const = 0;
virtual const char* GetTypeName() const = 0;
};
RAW Consumer (ICP Output)#
class CoeRawConsumer : public CoeConsumerBase {
private:
uint32_t GetDynamicRawBufferSize(INvSIPLClient::INvSIPLNvMBuffer* pBuffer);
SIPLStatus SaveRawFrame(INvSIPLClient::INvSIPLNvMBuffer* pBuffer,
uint32_t frameNum, uint32_t cameraModule);
};
YUV Consumer (ISP Output)#
class CoeYuvConsumer : public CoeConsumerBase {
private:
INvSIPLClient::ConsumerDesc::OutputType m_outputType;
NvSciSyncCpuWaitContext m_cpuWaitContext;
SIPLStatus SaveYuvFrame(INvSIPLClient::INvSIPLNvMBuffer* pBuffer,
uint32_t frameNum, uint32_t cameraModule);
};
Thread Processing Loop#
void SIPLCoeCamera::CoeImageThread(uint32_t const cameraModule, uint32_t const threadIndex) {
LOG_DBG("CoE Image Thread %u for module %u started", threadIndex, cameraModule);
// Create appropriate consumer using factory
std::unique_ptr<CoeConsumerBase> consumer = CoeConsumerFactory::CreateConsumer(
threadIndex, cameraModule, m_cpuWaitContext,
m_cmdline.bEnableRaw, m_cmdline.bDisableISP0,
m_cmdline.bDisableISP1, m_cmdline.bDisableISP2,
m_cmdline.uNumWriteFrames);
if (!consumer) {
LOG_ERR("Failed to create consumer for thread index: %u", threadIndex);
return;
}
SIPLStatus status = NVSIPL_STATUS_OK;
INvSIPLClient::INvSIPLBuffer *pBuffer = nullptr;
ThreadData *threadData = &m_threadData[cameraModule][threadIndex];
m_threadReady[cameraModule][threadIndex] = true;
while (!m_exitAllThreads && (status != NVSIPL_STATUS_EOF)) {
if (threadData->imageQueue) {
status = threadData->imageQueue->Get(pBuffer, 1000000); // 1 second timeout
if (status == NVSIPL_STATUS_OK && pBuffer != nullptr) {
m_queueCounts[cameraModule][threadIndex]++;
// Cast to NvM buffer for processing
INvSIPLClient::INvSIPLNvMBuffer *pNvMBuffer =
dynamic_cast<INvSIPLClient::INvSIPLNvMBuffer *>(pBuffer);
if (pNvMBuffer != nullptr) {
uint32_t frameNumber = m_queueCounts[cameraModule][threadIndex];
// Process buffer based on consumer type
if (m_cmdline.bEnableRaw && consumer->GetType() == ConsumerType::RAW_CONSUMER) {
SIPLStatus processStatus = consumer->ProcessBuffer(pNvMBuffer, frameNumber, cameraModule);
} else if (threadIndex >= THREAD_INDEX_ISP0 && threadIndex <= THREAD_INDEX_ISP2) {
SIPLStatus status = WriteISPBufferToFile(pNvMBuffer, frameNumber, cameraModule, threadIndex);
}
}
// Return buffer to SIPL
SIPLStatus releaseStatus = pBuffer->Release();
pBuffer = nullptr;
}
} else {
usleep(10000); // 10ms
}
}
}
File Output and Frame Dumping#
RAW Frame Saving#
The application automatically calculates RAW buffer sizes based on sensor configuration:
uint32_t GetDynamicRawBufferSize(INvSIPLClient::INvSIPLNvMBuffer* pBuffer) {
    NvSciBufObj rawBufObj = pBuffer->GetNvSciBufImage();
    // Get the attribute list backing this buffer object
    NvSciBufAttrList bufAttrList;
    NvSciBufObjGetAttrList(rawBufObj, &bufAttrList);
    // Get buffer attributes
    NvSciBufAttrKeyValuePair imgAttrs[] = {
        { NvSciBufImageAttrKey_PlanePitch, NULL, 0 },
        { NvSciBufImageAttrKey_PlaneHeight, NULL, 0 },
    };
    NvSciBufAttrListGetAttrs(bufAttrList, imgAttrs, sizeof(imgAttrs) / sizeof(imgAttrs[0]));
    uint32_t pitch = *(static_cast<const uint32_t*>(imgAttrs[0].value));
    uint32_t height = *(static_cast<const uint32_t*>(imgAttrs[1].value));
    return pitch * height;
}
Frame File Naming Convention#
RAW files:
/tmp/coe_sensor<N>_raw_frame_<N>.raw
YUV files:
/tmp/<prefix>_sensor<N>_<output>_frame_<N>.yuv
<N> is the sensor or frame number.
<prefix> is a configurable prefix (default: nvsipl_coe_camera).
<output> is ISP0, ISP1, or ISP2.
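As a sketch of the naming convention above, a small helper can compose the YUV dump path. `MakeYuvDumpPath` is a hypothetical illustration, not a function from the application:

```cpp
#include <cstdio>
#include <string>

// Hypothetical helper illustrating the YUV naming scheme:
// /tmp/<prefix>_sensor<N>_<output>_frame_<N>.yuv
inline std::string MakeYuvDumpPath(const std::string &prefix, unsigned sensorId,
                                   const std::string &output, unsigned frameNum) {
    char buf[256];
    std::snprintf(buf, sizeof(buf), "/tmp/%s_sensor%u_%s_frame_%u.yuv",
                  prefix.c_str(), sensorId, output.c_str(), frameNum);
    return std::string(buf);
}
```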
File Writer Implementation#
Steps required to write image data to files, building on the normal streaming process:
Obtain the buffer object:
NvSciBufObj bufObj = pBuffer->GetNvSciBufImage();
Get the buffer properties:
NvSciBufAttrList bufAttrList;
NvSciBufObjGetAttrList(bufObj, &bufAttrList);

NvSciBufAttrKeyValuePair imgAttrs[] = {
    { NvSciBufImageAttrKey_PlaneCount, NULL, 0 },
    { NvSciBufImageAttrKey_PlaneColorFormat, NULL, 0 },
    { NvSciBufImageAttrKey_PlaneHeight, NULL, 0 },
    { NvSciBufImageAttrKey_PlaneWidth, NULL, 0 },
    { NvSciBufImageAttrKey_PlanePitch, NULL, 0 },
    { NvSciBufImageAttrKey_PlaneAlignedSize, NULL, 0 },
    { NvSciBufImageAttrKey_PlaneBitsPerPixel, NULL, 0 },
    { NvSciBufGeneralAttrKey_CpuNeedSwCacheCoherency, NULL, 0 }
};
NvSciBufAttrListGetAttrs(bufAttrList, imgAttrs, sizeof(imgAttrs) / sizeof(imgAttrs[0]));
Determine the file extension by format:
raw for Bayer images.
yuv for YUV images.
Handle ISP output synchronization:
if (isISPOutput) {
    NvSciSyncFence fence = NvSciSyncFenceInitializer;
    status = pBuffer->GetEOFNvSciSyncFence(&fence);
    NvSciSyncFenceWait(&fence, m_cpuWaitContext, FENCE_FRAME_TIMEOUT_MS);
    NvSciSyncFenceClear(&fence);
}
Format Override Support#
ICP Output Formats
Based on sensor configuration, set the correct input format type and pixel order in platform configuration.
CoE sensors typically do not allow format overriding because they have fixed output formats.
ISP Output Formats
Users can specify different ISP output formats as long as they are supported.
Before calling GetImageAttributes, override ISP output formats to apply user preferences.
The default format is YUV 420 SemiPlanar UINT 8 BlockLinear.
Refer to the SIPL documentation for supported format combinations.
// Example: Override ISP format before GetImageAttributes
NvSciBufAttrKeyValuePair formatOverride[] = {
    { NvSciBufImageAttrKey_PlaneColorFormat, &desiredFormat, sizeof(desiredFormat) }
};
NvSciBufAttrListSetAttrs(attrList, formatOverride, 1);
NvSciStream Integration#
The NvSciStream library provides utilities to construct streaming applications for use cases that have complex buffer handling and synchronization. It also helps simplify IPC and IVC use cases.
High-level steps for setup (analogous to steps 5 and 7 of normal streaming):
Create a static pool of packets for a producer.
Create a producer for the static pool.
Connect the producer to the downstream consumer.
Ensure that the static pool, producer, queue, and downstream consumer are connected.
Create and reconcile buffer attributes across producer and consumer, ensuring that they are present in the stream.
Set up and reconcile the synchronization attributes across producer and consumer to ensure that they are present in the stream.
Allocate NvSciSyncObj for the producer, register it with the producer, and inform the stream that the sync object is set.
Allocate NvSciSyncObj for the consumer, register it with the consumer, and inform the stream that the sync object is set.
Get the consumer sync object and register it with the producer, and vice versa.
Allocate buffers.
Create PoolPackets and insert buffers into the packets.
Replace the buffers and insert cookies as identifiers.
Ensure that the PacketsComplete event is observed.
Set the status to PacketImport.
Check whether setup is complete.
For each packet, ensure that it is ready.
High-level steps for streaming on the producer side (SIPL):
Have a Post function to send buffers to consumers. In normal streaming, this function is called from the pipeline queue and performs the following actions:
Calls GetEOFNvSciSyncFence and GetNvSciBufImage.
Finds the cookie of the buffer.
Calls AddRef on the buffer.
Informs the stream that the fence for the packet with the cookie ID has been set.
Informs the stream that the packet is present.
Increments the number of buffers with consumers and clears the fence.
Have a thread running to receive buffers from consumers. This thread needs to perform the following actions as long as the number of buffers with consumers is greater than 0:
Query events.
When the packet is ready, get the fence and packet.
Decrement the number of buffers with consumers.
Call AddNvSciSyncPrefence on the buffer.
Call Release for the buffer.
Clear the fence.
Implement a similar function and thread on the consumer.
For complex multi-consumer scenarios, NvSciStream can simplify buffer management.
Key Benefits
Simplified IPC and IVC use cases.
Complex buffer handling automation.
Streamlined synchronization management.
Setup Overview
Create a static pool of packets for producer.
Create a producer for the static pool.
Connect the producer to downstream consumers.
Reconcile buffer and sync attributes across the stream.
Allocate and register NvSciSyncObj for the producer and consumer.
Exchange sync objects between the producer and consumer.
Allocate buffers and create pool packets.
Complete setup and verify readiness.
Streaming Process
// Producer side (SIPL)
void PostBuffer(INvSIPLClient::INvSIPLBuffer* pBuffer) {
    NvSciSyncFence fence = NvSciSyncFenceInitializer;
    NvSciBufObj bufObj = pBuffer->GetNvSciBufImage();
    // Get the EOF fence for this buffer
    pBuffer->GetEOFNvSciSyncFence(&fence);
    // Find cookie and notify stream
    uint32_t cookie = FindBufferCookie(bufObj);
    pBuffer->AddRef();
    // Set fence and present packet
    SetPacketFence(cookie, fence);
    PresentPacket(cookie);
    NvSciSyncFenceClear(&fence);
}
// Consumer thread
void ConsumerThread() {
    while (numBuffersWithConsumers > 0) {
        QueryEvents();
        if (PacketReady()) {
            auto [fence, packet] = GetPacketAndFence();
            numBuffersWithConsumers--;
            // Map the packet back to its SIPL buffer (illustrative helper),
            // attach the consumer fence as a pre-fence, then return the buffer
            INvSIPLClient::INvSIPLBuffer* buffer = BufferForPacket(packet);
            AddNvSciSyncPrefence(buffer, fence);
            ReleaseBuffer(buffer);
            NvSciSyncFenceClear(&fence);
        }
    }
}
Error Handling and Notifications#
Event Processing#
The application processes various pipeline notifications:
void SIPLCoeCamera::OnEvent(NvSIPLPipelineNotifier::NotificationData &oNotificationData) {
switch (oNotificationData.eNotifType) {
case NvSIPLPipelineNotifier::NOTIF_INFO_ICP_PROCESSING_DONE:
m_uNumFrameCaptured++;
break;
case NvSIPLPipelineNotifier::NOTIF_WARN_ICP_FRAME_DROP:
m_uNumFrameDrops++;
break;
case NvSIPLPipelineNotifier::NOTIF_ERROR_ICP_CAPTURE_FAILURE:
m_bInError = true;
break;
// ... handle other notification types
}
}
Common Error Types#
Frame Drops: Network congestion or processing delays.
Capture Failures: Hardware or communication issues.
Processing Failures: ISP processing errors.
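The warning counters accumulated in OnEvent can be turned into a drop rate for reporting. The helper below is an illustrative sketch (not application code), assuming counters like m_uNumFrameCaptured and m_uNumFrameDrops:

```cpp
// Illustrative sketch: frame-drop percentage from captured/dropped counters
// such as m_uNumFrameCaptured and m_uNumFrameDrops in OnEvent above.
inline double FrameDropPercent(unsigned captured, unsigned dropped) {
    const unsigned total = captured + dropped;
    if (total == 0U) {
        return 0.0; // nothing captured yet; report zero rather than divide by zero
    }
    return 100.0 * static_cast<double>(dropped) / static_cast<double>(total);
}
```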
Configuration Files#
JSON Configuration File#
The following is an example of a JSON configuration file for a CoE camera. For more details, refer to SIPL Query JSON Guide for CoE Camera Development.
{
"cameras": [
{
"name": "CoE_Camera_0",
"platform": "VB1940",
"sensorInfo": {
"id": 0,
"name": "VB1940_SENSOR"
},
"cameratype": {
"CoECamera": {
"hsbId": 0,
"sensors": [
{
"ip_address": "192.168.1.2",
"mac_address": "8c:1f:64:6d:70:03"
}
]
}
}
}
],
"transports": [
{
"CoETransSettings": {
"name": "CoE_Transport_0",
"hsbId": 0,
"interface_name": "mgbe0_0",
"ip_address": "192.168.1.2"
}
}
]
}
CoE Override CSV Format#
# Format: hsb_id, <HSB-ID>, <interface-name>, <MAC-address>, <IP-address>
hsb_id,0,mgbe0_0,8c:1f:64:6d:70:03,192.168.1.2
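A row in this format can be split on commas into its five fields; the MAC address uses colons, so a comma split leaves it intact. `ParseOverrideRow` below is a hypothetical sketch, not the application's actual parser:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Illustrative sketch: split one CoE override row of the form
// hsb_id,<HSB-ID>,<interface-name>,<MAC-address>,<IP-address>
inline std::vector<std::string> ParseOverrideRow(const std::string &line) {
    std::vector<std::string> fields;
    std::istringstream ss(line);
    std::string field;
    while (std::getline(ss, field, ',')) {
        fields.push_back(field);
    }
    return fields;
}
```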
Troubleshooting#
Common Issues#
Camera not detected
Verify that the mgbe0_0 interface is up and the IP address is set.
Ping the sensor.
If you are using an override file, ensure that the MAC and IP addresses in the CoE override file are correct.
NITO file errors
Verify the NITO file path.
Check file permissions.
Ensure that the sensor name matches the NITO filename (for example, vb1940.nito).
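To sanity-check that pairing, a small helper can derive the expected filename. This is a hypothetical sketch assuming the convention is simply the lowercased sensor model plus `.nito`, as the guide's VB1940/vb1940.nito example suggests:

```cpp
#include <cctype>
#include <string>

// Illustrative sketch: derive the expected NITO filename from a sensor model
// name, assuming the "<lowercased model>.nito" convention from the example.
inline std::string ExpectedNitoName(const std::string &sensorModel) {
    std::string name;
    name.reserve(sensorModel.size());
    for (char c : sensorModel) {
        name += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    }
    return name + ".nito";
}
```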
Debug Commands#
# Enable maximum verbosity
nvsipl_coe_camera -c VB1940_Camera -v 4
# Check available configurations
nvsipl_coe_camera --list-configs
Log Analysis#
Monitor application logs for the following:
Buffer allocation failures
Network timeout errors
Frame sequence discontinuities
Pipeline queue overflows