Gst-nvdsudpsrc#
The Gst-nvdsudpsrc plugin is a source component that receives UDP/RTP packets from the network. Internally, the plugin uses Rivermax SDK APIs for network communication. NVIDIA Rivermax® offers a unique IP-based solution for any media and data streaming use case. For more details, see the Rivermax Product Page.
Rivermax utilizes kernel-bypass technology and RDMA capabilities to achieve better CPU performance, low latency, and higher bandwidth. On top of the Rivermax-based enhancements, several memory and buffer management optimizations have been implemented to further reduce CPU utilization in high packet-rate use cases.
This component also supports RTP header and payload separation: the RTP header and payload can be received in separate memories. The header is always placed in system memory, while the payload can be copied directly into GPU (pinned) memory. This avoids extra memory copies when GPU processing is performed on the RTP payloads. Header and payload separation happens only when the header-size property is set to a non-zero value, and only for a fixed header size.
High-resolution uncompressed video streams carry a very high number of RTP packets per second. In such cases, the OSS de-packetization component (rtpvrawdepay) becomes a bottleneck for real-time processing. To handle these cases, nvdsudpsrc supports de-packetization of uncompressed video and audio streams as per the SMPTE 2110-20/30 specifications. In this mode, nvdsudpsrc outputs a GstBuffer containing a complete video/audio frame instead of RTP packets.
Note
To use de-packetization for uncompressed video and audio payloads within nvdsudpsrc, there are certain assumptions about the input stream:
- Each RTP packet must be of fixed size, with fixed header and payload sizes.
- There must not be more than one sample row data (SRD) per RTP packet.
With the support for de-packetization, nvdsudpsrc now has four different operating modes, selected by the value of its caps property (programmatic sketches of mode selection follow the list below).
- Default mode
nvdsudpsrc receives RTP packets carrying any type of payload from the network and pushes them to the downstream depayloader component for de-packetization. In this mode, the nvdsudpsrc component is agnostic to the content of the RTP packets. This is the default mode, and the source pad should have “application/x-rtp, …” as caps.
- Uncompressed video frame as output
nvdsudpsrc receives RTP packets carrying uncompressed video as payload and de-packetizes them as per the SMPTE 2110-20 specification to form a video frame before sending it to downstream components. This mode is activated when the caps property is set with “video/x-raw(memory:NVMM), …” caps. From DS 8.0, caps can also be set to “video/x-raw, …” if output in software memory is required.
- Uncompressed audio frame as output
nvdsudpsrc receives RTP packets carrying uncompressed audio as payload and de-packetizes them as per the SMPTE 2110-30 specification to form an audio frame before sending it to downstream components. This mode is activated when the caps property is set with “audio/x-raw, …” caps.
- Generic data as output
nvdsudpsrc receives RTP packets carrying any type of payload and de-packetizes them to form a frame before sending it to downstream components. In this case, the frame boundary is not determined by parsing the RTP/payload headers but by a configurable number of packets: the payload-multiple property decides how many packets constitute a frame; nvdsudpsrc strips the RTP header from that many packets and combines their payloads into one frame (a sketch of this assembly, with example sizes, follows the list below). This mode is activated when the caps property is set with “application/x-custom, …” caps.
- SMPTE 2022-7 feature support
The plugin supports SMPTE 2022-7 dual-path reception. Its main purpose is to ensure uninterrupted transport of media streams by listening to a redundant stream of RTP packets, safeguarding against packet loss or network/interface failures.
To enable the SMPTE 2022-7 feature, use the “st2022-7-streams” and “local-iface-ip” properties.
The order in which streams and IP addresses are listed is important: each stream in “st2022-7-streams” must be paired, in order, with its matching IP address in “local-iface-ip”. It is a strict 1:1 mapping; the first stream goes with the first IP, the second stream with the second IP, and so on.
Example pipeline to receive two duplicate streams (SMPTE 2022-7) of 1920x1080 / 10bit / 4:2:2 / UYVP format / 30fps through two local interfaces:
gst-launch-1.0 nvdsudpsrc local-iface-ip=<ip address of NIC1>,<ip address of NIC2> st2022-7-streams=<unicast / multicast address for stream1>:<stream1 port>,<unicast / multicast address of stream2>:<stream2 port> caps='video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)UYVP, framerate=30/1' header-size=20 payload-size=1200 gpu-id=0 ! nvvideoconvert ! 'video/x-raw' ! fakesink -v --gst-debug=3
- Timestamps in RTP packet headers
By default, nvdsudpsrc ignores the timestamp in RTP packets and generates buffer PTS/DTS based on the system clock.
If the use-rtp-timestamp property is set, the plugin parses the received RTP timestamp and uses it to calculate the buffer PTS / running time. The logic for reconstructing the RTP ticks and converting them to running time is as follows (a numeric sketch appears after this list):
- Get the current time as time since epoch (real-time clock).
- Convert the current time to RTP ticks (using the stream clock rate).
- Clear the lower 32 bits of these RTP ticks, keeping the upper 32 bits as rtp_tick_base for reconstructing the 64-bit RTP ticks.
- Add the 32-bit RTP ticks received from the RTP packet header to this rtp_tick_base; the result is the 64-bit reconstructed RTP ticks.
- Convert the 64-bit RTP ticks to RTP time in seconds (RTP time = RTP ticks / clock rate).
- Buffer PTS/DTS = (RTP time - pipeline base time) - LEAP_SECONDS (if adjust-leap-seconds is enabled).
CLOCK_REALTIME should be used for both the sender and receiver pipelines if the RTP timestamp stamped by nvdsudpsink (sender) is to be reconstructed at the nvdsudpsrc (receiver) component. That is why, when the use-rtp-timestamp property is enabled, nvdsudpsrc acts as a clock provider to the pipeline based on the real-time clock. Also, for the above logic to work, the systems involved must be synchronized to the same time using a protocol such as NTP or PTP.
The adjust-leap-seconds property subtracts the leap seconds (37) from the received RTP timestamp, which is in TAI time, so that it matches UTC time. Use it if nvdsudpsrc is expected to receive RTP packets timestamped in TAI time and your pipeline uses UTC time for buffer PTS/DTS. In most scenarios the nvdsudpsink component sends TAI time as RTP timestamps, so this property should be enabled. The exception is when pass-rtp-timestamp is employed by nvdsudpsink and its upstream component does not provide timestamps in TAI time; in such cases, it is the user’s responsibility to determine whether TAI-based timestamps are expected. If use-rtp-timestamp is not enabled, adjust-leap-seconds has no effect.
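The following Python (PyGObject) sketch shows how an operating mode can be selected programmatically through the caps property, together with header-size and payload-size for header/payload separation. It is only an illustration; the addresses and values are placeholders, not a recommended configuration:
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# Assumes the Rivermax-enabled nvdsudpsrc plugin is installed; make() returns None otherwise.
src = Gst.ElementFactory.make("nvdsudpsrc", "src")
src.set_property("local-iface-ip", "192.168.2.20")   # IP of the receiving NIC (placeholder)
src.set_property("address", "239.1.1.1")             # unicast / multicast address (placeholder)
src.set_property("port", 5004)
src.set_property("header-size", 20)                  # non-zero value enables header/payload separation
src.set_property("payload-size", 1200)
src.set_property("gpu-id", 0)                        # payload buffers allocated on GPU 0
# Uncompressed-video mode (SMPTE 2110-20 de-packetization) is selected by these caps:
src.set_property("caps", Gst.Caps.from_string(
    "video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)UYVP, framerate=30/1"))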
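For the generic data mode, the sketch below illustrates how one output buffer is assembled from payload-multiple fixed-size packets, using the sizes from the generic-payload example pipeline further below. It describes the behavior only and is not the plugin implementation:
HEADER_SIZE = 20          # fixed header size per packet (header-size property)
PAYLOAD_SIZE = 1200       # fixed payload size per packet (payload-size property)
PAYLOAD_MULTIPLE = 4320   # packets combined into one output buffer (payload-multiple property)

def assemble_frame(packets):
    # packets: PAYLOAD_MULTIPLE byte strings, each HEADER_SIZE + PAYLOAD_SIZE bytes long
    assert len(packets) == PAYLOAD_MULTIPLE
    # Strip the header of each packet and concatenate the payloads; the result is
    # PAYLOAD_MULTIPLE * PAYLOAD_SIZE = 5,184,000 bytes, i.e. one 1920x1080 10-bit 4:2:2
    # frame with these example values.
    return b"".join(p[HEADER_SIZE:HEADER_SIZE + PAYLOAD_SIZE] for p in packets)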
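The timestamp reconstruction described under “Timestamps in RTP packet headers” can be sketched in Python as follows. This illustrates the arithmetic only (it is not the plugin source), and the clock rate, rtp_ts_32, and base_time values are placeholders:
import time

CLOCK_RATE = 90000        # RTP clock rate of the stream (90 kHz for video)
LEAP_SECONDS = 37         # TAI - UTC offset, subtracted when adjust-leap-seconds=1

rtp_ts_32 = 123456789     # 32-bit RTP timestamp parsed from the packet header (placeholder)
base_time = 0.0           # pipeline base time in seconds (placeholder)

now = time.clock_gettime(time.CLOCK_REALTIME)   # current time since epoch, in seconds
ticks_now = int(now * CLOCK_RATE)               # current time expressed in RTP ticks
rtp_tick_base = ticks_now & ~0xFFFFFFFF         # keep the upper 32 bits, clear the lower 32
rtp_ticks_64 = rtp_tick_base + rtp_ts_32        # reconstructed 64-bit RTP ticks
rtp_time = rtp_ticks_64 / CLOCK_RATE            # RTP time in seconds
pts = (rtp_time - base_time) - LEAP_SECONDS     # buffer PTS when adjust-leap-seconds is enabled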
A system can also have the GStreamer-provided OSS implementation of the UDP source (udpsrc) component. In that case, the system has two implementations of the UDP source: udpsrc and nvdsudpsrc.
The nvdsudpsrc component can only be used with NVIDIA ConnectX-5 and later cards, after the Rivermax SDK and its license have been installed.
Download and set up the Rivermax 1.70.x SDK here: https://developer.nvidia.com/networking/rivermax-getting-started
Follow the instructions on the SDK page to obtain a Rivermax development license.
To select nvdsudpsrc out of the two installations, use either the LOCAL_IFACE_IP environment variable or the local-iface-ip property. Use the command below to export the environment variable:
export LOCAL_IFACE_IP=<IP of NIC>
The nvdsudpsrc component also requires the CAP_NET_RAW capability. Either run the application that uses the nvdsudpsrc component with superuser privileges, or set the CAP_NET_RAW capability using the following command:
sudo setcap CAP_NET_RAW=ep <absolute path of application>
For example:
sudo setcap CAP_NET_RAW=ep /opt/nvidia/deepstream/deepstream/bin/deepstream-app
sudo setcap CAP_NET_RAW=ep /usr/bin/gst-launch-1.0

Inputs and Outputs#
Inputs
- None
Control parameters
- LOCAL_IFACE_IP environment variable or local-iface-ip property
- caps
Output
- GstBufferList having RTP packets as buffer content
- GstBuffer having an uncompressed audio or video frame
Features#
The following table summarizes the features of the plugin.
Feature | Description | Release
---|---|---
Supports header and payload separation | Separate memories can be allocated for the RTP header and payload | DS 6.0
Supports any type of RTP packet (compressed, uncompressed, audio, etc.) | No restriction on the content of the RTP payload | DS 6.0
Supports RTCP packets | In addition to RTP, RTCP packets can also be received | DS 6.0
Supports RTP payload directly in GPU memory | The RTP payload can be placed directly in GPU memory, avoiding a copy when GPU processing of the payload is required | DS 6.0
Supports de-packetization of uncompressed video | RTP packets carrying uncompressed video can be de-packetized and converted to a video frame as per the SMPTE 2110-20 specification | DS 6.3
Supports de-packetization of uncompressed audio | RTP packets carrying uncompressed audio can be de-packetized and converted to an audio frame as per the SMPTE 2110-30 specification | DS 6.3
Supports de-packetization of generic payload | RTP packets with fixed header and payload sizes can be de-packetized to form a frame; headers are removed and payloads are combined | DS 6.3
Supports SMPTE 2022-7 | Using Rivermax media APIs, redundant packets can be received over independent network paths | DS 8.0
Gst Properties#
The following table describes the Gst-nvdsudpsrc plugin’s Gst properties.
Property | Meaning | Type and Range | Example / Notes | Platforms
---|---|---|---|---
port | The port number on which to receive the RTP packets | Integer, 0 to 65535 | port=5004 | dGPU, Jetson
address | IP address of the server to receive packets from | String | address=192.168.4.60 | dGPU, Jetson
uri | URI of the server in the form udp://<ip>:<port> | String | uri=udp://192.168.4.60:5004 | dGPU, Jetson
payload-size | Size of the payload in the RTP packet | Integer, 0 to 65535 | payload-size=1500 | dGPU, Jetson
header-size | RTP header size | Integer, 0 to 65535 | header-size=12 | dGPU, Jetson
num-packets | Number of packets for which to allocate memory | Integer, 0 to 2147483647 | num-packets=10000 | dGPU, Jetson
local-iface-ip | IP address of the network interface through which to receive the data | String | local-iface-ip=192.168.2.20 | dGPU, Jetson
buffer-size | Size of the kernel receive buffer in bytes | Integer, 0 to 2147483647 | buffer-size=50000 | dGPU, Jetson
reuse | Enable reuse of the port | Boolean | reuse=1 | dGPU, Jetson
multicast-iface | The network interface on which to join the multicast group | String | multicast-iface=eth0 | dGPU, Jetson
auto-multicast | Automatically join/leave multicast groups | Boolean | auto-multicast=1 | dGPU, Jetson
loop | Set the multicast loopback parameter | Boolean | loop=1 | dGPU, Jetson
source-address | Unicast address of a sender; receive data only from that sender | String | source-address="192.168.3.4" | dGPU, Jetson
gpu-id | GPU device ID on which to allocate the buffers | Integer, -1 to 32767 | gpu-id=0 | dGPU, Jetson
payload-multiple | Output buffer is a multiple of this number of packets | Integer, 0 to 65535 | payload-multiple=4320 | dGPU, Jetson
adjust-leap-seconds | Adjust the RTP timestamp for leap seconds when calculating running time | Boolean | adjust-leap-seconds=false | dGPU, Jetson
ptp-src | IP address of the PTP source | String | ptp-src="192.168.2.20" | dGPU, Jetson
st2022-7-streams | Comma-separated list of IP:port pairs for ST 2022-7 redundant streams | String | st2022-7-streams="192.168.1.10:5004,192.168.101.10:5004" | dGPU, Jetson
use-rtp-timestamp | Parse the RTP timestamp from the RTP header and attach it as buffer PTS/DTS | Boolean | use-rtp-timestamp=false | dGPU, Jetson
Example pipelines#
Pipeline to receive and play a 24-bit, 2-channel, 48 kHz audio stream:
LOCAL_IFACE_IP=<ip address of NIC> gst-launch-1.0 nvdsudpsrc address=<unicast / multicast address> port=<port number> ! 'application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)L24, encoding-params=(string)2, channels=(int)2, payload=(int)97' ! rtpL24depay ! rawaudioparse use-sink-caps=1 ! queue ! autoaudiosink -v --gst-debug=3
Pipeline to receive and display a 10-bit YUV 4:2:2 1080p30 video stream:
gst-launch-1.0 nvdsudpsrc address=<unicast / multicast address> port=<port number> local-iface-ip=<ip addr of NIC> ! 'application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:2, depth=(string)10, width=(string)1920, height=(string)1080, colorimetry=(string)BT709, payload=(int)96' ! rtpvrawdepay ! nvvideoconvert ! nveglglessink -v --gst-debug=3
Pipeline to receive and display a 10-bit YUV 4:2:2 1080p30 video stream without an additional depayload component:
gst-launch-1.0 nvdsudpsrc address=<unicast / multicast address> port=<port number> local-iface-ip=<ip addr of NIC> caps='video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)UYVP, framerate=30/1' header-size=20 payload-size=1200 ! nvvideoconvert ! nveglglessink -v --gst-debug=3
Pipeline to receive and display a 10-bit YUV 4:2:2 1080p30 video stream without an additional depayload component and using GPUDirect:
gst-launch-1.0 nvdsudpsrc address=<unicast / multicast address> port=<port number> local-iface-ip=<ip addr of NIC> caps='video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)UYVP, framerate=30/1' header-size=20 payload-size=1200 gpu-id=0 ! nvvideoconvert ! nveglglessink -v --gst-debug=3
Pipeline to receive and play a 24-bit, 2-channel, 48 kHz audio stream without an additional depayload component:
gst-launch-1.0 nvdsudpsrc address=<unicast / multicast address> local-iface-ip=<ip address of NIC> port=<port number> caps='audio/x-raw, format=(string)S24BE, layout=(string)interleaved, rate=(int)48000, channels=(int)2' payload-size=288 header-size=12 ! autoaudiosink -v --gst-debug=3
Pipeline to receive and de-packetize a generic payload. The following pipeline receives uncompressed video as a generic payload:
gst-launch-1.0 nvdsudpsrc address=<unicast / multicast address> local-iface-ip=<ip address of NIC> port=<port number> caps='application/x-custom' header-size=20 payload-size=1200 payload-multiple=4320 ! fakesink -v --gst-debug=3
Pipeline to receive and display a 10-bit YUV 4:2:2 1080p30 video stream, using the RTP header timestamp for the buffer PTS / pipeline running time:
gst-launch-1.0 nvdsudpsrc address=<unicast / multicast address> local-iface-ip=<ip address of NIC> port=<port number> caps='video/x-raw(memory:NVMM), format=UYVP, width=1920, height=1080, framerate=30/1' gpu-id=0 header-size=20 payload-size=1200 use-rtp-timestamp=1 adjust-leap-seconds=1 ! nvvideoconvert ! 'video/x-raw(memory:NVMM), width=(int)640, height=(int)480, framerate=(fraction)30/1, format=(string)BGRx' ! queue ! nveglglessink sync=1 max-lateness=50000000 --gst-debug=3 -v
Pipeline to receive a 10-bit YUV 4:2:2 1080p30 video stream, use the RTP header timestamp for the buffer PTS / pipeline running time, and pass the same timestamp to the next node using nvdsudpsink:
gst-launch-1.0 nvdsudpsrc address=<unicast / multicast address> local-iface-ip=<ip address of NIC> port=<port number> caps='video/x-raw(memory:NVMM), format=UYVP, width=1920, height=1080, framerate=30/1' gpu-id=0 header-size=20 payload-size=1200 use-rtp-timestamp=1 adjust-leap-seconds=1 ! nvvideoconvert ! 'video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)30/1, format=(string)UYVP' ! queue ! nvdsudpsink host=<ip address> port=<port number> local-iface-ip=<ip addr of NIC> pass-rtp-timestamp=1 payload-size=1220 packets-per-line=4 sdp-file=<sdp file> sync=1 -v --gst-debug=3