Creating an AI Application
===============================

The previous section explained how to run the pre-built DeepStream test1 application. This section explains how to create the DeepStream test1 application from scratch using Composer.

.. image:: /content/Application_workflow.png
   :align: center
   :alt: Application workflow

Launch Composer
-------------------------------------------------

* Launch the Composer tool using the following command::

    composer

.. image:: /content/GraphComposer_Default_Launch.PNG
   :align: center
   :alt: Composer Default Launch

Drag and Drop Components
---------------------------

A DeepStream pipeline primarily depends on GStreamer plugins, which are represented by the ``INvDsElement`` base type in Graph Composer. Use the GroupBy button to list components by their base type. All the components under ``INvDsElement`` represent GStreamer plugins.

The DeepStream test1 application detects people in a single batched video source and renders the output on the display with bounding boxes. It requires:

* A source plugin component for the input
* A mux plugin component, since the inference plugin component requires a mux before it
* A video inference plugin component with the PeopleNet model
* An OSD plugin component to draw bounding boxes
* A render plugin component to display the output

| All these components are available in extensions published to the NVIDIA Cloud repository and can be browsed in the component list window of Composer. Users can drag and drop these components from the component list window onto the canvas. Refer to :doc:`DS_Zero_Coding_DS_Components` for all types of components supported by DeepStream.

.. image:: /content/GraphComposer_Add_Components.gif
   :align: center
   :alt: Add Components

Configure Components
------------------------------

A component's parameters can be configured once it has been added to the canvas, by selecting the component on the canvas. If a property is not set, its default value is used; therefore not all properties need to be set.

Configure the components by setting their properties to the expected values:

* Set the input file path in the source component
* Set the batch size and GPU ID in the mux component
* Set the inference model to use in the video inference component

.. note:: Plugin component (``INvDsElement``) properties match 1:1 with GStreamer plugin properties.

|

.. image:: /content/GraphComposer_Configure_Components.gif
   :align: center
   :alt: Configure Components

.. note:: ``NvDsInferVideo`` and ``NvDsInferAudio`` components have a special connector, infer-model-config, which allows users to connect ``INvDsInferModelConfigComponent`` components. Users can package the inference config file along with other model-related files in these components. Using this connector is optional; users can also directly set the inference config file path in the config-file-path property.

Connect Components
-----------------------

The input/output ports of the components must be connected to establish data transfer. Input/output ports of ``INvDsElement`` components match 1:1 with GStreamer plugin pads. Composer does not validate input/output data types in this release; it only checks whether the input/output handles are compatible. Data type validation is planned for a future release. Users can match port types using their names, e.g. video-out, video-in, audio-out, audio-in, etc. Only out and in ports can be connected to each other; in-to-in and out-to-out connections are not allowed. A sketch of how components and connections appear in a saved graph file is shown after the image below.

.. image:: /content/GraphComposer_Connect_Components.gif
   :align: center
   :alt: Connect Components
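When the graph is saved, Composer serializes the components, their parameters, and the connections between ports into a ``.yaml`` graph file, similar to the reference graphs shipped with DeepStream. The fragment below is a minimal, hand-written sketch of what a source entity and one connection might look like in such a file; the entity names, component type strings, port names, parameter keys, and the input file path are illustrative assumptions, so inspect a graph actually saved by Composer (or the reference graphs) for the exact schema::

    ---
    # Illustrative sketch only: entity/component names, type strings, port
    # names, and parameter keys are placeholders, not the definitive schema.
    name: single_source_input
    components:
    - name: single_source_input
      type: nvidia::deepstream::NvDsSingleSrcInput
      parameters:
        uri: file:///path/to/input.h264   # placeholder input file path
    - name: video_output                  # output port exposed on the canvas
      type: nvidia::deepstream::NvDsDynamicOutput
    ---
    # A connection entry ties an out port of one entity to an in port of another.
    name: source_to_mux_connection
    components:
    - name: source_to_mux
      type: nvidia::deepstream::NvDsConnection
      parameters:
        source: single_source_input/video_output
        target: streammux/video_input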
Add GStreamer Scheduler
-------------------------

DeepStream components use DeepStream and GStreamer plugins, which require the GStreamer runtime to execute the pipeline. The GStreamer scheduler component handles the background GStreamer functionality that creates and executes the pipeline from the graph.

.. image:: /content/GraphComposer_Add_Scheduler.gif
   :align: center
   :alt: Add Scheduler

Count Number Of People
--------------------------

How can more functionality, such as measuring FPS or counting objects, be added to this inference pipeline? Let's add people counting to this pipeline, since we used a people detection network.

.. image:: /content/GraphComposer_Count_People.gif
   :align: center
   :alt: Count People

The people counting component is added as a probe handler on the output video port. Probe handler components register callbacks which are called whenever data is available on the connected port. This allows users to process the data from the ports; the handlers can either add new data to the GstBuffer or modify the existing GstBuffer.

Save Graph
------------------

The graph can be saved by right-clicking on the canvas. Follow the instructions in :doc:`DS_GraphComposer_Run_Sample_App` to run the application using the saved `.yaml` graph file. Refer to the Container Builder config files in ``/opt/nvidia/deepstream/deepstream-6.0/reference_graphs/deepstream-test1`` as a reference for creating a new config for the saved graph.

.. image:: /content/GraphComposer_Save_Option.PNG
   :align: center
   :alt: Save Graph

Use Multiple Inputs
--------------------

Users can add multiple ``NvDsSingleSrcInput`` components and connect them to the ``NvDsStreamMux`` component. This does not allow runtime add/remove of inputs.

.. image:: /content/GraphComposer_Multi_Input_Mux.gif
   :align: center
   :alt: Multi Input Mux

Runtime Add/Remove Inputs
-------------------------------------

The ``NvDsMultiSrcInput`` component is based on a GStreamer bin. It takes a list of inputs as a property and allows runtime changes to that list through a runtime manipulator component, as well as smart record start/stop. Runtime add/remove and smart record start/stop signals can be triggered by other components. You can implement custom components to trigger these actions, for example on a cloud message, keyboard event, etc. A rough sketch of how such a multi-source input might look in a saved graph file follows the image below.

.. image:: /content/GraphComposer_Multi_Input_Src.gif
   :align: center
   :alt: Multi Input Src
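For comparison with the multiple single-source setup above, the fragment below sketches how an ``NvDsMultiSrcInput`` entity with its list of inputs might appear in a saved graph file. The parameter key (here ``uri-list``) and the formatting of the list are assumptions made for illustration; use the property names shown in Composer's property panel for this component::

    ---
    # Illustrative sketch only: "uri-list" and the entity/component names are
    # assumed placeholders; check the component's properties in Composer.
    name: multi_source_input
    components:
    - name: multi_source_input
      type: nvidia::deepstream::NvDsMultiSrcInput
      parameters:
        uri-list:                        # input list; can be changed at runtime
        - file:///path/to/input_0.h264   # placeholder
        - file:///path/to/input_1.h264   # placeholder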