Plugin Development Tutorial
Introduction
This document is a step-by-step tutorial on creating a new custom NVIGI plugin from a provided code template and then integrating it into a simple command-line utility. The same process is also described (at a much coarser, higher level) in the Plugin Development Guide, which we recommend studying as well.
As explained in the Kinds of Plugins section of the plugin development guide, there are inference and utility NVIGI plugins. This tutorial will focus on creating one that follows an inference-like pattern from a provided code template; many of the same steps can also be used for utility plugins.
NOTE: This document will assume you are developing your custom NVIGI plugin inside of the NVIGI Core PDK directory structure in order to take advantage of the build scripts and procedures already in place. At a later time you may want to integrate your own development processes or build pipelines.
Development Setup
Download the NVIGI Core Plugin Development Kit (PDK) from the latest entry in the Releases page in the GitHub repository.
NOTE: From this point on we will refer to the directory where the NVIGI Core PDK 7-zip file was extracted as `<CORE_PDK_ROOT>`.
Duplicating and Renaming the Template Plugin Source
NOTE: For the purposes of this guide, we will assume your new custom NVIGI plugin is called `nvigi.mygpt`.
Go to `<CORE_PDK_ROOT>\sources\plugins\` and duplicate the folder `nvigi.template.inference` inside it; rename the copy `nvigi.mygpt`.

Go to `<CORE_PDK_ROOT>\sources\plugins\nvigi.mygpt\` and rename the public header `nvigi_template_infer.h` to `nvigi_mygpt.h`.

Open `<CORE_PDK_ROOT>\sources\plugins\nvigi.mygpt\nvigi_mygpt.h` in a text editor and perform the following replacements:

Rename the namespace for the plugin ID:
E.g.:

```cpp
namespace template_ai
```

becomes

```cpp
namespace mygpt
```
Rename all of the input and output slot IDs. E.g.:

```cpp
constexpr const char* kTemplateAIInputPrompt = "prompt";
constexpr const char* kTemplateAIOutputResponse = "response";
```

becomes

```cpp
constexpr const char* kMyGPTInputPrompt = "prompt";
constexpr const char* kMyGPTOutputResponse = "response";
```
Rename each of the structs in the header file (declarations, constructors, and also in the `NVIGI_VALIDATE_STRUCT` macro), replacing the substring `TemplateAI` with `MyGPT`. E.g.:

```cpp
struct alignas(8) TemplateAICreationParameters
{
    TemplateAICreationParameters() = default;
    NVIGI_UID(UID({ 0x11111111, 0x2222, 0x3333, {0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb} }), kStructVersion1)
};
NVIGI_VALIDATE_STRUCT(TemplateAICreationParameters)
```

becomes

```cpp
struct alignas(8) MyGPTCreationParameters
{
    MyGPTCreationParameters() = default;
    NVIGI_UID(UID({ 0x11111111, 0x2222, 0x3333, {0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb} }), kStructVersion1)
};
NVIGI_VALIDATE_STRUCT(MyGPTCreationParameters)
```
At the end of the header file there is a using-declaration for the `InferenceInterface` class so it can be referred to as `ITemplateAI`; rename that using-declaration to `IMyGPT`. E.g.:

```cpp
using ITemplateAI = InferenceInterface;
```

becomes

```cpp
using IMyGPT = InferenceInterface;
```
Use the NVIGI utility tool to generate new UIDs for the plugin
Open a Visual Studio 2022 Developer Console in `<CORE_PDK_ROOT>` and run the following command:

```shell
bin\Debug_x64\nvigi.tool.utils.exe --plugin nvigi.plugin.mygpt
```
Copy the entire line that begins with `constexpr PluginID kId = ...` in the console output, and replace the corresponding line in the header:

```cpp
namespace nvigi { namespace plugin { namespace mygpt {
constexpr PluginID kId = {{0x54571404, 0x3d6a, 0x44a4, {0x8e, 0xf3, 0x70, 0x83, 0x97, 0x7b, 0x97, 0xf6}}, 0xa1e72b}; //{54571404-3D6A-44A4-8EF3-7083977B97F6} [nvigi.plugin.mygpt]
...
}}}
```
Use the same NVIGI utility tool to generate new UIDs for each struct
For each struct in the header file, run the following command (replacing the name of the struct accordingly):
```shell
bin\Debug_x64\nvigi.tool.utils.exe --interface MyGPTCreationParameters
```
Then replace the UID as appropriate:

```cpp
struct alignas(8) MyGPTCreationParameters
{
    MyGPTCreationParameters() { };
    NVIGI_UID(UID({0xc9d66831, 0x52e1, 0x4f02, {0x80, 0xed, 0x9f, 0x29, 0x2a, 0x7e, 0x30, 0x34}}), kStructVersion1)
    ...
};
NVIGI_VALIDATE_STRUCT(MyGPTCreationParameters)
```
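For reference, the mapping between the GUID printed in the tool's comment (e.g. `{C9D66831-52E1-4F02-80ED-9F292A7E3034}`) and the brace initializer follows the standard GUID layout: one 32-bit field, two 16-bit fields, and eight individual bytes. The small helper below is a hypothetical sketch (not part of the PDK tooling) that performs the conversion, which can be handy for sanity-checking a hand-edited UID:

```python
import re

def guid_to_uid_initializer(guid):
    """Convert a GUID string like '{C9D66831-52E1-4F02-80ED-9F292A7E3034}'
    into the brace-initializer text used inside the NVIGI_UID macro."""
    m = re.fullmatch(
        r"\{?([0-9A-Fa-f]{8})-([0-9A-Fa-f]{4})-([0-9A-Fa-f]{4})-"
        r"([0-9A-Fa-f]{4})-([0-9A-Fa-f]{12})\}?",
        guid.strip())
    if not m:
        raise ValueError("not a GUID: %r" % guid)
    d1, d2, d3, d4, d5 = m.groups()
    # The last two GUID groups become an array of eight individual bytes.
    tail = d4 + d5
    bytes_ = ", ".join("0x" + tail[i:i + 2].lower() for i in range(0, 16, 2))
    return "{0x%s, 0x%s, 0x%s, {%s}}" % (d1.lower(), d2.lower(), d3.lower(), bytes_)
```

Running it on the GUID above reproduces the initializer shown in the struct listing.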
IMPORTANT NOTE: Some of the structs have a regular version (e.g. `MyGPTCreationParameters`) and an extended version (e.g. `MyGPTCreationParametersEx`); make sure to repeat the UID generation process for each of them separately.

Rename the folder `<CORE_PDK_ROOT>\sources\plugins\nvigi.mygpt\backend` to match the target backend you are using. For example, the GPT plugin interface in the NVIGI SDK release pack has multiple backends:
- One based on REST APIs for AI inference in a cloud instance.
- One based on the GGML tensor library for local AI inference. This GGML backend further supports multiple APIs to implement the tensors: CUDA, Direct3D 12, Vulkan, a CPU-based non-accelerated path, etc.
For more details, please inspect the public git source available at the GitHub Plugins source repo.
Ideally, all backends should implement the same interface declared in `nvigi_mygpt.h`. For the purposes of this tutorial, we will leave the `backend` folder intact as the only backend of our `nvigi.mygpt` plugin.
The next step is to modify the implementation file to match the changes made to the interface in the header file.
Rename the file `<CORE_PDK_ROOT>\sources\plugins\nvigi.mygpt\backend\templateEntry.cpp` to something else (e.g. `<CORE_PDK_ROOT>\sources\plugins\nvigi.mygpt\backend\myGPTEntry.cpp`), then open it in a text editor.

Rename the class `TemplateAIPlugin` to `MyGPTPlugin`, including all references to it in the file. Best practice is to use the text editor’s search-and-replace functionality with the “match case” and “whole word” options enabled. This should replace it within the C++ class itself as well as in the `NVIGI_MODERN_PLUGIN` plugin export macro at the end of the file.

Repeat the previous substitution step for the plugin context struct, renaming it to `MyGPTPluginContext` with the same options (both “match case” and “whole word” on). Again, this should substitute it in the struct declaration and in the `NVIGI_MODERN_PLUGIN` plugin export macro at the end of the file.

Next, apply to the implementation file the same renamings that were done in the header file, namely:

- Nested namespace `template_ai` to `mygpt`.
- Using-declaration `ITemplateAI` to `IMyGPT`.
- String constants `kTemplateAIInputPrompt` and `kTemplateAIOutputResponse` to `kMyGPTInputPrompt` and `kMyGPTOutputResponse`, respectively.
- Structs `TemplateAICreationParameters` and `TemplateAICreationParametersEx` to `MyGPTCreationParameters` and `MyGPTCreationParametersEx`, respectively.
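The whole-word, case-sensitive substitutions above can also be scripted. The sketch below is a hypothetical helper (not part of the PDK; always diff the result before committing) that applies the same rename table using regex word boundaries, the scripted equivalent of the editor’s “match case” plus “whole word” options:

```python
import re

# Rename table taken from the steps above (old identifier -> new identifier).
RENAMES = {
    "template_ai": "mygpt",
    "ITemplateAI": "IMyGPT",
    "kTemplateAIInputPrompt": "kMyGPTInputPrompt",
    "kTemplateAIOutputResponse": "kMyGPTOutputResponse",
    "TemplateAICreationParameters": "MyGPTCreationParameters",
    "TemplateAICreationParametersEx": "MyGPTCreationParametersEx",
}

def apply_renames(text, renames):
    """Case-sensitive, whole-word replacement of every key with its value."""
    # Process longest keys first so a shorter name never clobbers a longer
    # one (the \b word boundaries already guard most cases).
    for old in sorted(renames, key=len, reverse=True):
        text = re.sub(r"\b%s\b" % re.escape(old), renames[old], text)
    return text
```

Reading a source file, passing it through `apply_renames`, and writing it back would mechanize the whole section; review the diff afterwards, since a blanket rename can touch comments and strings too.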
Modifying the Unit Tests
NVIGI Core ships with a set of unit tests for each plugin, implemented using the Catch2 testing framework. These unit tests are executed via a command-line utility (`nvigi.test.exe`) whose source code is included in the PDK pack. To run the unit tests, simply open a Visual Studio 2022 Developer Console in the `<CORE_PDK_ROOT>` directory and execute the following command:
```shell
bin\Debug_x64\nvigi.test.exe
```
After the tests run, a timestamped log file with the output is created inside `<CORE_PDK_ROOT>\bin\Debug_x64\`.
This section explains how to modify the unit tests for the new custom plugin `nvigi.mygpt` and how to add them to the `nvigi.test.exe` utility.
Open the file `<CORE_PDK_ROOT>\source\plugins\nvigi.mygpt\backend\tests.h` in a text editor and update the first include directive to point to the renamed header file for the new plugin:

```cpp
#include "source/plugins/nvigi.template.inference/nvigi_template_infer.h"
```

becomes

```cpp
#include "source/plugins/nvigi.mygpt/nvigi_mygpt.h"
```
Rename the nested namespace:

```cpp
namespace nvigi { namespace template_ai { ... }}
```

becomes

```cpp
namespace nvigi { namespace mygpt { ... }}
```
Use your text editor’s search-and-replace functionality (with the “match case” and “whole word” options enabled) to replace the many uses of `plugin::template_ai::kId` with `plugin::mygpt::kId`. These are always used in calls to load or unload an interface to the new plugin.

Note the declaration of the old template plugin’s interface:

```cpp
nvigi::ITemplateAI* itemplate{};
```

Using search-and-replace, rename all occurrences of `ITemplateAI` and `itemplate` to `IMyGPT` and `iMyGPT`, respectively.

Repeat the previous step for the old template plugin’s creation parameters:

```cpp
TemplateAICreationParameters templateParams{};
```

Rename `TemplateAICreationParameters` and `templateParams` to `MyGPTCreationParameters` and `myGPTParams`, respectively.

Likewise, rename the string constants `kTemplateAIInputPrompt` and `kTemplateAIOutputResponse` to `kMyGPTInputPrompt` and `kMyGPTOutputResponse`, respectively (as we did in the new plugin’s header and implementation).

Finally, find all unit test declarations in `tests.h` and replace the test names to reflect the new plugin’s name, also removing the `[template]` Catch2 tag from the tag string. For example:

```cpp
TEST_CASE("template_ai_basic", "[template],[inference],[cpu]")
{
    ...
}
```

becomes

```cpp
TEST_CASE("mygpt_basic", "[inference],[cpu]")
{
    ...
}
```
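The test renaming can also be done mechanically. Below is a hypothetical line-by-line sketch (not part of the PDK tooling) that swaps the test-name prefix and drops the `[template]` tag; as always with scripted edits, review the diff:

```python
import re

def rename_test_case(line):
    """Rewrite a TEST_CASE declaration for the renamed plugin: swap the
    'template_ai_' name prefix for 'mygpt_' and drop the [template] tag."""
    line = line.replace('template_ai_', 'mygpt_')
    # Remove the tag and, when present, the comma that followed it.
    return re.sub(r'\[template\],?', '', line)
```

Applying `rename_test_case` to each line of `tests.h` covers exactly the two edits described above and leaves every other line untouched.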
Duplicating and Renaming the Model Files
NVIGI inference plugins are always paired with one or more sets of AI model files. In the case of the inference template plugin (the starting point for the new plugin), the included model files are simply placeholders: they do not contain real data and are not actually loaded by the plugin. Since, for now, the new plugin replicates the functionality of the inference template plugin, we must replicate the placeholder model files as well.
In a Windows Explorer window open to `<CORE_PDK_ROOT>\data\nvigi.models\`, duplicate the `nvigi.plugin.template.inference` directory and rename it to `nvigi.plugin.mygpt`.
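If you prefer scripting the copy, the same duplication can be done with `shutil.copytree`; a minimal sketch, assuming only the directory layout described above:

```python
import shutil
from pathlib import Path

def duplicate_model_dir(core_pdk_root):
    """Copy the placeholder model directory tree under the new plugin's name
    and return the path of the new directory."""
    models = Path(core_pdk_root) / "data" / "nvigi.models"
    src = models / "nvigi.plugin.template.inference"
    dst = models / "nvigi.plugin.mygpt"
    # copytree copies the whole tree, including the placeholder model files.
    shutil.copytree(src, dst)
    return dst
```

Note that `shutil.copytree` fails if the destination already exists, which conveniently guards against running the duplication twice.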
Modifying the Build Scripts
Modify `<CORE_PDK_ROOT>\tools\packaging\package.py` to add your project to the build system; we recommend using the corresponding sections for the other plugins as an example. Here is a snapshot from the file showing the various components.

First, add the details of `plugin.mygpt` to the all-components list:

```python
all_components = {
    ...
    'plugin.template.inference' : {
        'platforms': all_plat,
        'sharedlib': ['nvigi.plugin.template.inference'],
        'includes': ['source/plugins/nvigi.template.inference/nvigi_template_infer.h'],
        'sources': ['plugins/nvigi.template.inference', 'shared'],
        'premake': 'source/plugins/nvigi.template.inference/premake.lua',
        'model': 'nvigi.plugin.template.inference',
        'public_models': ['{01234567-0123-0123-0123-0123456789AB}']
    },
    ...
}
```
becomes
```python
all_components = {
    ...
    'plugin.template.inference' : {
        'platforms': all_plat,
        'sharedlib': ['nvigi.plugin.template.inference'],
        'includes': ['source/plugins/nvigi.template.inference/nvigi_template_infer.h'],
        'sources': ['plugins/nvigi.template.inference', 'shared'],
        'premake': 'source/plugins/nvigi.template.inference/premake.lua',
        'model': 'nvigi.plugin.template.inference',
        'public_models': ['{01234567-0123-0123-0123-0123456789AB}']
    },
    'plugin.mygpt' : {
        'platforms': all_plat,
        'sharedlib': ['nvigi.plugin.mygpt'],
        'includes': ['source/plugins/nvigi.mygpt/nvigi_mygpt.h'],
        'sources': ['plugins/nvigi.mygpt', 'shared'],
        'premake': 'source/plugins/nvigi.mygpt/premake.lua',
        'model': 'nvigi.plugin.mygpt',
        'public_models': ['{01234567-0123-0123-0123-0123456789AB}']
    },
    ...
}
```
Next, add `plugin.mygpt` to the runtime component list (the components to be included in the built package):

```python
runtime_components = [
    'core.framework',
    'plugin.hwi.common',
    'plugin.hwi.cuda',
    'plugin.hwi.d3d12',
    'plugin.template.generic',
    'plugin.template.inference',
    'test',
    'tool.utils'
]
```
becomes
```python
runtime_components = [
    'core.framework',
    'plugin.hwi.common',
    'plugin.hwi.cuda',
    'plugin.hwi.d3d12',
    'plugin.template.generic',
    'plugin.template.inference',
    'plugin.mygpt',
    'test',
    'tool.utils'
]
```
Lastly, add `plugin.mygpt` to the runtime source component list (the components whose source is included in the build):

```python
runtime_source_components = [
    'core.framework',
    'plugin.template.generic',
    'plugin.template.inference'
]
```
becomes
```python
runtime_source_components = [
    'core.framework',
    'plugin.template.generic',
    'plugin.template.inference',
    'plugin.mygpt'
]
```
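A common slip when editing `package.py` is referencing a component in one of the runtime lists without (or before) adding its entry to `all_components`. A quick, hypothetical sanity check you could paste into a scratch script alongside the three lists:

```python
def check_components(all_components, runtime_components, runtime_source_components):
    """Return the names referenced by the runtime lists that are missing
    from all_components (an empty list means the lists are consistent)."""
    known = set(all_components)
    referenced = set(runtime_components) | set(runtime_source_components)
    return sorted(referenced - known)
```

If the returned list is non-empty, the package build is referencing components it cannot resolve.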
Next, we need to modify the `premake.lua` file in your plugin’s directory (`<CORE_PDK_ROOT>\source\plugins\nvigi.mygpt\premake.lua`) to include everything required to build your plugin:

```lua
group "plugins/template"

project "nvigi.plugin.template.inference"
    kind "SharedLib"
    pluginBasicSetup("template.inference")

    files {
        "./**.h",
        "./**.cpp",
    }

    vpaths { ["impl"] = {"./**.h", "./**.cpp" }}

    includedirs {
        ROOT .. "source/plugins/nvigi.template.inference/backend",
        ROOT .. "source/plugins/nvigi.template.inference",
    }
    ...

group ""
```
becomes
```lua
group "plugins/template"

project "nvigi.plugin.mygpt"
    kind "SharedLib"
    pluginBasicSetup("mygpt")

    files {
        "./**.h",
        "./**.cpp",
    }

    vpaths { ["impl"] = {"./**.h", "./**.cpp" }}

    includedirs {
        ROOT .. "source/plugins/nvigi.mygpt/backend",
        ROOT .. "source/plugins/nvigi.mygpt",
    }
    ...

group ""
```
Lastly, modify the global `<CORE_PDK_ROOT>\premake.lua` file to include the new plugin’s `premake.lua` file. Simply append the following line:

```lua
include("source/plugins/nvigi.mygpt/premake.lua")
```
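Appending the include line can be scripted as well. The hypothetical helper below only appends when the line is not already present, so it is safe to run more than once:

```python
from pathlib import Path

INCLUDE_LINE = 'include("source/plugins/nvigi.mygpt/premake.lua")'

def append_include(premake_path, line=INCLUDE_LINE):
    """Append `line` to the global premake.lua unless it is already there.
    Returns True if the file was modified."""
    path = Path(premake_path)
    text = path.read_text()
    if line in text:
        return False
    if text and not text.endswith("\n"):
        text += "\n"
    path.write_text(text + line + "\n")
    return True
```

The idempotence check matters because re-running a setup script that blindly appends would accumulate duplicate `include` lines.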
Building the Core PDK Source with the New Plugin
Open a Visual Studio 2022 Developer Console in `<CORE_PDK_ROOT>` and run `setup.bat vs2022`. This will create a folder called `_project` that contains all of the Visual Studio 2022 project and solution files.

NOTE: The first time `setup.bat` runs, it downloads some build dependencies and installs them in a central repository. Subsequent executions of `setup.bat` skip that step and are much quicker.

NOTE: If `setup.bat` fails with an error from the `packman` package downloader, re-run `setup.bat`, as there are rare but possible issues with link creation on the initial run.

To build the project, there are two options:
- Open `<CORE_PDK_ROOT>\_project\vs2022\nvigicoresdk.sln` in Visual Studio 2022 and build the entire solution, or
- Run `build.bat -Debug` in the developer console.
Either of the two options above will create a temporary folder called `_artifacts`, and it will also update the executable files and shared libraries inside `<CORE_PDK_ROOT>\bin\Debug_x64`.
Considerations when Developing Custom Plugins
- Modify the source code (adding new headers or cpp files as needed) to perform the actions you need for your plugin:
  - If adding new files, make sure to re-run `setup.bat`.
  - If adding a new directory with source code, make sure to update your plugin’s `premake.lua` file as well, then re-run `setup.bat`.
- Make sure NOT to include internal headers in your public `nvigi_$name.h` header.
- When adding new data structures/interfaces which are shared either publicly or internally, you must use `nvigi.tool.utils` as described in GUID creation.
- When modifying existing shared data or interfaces, always follow the “do not break C ABI compatibility” guidelines.
- Wrap all your externally exposed functions with `NVIGI_CATCH_EXCEPTION`.
- Choose your GPU backend(s) by uncommenting the appropriate sections in `premake.lua`:
  - `PLUGIN_USES_CUDA` for NVIDIA GPU support
  - `PLUGIN_USES_D3D12` for DirectX 12 support (Windows)
  - `PLUGIN_USES_VULKAN` for Vulkan support (cross-platform)
- You can enable multiple backends simultaneously.