Creating an Application
In this section, we’ll address:
how to define an Application class
how to configure an Application
At this time, the Holoscan SDK only supports a single fragment per application. This means that the application can have only one workflow and work on a single machine. We plan to support multiple fragments per application in a future release.
The following code snippet shows an example Application code skeleton:
We define the App class that inherits from the Application base class.
We create an instance of the App class in main() using the make_application() function.
The run() method starts the application, which will execute its compose() method where the custom workflow will be defined.
#include <holoscan/holoscan.hpp>

class App : public holoscan::Application {
 public:
  void compose() override {
    // Define Operators and workflow
    // ...
  }
};

int main() {
  auto app = holoscan::make_application<App>();
  app->run();

  return 0;
}
We define the App class that inherits from the Application base class.
We create an instance of the App class in __main__.
The run() method starts the application, which will execute its compose() method where the custom workflow will be defined.
from holoscan.core import Application

class App(Application):
    def compose(self):
        # Define Operators and workflow
        # ...
        pass

if __name__ == "__main__":
    app = App()
    app.run()
An application can be configured at different levels:
providing the GXF extensions that need to be loaded (when using GXF operators)
configuring parameters for your application, including the operators in the workflow
The sections below describe how to configure each of them, starting with the SDK's native support for YAML-based configuration, provided for convenience.
YAML Configuration support
Holoscan supports loading arbitrary parameters from a YAML configuration file at runtime, making it convenient to configure each item listed above, or other custom parameters you wish to add on top of the existing API. For C++ applications, it also provides the ability to change the behavior of your application without needing to recompile it.
Usage of the YAML utility is optional. Configurations can be hardcoded in your program, or done using any parser of your choosing.
Here is an example YAML configuration:
string_param: "test"
float_param: 0.50
bool_param: true
dict_param:
  key_1: value_1
  key_2: value_2
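As noted above, use of the built-in YAML utility is optional. For instance, in Python you could read the same file with any YAML parser of your choosing. Here is a minimal sketch, assuming PyYAML is installed and the configuration above is saved as app_config.yaml:

import yaml  # PyYAML, assumed to be available in your environment

# Load the configuration shown above and use the values however you like
with open("app_config.yaml") as f:
    params = yaml.safe_load(f)

print(params["string_param"])         # "test"
print(params["float_param"])          # 0.5
print(params["dict_param"]["key_1"])  # "value_1"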
Ingesting these parameters with the SDK's built-in utility can be done using the two methods below:
The config() method takes the path to the YAML configuration file. If the input path is relative, it will be relative to the current working directory.
The from_config() method returns an ArgList object for a given key in the YAML file. It holds a list of Arg objects, each of which holds a name (key) and a value.
If the ArgList object has only one Arg (when the key points to a scalar item), it can be converted to the desired type using the as() method by passing the type as an argument.
The key can be a dot-separated string to access nested fields.
// Pass configuration file
auto app = holoscan::make_application<App>();
app->config("path/to/app_config.yaml");
// Scalars
auto string_param = app->from_config("string_param").as<std::string>();
auto float_param = app->from_config("float_param").as<float>();
auto bool_param = app->from_config("bool_param").as<bool>();
// Dict
auto dict_param = app->from_config("dict_param");
auto dict_nested_param = app->from_config("dict_param.key_1").as<std::string>();
// Print
std::cout << "string_param: " << string_param << std::endl;
std::cout << "float_param: " << float_param << std::endl;
std::cout << "bool_param: " << bool_param << std::endl;
std::cout << "dict_param:\n" << dict_param.description() << std::endl;
std::cout << "dict_param['key1']: " << dict_nested_param << std::endl;
// // Output
// string_param: test
// float_param: 0.5
// bool_param: 1
// dict_param:
// name: arglist
// args:
// - name: key_1
// type: YAML::Node
// value: value_1
// - name: key_2
// type: YAML::Node
// value: value_2
// dict_param['key1']: value_1
The config() method takes the path to the YAML configuration file. If the input path is relative, it will be relative to the current working directory.
The kwargs() method returns a regular Python dict for a given key in the YAML file.
Advanced: this method wraps the from_config() method, similar to the C++ equivalent, which returns an ArgList object if the key points to a map item, or an Arg object if the key points to a scalar item. An Arg object can be cast to the desired type (e.g., str(app.from_config("string_param"))).
# Pass configuration file
app = App()
app.config("path/to/app_config.yaml")
# Scalars
string_param = app.kwargs("string_param")["string_param"]
float_param = app.kwargs("float_param")["float_param"]
bool_param = app.kwargs("bool_param")["bool_param"]
# Dict
dict_param = app.kwargs("dict_param")
dict_nested_param = dict_param["key_1"]
# Print
print(f"string_param:{string_param}")
print(f"float_param:{float_param}")
print(f"bool_param:{bool_param}")
print(f"dict_param:{dict_param}")
print(f"dict_param['key_1']:{dict_nested_param}")
# # Output:
# string_param: test
# float_param: 0.5
# bool_param: True
# dict_param: {'key_1': 'value_1', 'key_2': 'value_2'}
# dict_param['key_1']: 'value_1'
from_config() cannot be used as input to the built-in operators at this time; it is therefore recommended to use kwargs() in Python.
With both from_config and kwargs, the returned ArgList/dictionary will include both the key and its associated item if that item value is a scalar. If the item is a map/dictionary itself, the input key is dropped, and the output will only hold the key/values from that item.
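To make that concrete, here is what you would expect with the YAML example above (illustrative values shown as comments, assuming app has been configured with that file):

# Scalar item: the returned dict keeps the key itself
scalar = app.kwargs("float_param")   # {"float_param": 0.5}

# Map item: the input key is dropped, only its own key/values are returned
mapping = app.kwargs("dict_param")   # {"key_1": "value_1", "key_2": "value_2"}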
Loading GXF extensions
If you use operators that depend on GXF extensions for their implementations (known as GXF operators), the shared libraries (.so) of these extensions need to be dynamically loaded as plugins at runtime.
The SDK already automatically handles loading the required extensions for the built-in operators in both C++ and Python, as well as common extensions (listed here). To load additional extensions for your own operators, you can use one of the following approaches:
extensions:
  - libgxf_myextension1.so
  - /path/to/libgxf_myextension2.so
auto app = holoscan::make_application<App>();
auto exts = {"libgxf_myextension1.so", "/path/to/libgxf_myextension2.so"};
for (auto& ext : exts) {
  app->executor().extension_manager()->load_extension(ext);
}
from holoscan.gxf import load_extensions
from holoscan.core import Application
app = Application()
context = app.executor.context_uint64
exts = ["libgxf_myextension1.so", "/path/to/libgxf_myextension2.so"]
load_extensions(context, exts)
To be discoverable, paths to these shared libraries need to either be absolute, relative to your working directory, installed in the lib/gxf_extensions folder of the holoscan package, or listed under the HOLOSCAN_LIB_PATH or LD_LIBRARY_PATH environment variables.
Configuring operators
Operators are instantiated in the compose() method of your application. They have three types of fields which can be configured: parameters, conditions, and resources.
Configuring operator parameters
Operators can have parameters defined in their setup method to better control their behavior (see details when creating your own operators). The snippet below would be the implementation of this method for a minimal operator named MyOp that takes a string and a boolean as parameters; we'll ignore any extra details for the sake of this example:
void setup(OperatorSpec& spec) override {
  spec.param(string_param_, "string_param");
  spec.param(bool_param_, "bool_param");
}
def setup(self, spec: OperatorSpec):
    spec.param("string_param")
    spec.param("bool_param")
    # Optional in Python. Could instead define the attributes (e.g. self.string_param) in `def __init__`
Given this YAML configuration:
myop_param:
  string_param: "test"
  bool_param: true

bool_param: false  # we'll use this later
We can configure an instance of the MyOp operator in the application's compose method like this:
void compose() override {
  // Using YAML
  auto my_op1 = make_operator<MyOp>("my_op1", from_config("myop_param"));

  // Same as above
  auto my_op2 = make_operator<MyOp>("my_op2",
    Arg("string_param", std::string("test")),  // can use Arg(key, value)...
    Arg("bool_param") = true                   // ... or Arg(key) = value
  );
}
def compose(self):
    # Using YAML
    my_op1 = MyOp(self, name="my_op1", **self.kwargs("myop_param"))

    # Same as above
    my_op2 = MyOp(self,
        name="my_op2",
        string_param="test",
        bool_param=True,
    )
If multiple ArgList objects are provided with duplicate keys, the latest one overrides the previous ones:
void compose() override {
  // Using YAML
  auto my_op1 = make_operator<MyOp>("my_op1",
    from_config("myop_param"),
    from_config("bool_param")
  );

  // Same as above
  auto my_op2 = make_operator<MyOp>("my_op2",
    Arg("string_param", "test"),
    Arg("bool_param") = true,
    Arg("bool_param") = false
  );
  // -> my_op `bool_param_` will be set to `false`
}
def compose(self):
    # Using YAML
    my_op1 = MyOp(self,
        self.from_config("myop_param"),
        self.from_config("bool_param"),
        name="my_op1",
    )
    # Note: We're using from_config above since we can't merge automatically with kwargs
    # as this would create duplicated keys. However, we recommend using kwargs in Python
    # to avoid limitations with wrapped operators, so the code below is preferred.

    # Same as above
    params = self.kwargs("myop_param")
    params.update(self.kwargs("bool_param"))
    my_op2 = MyOp(self, name="my_op2", **params)
    # -> my_op `bool_param` will be set to `False`
Configuring operator conditions
By default, operators will run continuously. To change that behavior, some condition classes (C++/Python) can be passed to the constructor of an operator to define when it (its compute() method) should execute. This includes:
A CountCondition (C++/Python) can be used to only execute the operator a specific number of times.

void compose() override {
  // Will only run 10 times
  auto op = make_operator<MyOp>("my_op", make_condition<CountCondition>(10));
}

def compose(self):
    # Will only run 10 times
    my_op = MyOp(self, CountCondition(self, 10), name="my_op")
A BooleanCondition (C++/Python) can be used to configure when to disable or enable an operator.

void compose() override {
  auto enable_op_condition = make_condition<BooleanCondition>("my_bool_condition");
  auto op = make_operator<MyOp>("my_op", enable_op_condition);
}

def compose(self):
    enable_op_condition = BooleanCondition(self, name="my_bool_condition")
    my_op = MyOp(self, enable_op_condition, name="my_op")
The condition object has two APIs - enable_tick() and disable_tick() - which control whether the operator should execute. They can be called outside of the operator, or within the operator's compute() method, based on any arbitrary condition. In the latter case, the name provided to the constructor ("my_bool_condition" here) must match the name used to retrieve the condition in the operator's compute() method. For example:

void compute(InputContext&, OutputContext& op_output, ExecutionContext&) override {
  // ...
  if (<condition expression>) {  // e.g. if (index_ >= 10)
    auto my_bool_condition = condition<BooleanCondition>("my_bool_condition");
    if (my_bool_condition) {  // if condition exists (not true or false)
      my_bool_condition->disable_tick();  // this will stop the operator
    }
  }
  // ...
}

def compute(self, op_input, op_output, context):
    # ...
    if <condition expression>:  # e.g., self.index >= 10
        my_bool_condition = self.conditions.get("my_bool_condition")
        if my_bool_condition:  # if condition exists (not true or false)
            my_bool_condition.disable_tick()  # this will stop the operator
    # ...
Configuring operator resources
This section still needs to be written.
One-operator Workflow
The simplest form of a workflow would be a single operator.
Fig. 11 A one-operator workflow
The graph above shows an Operator (C++/Python) (named MyOp) that has neither input nor output ports.
Such an operator may accept input data from the outside (e.g., from a file) and produce output data (e.g., to a file), so that it acts as both the source and the sink operator.
Arguments to the operator (e.g., input/output file paths) can be passed as parameters, as described in the section above and sketched after the code below.
We can add an operator to the workflow by calling the add_operator() (C++/Python) method in the compose() method.
The following code shows how to define a one-operator workflow in the compose() method of the App class (assuming that the operator class MyOp is declared/defined in the same file).
class App : public holoscan::Application {
 public:
  void compose() override {
    // Define Operators
    auto my_op = make_operator<MyOp>("my_op");

    // Define the workflow
    add_operator(my_op);
  }
};
class App(Application):
    def compose(self):
        # Define Operators
        my_op = MyOp(self, name="my_op")

        # Define the workflow
        self.add_operator(my_op)
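For example, if MyOp declared input/output file path parameters in its setup() method (the parameter names input_path and output_path below are hypothetical, for illustration only), they could be passed when the operator is created, following the parameter configuration approach from the previous section:

class App(Application):
    def compose(self):
        # Hypothetical parameters; replace with the ones MyOp actually declares in setup()
        my_op = MyOp(self,
            name="my_op",
            input_path="input/data.raw",
            output_path="output/result.raw",
        )
        self.add_operator(my_op)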
Linear Workflow
Here is an example workflow where the operators are connected linearly:
Fig. 12 A linear workflow
In this example, SourceOp produces a message and passes it to ProcessOp. ProcessOp produces another message and passes it to SinkOp.
We can connect two operators by calling the add_flow() method (C++/Python) in the compose() method.
The add_flow() method (C++/Python) takes the source operator, the destination operator, and optional port name pairs.
A port name pair is used to connect an output port of the source operator to an input port of the destination operator.
The first element of the pair is the output port name of the upstream operator and the second element is the input port name of the downstream operator.
An empty port name ("") can be used if the operator has only one input/output port.
If there is only one output port in the upstream operator and only one input port in the downstream operator, the port pairs can be omitted.
The following code shows how to define a linear workflow in the compose() method of the App class (assuming that the operator classes SourceOp, ProcessOp, and SinkOp are declared/defined in the same file).
class App : public holoscan::Application {
 public:
  void compose() override {
    // Define Operators
    auto source = make_operator<SourceOp>("source");
    auto process = make_operator<ProcessOp>("process");
    auto sink = make_operator<SinkOp>("sink");

    // Define the workflow
    add_flow(source, process);  // same as `add_flow(source, process, {{"output", "input"}});`
    add_flow(process, sink);    // same as `add_flow(process, sink, {{"", ""}});`
  }
};
class App(Application):
    def compose(self):
        # Define Operators
        source = SourceOp(self, name="source")
        process = ProcessOp(self, name="process")
        sink = SinkOp(self, name="sink")

        # Define the workflow
        self.add_flow(source, process)  # same as `self.add_flow(source, process, {("output", "input")})`
        self.add_flow(process, sink)    # same as `self.add_flow(process, sink, {("", "")})`
Complex Workflow (Multiple Inputs and Outputs)
You can design a complex workflow like the one below, where some operators have multiple input and/or output ports:
Fig. 13 A complex workflow (multiple inputs and outputs)
class App : public holoscan::Application {
 public:
  void compose() override {
    // Define Operators
    auto reader1 = make_operator<Reader1>("reader1");
    auto reader2 = make_operator<Reader2>("reader2");
    auto processor1 = make_operator<Processor1>("processor1");
    auto processor2 = make_operator<Processor2>("processor2");
    auto processor3 = make_operator<Processor3>("processor3");
    auto writer = make_operator<Writer>("writer");
    auto notifier = make_operator<Notifier>("notifier");

    // Define the workflow
    add_flow(reader1, processor1, {{"image", "image1"}, {"image", "image2"}, {"metadata", "metadata"}});
    add_flow(reader2, processor2, {{"roi", "roi"}});
    add_flow(processor1, processor2, {{"image", "image"}});
    add_flow(processor1, writer, {{"image", "image"}});
    add_flow(processor2, notifier);
    add_flow(processor2, processor3);
    add_flow(processor3, writer, {{"seg_image", "seg_image"}});
  }
};
class App(Application):
    def compose(self):
        # Define Operators
        reader1 = Reader1Op(self, name="reader1")
        reader2 = Reader2Op(self, name="reader2")
        processor1 = Processor1Op(self, name="processor1")
        processor2 = Processor2Op(self, name="processor2")
        processor3 = Processor3Op(self, name="processor3")
        notifier = NotifierOp(self, name="notifier")
        writer = WriterOp(self, name="writer")

        # Define the workflow
        self.add_flow(reader1, processor1, {("image", "image1"), ("image", "image2"), ("metadata", "metadata")})
        self.add_flow(reader2, processor2, {("roi", "roi")})
        self.add_flow(processor1, processor2, {("image", "image")})
        self.add_flow(processor1, writer, {("image", "image")})
        self.add_flow(processor2, notifier)
        self.add_flow(processor2, processor3)
        self.add_flow(processor3, writer, {("seg_image", "seg_image")})
You can build your C++ application using CMake, by calling find_package(holoscan) in your CMakeLists.txt to load the SDK libraries. Your executable will need to link against:
holoscan::core
any operator defined outside your main.cpp which you wish to use in your app workflow, such as:
SDK built-in operators under the holoscan::ops namespace
operators created separately in your project with add_library
operators imported externally using find_library or find_package
Listing 1
# Your CMake project
cmake_minimum_required(VERSION 3.20)
project(my_project CXX)

# Finds the holoscan SDK
find_package(holoscan REQUIRED CONFIG PATHS "/opt/nvidia/holoscan")

# Create an executable for your application
add_executable(my_app main.cpp)

# Link your application against holoscan::core and any existing operators you'd like to use
target_link_libraries(my_app
  PRIVATE
    holoscan::core
    holoscan::ops::<some_built_in_operator_target>
    <some_other_operator_target>
    <...>
)
Once your CMakeLists.txt is ready in <src_dir>, you can build in <build_dir> with the command line below. You can optionally pass Holoscan_ROOT if the SDK installation you'd like to use differs from the PATHS given to find_package(holoscan) above.
# Configure
cmake -S <src_dir> -B <build_dir> -D Holoscan_ROOT="/opt/nvidia/holoscan"
# Build
cmake --build <build_dir> -j
You can then run your application by running <build_dir>/my_app.
Python applications do not require building. Simply ensure that:
The python module is installed in your dist-packages or is listed under the PYTHONPATH environment variable, so you can import holoscan.core and any built-in operator you might need in holoscan.operators.
Any external operators are available in modules in your dist-packages or contained in PYTHONPATH.
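As a quick sanity check of your environment, the imports below should succeed from wherever you intend to run your application (a minimal sketch; adjust to whichever operators you actually use):

# Verify the Holoscan Python package is discoverable from your environment
from holoscan.core import Application   # core API
import holoscan.operators               # built-in operators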
While python applications do not need to be built, they might depend on operators that wrap C++ operators. All operators built into the SDK already ship with pre-built python bindings. Follow this section if you are wrapping C++ operators yourself to use in your python application.
You can then run your application by running python3 my_app.py.