Graph

class onnx_graphsurgeon.Graph(nodes: Sequence[Node] = None, inputs: Sequence[Tensor] = None, outputs: Sequence[Tensor] = None, name=None, doc_string=None, opset=None, import_domains=None, producer_name: str = None, producer_version: str = None, functions: Sequence[Function] = None)

Bases: object

Represents a graph containing nodes and tensors.

Parameters:
  • nodes (Sequence[Node]) – A list of the nodes in this graph.

  • inputs (Sequence[Tensor]) – A list of graph input Tensors.

  • outputs (Sequence[Tensor]) – A list of graph output Tensors.

  • name (str) – The name of the graph. Defaults to “onnx_graphsurgeon_graph”.

  • doc_string (str) – A doc_string for the graph. Defaults to “”.

  • opset (int) – The ONNX opset to use when exporting this graph.

  • producer_name (str) – The name of the tool used to generate the model. Defaults to “”.

  • producer_version (str) – The version of the generating tool. Defaults to “”.
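For example, a small graph can be constructed directly from tensors and nodes (the tensor names, shapes, and opset below are purely illustrative):

import numpy as np
import onnx_graphsurgeon as gs

# Illustrative tensors and node; names and shapes are arbitrary.
X = gs.Variable(name="X", dtype=np.float32, shape=(1, 3, 224, 224))
Y = gs.Variable(name="Y", dtype=np.float32, shape=(1, 3, 224, 224))
identity = gs.Node(op="Identity", inputs=[X], outputs=[Y])

graph = gs.Graph(nodes=[identity], inputs=[X], outputs=[Y], opset=13)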

static register(opsets=None)

Registers a function with the Graph class for the specified group of opsets. After registering the function, it can be accessed like a normal member function.

For example:

@Graph.register()
def add(self, a, b):
    return self.layer(op="Add", inputs=[a, b], outputs=["add_out_gs"])

graph.add(a, b)
Parameters:

opsets (Sequence[int]) – A group of opsets for which to register the function. Multiple functions with the same name may be registered simultaneously if they are registered for different opsets. Registering a function with a duplicate name for the same opsets will overwrite any function previously registered for those opsets. By default, the function is registered for all opsets.
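For example, the same name may be registered for different opset ranges, so that the implementation matching the graph’s opset is used when the method is called. The relu helper below is a hypothetical sketch:

@Graph.register(opsets=[9, 10])
def relu(self, a):
    return self.layer(op="Relu", inputs=[a], outputs=["relu_out_gs"])

@Graph.register(opsets=[11, 12, 13])
def relu(self, a):
    return self.layer(op="Relu", inputs=[a], outputs=["relu_out_gs"])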

node_ids()

Returns a context manager that supplies unique integer IDs for Nodes in the Graph.

For example:

with graph.node_ids():
    assert graph.nodes[0].id != graph.nodes[1].id
Returns:

A context manager that supplies unique integer IDs for Nodes.

Return type:

NodeIDAdder

subgraphs(recursive=False)

Convenience function to iterate over all subgraphs which are contained in this graph. Subgraphs are found in the attributes of ONNX control flow nodes such as ‘If’ and ‘Loop’.

Parameters:

recursive (bool) – Whether to recursively search this graph’s subgraphs for more subgraphs. Defaults to False.

Returns:

A generator which iterates over the subgraphs contained in this graph.
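For example, to walk every nested subgraph of the graph’s control flow nodes (illustrative sketch):

for subgraph in graph.subgraphs(recursive=True):
    print(subgraph.name, len(subgraph.nodes))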

cleanup(remove_unused_node_outputs=False, recurse_subgraphs=True, remove_unused_graph_inputs=False, recurse_functions=True)

Removes unused nodes and tensors from the graph. A node or tensor is considered unused if it does not contribute to any of the graph outputs.

Additionally, any producer nodes of graph input tensors, as well as consumer nodes of graph output tensors that are not in the graph, are removed from the graph.

Note: This function will never modify graph output tensors.

Parameters:
  • remove_unused_node_outputs (bool) – Whether to remove unused output tensors of nodes. This will never remove empty-tensor (i.e. optional, but omitted) outputs. Defaults to False.

  • recurse_subgraphs (bool) – Whether to recursively cleanup subgraphs. Defaults to True.

  • remove_unused_graph_inputs (bool) – Whether to remove unused graph inputs. Defaults to False.

  • recurse_functions (bool) – Whether to also clean up this graph’s local functions. Defaults to True.

Returns:

self
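Since cleanup() returns self, it is commonly chained with toposort(). For example (illustrative):

graph.cleanup(remove_unused_graph_inputs=True).toposort()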

toposort(recurse_subgraphs=True, recurse_functions=True, mode='full')

Topologically sort the graph in place.

Parameters:
  • recurse_subgraphs (bool) – Whether to recursively topologically sort subgraphs. Only applicable when mode=”full” or mode=”nodes”. Defaults to True.

  • recurse_functions (bool) – Whether to topologically sort the nodes of this graph’s functions. Only applicable when mode=”full” or mode=”nodes”. Defaults to True.

  • mode (str) – Whether to reorder this graph’s list of nodes, list of functions, or both. Possible values are:

    • “full”: Topologically sort both the list of nodes and the list of functions.

    • “nodes”: Only sort the list of nodes.

    • “functions”: Only sort the list of functions.

    Defaults to “full”.

Returns:

self
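For example, to sort only the node list and leave the list of local functions untouched (illustrative):

graph.toposort(mode="nodes")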

tensors(check_duplicates=False)

Creates a tensor map of all the tensors used by this graph by walking over all nodes. Empty tensors are omitted from this map.

Tensors are guaranteed to be in order of the nodes in the graph. Hence, if the graph is topologically sorted, the tensor map will be too.

Parameters:

check_duplicates (bool) – Whether to fail if multiple tensors with the same name are encountered.

Raises:

OnnxGraphSurgeonException – If check_duplicates is True and multiple distinct tensors in the graph share the same name.

Returns:

A mapping of tensor names to tensors.

Return type:

OrderedDict[str, Tensor]
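For example (the tensor name below is a placeholder):

# Raises OnnxGraphSurgeonException if two distinct tensors share a name.
tensor_map = graph.tensors(check_duplicates=True)
conv_weight = tensor_map.get("conv1.weight")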

fold_constants(fold_shapes=True, recurse_subgraphs=True, partitioning=None, error_ok=True, flatten_subgraphs=True, size_threshold=None, should_exclude_node=None, recurse_functions=True)

Folds constants in-place in the graph. The graph’s nodes and functions must be topologically sorted prior to calling this function (see toposort()).

This function will not remove constants after folding them. In order to get rid of these hanging nodes, you can run the cleanup() function.

Note: Due to how this function is implemented, the graph must be exportable to ONNX, and evaluable in ONNX-Runtime. Additionally, ONNX-Runtime must be installed.

Parameters:
  • fold_shapes (bool) – Whether to fold Shape nodes in the graph. This requires shapes to be inferred in the graph, and can only fold static shapes. Defaults to True.

  • recurse_subgraphs (bool) – Whether to recursively fold constants in subgraphs. Defaults to True.

  • partitioning (Union[str, None]) –

    Whether/How to partition the graph so that errors in folding one part of a model do not affect other parts. Available modes are:

    • None: Do not partition the graph. If inference fails, no constants are folded.

    • “basic”: Partition the graph. If inference fails in one partition, other partitions will remain unaffected.

    • “recursive”: Partition the graph recursively. If inference fails in a partition, the partition will be further partitioned.

    Defaults to None.

  • error_ok (bool) – Whether inference errors should be suppressed. When this is False, any errors encountered during inference will be re-raised. Defaults to True.

  • flatten_subgraphs (bool) – Whether to flatten subgraphs where possible. For example, ‘If’ nodes with a constant condition can be flattened into the parent graph. Defaults to True.

  • size_threshold (int) – The maximum size threshold, in bytes, for which to fold constants. Any tensors larger than this value will not be folded. Set to None to disable the size threshold and always fold constants. For example, some models may apply ops like Tile or Expand to constants, which can result in very large tensors. Rather than pre-computing those constants and bloating the model size, it may be desirable to skip folding them and allow them to be computed at runtime. Defaults to None.

  • should_exclude_node (Callable[[gs.Node], bool]) – A callable that accepts an onnx-graphsurgeon node from the graph and reports whether it should be excluded from folding. This is only called for nodes which are otherwise foldable. Note that preventing a node from being folded also prevents its consumers from being folded. Defaults to a callable that always returns False.

  • recurse_functions (bool) – Whether to fold constants in this graph’s Functions. Defaults to True.

Returns:

self
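A typical usage pattern, assuming onnxruntime is installed (the size threshold and excluded op below are illustrative, not recommendations):

graph.toposort()
graph.fold_constants(
    size_threshold=16 * 1024 * 1024,  # skip folding tensors larger than 16 MiB
    should_exclude_node=lambda node: node.op == "Tile",  # leave Tile results to runtime
)
graph.cleanup()  # remove the nodes left dangling by folding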

layer(inputs=None, outputs=None, *args, **kwargs)

Creates a node, adds it to this graph, and optionally creates its input and output tensors.

The input and output lists can include various different types:

  • Tensor:

    Any Tensors provided will be used as-is in the inputs/outputs of the node created. Therefore, you must ensure that the provided Tensors have unique names.

  • str:

    If a string is provided, this function will generate a new tensor named after the string, appending an index to the end of the provided string to guarantee unique names.

  • numpy.ndarray:

    If a NumPy array is provided, this function will generate a Constant tensor using the name prefix: “onnx_graphsurgeon_constant”, and append an index to the end of the prefix to guarantee unique names.

  • Union[List[Number], Tuple[Number]]:

    If a list or tuple of numbers (int or float) is provided, this function will generate a Constant tensor using the name prefix: “onnx_graphsurgeon_lst_constant”, and append an index to the end of the prefix to guarantee unique names. The values of the tensor will be a 1D array containing the specified values. The datatype will be either np.float32 or np.int64.

Parameters:
  • inputs (List[Union[Tensor, str, numpy.ndarray]]) – The list of inputs

  • outputs (List[Union[Tensor, str, numpy.ndarray]]) – The list of outputs

  • args/kwargs – These are passed directly to the constructor of Node

Returns:

The output tensors of the node

Return type:

List[Tensor]
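For example, inputs may freely mix existing tensors, name strings, and NumPy constants (the names below are illustrative):

import numpy as np

# Creates an Add node, an output tensor named from "bias_out", and a Constant
# tensor for the NumPy array, then appends the node to the graph.
[bias_out] = graph.layer(
    op="Add",
    inputs=[graph.inputs[0], np.array([1.0], dtype=np.float32)],
    outputs=["bias_out"],
)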

copy(tensor_map: OrderedDict[str, Tensor] | None = None)

Copy the graph.

This makes copies of all nodes and tensors in the graph, but will not do a deep-copy of weights or attributes (with the exception of Graph attributes, which will be copied using their copy method).

Parameters:

tensor_map (OrderedDict[str, Tensor]) – A mapping of tensor names to tensors from the outer graph. This should be None if this is the outer-most graph.

Returns:

A copy of the graph.

Return type:

Graph
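For example, to experiment on a copy without modifying the original graph (illustrative):

modified = graph.copy()
modified.cleanup().toposort()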