Graph(nodes: Sequence[onnx_graphsurgeon.ir.node.Node] = None, inputs: Sequence[onnx_graphsurgeon.ir.tensor.Tensor] = None, outputs: Sequence[onnx_graphsurgeon.ir.tensor.Tensor] = None, name=None, doc_string=None, opset=None, import_domains=None)¶
Represents a graph containing nodes and tensors.
nodes (Sequence[Node]) – A list of the nodes in this graph.
inputs (Sequence[Tensor]) – A list of graph input Tensors.
outputs (Sequence[Tensor]) – A list of graph output Tensors.
name (str) – The name of the graph. Defaults to “onnx_graphsurgeon_graph”.
doc_string (str) – A doc_string for the graph. Defaults to “”.
opset (int) – The ONNX opset to use when exporting this graph.
static register(opsets=None)¶
Registers a function with the Graph class for the specified group of opsets. After registering the function, it can be accessed like a normal member function.
@Graph.register()
def add(self, a, b):
    return self.layer(op="Add", inputs=[a, b], outputs=["add_out_gs"])

graph.add(a, b)
opsets (Sequence[int]) – A group of opsets for which to register the function. Multiple functions with the same name may be registered simultaneously if they are registered for different opsets. Registering a function with a duplicate name for the same opsets will overwrite any function previously registered for those opsets. By default, the function is registered for all opsets.
node_ids()¶
Returns a context manager that supplies unique integer IDs for Nodes in the Graph.
with graph.node_ids():
    assert graph.nodes[0].id != graph.nodes[1].id
A context manager that supplies unique integer IDs for Nodes.
cleanup(remove_unused_node_outputs=False, recurse_subgraphs=True, remove_unused_graph_inputs=False)¶
Removes unused nodes and tensors from the graph. A node or tensor is considered unused if it does not contribute to any of the graph outputs.
Additionally, any producer nodes of graph input tensors, as well as consumer nodes of graph output tensors that are not in the graph, are removed from the graph.
Note: This function will never modify graph output tensors.
remove_unused_node_outputs (bool) – Whether to remove unused output tensors of nodes. This will never remove empty-tensor (i.e. optional, but omitted) outputs. Defaults to False.
recurse_subgraphs (bool) – Whether to recursively cleanup subgraphs. Defaults to True.
remove_unused_graph_inputs (bool) – Whether to remove unused graph inputs. Defaults to False.
toposort(recurse_subgraphs=True)¶
Topologically sort the graph in place.
recurse_subgraphs (bool) – Whether to recursively topologically sort subgraphs. Defaults to True.
tensors(check_duplicates=False)¶
Creates a tensor map of all the tensors used by this graph by walking over all nodes. Empty tensors are omitted from this map.
Tensors are guaranteed to be in order of the nodes in the graph. Hence, if the graph is topologically sorted, the tensor map will be too.
check_duplicates (bool) – Whether to fail if multiple tensors with the same name are encountered.
OnnxGraphSurgeonException – If check_duplicates is True and multiple distinct tensors in the graph share the same name.
A mapping of tensor names to tensors.
- Return type
OrderedDict[str, Tensor]
fold_constants(fold_shapes=True, recurse_subgraphs=True, partitioning=None, error_ok=True)¶
Folds constants in-place in the graph. The graph must be topologically sorted prior to calling this function (see toposort()).
This function will not remove constants after folding them. In order to get rid of these hanging nodes, you can run the cleanup() function.
Note: Due to how this function is implemented, the graph must be exportable to ONNX, and evaluable in ONNX-Runtime. Additionally, ONNX-Runtime must be installed.
fold_shapes (bool) – Whether to fold Shape nodes in the graph. This requires shapes to be inferred in the graph, and can only fold static shapes. Defaults to True.
recurse_subgraphs (bool) – Whether to recursively fold constants in subgraphs. Defaults to True.
partitioning (Union[str, None]) –
Whether/How to partition the graph so that errors in folding one part of a model do not affect other parts. Available modes are:
- None: Do not partition the graph. If inference fails, no constants are folded.
- ”basic”: Partition the graph. If inference fails in one partition, other partitions will remain unaffected.
- ”recursive”: Partition the graph recursively. If inference fails in a partition, the partition will be further partitioned.
Defaults to None.
error_ok (bool) – Whether inference errors should be suppressed. When this is disabled, any errors encountered during inference will be re-raised. Defaults to True.
layer(inputs=[], outputs=[], *args, **kwargs)¶
Creates a node, adds it to this graph, and optionally creates its input and output tensors.
The input and output lists can include several different types:
Tensor: Any Tensors provided will be used as-is in the inputs/outputs of the node created.
str: If a string is provided, this function will generate a new tensor using the string to generate a name. It will append an index to the end of the provided string to attempt to avoid duplicate tensor names, but since this doesn’t guarantee that the name will be unique, you should try to ensure that the string provided is as unique as possible. To avoid problems with duplicate names, you can generate names yourself and provide Tensor objects instead of strings.
np.ndarray: If a NumPy array is provided, this function will generate a Constant tensor using the name prefix: “onnx_graphsurgeon_constant”.
List/Tuple of numbers (int or float): If a list or tuple of numbers is provided, this function will generate a Constant tensor using the name prefix: “onnx_graphsurgeon_lst_constant”. The values of the tensor will be a 1D array containing the specified values. The datatype will be either np.float32 or np.int64.
copy(tensor_map: OrderedDict[str, Tensor] = None)¶
Copy the graph.
This makes copies of all nodes and tensors in the graph, but will not do a deep-copy of weights or attributes (with the exception of Graph attributes, which will be copied using their copy method).