Loaders
Module: polygraphy.backend.tf
- class OptimizeGraph(graph)[source]
Bases: BaseLoader
Functor that freezes a TensorFlow graph, and folds constants.
Freezes a TensorFlow graph and folds constants.
- Parameters:
graph (Union[Tuple[tf.Graph, Sequence[str]], Callable() -> Tuple[tf.Graph, Sequence[str]]]) – A tuple containing a TensorFlow graph and output names or a callable that returns one.
- call_impl()[source]
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
- __call__(*args, **kwargs)
Invokes the loader by forwarding arguments to call_impl().
Note: call_impl() should not be called directly - use this function instead.
- optimize_graph(graph)
Immediately evaluated functional variant of OptimizeGraph.
Freezes a TensorFlow graph and folds constants.
- Parameters:
graph (Union[Tuple[tf.Graph, Sequence[str]], Callable() -> Tuple[tf.Graph, Sequence[str]]]) – A tuple containing a TensorFlow graph and output names or a callable that returns one.
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
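For example, a minimal sketch of composing this loader with another loader from this module (the path "model.pb" is a placeholder, not a file shipped with Polygraphy):

```python
from polygraphy.backend.tf import GraphFromFrozen, OptimizeGraph

# Loaders compose lazily: nothing is loaded until the outermost loader is called.
load_optimized = OptimizeGraph(GraphFromFrozen("model.pb"))  # "model.pb" is a placeholder

graph, outputs = load_optimized()  # Freezes the graph and folds constants
print(outputs)
```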
- class GraphFromKeras(path)[source]
Bases: BaseLoader
Functor that loads a TensorFlow model from Keras.
Loads a TensorFlow model from Keras.
- Parameters:
path (Union[str, h5py.File]) – A path to the saved model, or the file object.
- call_impl()[source]
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
- __call__(*args, **kwargs)
Invokes the loader by forwarding arguments to call_impl().
Note: call_impl() should not be called directly - use this function instead.
- graph_from_keras(path)
Immediately evaluated functional variant of GraphFromKeras.
Loads a TensorFlow model from Keras.
- Parameters:
path (Union[str, h5py.File]) – A path to the saved model, or the file object.
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
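For example, a minimal sketch using the functional variant (the path "model.h5" is a placeholder for a Keras model saved elsewhere):

```python
from polygraphy.backend.tf import graph_from_keras

# "model.h5" is a placeholder path to a Keras HDF5 model saved elsewhere.
graph, outputs = graph_from_keras("model.h5")
print("Graph outputs:", outputs)
```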
- class GraphFromFrozen(path)[source]
Bases: BaseLoader
Functor that loads a TensorFlow frozen model.
Loads a TensorFlow frozen model.
- Parameters:
path (Union[str, tf.Graph, tf.GraphDef]) – A path to the frozen model, or a frozen TensorFlow graph or graphdef.
- call_impl()[source]
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
- __call__(*args, **kwargs)
Invokes the loader by forwarding arguments to call_impl().
Note: call_impl() should not be called directly - use this function instead.
- graph_from_frozen(path)
Immediately evaluated functional variant of GraphFromFrozen.
Loads a TensorFlow frozen model.
- Parameters:
path (Union[str, tf.Graph, tf.GraphDef]) – A path to the frozen model, or a frozen TensorFlow graph or graphdef.
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
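For example, a minimal sketch of the deferred (functor) form ("model.pb" is a placeholder path):

```python
from polygraphy.backend.tf import GraphFromFrozen

load_graph = GraphFromFrozen("model.pb")  # "model.pb" is a placeholder; nothing is read yet

graph, outputs = load_graph()  # The model is loaded only when the loader is called
```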
- class GraphFromCkpt(dir, name=None)[source]
Bases: BaseLoader
Functor that loads a TensorFlow model from a checkpoint. Note that in order to use checkpoints, you must NOT use subprocesses in the Comparator.
Loads a TensorFlow model from a checkpoint.
- Parameters:
dir (str) – Path to a directory containing checkpoints.
name (str) – The name of the checkpoint to load, not including the file extension. For example, to load model.meta, the argument would be model.
- call_impl()[source]
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
- __call__(*args, **kwargs)
Invokes the loader by forwarding arguments to call_impl().
Note: call_impl() should not be called directly - use this function instead.
- graph_from_ckpt(dir, name=None)
Immediately evaluated functional variant of GraphFromCkpt.
Loads a TensorFlow model from a checkpoint.
- Parameters:
dir (str) – Path to a directory containing checkpoints.
name (str) – The name of the checkpoint to load, not including the file extension. For example, to load model.meta, the argument would be model.
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
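For example, a minimal sketch assuming a checkpoint directory containing model.meta and its companion files (both names are placeholders); as noted above, this loader cannot be combined with subprocesses in the Comparator:

```python
from polygraphy.backend.tf import GraphFromCkpt

# "checkpoints" and "model" are placeholders; name="model" refers to
# a model.meta file inside the checkpoint directory.
load_ckpt = GraphFromCkpt("checkpoints", name="model")

graph, outputs = load_ckpt()
```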
- class UseTfTrt(graph, max_workspace_size=None, fp16=None, int8=None, max_batch_size=None, is_dynamic_op=False, minimum_segment_size=None)[source]
Bases: BaseLoader
[UNTESTED] Functor that optimizes a TensorFlow model using TF-TRT.
Optimizes a TensorFlow model using TF-TRT.
- Parameters:
graph (Union[Tuple[tf.Graph, Sequence[str]], Callable() -> Tuple[tf.Graph, Sequence[str]]]) – A tuple containing a TensorFlow graph and output names or a callable that returns one.
max_workspace_size (int) – The maximum workspace size.
fp16 (bool) – Whether to run in FP16 mode.
max_batch_size (int) – The maximum batch size.
- call_impl()[source]
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
- __call__(*args, **kwargs)
Invokes the loader by forwarding arguments to call_impl().
Note: call_impl() should not be called directly - use this function instead.
- use_tf_trt(graph, max_workspace_size=None, fp16=None, int8=None, max_batch_size=None, is_dynamic_op=False, minimum_segment_size=None)
Immediately evaluated functional variant of UseTfTrt.
Optimizes a TensorFlow model using TF-TRT.
- Parameters:
graph (Union[Tuple[tf.Graph, Sequence[str]], Callable() -> Tuple[tf.Graph, Sequence[str]]]) – A tuple containing a TensorFlow graph and output names or a callable that returns one.
max_workspace_size (int) – The maximum workspace size.
fp16 (bool) – Whether to run in FP16 mode.
max_batch_size (int) – The maximum batch size.
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
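Since this loader is marked [UNTESTED], the following is a sketch only; it assumes TF-TRT is available in the TensorFlow installation, and "model.pb" is a placeholder path:

```python
from polygraphy.backend.tf import GraphFromFrozen, UseTfTrt

load_tftrt = UseTfTrt(
    GraphFromFrozen("model.pb"),  # "model.pb" is a placeholder
    max_workspace_size=1 << 30,   # 1 GiB of workspace
    fp16=True,
    max_batch_size=1,
)

graph, outputs = load_tftrt()
```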
- class ModifyGraphOutputs(graph, outputs=None)[source]
Bases: BaseLoader
Functor that modifies outputs of a TensorFlow graph.
Modifies outputs of a TensorFlow graph.
- Parameters:
graph (Union[Tuple[tf.Graph, Sequence[str]], Callable() -> Tuple[tf.Graph, Sequence[str]]]) – A tuple containing a TensorFlow graph and output names or a callable that returns one.
outputs (List[str]) – Names of output tensors. If provided, this will override the outputs determined by the loader. If a value of constants.MARK_ALL is used instead of a list, all tensors in the network are marked.
- call_impl()[source]
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
- __call__(*args, **kwargs)
Invokes the loader by forwarding arguments to call_impl().
Note: call_impl() should not be called directly - use this function instead.
- modify_graph_outputs(graph, outputs=None)
Immediately evaluated functional variant of ModifyGraphOutputs.
Modifies outputs of a TensorFlow graph.
- Parameters:
graph (Union[Tuple[tf.Graph, Sequence[str]], Callable() -> Tuple[tf.Graph, Sequence[str]]]) – A tuple containing a TensorFlow graph and output names or a callable that returns one.
outputs (List[str]) – Names of output tensors. If provided, this will override the outputs determined by the loader. If a value of constants.MARK_ALL is used instead of a list, all tensors in the network are marked.
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
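For example, a minimal sketch of overriding graph outputs ("model.pb" and "output_tensor" are placeholders; constants is assumed to be Polygraphy's constants module, which provides MARK_ALL):

```python
from polygraphy import constants
from polygraphy.backend.tf import GraphFromFrozen, ModifyGraphOutputs

# Override the outputs determined by the underlying loader;
# "output_tensor" is a placeholder tensor name.
load_graph = ModifyGraphOutputs(GraphFromFrozen("model.pb"), outputs=["output_tensor"])

# Alternatively, mark every tensor in the graph as an output.
load_all = ModifyGraphOutputs(GraphFromFrozen("model.pb"), outputs=constants.MARK_ALL)

graph, outputs = load_graph()
```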
- class SaveGraph(graph, path=None, tensorboard_dir=None, engine_dir=None)[source]
Bases: BaseLoader
Functor that writes out artifacts from a TensorFlow graph.
Writes out artifacts from a TensorFlow Graph.
- Parameters:
graph (Union[Tuple[tf.Graph, Sequence[str]], Callable() -> Tuple[tf.Graph, Sequence[str]]]) – A tuple containing a TensorFlow graph and output names or a callable that returns one.
path (str) – Path at which to save the frozen graphdef.
tensorboard_dir (str) – The directory in which to write TensorBoard visualizations.
engine_dir (str) – The directory in which to save TF-TRT engines.
- call_impl()[source]
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
- __call__(*args, **kwargs)
Invokes the loader by forwarding arguments to call_impl().
Note: call_impl() should not be called directly - use this function instead.
- save_graph(graph, path=None, tensorboard_dir=None, engine_dir=None)
Immediately evaluated functional variant of SaveGraph.
Writes out artifacts from a TensorFlow graph.
- Parameters:
graph (Union[Tuple[tf.Graph, Sequence[str]], Callable() -> Tuple[tf.Graph, Sequence[str]]]) – A tuple containing a TensorFlow graph and output names or a callable that returns one.
path (str) – Path at which to save the frozen graphdef.
tensorboard_dir (str) – The directory in which to write TensorBoard visualizations.
engine_dir (str) – The directory in which to save TF-TRT engines.
- Returns:
The TensorFlow graph, and the names of its outputs.
- Return type:
Tuple[tf.Graph, Sequence[str]]
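For example, a minimal sketch in which all paths are placeholders:

```python
from polygraphy.backend.tf import GraphFromFrozen, SaveGraph

load_and_save = SaveGraph(
    GraphFromFrozen("model.pb"),        # "model.pb" is a placeholder
    path="frozen_out.pb",               # where to write the frozen graphdef
    tensorboard_dir="tb_visualization", # where to write TensorBoard data
)

graph, outputs = load_and_save()  # Artifacts are written when the loader is called
```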
- class CreateConfig(gpu_memory_fraction=None, allow_growth=None, use_xla=None)[source]
Bases: BaseLoader
Functor that creates a TensorFlow config.
Creates a TensorFlow config.
- Parameters:
gpu_memory_fraction (float) – The fraction of GPU memory that will be made available to TensorFlow. This should be a value between 0.0 and 1.0.
allow_growth (bool) – Whether to allow GPU memory allocated by TensorFlow to grow.
use_xla (bool) – Whether to attempt to enable XLA.
- __call__(*args, **kwargs)
Invokes the loader by forwarding arguments to call_impl().
Note: call_impl() should not be called directly - use this function instead.
- create_config(gpu_memory_fraction=None, allow_growth=None, use_xla=None)
Immediately evaluated functional variant of CreateConfig.
Creates a TensorFlow config.
- Parameters:
gpu_memory_fraction (float) – The fraction of GPU memory that will be made available to TensorFlow. This should be a value between 0.0 and 1.0.
allow_growth (bool) – Whether to allow GPU memory allocated by TensorFlow to grow.
use_xla (bool) – Whether to attempt to enable XLA.
- Returns:
The TensorFlow config.
- Return type:
tf.ConfigProto
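For example, a minimal sketch using the functional variant:

```python
from polygraphy.backend.tf import create_config

# Limit TensorFlow to half of GPU memory and let allocations grow on demand.
config = create_config(gpu_memory_fraction=0.5, allow_growth=True)
```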
- class SessionFromGraph(graph, config=None)[source]
Bases: BaseLoader
Functor that creates a TensorFlow session that can be used for inference.
Creates a TensorFlow session.
- Parameters:
graph (Union[Tuple[tf.Graph, Sequence[str]], Callable() -> Tuple[tf.Graph, Sequence[str]]]) – A tuple containing a TensorFlow graph and output names or a callable that returns one.
config (Union[tf.ConfigProto, Callable() -> tf.ConfigProto]) – A TensorFlow ConfigProto or a callable that returns one.
- __call__(*args, **kwargs)
Invokes the loader by forwarding arguments to call_impl().
Note: call_impl() should not be called directly - use this function instead.
- session_from_graph(graph, config=None)
Immediately evaluated functional variant of SessionFromGraph.
Creates a TensorFlow session.
- Parameters:
graph (Union[Tuple[tf.Graph, Sequence[str]], Callable() -> Tuple[tf.Graph, Sequence[str]]]) – A tuple containing a TensorFlow graph and output names or a callable that returns one.
config (Union[tf.ConfigProto, Callable() -> tf.ConfigProto]) – A TensorFlow ConfigProto or a callable that returns one.
- Returns:
The TensorFlow session.
- Return type:
tf.Session
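For example, a minimal sketch wiring the loaders above together ("model.pb" is a placeholder); the resulting session is a plain tf.Session, typically consumed by this backend's runner for inference:

```python
from polygraphy.backend.tf import CreateConfig, GraphFromFrozen, SessionFromGraph

# Both arguments are loaders, so the session is only built when called.
load_session = SessionFromGraph(
    GraphFromFrozen("model.pb"),          # "model.pb" is a placeholder
    config=CreateConfig(allow_growth=True),
)

sess = load_session()  # A tf.Session ready for inference
```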