# IInt8LegacyCalibrator¶

class tensorrt.IInt8LegacyCalibrator(self: tensorrt.tensorrt.IInt8LegacyCalibrator) → None

Extends the IInt8Calibrator class.

Variables:
- quantile – float The quantile (between 0 and 1) that will be used to select the region maximum when the quantile method is in use. See the user guide for more details on how the quantile is used.
- regression_cutoff – float The fraction (between 0 and 1) of the maximum used to define the regression cutoff when using regression to determine the region maximum. See the user guide for more details on how the regression cutoff is used.
get_algorithm(self: tensorrt.tensorrt.IInt8LegacyCalibrator) → tensorrt.tensorrt.CalibrationAlgoType

Signals that this is the legacy calibrator.

Returns: CalibrationAlgoType.LEGACY_CALIBRATION
get_batch(self: tensorrt.tensorrt.IInt8LegacyCalibrator, names: List[str]) → object

Get a batch of input for calibration. The batch size of the input must match the batch size returned by get_batch_size().

A possible implementation may look like this:

```python
def get_batch(names):
    try:
        # Assume self.batches is a generator that provides batch data.
        data = next(self.batches)
        # Assume that self.device_input is a device buffer allocated by the constructor.
        cuda.memcpy_htod(self.device_input, data)
        return [int(self.device_input)]
    except StopIteration:
        # When we're out of batches, we return either [] or None.
        # This signals to TensorRT that there is no calibration data remaining.
        return None
```

Parameters: names – The names of the network inputs for each object in the bindings array.

Returns: A list of device memory pointers set to the memory containing each network input data, or an empty list if there are no more batches for calibration. You can allocate these device buffers with pycuda, for example, and then cast them to int to retrieve the pointer.
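The batch-supply pattern above can be sketched without the CUDA copy. This is a minimal pure-Python illustration, not part of the TensorRT API: `load_batches` is a hypothetical helper that yields one fixed-size batch per call, and `next_batch` mirrors the get_batch() contract of returning None once the data is exhausted.

```python
def load_batches(data, batch_size):
    """Yield successive fixed-size batches; a partial tail batch is dropped."""
    for start in range(0, len(data) - batch_size + 1, batch_size):
        yield data[start:start + batch_size]

class BatchSupplier:
    """Hypothetical stand-in for the calibrator's batch source."""

    def __init__(self, data, batch_size):
        # Mirrors "self.batches is a generator that provides batch data".
        self.batches = load_batches(data, batch_size)

    def next_batch(self):
        # Mirrors get_batch(): return the next batch, or None when done.
        try:
            return next(self.batches)
        except StopIteration:
            return None
```

In a real calibrator, the returned batch would be copied to a device buffer and the buffer's pointer returned instead of the data itself.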
get_batch_size(self: tensorrt.tensorrt.IInt8LegacyCalibrator) → int

Get the batch size used for calibration batches.

Returns: The batch size.
get_quantile(self: tensorrt.tensorrt.IInt8LegacyCalibrator) → float

Returns: The quantile used by this calibrator.
get_regression_cutoff(self: tensorrt.tensorrt.IInt8LegacyCalibrator) → float

Returns: The regression cutoff used by this calibrator.
read_calibration_cache(self: tensorrt.tensorrt.IInt8LegacyCalibrator) → buffer

Calibration is potentially expensive, so it can be useful to generate the calibration data once, then use it on subsequent builds of the network. The cache includes the regression cutoff and quantile values used to generate it, and will not be used if these do not match the settings of the current calibrator. However, the network should also be recalibrated if its structure changes, or the input data set changes, and it is the responsibility of the application to ensure this.

Reading a cache is just like reading any other file in Python. For example, one possible implementation is:

```python
def read_calibration_cache(self):
    # If there is a cache, use it instead of calibrating again.
    # Otherwise, implicitly return None.
    if os.path.exists(self.cache_file):
        with open(self.cache_file, "rb") as f:
            return f.read()
```

Returns: A cache object or None if there is no data.
read_histogram_cache(self: tensorrt.tensorrt.IInt8LegacyCalibrator, arg0: int) → capsule

Load a histogram. Histogram generation is potentially expensive, so it can be useful to generate the histograms once, then use them when exploring the space of calibrations. The histograms should be regenerated if the network structure changes, or the input data set changes, and it is the responsibility of the application to ensure this.

Parameters: length – The length of the cached data, which should be set by the called function. If there is no data, this should be zero.

Returns: The cache, or None if there is no cache.
write_calibration_cache(self: tensorrt.tensorrt.IInt8LegacyCalibrator, cache: object) → None

Save a calibration cache.

Writing a cache is just like writing any other buffer in Python. For example, one possible implementation is:

```python
def write_calibration_cache(self, cache):
    with open(self.cache_file, "wb") as f:
        f.write(cache)
```

Parameters: cache – The calibration cache to write.
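The read and write halves of the cache protocol can be exercised together as a round trip. This is a self-contained sketch using only the standard library; `cache_file` is an application-chosen path (a hypothetical attribute, not something TensorRT provides), and the method bodies follow the two examples above.

```python
import os
import tempfile

class CacheIO:
    """Sketch of the calibration-cache read/write pair."""

    def __init__(self, cache_file):
        # The application chooses where the cache lives.
        self.cache_file = cache_file

    def read_calibration_cache(self):
        # Use an existing cache instead of recalibrating; None means "no cache".
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None

    def write_calibration_cache(self, cache):
        # Persist the cache bytes for subsequent builds.
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```

A first build would find no cache (read returns None), calibrate, and write the cache; later builds then read it back, provided the quantile and regression cutoff still match.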
write_histogram_cache(self: tensorrt.tensorrt.IInt8LegacyCalibrator, arg0: capsule, arg1: int) → None

Save a histogram cache.

Parameters:
- data – The data to cache.
- length – The length in bytes of the data to cache.