IInt8EntropyCalibrator

class tensorrt.IInt8EntropyCalibrator(self: tensorrt.tensorrt.IInt8EntropyCalibrator) → None

Extends the IInt8Calibrator class.

This is the preferred calibrator, as it is less complicated than the legacy calibrator and produces better results.
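In practice, this class is used by subclassing it and overriding the methods listed below. The following is a minimal sketch only, assuming calibration data arrives as preprocessed NumPy batches; the constructor arguments, the calib.cache file name, and the method bodies are illustrative and not part of the TensorRT API.

    import tensorrt as trt

    class MyEntropyCalibrator(trt.IInt8EntropyCalibrator):
        # Illustrative skeleton; the remaining method bodies are sketched
        # under their corresponding entries below.

        def __init__(self, batch_iterator, batch_size, cache_file="calib.cache"):
            # The base constructor must be called explicitly when subclassing.
            trt.IInt8EntropyCalibrator.__init__(self)
            self.batch_iterator = batch_iterator  # yields preprocessed NumPy batches
            self.batch_size = batch_size
            self.cache_file = cache_file

        def get_algorithm(self):
            # Identify the calibration algorithm implemented by this calibrator.
            return trt.CalibrationAlgoType.ENTROPY_CALIBRATION

        def get_batch_size(self):
            # Must equal the batch dimension of every batch handed to get_batch().
            return self.batch_size

        def get_batch(self, bindings, names):
            # Point each bindings entry at device memory holding the next batch
            # for the corresponding input in names; return False when exhausted.
            ...

        def read_calibration_cache(self, length):
            # Return a previously written cache, or None to force recalibration.
            ...

        def write_calibration_cache(self, data, length):
            # Persist the calibration data for reuse by later builds.
            ...

The instance is then assigned to the builder's INT8 calibrator setting (for example, the int8_calibrator attribute of the builder or builder configuration, depending on the TensorRT version) before the engine is built.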

get_algorithm(self: tensorrt.tensorrt.IInt8EntropyCalibrator) → tensorrt.tensorrt.CalibrationAlgoType
get_batch(self: tensorrt.tensorrt.IInt8EntropyCalibrator, bindings: List[capsule], names: List[str]) → object

Get a batch of input for calibration. The batch size of the input must match the batch size returned by get_batch_size().

Parameters:
  • bindings – An array of device memory objects, each of which must be set to the memory containing the data for the corresponding network input.
  • names – The names of the network inputs, one for each object in the bindings array.
Returns: False if there are no more batches for calibration.
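For illustration, a hedged sketch of how an application might stage the next batch in device memory from within get_batch(), using pycuda (any CUDA binding works). The helper name, data layout, and FP32 input assumption are not part of the API; handing the resulting pointer back through the bindings array follows the contract described above.

    import numpy as np
    import pycuda.driver as cuda
    import pycuda.autoinit  # creates a CUDA context

    def stage_batch_on_device(batch, device_buffer=None):
        # Copy one preprocessed batch of shape (get_batch_size(), C, H, W),
        # assumed FP32, into device memory so a bindings entry can reference it.
        batch = np.ascontiguousarray(batch, dtype=np.float32)
        if device_buffer is None:
            device_buffer = cuda.mem_alloc(batch.nbytes)
        cuda.memcpy_htod(device_buffer, batch)
        return device_buffer  # int(device_buffer) yields the raw device pointer
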

get_batch_size(self: tensorrt.tensorrt.IInt8EntropyCalibrator) → int
read_calibration_cache(self: tensorrt.tensorrt.IInt8EntropyCalibrator, length: int) → capsule

Load a calibration cache.

Calibration is potentially expensive, so it can be useful to generate the calibration data once, then use it on subsequent builds of the network. The cache includes the regression cutoff and quantile values used to generate it, and will not be used if these do not match the settings of the current calibrator. However, the network should also be recalibrated if its structure changes or the input data set changes, and it is the responsibility of the application to ensure this.

Parameters: length – The length of the cached data, which should be set by the called function. If there is no data, this should be zero.
Returns: A cache object, or None if there is no data.
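A file-backed sketch of this lookup, assuming the cache_file attribute from the skeleton above and that the cache buffer can be returned as a Python bytes object (how the capsule return type and the length parameter are bridged from Python is not spelled out on this page):

    import os

    def read_calibration_cache(self, length):
        # Reuse a cache written by a previous build; returning None makes
        # TensorRT run calibration from scratch. `length` is unused here.
        if not os.path.exists(self.cache_file):
            return None
        with open(self.cache_file, "rb") as f:
            return f.read()
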
write_calibration_cache(self: tensorrt.tensorrt.IInt8EntropyCalibrator, data: capsule, length: int) → None

Save a calibration cache.

Parameters:
  • data – The data to cache.
  • length – The length in bytes of the data to cache.
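The matching write side, again assuming a file-backed cache and that data arrives as a buffer object that can be written directly:

    def write_calibration_cache(self, data, length):
        # Persist the calibration scales (`length` bytes) so later builds can
        # call read_calibration_cache() instead of recalibrating.
        with open(self.cache_file, "wb") as f:
            f.write(data)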