IInt8EntropyCalibrator(self: tensorrt.tensorrt.IInt8EntropyCalibrator) → None¶
This is the Legacy Entropy calibrator. It is less complicated than the legacy calibrator and produces better results.
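When subclassing, the base class must be explicitly instantiated in __init__(). Below is a minimal sketch of a custom calibrator's constructor; the batches generator, cache_file path, and batch_size attributes are assumptions reused by the method examples in this section:

import tensorrt as trt

class MyEntropyCalibrator(trt.IInt8EntropyCalibrator):
    def __init__(self, batches, cache_file, batch_size):
        # Explicitly initialize the base class when subclassing.
        trt.IInt8EntropyCalibrator.__init__(self)
        self.batches = batches        # generator yielding calibration input data
        self.cache_file = cache_file  # path used by read/write_calibration_cache()
        self.batch_size = batch_size  # reported by get_batch_size()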
get_algorithm(self: tensorrt.tensorrt.IInt8EntropyCalibrator) → tensorrt.tensorrt.CalibrationAlgoType¶
Signals that this is the entropy calibrator.
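The corresponding return value is CalibrationAlgoType.ENTROPY_CALIBRATION. An explicit override should not be needed when inheriting from this class, but for reference a sketch (continuing the MyEntropyCalibrator example above) would look like:

def get_algorithm(self):
    # Signal the (legacy) entropy calibration algorithm.
    return trt.CalibrationAlgoType.ENTROPY_CALIBRATION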
get_batch(self: tensorrt.tensorrt.IInt8EntropyCalibrator, names: List[str]) → object¶
Get a batch of input for calibration. The batch size of the input must match the batch size returned by get_batch_size().
A possible implementation may look like this:
def get_batch(self, names):
    try:
        # Assume self.batches is a generator that provides batch data.
        data = next(self.batches)
        # Assume that self.device_input is a device buffer allocated by the constructor.
        cuda.memcpy_htod(self.device_input, data)
        return [int(self.device_input)]
    except StopIteration:
        # When we're out of batches, we return either [] or None.
        # This signals to TensorRT that there is no calibration data remaining.
        return None
Parameters: names – The names of the network inputs for each object in the bindings array.
Returns: A list of device memory pointers set to the memory containing each network input data, or an empty list if there are no more batches for calibration. You can allocate these device buffers with pycuda, for example, and then cast them to int to retrieve the pointer.
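For illustration, a device buffer might be allocated with pycuda and cast to int as follows; the input shape used here is hypothetical:

import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit  # initializes the CUDA context

batch_size, channels, height, width = 8, 3, 224, 224  # hypothetical input shape
nbytes = batch_size * channels * height * width * np.dtype(np.float32).itemsize
device_input = cuda.mem_alloc(nbytes)
pointer = int(device_input)  # raw device pointer, as returned from get_batch()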
get_batch_size(self: tensorrt.tensorrt.IInt8EntropyCalibrator) → int¶
Get the batch size used for calibration batches.
Returns: The batch size.
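A possible implementation simply returns a batch size stored by the constructor (self.batch_size here follows the constructor sketch above):

def get_batch_size(self):
    # Must match the batch dimension of the data supplied by get_batch().
    return self.batch_size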
read_calibration_cache(self: tensorrt.tensorrt.IInt8EntropyCalibrator) → buffer¶
Load a calibration cache.
Calibration is potentially expensive, so it can be useful to generate the calibration data once, then use it on subsequent builds of the network. The cache includes the regression cutoff and quantile values used to generate it, and will not be used if these do not match the settings of the current calibrator. However, the network should also be recalibrated if its structure changes, or the input data set changes, and it is the responsibility of the application to ensure this.
Reading a cache is just like reading any other file in Python. For example, one possible implementation is:
def read_calibration_cache(self):
    # If there is a cache, use it instead of calibrating again. Otherwise, implicitly return None.
    if os.path.exists(self.cache_file):
        with open(self.cache_file, "rb") as f:
            return f.read()
Returns: A cache object or None if there is no data.
write_calibration_cache(self: tensorrt.tensorrt.IInt8EntropyCalibrator, cache: object) → None¶
Save a calibration cache.
Writing a cache is just like writing any other buffer in Python. For example, one possible implementation is:
def write_calibration_cache(self, cache):
    with open(self.cache_file, "wb") as f:
        f.write(cache)
Parameters: cache – The calibration cache to write.
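For context, here is a sketch of attaching a calibrator instance to a builder configuration; the MyEntropyCalibrator class and its constructor arguments are hypothetical, while BuilderFlag.INT8 and the int8_calibrator attribute are standard IBuilderConfig members:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Enable INT8 mode and attach the calibrator.
# 'batches' is assumed to be a generator of calibration data defined elsewhere.
config.set_flag(trt.BuilderFlag.INT8)
config.int8_calibrator = MyEntropyCalibrator(batches, "calibration.cache", batch_size=8)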