VPI - Vision Programming Interface

3.0 Release

Fisheye Distortion Correction

Overview

This sample application first calibrates a fisheye lens using a set of input images taken with the same camera/lens. It then uses Remap together with the calibration data to correct the fisheye distortion in those images, saving the results to disk. The mapping used for distortion correction is VPI_FISHEYE_EQUIDISTANT, which maps straight lines in the scene to straight lines in the corrected image.
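As background for the equidistant mapping named above: a fisheye lens following the equidistant model projects a ray at angle theta from the optical axis to a radial distance r = f*theta from the image center, while a pinhole (rectilinear) projection, the one that keeps scene lines straight, uses r = f*tan(theta). The correction remaps the former onto the latter. A minimal sketch of the two models (the focal length f here is a hypothetical value, not taken from this sample):

```python
import math

f = 400.0  # hypothetical focal length, in pixels

def equidistant(theta):
    # Fisheye equidistant model: radial distance grows linearly with ray angle.
    return f * theta

def rectilinear(theta):
    # Pinhole projection: preserves straight lines in the scene.
    return f * math.tan(theta)

# Near the optical axis both models agree; at wide angles they diverge,
# which is why fisheye images look increasingly bent toward the edges.
for deg in (5, 30, 60):
    t = math.radians(deg)
    print(f"{deg:2d} deg: equidistant={equidistant(t):7.1f}px  rectilinear={rectilinear(t):7.1f}px")
```

The growing gap between the two radii at wide angles is exactly what the Remap warp map compensates for.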

Lens Calibration

Lens calibration uses a set of images taken by the same camera/lens, each one showing a checkerboard pattern in a different position, so that, taken collectively, the checkerboard covers almost the entire field of view. The more images, the more accurate the calibration, but typically 10 to 15 images suffice.

VPI samples include a set of input images that can be used. They are found in the /opt/nvidia/vpi3/samples/assets/fisheye directory.

To create a set of calibration images for a given lens, do the following:

  1. Print a checkerboard pattern on a piece of paper. VPI provides a 10x7 checkerboard file, named checkerboard_10x7.pdf, in the samples' assets directory.
  2. Mount the fisheye lens on a camera.
  3. With the camera in a fixed position, take several pictures showing the checkerboard in different positions, covering a good part of the field of view.

Instructions

The command line parameters are:

-c W,H [-s win] <image1> [image2] [image3] ...

where

  • -c W,H: specifies the number of squares the checkerboard pattern has horizontally (W) and vertically (H).
  • -s win: (optional) the width of a window around each internal vertex of the checkerboard (the point where 4 squares meet) used in a vertex position refinement stage. The actual vertex position is searched for within this window. If this parameter is omitted, the refinement stage is skipped.
  • imageN: the set of calibration images

Here's one invocation example:

  • C++
    ./vpi_sample_11_fisheye -c 10,7 -s 22 ../assets/fisheye/*.jpg
  • Python
    python3 main.py -c 10,7 -s 22 ../assets/fisheye/*.jpg

This corrects the included set of calibration images, all captured with the checkerboard pattern that is also included. A 22x22 window around each internal checkerboard vertex is used to refine the vertex position.

Results

Here are some input and output images produced by the sample application:

Input / Corrected

Source Code

For convenience, here's the code, which is also installed in the samples directory.

Python:
import sys
import vpi
import numpy as np
from argparse import ArgumentParser
import cv2

# ============================
# Parse command line arguments

parser = ArgumentParser()
parser.add_argument('-c', metavar='W,H', required=True,
                    help='Checkerboard with WxH squares')

parser.add_argument('-s', metavar='win', type=int,
                    help='Search window width around each checkerboard vertex used in refinement, default is 0 (disable refinement)')

parser.add_argument('images', nargs='+',
                    help='Input images taken with a fisheye lens camera')

args = parser.parse_args()

# Parse checkerboard size
try:
    cbSize = np.array([int(x) for x in args.c.split(',')])
except ValueError:
    exit("Error parsing checkerboard information")

# =========================================
# Calculate fisheye calibration from images

# OpenCV expects the number of interior vertices in the checkerboard,
# not the number of squares. Let's adjust for that.
vtxCount = cbSize - 1

# -------------------------------------------------
# Determine checkerboard coordinates in image space

imgSize = None
corners2D = []

for imgName in args.images:
    # Load the input image and do some sanity checks
    img = cv2.imread(imgName)
    curImgSize = (img.shape[1], img.shape[0])

    if imgSize is None:
        imgSize = curImgSize
    elif imgSize != curImgSize:
        exit("All images must have the same size")

    # Find the checkerboard pattern in the image, saving the 2D
    # coordinates of the checkerboard vertices in corners.
    # A vertex is the point where 4 squares (2 white and 2 black) meet.
    found, corners = cv2.findChessboardCorners(img, tuple(vtxCount),
                                               flags=cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_NORMALIZE_IMAGE)
    if found:
        # Need to perform further corner refinement?
        if args.s is not None and args.s >= 2:
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.0001)
            imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            corners = cv2.cornerSubPix(imgGray, corners, (args.s//2, args.s//2), (-1, -1), criteria)
        corners2D.append(corners)
    else:
        exit("Warning: checkerboard pattern not found in image {}".format(imgName))

# Create the vector that stores the 3D coordinates of each checkerboard pattern in a space
# where X and Y are orthogonal and run along the checkerboard sides, and Z==0 at all points
# on the checkerboard.
cbCorners = np.zeros((1, vtxCount[0]*vtxCount[1], 3))
cbCorners[0, :, :2] = np.mgrid[0:vtxCount[0], 0:vtxCount[1]].T.reshape(-1, 2)
corners3D = [cbCorners.reshape(-1, 1, 3) for i in range(len(corners2D))]

# ---------------------------------------------
# Calculate fisheye lens calibration parameters
camMatrix = np.eye(3)
coeffs = np.zeros((4,))
rms, camMatrix, coeffs, rvecs, tvecs = cv2.fisheye.calibrate(corners3D, corners2D, imgSize, camMatrix, coeffs,
                                                             flags=cv2.fisheye.CALIB_FIX_SKEW)

# Print out calibration results
print("rms error: {}".format(rms))
print("Fisheye coefficients: {}".format(coeffs))
print("Camera matrix:")
print(camMatrix)

# ======================
# Undistort input images

# Create a uniform grid
grid = vpi.WarpGrid(imgSize)

# Create the undistort warp map from the calibration parameters and the grid
undist_map = vpi.WarpMap.fisheye_correction(grid,
                                            K=camMatrix[0:2, :], X=np.eye(3, 4), coeffs=coeffs,
                                            mapping=vpi.FisheyeMapping.EQUIDISTANT)

# Go through all input images,
idx = 0
for imgName in args.images:
    # Load the input image
    img = cv2.imread(imgName)

    # Using the CUDA backend,
    with vpi.Backend.CUDA:
        # Convert the image to NV12_ER, apply the undistortion map, and convert back to RGB8
        imgCorrected = vpi.asimage(img).convert(vpi.Format.NV12_ER).remap(undist_map, interp=vpi.Interp.CATMULL_ROM).convert(vpi.Format.RGB8)

    # Write the undistorted image to disk
    cv2.imwrite("undistort_python{}_{:03d}.jpg".format(sys.version_info[0], idx), imgCorrected.cpu())
    idx += 1
C++:

#include <opencv2/core/version.hpp>

#if CV_MAJOR_VERSION >= 3
#    include <opencv2/imgcodecs.hpp>
#else
#    include <opencv2/highgui/highgui.hpp>
#endif

#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vpi/OpenCVInterop.hpp>

#include <vpi/Image.h>
#include <vpi/LensDistortionModels.h>
#include <vpi/Status.h>
#include <vpi/Stream.h>
#include <vpi/algo/ConvertImageFormat.h>
#include <vpi/algo/Remap.h>

#include <cassert>
#include <cstdio>
#include <cstring>
#include <iostream>
#include <sstream>

#define CHECK_STATUS(STMT)                                    \
    do                                                        \
    {                                                         \
        VPIStatus status = (STMT);                            \
        if (status != VPI_SUCCESS)                            \
        {                                                     \
            char buffer[VPI_MAX_STATUS_MESSAGE_LENGTH];       \
            vpiGetLastStatusMessage(buffer, sizeof(buffer));  \
            std::ostringstream ss;                            \
            ss << vpiStatusGetName(status) << ": " << buffer; \
            throw std::runtime_error(ss.str());               \
        }                                                     \
    } while (0);

static void PrintUsage(const char *progname, std::ostream &out)
{
    out << "Usage: " << progname << " <-c W,H> [-s win] <image1> [image2] [image3] ...\n"
        << " where,\n"
        << " W,H\tcheckerboard with WxH squares\n"
        << " win\tsearch window width around checkerboard vertex used\n"
        << "\tin refinement, default is 0 (disable refinement)\n"
        << " imageN\tinput images taken with a fisheye lens camera" << std::endl;
}

static char *my_basename(char *path)
{
#ifdef WIN32
    char *name = strrchr(path, '\\');
#else
    char *name = strrchr(path, '/');
#endif
    if (name != NULL)
    {
        return name + 1;
    }
    else
    {
        return path;
    }
}

struct Params
{
    cv::Size vtxCount;                // number of internal vertices the checkerboard has
    int searchWinSize;                // search window size around a checkerboard vertex, used for refinement
    std::vector<const char *> images; // input image names
};

static Params ParseParameters(int argc, char *argv[])
{
    Params params = {};

    cv::Size cbSize;

    for (int i = 1; i < argc; ++i)
    {
        if (argv[i][0] == '-')
        {
            if (strlen(argv[i] + 1) == 1)
            {
                switch (argv[i][1])
                {
                case 'h':
                    PrintUsage(my_basename(argv[0]), std::cout);
                    return {};

                case 'c':
                    if (i == argc - 1)
                    {
                        throw std::invalid_argument("Option -c must be followed by checkerboard width and height");
                    }

                    if (sscanf(argv[++i], "%d,%d", &cbSize.width, &cbSize.height) != 2)
                    {
                        throw std::invalid_argument("Error parsing checkerboard information");
                    }

                    // OpenCV expects the number of interior vertices in the checkerboard,
                    // not the number of squares. Adjust for that.
                    params.vtxCount.width  = cbSize.width - 1;
                    params.vtxCount.height = cbSize.height - 1;
                    break;

                case 's':
                    if (i == argc - 1)
                    {
                        throw std::invalid_argument("Option -s must be followed by search window size");
                    }
                    if (sscanf(argv[++i], "%d", &params.searchWinSize) != 1)
                    {
                        throw std::invalid_argument("Error parsing search window size");
                    }
                    if (params.searchWinSize < 0)
                    {
                        throw std::invalid_argument("Search window size must be >= 0");
                    }
                    break;

                default:
                    throw std::invalid_argument(std::string("Option -") + (argv[i] + 1) + " not recognized");
                }
            }
            else
            {
                throw std::invalid_argument(std::string("Option -") + (argv[i] + 1) + " not recognized");
            }
        }
        else
        {
            params.images.push_back(argv[i]);
        }
    }

    if (params.images.empty())
    {
        throw std::invalid_argument("At least one image must be defined");
    }

    if (cbSize.width <= 3 || cbSize.height <= 3)
    {
        throw std::invalid_argument("Checkerboard must have more than 3x3 squares");
    }

    if (params.searchWinSize == 1)
    {
        throw std::invalid_argument("Search window size must be 0 (default) or >= 2");
    }

    return params;
}

int main(int argc, char *argv[])
{
    // OpenCV image that will be wrapped by a VPIImage.
    // Defined here so that it's destroyed *after* the wrapper is destroyed.
    cv::Mat cvImage;

    // VPI objects that will be used
    VPIStream stream = NULL;
    VPIPayload remap = NULL;
    VPIImage tmpIn = NULL, tmpOut = NULL;
    VPIImage vimg = nullptr;

    int retval = 0;

    try
    {
        // First parse command line parameters
        Params params = ParseParameters(argc, argv);
        if (params.images.empty()) // user just wanted the help message?
        {
            return 0;
        }

        // Where to store the checkerboard 2D corners of each input image.
        std::vector<std::vector<cv::Point2f>> corners2D;

        // Store the image size. All input images must have the same size.
        cv::Size imgSize = {};

        for (unsigned i = 0; i < params.images.size(); ++i)
        {
            // Load the input image and do some sanity checks
            cv::Mat img = cv::imread(params.images[i]);
            if (img.empty())
            {
                throw std::runtime_error("Can't read " + std::string(params.images[i]));
            }

            if (imgSize == cv::Size{})
            {
                imgSize = img.size();
            }
            else if (imgSize != img.size())
            {
                throw std::runtime_error("All images must have the same size");
            }

            // Find the checkerboard pattern in the image, saving the 2D
            // coordinates of the checkerboard vertices in cbVertices.
            // A vertex is the point where 4 squares (2 white and 2 black) meet.
            std::vector<cv::Point2f> cbVertices;

            if (findChessboardCorners(img, params.vtxCount, cbVertices,
                                      cv::CALIB_CB_ADAPTIVE_THRESH + cv::CALIB_CB_NORMALIZE_IMAGE))
            {
                // Need to perform further corner refinement?
                if (params.searchWinSize >= 2)
                {
                    cv::Mat gray;
                    cvtColor(img, gray, cv::COLOR_BGR2GRAY);

                    cornerSubPix(gray, cbVertices, cv::Size(params.searchWinSize / 2, params.searchWinSize / 2),
                                 cv::Size(-1, -1),
                                 cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.0001));
                }

                // Save this image's 2D vertices in the vector
                corners2D.push_back(std::move(cbVertices));
            }
            else
            {
                std::cerr << "Warning: checkerboard pattern not found in image " << params.images[i] << std::endl;
            }
        }

        // Create the vector that stores the 3D coordinates of each checkerboard pattern in a space
        // where X and Y are orthogonal and run along the checkerboard sides, and Z==0 at all points
        // on the checkerboard.
        std::vector<cv::Point3f> initialCheckerboard3DVertices;
        for (int i = 0; i < params.vtxCount.height; ++i)
        {
            for (int j = 0; j < params.vtxCount.width; ++j)
            {
                // Since we're not interested in extrinsic camera parameters,
                // we can assume that the checkerboard square size is 1x1.
                initialCheckerboard3DVertices.emplace_back(static_cast<float>(j), static_cast<float>(i), 0.0f);
            }
        }

        // Initialize a vector with the initial checkerboard positions for all images
        std::vector<std::vector<cv::Point3f>> corners3D(corners2D.size(), initialCheckerboard3DVertices);

        // Camera intrinsic parameters, initially identity (will be estimated by the calibration process).
        using Mat3     = cv::Matx<double, 3, 3>;
        Mat3 camMatrix = Mat3::eye();

        // Stores the fisheye model coefficients.
        std::vector<double> coeffs(4);

        // VPI currently doesn't support the skew parameter in the camera matrix; make sure
        // the calibration process fixes it to 0.
        int flags = cv::fisheye::CALIB_FIX_SKEW;

        // Run calibration
        {
            cv::Mat rvecs, tvecs; // stores rotation and translation for each camera, not needed now.
            double rms = cv::fisheye::calibrate(corners3D, corners2D, imgSize, camMatrix, coeffs, rvecs, tvecs, flags);
            printf("rms error: %lf\n", rms);
        }

        // Output the calibration result.
        printf("Fisheye coefficients: %lf %lf %lf %lf\n", coeffs[0], coeffs[1], coeffs[2], coeffs[3]);

        printf("Camera matrix:\n");
        printf("[%lf %lf %lf; %lf %lf %lf; %lf %lf %lf]\n", camMatrix(0, 0), camMatrix(0, 1), camMatrix(0, 2),
               camMatrix(1, 0), camMatrix(1, 1), camMatrix(1, 2), camMatrix(2, 0), camMatrix(2, 1), camMatrix(2, 2));

        // Now use VPI to undistort the input images:

        // Allocate a dense map.
        VPIWarpMap map            = {};
        map.grid.numHorizRegions  = 1;
        map.grid.numVertRegions   = 1;
        map.grid.regionWidth[0]   = imgSize.width;
        map.grid.regionHeight[0]  = imgSize.height;
        map.grid.horizInterval[0] = 1;
        map.grid.vertInterval[0]  = 1;
        CHECK_STATUS(vpiWarpMapAllocData(&map));

        // Initialize the fisheye lens model with the coefficients given by the calibration procedure.
        VPIFisheyeLensDistortionModel distModel = {};
        distModel.mapping                       = VPI_FISHEYE_EQUIDISTANT;
        distModel.k1                            = coeffs[0];
        distModel.k2                            = coeffs[1];
        distModel.k3                            = coeffs[2];
        distModel.k4                            = coeffs[3];

        // Fill in the camera intrinsic parameters given by the camera calibration procedure.
        VPICameraIntrinsic K;
        for (int i = 0; i < 2; ++i)
        {
            for (int j = 0; j < 3; ++j)
            {
                K[i][j] = camMatrix(i, j);
            }
        }

        // Camera extrinsics is identity.
        VPICameraExtrinsic X = {};
        X[0][0] = X[1][1] = X[2][2] = 1;

        // Generate a warp map to undistort an image taken with a fisheye lens,
        // using the parameters calculated above.
        CHECK_STATUS(vpiWarpMapGenerateFromFisheyeLensDistortionModel(K, X, K, &distModel, &map));

        // Create the Remap payload for undistortion given the map generated above.
        CHECK_STATUS(vpiCreateRemap(VPI_BACKEND_CUDA, &map, &remap));

        // Now that the remap payload is created, we can destroy the warp map.
        vpiWarpMapFreeData(&map);

        // Create a stream where operations will take place. We're using CUDA processing.
        CHECK_STATUS(vpiStreamCreate(VPI_BACKEND_CUDA, &stream));

        // Temporary input and output images in NV12 format.
        CHECK_STATUS(vpiImageCreate(imgSize.width, imgSize.height, VPI_IMAGE_FORMAT_NV12_ER, 0, &tmpIn));
        CHECK_STATUS(vpiImageCreate(imgSize.width, imgSize.height, VPI_IMAGE_FORMAT_NV12_ER, 0, &tmpOut));

        // For each input image,
        for (unsigned i = 0; i < params.images.size(); ++i)
        {
            // Read it from disk.
            cvImage = cv::imread(params.images[i]);
            assert(!cvImage.empty());

            // Wrap it into a VPIImage
            if (vimg == nullptr)
            {
                // Now create a VPIImage that wraps it.
                CHECK_STATUS(vpiImageCreateWrapperOpenCVMat(cvImage, 0, &vimg));
            }
            else
            {
                CHECK_STATUS(vpiImageSetWrappedOpenCVMat(vimg, cvImage));
            }

            // Convert BGR -> NV12
            CHECK_STATUS(vpiSubmitConvertImageFormat(stream, VPI_BACKEND_CUDA, vimg, tmpIn, NULL));

            // Undistort the input image.
            CHECK_STATUS(vpiSubmitRemap(stream, VPI_BACKEND_CUDA, remap, tmpIn, tmpOut, VPI_INTERP_CATMULL_ROM,
                                        VPI_BORDER_ZERO, 0));

            // Convert the NV12 result back to BGR, writing back to the input image.
            CHECK_STATUS(vpiSubmitConvertImageFormat(stream, VPI_BACKEND_CUDA, tmpOut, vimg, NULL));

            // Wait until the conversion finishes.
            CHECK_STATUS(vpiStreamSync(stream));

            // Since vimg wraps the OpenCV image, the result is already there.
            // We just have to save it to disk.
            char buf[64];
            snprintf(buf, sizeof(buf), "undistort_%03d.jpg", i);
            imwrite(buf, cvImage);
        }
    }
    catch (std::exception &e)
    {
        std::cerr << "Error: " << e.what() << std::endl;
        PrintUsage(my_basename(argv[0]), std::cerr);

        retval = 1;
    }

    vpiStreamDestroy(stream);
    vpiPayloadDestroy(remap);
    vpiImageDestroy(tmpIn);
    vpiImageDestroy(tmpOut);
    vpiImageDestroy(vimg);

    return retval;
}