VPI - Vision Programming Interface

3.2 Release

Fisheye Distortion Correction

Overview

This sample application performs a fisheye lens calibration using input images taken with the same camera/lens. Then it uses Remap and the calibration data to correct fisheye lens distortion of these images and save the result to disk. The mapping used for distortion correction is VPI_FISHEYE_EQUIDISTANT, which maps straight lines in the scene to straight lines in the corrected image.
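To build intuition for what the equidistant mapping does, here is a small illustrative sketch (not VPI's implementation): a pinhole camera projects a ray at angle theta from the optical axis to radius f*tan(theta) from the image center, while an equidistant fisheye projects it to f*theta. Correction therefore moves each pixel outward from r = f*theta toward r = f*tan(theta). The focal length value below is purely illustrative.

```python
import numpy as np

def equidistant_to_pinhole_radius(r_fisheye, f):
    """Map a radial distance in an equidistant fisheye image (r = f*theta)
    to the radius the same ray would have in a pinhole image (r = f*tan(theta))."""
    theta = r_fisheye / f     # recover the ray angle from the fisheye radius
    return f * np.tan(theta)  # re-project it with the pinhole model

f = 300.0  # focal length in pixels (illustrative value)
r = np.array([0.0, 50.0, 150.0, 300.0])
print(equidistant_to_pinhole_radius(r, f))
```

Note that the corrected radius grows faster than the fisheye radius, which is why straight lines that appear curved in the fisheye image become straight after correction.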

Lens Calibration

Lens calibration uses a set of images taken by the same camera/lens, each one showing a checkerboard pattern in a different position, so that, taken collectively, the checkerboard covers almost the entire field of view. The more images, the more accurate the calibration, but typically 10 to 15 images suffice.

VPI samples include a set of input images that can be used. They are found in the /opt/nvidia/vpi3/samples/assets/fisheye directory.

Note
The fisheye Python sample only works with OpenCV versions <= 4.9.

To create a set of calibration images for a given lens, do the following:

  1. Print a checkerboard pattern on a piece of paper. VPI provides a 10x7 checkerboard file in the samples' assets directory, named checkerboard_10x7.pdf.
  2. Mount the fisheye lens on a camera.
  3. With the camera in a fixed position, take several pictures showing the checkerboard in different positions, covering a good part of the field of view.
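During calibration, each detected 2D vertex is paired with its ideal 3D position on the board plane, where X and Y run along the board sides in units of one square and Z is 0 everywhere (the same construction the code listings below use). A minimal sketch of that grid for a board with W x H squares:

```python
import numpy as np

def board_object_points(squares_w, squares_h):
    """3D object points for a checkerboard with squares_w x squares_h squares.
    OpenCV works with interior vertices, so there are (W-1)*(H-1) points,
    all lying on the Z=0 plane, with unit square size."""
    vw, vh = squares_w - 1, squares_h - 1
    pts = np.zeros((vw * vh, 3))
    pts[:, :2] = np.mgrid[0:vw, 0:vh].T.reshape(-1, 2)
    return pts

pts = board_object_points(10, 7)  # the 10x7 board shipped with the samples
print(pts.shape)                  # 9*6 = 54 interior vertices, 3 coordinates each
```

Since we are not interested in extrinsic camera parameters, the square size can be assumed to be 1x1; only the relative geometry matters.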

Instructions

The command line parameters are:

-c W,H [-s win] <image1> [image2] [image3] ...

where

  • -c W,H: specifies the number of squares the checkerboard pattern has horizontally (W) and vertically (H).
  • -s win: (optional) the width of the window around each internal vertex of the checkerboard (the point where 4 squares meet) used in a vertex position refinement stage. The actual vertex position is searched for within this window. If this parameter is omitted, the refinement stage is skipped.
  • imageN: the set of calibration images
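Note that -c takes the number of squares, while the checkerboard detection code needs the number of interior vertices, which is one less in each dimension. A hypothetical helper showing that conversion:

```python
def checkerboard_vertices(arg):
    """Parse a 'W,H' square-count string (as passed to -c) into the
    (W-1, H-1) interior vertex count that cv2.findChessboardCorners expects."""
    w, h = (int(x) for x in arg.split(','))
    return (w - 1, h - 1)

print(checkerboard_vertices("10,7"))  # (9, 6)
```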

Here's one invocation example:

  • C++
    ./vpi_sample_11_fisheye -c 10,7 -s 22 ../assets/fisheye/*.jpg
  • Python
    python3 main.py -c 10,7 -s 22 ../assets/fisheye/*.jpg

This corrects the included set of calibration images, all captured using the checkerboard pattern that is also included. It uses a 22x22 window around each internal checkerboard vertex to refine the vertex position.

Results

Here are some input and output images produced by the sample application:

[Image pairs: Input | Corrected]

Source Code

For convenience, here's the code, which is also installed in the samples directory.

Python:

import sys
import vpi
import numpy as np
from argparse import ArgumentParser
import cv2

# ============================
# Parse command line arguments

# Changes in the fisheye camera calibration function in OpenCV 4.10 lead to corrupted output.
# Get the major and minor version numbers
version = cv2.__version__.split('.')
major = int(version[0])
minor = int(version[1])

# Check if the version is 4.10 or higher
if major * 100 + minor >= 410:
    raise Exception("OpenCV >= 4.10 isn't supported")

parser = ArgumentParser()
parser.add_argument('-c', metavar='W,H', required=True,
                    help='Checkerboard with WxH squares')

parser.add_argument('-s', metavar='win', type=int,
                    help='Search window width around checkerboard vertex used in refinement, default is 0 (disable refinement)')

parser.add_argument('images', nargs='+',
                    help='Input images taken with a fisheye lens camera')

args = parser.parse_args()

# Parse checkerboard size
try:
    cbSize = np.array([int(x) for x in args.c.split(',')])
except ValueError:
    exit("Error parsing checkerboard information")

# =========================================
# Calculate fisheye calibration from images

# OpenCV expects the number of interior vertices in the checkerboard,
# not the number of squares. Let's adjust for that.
vtxCount = cbSize - 1

# -------------------------------------------------
# Determine checkerboard coordinates in image space

imgSize = None
corners2D = []

for imgName in args.images:
    # Load input image and do some sanity checks
    img = cv2.imread(imgName)
    curImgSize = (img.shape[1], img.shape[0])

    if imgSize is None:
        imgSize = curImgSize
    elif imgSize != curImgSize:
        exit("All images must have the same size")

    # Find the checkerboard pattern in the image, saving the 2D
    # coordinates of the checkerboard vertices in corners.
    # A vertex is the point where 4 squares (2 white and 2 black) meet.
    found, corners = cv2.findChessboardCorners(img, tuple(vtxCount), flags=cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_NORMALIZE_IMAGE)
    if found:
        # Need to perform further corner refinement?
        if args.s is not None and args.s >= 2:
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.0001)
            imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            corners = cv2.cornerSubPix(imgGray, corners, (args.s//2, args.s//2), (-1,-1), criteria)
        corners2D.append(corners)
    else:
        exit("Warning: checkerboard pattern not found in image {}".format(imgName))

# Create the vector that stores the 3D coordinates of each checkerboard pattern in a space
# where X and Y are orthogonal and run along the checkerboard sides, and Z==0 at all points on
# the checkerboard.
cbCorners = np.zeros((1, vtxCount[0]*vtxCount[1], 3))
cbCorners[0,:,:2] = np.mgrid[0:vtxCount[0], 0:vtxCount[1]].T.reshape(-1,2)
corners3D = [cbCorners.reshape(-1,1,3) for i in range(len(corners2D))]

# ---------------------------------------------
# Calculate fisheye lens calibration parameters
camMatrix = np.eye(3)
coeffs = np.zeros((4,))
rms, camMatrix, coeffs, rvecs, tvecs = cv2.fisheye.calibrate(corners3D, corners2D, imgSize, camMatrix, coeffs, flags=cv2.fisheye.CALIB_FIX_SKEW)

# Print out calibration results
print("rms error: {}".format(rms))
print("Fisheye coefficients: {}".format(coeffs))
print("Camera matrix:")
print(camMatrix)

# ======================
# Undistort input images

# Create a uniform grid
grid = vpi.WarpGrid(imgSize)

# Create an undistortion warp map from the calibration parameters and the grid
undist_map = vpi.WarpMap.fisheye_correction(grid,
        K=camMatrix[0:2,:], X=np.eye(3,4), coeffs=coeffs,
        mapping=vpi.FisheyeMapping.EQUIDISTANT)

# Go through all input images,
idx = 0
for imgName in args.images:
    # Load input image and do some sanity checks
    img = cv2.imread(imgName)

    # Using the CUDA backend,
    with vpi.Backend.CUDA:
        # Convert the image to NV12_ER, apply the undistortion map, and convert back to RGB8
        imgCorrected = vpi.asimage(img).convert(vpi.Format.NV12_ER).remap(undist_map, interp=vpi.Interp.CATMULL_ROM).convert(vpi.Format.RGB8)

    # Write the undistorted image to disk
    cv2.imwrite("undistort_python{}_{:03d}.jpg".format(sys.version_info[0], idx), imgCorrected.cpu())
    idx += 1
C++:

#include <opencv2/core/version.hpp>

#if CV_MAJOR_VERSION >= 3
#    include <opencv2/imgcodecs.hpp>
#else
#    include <opencv2/highgui/highgui.hpp>
#endif

// Changes in the fisheye camera calibration function in OpenCV 4.10 lead to corrupted output.
#if CV_VERSION_MAJOR * 100 + CV_VERSION_MINOR >= 410
#    error "OpenCV >= 4.10 isn't supported"
#endif

#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vpi/OpenCVInterop.hpp>

#include <vpi/Image.h>
#include <vpi/LensDistortionModels.h>
#include <vpi/Status.h>
#include <vpi/Stream.h>
#include <vpi/algo/ConvertImageFormat.h>
#include <vpi/algo/Remap.h>

#include <cassert>
#include <cstring>
#include <iostream>
#include <sstream>
#include <vector>

#define CHECK_STATUS(STMT)                                    \
    do                                                        \
    {                                                         \
        VPIStatus status = (STMT);                            \
        if (status != VPI_SUCCESS)                            \
        {                                                     \
            char buffer[VPI_MAX_STATUS_MESSAGE_LENGTH];       \
            vpiGetLastStatusMessage(buffer, sizeof(buffer));  \
            std::ostringstream ss;                            \
            ss << vpiStatusGetName(status) << ": " << buffer; \
            throw std::runtime_error(ss.str());               \
        }                                                     \
    } while (0);

static void PrintUsage(const char *progname, std::ostream &out)
{
    out << "Usage: " << progname << " <-c W,H> [-s win] <image1> [image2] [image3] ...\n"
        << " where,\n"
        << " W,H\tcheckerboard with WxH squares\n"
        << " win\tsearch window width around checkerboard vertex used\n"
        << "\tin refinement, default is 0 (disable refinement)\n"
        << " imageN\tinput images taken with a fisheye lens camera" << std::endl;
}

static char *my_basename(char *path)
{
#ifdef WIN32
    char *name = strrchr(path, '\\');
#else
    char *name = strrchr(path, '/');
#endif
    if (name != NULL)
    {
        return name + 1;
    }
    else
    {
        return path;
    }
}

struct Params
{
    cv::Size vtxCount;                // Number of internal vertices the checkerboard has
    int searchWinSize;                // Search window size around the checkerboard vertex for refinement.
    std::vector<const char *> images; // Input image names.
};

static Params ParseParameters(int argc, char *argv[])
{
    Params params = {};

    cv::Size cbSize;

    for (int i = 1; i < argc; ++i)
    {
        if (argv[i][0] == '-')
        {
            if (strlen(argv[i] + 1) == 1)
            {
                switch (argv[i][1])
                {
                case 'h':
                    PrintUsage(my_basename(argv[0]), std::cout);
                    return {};

                case 'c':
                    if (i == argc - 1)
                    {
                        throw std::invalid_argument("Option -c must be followed by checkerboard width and height");
                    }

                    if (sscanf(argv[++i], "%d,%d", &cbSize.width, &cbSize.height) != 2)
                    {
                        throw std::invalid_argument("Error parsing checkerboard information");
                    }

                    // OpenCV expects the number of interior vertices in the checkerboard,
                    // not the number of squares. Let's adjust for that.
                    params.vtxCount.width  = cbSize.width - 1;
                    params.vtxCount.height = cbSize.height - 1;
                    break;

                case 's':
                    if (i == argc - 1)
                    {
                        throw std::invalid_argument("Option -s must be followed by search window size");
                    }
                    if (sscanf(argv[++i], "%d", &params.searchWinSize) != 1)
                    {
                        throw std::invalid_argument("Error parsing search window size");
                    }
                    if (params.searchWinSize < 0)
                    {
                        throw std::invalid_argument("Search window size must be >= 0");
                    }
                    break;

                default:
                    throw std::invalid_argument(std::string("Option -") + (argv[i] + 1) + " not recognized");
                }
            }
            else
            {
                throw std::invalid_argument(std::string("Option -") + (argv[i] + 1) + " not recognized");
            }
        }
        else
        {
            params.images.push_back(argv[i]);
        }
    }

    if (params.images.empty())
    {
        throw std::invalid_argument("At least one image must be defined");
    }

    if (cbSize.width <= 3 || cbSize.height <= 3)
    {
        throw std::invalid_argument("Checkerboard must have more than 3x3 squares");
    }

    if (params.searchWinSize == 1)
    {
        throw std::invalid_argument("Search window size must be 0 (default) or >= 2");
    }

    return params;
}

int main(int argc, char *argv[])
{
    // OpenCV image that will be wrapped by a VPIImage.
    // Define it here so that it's destroyed *after* the wrapper is destroyed.
    cv::Mat cvImage;

    // VPI objects that will be used
    VPIStream stream = NULL;
    VPIPayload remap = NULL;
    VPIImage tmpIn = NULL, tmpOut = NULL;
    VPIImage vimg = nullptr;

    int retval = 0;

    try
    {
        // First parse command line parameters
        Params params = ParseParameters(argc, argv);
        if (params.images.empty()) // user just wanted the help message?
        {
            return 0;
        }

        // Where to store the checkerboard 2D corners of each input image.
        std::vector<std::vector<cv::Point2f>> corners2D;

        // Store the image size. All input images must have the same size.
        cv::Size imgSize = {};

        for (unsigned i = 0; i < params.images.size(); ++i)
        {
            // Load input image and do some sanity checks
            cv::Mat img = cv::imread(params.images[i]);
            if (img.empty())
            {
                throw std::runtime_error("Can't read " + std::string(params.images[i]));
            }

            if (imgSize == cv::Size{})
            {
                imgSize = img.size();
            }
            else if (imgSize != img.size())
            {
                throw std::runtime_error("All images must have the same size");
            }

            // Find the checkerboard pattern in the image, saving the 2D
            // coordinates of the checkerboard vertices in cbVertices.
            // A vertex is the point where 4 squares (2 white and 2 black) meet.
            std::vector<cv::Point2f> cbVertices;

            if (findChessboardCorners(img, params.vtxCount, cbVertices,
                                      cv::CALIB_CB_ADAPTIVE_THRESH + cv::CALIB_CB_NORMALIZE_IMAGE))
            {
                // Need to perform further corner refinement?
                if (params.searchWinSize >= 2)
                {
                    cv::Mat gray;
                    cvtColor(img, gray, cv::COLOR_BGR2GRAY);

                    cornerSubPix(gray, cbVertices, cv::Size(params.searchWinSize / 2, params.searchWinSize / 2),
                                 cv::Size(-1, -1),
                                 cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.0001));
                }

                // Save this image's 2D vertices in the vector
                corners2D.push_back(std::move(cbVertices));
            }
            else
            {
                std::cerr << "Warning: checkerboard pattern not found in image " << params.images[i] << std::endl;
            }
        }

        // Create the vector that stores the 3D coordinates of each checkerboard pattern in a space
        // where X and Y are orthogonal and run along the checkerboard sides, and Z==0 at all points on
        // the checkerboard.
        std::vector<cv::Point3f> initialCheckerboard3DVertices;
        for (int i = 0; i < params.vtxCount.height; ++i)
        {
            for (int j = 0; j < params.vtxCount.width; ++j)
            {
                // Since we're not interested in extrinsic camera parameters,
                // we can assume that the checkerboard square size is 1x1.
                initialCheckerboard3DVertices.emplace_back(static_cast<float>(j), static_cast<float>(i), 0.0f);
            }
        }

        // Initialize a vector with the initial checkerboard positions for all images
        std::vector<std::vector<cv::Point3f>> corners3D(corners2D.size(), initialCheckerboard3DVertices);

        // Camera intrinsic parameters, initially identity (will be estimated by the calibration process).
        using Mat3     = cv::Matx<double, 3, 3>;
        Mat3 camMatrix = Mat3::eye();

        // Stores the fisheye model coefficients.
        std::vector<double> coeffs(4);

        // VPI currently doesn't support the skew parameter on the camera matrix; make sure
        // the calibration process fixes it to 0.
        int flags = cv::fisheye::CALIB_FIX_SKEW;

        // Run calibration
        {
            cv::Mat rvecs, tvecs; // stores rotation and translation for each camera, not needed now.
            double rms = cv::fisheye::calibrate(corners3D, corners2D, imgSize, camMatrix, coeffs, rvecs, tvecs, flags);
            printf("rms error: %lf\n", rms);
        }

        // Output the calibration result.
        printf("Fisheye coefficients: %lf %lf %lf %lf\n", coeffs[0], coeffs[1], coeffs[2], coeffs[3]);

        printf("Camera matrix:\n");
        printf("[%lf %lf %lf; %lf %lf %lf; %lf %lf %lf]\n", camMatrix(0, 0), camMatrix(0, 1), camMatrix(0, 2),
               camMatrix(1, 0), camMatrix(1, 1), camMatrix(1, 2), camMatrix(2, 0), camMatrix(2, 1), camMatrix(2, 2));

        // Now use VPI to undistort the input images:

        // Allocate a dense map.
        VPIWarpMap map            = {};
        map.grid.numHorizRegions  = 1;
        map.grid.numVertRegions   = 1;
        map.grid.regionWidth[0]   = imgSize.width;
        map.grid.regionHeight[0]  = imgSize.height;
        map.grid.horizInterval[0] = 1;
        map.grid.vertInterval[0]  = 1;
        CHECK_STATUS(vpiWarpMapAllocData(&map));

        // Initialize the fisheye lens model with the coefficients given by the calibration procedure.
        VPIFisheyeLensDistortionModel distModel = {};
        distModel.mapping                       = VPI_FISHEYE_EQUIDISTANT;
        distModel.k1                            = coeffs[0];
        distModel.k2                            = coeffs[1];
        distModel.k3                            = coeffs[2];
        distModel.k4                            = coeffs[3];

        // Fill in the camera intrinsic parameters given by the camera calibration procedure.
        VPICameraIntrinsic K;
        for (int i = 0; i < 2; ++i)
        {
            for (int j = 0; j < 3; ++j)
            {
                K[i][j] = camMatrix(i, j);
            }
        }

        // Camera extrinsics is the identity.
        VPICameraExtrinsic X = {};
        X[0][0] = X[1][1] = X[2][2] = 1;

        // Generate a warp map to undistort an image taken with a fisheye lens, using
        // the parameters calculated above.
        CHECK_STATUS(vpiWarpMapGenerateFromFisheyeLensDistortionModel(K, X, K, &distModel, &map));

        // Create the Remap payload for undistortion given the map generated above.
        CHECK_STATUS(vpiCreateRemap(VPI_BACKEND_CUDA, &map, &remap));

        // Now that the remap payload is created, we can destroy the warp map.
        vpiWarpMapFreeData(&map);

        // Create a stream where operations will take place. We're using CUDA
        // processing.
        CHECK_STATUS(vpiStreamCreate(VPI_BACKEND_CUDA, &stream));

        // Temporary input and output images in NV12 format.
        CHECK_STATUS(vpiImageCreate(imgSize.width, imgSize.height, VPI_IMAGE_FORMAT_NV12_ER, 0, &tmpIn));
        CHECK_STATUS(vpiImageCreate(imgSize.width, imgSize.height, VPI_IMAGE_FORMAT_NV12_ER, 0, &tmpOut));

        // For each input image,
        for (unsigned i = 0; i < params.images.size(); ++i)
        {
            // Read it from disk.
            cvImage = cv::imread(params.images[i]);
            assert(!cvImage.empty());

            // Wrap it into a VPIImage
            if (vimg == nullptr)
            {
                // Now create a VPIImage that wraps it.
                CHECK_STATUS(vpiImageCreateWrapperOpenCVMat(cvImage, 0, &vimg));
            }
            else
            {
                CHECK_STATUS(vpiImageSetWrappedOpenCVMat(vimg, cvImage));
            }

            // Convert BGR -> NV12
            CHECK_STATUS(vpiSubmitConvertImageFormat(stream, VPI_BACKEND_CUDA, vimg, tmpIn, NULL));

            // Undistort the input image.
            CHECK_STATUS(vpiSubmitRemap(stream, VPI_BACKEND_CUDA, remap, tmpIn, tmpOut, VPI_INTERP_CATMULL_ROM,
                                        VPI_BORDER_ZERO, 0));

            // Convert the resulting NV12 back to BGR, writing back to the input image.
            CHECK_STATUS(vpiSubmitConvertImageFormat(stream, VPI_BACKEND_CUDA, tmpOut, vimg, NULL));

            // Wait until the conversion finishes.
            CHECK_STATUS(vpiStreamSync(stream));

            // Since vimg is wrapping the OpenCV image, the result is already there.
            // We just have to save it to disk.
            char buf[64];
            snprintf(buf, sizeof(buf), "undistort_%03d.jpg", i);
            imwrite(buf, cvImage);
        }
    }
    catch (std::exception &e)
    {
        std::cerr << "Error: " << e.what() << std::endl;
        PrintUsage(my_basename(argv[0]), std::cerr);

        retval = 1;
    }

    vpiStreamDestroy(stream);
    vpiPayloadDestroy(remap);
    vpiImageDestroy(tmpIn);
    vpiImageDestroy(tmpOut);
    vpiImageDestroy(vimg);

    return retval;
}