VPI - Vision Programming Interface

1.2 Release

Fisheye Distortion Correction

Overview

This sample application performs a fisheye lens calibration using input images taken with the same camera/lens. It then uses Remap and the calibration data to correct the fisheye lens distortion of these images and saves the results to disk. The mapping used for distortion correction is VPI_FISHEYE_EQUIDISTANT, which maps straight lines in the scene to straight lines in the corrected image.
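
In outline, OpenCV's fisheye calibration produces a camera matrix and four distortion coefficients; VPI then builds a dense warp map from them and remaps each image on the CUDA backend. Below is a condensed Python sketch of that correction stage, assuming camMatrix, coeffs, the image size w and h, and a loaded image img were already obtained (placeholder names; the complete program is listed under Source Code):

    import cv2
    import vpi
    import numpy as np

    # camMatrix (3x3), coeffs (4 values), w, h and img are assumed to come from
    # the calibration stage and from loading an input image with cv2.imread.
    grid = vpi.WarpGrid((w, h))   # one dense region covering the whole image
    undist = vpi.WarpMap.fisheye_correction(grid,
            K=camMatrix[0:2, :], X=np.eye(3, 4), coeffs=coeffs,
            mapping=vpi.FisheyeMapping.EQUIDISTANT)

    with vpi.Backend.CUDA:
        # BGR -> NV12_ER, remap with Catmull-Rom interpolation, back to RGB8
        corrected = vpi.asimage(img).convert(vpi.Format.NV12_ER) \
                       .remap(undist, interp=vpi.Interp.CATMULL_ROM) \
                       .convert(vpi.Format.RGB8)

    cv2.imwrite("corrected.jpg", corrected.cpu())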

Lens Calibration

Lens calibration uses a set of images taken by the same camera/lens, each one showing a checkerboard pattern in a different position, so that, taken collectively, the checkerboard covers almost the entire field of view. The more images, the more accurate the calibration, but typically 10 to 15 images suffice.
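
Under the hood, calibration boils down to a few OpenCV calls: detect the checkerboard's interior vertices in each photo, pair them with their ideal positions on the board plane (Z = 0, square size 1), and pass both lists to cv2.fisheye.calibrate, which estimates the camera matrix and the four fisheye distortion coefficients. A minimal sketch, assuming a list image_names and a board with W x H squares (placeholder names):

    import cv2
    import numpy as np

    vtx = (W - 1, H - 1)   # OpenCV wants interior vertices, not squares
    board = np.zeros((1, vtx[0] * vtx[1], 3))
    board[0, :, :2] = np.mgrid[0:vtx[0], 0:vtx[1]].T.reshape(-1, 2)

    pts2d, pts3d = [], []
    for name in image_names:
        img = cv2.imread(name)
        found, corners = cv2.findChessboardCorners(img, vtx)
        if found:
            pts2d.append(corners)
            pts3d.append(board.reshape(-1, 1, 3))

    K = np.eye(3)        # camera matrix, refined by the calibration
    D = np.zeros((4,))   # fisheye distortion coefficients
    rms, K, D, _, _ = cv2.fisheye.calibrate(pts3d, pts2d, img.shape[1::-1], K, D,
                                            flags=cv2.fisheye.CALIB_FIX_SKEW)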

Note
On Ubuntu 16.04, the sample code requires OpenCV >= 2.4.10, which isn't available using apt.

The VPI samples include a set of input images that can be used. They are found in the /opt/nvidia/vpi1/samples/assets/fisheye directory.

To create a set of calibration images for a given lens, do the following:

  1. Print a checkerboard pattern on a piece of paper. The samples' assets directory provides a 10x7 checkerboard file, checkerboard_10x7.pdf, that can be used.
  2. Mount the fisheye lens on a camera.
  3. With the camera in a fixed position, take several pictures showing the checkerboard in different positions, covering a good part of the field of view.

Instructions

The command line parameters are:

-c W,H [-s win] <image1> [image2] [image3] ...

where

  • -c W,H: specifies the number of squares the checkerboard pattern has horizontally (W) and vertically (H).
  • -s win: (optional) the width, in pixels, of a search window around each internal checkerboard vertex (the point where 4 squares meet) used in a vertex position refinement stage; the refined vertex position is searched within this window (see the sketch after this list). If this parameter is omitted, the refinement stage is skipped.
  • imageN: set of calibration images
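
For reference, the -s value drives subpixel vertex refinement roughly as below (a sketch mirroring the sample; win, img, and corners are placeholders for the parsed option, a loaded calibration image, and the vertices returned by cv2.findChessboardCorners):

    import cv2

    if win is not None and win >= 2:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.0001)
        # cornerSubPix takes half the search window side, hence win//2
        corners = cv2.cornerSubPix(gray, corners, (win // 2, win // 2), (-1, -1), criteria)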

Here's one invocation example:

  • C++
    ./vpi_sample_11_fisheye -c 10,7 -s 22 ../assets/fisheye/*.jpg
  • Python
    python main.py -c 10,7 -s 22 ../assets/fisheye/*.jpg

This will correct the included set of calibration images, all captured using the checkerboard pattern that is also included. A 22x22 window around each internal checkerboard vertex is used to refine the vertex positions.

Results

Here are some input and output images produced by the sample application:

[Input / Corrected image pairs]

Source Code

For convenience, here's the code that is also installed in the samples directory.

Python:
import cv2
import sys
import vpi
import numpy as np
from argparse import ArgumentParser

# ============================
# Parse command line arguments

parser = ArgumentParser()
parser.add_argument('-c', metavar='W,H', required=True,
                    help='Checkerboard with WxH squares')

parser.add_argument('-s', metavar='win', type=int,
                    help='Search window width around checkerboard vertex used in refinement, default is 0 (disable refinement)')

parser.add_argument('images', nargs='+',
                    help='Input images taken with a fisheye lens camera')

args = parser.parse_args()

# Parse checkerboard size
try:
    cbSize = np.array([int(x) for x in args.c.split(',')])
except ValueError:
    exit("Error parsing checkerboard information")

# =========================================
# Calculate fisheye calibration from images

# OpenCV expects number of interior vertices in the checkerboard,
# not number of squares. Let's adjust for that.
vtxCount = cbSize - 1

# -------------------------------------------------
# Determine checkerboard coordinates in image space

imgSize = None
corners2D = []

for imgName in args.images:
    # Load input image and do some sanity check
    img = cv2.imread(imgName)
    if img is None:
        exit("Can't read " + imgName)
    curImgSize = (img.shape[1], img.shape[0])

    if imgSize is None:
        imgSize = curImgSize
    elif imgSize != curImgSize:
        exit("All images must have the same size")

    # Find the checkerboard pattern on the image, saving the 2D
    # coordinates of checkerboard vertices in corners.
    # Vertex is the point where 4 squares (2 white and 2 black) meet.
    found, corners = cv2.findChessboardCorners(img, tuple(vtxCount), flags=cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_NORMALIZE_IMAGE)
    if found:
        # Needs to perform further corner refinement?
        if args.s is not None and args.s >= 2:
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.0001)
            imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            corners = cv2.cornerSubPix(imgGray, corners, (args.s//2, args.s//2), (-1, -1), criteria)
        corners2D.append(corners)
    else:
        exit("Warning: checkerboard pattern not found in image {}".format(imgName))

# Create the vector that stores 3D coordinates for each checkerboard pattern on a space
# where X and Y are orthogonal and run along the checkerboard sides, and Z==0 in all points on
# checkerboard.
cbCorners = np.zeros((1, vtxCount[0]*vtxCount[1], 3))
cbCorners[0, :, :2] = np.mgrid[0:vtxCount[0], 0:vtxCount[1]].T.reshape(-1, 2)
corners3D = [cbCorners.reshape(-1, 1, 3) for i in range(len(corners2D))]

# ---------------------------------------------
# Calculate fisheye lens calibration parameters
camMatrix = np.eye(3)
coeffs = np.zeros((4,))
rms, camMatrix, coeffs, rvecs, tvecs = cv2.fisheye.calibrate(corners3D, corners2D, imgSize, camMatrix, coeffs, flags=cv2.fisheye.CALIB_FIX_SKEW)

# Print out calibration results
print("rms error: {}".format(rms))
print("Fisheye coefficients: {}".format(coeffs))
print("Camera matrix:")
print(camMatrix)

# ======================
# Undistort input images

# Create a uniform grid
grid = vpi.WarpGrid(imgSize)

# Create undistort warp map from the calibration parameters and the grid
undist_map = vpi.WarpMap.fisheye_correction(grid,
                                            K=camMatrix[0:2, :], X=np.eye(3, 4), coeffs=coeffs,
                                            mapping=vpi.FisheyeMapping.EQUIDISTANT)

# Go through all input images,
idx = 0
for imgName in args.images:
    # Load input image and do some sanity check
    img = cv2.imread(imgName)

    # Using the CUDA backend,
    with vpi.Backend.CUDA:
        # Convert image to NV12_ER, apply the undistortion map and convert image back to RGB8
        imgCorrected = vpi.asimage(img).convert(vpi.Format.NV12_ER).remap(undist_map, interp=vpi.Interp.CATMULL_ROM).convert(vpi.Format.RGB8)

    # Write undistorted image to disk
    cv2.imwrite("undistort_python{}_{:03d}.jpg".format(sys.version_info[0], idx), imgCorrected.cpu())
    idx += 1

# vim: ts=8:sw=4:sts=4:et:ai
C++:

#include <opencv2/core/version.hpp>

#if CV_MAJOR_VERSION >= 3
#    include <opencv2/imgcodecs.hpp>
#else
#    include <opencv2/highgui/highgui.hpp>
#endif

#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vpi/OpenCVInterop.hpp>

#include <string.h> // for basename(3) that doesn't modify its argument
#include <unistd.h> // for getopt
#include <vpi/Image.h>
#include <vpi/LensDistortionModels.h>
#include <vpi/Status.h>
#include <vpi/Stream.h>
#include <vpi/algo/ConvertImageFormat.h>
#include <vpi/algo/Remap.h>

#include <cassert>
#include <cstdio>
#include <iostream>
#include <sstream>

#define CHECK_STATUS(STMT)                                    \
    do                                                        \
    {                                                         \
        VPIStatus status = (STMT);                            \
        if (status != VPI_SUCCESS)                            \
        {                                                     \
            char buffer[VPI_MAX_STATUS_MESSAGE_LENGTH];       \
            vpiGetLastStatusMessage(buffer, sizeof(buffer));  \
            std::ostringstream ss;                            \
            ss << vpiStatusGetName(status) << ": " << buffer; \
            throw std::runtime_error(ss.str());               \
        }                                                     \
    } while (0);

static void PrintUsage(const char *progname, std::ostream &out)
{
    out << "Usage: " << progname << " <-c W,H> [-s win] <image1> [image2] [image3] ...\n"
        << " where,\n"
        << " W,H\tcheckerboard with WxH squares\n"
        << " win\tsearch window width around checkerboard vertex used\n"
        << "\tin refinement, default is 0 (disable refinement)\n"
        << " imageN\tinput images taken with a fisheye lens camera" << std::endl;
}

struct Params
{
    cv::Size vtxCount;                // Number of internal vertices the checkerboard has
    int searchWinSize;                // Search window size around the checkerboard vertex for refinement.
    std::vector<const char *> images; // Input image names.
};

static Params ParseParameters(int argc, char *argv[])
{
    Params params = {};

    cv::Size cbSize;

    opterr = 0;
    int opt;
    while ((opt = getopt(argc, argv, "hc:s:")) != -1)
    {
        switch (opt)
        {
        case 'h':
            PrintUsage(basename(argv[0]), std::cout);
            return {};

        case 'c':
            if (sscanf(optarg, "%d,%d", &cbSize.width, &cbSize.height) != 2)
            {
                throw std::invalid_argument("Error parsing checkerboard information");
            }

            // OpenCV expects number of interior vertices in the checkerboard,
            // not number of squares. Let's adjust for that.
            params.vtxCount.width  = cbSize.width - 1;
            params.vtxCount.height = cbSize.height - 1;
            break;

        case 's':
            if (sscanf(optarg, "%d", &params.searchWinSize) != 1)
            {
                throw std::invalid_argument("Error parsing search window size");
            }
            if (params.searchWinSize < 0)
            {
                throw std::invalid_argument("Search window size must be >= 0");
            }
            break;

        case '?':
            throw std::invalid_argument(std::string("Option -") + (char)optopt + " not recognized");
        }
    }

    for (int i = optind; i < argc; ++i)
    {
        params.images.push_back(argv[i]);
    }

    if (params.images.empty())
    {
        throw std::invalid_argument("At least one image must be defined");
    }

    if (cbSize.width <= 3 || cbSize.height <= 3)
    {
        throw std::invalid_argument("Checkerboard size must have at least 3x3 squares");
    }

    if (params.searchWinSize == 1)
    {
        throw std::invalid_argument("Search window size must be 0 (default) or >= 2");
    }

    return params;
}

int main(int argc, char *argv[])
{
    // OpenCV image that will be wrapped by a VPIImage.
    // Define it here so that it's destroyed *after* the wrapper is destroyed
    cv::Mat cvImage;

    // VPI objects that will be used
    VPIStream stream = NULL;
    VPIPayload remap = NULL;
    VPIImage tmpIn = NULL, tmpOut = NULL;
    VPIImage vimg = nullptr;

    int retval = 0;

    try
    {
        // First parse command line parameters
        Params params = ParseParameters(argc, argv);
        if (params.images.empty()) // user just wanted the help message?
        {
            return 0;
        }

        // Where to store checkerboard 2D corners of each input image.
        std::vector<std::vector<cv::Point2f>> corners2D;

        // Store image size. All input images must have the same size.
        cv::Size imgSize = {};

        for (unsigned i = 0; i < params.images.size(); ++i)
        {
            // Load input image and do some sanity check
            cv::Mat img = cv::imread(params.images[i]);
            if (img.empty())
            {
                throw std::runtime_error("Can't read " + std::string(params.images[i]));
            }

            if (imgSize == cv::Size{})
            {
                imgSize = img.size();
            }
            else if (imgSize != img.size())
            {
                throw std::runtime_error("All images must have same size");
            }

            // Find the checkerboard pattern on the image, saving the 2D
            // coordinates of checkerboard vertices in cbVertices.
            // Vertex is the point where 4 squares (2 white and 2 black) meet.
            std::vector<cv::Point2f> cbVertices;

            if (findChessboardCorners(img, params.vtxCount, cbVertices,
                                      cv::CALIB_CB_ADAPTIVE_THRESH + cv::CALIB_CB_NORMALIZE_IMAGE))
            {
                // Needs to perform further corner refinement?
                if (params.searchWinSize >= 2)
                {
                    cv::Mat gray;
                    cvtColor(img, gray, cv::COLOR_BGR2GRAY);

                    cornerSubPix(gray, cbVertices, cv::Size(params.searchWinSize / 2, params.searchWinSize / 2),
                                 cv::Size(-1, -1),
                                 cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.0001));
                }

                // save this image's 2D vertices in vector
                corners2D.push_back(std::move(cbVertices));
            }
            else
            {
                std::cerr << "Warning: checkerboard pattern not found in image " << params.images[i] << std::endl;
            }
        }

        // Create the vector that stores 3D coordinates for each checkerboard pattern on a space
        // where X and Y are orthogonal and run along the checkerboard sides, and Z==0 in all points on
        // checkerboard.
        std::vector<cv::Point3f> initialCheckerboard3DVertices;
        for (int i = 0; i < params.vtxCount.height; ++i)
        {
            for (int j = 0; j < params.vtxCount.width; ++j)
            {
                // since we're not interested in extrinsic camera parameters,
                // we can assume that checkerboard square size is 1x1.
                initialCheckerboard3DVertices.emplace_back(j, i, 0);
            }
        }

        // Initialize a vector with initial checkerboard positions for all images
        std::vector<std::vector<cv::Point3f>> corners3D(corners2D.size(), initialCheckerboard3DVertices);

        // Camera intrinsic parameters, initially identity (will be estimated by calibration process).
        using Mat3     = cv::Matx<double, 3, 3>;
        Mat3 camMatrix = Mat3::eye();

        // stores the fisheye model coefficients.
        std::vector<double> coeffs(4);

        // VPI currently doesn't support skew parameter on camera matrix, make sure
        // calibration process fixes it to 0.
        int flags = cv::fisheye::CALIB_FIX_SKEW;

        // Run calibration
        {
            cv::Mat rvecs, tvecs; // stores rotation and translation for each camera, not needed now.
            double rms = cv::fisheye::calibrate(corners3D, corners2D, imgSize, camMatrix, coeffs, rvecs, tvecs, flags);
            printf("rms error: %lf\n", rms);
        }

        // Output calibration result.
        printf("Fisheye coefficients: %lf %lf %lf %lf\n", coeffs[0], coeffs[1], coeffs[2], coeffs[3]);

        printf("Camera matrix:\n");
        printf("[%lf %lf %lf; %lf %lf %lf; %lf %lf %lf]\n", camMatrix(0, 0), camMatrix(0, 1), camMatrix(0, 2),
               camMatrix(1, 0), camMatrix(1, 1), camMatrix(1, 2), camMatrix(2, 0), camMatrix(2, 1), camMatrix(2, 2));

        // Now use VPI to undistort the input images:

        // Allocate a dense map.
        VPIWarpMap map            = {};
        map.grid.numHorizRegions  = 1;
        map.grid.numVertRegions   = 1;
        map.grid.regionWidth[0]   = imgSize.width;
        map.grid.regionHeight[0]  = imgSize.height;
        map.grid.horizInterval[0] = 1;
        map.grid.vertInterval[0]  = 1;
        CHECK_STATUS(vpiWarpMapAllocData(&map));

        // Initialize the fisheye lens model with the coefficients given by calibration procedure.
        VPIFisheyeLensDistortionModel distModel = {};
        distModel.mapping                       = VPI_FISHEYE_EQUIDISTANT;
        distModel.k1                            = coeffs[0];
        distModel.k2                            = coeffs[1];
        distModel.k3                            = coeffs[2];
        distModel.k4                            = coeffs[3];

        // Fill up the camera intrinsic parameters given by camera calibration procedure.
        VPICameraIntrinsic K = {};
        for (int i = 0; i < 2; ++i)
        {
            for (int j = 0; j < 3; ++j)
            {
                K[i][j] = camMatrix(i, j);
            }
        }

        // Camera extrinsics is identity.
        VPICameraExtrinsic X = {};
        X[0][0] = X[1][1] = X[2][2] = 1;

        // Generate a warp map to undistort an image taken from fisheye lens with
        // given parameters calculated above.
        CHECK_STATUS(vpiWarpMapGenerateFromFisheyeLensDistortionModel(K, X, K, &distModel, &map));

        // Create the Remap payload for undistortion given the map generated above.
        CHECK_STATUS(vpiCreateRemap(VPI_BACKEND_CUDA, &map, &remap));

        // Now that the remap payload is created, we can destroy the warp map.
        vpiWarpMapFreeData(&map);

        // Create a stream where operations will take place. We're using CUDA
        // processing.
        CHECK_STATUS(vpiStreamCreate(VPI_BACKEND_CUDA, &stream));

        // Temporary input and output images in NV12 format.
        CHECK_STATUS(vpiImageCreate(imgSize.width, imgSize.height, VPI_IMAGE_FORMAT_NV12_ER, 0, &tmpIn));
        CHECK_STATUS(vpiImageCreate(imgSize.width, imgSize.height, VPI_IMAGE_FORMAT_NV12_ER, 0, &tmpOut));

        // For each input image,
        for (unsigned i = 0; i < params.images.size(); ++i)
        {
            // Read it from disk.
            cvImage = cv::imread(params.images[i]);
            assert(!cvImage.empty());

            // Wrap it into a VPIImage
            if (vimg == nullptr)
            {
                // Now create a VPIImage that wraps it.
                CHECK_STATUS(vpiImageCreateOpenCVMatWrapper(cvImage, 0, &vimg));
            }
            else
            {
                CHECK_STATUS(vpiImageSetWrappedOpenCVMat(vimg, cvImage));
            }

            // Convert BGR -> NV12
            CHECK_STATUS(vpiSubmitConvertImageFormat(stream, VPI_BACKEND_CUDA, vimg, tmpIn, NULL));

            // Undistorts the input image.
            CHECK_STATUS(vpiSubmitRemap(stream, VPI_BACKEND_CUDA, remap, tmpIn, tmpOut, VPI_INTERP_CATMULL_ROM,
                                        VPI_BORDER_ZERO, 0));

            // Convert the result NV12 back to BGR, writing back to the input image.
            CHECK_STATUS(vpiSubmitConvertImageFormat(stream, VPI_BACKEND_CUDA, tmpOut, vimg, NULL));

            // Wait until conversion finishes.
            CHECK_STATUS(vpiStreamSync(stream));

            // Since vimg is wrapping the OpenCV image, the result is already there.
            // We just have to save it to disk.
            char buf[64];
            snprintf(buf, sizeof(buf), "undistort_%03d.jpg", i);
            imwrite(buf, cvImage);
        }
    }
    catch (std::exception &e)
    {
        std::cerr << "Error: " << e.what() << std::endl;
        PrintUsage(basename(argv[0]), std::cerr);

        retval = 1;
    }

    vpiStreamDestroy(stream);
    vpiPayloadDestroy(remap);
    vpiImageDestroy(tmpIn);
    vpiImageDestroy(tmpOut);
    vpiImageDestroy(vimg);

    return retval;
}

// vim: ts=8:sw=4:sts=4:et:ai