Ok, so I'm tracking a floor with a webcam that has a custom super-wide-angle lens. Fortunately, the floor consists of tiles, so I can mark the floor quite exactly.
So I've created around 80 points on the floor which correspond to 80 points mapped the right way.
The problem is that I can't find any solution for the interpolation of the points.
I've added this image to show the problem:
I would like to know the position of the red dot on the floor…
I know the x and y positions of both sets of points, so a solution where I could create a matrix of points would be great.
It seems you have a pretty big lens distortion in there. You'll need to undistort the image, or even better, undistort the positions of the points; then it should be much easier.
OpenCV has utilities for that. First you'll need to calculate the distortion coefficients of your lens using cvCalibrateCamera2:
You'll need to print a checkerboard pattern and move it in front of the camera until you get enough points to calculate the distortion. It's kind of hard to get them right, though, and you'll need to adjust them manually a bit once you've done the checkerboard calibration.
There's a whole chapter on this in the OpenCV book, and there's an example in the forum:
The only catch is that the example in the forums is for OpenCV 1.0; in OpenCV 1.1 there's one more coefficient for the radial distortion, which works much better for cameras with big radial distortion like yours.
That will return two matrices, the intrinsic matrix and the distortion coefficients. With those you can use cvUndistortPoints to get the positions of the points without distortion, so you'll get something that resembles the original disposition of the points much more closely, without the curvature you see in the image now.
Once you have that, it should just be a matter of measuring the floor and doing some simple multiplication to get the real measurements.
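Just to make that last step concrete, a minimal sketch (every name here is invented for illustration): once the points are undistorted and the grid is regular, going from pixels to floor units is just an offset and a scale per axis, measured from the tile size.

```cpp
#include <utility>

// Hypothetical last step, all names are mine: map an undistorted pixel
// position to floor centimeters, given where the grid origin sits in the
// image, how many pixels one tile spans, and the real tile size.
std::pair<double, double> pixelToFloorCm(double px, double py,
                                         double originX, double originY,
                                         double pixelsPerTileX, double pixelsPerTileY,
                                         double tileSizeCm) {
    double tilesX = (px - originX) / pixelsPerTileX; // tiles away from the origin
    double tilesY = (py - originY) / pixelsPerTileY;
    return { tilesX * tileSizeCm, tilesY * tileSizeCm };
}
```

For example, with a 30 cm tile that spans 50 px in the undistorted image, a point 50 px to the right of the origin sits 30 cm away on the floor.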
Ok, thanks for that. But why can't I just use my markers as coordinates for the calibration instead of the chessboard? And besides, why can't I just compute some interpolation instead of trying to undistort a full webcam stream? That would just be slow and unnecessary, right?
The undistortion of the points is not linear and needs 7 coefficients: 3 for the radial distortion, 2 for the tangential distortion and 2 for the center (for these last two you can usually take the center of the image in pixels). If you can make an equation system out of your points to solve for those coefficients, you're almost done. OpenCV does that automatically with the checkerboard pattern; that's why I was suggesting using it. Perhaps you can feed the checkerboard functions your own points, but they usually need several points of view, since they also have to estimate the pose of the camera to solve for the distortion coefficients.
And yes, you can undistort just the points instead of the whole image using cvUndistortPoints: that function takes the two matrices returned by cvCalibrateCamera2 plus all your points, and returns the corrected positions of all the points. To do that it uses an iterative method that looks like this:
double fx = focalX;
double fy = focalY;
double ifx = 1./fx;
double ify = 1./fy;
double cx = centerX;
double cy = centerY;

double x, y, x0, y0;
x = pos.x;
y = pos.y;

// move to normalized coordinates relative to the center
x0 = x = (x - cx)*ifx;
y0 = y = (y - cy)*ify;

// compensate distortion iteratively
for( int j = 0; j < 5; j++ )
{
    double r2 = x*x + y*y;
    double icdist = 1./(1 + ((radialDist3*r2 + radialDistY)*r2 + radialDistX)*r2);
    double deltaX = 2*tangentDistX*x*y + tangentDistY*(r2 + 2*x*x);
    double deltaY = tangentDistX*(r2 + 2*y*y) + 2*tangentDistY*x*y;
    x = (x0 - deltaX)*icdist;
    y = (y0 - deltaY)*icdist;
}

// back to pixel coordinates
pos.x = x*fx + cx;
pos.y = y*fy + cy;
Apart from the distortion coefficients, you also need the focal distances in X and Y, relative to the size of the image in pixels.
Of course, all of this is only if you need to be really precise. If not, since you have many points and know the dimensions of the tiles, a linear interpolation of the point against the 4 points that define the tile it is in can be enough.
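If you go the interpolation route, here's one way to do it (my own sketch, nothing from OpenCV, and the names are made up): split each tile quad into two triangles and map the query point with barycentric coordinates. This is exact if the image-to-floor mapping were affine, and a reasonable local approximation under the lens distortion since each tile is small.

```cpp
#include <cmath>

struct Pt { double x, y; };

// Barycentric mapping: the image-space triangle (a, b, c) corresponds to
// the floor-space triangle (fa, fb, fc). Given an image point p inside
// (a, b, c), return its floor position. For a tile, call this with
// whichever of the quad's two triangles contains p.
Pt mapThroughTriangle(Pt p, Pt a, Pt b, Pt c, Pt fa, Pt fb, Pt fc) {
    double det = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    double w0 = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / det;
    double w1 = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / det;
    double w2 = 1.0 - w0 - w1;
    return { w0 * fa.x + w1 * fb.x + w2 * fc.x,
             w0 * fa.y + w1 * fb.y + w2 * fc.y };
}
```

For example, the midpoint of the hypotenuse of an image triangle lands on the midpoint of the corresponding floor edge, whatever the two triangles' shapes are.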