Camera Calibration (Distortion)

Hey OF,

Working on some multi-touch stuff. Instead of starting a thread on all things multi-touch I just wanted to ask some stuff about camera calibration & distortion.

Here is a video of how I calibrate a camera to a video projection. For touch surfaces I sometimes use wide-angle lenses. This causes major lens distortion and an offset between the tracked blobs and the actual touches. I was looking into OpenCV's Camera Calibration and 3D Reconstruction docs; interesting stuff here:


void cvUndistort2( const CvArr* src, CvArr* dst,  
                   const CvMat* intrinsic_matrix,  
                   const CvMat* distortion_coeffs );  

looks like opencv is all over this. Anyone have suggestions on how to get started, or better techniques for making this calibration solid?


I know stefanix has done this before – I’ll ping him about this.

I usually work with just quad warping, but I can see that in small spaces / with wide-angle lenses, undistortion is super important.

my partner golan has also done some research on this, will clue him in on this. We’ve used some custom code, written with help from paul bourke, to undistort lenses. We did it w/ Intel’s IPP to remap pixels, but maybe it could be done with opencv’s remap… he might know a lot about what’s possible.
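To make the remap idea concrete, here's a minimal, dependency-free sketch of what building an undistortion remap table boils down to (the same idea behind cvInitUndistortMap or an IPP remap): for every output pixel, precompute which source pixel to sample. The function name, image size, and the single-coefficient radial model (k1 only) are all illustrative assumptions, not the actual code we used.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Per-pixel sampling maps for undistortion (hypothetical sketch).
struct RemapTables {
    std::vector<float> mapX, mapY; // source coords to sample for each dst pixel
    int w, h;
};

RemapTables buildUndistortMap(int w, int h, float fx, float fy,
                              float cx, float cy, float k1) {
    RemapTables t{std::vector<float>(w * h), std::vector<float>(w * h), w, h};
    for (int v = 0; v < h; ++v) {
        for (int u = 0; u < w; ++u) {
            // Normalize the ideal (undistorted) pixel coordinate.
            float x = (u - cx) / fx;
            float y = (v - cy) / fy;
            float r2 = x * x + y * y;
            // Forward-distort it to find where it lands in the raw image.
            float xd = x * (1 + k1 * r2);
            float yd = y * (1 + k1 * r2);
            t.mapX[v * w + u] = xd * fx + cx;
            t.mapY[v * w + u] = yd * fy + cy;
        }
    }
    return t;
}
```

Once built, the tables are reused every frame, so the per-frame cost is just the sampling pass, which is why remap-style undistortion is fast.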


Thanks Zach, I saw some of stefanix’s work, it’s looking great. This is very rough but getting there: …-face-beta/

Would love to try any suggestions you have. Thanks a ton, OF is really getting there.


I’d be super interested too in suggestions, code… (especially if it has to do with IPP :slight_smile:)

We have a huge table and we don’t really want to use two cameras…

thanks a lot.

In the latest version of the ofOpenCV addon stefanix has added this function:

void ofCvImage::undistort( float radialDistX, float radialDistY,  
                           float tangentDistX, float tangentDistY,  
                           float focalX, float focalY,  
                           float centerX, float centerY )  
{  
    float camIntrinsics[] = { focalX, 0, centerX, 0, focalY, centerY, 0, 0, 1 };  
    float distortionCoeffs[] = { radialDistX, radialDistY, tangentDistX, tangentDistY };  
    cvUnDistortOnce( cvImage, cvImageTemp, camIntrinsics, distortionCoeffs, 1 );  
}  

There are two new methods in the addon. One is the undistort method chris mentioned. The other is the remap method. For remap you first have to set up two displacement IplImage objects.

In the past I have been using something like this:

void initUndistort( float radialDistX, float radialDistY,  
                    float tangentDistX, float tangentDistY,  
                    float focalX, float focalY,  
                    float centerX, float centerY,  
                    IplImage* undistortMapX, IplImage* undistortMapY )  
{  
    float camIntrinsics[] = { focalX, 0, centerX, 0, focalY, centerY, 0, 0, 1 };  
    float distortionCoeffs[] = { radialDistX, radialDistY, tangentDistX, tangentDistY };  
    CvMat camM = cvMat( 3, 3, CV_32FC1, camIntrinsics );  
    CvMat dist = cvMat( 1, 4, CV_32FC1, distortionCoeffs );  
    cvInitUndistortMap( &camM, &dist, undistortMapX, undistortMapY );  
}  

and then you can use undistortMapX and undistortMapY like this:

myOfCvImage.remap( undistortMapX, undistortMapY );  

remap(…) is very efficient when you also need to do some other transformation. To add a translation to the distortion maps you could simply do something like:

    cvAddS( undistortMapX, cvScalarAll( -translateX ), undistortMapX );  
    cvAddS( undistortMapY, cvScalarAll( -translateY ), undistortMapY );  

For both undistort(…) and remap(…) you need to know the focal length, radial distortion and tangential distortion. You can either play with the values manually or use a calibration chessboard pattern to figure them out. I can probably dig out some code for the whole chessboard calibration thing if anybody is interested. For my multitouch projects I usually had slightly better results hand-tuning the coefficients, though.

It’s not overly intuitive how the intrinsic camera parameters affect the distortion. The distortion coefficients and the focal length are interrelated: basically, the higher the focal length, the less effect the distortion numbers have.

centerX, centerY should generally be the center of the frame (eg 160, 120). The focal length doesn’t absolutely have to match the camera lens, but that’s probably a good starting point. To get a feel for the distortion coefficients, you probably want to set them all to 0 and then increase them individually, slightly.
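To put a number on the focal-length point: with the same coefficient, a larger focal length shrinks the normalized radius, so the distortion displacement in pixels drops. A quick, purely illustrative check (single-coefficient radial model, hypothetical function name):

```cpp
#include <cassert>
#include <cmath>

// Pixel displacement caused by radial distortion, 1-D slice of the model.
double distortionShiftPx(double u, double cx, double f, double k1) {
    double x = (u - cx) / f;          // normalized coordinate
    double xd = x * (1 + k1 * x * x); // apply radial distortion
    return std::fabs((xd - x) * f);   // displacement back in pixels
}
```

For a pixel 150 px from center with k1 = 0.2, f = 200 gives about a 17 px shift while f = 400 gives about 4 px, which is why the same coefficients look much tamer with a longer focal length.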

Happy Hacking,

This is awesome man, thanks for all the feedback. I can’t wait to try this. I’ll give it a go and post some thoughts.

Thanks again.


thanks a ton for the code and great explanation.

We’ve made a little app for calibrating the “hand tuning way” and, although we’re getting better results, it’s proving really difficult to get reasonable precision. We’re applying the undistort first and then a quad warp… is that correct/possible?

also any tips, suggestions… on how to achieve this, are super welcome. otherwise we’ll go for the chessboard.

btw, I’ve found that touchlib now has a barrel distortion filter, and a calibration app that could be helpful

thanks a lot!

So I did some research and it looks like the chessboard is the best…

Trying to build a class for this calibration, but man, there is little to no documentation or examples for cvFindChessboardCorners(…)

this is what I’ve got so far. I understand how it’s going to work, I think…
Get the corners, then pass those via a 3x3 matrix and a vec4 to cvRemap(…)

It’s based on a class ofCalibration, and I have a video input -> gray image.

ofCalibration cab;  

int ofCalibration::ofFindChessboardCorners( const void* image, CvSize pattern_size,  
                                            CvPoint2D32f* corners, int* corner_count,  
                                            int flags ) {  
    return cvFindChessboardCorners( image, pattern_size, corners, corner_count, flags );  
}  

Just a test to see if it will compile.
Then I pass the following in update…

CvPoint2D32f* corners;  
int* corner_count;  
ofCvGrayscaleImage input;  
input = grayImg.getPixels();  
int c = cab.ofFindChessboardCorners(input.getCvImage(), cvSize(7, 7), corners, corner_count, NULL);  

This crashed amazingly
Any help would be killer.

PS: I’m putting this all in an ofTouch class that I’ll post.


Did it say why it crashed?
Usually in Debug Mode you will get a message that gives a little clue to the reasons behind the crash. It could be that corners needed to be allocated first.

cvFindChessboardCorners(): make sure you allocate memory for the corner points. I tried a regular 8x8 board, 4x7 and 7x4; the number of internal corners per row or column doesn’t matter. However, the version of OpenCV matters: the release version always returned 0 (I guess because some optimizers are not installed on my computer), but the debug version finally worked.

Could that be it?

I got it to compile :slight_smile:

I’m getting the same info as you said.

in setup:

pattern_size      = cvSize(7,7);  
corner_count      = 0;  
pattern_was_found = 0;  
corners           = (CvPoint2D32f*)malloc((pattern_size.width*pattern_size.height + 1) * sizeof(CvPoint2D32f));  

in update

(reading a video File)  
pattern_was_found = cvFindChessboardCorners(bwImg.getCvImage(), pattern_size, corners, corner_count, CV_CALIB_CB_ADAPTIVE_THRESH);  
cout << "FOUND: " << pattern_was_found << endl;  

This is what the image looks like:

I’m using the optimized version of OpenCV; not sure what you mean by debug version.

Here is some example code to do the calibration:

(watch out for version woes)

happy hacking,

wow that’s right in time!

we’ve tried different approaches which didn’t work, and yesterday we decided to try this chessboard thing, so you’ve definitely made our day.

thanks a ton!

Thanks a ton, this is so helpful ohh man! Thanks again for the killer work.

Hey, I found something really cool. If you want to use ofCvCameraCalibration to get the camera coefficients, and you are doing something like blob tracking where you just need the undistorted x,y point but don’t care about undistorting the whole image (because of the CPU drain), you can use this:

void Undistort( double inPoint[2], double camMatrix[9],  
                double dist[4], double outPoint[2] )  
{  
        double u0 = camMatrix[2], v0 = camMatrix[5], fx = camMatrix[0], fy = camMatrix[4];  
        double _fx = 1.0/fx, _fy = 1.0/fy;  
        double k1 = dist[0], k2 = dist[1], p1 = dist[2], p2 = dist[3];  
        double u = inPoint[0];  
        double v = inPoint[1];  
        double y = (v - v0)*_fy;  
        double y2 = y*y;  
        double ky = 1 + (k1 + k2*y2)*y2;  
        double k2y = 2*k2*y2;  
        double _2p1y = 2*p1*y;  
        double _3p1y2 = 3*p1*y2;  
        double p2y2 = p2*y2;  
        double x = (u - u0)*_fx;  
        double x2 = x*x;  
        double kx = (k1 + k2*x2)*x2;  
        double d = kx + ky + k2y*x2;  
        double _u = fx*(x*(d + _2p1y) + p2y2 + (3*p2)*x2) + u0;  
        double _v = fy*(y*(d + (2*p2)*x) + _3p1y2 + p1*x2) + v0;  
        outPoint[0] = _u;  
        outPoint[1] = _v;  
}  
void Distort( double inPoint[2], double camMatrix[9],  
              double dist[4], double outPoint[2] )  
{  
        double fx = camMatrix[0];  
        double fy = camMatrix[4];  
        double cx = camMatrix[2];  
        double cy = camMatrix[5];  
        double x = (inPoint[0]-cx)/fx;  
        double y = (inPoint[1]-cy)/fy;  
        double r2, r4, a1, a2, a3, cdist;  
        double xd, yd;  
        r2 = x*x + y*y;  
        r4 = r2*r2;  
        a1 = 2*x*y;  
        a2 = r2 + 2*x*x;  
        a3 = r2 + 2*y*y;  
        cdist = 1 + dist[0]*r2 + dist[1]*r4;  
        xd = x*cdist + dist[2]*a1 + dist[3]*a2;  
        yd = y*cdist + dist[2]*a3 + dist[3]*a1;  
        outPoint[0] = xd*fx + cx;  
        outPoint[1] = yd*fy + cy;  
}  

It’s in C but can easily be worked into OF, and definitely into ofCvCameraCalibration. It takes the input point, intrinsic_matrix, dist_coeffs and the output coordinate container.
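One caveat: as far as I can tell, the closed-form Undistort above is only an approximate inverse of Distort. Another common approach (what OpenCV's point undistortion does internally, iteratively) is to invert the forward model with a few fixed-point iterations. Here's a self-contained sketch with a radial-only model; function names are illustrative, not the addon's API.

```cpp
#include <cassert>
#include <cmath>

// Forward radial distortion (k1, k2 only): ideal -> distorted,
// in normalized camera coordinates.
void distortPt(double x, double y, double k1, double k2,
               double& xd, double& yd) {
    double r2 = x * x + y * y;
    double c = 1 + k1 * r2 + k2 * r2 * r2;
    xd = x * c;
    yd = y * c;
}

// Inverse by fixed-point iteration: start from the distorted point,
// repeatedly divide out the distortion factor evaluated at the
// current estimate. Converges quickly for mild distortion.
void undistortPt(double xd, double yd, double k1, double k2,
                 double& x, double& y) {
    x = xd; y = yd; // initial guess
    for (int i = 0; i < 10; ++i) {
        double r2 = x * x + y * y;
        double c = 1 + k1 * r2 + k2 * r2 * r2;
        x = xd / c;
        y = yd / c;
    }
}
```

Distorting a point and then undistorting it recovers the original to well under a pixel, which is plenty for blob centroids.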

[edit] posted a clean updated code that fits into the addon in the next page.


ok here it is. In the header add

ofPoint undistortPoints(float _x, float _y);  
ofPoint distortPoints(float _x, float _y);  

and in the cpp add

ofPoint ofCvCameraCalibration::undistortPoints(float _x, float _y) {  
	float u0 = camIntrinsics[2],  
			v0 = camIntrinsics[5],  
			fx = camIntrinsics[0],  
			fy = camIntrinsics[4];   
	float _fx = 1.0/fx,  
			_fy = 1.0/fy;   
	float k1 = distortionCoeffs[0],  
			k2 = distortionCoeffs[1],  
			p1 = distortionCoeffs[2],  
			p2 = distortionCoeffs[3];   
	float y			= (_y - v0)*_fy;   
	float y2		= y*y;   
	float ky		= 1 + (k1 + k2*y2)*y2;   
	float k2y		= 2*k2*y2;   
	float _2p1y		= 2*p1*y;   
	float _3p1y2	= 3*p1*y2;   
	float p2y2		= p2*y2;   
	float x		= (_x - u0)*_fx;   
	float x2	= x*x;   
	float kx	= (k1 + k2*x2)*x2;   
	float d		= kx + ky + k2y*x2;   
	float _u	= fx*(x*(d + _2p1y) + p2y2 + (3*p2)*x2) + u0;   
	float _v	= fy*(y*(d + (2*p2)*x) + _3p1y2 + p1*x2) + v0;   
	return ofPoint(_u, _v);  
}  
ofPoint ofCvCameraCalibration::distortPoints(float _x, float _y) {  
	float fx = camIntrinsics[0];   
	float fy = camIntrinsics[4];   
	float cx = camIntrinsics[2];   
	float cy = camIntrinsics[5];   
	float x = (_x - cx)/fx;   
	float y = (_y - cy)/fy;   
	float r2, r4, a1, a2, a3, cdist;   
	float xd, yd;   
	r2 = x*x + y*y;   
	r4 = r2*r2;   
	a1 = 2*x*y;   
	a2 = r2 + 2*x*x;   
	a3 = r2 + 2*y*y;   
	cdist	= 1 + distortionCoeffs[0]*r2 + distortionCoeffs[1]*r4;   
	xd		= x*cdist + distortionCoeffs[2]*a1 + distortionCoeffs[3]*a2;   
	yd		= y*cdist + distortionCoeffs[2]*a3 + distortionCoeffs[3]*a1;   
	float _u = xd*fx + cx;   
	float _v = yd*fy + cy;   
	return ofPoint(_u, _v);  
}  

I think I could do all of that in the return statement. Anyway, hope this makes it into the next release.


thanks for the code!

I did something similar for Laser Tag - warping the whole image was too slow for older machines so the alternate method masked the camera image with the quad and undistorted the centroid coordinates of the largest blob inside the quad. I think doing it this way gave a 10-15 fps improvement - so it makes quite a difference! :smiley:

For most things it was just as accurate doing it the ‘quick’ way, but warping the whole image made a big difference when the laser point was really small, as the warping would increase its size.

Well, with this it will be even faster, because all you have to do is run the calibration once to get the correct camIntrinsics and distortionCoeffs, then just grab the x,y. But this is definitely not something to do for a whole image you want to draw. This is an example of how I would use it: I’d have a blob tracker with a setup mode. If you have something like a wide-angle lens, this comes in handy. Setup mode would ask you to place a checkerboard pattern in front of the camera and take a few snapshots, then calculate all the numbers. Now you can have a function like:

ofPoint camToscreen( ofxCvBlob * blob ) {  
    return calibrate.undistortPoints( blob->centroid.x, blob->centroid.y );  
}  

This would give you the correct undistorted x,y coordinates without actually doing the heavy lifting of undistorting the pixels in the image.



Nice ding.

I think that’s worth trying out.

I’d really like to be able to do this without using the checkerboard though.


I’ve been trying this and, after some time going crazy trying to figure out why it wasn’t working, I realized that cvFindChessboardCorners won’t find the chessboard corners with the OpenCV 1.1 included in 006. I went back to OpenCV 1.0 and everything works. I’ve tried both under Linux and Mac with the same result.