opencv and solvePnP?

I’m working with some points that I’m passing to cv::solvePnP() to get a rough estimate of planar pose, and it’s going pretty strangely so far. I’m puzzled about what exactly solvePnP() returns (the rotation looks like Euler angles, but as far as I can tell it’s actually a Rodrigues rotation vector) and how to use those values in the OF GL view. I’ve tried something like:

  
  
            // after the first solve, feed the previous rvec/tvec back in as an
            // initial guess (useExtrinsicGuess = true)
            if(hasRunPNP) {  
                cv::solvePnP(modelPts, imgPts, m, distortion_coefficients, outR, outT, true);  
            } else {  
                cv::solvePnP(modelPts, imgPts, m, distortion_coefficients, outR, outT);  
                hasRunPNP = true;  
            }  
            // really simplified smoothing              
            if(planarRotationVecBuffer.size() > 20)   
                planarRotationVecBuffer.erase(planarRotationVecBuffer.begin());  
  
            planarRotationVecBuffer.push_back(outR);  
            planarRotationVec = cv::Mat::zeros(3, 1, CV_64F);  
              
            for ( int i = 0; i < planarRotationVecBuffer.size(); i++) {  
                planarRotationVec += planarRotationVecBuffer[i];  
            }  
              
            // trying this out but passing the result into glMultMatrix or glLoadMatrix looks insane  
            planarRotationVec = planarRotationVec/planarRotationVecBuffer.size();  
            cv::Rodrigues(planarRotationVec, rotM);  
              
            // copy the rotation matrix + translation into a column-major GL matrix,
            // flipping the Y and Z rows (OpenCV: +Y down / +Z forward, OpenGL: +Y up / -Z forward)
            m2[0]  = rotM.ptr<double>()[0];  
            m2[1]  = -rotM.ptr<double>()[3];  
            m2[2]  = -rotM.ptr<double>()[6];  
            m2[3]  = 0;  
            m2[4]  = rotM.ptr<double>()[1];  
            m2[5]  = -rotM.ptr<double>()[4];  
            m2[6]  = -rotM.ptr<double>()[7];  
            m2[7]  = 0;  
            m2[8]  = rotM.ptr<double>()[2];  
            m2[9]  = -rotM.ptr<double>()[5];  
            m2[10] = -rotM.ptr<double>()[8];  
            m2[11] = 0;  
            m2[12] = outT.ptr<double>()[0];  
            m2[13] = -outT.ptr<double>()[1];  
            m2[14] = -outT.ptr<double>()[2];  
            m2[15] = 1;  
  
  

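For context, here’s roughly how I’m applying m2 in draw() — just a sketch, assuming m2 is declared as a GLdouble m2[16] member (the names are mine, not anything from OF):

  
            // rough sketch of how m2 gets applied each frame in draw()
            glMatrixMode(GL_MODELVIEW);  
            glPushMatrix();  
            glMultMatrixd(m2);   // m2 is already column-major, so no transpose  
            // ...draw the planar object at the estimated pose here...  
            glPopMatrix();  
  
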
Curious if anyone has worked with this before. I know that it can work outside of OF, but I can’t get it to work in OF.

Ok, I’m closer, but still stumped on the maths:

  
  
            // decompose into Euler angles, assuming R = Rz(phi) * Ry(theta) * Rx(psi);
            // rotM.inv() == rotM.t() here since it's a pure rotation
            double R[3][3];  
            memcpy(R, cv::Mat(rotM.inv()).data, sizeof(double)*9);  
              
            if (R[2][0] != -1 && R[2][0] != 1) {  
                theta1 = -asin(R[2][0]);  
                theta2 = CV_PI-theta1;  
                psi1 = atan2(R[2][1] / cos(theta1) , R[2][2] / cos(theta1));  
                psi2 = atan2(R[2][1] / cos(theta2) , R[2][2] / cos(theta2));  
                phi1 = atan2(R[1][0] / cos(theta1), R[0][0] / cos(theta1));  
                phi2 = atan2(R[1][0] / cos(theta2), R[0][0] / cos(theta2));  
            } else {  
                phi = 0; //can set to 0   
                if (R[2][0] == -1) {  
                    theta = CV_PI/2.0;  
                    psi = phi + atan2(R[0][1], R[0][2]);  
                } else {  
                    theta = -CV_PI/2.0;  
                    psi = -phi + atan2(-R[0][1], -R[0][2]);  
                }  
            }  
  
  

and then:

  
        ofRotateX(ofRadToDeg(-psi2));  
        ofRotateY(ofRadToDeg(-phi2));  
        ofRotateZ(ofRadToDeg(-theta2));  

Still off though: it’s rotating on X and Z correctly, but not Y :confused:
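
One sanity check I’ve been meaning to try is recomposing the matrix from the extracted angles and comparing it to the one I decomposed — a rough sketch, assuming the phi1/theta1/psi1 from above and the Rz(phi)*Ry(theta)*Rx(psi) convention:

  
        // rebuild Rz(phi)*Ry(theta)*Rx(psi) from the extracted angles and
        // compare against the matrix that was decomposed above
        cv::Mat Rx = (cv::Mat_<double>(3,3) <<  
            1, 0, 0,  
            0, cos(psi1), -sin(psi1),  
            0, sin(psi1),  cos(psi1));  
        cv::Mat Ry = (cv::Mat_<double>(3,3) <<  
             cos(theta1), 0, sin(theta1),  
             0, 1, 0,  
            -sin(theta1), 0, cos(theta1));  
        cv::Mat Rz = (cv::Mat_<double>(3,3) <<  
            cos(phi1), -sin(phi1), 0,  
            sin(phi1),  cos(phi1), 0,  
            0, 0, 1);  
        cv::Mat Rinv = rotM.inv();              // what was decomposed above  
        cv::Mat diff = Rz * Ry * Rx - Rinv;  
        std::cout << cv::norm(diff) << std::endl; // should be ~0 if the angles are right  
  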

Have you resolved it?

Kinda. I don’t have the code handy any more, but there are a few things I didn’t understand about what solvePnP returns that are cleared up in these SO posts: http://stackoverflow.com/questions/14515200/python-opencv-solvepnp-yields-wrong-translation-vector and http://stackoverflow.com/questions/17423302/opencv-solvepnp-tvec-units-and-axes-directions
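
The short version, as I understood it: rvec/tvec are the transform from model space into OpenCV’s camera space (tvec is in the same units as your object points), and OpenCV’s camera has +Y down / +Z forward while OpenGL’s has +Y up / -Z forward, so you flip those two rows before handing the matrix to GL. Roughly like this — a sketch from memory, not the exact code I had, assuming rvec/tvec are the solvePnP outputs (outR/outT above):

  
    // sketch: build a GL-style (column-major) modelview from rvec/tvec  
    cv::Mat rot;  
    cv::Rodrigues(rvec, rot);                  // 3x3 rotation from the rotation vector  
    GLdouble mv[16];  
    for (int col = 0; col < 3; col++) {  
        mv[col*4 + 0] =  rot.at<double>(0, col);   // X row as-is  
        mv[col*4 + 1] = -rot.at<double>(1, col);   // flip Y (cv: +Y down, gl: +Y up)  
        mv[col*4 + 2] = -rot.at<double>(2, col);   // flip Z (cv: +Z forward, gl: -Z forward)  
        mv[col*4 + 3] = 0;  
    }  
    mv[12] =  tvec.at<double>(0);  
    mv[13] = -tvec.at<double>(1);  
    mv[14] = -tvec.at<double>(2);  
    mv[15] = 1;  
    // then glMultMatrixd(mv) inside the GL modelview  
  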

If I can add some resources:

http://morethantechnical.googlecode.com/svn/trunk/OpenCVAR/opticalFlow.cpp (there is also a blog post somewhere)
http://ksimek.github.io/2012/08/14/decompose/
http://strawlab.org/2011/11/05/augmented-reality-with-OpenGL/

My problem is that, if I use my cameraMatrix and distortion coefficients, solvePnP returns zero vectors: [0 0 0], [0 0 0] for tvec and rvec.

But if I use an empty cv::Mat() as the camera matrix or distortion coefficients, it returns something (non-zero numbers that might actually mean something)…
I don’t really know why; probably too few points…
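
For what it’s worth, this is the shape of the inputs I’m trying to feed it — just a sketch with placeholder numbers, the real intrinsics come from calibration:

  
    // sketch of the solvePnP inputs -- numbers below are placeholders  
    double fx = 640, fy = 640, cx = 320, cy = 240;             // placeholder intrinsics  
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<  
        fx,  0, cx,  
         0, fy, cy,  
         0,  0,  1);  
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);         // k1 k2 p1 p2 k3  
  
    // need at least 4 non-collinear correspondences, in matching order  
    std::vector<cv::Point3f> modelPts;   // object points, in model units  
    std::vector<cv::Point2f> imgPts;     // corresponding pixel coordinates  
  
    cv::Mat rvec, tvec;  
    if (modelPts.size() >= 4 && modelPts.size() == imgPts.size()) {  
        cv::solvePnP(modelPts, imgPts, cameraMatrix, distCoeffs, rvec, tvec);  
    }  
  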

How many points do you use? And how do you determine modelPts and imgPts?