OpenCV MoCap [MOVED]

I moved the following from another thread, as it wasn't related to that thread's topic.

OK, so as far as I understand, you should already have the x and y coordinates of the markers, since you have already tracked them.

registerViewport is a function from OpenNI; I don't know if libfreenect has anything similar. What it does is adjust the RGB image so that its pixels coincide with the depth pixels.
Try out the ofxOpenNI addon.

So once you have called the registerViewport method and know the x and y position of your markers, the depth (z value) is just the value of the pixel at x,y in the depth image. The depth image is a 16-bit grayscale image and its pixel values represent millimeters.
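For example, something like this (just a sketch: it assumes the depth image is a 16-bit, single-channel IplImage already registered to the RGB image, and markerX/markerY are whatever coordinates your tracker gives you):

#include <opencv/cv.h>

/* Sketch: read the depth value under a tracked marker.
   Assumes "depth" is a registered 16-bit, single-channel depth image,
   so the value is the distance in millimeters; with raw libfreenect
   output the units may be different and need converting. */
unsigned short markerDepth(IplImage* depth, int markerX, int markerY)
{
    return CV_IMAGE_ELEM(depth, unsigned short, markerY, markerX);
}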

If you post your code I could provide you with more specific help.

Well, I am posting my code here, but I have not yet started coding for the x, y, z coordinates, so the code only goes up to the tracking of the marker.

  
  
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>
#include "libfreenect_cv.h"

// Draw the Farneback flow field as a grid of lines and dots on cflowmap.
void drawOptFlowMap(const CvMat* flow, CvMat* cflowmap, int step,
                    double scale, CvScalar color)
{
    int x, y;
    for (y = 0; y < cflowmap->rows; y += step)
        for (x = 0; x < cflowmap->cols; x += step)
        {
            CvPoint2D32f fxy = CV_MAT_ELEM(*flow, CvPoint2D32f, y, x);
            cvLine(cflowmap, cvPoint(x, y),
                   cvPoint(cvRound(x + fxy.x), cvRound(y + fxy.y)),
                   color, 1, 8, 0);
            cvCircle(cflowmap, cvPoint(x, y), 2, color, -1, 8, 0);
        }
}

// Threshold the image in HSV space, keeping only the hue range of the
// yellow marker. The caller must release the returned image.
IplImage* GetThresholdedImage(IplImage* img)
{
    // Convert the image into an HSV image
    IplImage* imgHSV = cvCreateImage(cvGetSize(img), 8, 3);
    cvCvtColor(img, imgHSV, CV_BGR2HSV);

    IplImage* imgThreshed = cvCreateImage(cvGetSize(img), 8, 1);
    // Passing the hue range of the yellow color
    cvInRangeS(imgHSV, cvScalar(20, 100, 100, 0), cvScalar(30, 255, 255, 0), imgThreshed);

    cvReleaseImage(&imgHSV);
    return imgThreshed;
}

int main(int argc, char **argv)
{
    CvMat *prevgray = 0, *gray = 0, *flow = 0, *cflow = 0;

    while (cvWaitKey(10) < 0)
    {
        int firstFrame = gray == 0;

        IplImage *image = freenect_sync_get_rgb_cv(0);
        if (!image) {
            printf("Error: Kinect not connected?\n");
            return -1;
        }
        cvCvtColor(image, image, CV_RGB2BGR);

        // Depth frame grabbed but not used yet (x, y, z extraction still to come).
        IplImage *depth = freenect_sync_get_depth_cv(0);
        if (!depth) {
            printf("Error: Kinect not connected?\n");
            return -1;
        }

        IplImage *imgYellowThresh = GetThresholdedImage(image);
        cvShowImage("Color Tracking", imgYellowThresh);

        // Allocate the optical-flow buffers once, on the first frame.
        if (!gray)
        {
            gray = cvCreateMat(imgYellowThresh->height, imgYellowThresh->width, CV_8UC1);
            prevgray = cvCreateMat(gray->rows, gray->cols, gray->type);
            flow = cvCreateMat(gray->rows, gray->cols, CV_32FC2);
            cflow = cvCreateMat(gray->rows, gray->cols, CV_8UC3);
        }
        cvCopy(imgYellowThresh, gray, 0);

        if (!firstFrame)
        {
            cvCalcOpticalFlowFarneback(prevgray, gray, flow, 0.5, 3, 15, 3, 5, 1.2, 0);
            cvCvtColor(prevgray, cflow, CV_GRAY2BGR);
            drawOptFlowMap(flow, cflow, 16, 1.5, CV_RGB(0, 255, 0));
            cvShowImage("Flow", cflow);
        }

        cvReleaseImage(&imgYellowThresh);   // avoid leaking one image per frame

        if (cvWaitKey(30) >= 0)
            break;

        {
            CvMat* temp;
            CV_SWAP(prevgray, gray, temp);
        }
    }

    return 0;
}
  
  

Why are you using just optical flow?
Have you tried contour detection? It should be much more straightforward than OpFlow. Maybe add OpFlow just as a way to refine your data and check its consistency.
With contour detection (or blob detection, which in this case serves the same purpose) you get the position of each contour and you can easily track them.
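For instance, something along these lines with the C API (just a sketch; the function name is mine and it assumes you pass in your thresholded, 8-bit single-channel image):

#include <opencv/cv.h>
#include <math.h>

/* Sketch: centre of the largest contour (blob) in a thresholded image. */
CvPoint findMarkerCentre(IplImage* imgThreshed)
{
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contours = 0;
    CvSeq* c;
    CvPoint centre = cvPoint(-1, -1);
    double bestArea = 0;

    /* cvFindContours modifies its input, so work on a copy */
    IplImage* tmp = cvCloneImage(imgThreshed);
    cvFindContours(tmp, storage, &contours, sizeof(CvContour),
                   CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));

    for (c = contours; c != 0; c = c->h_next)
    {
        double area = fabs(cvContourArea(c, CV_WHOLE_SEQ, 0));
        if (area > bestArea)
        {
            /* Centroid of the contour from its spatial moments */
            CvMoments m;
            cvMoments(c, &m, 0);
            centre = cvPoint((int)(m.m10 / m.m00), (int)(m.m01 / m.m00));
            bestArea = area;
        }
    }

    cvReleaseImage(&tmp);
    cvReleaseMemStorage(&storage);
    return centre;
}

From there, pairing the centre found in each frame with the nearest centre from the previous frame is usually enough to follow each marker over time.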
Which version of OpenCV are you using?
Maybe later this week I can help you more, as I have some code running that, with a few modifications, will work as I describe.

Best!

Yes, I am working on contours, and I will apply contour detection after the color filter, since my color filter is detecting everything from brown to yellow, so I will restrict it by shape as well. I was applying optical flow because my instructor told me to, and I also thought it would help in detecting the motion of the marker. The OpenCV version is 2.3.1. I also tried blob tracking algorithms, but I usually found them in C++ and I want to work with C. The ones I found in C caused a segmentation fault, which I tried to resolve but couldn't. Is your running demo in C? I cannot switch to another language or use another driver such as OpenNI because I have very few days left to submit my project, and I would be really thankful if you could help me at your earliest convenience.
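Something like this is what I have in mind for the shape restriction after the color filter (just a sketch; the area and circularity thresholds are made-up values I would still have to tune):

#include <opencv/cv.h>
#include <math.h>

/* Sketch: accept only contours that look like a round marker.
   The thresholds below are placeholders to tune on real data. */
int looksLikeMarker(CvSeq* contour)
{
    double area = fabs(cvContourArea(contour, CV_WHOLE_SEQ, 0));
    double perimeter = cvArcLength(contour, CV_WHOLE_SEQ, 1);
    double circularity = perimeter > 0 ? 4.0 * CV_PI * area / (perimeter * perimeter) : 0;

    /* A perfect circle has circularity 1; reject small color-filter noise
       and anything that is not reasonably round. */
    return area > 100 && circularity > 0.6;
}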

Hi, my demo is written for OF, so it's C++.
OpenCV 2.3.1 has support for directly grabbing images from the Kinect through OpenNI.
OpFlow can help you as a method to check the correspondence between the contours of one frame and the previous one. Just for guidance, you might want to take a look at this code:
http://code.google.com/p/simple-kinect-touch/
Although its purpose is different, it implements a lot of the things that you need (contour detection and correspondence, Kinect capture, GUI), all directly through OpenCV.
It was written by a friend of mine, and I've made an OF version of it (though I haven't published it yet).
I've done blob detection directly via OpenCV; why did you start looking for other code for it?
I'll upload my OF demo for Kinect capture via OpenCV to my GitHub and let you know when it's done.

Cheers!