# Creating bounding box around area of vectors based on optical flow

I’m using ofxOpenCv to calculate the optical flow of an ofVideoGrabber feed. I’m finding this more accurate than using the contour finder.

Currently, I have a program that draws the optical flow of the video as vectors whenever the motion passes a certain threshold. What I want to do is fit a bounding box around each area of motion vectors, get the centroid of each area, and track that centroid in a variable. I was looking at ofPolyline, but I couldn’t get it working.

This is part of my draw function, where the if statement at the end only draws vectors that pass the threshold.
The full code can be seen in the attachment.
Any ideas would be appreciated.

```cpp
if (calculatedFlow) {

    ofSetColor( 255, 255, 255 );
    video.draw( 0, 0 );

    int w = gray1.width;
    int h = gray1.height;

    //Input images + optical flow
    ofPushMatrix();
    ofScale( 4, 4 );

    //Optical flow
    float *flowXPixels = flowX.getPixelsAsFloats();
    float *flowYPixels = flowY.getPixelsAsFloats();
    ofSetColor( 0, 0, 255 );
    for (int y = 0; y < h; y += 5) {
        for (int x = 0; x < w; x += 5) {
            float fx = flowXPixels[ x + w * y ];
            float fy = flowYPixels[ x + w * y ];
            //Draw only long vectors (past the 0.5 threshold)
            if ( fabs( fx ) + fabs( fy ) > 0.5 ) {
                ofDrawRectangle( x - 0.5, y - 0.5, 1, 1 );
                ofDrawLine( x, y, x + fx, y + fy );
            }
        }
    }
    ofPopMatrix(); //restore the transform pushed above
}
```

opticalCode.zip (2.7 KB)

I resolved a bit of this in my update function by creating a new `ofxCvFloatImage` that added both axes of motion together.

```cpp
if ( gray2.bAllocated ) {
    Mat img1( gray1.getCvImage() );  //Create OpenCV images
    Mat img2( gray2.getCvImage() );
    Mat flow;                        //Image for flow
    //Computing optical flow (visit https://goo.gl/jm1Vfr for explanation of parameters)
    calcOpticalFlowFarneback( img1, img2, flow, 0.7, 3, 11, 5, 5, 1.1, 0 );
    //Split flow into separate X and Y planes
    vector<Mat> flowPlanes;
    split( flow, flowPlanes );
    //Copy float planes to ofxCv images flowX and flowY
    IplImage iplX( flowPlanes[0] );
    flowX = &iplX;
    IplImage iplY( flowPlanes[1] );
    flowY = &iplY;
}

flowX += flowY;  //combine both axes of motion
flowXY = flowX;

flXY = flowXY.getFloatPixelsRef();

XY.setFromPixels(flXY); //XY is an ofImage
```

I can then draw my XY image in my draw function and run `ofxCvContourFinder` on it. But I have noticed that although the new image shows areas of motion as white, it only does so for motion in one direction along each axis.

Does anyone know why this would be the case?

I realised this is because of the negative values that are created depending on the direction of the motion. I needed to make the values absolute. I didn’t know how to call abs() on an image, but I stumbled across the abs() function for cv::Mat, so I created a new version of flow holding the absolute values.

```cpp
vector<Mat> flowPlanes;
Mat newFlow;
newFlow = abs(flow); //abs flow so values are absolute. Allows tracking in both directions!
split( newFlow, flowPlanes );
```