openCV blobs speed

Hi, I’m new with OF and to the forum.
I’m trying to get the velocity of blobs by comparing their current centroid X & Y positions with the ones from the previous loop (the same way as in P5: mouseX - pmouseX).
I want to use dynamic arrays to do this but I don’t know how.

My P5 background led me to this:

for (int i = 0; i < contourFinder.nBlobs; i++){
    float ptsX = (contourFinder.blobs[i].centroid.x*videoScaleX);
    float ptsY = (contourFinder.blobs[i].centroid.y*videoScaleY);
    float ptsVelX = (contourFinder.blobs[i].centroid.x*videoScaleX) - prevPtsX[i];
    float ptsVelY = (contourFinder.blobs[i].centroid.y*videoScaleY) - prevPtsY[i];
    float prevPtsX[contourFinder.nBlobs];
    float prevPtsY[contourFinder.nBlobs];
    prevPtsX[i] = (contourFinder.blobs[i].centroid.x*videoScaleX);
    prevPtsY[i] = (contourFinder.blobs[i].centroid.y*videoScaleY);
}

Hmm, it doesn’t seem to be working.

What is the proper way to do this?
Where can I look for more information about this?

Thanks to all, and sorry for my English.

I’ve solved it!

using ofxVectorMath

if (contourFinder.nBlobs < MAX_N_BLOBS) {
    for (int i = 0; i < contourFinder.nBlobs; i++){
        float ptsX = (contourFinder.blobs[i].centroid.x*videoScaleX);
        float ptsY = (contourFinder.blobs[i].centroid.y*videoScaleY);
        //difference between current & previous centroid pos.
        float ptsVelX = (contourFinder.blobs[i].centroid.x - prevPts[i].x);
        float ptsVelY = (contourFinder.blobs[i].centroid.y - prevPts[i].y);
        //saving current centroid pos.
        prevPts[i].x = (contourFinder.blobs[i].centroid.x);
        prevPts[i].y = (contourFinder.blobs[i].centroid.y);
    }
}

I have declared this in testApp.h:

#define MAX_N_BLOBS      30  

ofxVec3f prevPts[MAX_N_BLOBS];  

I’m working with a limited number of blobs, but I’ve more or less achieved what I was expecting.

The problem with this, though, is that it assumes blob #10 in this array is the same blob #10 from the previous array, just moved. The way OpenCV builds its list of blobs would most likely prevent this technique from being effective.

To keep track of blobs, you’ll want to do collision detection between blobs on this frame and the last. Each blob will need to record an ID and its last updated frame to really track it. You’ll want a basic blob structure to track this data:

struct blob {
    int frame;
    float height;
    int ID;
    float pX;
    float pY;
    float width;
    float x;
    float y;
};

Keep a vector list of active blobs. Keep a frame counter going – start at 0 and just increment by 1 on every frame. And keep track of blob IDs by creating an ID integer which you will increment every time you add a new blob.
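A minimal sketch of that bookkeeping, assuming the struct above (the variable names `activeBlobs`, `currentFrame`, and `nextID` are mine, not from any library):

```cpp
#include <vector>

// Same struct as above, repeated so this sketch compiles on its own.
struct blob {
    int frame;
    float height;
    int ID;
    float pX;
    float pY;
    float width;
    float x;
    float y;
};

std::vector<blob> activeBlobs;  // the blobs currently on screen
int currentFrame = 0;           // increment by 1 at the start of every frame
int nextID = 0;                 // increment every time a new blob is added
```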

On every frame, get blobs from your video, as usual. Iterate through all the blobs from OpenCV and collision detect them against the blobs in your list.

If an OpenCV blob is colliding with a blob in your list, copy your list blob’s x, y to its pX, pY and the OpenCV blob’s x, y (centroid) to your list blob’s x, y. Update the blob’s frame number with the application’s current frame (will explain why you need to do this in a moment).

If the OpenCV blob does not collide with anything in your list, increment your ID counter and add a new blob to the list. Be sure to assign the frame number, x & y position, and width and height (which you can get from the bounds). I like to copy the outline data, too, but keep it simple for now.
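The two cases above can be sketched as one per-frame loop. This is my own sketch, not library code: `detection` is a hypothetical stand-in for what you would pull out of `contourFinder.blobs[i]` (centroid plus bounding-rect size), and the collision test is a simple “is the new centroid inside the old blob’s bounding box” check; a distance threshold works just as well:

```cpp
#include <cmath>
#include <vector>

struct blob {
    int frame; float height; int ID;
    float pX; float pY; float width; float x; float y;
};

// Hypothetical stand-in for one OpenCV blob this frame.
struct detection { float x; float y; float width; float height; };

// Simple collision test: does the new centroid fall inside the
// tracked blob's bounding box?
bool collides(const blob& b, const detection& d) {
    return std::fabs(d.x - b.x) <= b.width  * 0.5f &&
           std::fabs(d.y - b.y) <= b.height * 0.5f;
}

void matchOrAdd(std::vector<blob>& active,
                const std::vector<detection>& dets,
                int currentFrame, int& nextID) {
    for (const detection& d : dets) {
        bool matched = false;
        for (blob& b : active) {
            if (collides(b, d)) {
                b.pX = b.x;  b.pY = b.y;       // old position -> pX, pY
                b.x  = d.x;  b.y  = d.y;       // new centroid -> x, y
                b.width = d.width; b.height = d.height;
                b.frame = currentFrame;        // mark as seen this frame
                matched = true;
                break;
            }
        }
        if (!matched) {                        // brand-new blob
            blob nb{};
            nb.frame = currentFrame;
            nb.ID = nextID++;
            nb.x = nb.pX = d.x;                // no real previous position yet,
            nb.y = nb.pY = d.y;                // so it starts at zero velocity
            nb.width = d.width;
            nb.height = d.height;
            active.push_back(nb);
        }
    }
}
```

One design note on the sketch: a new blob’s pX, pY here start equal to x, y so it simply reads as zero velocity on its first frame, rather than carrying a “no value” sentinel.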

So, you now have a list complete with new and updated blobs. To remove blobs from your list that are no longer on the screen, simply iterate through and erase blobs where the “frame” property is less than the current frame. Per the techniques above, all current blobs for this frame will have a “frame” property that matches your frame counter. If a blob’s “frame” value is less than the current frame, it wasn’t updated or added on this frame – it’s not on the screen.
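That pruning step is a single pass with the erase-remove idiom (the function name is mine; in C++20 `std::erase_if` does the same in one call):

```cpp
#include <algorithm>
#include <vector>

struct blob {
    int frame; float height; int ID;
    float pX; float pY; float width; float x; float y;
};

// Erase every blob whose "frame" is older than the current frame:
// it was neither updated nor added this frame, so it left the screen.
void pruneStale(std::vector<blob>& active, int currentFrame) {
    active.erase(std::remove_if(active.begin(), active.end(),
                                [currentFrame](const blob& b) {
                                    return b.frame < currentFrame;
                                }),
                 active.end());
}
```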

So NOW you have a current list of active blobs. New blobs will not have a pX, pY value, but updated blobs will. Comparing each blob’s x & y with its pX & pY will give you angle and velocity and whatever other information you can surmise from two points.
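For example, speed and heading fall straight out of those two points (helper names are mine):

```cpp
#include <cmath>

struct blob {
    int frame; float height; int ID;
    float pX; float pY; float width; float x; float y;
};

// Distance moved since the last frame, in pixels per frame.
float speedOf(const blob& b) {
    float vx = b.x - b.pX;
    float vy = b.y - b.pY;
    return std::sqrt(vx * vx + vy * vy);
}

// Direction of travel in radians (atan2 handles all quadrants).
float headingOf(const blob& b) {
    return std::atan2(b.y - b.pY, b.x - b.pX);
}
```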

Additionally, this technique removes limits on the number of blobs you can track. If you work with a vector, you don’t have to cap it.

Oh, thanks, really.
Very good explanation!
It seems like too much for me right now; I’m very new to coding.
I’m going to study how to do that.