Many of the openFrameworks projects I have done involve OpenCV and blobs. Almost every time, I needed to write something to track these blobs as they move. Often it is also important that a blob can disappear for a short time, for whatever reason, and have the system remember it.
In my current project I decided to put this functionality into something more reusable, so I made ofxBlobsManager. It will:
Track moving blobs, by storing blobs and comparing them with newly detected ones.
In doing so, assign each blob an ID.
Hand out sequential IDs (every new blob gets a higher ID) or find the lowest available IDs.
Provide an easy, optional debug draw.
Honor a “max undetected time”, so it remembers blobs that disappear for a moment.
Optionally smooth the movement, making it less “shaky”.
The addon is hosted on Github:
At the moment the tracking is done simply by comparing the positions of the new blobs with the stored ones, within a maximum distance. I am open to all suggestions, especially pull requests.
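In code, that position comparison is essentially a nearest-neighbor search with a cutoff. A minimal sketch (greedy, per new blob; names and types are illustrative, not the addon’s API):

```cpp
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Return the index of the stored blob closest to newBlob, provided it
// lies within maxDistance; return -1 if none qualifies (a new blob).
int matchBlob(const Pt& newBlob, const std::vector<Pt>& stored, float maxDistance) {
    int best = -1;
    float bestDist = maxDistance;
    for (size_t i = 0; i < stored.size(); ++i) {
        float dx = newBlob.x - stored[i].x;
        float dy = newBlob.y - stored[i].y;
        float d = std::sqrt(dx * dx + dy * dy);
        if (d <= bestDist) {
            bestDist = d;
            best = (int)i;
        }
    }
    return best;
}
```

A new blob that matches keeps the stored blob’s ID; one that returns -1 gets a fresh ID.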
Please report issues, comments, and feature requests on GitHub’s issue tracker.
I’ll also add it to http://openframeworks.info
This is great, and I’m sure it will save a lot of duplicated work.
One thing I also added to my own system: if the blob was no longer in the local area defined by a bounding circle, I used the blob’s last stored velocity to predict its probable next position, and checked that area as well.
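That prediction step can be sketched in a few lines, assuming the tracker stores a per-frame velocity for each blob (all names here are hypothetical):

```cpp
#include <cmath>

struct LostBlob { float x, y, vx, vy; };

// Extrapolate the lost blob's position one frame ahead from its last
// known position and per-frame velocity.
void predict(const LostBlob& b, float& px, float& py) {
    px = b.x + b.vx;
    py = b.y + b.vy;
}

// Secondary check: does a candidate blob at (cx, cy) fall within
// `radius` of the predicted position?
bool nearPredicted(const LostBlob& b, float cx, float cy, float radius) {
    float px, py;
    predict(b, px, py);
    float dx = cx - px, dy = cy - py;
    return std::sqrt(dx * dx + dy * dy) <= radius;
}
```

The matcher would try the last known position first, then fall back to this predicted area when the blob is lost.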
I also store a blob’s general area, and how that area changes, and use that to decide whether this is still the same blob. I think you could also store the area’s rate of change (not sure of the correct math term for it) and use it to predict the blob’s area in the current frame from the area it had plus that change.
I also smooth my blobs’ movement by averaging, but I have wondered whether to use a spline curve instead. What may be nice about that is that (depending on the spline used) the next position cannot radically depart from the last position, so it stays predictable. However, I have never sat down and thought through how this would actually work.
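The averaging approach mentioned above is commonly done as an exponential moving average, which needs no history buffer. A small sketch, not taken from either codebase:

```cpp
// Exponential moving average of a blob's position: each new measurement
// is blended into the displayed position. alpha near 0 means heavy
// smoothing (less shaky, more lag); alpha near 1 means nearly raw input.
struct Smoothed {
    float x = 0, y = 0;
    bool started = false;

    void update(float nx, float ny, float alpha) {
        if (!started) { x = nx; y = ny; started = true; return; }
        x += alpha * (nx - x);
        y += alpha * (ny - y);
    }
};
```

A spline, by contrast, would fit a curve through the last few stored positions; the EMA is the simpler starting point.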
Just some thoughts.
Thanks for your input. Checking a second, predicted location based on velocity sounds very interesting. I am quite curious whether it would improve detection or cause more unwanted merges. I will add it as a feature request so that I, or someone else, can try it.
I am not sure what you mean by “area variance”. Do you mean checking the area of a blob to determine whether it is still the same one?
I would like to invite you to try out your spline-curve and velocity ideas; it’s quite easy to fork the project. If it works the way you want, you can open a pull request and I can merge it into my version. This way the community has easy access to the latest version. Of course, I will make sure to add you to the credits somehow.
On the “area variance”: my feeling is that the computed area of a blob would not change dramatically from one frame to the next (assuming at least 15 FPS), as a body doesn’t move like that. So you would store the area of the tracked blob at frame N (say, once it is confirmed as a blob we want to track), and then at frame N+1, as a secondary check, measure the new area against the stored one. If the change is dramatic, it may not be the same blob. Also, if we store the change in area at frame N, we can possibly predict the area of the tracked blob at frame N+1.
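Both halves of that idea, the plausibility check and the prediction, are short functions. A sketch with illustrative names and a tolerance expressed as a relative change:

```cpp
#include <cmath>

// Reject a candidate if its area changed more than maxRelChange
// (e.g. 0.2 = 20%) relative to the stored area in one frame.
bool areaPlausible(float storedArea, float newArea, float maxRelChange) {
    return std::fabs(newArea - storedArea) <= maxRelChange * storedArea;
}

// Predict next frame's area from the stored area and its per-frame
// rate of change (the "area variance" discussed above).
float predictArea(float storedArea, float areaDelta) {
    return storedArea + areaDelta;
}
```

The 20% tolerance is an assumption for illustration; the right value depends on frame rate and how the blobs are detected.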
Each variable of the tracked blob would get a weighting in the final decision on whether this blob is indeed our tracked blob. It sounds like a lot, but I believe it is all easy code. The weightings are a heuristic, though I would guess that position would be weighted by far the most significant.
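A weighted decision like that might look as follows. The weights and threshold here are purely illustrative, not tuned values; each per-feature score is assumed to be normalized to [0, 1]:

```cpp
// Combine per-feature match scores into one weighted confidence,
// with position weighted most heavily, as suggested above.
float matchScore(float positionScore, float areaScore, float predictionScore) {
    const float wPos = 0.6f, wArea = 0.2f, wPred = 0.2f;
    return wPos * positionScore + wArea * areaScore + wPred * predictionScore;
}

// Accept the candidate as the same blob if the combined score clears
// an (illustrative) threshold.
bool isSameBlob(float positionScore, float areaScore, float predictionScore) {
    return matchScore(positionScore, areaScore, predictionScore) >= 0.5f;
}
```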
Does that make sense?
The “predicted location” would be weighted in the same way. In the console-games industry this method is often used as part of a collision-detection system.
As for the spline curve: that is more of a thought. It would involve some sort of feedback loop and be more difficult to achieve.
I’ll check out the code as soon as I get some free time.
Sounds like a great check, although of course not always applicable. When using a Kinect with a depth range, blob sizes can change very quickly as blobs move out of that range. But of course we can add it as an option.
I did work on it a little: for example, adding a simple OpenCV example and removing pointers from its interface, so people don’t have to learn pointers before they can use it.
For Visual Studio users: the #import directive cannot be used to include header files; it can only import a type library (.tlb or .odl). As a result, the files “ofxBlobsManager.cpp” and “ofxStoredBlobVO.cpp” throw fatal error C1083 (“Cannot open type library file”) during compilation.
Fix: replace “#import” with “#include” in the files mentioned above.