Smoothing polyline over several frames

hey all,

I'm using the result of the contour finder on a Kinect image for interactive visuals.
However, even after polyline smoothing, the result is often jumpy, and we would like to smooth the contour "over time", i.e. over multiple frames.

For humans, that's easy: take two contours and draw a line between them.
For computers, it's a lot less trivial.

Currently I'm iterating over the old contour, finding the closest point on the new contour for each vertex, and then linearly interpolating. It's a rather naive brute-force algorithm and very CPU expensive: O(n²) distance calculations for a contour of n points. Each contour is about 500 vertices, equally spaced (after Kinect-projector calibration), and it should work for multiple contours in one image (e.g. three people walking around).
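For reference, the brute-force version can be sketched like this (a minimal sketch using plain structs in place of ofPoint/ofPolyline; all names are mine):

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// Stand-in for a 2D contour vertex (assumption: plain floats, like ofPoint).
struct Pt { float x, y; };

// Index of the closest point on 'contour' to 'p' -- the O(n) inner loop.
std::size_t closestIndex(const std::vector<Pt>& contour, const Pt& p) {
    std::size_t best = 0;
    float bestD = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < contour.size(); ++i) {
        float dx = contour[i].x - p.x, dy = contour[i].y - p.y;
        float d = dx * dx + dy * dy;          // squared distance, no sqrt needed
        if (d < bestD) { bestD = d; best = i; }
    }
    return best;
}

// One smoothing step: move every old vertex a fraction 't' towards its
// nearest neighbour on the new contour. O(n^2) overall, as described above.
std::vector<Pt> smoothTowards(const std::vector<Pt>& oldC,
                              const std::vector<Pt>& newC, float t) {
    std::vector<Pt> out;
    out.reserve(oldC.size());
    for (const Pt& p : oldC) {
        const Pt& q = newC[closestIndex(newC, p)];
        out.push_back({ p.x + (q.x - p.x) * t, p.y + (q.y - p.y) * t });
    }
    return out;
}
```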

I'd be happy with ideas that would do it in O(n log n) distance calculations.
But it's hard to optimize:

  • As the start point of the contour is not always the same (it's the highest point, which is sometimes a hand, sometimes a head), you can't search only the x nearest points by index
  • Skeleton data is not available, as we are working in the 1–8 m depth range.
  • To find the actual closest point, all contour points have to be examined.

Any ideas? I believe the 'searching the closest point' approach might not be optimal.

In Kyle's ofxCv code I've found references to:
cv::matchShapes - similarity between two contours
cv::estimateRigidTransform - subdivision-based estimation for outline flow?

I think they might serve my purpose, but I lack the knowledge and the time to investigate (unless I know it's the way to go). Otherwise I'd just try to optimize the closest-point approach to get results, even if they are both inaccurate and CPU-intensive.

Comments appreciated! Thanks!



Tried using the closest-point approach, but it didn't work out that well: it only produced usable results when not much movement was detected. The approach is very prone to noise, which is exactly what I'm trying to filter out :frowning: However, when the user is moving very slowly, the result is much more "peaceful", much more so than with just smoothing polylines.

Playing with a new idea: curve fitting.
I'd add all the points of the last x contours into one big pile and then try to find the line that minimizes the distance to all of them.
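For the straight-line case, that "one line through the pile" would be a total-least-squares fit; a sketch (assuming plain 2D points; a real contour would need a spline or a piecewise fit):

```cpp
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Total-least-squares line through a pile of points: the centroid plus the
// principal direction of the 2x2 covariance matrix. This minimizes the sum
// of squared perpendicular distances from the points to the line.
void fitLine(const std::vector<Pt>& pts, Pt& origin, Pt& dir) {
    if (pts.empty()) { origin = {0, 0}; dir = {1, 0}; return; }
    float mx = 0, my = 0;
    for (const Pt& p : pts) { mx += p.x; my += p.y; }
    mx /= pts.size(); my /= pts.size();
    float sxx = 0, sxy = 0, syy = 0;
    for (const Pt& p : pts) {
        float dx = p.x - mx, dy = p.y - my;
        sxx += dx * dx; sxy += dx * dy; syy += dy * dy;
    }
    // Angle of the major eigenvector of [[sxx, sxy], [sxy, syy]].
    float theta = 0.5f * std::atan2(2 * sxy, sxx - syy);
    origin = { mx, my };
    dir = { std::cos(theta), std::sin(theta) };
}
```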

Any idea how to do this using OF?


cv::matchShapes just gives you a similarity value, not an actual “match”, unfortunately.

cv::estimateRigidTransform will give you the rigid transformation between two polylines. my idea was to do this on subdivisions of a polyline, recursively, until you find a collection of rigid subdivisions… but i haven’t written this yet because i’m not sure it would be fast enough (or even work at all).

because your algorithm is bottlenecked by the time it takes to find a nearest point, i recommend reading about quadtrees for fast nearest-neighbour matching.
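a minimal point quadtree along those lines might look like this (a sketch, untuned; the fixed square region and leaf capacity are arbitrary choices of mine):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <memory>
#include <vector>

struct Pt { float x, y; };

// Point quadtree over a square region (center cx,cy, half-size 'half').
// Leaves hold up to 8 points; nearest() prunes subtrees whose region
// cannot beat the best squared distance found so far.
class QuadTree {
public:
    QuadTree(float cx, float cy, float half) : cx(cx), cy(cy), half(half) {}

    void insert(const Pt& p) {
        if (!children[0] && pts.size() < 8) { pts.push_back(p); return; }
        if (!children[0]) subdivide();
        childFor(p)->insert(p);
    }

    // Returns the squared distance to the nearest point; 'best' receives it.
    float nearest(const Pt& q, Pt& best,
                  float bestD = std::numeric_limits<float>::max()) const {
        // Prune: minimum squared distance from q to this node's square.
        float dx = std::max(std::fabs(q.x - cx) - half, 0.0f);
        float dy = std::max(std::fabs(q.y - cy) - half, 0.0f);
        if (dx * dx + dy * dy > bestD) return bestD;
        for (const Pt& p : pts) {
            float ddx = p.x - q.x, ddy = p.y - q.y;
            float d = ddx * ddx + ddy * ddy;
            if (d < bestD) { bestD = d; best = p; }
        }
        if (children[0])
            for (const auto& c : children) bestD = c->nearest(q, best, bestD);
        return bestD;
    }

private:
    void subdivide() {
        float h = half * 0.5f;
        children[0].reset(new QuadTree(cx - h, cy - h, h));
        children[1].reset(new QuadTree(cx + h, cy - h, h));
        children[2].reset(new QuadTree(cx - h, cy + h, h));
        children[3].reset(new QuadTree(cx + h, cy + h, h));
        for (const Pt& p : pts) childFor(p)->insert(p);
        pts.clear();
    }
    QuadTree* childFor(const Pt& p) {
        int i = (p.x >= cx ? 1 : 0) + (p.y >= cy ? 2 : 0);
        return children[i].get();
    }
    float cx, cy, half;
    std::vector<Pt> pts;
    std::unique_ptr<QuadTree> children[4];
};
```

building the tree once per frame from the new contour and querying it for each old vertex drops the matching step from O(n²) towards O(n log n).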


I’ve used OpenCV’s BFMatcher in oF for matching points.
Not for contours, but for points representing ‘features’. I guess you could use this technique for contours as well.

Maybe FLANN is an option?

Hi Kyle & Wim,
Thanks for the answers.

I tried using FLANN and other flow-detection algorithms for tracking ‘previous points’ over several frames, but they weren’t real-time enough for tracking the (calibrated) contour images. I had to re-add all features to the contour lines each frame because they clustered up, and re-adding/training took most of the time. So I believe the solution should be point-based instead of image/flow-based.

I found this image on the PCL blog, which accurately describes the problem (img a & b):

The approach is fitting NURBS using 2D B-splines.
This blog post is about the implementation in PCL using OpenNURBS:

Now, I doubt that this approach would work for real-time data (up to 5 datasets of ± 2500 points each frame). However, libcinder features B-spline approximation (least-squares B-spline curve fitting), so I’m going to dig through their source to try it out.

However, it got me thinking, together with both suggestions above.
The idea came from the Opheim line-simplification algorithm, but applied to clusters instead of single points:

I’m going to try the following:

  • create a quadtree-like structure for placing polyline points (and their next-point directions)
  • select the first vertex of the last polyline

Then, for each “selected location”:

  • search the x nearest neighbours of that vertex, and calculate the average direction vector and average location of the resulting cluster
  • append to the center of the cluster a weighted mean of:
      • the average direction of the cluster’s vertices
      • the direction of the closest vertex of the last added polyline (to prevent ending up in oblivion)
  • restart from the top with the new location as the selected location

I believe this might actually work in real time, depending on the nearest-neighbour bottleneck.
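One step of that walk might look roughly like this (a sketch with my own names and a fixed cluster radius; brute-force neighbour search for clarity, which the quadtree would replace):

```cpp
#include <cstddef>
#include <vector>

struct Pt { float x, y; };

// One step of the cluster walk: gather all stored vertices within 'radius'
// of 'loc', average their positions and their direction vectors, and step
// to the cluster centre plus a blend of the cluster direction and the
// newest polyline's direction. 'dirs[i]' is the direction from vertex i to
// its successor on its own polyline.
Pt walkStep(const std::vector<Pt>& verts, const std::vector<Pt>& dirs,
            const Pt& loc, const Pt& lastPolyDir,
            float radius, float dirWeight) {
    Pt center{0, 0}, avgDir{0, 0};
    int n = 0;
    for (std::size_t i = 0; i < verts.size(); ++i) {
        float dx = verts[i].x - loc.x, dy = verts[i].y - loc.y;
        if (dx * dx + dy * dy <= radius * radius) {
            center.x += verts[i].x;  center.y += verts[i].y;
            avgDir.x += dirs[i].x;   avgDir.y += dirs[i].y;
            ++n;
        }
    }
    if (n == 0) return loc;                 // empty cluster: stay put
    center.x /= n; center.y /= n;
    avgDir.x /= n; avgDir.y /= n;
    // Weighted mean: dirWeight = 1 follows only the newest polyline,
    // dirWeight = 0 follows only the cluster average.
    Pt step{ avgDir.x * (1 - dirWeight) + lastPolyDir.x * dirWeight,
             avgDir.y * (1 - dirWeight) + lastPolyDir.y * dirWeight };
    return { center.x + step.x, center.y + step.y };
}
```

Repeating this until the walk returns near its start would trace out the smoothed contour.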

There are some very good possibilities for optimization here:

  • only the oldest polyline should be removed from the quadtree, and only one polyline needs to be added each frame
  • one can make bigger jumps by enlarging the cluster and adding a multiple of the cluster’s average direction
  • the same structure can also be used to get the “flow” of the polylines at each cluster location

But first I’m going to try Cinder’s B-spline curve fitting; who knows, it might just work.

Suggestions and comments are still appreciated; maybe I’m making it much too difficult :slight_smile:

Sounds like a good approach, but not so easy to tackle. For me at least :slight_smile:

It would help if you could reduce the number of points in some way (12,000 points each frame is a lot). And on a small note: to increase the frame rate of my OpenCV stuff in oF, I built OpenCV from source in release mode and then built the oF project in release mode as well.
That gave me a great performance boost with brute-force nearest-neighbour matching.

Maybe even using the GPU?
Space Partitioning: Kd-Tree [2] – using WebGL

It should run faster in OpenGL than in WebGL.

Isn’t the release build of OpenCV included in the oF package? There are both debug and release libs in the ofxOpenCv addon, so I assumed they were already the optimized versions?

Unfortunately I cannot use OpenGL, as the whole input processing runs in a separate thread (at 30 fps), with the output double-buffered to the main rendering thread. AFAIK you can’t make OpenGL calls from any thread other than the main one. That way I still have lots of resources left for the main visualization at 60 fps.
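The handoff between the two threads looks roughly like this (a sketch with hypothetical types; the real code may differ):

```cpp
#include <mutex>
#include <utility>
#include <vector>

struct Pt { float x, y; };

// Double-buffered exchange: the input thread writes into 'back' and swaps
// it under a short lock; the render thread only ever reads 'front'. No GL
// calls leave the main thread, only geometry data crosses over.
class ContourExchange {
public:
    void publish(std::vector<Pt> contour) {     // input thread, ~30 fps
        std::lock_guard<std::mutex> lock(m);
        back = std::move(contour);
        back.swap(front);                       // cheap pointer swap
    }
    std::vector<Pt> latest() {                  // render thread, ~60 fps
        std::lock_guard<std::mutex> lock(m);
        return front;                           // copy out under the lock
    }
private:
    std::mutex m;
    std::vector<Pt> front, back;
};
```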

However, as there is a lot of OpenCV work in the input thread: could you elaborate on the OpenCV trick?

That GLSL link is really impressive work!

For your closest-point search, did you consider converting the “new contour” into a kd-tree and searching on that? That alone should give you better than O(n²) speed, it seems.


Thanks for the suggestion! I didn’t use it, but I managed to get real-time performance at lower detail; the problem was that the result wasn’t satisfactory. The approach amplifies the noise I’m trying to avoid; it even adds noise when the users are moving fast (“previous” contour points get linked to a different body part). I hope the clustering/smoothing might solve that.

I don’t know if it’s helpful, but I’ve got some spline examples here:

I have just added some very messy code, which is a port from Cinder of their B-spline fit (which originally comes from the WildMagic library).

The code has a section (besides a lot of copy and paste) that looks like this:

    BSpline3f spline = fitBSpline( temp.getVertices(), 3, temp.getVertices().size() *0.8);

where it fits the B-spline to a polyline.

Then I iterate through it for drawing:

    int count = ofMap(mouseX, 0, ofGetWidth(), 3, 10000, true);
    for (int i = 0; i < count; i++){
        mesh.addVertex(spline.getPosition(i / (float)count));
    }

I’m varying the count based on the mouse just to show how you can resample the B-spline…

Memo’s interpolation code also looks really useful / promising: