TimeMap#3 with recoil performance group

hi there

i finally got a page together on the last recoil performance i programmed with jonas jongejan.

it was a long ride that took us through opengl viewport voodoo, bullet physics simulation, multisampling, utf string handling over osc, parallel processing on an 8-core mac pro, ftgl font rendering and much more.

there are links to further info down the page, where you will find full video documentation of the piece and a google code project, which i will now keep updated.

best / ole

timemap #3

_dance performance
recoil performance group, spring 2009

an interactive video scenography for a recoil performance group dance performance/installation. developed with choreographer tina tarpgaard and programmer jonas jongejan.

a ten-minute edit of the full performance; the first act is done in Isadora, the second in openFrameworks


Act II
a full edit of the second act, where all tracking and rendering is done using openFrameworks



time map is a dance performance that is sampled, manipulated and reconstructed, as if you were on a film set and part of a live editing process.
in this time-displaced universe three people meet: ottilia 1886, rose 2008, keem 2258 – each of them seeking justice.
the three dancers have worked from a base of true stories from past, present and future.
being placed in past, present and future, their stories are portrayed visually, physically and dramatically.
they meet in the court room in a battle with words – from taking orders to making them fly across the screens.


ole kristensen and jonas jongejan created the video scenography using infrared motion tracking implemented in openframeworks. all video and lights were cued from qlab, using midi to isadora and through to a grandma node, alongside our own custom osc server, motion capture and opengl renderer. during the performance all text was typed live on stage by the author gritt uldall. the choreographer had control over some of the scenes from her own laptop, so further osc control panels for text editing and choreographer interaction were made in maxmsp. some parts of the first act were filmed and edited on stage during the second act for an epilogue shown in the lobby.
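a side note on the osc plumbing: osc 1.0 strictly only defines ascii strings, null-terminated and zero-padded to a 4-byte boundary, so utf text only survives the trip if both ends pass the raw bytes through untouched. a minimal sketch of the padding rule (illustration only, not code from the project – `oscString` is a made-up name):

```cpp
#include <string>
#include <vector>

// encode one osc-string: the raw bytes, a null terminator,
// then zero-padding up to the next 4-byte boundary (osc 1.0 spec)
std::vector<char> oscString(const std::string & s){
	std::vector<char> out(s.begin(), s.end());
	out.push_back('\0');
	while(out.size() % 4 != 0) out.push_back('\0');
	return out;
}
```

so "hej" packs to 4 bytes and "hejsa" to 8 – and a multi-byte utf-8 character counts for as many bytes as it encodes to, which is exactly where naive character-counting code breaks.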




while the code is full of all kinds of nice trickery, the applications themselves are not of much use outside the scope of our performance. it will probably not run on your machine: first of all it needs a collection of more or less obscure hardware, and secondly it works tightly together with third-party apps such as isadora, maxmsp, grabberraster, qlab and even a pc-based grandma lighting node – and i haven’t even listed the c++ lib dependencies

… and the code is messy too - so read if you please, compile if you can - but don’t expect it to compute


This looks great! I am particularly interested in the Bullet integration - did you get arbitrary meshes working? With Collada?



that was the most difficult part - we ended up using btConvexHullShape and just fed a list of points from the blob detection directly into the shape at each frame.

ideally we would have loved some sort of dynamic, persistent concave outline that would scale with the blob detection, but we gave up and ended with the rough solution:

btConvexHullShape * silhouette1Shape = new btConvexHullShape();
if(camera[0].contourFinder.nBlobs > 0){
	for(int i = 0; i < camera[0].contourFinder.nBlobs; i++){
		for(int j = 0; j < camera[0].contourFinder.blobs[i].nPts; j++){
			mult *= -1;
			// (completing the excerpt: each tracked contour point goes into the hull,
			// the silhouette being flat so z stays 0)
			ofPoint & pt = camera[0].contourFinder.blobs[i].pts[j];
			silhouette1Shape->addPoint(btVector3(pt.x, pt.y, 0));
		}
	}
}

  • oh and another note - we used live tracking data in the bullet world: the silhouette blob of the dancer, which we had to collide with the words. since the blob would change shape without getting ‘real’ pushes back from the falling words, it was in some ways an infinitely solid object in the bullet world. so we simply recreated it every frame from the blob as shown above. that made for some jittering collisions, which we decided to live with…

that is also the reason we could not see how to use optimisations like collada meshes - we might have created the files on the fly (an 8-core mac pro is fast), but we never got to experiment with that and went for the easy solution, the premiere deadline being a factor too :slight_smile: