light writing video software

I’m wondering if there’s any example OF project, or any interest, in “faked” long-exposure image/video effects.
I mean the kind of effect that the Lichtfaktor crew uses at their events: http://www.lichtfaktor.eu/

At Ars Electronica 08, they had a setup with a digital video camera, a laptop and a digital video projector at night. You could literally write with light sources like LEDs in front of the camera while the projector showed all the movements on a wall as a very nice light effect, as if many long exposures had been taken one after another. But in fact, they just used the video stream of an SDTV camera and processed it live on the laptop, which then fed the projector.
It’s a little difficult to describe, but you can see the setup briefly around 1:00 in this video:

http://www.flickr.com/photos/lichtfaktor/2899534361/

Back then, I talked to the programmer of lichtfaktor in Linz, but he refused to make his light writing software open source. But I guess it probably wouldn’t take long to build similar, and maybe even superior (because it could be open :wink:), software with OF. Right? Anyone interested or already started?

Ciao, Oli
p.s. to be a little more clear: the many still images you see on the lichtfaktor webpages don’t need the software I’m talking about (there are also many stop-motion videos made from still images). I only mean the live video imitation of this long-exposure effect of moving light sources.

the theory is easy: you just accumulate successive frames into a buffer using additive blending. the trick is to use a buffer with pixel colour components stored as floats (ie 32*3=96 bits per pixel) rather than just unsigned chars (8*3=24bpp) - this gives you much smoother accumulation over time.

so you start by allocating an array of floats to be your accumulator array, for example if your incoming video image is 640x480, and you want 3 floats per pixel for RGB, you’d specify it like this:

  
float* accumulator = new float[640*480*3];  // heap-allocated: ~3.5MB is too big for the stack
memset( accumulator, 0, 640*480*3*sizeof(float) );  // start from black

then for each incoming video frame, convert each pixel of three unsigned chars to three floats and use additive blending to put it in the accumulator. eg:

  
  
// note - this code is untested so i don't know if it even compiles :-)
ofImage input; // fetch from live video source..

unsigned char* input_pixels = input.getPixels();
// Y
for ( int i=0; i<input.height; i++ )
{
  // X
  for ( int j=0; j<input.width; j++ )
  {
    // calculate base index for this pixel (3 components per pixel)
    int base_index = (i*input.width+j) * 3;
    // go through RGB components
    for ( int k = 0; k<3; k++ )
    {
      accumulator[base_index+k] += (float)input_pixels[base_index+k];
      // don't let it get above 255.0f
      if ( accumulator[base_index+k] > 255.0f )
        accumulator[base_index+k] = 255.0f;
    }
  }
}

then to display you convert floats back to unsigned chars:

  
  
// 3 bytes (RGB) per pixel - and heap-allocate, like the accumulator
unsigned char* output_pixels = new unsigned char[ int(input.width*input.height)*3 ];
// Y
for ( int i=0; i<input.height; i++ )
{
  // X
  for ( int j=0; j<input.width; j++ )
  {
    int base_index = (i*input.width+j) * 3;
    // go through RGB components
    for ( int k = 0; k<3; k++ )
    {
      // cast from float to unsigned char, losing extra precision
      output_pixels[base_index+k] = (unsigned char)accumulator[base_index+k];
    }
  }
}
ofImage output;
// note: setFromPixels takes the pixel pointer first, then the dimensions
output.setFromPixels( output_pixels, input.width, input.height, OF_IMAGE_COLOR );
delete[] output_pixels;  // setFromPixels copies the data, so we can free ours
  

and then just draw the output.
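to tie it all together, here's roughly where those two loops could live in an OF app. a sketch only (also untested) - testApp, grabber, accumulator and output are just my names, and i'm assuming the standard ofVideoGrabber calls:

class testApp : public ofBaseApp
{
public:
  ofVideoGrabber grabber;
  float* accumulator;
  ofImage output;

  void setup()
  {
    grabber.initGrabber( 640, 480 );
    accumulator = new float[640*480*3];
    memset( accumulator, 0, 640*480*3*sizeof(float) );
    output.allocate( 640, 480, OF_IMAGE_COLOR );
  }

  void update()
  {
    grabber.grabFrame();
    if ( grabber.isFrameNew() )
    {
      // run the accumulate loop from above on grabber.getPixels(),
      // then the float -> unsigned char conversion into 'output'
    }
  }

  void draw()
  {
    output.draw( 0, 0 );
  }
};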

the problem you will run into soon is that the image will max out and become pure white (all pixel components at 255.0f). to clear it you can periodically set all the values in the accumulator buffer back to 0 (perhaps on a keypress); or alternatively you can change the blending amount. for example, when accumulating, instead of simply

  
accumulator[base_index+k] += (float)input_pixels[base_index+k];  

you could use

  
  
float FADE_FACTOR = 0.9f;
// ...
// (now inside the accumulate loop)
      accumulator[base_index+k] = accumulator[base_index+k]*FADE_FACTOR + (float)input_pixels[base_index+k];

this way your accumulator is reduced to FADE_FACTOR (0.9f = 90%) of its intensity each frame, so it shouldn't max out so easily. (0.9f probably fades too fast, though - try 0.95f or even 0.99f or 0.999f.)
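and a quick (equally untested) sketch of the keypress clear, assuming the accumulator is a member of your app class as in the sketch above:

void testApp::keyPressed( int key )
{
  if ( key == ' ' )
  {
    // reset the long-exposure buffer to black
    memset( accumulator, 0, 640*480*3*sizeof(float) );
  }
}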

hope this helps :slight_smile:
d

i just found this old thread via the search function (while looking for something totally different), but this sounded interesting, so i had to stop and read. thanks damian for the nice explanation.

i don’t know if you, ozel, solved the problem yet. if not, you might take a look at this.

i think the keywords for further googling are “floating point blending” (fp16 or fp32), “high dynamic range (hdr) rendering” and “opengl”. you’ll find lots of stuff on gamedev.net, developer.nvidia.com, opengl.org, etc.
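to give a rough idea where those keywords lead: the gpu version of damian's accumulation is just additive blending into a floating-point framebuffer object, so nothing ever clamps at 255. an untested sketch (accumFbo and cameraTexture are placeholder names, and the fbo with a GL_RGB32F_ARB colour texture has to be set up beforehand):

// per frame: add the current camera image into the fp32 colour buffer
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, accumFbo );
glEnable( GL_BLEND );
glBlendFunc( GL_ONE, GL_ONE );  // additive blending; float buffers don't clamp at 1.0
cameraTexture.draw( 0, 0 );     // draw the camera frame into the fbo
glDisable( GL_BLEND );
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, 0 );
// then draw the fbo's colour texture to screen, scaling the intensity down as needed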

*hth*
didi

You can take a look at the Quase-Cinema 2 source (http://www.quasecinema.org), it has light painting features.
Damian’s code is more sophisticated, though. Thanks, Damian, I’ll study it!
The way I did it was to capture the previous output frame and mix it with the new camera frame in OpenGL.
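For reference, here is a minimal, untested sketch of that feedback idea (grabber is a placeholder name, and this is just the gist, not the actual Quase-Cinema code): tell OF not to clear the screen, then blend each new camera frame additively over what is already there.

void testApp::setup()
{
  ofSetBackgroundAuto( false );  // keep previous frames on screen
  grabber.initGrabber( 640, 480 );
}

void testApp::update()
{
  grabber.grabFrame();
}

void testApp::draw()
{
  glEnable( GL_BLEND );
  glBlendFunc( GL_ONE, GL_ONE );  // additive: new light adds to the old frame
  grabber.draw( 0, 0 );
  glDisable( GL_BLEND );
}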