Layered video, alter opacity of video when motion detected- where to start?

Hi,

I am working on my first OF project and first C++ project. I want to create an interactive projection with two layers of video. I want both layers to loop and be projected one on top of the other, but I want the top layer to be visible only when a camera positioned towards the viewers of the projection detects blobs or movement. Is that possible?

I would appreciate any suggestions for other projects to look at or tutorials to do in order to get started. Also, like I said, this is my first time with OF and C++, so can anyone recommend any books for a very beginner? I mean VERY beginner. I got the Programming Interactivity book and am starting it now, but I would welcome suggestions for other good places to learn OF from the bottom up.

Thank you very much in advance for any suggestions to point me in the right direction.
-mbvcloud

Check the examples in apps/addonsExamples/opencvExample, also check apps/examples/movieGrabberExample, and finally check apps/examples/moviePlayerExample. This should give you a quick overview of how you can achieve your goal :slight_smile:
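In case it's useful, here's a rough sketch of just the two-layer video part. The file names are placeholders, topAlpha stands for an int (0-255) that you'd later drive from the camera, and some function names differ slightly between OF versions (e.g. idleMovie() vs update(), loadMovie() vs load()):

// in testApp.h
ofVideoPlayer bottomVideo;
ofVideoPlayer topVideo;

// in setup()
bottomVideo.loadMovie("bottom.mov"); // placeholder file names in bin/data
topVideo.loadMovie("top.mov");
bottomVideo.setLoopState(OF_LOOP_NORMAL);
topVideo.setLoopState(OF_LOOP_NORMAL);
bottomVideo.play();
topVideo.play();

// in update()
bottomVideo.update();
topVideo.update();

// in draw()
ofEnableAlphaBlending();
ofSetColor(255, 255, 255, 255);      // bottom layer fully opaque
bottomVideo.draw(0, 0);
ofSetColor(255, 255, 255, topAlpha); // top layer faded by topAlpha, driven by the camera later
topVideo.draw(0, 0);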

OK! Thanks very much for your reply and the info. I appreciate it.

Some further thoughts on Alejandro’s tips about projects…

You probably want to draw your top layer with a blend mode that allows for alpha. Using CV, you can measure the difference between the current and previous frame (or track a blob’s movement along one or more axes, e.g. left/right, up/down, front/back) and use the resulting value to drive the alpha of the top layer. You’ll probably also want to smooth the data, or use some kind of trigger envelope, so that it doesn’t look really stuttery.
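One simple way to do that smoothing is an exponential filter on the motion value. A minimal sketch, assuming motionAmount is whatever normalized (0.0 - 1.0) motion value you end up computing from the camera each frame:

// in testApp.h
float smoothedMotion; // set to 0 in setup()

// in update(), after computing motionAmount (0.0 - 1.0) from the camera
float smoothing = 0.95f; // closer to 1.0 = slower, smoother response
smoothedMotion = smoothing * smoothedMotion + (1.0f - smoothing) * motionAmount;

// the smoothed value can then be mapped to a 0-255 alpha for the top layer,
// e.g. ofMap(smoothedMotion, 0.0, 1.0, 0, 255, true)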

I’ve done something similar to this, taking the amount of motion in a camera and mapping it to the alpha channel of a video.

The examples Alejandro mentioned should get you started displaying the videos.

Determining the amount of motion between two consecutive frames is usually done by something called frame differencing. OpenCV has this built in, but I prefer to do it myself. The concept is very simple: you save a frame of video and compare it to the current incoming frame pixel by pixel. By keeping a running total of the differences, you get the total amount by which the frame has changed. You can then use this value to determine the opacity of the video you display.

Here’s a code snippet that shows how to do this. You can assume that _vidGrabber_ is an _ofVideoGrabber_ object, and _prevVideoFrame_ and _videoFrame_ are unsigned char arrays that hold pixel data. _totalPixels_ would be equal to vidGrabber.width*vidGrabber.height*3 (for color images; for grayscale don’t multiply by 3). You would then use the variables m or rawMovement as a basis for setting the opacity of your video, but you would most likely need to scale them to the appropriate value ranges.

  
  
    vidGrabber.grabFrame();

    if (vidGrabber.isFrameNew())
    {
        // copy camera data into the current frame buffer
        memcpy((void*)videoFrame, (void*)(vidGrabber.getPixels()), sizeof(unsigned char)*totalPixels);

        float rawMovement = 0;
        int m = 0;
        // sum the per-pixel difference between this frame and the previous one
        for(int i = 0; i < totalPixels; i++)
        {
            m += ABS(prevVideoFrame[i] - videoFrame[i]);
        }
        // normalize to the 0.0 - 1.0 range
        rawMovement = m / (255.0 * totalPixels);

        // copy current frame to previous frame for the next comparison
        memcpy((void*)prevVideoFrame, (void*)videoFrame, sizeof(unsigned char)*totalPixels);
    }
  

Thanks very much for all your suggestions. Do you know of any examples showing how to adjust the alpha of the video? I understand how to get the rawMovement and m variables you described, but I’m not sure how to apply them to the video itself. Also, is it possible to adjust the alpha of the video only in the pixels where motion is detected (i.e. inside the blobs)? That way I could have two layers of video and adjust the alpha of the top layer only where people/blobs are detected, rather than adjusting the whole layer.

thanks very much for all your help and clear advice. this forum is so helpful.

Sorry for the late reply,

If you want to adjust alpha on a per-pixel basis, you’ll need to create a new pixel buffer with an extra byte per pixel. Normally video has 3 bytes per pixel (RGB), so if you are adding alpha (RGBA) you need to copy every pixel to the new buffer and add the alpha as every fourth byte. It may sound complicated but it’s not too bad.
Here’s some pseudo code:

  
  
// in testApp.h
unsigned char * alphaVideoFrame;
ofTexture alphaVideoTexture;

// in setup()
alphaVideoFrame = new unsigned char[videoWidth*videoHeight*4]; // pixel buffer for video with an alpha channel
alphaVideoTexture.allocate(videoWidth, videoHeight, GL_RGBA);  // RGBA texture for that buffer
...
// in update() or draw()
for(int i = 0; i < videoWidth*videoHeight; i++) {
    memcpy(alphaVideoFrame + i*4, videoFrame + i*3, sizeof(unsigned char)*3); // copy 3 bytes, RGB
    alphaVideoFrame[i*4 + 3] = [insert alpha channel equation];               // set the 4th byte, alpha
}

// in draw()
alphaVideoTexture.loadData(alphaVideoFrame, videoWidth, videoHeight, GL_RGBA);
alphaVideoTexture.draw(0, 0);
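As for the alpha channel equation itself, one possibility (just a sketch) is to reuse the per-pixel camera difference from the frame-differencing snippet earlier, so the top layer only becomes visible where something moved. Here camFrame and prevCamFrame stand for the camera pixel buffers (called videoFrame and prevVideoFrame in that earlier snippet), and this assumes the camera and the video have the same resolution:

// inside the for loop above, in place of the placeholder:
// average the per-pixel difference over the 3 color channels of the camera image
int diff = ( ABS(camFrame[i*3+0] - prevCamFrame[i*3+0])
           + ABS(camFrame[i*3+1] - prevCamFrame[i*3+1])
           + ABS(camFrame[i*3+2] - prevCamFrame[i*3+2]) ) / 3;
alphaVideoFrame[i*4 + 3] = (unsigned char) ofClamp(diff * 4, 0, 255); // scale up and clamp to 0-255

The raw difference will be noisy, so blurring or thresholding the camera image first (or using the blob contours from the opencvExample as a mask) will look a lot cleaner.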