videograbber in an ofTexture

Hi there,

I'm trying to figure out how to draw a video onto a specific shape using ofTexture.

The first thing I tried was to copy all the pixels of the capture into the ofTexture, and something odd happens. I have to capture at a square resolution (w == h). If not, sometimes the application hangs at the line colorPixels[(j*w+i)*3 + 0] = pixels[index]; // r and there is no image, and sometimes it works but with white flashing on the background, as in the image below:
[attached image: essai.jpg]
There must be something I'm missing.
Any ideas?

The header is

  
  
#ifndef _TEST_APP  
#define _TEST_APP  
  
  
#include "ofMain.h"  
  
  
class testApp : public ofSimpleApp{  
	public:  
		void setup();  
		void update();  
		void draw();  
		  
		ofVideoGrabber 		vidGrabber;  
	  
		ofTexture 		texColor;  
	  
		int 			w, h;  
	  
		unsigned char 	* colorPixels;  
	  
};  
  
#endif  
  

and the cpp file is

  
  
#include "testApp.h"  
  
//--------------------------------------------------------------  
void testApp::setup(){	   
	  
	w = 320;  
	h = 240;  
	  
	vidGrabber.setVerbose(true);  
	vidGrabber.initGrabber(w, h);  
	  
	texColor.allocate(w,h,GL_RGB);  
	  
	colorPixels = new unsigned char [w*h*3];  
	  
	for (int i = 0; i < w; i++){  
		for (int j = 0; j < h; j++){  
			colorPixels[(j*w+i)*3 + 0] = 0;	// r  
			colorPixels[(j*w+i)*3 + 1] = 0;	// g  
			colorPixels[(j*w+i)*3 + 2] = 0; // b  
		}  
	}  
	texColor.loadData(colorPixels, w, h, GL_RGB);
}
  
//--------------------------------------------------------------  
void testApp::update(){  
	  
	ofBackground(255,255,255);  
	  
	bool bNewFrame = false;  
	  
	vidGrabber.grabFrame();  
	bNewFrame = vidGrabber.isFrameNew();  
	unsigned char * pixels = vidGrabber.getPixels();  
	  
	if (bNewFrame){  
		cout<<"new frame"<<endl;  
		for (int i = 0; i < h; i++){  
			for (int j = 0; j < w; j++){  
				int index = i*3 + j*3*w;  
				colorPixels[(j*w+i)*3 + 0] = pixels[index];	// r  
				colorPixels[(j*w+i)*3 + 1] = pixels[index+ 1];	// g  
				colorPixels[(j*w+i)*3 + 2] = pixels[index+ 2]; // b  
			}  
		}  
	}  
	texColor.loadData(colorPixels, w,h, GL_RGB);  
	  
	  
}  
  
//--------------------------------------------------------------  
void testApp::draw(){  
	  
	ofSetColor(0xffffff);  
	  
	texColor.draw(100,100,w,h);  
	  
	  
}  
  

Do you need to access the pixels of the video? If not, have you tried using the ofTexture of the videoGrabber directly? You can get a reference to that texture via the getTextureReference() method of ofVideoGrabber.
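For what it's worth, here is a minimal sketch of that approach (assuming getTextureReference() returns an ofTexture& in this oF version, and reusing the member names from the post above):

//--------------------------------------------------------------
void testApp::update(){
	vidGrabber.grabFrame();   // just grab, no pixel copying
}

//--------------------------------------------------------------
void testApp::draw(){
	ofSetColor(0xffffff);
	// draw the grabber's own texture, no intermediate colorPixels buffer
	vidGrabber.getTextureReference().draw(100, 100, w, h);
}

(vidGrabber.draw(100, 100, w, h) would do the same thing; the texture reference is mainly useful when you want to bind the texture yourself.)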

Thanks Smallfly for pointing out this method.

I understand that this work is already done by openFrameworks itself.

What if you need to build a specific texture from different parts of images, movies, …? Do you have to build a composite image and use the ofTexture of that image (I guess there is a getTextureReference() for ofImage too)?

By the way, I still don't get what's wrong with my code.

Best
Taprik

Hi,

take a look at an old post of mine:
http://forum.openframeworks.cc/t/using-grayscale-video-as-a-mask…/346/8

In the example I describe how I use multi-texturing to draw a video in an irregular shape, like a star, a circle or a French lily. You will need an image that defines the shape. The image should be all white with an alpha channel. You then set up OpenGL to use the alpha of the image as the alpha of the video. It works like a charm.

I notice many people try to do all the hard copy-pixels work themselves, which puts quite some strain on the CPU. In many cases, your video card is perfectly able to do the work for you, so always look for a solution in OpenGL first.
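As a rough illustration of the idea (this is not the exact code from the linked post; videoTex and maskTex are placeholder GLuint ids, and GL_TEXTURE_2D is assumed even though oF often uses GL_TEXTURE_RECTANGLE_ARB for non-power-of-two textures):

// inside testApp::draw()

// blend the masked video over the background
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// unit 0: the video frame (RGB)
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, videoTex);

// unit 1: the white shape image with an alpha channel
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, maskTex);

// combine: take the RGB from the previous unit (the video)
// and the alpha from this unit (the mask)
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA, GL_TEXTURE);

// draw one quad with texture coordinates for both units
glBegin(GL_QUADS);
glMultiTexCoord2f(GL_TEXTURE0, 0, 0); glMultiTexCoord2f(GL_TEXTURE1, 0, 0); glVertex2f(100, 100);
glMultiTexCoord2f(GL_TEXTURE0, 1, 0); glMultiTexCoord2f(GL_TEXTURE1, 1, 0); glVertex2f(100 + 320, 100);
glMultiTexCoord2f(GL_TEXTURE0, 1, 1); glMultiTexCoord2f(GL_TEXTURE1, 1, 1); glVertex2f(100 + 320, 100 + 240);
glMultiTexCoord2f(GL_TEXTURE0, 0, 1); glMultiTexCoord2f(GL_TEXTURE1, 0, 1); glVertex2f(100, 100 + 240);
glEnd();

// clean up: disable the second unit and go back to unit 0
glActiveTexture(GL_TEXTURE1);
glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);

With rectangle textures the target and the texture coordinates would have to change accordingly, but the principle is the same: the shape image masks the video entirely on the GPU, with no per-frame pixel copying on the CPU.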