OpenCV get *R *G *B

Hi all

I'm trying to improve the RGB colors of a live video camera. I followed this tutorial:

http://dasl.mem.drexel.edu/~noahKuntz/openCVTut5.html

For the histogram equalization part, I did it on an ofxCvColorImage,

and to take each color separately I used the class from syllabus 09.
This is for setting a pixel into the array:

void testApp::ourSetPixel(int horizontal, int vertical, unsigned char R, unsigned char G, unsigned char B, int w, unsigned char pixels[]){
	int thisPixel;
	thisPixel = 3 * (w * vertical + horizontal);   // index of the first byte (R) of this pixel in the RGB array
	pixels[thisPixel]     = R;
	pixels[thisPixel + 1] = G;
	pixels[thisPixel + 2] = B;
}

This is for getting a pixel from an image:

void testApp::ourGetPixel(int horizontal, int vertical, unsigned char* R, unsigned char* G, unsigned char* B, int w, unsigned char pixels[]){
	int thisPixel;
	thisPixel = 3 * (w * vertical + horizontal);   // same indexing as ourSetPixel
	*R = pixels[thisPixel];
	*G = pixels[thisPixel + 1];
	*B = pixels[thisPixel + 2];
}

And in the part where I use cvEqualizeHist I used this:

void testApp::ourProcessImage(){
	unsigned char R, G, B, R2, G2, B2;
	int cell = 1, threshold = mouseX;
	for (int y = cell; y < height - cell; y++) {              // visit all pixels, making sure not to go below 0 or past width and height...
		for (int x = cell; x < width - cell; x++) {           // ...that's why we start at cell and go to width - cell
			ourGetPixel(x, y, &R, &G, &B, width, ourImagePixels);                    // get the RGB of this pixel
			int sum = 0;                                      // for every pixel, sum adds up all the change in the neighborhood
			for (int cellY = y - cell; cellY <= y + cell; cellY++){                  // loop over the cell x cell neighborhood around the pixel
				for (int cellX = x - cell; cellX <= x + cell; cellX++){
					ourGetPixel(cellX, cellY, &R2, &G2, &B2, width, ourImagePixels); // our method that gets the RGB values at x,y; note it expects pointers for R,G,B
					int difference = sqrt((R - R2) * (R - R2) + (G - G2) * (G - G2) + (B - B2) * (B - B2));
					sum += difference;                        // accumulate all the change of the neighborhood into sum
				}
			}
			if (sum < threshold){                             // if there is less change than the threshold we defined...
			//	ourSetPixel(x, y, 255, 255, 255, width, ourResultPixels);            // set the pixel white...
				ourSetPixel(x, y, R, G, B, width, ourResultPixels);                  // ...keep the pixel's original colors
			}else{
				ourSetPixel(x, y, 0, 0, 0, width, ourResultPixels);                  // otherwise set it black
			}
		}
	}
}

I used this together with cvEqualizeHist, and it really improved the final result.
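
(For what it's worth, cvEqualizeHist only works on single-channel 8-bit images, so it has to run on each channel by itself. Here is just a rough sketch of what I mean - not my exact code - assuming OpenCV's C API and an ofxCvColorImage; ourEqualizeRGB is a name I made up:)

void testApp::ourEqualizeRGB(ofxCvColorImage & img){
	IplImage* src = img.getCvImage();                            // raw IplImage behind the ofxCvColorImage
	IplImage* c0  = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
	IplImage* c1  = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
	IplImage* c2  = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
	cvSplit(src, c0, c1, c2, NULL);                              // separate the three channels (order depends on how the image was filled)
	cvEqualizeHist(c0, c0);                                      // equalize each channel on its own
	cvEqualizeHist(c1, c1);
	cvEqualizeHist(c2, c2);
	cvMerge(c0, c1, c2, NULL, src);                              // put the channels back together
	img.flagImageChanged();                                      // tell ofxOpenCv the pixel data changed
	cvReleaseImage(&c0);
	cvReleaseImage(&c1);
	cvReleaseImage(&c2);
}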

But my question is whether there is a way to make this faster. It is very slow, and sometimes the program crashes, I think from running out of memory :S… Can anyone give me a hand? I know this is just a dummy application, but I want to take the R, G and B channels of a live video (and maybe the alpha of a .mov video) and do something with each one separately…
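
(What I mean is roughly something like this - just a sketch; ourSplitChannels and the plane names are made up, and each plane would be a width*height unsigned char array I allocate myself:)

void testApp::ourSplitChannels(unsigned char rgb[], unsigned char red[], unsigned char green[], unsigned char blue[], int w, int h){
	unsigned char R, G, B;
	for (int y = 0; y < h; y++){
		for (int x = 0; x < w; x++){
			ourGetPixel(x, y, &R, &G, &B, w, rgb);   // reuse the helper from above
			red  [y * w + x] = R;                    // one single-channel plane per color
			green[y * w + x] = G;
			blue [y * w + x] = B;
		}
	}
}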

thx :)

sometimes it can really help to reduce the size of your image, even down to 160x120 or smaller; you can usually go to a much lower resolution than you think. if you halve the resolution, eg 640x480 -> 320x240, you reduce the number of pixels to process by a factor of 4, and if you halve it again you reduce the number of pixels by a factor of 16 - so a 160x120 image is 16x faster to process than a 640x480 image, without really losing too much detail.
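
something like this is all it takes (a rough sketch - smallImg and colorImg are placeholder names, and I'm assuming ofxOpenCv's scaleIntoMe()):

ofxCvColorImage smallImg;                    // declared in testApp.h

// in setup(), allocate the small image once:
smallImg.allocate(160, 120);

// each frame, scale the full camera image down and process that instead:
smallImg.scaleIntoMe(colorImg);              // colorImg = the full-size 640x480 camera image
// ...then run the per-pixel loop on smallImg.getPixels() (160*120 = 1/16 of the pixels)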

i often think about David Rokeby's Very Nervous System for inspiration: http://homepage.mac.com/davidrokeby/vns.html - that camera has a resolution of about 16x12, but he still manages to use it in a super inspiring way…

hope this helps
d