Understanding the getPixels() function


Does the array of pixels the function returns start with the first pixel at (0,0) and then run through the whole window, for example from (0,0) to (1280,0), then from (0,1) to (1280,1), and so on? In other words, from left to right and then downwards?

I have a project in which I need to locate the shadow of a person projected over a white wall and manipulate some videos according to the position of the shadow in the wall

I have tried to locate the shadow by finding pixels in the window with RGB values less than 50 (I have a simple webcam and the colors of the shadow are not always pure black) and putting the x and y coordinates of each such pixel into a two-dimensional array.

But I am not sure whether getPixels() works from left to right and downwards. Can someone please clarify this?

P.S. If anyone has a better idea than x and y coordinates for locating the shadow of a person and its motion…please let me know.

Hello drogza,

getPixels() returns an array of only ONE dimension, with length width * height * channels.

It’s better explained here.


Hi drogza, the array that getPixels() returns is one-dimensional, storing the data from left to right, then top to bottom, like you say.

If you want the pixel data for position (x, y), you can do:

int bytesPerPixel = isRGB ? 3 : 4;   // 3 bytes per pixel for an RGB image, otherwise 4 (RGBA)  
int pixelIndex = y * image.width + x;  
unsigned char *pixel = image.getPixels() + pixelIndex * bytesPerPixel;   // more readable form: &(image.getPixels()[pixelIndex * bytesPerPixel])  

Though I would recommend OpenCV for this (check out the openCV sample app); it does things like thresholding, blurring, etc. very easily…

*UPDATE*: I forgot to include the possibility of greyscale in the bytesPerPixel (1 byte per pixel), but you get the idea…