I’m having a really strange issue. I think it has something to do with the way ofImage is set up when loading an image, or with my lack of understanding of it…
Basically I am masking an ofImage by using the following code:
//------------------------------------------------------------------
void ofxAdvancedImage::mask(unsigned char * pixels, int w, int h) {
    if(w*h <= width*height) {
        // switch the image to RGBA so there is an alpha channel to write into
        setImageType(OF_IMAGE_COLOR_ALPHA);

        // copy each grayscale mask value into the alpha byte of the corresponding RGBA pixel
        for(int i=0; i<width*height; i++) {
            int thisAlphaPixel = (i*4)+3;
            myPixels.pixels[thisAlphaPixel] = pixels[i];
        }

        // re-upload the modified pixels to the texture
        if(myPixels.bAllocated == true && bUseTexture == true) {
            tex.loadData(myPixels.pixels, myPixels.width, myPixels.height, myPixels.glDataType);
        }
    } else {
        cout << "Image is too big to be a mask!" << endl;
    }
}
I pass in an array of grayscale pixels to use as the mask. Basically, if I load in an RGB image and run this code, nothing happens. But if I change int thisAlphaPixel=(i*4)+3 to int thisAlphaPixel=(i*4), int thisAlphaPixel=(i*4)+1, or int thisAlphaPixel=(i*4)+2, it successfully masks the corresponding color channel (R, G, or B).
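For context, this is roughly how I’m calling it in my test app (maskImg, photo.jpg, and mask.png are just placeholders for my actual files, and ofxAdvancedImage extends ofImage in my setup):

ofxAdvancedImage img;
img.loadImage("photo.jpg");               // plain RGB image, no alpha channel

ofImage maskImg;
maskImg.loadImage("mask.png");
maskImg.setImageType(OF_IMAGE_GRAYSCALE); // make sure the mask is single-channel

img.mask(maskImg.getPixels(), maskImg.width, maskImg.height);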
So I’m thinking… yeah, this should work, but it’s not, so maybe it’s something I’m doing wrong. Then the bizarreness starts: if I load a transparent PNG into my ofImage, it works perfectly fine. That is the only way it works.
I have tried setting the image type to OF_IMAGE_COLOR_ALPHA (GL_RGBA) both before and after loading the image, in addition to the setImageType() call in the code above right before the alpha values are written, but nothing works unless the image that is loaded in originally has an alpha channel.
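Concretely, the variants I tried look roughly like this (the filename is just a placeholder):

// forcing RGBA before loading:
img.setImageType(OF_IMAGE_COLOR_ALPHA);
img.loadImage("photo.jpg");

// and forcing it after loading, before calling mask():
img.loadImage("photo.jpg");
img.setImageType(OF_IMAGE_COLOR_ALPHA);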
I feel like something about the way FreeImage is being used is preventing the alpha channel from being used when a non-alpha image is loaded in.
Any help would be greatly appreciated!
Thanks to anyone who can help,
-Steve