Masking and blurring a transition

I want to create a circular mask, either as an imported jpg or generated in OF (preferred).

Basically I need a white circle on a black background where I can adjust the softness of the edge (blur). Then I want to use this as a mask for a transition between two videos.

Now my questions:
- How can I blur images?
- How can I use an image as a mask to blend two different videos in and out?

thank you for your help,
daniel

See the jonkirk GL_RGB thread - I posted something there about mixing images … blur stuff will come soon. You can blur very easily and quickly with the opencv addon (ofCvGrayscale); I will take a look at adding it to ofImage as well…

take care!
zach

Hey, just wondering if openCV (ofCvGrayscale) is still the way to go for fast, easy blurs? Does anyone have a simple example hanging around? I'm just trying to blur an image sequence depending on its x position.

Dunno if it's faster or better than the openCV blurs, but I'd converted Mario Klingemann's superFastBlur function to C++…

It's for RGB, not greyscale, but it can be adapted quite easily.

  
  
// Super Fast Blur v1.1  
// by Mario Klingemann <http://incubator.quasimondo.com>  
// converted to C++ by Mehmet Akten, <http://www.memo.tv>  
//  
// Tip: Multiple invocations of this filter with a small   
// radius will approximate a gaussian blur quite well.  
//  
  
#include "ImageFilters.h"  
  
void superFastBlur(unsigned char *pix, int w, int h, int radius) {  
	  
	if (radius<1) return;  
	int wm=w-1;  
	int hm=h-1;  
	int wh=w*h;  
	int div=radius+radius+1;  
	unsigned char *r=new unsigned char[wh];  
	unsigned char *g=new unsigned char[wh];  
	unsigned char *b=new unsigned char[wh];  
	int rsum,gsum,bsum,x,y,i,p,p1,p2,yp,yi,yw;  
	int *vMIN = new int[MAX(w,h)];  
	int *vMAX = new int[MAX(w,h)];  
	  
	unsigned char *dv=new unsigned char[256*div];  
	for (i=0;i<256*div;i++) dv[i]=(i/div);  
	  
	yw=yi=0;  
	  
	for (y=0;y<h;y++){  
		rsum=gsum=bsum=0;  
		for(i=-radius;i<=radius;i++){  
			p = (yi + MIN(wm, MAX(i,0))) * 3;  
			rsum += pix[p];  
			gsum += pix[p+1];  
			bsum += pix[p+2];  
		}  
		for (x=0;x<w;x++){  
			  
			r[yi]=dv[rsum];  
			g[yi]=dv[gsum];  
			b[yi]=dv[bsum];  
			  
			if(y==0){  
				vMIN[x]=MIN(x+radius+1,wm);  
				vMAX[x]=MAX(x-radius,0);  
			}   
			p1 = (yw+vMIN[x])*3;  
			p2 = (yw+vMAX[x])*3;  
			  
			rsum += pix[p1]		- pix[p2];  
			gsum += pix[p1+1]	- pix[p2+1];  
			bsum += pix[p1+2]	- pix[p2+2];  
			  
			yi++;  
		}  
		yw+=w;  
	}  
	  
	for (x=0;x<w;x++){  
		rsum=gsum=bsum=0;  
		yp=-radius*w;  
		for(i=-radius;i<=radius;i++){  
			yi=MAX(0,yp)+x;  
			rsum+=r[yi];  
			gsum+=g[yi];  
			bsum+=b[yi];  
			yp+=w;  
		}  
		yi=x;  
		for (y=0;y<h;y++){  
			pix[yi*3]		= dv[rsum];  
			pix[yi*3 + 1]	= dv[gsum];  
			pix[yi*3 + 2]	= dv[bsum];  
			if(x==0){  
				vMIN[y]=MIN(y+radius+1,hm)*w;  
				vMAX[y]=MAX(y-radius,0)*w;  
			}   
			p1=x+vMIN[y];  
			p2=x+vMAX[y];  
			  
			rsum+=r[p1]-r[p2];  
			gsum+=g[p1]-g[p2];  
			bsum+=b[p1]-b[p2];  
			  
			yi+=w;  
		}  
	}  
	  
	delete [] r;
	delete [] g;
	delete [] b;

	delete [] vMIN;
	delete [] vMAX;
	delete [] dv;
}  
  

WOW, thank you!!! This looks easy enough for me to use. I'll give it a shot now.

cool, the original is here
http://incubator.quasimondo.com/process-…-t-blur.php

Yeah, this looks awesome and very fast. Now, I have a feeling I may be exposing my inexperience with openFrameworks here, but do I need an "ImageFilters.h"?

I'm using version 0.04 dev. Do I need a newer version of openFrameworks for that header file?

Oh sorry, no - that was my own header file where I declared the prototype of the function. I had the function code in an ImageFilters.cpp and the prototype in an ImageFilters.h:

  
  
#pragma once  
  
#include "ofMain.h"  
  
void superFastBlur(unsigned char *pix, int w, int h, int radius);  
  

So I just need to #include ImageFilters.h in any cpp file that I want to call the function from.

So you just want to use the blur to soften the edge of your mask?

Why not just generate the mask in OF?

for example:

- create a new pixel array
- loop through the array
- for each pixel, if its distance from the central pixel is less than radius1, set it to white
- if its distance is greater than radius2, set it to black
- if it's in between, interpolate to a gray value to get the soft edge

The benefit of this is that you can make your "blur" amount any size without affecting performance at all, because it works in linear time and isn't even computing a real blur.
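
Here's a minimal sketch of that idea (untested, and the function name and parameters are just for illustration): it fills a grayscale pixel buffer with a white circle whose edge fades to black between innerRadius and outerRadius.

void makeSoftCircleMask(unsigned char * maskPix, int w, int h, float innerRadius, float outerRadius){
	float cx = w * 0.5f;
	float cy = h * 0.5f;
	for (int y = 0; y < h; y++){
		for (int x = 0; x < w; x++){
			float d = sqrtf((x - cx) * (x - cx) + (y - cy) * (y - cy));
			unsigned char v;
			if (d <= innerRadius)       v = 255;   // inside the inner radius: white
			else if (d >= outerRadius)  v = 0;     // outside the outer radius: black
			else {
				// soft edge: fade from white to black between the two radii
				float t = (d - innerRadius) / (outerRadius - innerRadius);
				v = (unsigned char)(255.0f * (1.0f - t));
			}
			maskPix[y * w + x] = v;
		}
	}
}

You can then load maskPix into an ofImage with setFromPixels(maskPix, w, h, OF_IMAGE_GRAYSCALE), and animating the two radii changes the softness for free.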

You could also make a mask this way but, instead of white, make the center translucent; then stack the mask over the video so the black part hides the video and you can see through the translucent part to the video underneath.
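
For instance, something like this (again just a rough sketch - maskImage is an ofImage and video an ofVideoPlayer, used as placeholders): build an RGBA mask that is opaque black outside the circle and fully transparent in the middle, then draw the video first and the mask over it with alpha blending enabled.

// assuming maskPix was filled by makeSoftCircleMask() above (white = center)
unsigned char * rgbaPix = new unsigned char[w * h * 4];
for (int i = 0; i < w * h; i++){
	rgbaPix[i * 4 + 0] = 0;                  // black
	rgbaPix[i * 4 + 1] = 0;
	rgbaPix[i * 4 + 2] = 0;
	rgbaPix[i * 4 + 3] = 255 - maskPix[i];   // transparent center, opaque edges
}
maskImage.setFromPixels(rgbaPix, w, h, OF_IMAGE_COLOR_ALPHA);
delete [] rgbaPix;

// then in draw():
video.draw(0, 0);
ofEnableAlphaBlending();
maskImage.draw(0, 0);
ofDisableAlphaBlending();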

Hell, you could even apply the mask directly to the pixels of your video, or blend the two videos together this way.
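
A rough sketch of that last idea too (untested; videoA, videoB, maskPix and outTexture are just placeholder names): blend two RGB frames through the grayscale mask, pixel by pixel.

void blendWithMask(unsigned char * outPix, unsigned char * pixA, unsigned char * pixB, unsigned char * maskPix, int w, int h){
	for (int i = 0; i < w * h; i++){
		float m = maskPix[i] / 255.0f;   // 1 = show video A, 0 = show video B
		for (int c = 0; c < 3; c++){
			outPix[i * 3 + c] = (unsigned char)(pixA[i * 3 + c] * m + pixB[i * 3 + c] * (1.0f - m));
		}
	}
}

// e.g. once per frame:
blendWithMask(outPix, videoA.getPixels(), videoB.getPixels(), maskPix, w, h);
outTexture.loadData(outPix, w, h, GL_RGB);

Animating the mask radii over time then gives you the transition between the two videos.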

Hey Tim,

Your idea sounds sweet for Daniel. I'm still trying to get memo's blur running. I'm not sure where I'm going wrong: it compiles and runs, but there are no visible results.

Here's the code - I'm trying to blur an RGB+A png image sequence:

memo,
I renamed your file to ImageBlur.cpp and commented out //#include “ImageFilters.h”

then I added:

void superFastBlur(unsigned char *pix, int w, int h, int radius);
into the ofMain.h file so I could access the function from any/all of my classes.

Then I tested it in a class:

  
void ofHands::draw(){  
   ofImage &xx = sequenceHands[handsFrame];  
   superFastBlur(xx.getPixels(), xx.width, xx.height, 30);  
}  

It all compiles, but there are no visible results - do you have any idea why it's not working? Are there any simple examples I could open up in openFrameworks?

Hi - super quickly: altering the pixels like this:

  
  
superFastBlur(xx.getPixels(), xx.width, xx.height, 30);  
  

doesn't necessarily update the image. You'll have to use setFromPixels again after the blurring operation. (I think we will change this to allow something more like a Processing-style update function)…

so, roughly:

  
  
unsigned char * pixels = xx.getPixels();  
superFastBlur(pixels, xx.width, xx.height, 30);  
xx.setFromPixels(pixels, xx.width, xx.height, OF_IMAGE_COLOR);  // if the pixels are color  
  

hope that helps!

best,
zach

Hey, thanks Zach!! That's the ticket - it makes sense too, which is always good :smiley:

Ohh no, I spoke too soon. I got the blur working really nicely on RGB images, but no such luck with RGBA images. I tried going into the superFastBlur code, adding an "a" alongside r, g, b and playing around a bit, but I really didn't know what I was doing… ha, and it showed. Would that be the way to fix this - changing something in the superFastBlur code - or should it just work? This one is proving very, very challenging for me :frowning:

Hey, I got it working with alpha channels - I just changed the 3 to a 4 on a few lines:

  
p = (yi + MIN(wm, MAX(i,0))) * 4;  
...  
  
p1 = (yw+vMIN[x])*4;  
p2 = (yw+vMAX[x])*4;  
  
...  
  
pix[yi*4]      = dv[rsum];  
pix[yi*4 + 1]  = dv[gsum];  
pix[yi*4 + 2]  = dv[bsum];  
...  

Now, that blurs the image but not the alpha channel. You also need to pass the alpha flag when setting the pixels back; for me it looked like:

  
backDrop.setFromPixels(pixels,  backDrop.width, backDrop.height, OF_IMAGE_COLOR_ALPHA);  

In my eyes there is still an issue: it really needs to blur the alpha channel as well when blurring RGBA images. Without that you get nicely blurred images, but they still have sharp edges!

Does anyone know of an unsharp-mask-type effect that could be used with the superFastBlur code to blur the edges of the alpha channel?

would be awesome! 4 sure.


Hi Ianmere, yeah, you would need to change the *3 to *4 like you discovered, but you will also need to add code to blur the alpha: everywhere you see 'r', 'g', 'b', duplicate that line and do the same for 'a',

e.g.

  
  
   unsigned char *r=new unsigned char[wh];  
   unsigned char *g=new unsigned char[wh];  
   unsigned char *b=new unsigned char[wh];   
   unsigned char *a=new unsigned char[wh];  
  
.....  
   int asum;  
....  
    asum = 0;  
.....  
    asum += pix[p+3]; // if there is an offset in the array (e.g. r has none, g has 1, b has 2) then use an offset of 3.  
....  
    a[yi]=dv[asum];  
....  
    pix[yi*4 + 3]  = dv[asum];  
  

Hope that helps - I'm a bit tied up at the moment so I can't write it all out… but hopefully that gives you a direction… (I haven't actually tried this, but that looks like what needs to be done.)

By the way, I've been working on some GLSL blurs and got some uber-fast results… but they need rendering to an offscreen texture etc., which is a bit complicated… Once there are simple-to-use RTT libraries (which is not really my strength), that will probably be the way to go!

Wow, thanks memo - the other blurs you're working on sound sweet. Looking forward to playing around with them in the future once they're simplified, ha.

I’m going to give what you recommended a shot, fingers crossed!

cheers

OK, I tried it - not quite there yet. Here's what I've got:

  
  
void superFastBlurAlpha(unsigned char *pix, int w, int h, int radius) {  
     
   if (radius<1) return;  
   int wm=w-1;  
   int hm=h-1;  
   int wh=w*h;  
   int div=radius+radius+1;  
   unsigned char *r=new unsigned char[wh];  
   unsigned char *g=new unsigned char[wh];  
   unsigned char *b=new unsigned char[wh];  
   unsigned char *a=new unsigned char[wh];   
   int rsum,gsum,bsum,asum, x,y,i,p,p1,p2,yp,yi,yw;  
   int *vMIN = new int[MAX(w,h)];  
   int *vMAX = new int[MAX(w,h)];  
     
   unsigned char *dv=new unsigned char[256*div];  
   for (i=0;i<256*div;i++) dv[i]=(i/div);  
     
   yw=yi=0;  
     
   for (y=0;y<h;y++){  
      rsum=gsum=bsum=asum=0;  
      for(i=-radius;i<=radius;i++){  
         p = (yi + MIN(wm, MAX(i,0))) * 4;  
         rsum += pix[p];  
         gsum += pix[p+1];  
         bsum += pix[p+2];  
         asum += pix[p+3];  
      }  
      for (x=0;x<w;x++){  
           
         r[yi]=dv[rsum];  
         g[yi]=dv[gsum];  
         b[yi]=dv[bsum];  
         a[yi]=dv[asum];  
           
         if(y==0){  
            vMIN[x]=MIN(x+radius+1,wm);  
            vMAX[x]=MAX(x-radius,0);  
         }  
         p1 = (yw+vMIN[x])*4;  
         p2 = (yw+vMAX[x])*4;  
           
         rsum += pix[p1]      - pix[p2];  
         gsum += pix[p1+1]   - pix[p2+1];  
         bsum += pix[p1+2]   - pix[p2+2];  
         asum += pix[p1+3]   - pix[p2+3];  
         yi++;  
      }  
      yw+=w;  
   }  
     
   for (x=0;x<w;x++){  
      rsum=gsum=bsum=asum=0;  
      yp=-radius*w;  
      for(i=-radius;i<=radius;i++){  
         yi=MAX(0,yp)+x;  
         rsum+=r[yi];  
         gsum+=g[yi];  
         bsum+=b[yi];  
         asum+=a[yi];  
         yp+=w;  
      }  
      yi=x;  
      for (y=0;y<h;y++){  
         pix[yi*4]      = dv[rsum];  
         pix[yi*4 + 1]   = dv[gsum];  
         pix[yi*4 + 2]   = dv[bsum];   
         pix[yi*4 + 3]   = dv[bsum];  
         if(x==0){  
            vMIN[y]=MIN(y+radius+1,hm)*w;  
            vMAX[y]=MAX(y-radius,0)*w;  
         }  
         p1=x+vMIN[y];  
         p2=x+vMAX[y];  
           
         rsum+=r[p1]-r[p2];  
         gsum+=g[p1]-g[p2];  
         bsum+=b[p1]-b[p2];  
         asum+=a[p1]-a[p2];  
           
         yi+=w;  
      }  
   }  
     
   delete [] r;
   delete [] g;
   delete [] b;
   delete [] a;

   delete [] vMIN;
   delete [] vMAX;
   delete [] dv;
}   

I'm thinking that it's averaging all 4 channels now, treating the alpha as a fourth channel?

Does it need to handle RGB separately from the alpha, then run similar code on the alpha but working on just one channel?

Hmm, I'm a bit confused about this one.

Ian

Hey, sorry to put this one forward again, but does anyone have any advice/suggestions? I'm still struggling to get it sorted, but I feel it's so close!

Mr.scow, I noticed you're wanting to do something similar. I haven't found a complete solution yet, but the closest I've come is using the openCV blur to blur the RGB and A channels separately and then using some code Zach wrote to recombine them into RGBA. The openCV blur seems a bit buggy - it seems you can only use specific values, 3 plus a multiple of 6, e.g. 9, 15, 21, 28. Not sure why.

Anyway, I'll post my code - it may help you out:

  
void testApp::setup(){  
  
  
	guy.loadImage("Typer_COLOR_0.png");  
	guyAlpha.loadImage("Typer_17_ALPHA.png");  
	guyAlpha.setImageType(OF_IMAGE_GRAYSCALE);  
	colorImg.allocate(250, 350);  
	AlphaImage.allocate(250, 350);  

	colorImg.setFromPixels(guy.getPixels(), 250, 350);  
	AlphaImage.setFromPixels(guyAlpha.getPixels(), 250, 350);  
	guyAlpha.setImageType(OF_IMAGE_GRAYSCALE);  

	colorImg.blur(3);  
	AlphaImage.blur(3);  
  
}  
  
//--------------------------------------------------------------  
void testApp::update(){  
    int w = guyAlpha.width;  
	int h = guyAlpha.height;  
  
  
    unsigned char * pixels = new unsigned char[guyAlpha.width*guyAlpha.height*4];  
	unsigned char * colorPixels = colorImg.getPixels();  
	unsigned char * alphaPixels = AlphaImage.getPixels();  
	for (int i = 0; i < w; i++){  
		for (int j = 0; j < h; j++){  
			int pos = (j * w + i);  
			// interleave the blurred color channels and the blurred alpha into one RGBA buffer  
			pixels[pos*4  ] = colorPixels[pos * 3];  
			pixels[pos*4+1] = colorPixels[pos * 3+1];  
			pixels[pos*4+2] = colorPixels[pos * 3+2];  
			pixels[pos*4+3] = alphaPixels[pos];  
		}  
	}  
	rgbaMixture.allocate(w,h,GL_RGBA);  
	rgbaMixture.loadData(pixels, w,h,GL_RGBA);  
  
	delete [] pixels;  
  
  
}  
  
//--------------------------------------------------------------  
void testApp::draw(){  
  
	ofBackground(255, 255, 255);  

	ofEnableAlphaBlending();  
	rgbaMixture.draw(500, 500);  
	ofDisableAlphaBlending();  
  
}   

This seems great :slight_smile:

I just switched over from cvSmooth to this and it's faster (in my tests so far).

I'd also like to try this on a highpass filter to see if I can speed it up, and I could use some help…

The current openCV code to make the highpass is:

  
	  
  
// Blur original image  
if(blur1 > 0)  
	cvSmooth( cvImage, cvImageTemp, CV_BLUR, (blur1 * 2) + 1 );  

// Original image - blurred image = highpass image  
cvSub( cvImage, cvImageTemp, cvImageTemp );  

// Blur highpass to remove noise  
if(blur2 > 0)  
	cvSmooth( cvImageTemp, cvImageTemp, CV_BLUR, (blur2 * 2) + 1 );  

swapTemp();  
flagImageChanged();  
  

I'd like to change both cvSmooth calls to use superFastBlur to test whether it's faster. I'm currently using this code (and it works), but it's somewhat slow - I think having to call setFromPixels between the steps causes the slowdown. If anyone has a way to optimize this, that'd be great:

  
  
//store original  
cvCopy(cvImage, cvImageTemp);  
  
//blur original image  
unsigned char * pix1 = getPixels();  
superFastBlur(pix1, width, height, blur1);  
setFromPixels(pix1, width, height);  
  
//Original Image - Blur Image = Highpass Image  
cvSub( cvImageTemp, cvImage, cvImage );  
  
//Blur Highpass to remove noise  
unsigned char * pix2 = getPixels();  
superFastBlur(pix2, width, height, blur2);  
setFromPixels(pix2, width, height);  
	  
flagImageChanged();  
  
  

Is there a simple way to avoid calling setFromPixels until the end? Maybe I can subtract the blurred image from the original without using cvSub, so that the pixels can then be passed directly into the second blur?
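
Something along these lines might work (a rough, untested sketch, assuming a single-channel image - one byte per pixel - and reusing the same superFastBlur call you already have): do the subtraction on the raw buffer yourself, so both blurs run on plain pixels and setFromPixels only happens once at the end.

int n = width * height;
unsigned char * orig = new unsigned char[n];
unsigned char * pix  = getPixels();

// keep a copy of the original pixels before blurring in place
for (int i = 0; i < n; i++) orig[i] = pix[i];

// blur the original
superFastBlur(pix, width, height, blur1);

// original - blurred = highpass (clamped at 0, like cvSub on 8-bit images)
for (int i = 0; i < n; i++){
	int v = orig[i] - pix[i];
	pix[i] = (v > 0) ? v : 0;
}

// blur the highpass to remove noise
superFastBlur(pix, width, height, blur2);

// push the result back into the cv image just once
setFromPixels(pix, width, height);
flagImageChanged();

delete [] orig;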