interpolation

Hello OF People!

I am building an application with a particle system that re-creates the scene from what it gets from the camera. My problem is that I am allocating only 320x240 for the camera input, while my application is 1024x768. Normally I could just use the scaleInto() method to scale up my ofxCvGrayscaleImage with no problem if I only wanted to display the image on screen. But my goal is to run edge detection on the image and get x/y coordinates, which I will then use to set my particles’ locations.

I am using the wonderful findContours() method here, and then contourFinder.blobs[0].pts.size() to get the points. The problem is that my ofxCvGrayscaleImage is only 320x240, and I am not quite sure how to interpolate those coordinates so they scale up to my application’s width/height. I am passing my ofxCvGrayscaleImage to this method. Should I scale the image up first, run my edge detection on it, and then pass that object instead?

I would appreciate any directions.
best,
ilteris.

Hi.
You need to set a scale factor; the particle coordinates are then determined by the coordinates of your blob points multiplied by that scale factor.

Here’s some pseudocode:

// scale factors from the image size up to the screen size
float scaleX = (float)ofGetWidth()  / myImage.width;
float scaleY = (float)ofGetHeight() / myImage.height;

contourFinder.findContours(myImage, 1, 1000, 10, true);

for(int i = 0; i < contourFinder.nBlobs; i++){
    for(int j = 0; j < contourFinder.blobs[i].nPts; j++){
        // multiply each blob point by the scale factors
        myParticle.x = contourFinder.blobs[i].pts[j].x * scaleX;
        myParticle.y = contourFinder.blobs[i].pts[j].y * scaleY;
    }
}

Hope that helps.

This might not be what you were asking for, but check this out…
http://forum.openframeworks.cc/t/spline-interpolation-addon/1150/0

While you can scale up to the size of the screen, you don’t have to and probably shouldn’t - the information that’s coming in isn’t 1024x768, and scaling it up only takes processor time. pelintra’s code is right on the mark - just use math to do the conversions. While he has a scale factor,

(float)ofGetWidth()/myImage.width;

I find it useful to think of it in two steps…

a) convert the value derived from the image to a percentage (0-100%, or 0-1 as a float):

float pctx = pts[j].x / (float)myImage.width;

b) multiply that percentage by the output size:

float outputx = pctx * (float)ofGetWidth();

It’s the same mathematically, but I find it helpful to break it into two parts in case the output is a special case (i.e., not starting at 0,0, etc.).
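
Putting the two steps together, here’s a minimal sketch of that idea (reusing pelintra’s contourFinder / myImage / myParticle names from above; the loop itself is just an illustration):

// a) normalize each blob point to 0-1 using the input image size
// b) scale that percentage up to the output size
for(int i = 0; i < contourFinder.nBlobs; i++){
    for(int j = 0; j < contourFinder.blobs[i].nPts; j++){
        float pctx = contourFinder.blobs[i].pts[j].x / (float)myImage.width;
        float pcty = contourFinder.blobs[i].pts[j].y / (float)myImage.height;
        // a special-case output (e.g. not starting at 0,0) would add its offset here
        myParticle.x = pctx * (float)ofGetWidth();
        myParticle.y = pcty * (float)ofGetHeight();
    }
}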

take care,
zach

I second what Zach said - it’s really good to scale x and y to 0…1 on input, work with them that way, and then scale up to whatever screen resolution when you draw.

This also has the advantage that when you’re sending data somewhere else (e.g., using the x position of something as a panning control for a Pure-Data patch), you’re sending a nice 0…1 number that’s really easy to turn into whatever you want.
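
For instance, a rough sketch of that idea using the ofxOsc addon (the host, port, and /pan address are placeholders, and blobX stands in for a blob point’s x coordinate - none of these come from this thread):

#include "ofxOsc.h"

ofxOscSender sender;
sender.setup("localhost", 9000);             // wherever the Pd patch is listening

float pctx = blobX / (float)myImage.width;   // normalize to 0-1 as above

ofxOscMessage m;
m.setAddress("/pan");                        // hypothetical address in the patch
m.addFloatArg(pctx);                         // Pd gets a ready-made 0-1 pan value
sender.sendMessage(m);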

These are exactly the answers I was looking for!!
Much appreciated!

ilteris.