Disturbance - reactive audio-visual floor

That’s so good!
Already feeling the interaction of the fire particles :stuck_out_tongue:
Trying the code, I’ve got some errors with particle and ofPoint3f; it looks like you are using different ofxVectorMath and particle files.

Cheers

Oh man, I totally forgot. I did add some functionality to ofPoint.

Add this code to the ofPoint class in openFrameworks/utils/ofTypes.h:

    // true if this point lies inside the 2D axis-aligned box:
    // inclusive on the low corner, exclusive on the high corner (z is ignored)
    bool insideBox(ofPoint lowPoint, ofPoint highPoint) {
        return x >= lowPoint.x && x < highPoint.x && y >= lowPoint.y && y < highPoint.y;
    }
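For reference, here is how that check might be used. The `Point2` struct below is just a minimal stand-in so the snippet compiles on its own; in OF the method lives on ofPoint itself:

```cpp
#include <cassert>

// Minimal stand-in for the patched ofPoint (illustrative only; the real
// method is added to ofPoint in openFrameworks/utils/ofTypes.h).
struct Point2 {
    float x, y;
    Point2(float x_ = 0, float y_ = 0) : x(x_), y(y_) {}

    // inclusive on the low corner, exclusive on the high corner; z is ignored
    bool insideBox(Point2 lowPoint, Point2 highPoint) {
        return x >= lowPoint.x && x < highPoint.x &&
               y >= lowPoint.y && y < highPoint.y;
    }
};
```

A typical use is screen-bounds culling, e.g. `p.insideBox(Point2(0, 0), Point2(screenW, screenH))` to test whether a particle is still on screen.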

If that doesn’t fix it, paste the errors you’re getting and it might jog my memory as to what other things I might have changed.

I’ve seen you’re also using OpenMP to speed up the particle system loops.
Where should I look to use that in OF?

I had some OpenMP code in there at one point, but it is mostly defunct now. You can remove it. Alternatively, you can just link with -fopenmp in gcc.
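For anyone curious what that buys you, here is a sketch of parallelizing a per-particle force loop with OpenMP (this is not the removed code; the struct and function names are made up for illustration). Built without -fopenmp the pragma is simply ignored and the loop runs serially:

```cpp
#include <vector>

// hypothetical minimal particle, for illustration only
struct SimpleParticle {
    float x = 0, y = 0, fx = 0, fy = 0;
};

// Each particle's force update is independent, so the loop iterations can
// be spread across cores. Compile with: g++ -fopenmp sketch.cpp
// Without -fopenmp the pragma is ignored and the loop just runs serially.
void repelFrom(std::vector<SimpleParticle>& particles,
               float cx, float cy, float strength) {
    #pragma omp parallel for
    for (int i = 0; i < (int)particles.size(); i++) {
        float dx = particles[i].x - cx;
        float dy = particles[i].y - cy;
        float d2 = dx * dx + dy * dy;
        if (d2 > 0) {
            float inv = strength / d2;   // falls off with squared distance
            particles[i].fx += dx * inv;
            particles[i].fy += dy * inv;
        }
    }
}
```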

Hey, I just gave this a try and the Particle class doesn’t seem to be the latest… some missing functionality:

error: ‘class Particle’ has no member named ‘x’
error: ‘class Particle’ has no member named ‘y’
error: ‘class Particle’ has no member named ‘resetForce’
error: ‘class Particle’ has no member named ‘updatePosition’
error: ‘class Particle’ has no member named ‘draw’

Wow, this is embarrassing, so many issues! Those errors are because I’ve stopped using the ParticleSystem.h/.cpp files. Remove them from the project and it will fix those errors.

I’ll clean up the code a bit over the next few days and repost.

Tim,

I just wanted to say again that I like your project very much; it looks awesome in your videos.
I hope I have a chance to look into the source tomorrow!
Keep up the good work,
greets
Snoogie

Tim do you have a website?

I just bought a domain and host, so it’s still under development:
timothyscaffidi.com

Wow, really nice :slight_smile:

[quote author=“kylemcdonald”]Wow, really nice :slight_smile:

It looks like you picked some parameters and left them alone. I’d be curious to see them change over time – how many particles there are, their minimum distance, the viscosity/time parameter (imagine if it went into slow motion when you moved quickly)…

It’d also be really neat to see another kind of rendering. You’re using few enough particles there should be some space for this without losing the framerate.

Again, looks great :)[/quote]

I tried to pick the parameters that worked best for this installation. I liked the fire/lava concept, and it makes some nice explosive-looking effects sometimes. I did add a behavior where it will start pulsating if it gets “bored”, but that never happened during the show!

What other rendering methods were you thinking about? I like the force lines; they make a beautiful mesh-like pattern. I’ve also tried velocity lines – actually, this is a combination of both force and velocity lines. Some alpha radial-gradient textures might look nice too if I had a beefier GFX card.

Thanks for the feedback and thanks for the help with the particle system earlier. Oh and btw, the particles are not nearly as fast as your code, I think the main culprit is that I am using a spring physics equation to calculate the forces. If you take a look at it, let me know if there is anything else I can do to optimize it.

I just glanced through your code and didn’t notice any glaring issues – the optimizations would probably be moving things around to minimize how often the code needs to jump between methods. Or maybe precomputing and reusing a few things.

I think for the rendering variation I’m just reminded of Zach Lieberman’s instructions to his class when teaching about particle systems:

[quote author=“zach”]Please try as hard as possible to make the particles and forces drawn more interesting than just circles on the screen. You can use the velocity length and angle to control an object, and draw trails with a point recorder, etc.

for inspiration, take a look at Golan Levin’s AVES:

http://acg.media.mit.edu/people/golan/aves[/quote]

:slight_smile:

[quote author=“kylemcdonald”]
Basically, you have so much information: this complex network of interacting points with positions and velocities… so there must be some really mind-blowing way to show that :roll:[/quote]

[quote author=“smuorfy”]great work!

Could you post the code from the floor version? It’s awesome!!

regards[/quote]

Actually, the floor version’s code is almost identical to the wall version, all you would need to do is remove the global downward force on the particles and get rid of the boredom feature. Oh and change the color to blue :slight_smile:

Thanks for posting this. I was able to run it on Windows (Visual Studio EE). Since I am new to this interactive thing, I would appreciate it if you could summarize how this works in principle, at the level of: 1) how you get the blobs (OpenCV? background subtraction? how many blobs, resolution, etc.), 2) how the blobs interact with the particles, and 3) how the calibration is done vis-à-vis the projection/camera combination.

Cheers.

Excellent questions. These are all very closely related.

  1. I ended up going with simple background subtraction. I initially tried using OpenCV’s blob tracking, but it turned out to be too slow and clunky. I couldn’t get the kind of accuracy I wanted with blobs because a blob might be very large or very small; I wanted to work at a lower level.

  2. My background subtraction is done at a very low resolution, I think something like 64x48. The resolution can be this low because I am not using too many particles.

For each pixel, if the difference is greater than a threshold value, then the particles are repelled from its position. This turns out to work very nicely when there is a large block of pixels that are “solid”.
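In code form, that scheme might look something like this (a sketch only; the names and the falloff are illustrative, not from the actual source):

```cpp
#include <vector>

// hypothetical minimal particle, for illustration
struct Particle2D { float x, y, fx, fy; };

// Walk the low-res difference image; every pixel whose difference exceeds
// the threshold pushes all particles away from its cell center.
void applyPixelRepulsion(const std::vector<unsigned char>& diff, // grayscale difference image
                         int gridW, int gridH,
                         unsigned char threshold,
                         float cellW, float cellH,               // one grid cell in screen units
                         std::vector<Particle2D>& particles,
                         float strength) {
    for (int gy = 0; gy < gridH; gy++) {
        for (int gx = 0; gx < gridW; gx++) {
            if (diff[gy * gridW + gx] <= threshold) continue;   // pixel is "background"
            float px = (gx + 0.5f) * cellW;                     // cell center in screen space
            float py = (gy + 0.5f) * cellH;
            for (auto& p : particles) {
                float dx = p.x - px, dy = p.y - py;
                float d2 = dx * dx + dy * dy;
                if (d2 > 0) {
                    float inv = strength / d2;                  // push away, falling off with distance
                    p.fx += dx * inv;
                    p.fy += dy * inv;
                }
            }
        }
    }
}
```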

  3. If you want to try out the video perspective correction, it works like this: after the program loads, you can press and hold either the ‘v’ key or the middle mouse button. This will switch on a video overlay of the background subtraction image. It will also show 4 circles, one in each corner. You can move these circles to warp the incoming video: move the mouse near a circle and press the left mouse button to start dragging it around. You will see the raw camera video while dragging so that you can line up the corners with whatever you want, for example a projection screen, wall or monitor. You can also just use them to “zoom” the camera by cropping out what you don’t want it to see.

The video warping is done by the function getQuadSubImage() which basically calculates which pixels to copy from a source image to a destination image based on the 4 corners given:

  
void getQuadSubImage( unsigned char * inputData, unsigned char * outputData,
                      int inW, int inH, int outW, int outH,
                      int x1, int y1, int x2, int y2,
                      int x3, int y3, int x4, int y4, int bpp ) {
    for(int x = 0; x < outW; x++) {
        for(int y = 0; y < outH; y++) {
            // normalized position of this output pixel
            float xlrp = x/(float)outW;
            float ylrp = y/(float)outH;
            // bilinearly interpolate the 4 corners (1 = top-left, 2 = top-right,
            // 3 = bottom-right, 4 = bottom-left) to find the input pixel to sample
            int xinput = (x1*(1-xlrp)+x2*xlrp)*(1-ylrp) + (x4*(1-xlrp)+x3*xlrp)*ylrp;
            int yinput = (y1*(1-ylrp)+y4*ylrp)*(1-xlrp) + (y2*(1-ylrp)+y3*ylrp)*xlrp;
            int inIndex = (xinput + yinput*inW)*bpp;
            int outIndex = (x + y*outW)*bpp;
            // copy one pixel (bpp bytes)
            memcpy(outputData+outIndex, inputData+inIndex, bpp);
        }
    }
}
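A quick way to sanity-check the corner ordering (1 = top-left, 2 = top-right, 3 = bottom-right, 4 = bottom-left): with an identity quad spanning the whole input, the function should copy pixels straight through. The function is repeated below only so the snippet compiles on its own:

```cpp
#include <cstring>

// same function as above, repeated so this snippet stands alone
void getQuadSubImage(unsigned char * inputData, unsigned char * outputData,
                     int inW, int inH, int outW, int outH,
                     int x1, int y1, int x2, int y2,
                     int x3, int y3, int x4, int y4, int bpp) {
    for(int x = 0; x < outW; x++) {
        for(int y = 0; y < outH; y++) {
            float xlrp = x/(float)outW;
            float ylrp = y/(float)outH;
            int xinput = (x1*(1-xlrp)+x2*xlrp)*(1-ylrp) + (x4*(1-xlrp)+x3*xlrp)*ylrp;
            int yinput = (y1*(1-ylrp)+y4*ylrp)*(1-xlrp) + (y2*(1-ylrp)+y3*ylrp)*xlrp;
            memcpy(outputData + (x + y*outW)*bpp,
                   inputData + (xinput + yinput*inW)*bpp, bpp);
        }
    }
}
```

For a 2x2 single-channel image, passing corners (0,0), (2,0), (2,2), (0,2) — i.e. the full input, using the width/height rather than width-1/height-1 — maps every output pixel back onto itself, so the output buffer ends up identical to the input.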

[quote author=“Tim S”]Excellent questions. These are all very closely related.
…[/quote]

Thanks, this is very helpful. I am going to try to reproduce your setup for educational purposes.

Best.

Update: I was able to reproduce your app more or less on the wall and on the floor. Calibration wasn’t straightforward – dragging the red circles sometimes seemed to be ignored and the image wouldn’t warp no matter what.

Another more important issue was interference of the human shadows in my ad hoc setup. It seems that this issue had to be dealt with by some strategic placement of the projector and the camera. Could you kindly comment on that?

Thanks again.

I am trying to build a similar setup, and at this point am thinking of reusing the warping part and the simplicity of the difference calculation instead of opencv’s blobs, silhouettes etc.

[quote author=“boba”]Update: I was able to reproduce your app more or less on the wall and on the floor.
…[/quote]

In my code it only shows you the warped output in differenced form, so it may be hard to tell, and usually the warping is very slight if you have a decent vantage point for both projector and camera. You may want to display the full warped image at the same time as the unwarped image, for reference.

As for shadows, yes, they are annoying. For my installation in the gallery I had an almost perfect overhead placement, and the lighting was fairly uniform, but I did have to block out one light near the door. It can be very finicky to set up. I think I would like to try using an IR camera and some IR lights on the background. Other than that, try to stay away from direct lighting if you can, and if you can’t, place the light in the same spot as the camera to reduce stray shadows.

If anyone is interested in creating a floor projection project, I have just released my open-source solution for floor projection tracking. It is called openFloor and it consists of two programs: one written with openFrameworks and the other written in Processing. The first program is responsible for detecting and tracking all of the humans on the floor, and the second one is responsible for producing the user interface.

If you would like to download it and try it out just visit: http://www.vladcazan.com/openfloor/intr-…-openfloor/