Image glitch/noise/distortion study


I’ve been messing with openFrameworks for weeks and finally built two simple projects using computer vision and the Kinect.
I’ve also been playing with OpenGL, but my knowledge there is still limited.

Now I want to do a “study” of image glitching/processing.
And then, move the same concept to video.
Some cool effects would be, for example, pixelating an image (and then rendering those pixels in 3D), old TV interference, etc…

So for a starting point, I would like to reproduce something like this:
and this

I don’t want the source code (I want to struggle and learn how and why “things happen”), but I would like some advice on getting that result.
I mean, which steps, concepts, etc. suit this best.

I hope I’ve been clear enough and thanks a lot for your help guys!


Well, reading the source code is a good way of figuring out how it works :slight_smile:

The technique in the images is slit scanning.

and everything you could ever need to know about it you can probably find here (as well as some OF stuff)
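In case it helps to see the idea in isolation: here is a minimal sketch of slit scanning in plain C++ (no openFrameworks calls; the `Frame` struct and `slitScan` name are made up for illustration). Each output row is sampled from a progressively older frame, so time runs down the image:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <deque>
#include <vector>

// One grayscale frame, stored row-major.
struct Frame {
    int w, h;
    std::vector<uint8_t> px;
    uint8_t at(int x, int y) const { return px[y * w + x]; }
};

// Slit scan: output row y is copied from the frame y steps back in time
// (clamped to the oldest frame we still have). history.front() is the
// newest frame; older frames follow.
Frame slitScan(const std::deque<Frame>& history) {
    const Frame& newest = history.front();
    Frame out{newest.w, newest.h, std::vector<uint8_t>(newest.px.size())};
    for (int y = 0; y < out.h; ++y) {
        const Frame& src = history[std::min<size_t>(y, history.size() - 1)];
        for (int x = 0; x < out.w; ++x)
            out.px[y * out.w + x] = src.at(x, y);
    }
    return out;
}
```

In OF you would push a copy of the camera pixels into the deque every frame (and pop the back once it is taller than the image), then assemble and draw the output each frame.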

Thanks for the resources.

It’s been a bit hard to get used to OF since I’m not used to coding in C++.
I’m also starting to mess with OpenGL, and I think I have to consider it too for my image “study”.

Anyway thanks for your time!

Hi !

I think that’s quite an interesting topic you have here.

I might be wrong, but the picture you posted here seems to belong more to the analog world than the digital one (or here, apparently, it is messing with a scanner…). If you want to do it purely digitally, I’d try to build a few primitive functions first:

  • Stretch the image by repeating a line of pixels many times
  • Take a block of lines and shift them along the x axis, each line a bit more/less following some sinusoidal function
  • Define an image transformation that takes a center (where you apply it), a window (the range over which you apply the transform, weighted so that the farther from the center, the weaker the effect), and a direction (see the explanation below). This transformation moves the color values along the given direction; how far each value moves depends on the window weighting and on the colour itself.

The idea for the transformation is to mimic a prism, which splits the light and deviates it at an angle that depends on the wavelength (hence creating a rainbow effect).
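A minimal way to fake that prism idea digitally is to sample each RGB channel with its own horizontal offset (the window weighting described above is left out here for brevity). This is only a sketch in plain C++; the `Rgb` struct and `prismShift` name are made up for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// RGB image, interleaved, row-major; px has size w * h * 3.
struct Rgb {
    int w, h;
    std::vector<uint8_t> px;
    uint8_t at(int x, int y, int c) const { return px[(y * w + x) * 3 + c]; }
};

static int clampX(int x, int w) { return x < 0 ? 0 : (x >= w ? w - 1 : x); }

// "Prism" primitive: each channel is sampled with a different horizontal
// offset, like a prism deviating each wavelength at its own angle.
Rgb prismShift(const Rgb& in, int spread) {
    Rgb out{in.w, in.h, std::vector<uint8_t>(in.px.size())};
    const int off[3] = {-spread, 0, spread}; // per-channel offsets: R, G, B
    for (int y = 0; y < in.h; ++y)
        for (int x = 0; x < in.w; ++x)
            for (int c = 0; c < 3; ++c)
                out.px[(y * out.w + x) * 3 + c] =
                    in.at(clampX(x + off[c], in.w), y, c);
    return out;
}
```

Adding the center/window weighting would just mean scaling `spread` per pixel by some falloff function of the distance to the chosen center.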

Finally, once you have these primitives, you can apply them randomly to your image, and randomly vary their parameters (the number of lines the sine effect covers, the frequency of the sine, …).
If you manage to build all this, you may get a result close to what you want (or at least, I hope it will be an interesting result!)
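For what it’s worth, the sinusoidal line-shift primitive from the list above can be sketched like this in plain C++ (the `Gray` struct and `sineShift` name are hypothetical; rows wrap around so no pixels fall off the edge):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Grayscale image, row-major.
struct Gray {
    int w, h;
    std::vector<uint8_t> px;
};

// Shift each row in [yStart, yStart + count) horizontally by a sinusoidal
// offset. Pixels wrap around the row instead of being lost.
Gray sineShift(const Gray& in, int yStart, int count, float freq, float amp) {
    Gray out = in;
    for (int y = yStart; y < yStart + count && y < in.h; ++y) {
        int shift = (int)std::lround(amp * std::sin(freq * (y - yStart)));
        for (int x = 0; x < in.w; ++x) {
            int sx = ((x - shift) % in.w + in.w) % in.w; // wrap around
            out.px[y * in.w + x] = in.px[y * in.w + sx];
        }
    }
    return out;
}
```

Feeding `yStart`, `count`, `freq`, and `amp` from something like `ofRandom` every few frames would give you the randomized variation described above.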

Another thing you could consider to obtain interesting and unplanned results is to play with some kind of Larsen (feedback) effect, like when you film the live feed from your own camera.
This can be done using FBOs in OpenGL (you draw an image into an FBO and apply some filtering to it, then draw that FBO into another FBO using another filter, and then you swap the FBOs and start again… something like that)

You should look at the shader examples of openFrameworks (or anywhere else) to see some of the fun processing you can do on images!

Hello Adrigil,

I have been experimenting using a different approach, in this case a visual version of granular synthesis:

Maybe what could interest you is that I’m using GLSL shaders, which is a very OpenGL-centric approach. It’s not very resource-consuming, so I think you could apply it to video as well.


