Q: How to set colors NON-LINEARLY from image to videograbber

Hi. I have a working project based on the meshFromCamera example. However, I’d like to set part of the colors of the camera stream so that part of the vidGrabber screen would consist of rectangular bits of a JPG file. I couldn’t find a solution to this; everywhere I looked, the vertex colors are set linearly, one vertex after another. For example like this:

for (int i = 0; i < vidGrabber.getWidth() * vidGrabber.getHeight(); i++) {
    int x = mainMesh.getVertex(i).x;
    int y = mainMesh.getVertex(i).y;
    mainMesh.setColor(i, jpgFile.getColor(x, y));
}

How could I do this by setting the vertices so that it goes, say, only 100 pixels along X and then moves down in Y, so I could change rectangular areas of the vidGrabber stream? So basically: how do I set vertex colors in [x][y] style?


 mainMesh.setColor(x + vidGrabber.getWidth() * y, jpgFile.getColor(x,y));

Thanks a lot, works great! Can someone explain it a little bit? It’s a bit esoteric to me :slight_smile: So for example, if x=0 and y=5, does it skip 5 times the number of vertices that constitute one row of pixels on screen?

In this particular case, one vertex is created for each pixel, so the mesh’s vertex indices are the same as the pixel indices.
take a look at this about pixels

in the following there is an explanation about accessing pixels.

Great material, thanks for sharing! So basically there are three times more indices for pixel color information than there are pixels, in the case of an RGB image? In the vidGrabber example it is like this:

      ofFloatColor sampleColor(vidGrabber.getPixels()[i*3]/255.f,     // r
                               vidGrabber.getPixels()[i*3+1]/255.f,   // g
                               vidGrabber.getPixels()[i*3+2]/255.f);  // b

So one pixel actually takes 24 bits (3 bytes) in memory, in the case of an 8-bit-per-channel JPG file? I was trying to calculate a JPG file’s size by hand, deducing from the fact that it is an 8-bit RGB file of certain pixel dimensions, but the size I got was a lot larger than the actual size on disk. Is it because of compression? For example, if I have an 8-bit RGB JPG that is 600px × 600px, it should be 1,080,000 bytes (1.08 megabytes), yet the actual file was a lot smaller. Do you know what might be the reason? I assume the JPG file’s headers don’t take too many bytes.

Hi! JPEG images have lossy compression, and the amount of compression can be specified when creating the file.

A 600x600 image with random colored pixels can occupy anything between 50 KB and 1.1 MB, depending on the compression amount. An image with the same dimensions but a flat color takes less than 5 KB.

Alright, thanks!

So what does that compression do memory-wise, if the amount per channel is already the smallest unit (1 byte) in an RGB file? I’d assume that to get smaller than 3 bytes per pixel you would somehow have to pack those bytes together, and a viewer program would need to know about it, so the structure of the file and the compression should be defined somewhere in the JPG container?

And if you open a JPG in a text editor, you see some gibberish characters; are those the values of the bytes that the image consists of?

I think you should check https://en.wikipedia.org/wiki/JPEG :slight_smile: The bytes you see when opening a JPEG file in a text editor do not directly correspond to pixels as they may in other formats. You can see that by changing a random byte in a JPEG file: such a change may affect most pixels in the image.

JPEG compression works by discarding data that is not perceived by humans. So the actual bytes in the file are not representative of the pixel values; the whole file needs to be decompressed in order to recreate them. This compression is lossy, which means the compressed image will not be the same as the original: the pixel values will not be identical, but quite close. Read that Wikipedia article for the details.
When working in OF, ofImage and ofPixels always hold decompressed data, even when you open a JPEG file. It is OF that does all the decompression behind the scenes when loading, so you don’t have to worry about it. If you want to store files without losing info, use TIFF files, https://es.wikipedia.org/wiki/TIFF . To save a TIFF from an ofImage, just append the .tiff extension when saving, something like this:

ofImage myImage;
// do something with the image...
myImage.save("myImage.tiff"); // the .tiff extension selects lossless TIFF output

hope this helps.

Thanks! Really interesting that there is a type of frequency transform (a DCT, a relative of the FFT) going on inside JPEGs as well. I was reading the ISO JPEG standard:

Why I am asking about this is that I was working on a Python script which manipulates the bits of WAV files. In that process I realized that I don’t know anything about the actual files people usually work with, for example JPG. So I would like to see those files in non-gibberish form, see what they actually contain, and maybe manipulate that information or move it to another file container and see/hear what happens. Apparently in JPEG there are these markers for the image information:

Start of image marker – Marks the start of a compressed image represented in the interchange format or abbreviated format.

End of image marker – Marks the end of a compressed image represented in the interchange format or abbreviated format.

And this is, I assume, the part that OF is dealing with? I was trying to read the OF source code, and apparently OF uses FreeImage.h and .cpp to parse images? At least it was included in ofImage; I couldn’t find the FreeImage.cpp file on my computer though.

Right, the DCT (JPEG’s relative of the FFT) is super important for JPEG.
OF uses FreeImage as you noted, but it comes precompiled for ease of use; that’s why you won’t find its implementation files.
You might want to take a look at Rosa Menkman’s “Vernacular of File Formats”.