How to edit screen pixels

So in Processing you can edit pixels with loadPixels() and updatePixels(), and I’m trying to achieve the same here. Below is the code I’m using.

I know that ofPixels is essentially an unsigned char* plus some functionality. Since it just wraps a pointer to the pixel data, the line:

img.getPixels() = pixels;

shouldn’t be necessary, but without it I don’t get any visual output.

Its result can be seen in the image below, but I don’t understand why I get three images.

So can you give me a correct (and maybe the best) way to edit screen pixels?

ofImage img;
ofPixels pixels;

void ofApp::setup(){
	img.allocate(ofGetWidth(), ofGetHeight(), OF_IMAGE_GRAYSCALE);	
}

void ofApp::update(){
	float noiseScale = ofMap(mouseX, 0, ofGetWidth(), 0, 0.1);
	float noiseVel = ofGetElapsedTimef();

	img.grabScreen(0, 0, ofGetWidth(), ofGetHeight());
	pixels = img.getPixels();
	int w = img.getWidth();
	int h = img.getHeight();
	for (int y = 0; y < w; y++) {
		for (int x = 0; x < h; x++) {
			int i = y * w + x;
			float noiseValue = ofNoise(x * noiseScale, y * noiseScale, noiseVel);
			pixels[i] = 255 * noiseValue;
		}
	}
	img.getPixels() = pixels;
	img.update();
}


void ofApp::draw(){
	ofSetColor(ofColor::white);
	img.draw(0, 0, ofGetWidth(), ofGetHeight());
}

In C++, when you do:

pixels = img.getPixels()

you are making a copy of the original pixels inside the image into the variable pixels, so when you later modify pixels it won’t change the original. To work on the original you can call img.getPixels() every time, or better, use a reference like:

ofPixels & pixels = img.getPixels();
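
To make the difference concrete, here’s a minimal sketch (assuming img is already allocated):

ofPixels copy = img.getPixels();        // copies the pixel data
copy.setColor(0, 0, ofColor::black);    // only changes the copy, the image is untouched

ofPixels & ref = img.getPixels();       // refers to the pixels inside the image
ref.setColor(0, 0, ofColor::black);     // changes the image itself
img.update();                           // upload the changed pixels to the texture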

You have switched w and h in the nested for loops. Also, the image will not be grayscale after you grab the screen but RGB; that’s the reason for the triple noise images.
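
You can check that quickly (a sketch, not from the original post):

img.grabScreen(0, 0, ofGetWidth(), ofGetHeight());
// after grabScreen() the image holds color data, i.e. 3 channels per pixel
ofLog() << "channels after grab: " << img.getPixels().getNumChannels();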

This code works for me:

void ofApp::update(){
    float noiseScale = ofMap(mouseX, 0, ofGetWidth(), 0, 0.1);
    float noiseVel = ofGetElapsedTimef();
    
    img.grabScreen(0, 0, ofGetWidth(), ofGetHeight());
    int w = img.getWidth();
    int h = img.getHeight();
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int i = y * w + x;
            float noiseValue = ofNoise(x * noiseScale, y * noiseScale, noiseVel);
            unsigned char noisegrey = 255 * noiseValue;
            img.setColor( x, y , ofColor(255 * noiseValue));
        }
    }
    img.update();
}

Edit:
The noisegrey variable is a leftover from my experiments and should be removed.

I had declared ofPixels as a reference in the .h before, which didn’t work. That was because it was outside the scope of update(), and that’s not how references work. I understand now, thanks.
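
To check my understanding, a tiny plain C++ sketch of that behaviour:

#include <iostream>

int main(){
	int a = 1;
	int b = 2;
	int & r = a;    // a reference has to be bound to something when it is declared
	r = b;          // this does not re-bind r, it copies b's value into a
	std::cout << a << std::endl;    // prints 2, r still refers to a
	return 0;
}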

So that means it’s drawing the RGB channels next to each other, right? But why is that?

Your code and my code both give me an fps below 10, which is pretty low. And mine is still drawing three images next to each other.

Do you know how to get a better fps and, instead of three, one full-window color image? If it’s in color I can switch to grayscale later, I think.

float noiseScale = ofMap(mouseX, 0, ofGetWidth(), 0, 0.1);
float noiseVel = ofGetElapsedTimef();

img.grabScreen(0, 0, ofGetWidth(), ofGetHeight());
ofPixels& pixels = img.getPixels();
int w = img.getWidth();
int h = img.getHeight();
for (int y = 0; y < h; y++) {
	for (int x = 0; x < w; x++) {
		int i = y * w + x;
		float noiseValue = ofNoise(x * noiseScale, y * noiseScale, noiseVel);
		pixels[i] = 255 * noiseValue;
	}
}
img.update();

The slowest thing here is grabbing the screen, and there’s really not much to do about that since it’s a hardware limitation; you can look into using PBOs, but it’s a relatively complex technique. In any case, you don’t need to grab the screen at all if you are going to overwrite the whole thing anyway.

Also, iterating through the pixels in an image is simpler and usually faster like this:

float noiseScale = ofMap(mouseX, 0, ofGetWidth(), 0, 0.1);
float noiseVel = ofGetElapsedTimef();

ofPixels& pixels = img.getPixels();
int w = img.getWidth();
int h = img.getHeight();
for(auto line: pixels.getLines()){
	int x = 0;
	for(auto pixel: line.getPixels()){
		float noiseValue = ofNoise(x * noiseScale, line.getLineNum() * noiseScale, noiseVel) * 255.;
		pixel[0] = noiseValue;
		pixel[1] = noiseValue;
		pixel[2] = noiseValue;
		x+=1;
	}
}
img.update();

The other thing that is fairly slow is calculating ofNoise for each pixel.
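
If that becomes the bottleneck, one option (just a sketch, not from the original post) is to compute the noise at a lower resolution and let the GPU scale it up when drawing:

// assumes img was allocated at a quarter of the window size in setup(), e.g.
// img.allocate(ofGetWidth() / 4, ofGetHeight() / 4, OF_IMAGE_GRAYSCALE);
void ofApp::update(){
	float noiseScale = ofMap(mouseX, 0, ofGetWidth(), 0, 0.1);
	float noiseVel = ofGetElapsedTimef();

	ofPixels & pixels = img.getPixels();
	int w = pixels.getWidth();
	int h = pixels.getHeight();
	for (int y = 0; y < h; y++) {
		for (int x = 0; x < w; x++) {
			// one channel per pixel, so y * w + x is the right index here
			pixels[y * w + x] = 255 * ofNoise(x * noiseScale, y * noiseScale, noiseVel);
		}
	}
	img.update();
}

Drawing it at ofGetWidth() x ofGetHeight() in draw() stretches it to fill the window, and filling the smaller image needs 16x fewer ofNoise() calls.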

Thank you, this works. I’m guessing the fastest way would be a shader.

I have two last questions; they’re about the color array.

First, when using the function getPixels() I wasn’t aware I was accessing an array. I think this explains the triple noise images. The documentation didn’t say anything about it, and I didn’t find any way to print the array or access its length to use in a for loop.

So for future encounters what method do I need to use to know I’m working with an array?

And secondly, why are these:

 pixel[0];
 pixel[1];
 pixel[2];

in CMYK and not RGB?

About ofPixels being an array: I guess that’s obvious if you know how memory is laid out for images in the computer, but I agree the documentation could be clearer about that. Not sure what you mean, though, by a method to know that. The total size of ofPixels can be queried using pixels.size(), and the width and height using pixels.getWidth()/getHeight().
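
For instance, a quick sketch (not from the original post) of querying those:

ofPixels & pixels = img.getPixels();
ofLog() << pixels.getWidth() << " x " << pixels.getHeight()
	<< ", " << pixels.getNumChannels() << " channels, "
	<< pixels.size() << " components in total";

// pixels.size() counts every component (width * height * channels),
// so a plain for loop over the whole thing looks like this:
for (size_t i = 0; i < pixels.size(); i++) {
	pixels[i] = 255 - pixels[i];    // e.g. invert every component
}
img.update();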

Also not sure what you mean with CMYK? The pixels are usually in RGB, not CMYK, and it depends on how you allocated the original image (using OF_IMAGE_GRAYSCALE, OF_IMAGE_RGB or OF_IMAGE_RGBA).

		pixel[0] = noiseValue;
		pixel[1] = noiseValue;
		pixel[2] = noiseValue;

These are there because the second for loop iterates over the pixels in each line, which, being RGB, are composed of 3 components: r, g and b at indices 0, 1 and 2.

I see that I didn’t ask my question clearly. The getPixels() function is called two times in the code that you gave, and my question was about the second time, in the nested for loop. I made a note in the code below.

ofPixels& pixels = img.getPixels(); <-- first time
int w = img.getWidth();
int h = img.getHeight();
for(auto line: pixels.getLines()){
	int x = 0;
	for(auto pixel: line.getPixels()){ <-- second time

This second time getPixels() is called, you get access to an array (again). But the variable pixel in this for loop doesn’t have the function size(). So I wouldn’t know that I’m working with an array and that I could access it with pixel[n], as can be seen in the rest of the code below.

		float noiseValue = ofNoise(x * noiseScale, line.getLineNum() * noiseScale, noiseVel) * 255.;
		pixel[0] = noiseValue;
		pixel[1] = noiseValue;
		pixel[2] = noiseValue;
		x+=1;
	}
}

How would I know that I’m working with an array?

And about the CMYK: in the code below I noted what color I get when I comment two of the three lines out.

pixel[0] = 0; <-- cyan
pixel[1] = 0; <-- magenta
pixel[2] = 0; <-- yellow

I would assume there is also a pixel[3], but I didn’t check that out.

That second getPixels() doesn’t really return an array but an iterator over the pixels in each line of the original pixels array. An RGB image is like an array of the components one after another, like:

rgbrgbrgbrgbrgb...
rgbrgbrgbrgbrgb...
....

What each pixel in the second for loop represents is one rgb in the original array, so accessing pixel[0] gives you r for that pixel, 1 -> g and 2 -> b.
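
To make that concrete, a small sketch (the setRGB helper is hypothetical, just for illustration, not part of openFrameworks):

// maps (x, y) to the right spot in the flat rgbrgb... array
void setRGB(ofPixels & pixels, int x, int y, unsigned char r, unsigned char g, unsigned char b){
	size_t channels = pixels.getNumChannels();          // 3 for an RGB image
	size_t i = (y * pixels.getWidth() + x) * channels;  // first component of pixel (x, y)
	pixels[i + 0] = r;
	pixels[i + 1] = g;
	pixels[i + 2] = b;
}

Indexing with just y * w + x, like in the original update(), treats the buffer as one component per pixel, so each row of noise you write only spans a third of a real image row; that’s what made the noise show up as three squashed copies next to each other.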

I understand now, thank you for explaining.