Question about memory consumption in oF

I am starting a new project that requires around 200 animated sprites to be playing at the same time so I knocked together a couple of small demos, one in oF and one in WPF/C#. All I have done is make a basic sprite class that loads a directory of images into memory. I then make 200 of these objects and play them back in a loop.

In the oF sprite class I am calling setUseTexture(false) on each frame and copying the pixels of the current frame into another ofImage, which is what actually gets drawn to screen.
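
Roughly, the sprite class looks like this (a simplified sketch, not the actual code; names are illustrative):

// simplified sketch of the sprite class
class Sprite {
public:
    void load(const std::string & dirPath){
        ofDirectory dir(dirPath);
        dir.allowExt("png");
        dir.listDir();
        for(std::size_t i = 0; i < dir.size(); i++){
            ofImage img;
            img.setUseTexture(false); // keep this frame in RAM only
            img.load(dir.getPath(i));
            frames.push_back(img);
        }
    }
    void update(){
        if(frames.empty()) return;
        current = (current + 1) % frames.size();
        // copy the current frame's pixels into the one drawable image
        display.setFromPixels(frames[current].getPixels());
    }
    void draw(float x, float y){
        display.draw(x, y);
    }
private:
    std::vector<ofImage> frames; // every frame, decompressed, in RAM
    ofImage display;             // this one owns a texture and gets drawn
    std::size_t current = 0;
};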

Surprisingly, the WPF application works better than the oF one, and I am wondering why.

WPF - starts instantly, 20% CPU, 1GB memory.
oF - memory hits close to 2GB and crashes while loading the images.

I know a sprite sheet would be better in oF but am wondering why it is struggling to keep up with WPF (which in my experience is normally a lot slower/more memory intensive) in this instance.

Can anyone shed any light on this?

If you use an ofImage for each sprite frame you're filling both RAM and VRAM (GPU memory). What you want is to load the sprite sheet to the GPU once and then index into GPU memory for rendering. There must be an addon that loads sprite sheets and does exactly this.
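
Sketched out (assuming a horizontal strip of equally sized frames; the file name and numbers are made up for the example):

ofTexture sheet;                 // one atlas texture living on the GPU
ofLoadImage(sheet, "sheet.png"); // hypothetical sprite sheet file

// in draw(): pull frame n straight out of the strip, no pixel copies
int   n  = ofGetFrameNum() % 65;   // e.g. 65 frames per sequence
float fw = 190, fh = 190;          // e.g. 190x190 frames
sheet.drawSubsection(0, 0, fw, fh, // where to draw and how big
                     n * fw, 0);   // offset of the frame inside the sheet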

What kills it is copying the sprite into an ofImage on every frame.

You could do something like:

ofTexture sprites[NUM_SPRITES];
vector<string> disk_image_name; // path on disk for each frame
ofPixels auxPixels;             // single reusable pixel buffer
...
for(int i = 0; i < NUM_SPRITES; i++)
{
    ofLoadImage(auxPixels, disk_image_name[i]); // decode into the shared buffer
    sprites[i].loadData(auxPixels);             // upload to the GPU texture
}

...
void draw()
{
    sprites[current].draw(x, y); // x, y: wherever the sprite goes
}

This uses a single pixel buffer (auxPixels) to load from disk and just uses that to set the ofTextures. You'll need to allocate several ofTextures for this. It would be best to use something like ofxSpriteSheetRenderer; I haven't used it myself, but it's been recently updated, which is usually a good sign. Anyhow, a texture atlas is always best performance-wise.

Good luck!

Also, if you are using one ofImage to load each sprite frame and then copying that into another ofImage, you probably have two copies of each image in memory.

ofLoadImage can also load directly into an ofTexture without needing any intermediate ofPixels:

vector<ofTexture> sprites(NUM_SPRITES);
for(auto & sprite: sprites){
    ofLoadImage(sprite, imagePath); // imagePath: the path of this sprite's frame
}
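
Drawing is then just indexing into that vector (current being whatever frame counter you keep; the coordinates are placeholders):

void ofApp::draw(){
    sprites[current].draw(0, 0); // draws straight from the texture in VRAM
}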

What is weird, though, is that two copies of each image results in more than twice the memory. If I do this:

for (int i = 0; i < dir.size(); i++){
    images.push_back(ofImage());
    images[i].load(dir.getPath(i)); // fills both ofPixels (RAM) and ofTexture (VRAM)
}

The memory goes through the roof (up to about 1.8GB before it crashes). But if I do this:

for (int i = 0; i < dir.size(); i++){
    textures.push_back(ofTexture());
    ofLoadImage(textures[i], dir.getPath(i));
}

The memory sticks at less than 200MB and it all works perfectly.

I'm testing it and I'm getting the expected memory usage. What platform are you on, and what size is each of the images you are loading?

Loading ofImages loads into both ofPixels and ofTexture, and you only need the ofTexture side for rendering.

I'm on Windows 8 running under Parallels. The images are 190x190 PNGs, around 13.8 KB each, with around 65 of them per sequence.

I’ll test on another machine too.

The size on disk is not very relevant since it's compressed; the real size would be 190 x 190 x 4 = 144,400 bytes, which is approx 141 KB per image, so for the 65 images a little less than 9 MB. So it's really strange that it's using so much memory. The fact that you are running under Parallels might be what is causing the crash; since OF uses the hardware very intensively, running it under virtualization is usually not a good idea.

Ah OK, so 9 MB per image sequence, and I am creating 200 of these, which results in 1.8 GB.

That makes sense, and I am getting that memory usage on OS X. I'm getting about 2.5 GB on Windows and need to add the /LARGEADDRESSAWARE linker flag in Visual Studio to stop it crashing.

What I don't understand now is how, under Parallels, the same code using ofTexture results in less than 200 MB of memory. I guess it's just not telling me the whole story.

When you only use textures, the images go directly to the graphics card, so they use no RAM; they use memory on the graphics card instead.

Ah, of course! Thanks.