Rendering/displaying MASSIVE images

I’m working on a project that will use images on the order of 100,000+ pixels per axis. Even the Ubuntu image viewer seems to have issues with 12,000 x 12,000 images, so I’m wondering about solutions for real-time or pre-rendered panning, zooming, etc.

It seems that the solution would be to break the image up into tiles and render only the necessary ones on an as-needed basis. Is there a pre-built solution for this? I came across this quite old addon: https://github.com/timknapen/ofxGiantImage but I also wonder whether I need to consider how the original massive image is produced. Will it necessarily have to be chopped up into a series of images beforehand? I assume so. Any insights or strategies appreciated.


I would suggest the tiling solution, the same one used by online map services. It is not too complicated: depending on the use case, you divide your big image into clusters of smaller tiles, and then, depending on the zoom level, you fetch the corresponding cluster. There is a bit more overhead, but this way you can handle zooming and still keep high-quality detail.
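For illustration, here is a rough openFrameworks sketch of that idea (not from this thread). It assumes the big image has already been pre-cut into 256 x 256 tiles on disk under a hypothetical `tiles/<zoom>/<col>_<row>.png` layout, and it only loads and draws the tiles that overlap the current view:

```cpp
// Sketch of map-style tiling: the source image is assumed to be pre-cut
// into 256x256 tiles named "tiles/<zoom>/<col>_<row>.png" (hypothetical
// layout). Only tiles intersecting the current view rectangle are drawn.
#include "ofMain.h"
#include <map>
#include <string>

class TiledViewer {
public:
    int tileSize = 256;                   // pixel size of each pre-cut tile
    int zoom = 0;                         // which zoom-level directory to read from
    std::map<std::string, ofImage> cache; // tiles loaded so far, keyed by path

    // 'view' is the visible rectangle in image coordinates at this zoom level
    void draw(const ofRectangle& view) {
        int firstCol = floor(view.getLeft()   / tileSize);
        int lastCol  = floor(view.getRight()  / tileSize);
        int firstRow = floor(view.getTop()    / tileSize);
        int lastRow  = floor(view.getBottom() / tileSize);

        for (int row = firstRow; row <= lastRow; row++) {
            for (int col = firstCol; col <= lastCol; col++) {
                std::string path = "tiles/" + ofToString(zoom) + "/"
                                 + ofToString(col) + "_" + ofToString(row) + ".png";
                if (cache.count(path) == 0) {
                    ofImage img;
                    if (!img.load(path)) continue; // tile missing: skip it
                    cache[path] = img;
                }
                cache[path].draw(col * tileSize, row * tileSize);
            }
        }
    }
};
```

In a real project you would load tiles on a background thread (e.g. with ofThread or ofThreadChannel) and evict old entries from the cache, otherwise the synchronous load() calls will cause hitches while panning.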


Hi prismspecs!

In theory, the splitting of the big image can be done inside your project. The biggest image we use is about 18,000 px wide. It is (was) already too big to pass to the graphics card memory as a single texture.
But there is no problem loading this image in openFrameworks and then splitting it into 12 tiles (we are using setROI(…) and getRoiPixels(…) from ofxCvColorImage) and drawing them on the screen. We do not even check whether a specific tile is in sight and needs to be rendered; if I recall correctly this is taken care of by openFrameworks/ofCamera, but I am not sure here.
We are also using mipmaps (ofTexture::generateMipmap()), which is incredibly convenient and easy. No problem then to zoom out and see the whole picture.
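As a rough sketch of what that setup might look like (the posts above do not include code, so the class, file name, and 4 x 3 grid here are hypothetical assumptions):

```cpp
// Sketch: load one huge image on the CPU, cut it into tiles via ofxOpenCv
// ROIs, and upload each tile as its own ofTexture with mipmaps so that
// zooming out still looks clean.
#include "ofMain.h"
#include "ofxOpenCv.h"

class BigImage {
public:
    std::vector<ofTexture> tiles;
    std::vector<ofRectangle> rects;  // where each tile sits in the big image

    void load(const std::string& path, int cols, int rows) {
        ofPixels pixels;
        ofLoadImage(pixels, path);             // full image in CPU memory only
        pixels.setImageType(OF_IMAGE_COLOR);   // ofxCvColorImage expects RGB

        ofxCvColorImage cvImg;
        cvImg.allocate(pixels.getWidth(), pixels.getHeight());
        cvImg.setFromPixels(pixels);

        int tileW = pixels.getWidth()  / cols;
        int tileH = pixels.getHeight() / rows;

        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                cvImg.setROI(x * tileW, y * tileH, tileW, tileH);
                ofTexture tex;
                tex.enableMipmap();            // reserve mipmap storage
                tex.loadData(cvImg.getRoiPixels());
                tex.generateMipmap();          // build the mip chain
                tex.setTextureMinMagFilter(GL_LINEAR_MIPMAP_LINEAR, GL_LINEAR);
                tiles.push_back(tex);
                rects.push_back(ofRectangle(x * tileW, y * tileH, tileW, tileH));
                cvImg.resetROI();
            }
        }
    }

    void draw() {
        for (size_t i = 0; i < tiles.size(); i++) {
            tiles[i].draw(rects[i].x, rects[i].y, rects[i].width, rects[i].height);
        }
    }
};
```

Because each tile carries its own mipmap chain, zooming far out simply samples the smaller mip levels instead of fighting one oversized texture.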

hope this helps - have a great day!
oe
