Mapping a 2D image of a room onto 3D objects to recreate the space


I am trying to find an addon/solution for mapping image textures onto 3D objects to recreate the space as seen in a photograph.

I had been working on this problem in openFrameworks by creating planes within a scene, but they never had textures applied to them. I also had some difficulty with the depth of the various planes, since the image was drawn first and the 3D space was created over top of it.

I switched to Unity to rework this project, mostly for the shadows and such, and discovered how easy it is to work on these kinds of spatial recreations by using a virtual projector tied to a static camera.

You can see an image here, where the lower part is the view from the camera, and the upper part is how that 3D space has been recreated to fit the image, with textures on the various objects.

I have tried searching for addons/solutions, but most seem to imply projection mapping, which this kind of is, except my result is never meant to be projected in the real world. I would ‘like’ to use openFrameworks, since the earlier projects compile well on the Raspberry Pi (images will be displayed on multiple monitors, and Pis are cheaper than multiple laptops/computers).

Even with the ease of using shadows, reflections and collisions in Unity, it gives me less ability to do deeper things like having vines link the objects or objects adhere to surfaces.

I was thinking that using the image coordinates, normalized, to find the vertices of the created object/plane would allow for proper mapping, at least for rectangular items and surfaces facing the image origin (projector?). My earlier attempts in openFrameworks saved the coordinates of a plane’s vertices in an XML file, but they never had a texture applied to them. In more complicated images the depth of the scene could become quite compressed, and that created its own complications. For example, a distant narrow pillar would look the same as a close wide pillar, since neither was tied to the image.

Let me know if I am not using the proper terminology, which may have affected my search results.


Hi nosarious,

this sounds very interesting. But I have to admit I can’t really grasp the setting yet. As far as I can see you have some objects that have correct dimensions in 3D space (so are “correctly shaped”) as well as some objects that are distorted and only look correct from one point of the room (the camera point). Do I understand correctly? If yes: why? Do you want to create an optical illusion and then move the camera to show how it’s done?

Sorry to ask such basic questions, but if I can’t understand it I can’t think about how it could be done…

greetings and have a good day!


Thanks for the response, @dasoe .

This is basically an art project, where I am trying to determine what is intriguing about the photographs of abandoned or empty spaces. I am trying to recreate these spaces as 3d models, based on the photographs, and then have elements which ‘populate’ those spaces as amorphous blobs of leftover imagination.

These are some instagram images of some tests:

these two were made with openFrameworks, where an image lies in the background and planes are created that are transparent in 2D (allowing the background image to show through) but opaque in 3D (hiding the elements present in 3D space). The planes are not connected to the image, however, so they are not at the right depth. Hence, they don’t necessarily meet up with the floor, and the depth of the scene can be compressed (instead of occupying an entire 50-foot space, they sit within 10 feet, but sized to fit based on the camera viewpoint).

For context, this is a video of a scene created with unity, as shared on instagram:

You can see in the Unity video that the shadows fall on the wall texture and a bit on the front cabinet. I also added a layer of reflectivity on the floor. The problem is that Unity does not offer the same control over the look I am after, such as the piping on the edges of the shapes and (eventually) vines reaching out and adhering to surfaces.

I am exploring other options for building the object surfaces that make up the scenes, such as exporting from Unity (tricky) and creating textured mesh scenarios in Cinema 4D (which does not have a projection system as handy as Unity’s).

As I said, this will eventually be several monitors showing scenes, all running on individual Raspberry Pis (both on the wall and within hand-held objects), so I would like to keep it as openFrameworks-like as possible. The monitor scenes will have some limited ‘shaky cam’ like The Blair Witch Project, and others will have scan lines and artifacting like glitchy security cameras.


OK, nice. Sorry to make you all this work to explain! But I am getting closer.
So if I get you right: If you …

  • import a 3D sketch of the room in OF and draw it there.
  • draw 3D objects in OF (could be also imported) respecting depth (e.g. hiding behind a pillar)
  • texture these objects
  • and move them in 3D space
  • all this on a pi

… then you are there?

In this case you could give ofxAssimpModelLoader a try. It can import objects that you textured beforehand in a 3D application. There is an example in the examples/addons folder. Could this be an approach?
Not sure how quick the 3D-moving thing will be on a Pi. It strongly depends on the number and complexity of your objects…

have a great day!


that… kind of helps, but I was hoping to do this entirely within openFrameworks. I have a separate program built for creating the planes for the scenes; I just need a way to ‘project’ the image onto objects in 3D space. I shall be playing with some of the projection mapping stuff this weekend.


Hi nosarious,

don’t give up yet. This is still just about grasping what exactly you want to do with your project. If my description above gets you the result you want, all is good:
It is possible to create and texture the 3D objects in OF, too.
Check ofFbo.bind().
Also, here is a (working?) example:
With this you can draw a textured mesh. And that should be all you need.

If what you want is really about projecting an image onto an object, though, this is still not what you need. The projection lets the image “stand still” while the object moves; it only brings perspective distortions of the image according to the angles of the object.

Which one will it be?

have a great day!


I think what you explain in your original post is projective texturing. I would recommend using OpenGL 3+, since there are a couple of additions in GLSL that can make this easier. Here’s a link from someone using GLSL + GLM which should be easy to port to OF:


I’m not sure, but maybe Virtual Mapper could be useful or a good starting point.