Texture mapping ofxAssimpModelLoader model


With texture mapping simple rectangles out of the way, I’m moving onto bigger and better things… like texture mapping rectangles imported from CAD software! I’d like to be able to draw sets of rectangles (and polygons) in CAD software to represent physical objects, and then simulate how they’d look by texture mapping these virtual screens in software.

If I was just going to draw a rectangle and then texture map it, I’d do:

mesh.addVertex( ofPoint(0,0) );
mesh.addTexCoord( tex.getCoordFromPercent(0,0) );

mesh.addVertex( ofPoint(100,0) );
mesh.addTexCoord( tex.getCoordFromPercent(1,0) );

mesh.addVertex( ofPoint(0,100) );
mesh.addTexCoord( tex.getCoordFromPercent(0,1) );

mesh.addVertex( ofPoint(100,100) );
mesh.addTexCoord( tex.getCoordFromPercent(1,1) );
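and then at draw time I bind the texture so the mesh's texture coordinates get used (the four vertices above are in triangle-strip order, so the mesh mode needs to be set accordingly):

```cpp
mesh.setMode(OF_PRIMITIVE_TRIANGLE_STRIP);  // 4 verts in strip order -> 2 triangles
tex.bind();                                 // texcoords now sample from this texture
mesh.draw();
tex.unbind();
```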


And that works fine! Hooray!

So I went and made a model in my favorite CAD software, and exported a .stl of the same rectangle. So I try:

ofxAssimpModelLoader model;
model.loadModel("rectangle.stl");

but no dice. model.drawVertices() and model.drawWireframe() don’t work either.

I presume the problem is that the model doesn’t have any texture coordinates – so it’d be nice if I could get a list of the ofxAssimpModelLoader’s vertices, iterate over them, and add a texture coordinate for each with mesh.addTexCoord() as in my previous example.

But I can’t seem to find a way to do that. Any suggestions on how to do this? Does ofxAssimpModelLoader contain any methods that would do this for me?


Hi morphogencc,

I am pretty new to these things too, but I can share what I have figured out.

First of all, since model.drawVertices() and model.drawWireframe() do not show anything, there might be a problem with the model rather than the texture. Or, if the model is correct, it might be that the background colour and the colour you use to draw the wireframe are the same. Or the model might be drawn outside the screen (I had some problems defining my world, and my models were actually off screen). In any case, I would forget about the texture at first and just confirm that the model is correctly imported and drawn.

ofxAssimpModelLoader model;
model.loadModel("rectangle.stl");
ofSetColor(255, 0, 0);   // red wireframe, so it stands out from the background
model.drawWireframe();

If everything is fine now, you may add the texture (I guess tex is defined somewhere else, as it is working in the first example that you gave, right?). In any case, you can access the meshes loaded in the model using

model.hasMeshes();      // returns bool
model.getMeshCount();   // returns int

and then perhaps select a mesh from there and handle it as you would a normal mesh:
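For example, something along these lines (a hypothetical sketch: depending on your openFrameworks version getVertex() returns an ofVec3f or a glm::vec3, and the 100.0 divisor assumes a 100x100 rectangle like in your first example):

```cpp
if(model.hasMeshes()) {
    ofMesh mesh = model.getMesh(0);    // a copy of the first mesh
    mesh.clearTexCoords();             // drop whatever coords the import produced
    for(std::size_t i = 0; i < mesh.getNumVertices(); i++) {
        auto v = mesh.getVertex(i);
        // hypothetical mapping: derive (u, v) from the vertex position
        mesh.addTexCoord(tex.getCoordFromPercent(v.x / 100.0, v.y / 100.0));
    }
    tex.bind();
    mesh.draw();
    tex.unbind();
}
```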


Unfortunately I cannot help you with the .stl file, as I used Rhinoceros to export the 3D model to an .obj file, so I will explain how things are done in the .obj format.

This file can be opened with a text editor so that you can see all the definitions.
First there is the definition of the vertices (v) in (x, y, z) coordinates – the positions can be any values, they are not limited to a 0 to 1 range [ex.: v 0.123 0.234 0.345 1.0; the optional fourth value is a weight that defaults to 1.0].

Then there is the definition of the texture coordinates (vt), with a range from 0 to 1 [ex.: vt 0.500 1]. These values match the (u, v) of the texture that is going to be bound later. The .obj convention puts (0, 0) at the bottom left and (1, 1) at the top right, though many importers flip the v axis so that (0, 0) ends up at the top left.

Last of all there is the definition of the vertex normals (vn), again in (x, y, z), with component values from -1 to 1 since they are unit vectors [ex.: vn 0.707 0.000 0.707]. As I understand it, vertex normals are the vectors perpendicular to the face, and they practically define the “direction” in which the face will be seen. Say you have a sphere in the centre of the screen: its faces may point outwards (from the surface towards the camera) or inwards (from the surface towards the centre of the sphere).
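As a side note, the “perpendicular to the face” idea can be made concrete: a face normal is the normalised cross product of two edge vectors. This is a plain standalone C++ sketch (not oF code), with hypothetical names:

```cpp
#include <array>
#include <cmath>

// Face normal of a triangle (a, b, c), counter-clockwise order assumed.
// Returns a unit vector, i.e. exactly what a "vn" line stores.
std::array<double, 3> faceNormal(const std::array<double, 3>& a,
                                 const std::array<double, 3>& b,
                                 const std::array<double, 3>& c) {
    std::array<double, 3> e1 = { b[0]-a[0], b[1]-a[1], b[2]-a[2] };  // edge a->b
    std::array<double, 3> e2 = { c[0]-a[0], c[1]-a[1], c[2]-a[2] };  // edge a->c
    std::array<double, 3> n  = { e1[1]*e2[2] - e1[2]*e2[1],          // cross product
                                 e1[2]*e2[0] - e1[0]*e2[2],
                                 e1[0]*e2[1] - e1[1]*e2[0] };
    double len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    for (double& x : n) x /= len;                                    // normalise
    return n;
}
```

A triangle lying flat in the xy-plane, for example, gets the normal (0, 0, 1): it faces the viewer along +z.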

So… since everything is defined, here comes the matching so that the faces/triangles are completed (note that .obj indices are 1-based):
[ex. 1: f 1 2 3]
This face is formed from the first, second and third vertices.
[ex. 2: f 3/1 4/2 5/3]
This face is formed from the third, fourth and fifth vertices matched with the first, second and third texture coordinates. This means that whatever texture you bind later will get mapped through these specific texture coordinates.
[ex. 3: f 6/4/1 3/5/3 7/6/5]
This face is formed like the previous one (sixth, third and seventh vertices with the fourth, fifth and sixth texture coordinates), with the addition of the normals (first, third and fifth).
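The index matching above can be sketched as a tiny standalone parser (plain C++, hypothetical helper names; note that the v//vn form without a texture index would need extra handling):

```cpp
#include <algorithm>
#include <sstream>
#include <string>
#include <vector>

// One corner of a face: vertex / texcoord / normal indices (all 1-based in .obj).
// A field that is absent on the "f" line stays 0 here.
struct FaceCorner { int v = 0, vt = 0, vn = 0; };

std::vector<FaceCorner> parseFaceLine(const std::string& line) {
    std::istringstream in(line);
    std::string tag, token;
    in >> tag;                         // consume the leading "f"
    std::vector<FaceCorner> corners;
    while (in >> token) {              // tokens like "6/4/1", "3/1" or "5"
        std::replace(token.begin(), token.end(), '/', ' ');
        std::istringstream fields(token);
        FaceCorner c;
        fields >> c.v >> c.vt >> c.vn; // missing fields simply stay 0
        corners.push_back(c);
    }
    return corners;
}
```

So parseFaceLine("f 6/4/1 3/5/3 7/6/5") gives three corners, the first one being vertex 6, texture coordinate 4, normal 1.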

I think of all this as having a table: first you define the points from which it is formed (the vertices). Then you want to put a tablecloth on it, so you define where the table matches the tablecloth (the texture coordinates). The last thing to define is which way you place the tablecloth, front or reverse side up (the normals).

I hope that something could be helpful to you!

Interesting! So it seems like .obj files have uv coordinates as part of the data structure. I’m using Rhinoceros 3D too, so I just exported my rectangle as a .obj.

Using your suggestion as a basis, I tried:

ofMesh mesh;
if(model.hasMeshes()) {
  std::cout << "number of meshes: " << model.getMeshCount() << std::endl;
  mesh = model.getMesh(0);
}

My .obj has only 1 mesh in it, according to my print out – this seems to make sense given that my 3d file is just a rectangle with 4 vertices.

However, this doesn’t seem to draw the texture. The rectangle appears much smaller than when I draw it with model.drawWireframe(), and the rectangle is just black. Could you advise me as to how you’d approach this if you were doing it with an obj file?

thanks again!


since you can see the model is loaded correctly with model.drawWireframe(), perhaps you could try
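disabling the loader’s automatic scale normalisation, so the mesh keeps its original CAD dimensions, and drawing in white so a bound texture is not tinted (a sketch – setScaleNormalization() is part of ofxAssimpModelLoader; the filename is hypothetical):

```cpp
ofxAssimpModelLoader model;
model.setScaleNormalization(false);   // keep the model at its original scale
model.loadModel("rectangle.obj");     // hypothetical filename
ofSetColor(255);                      // white, so the texture is not tinted
model.drawFaces();
```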


I have noticed that ofxAssimpModelLoader normalises the scale of the model automatically, so that could be why the mesh appears really small [however, I am not so sure about that].

Which IDE are you using? At some point I could capture a frame of the application running on an iOS device and see the objects that were created on the GPU from within Xcode. I guess other IDEs offer something similar (if you are not on Xcode). Using this feature, I could see that I was trying to bind a 250x150 image to an fbo and then to a model, but on the GPU the texture was 256x256, leaving two strips unused.