ofxVec3f and OpenGL vertex arrays

I’m trying to render an array of ofxVec3f’s using OpenGL vertex arrays, but it is not working properly.

I assumed that an array of ofxVec3f’s would keep all the coordinates stored in consecutive memory spots, but this appears not to be the case. First, I got some strange, unexpectedly placed vertices from something like this:

  
  
	ofxVec3f *verts = new ofxVec3f[4 * 6];   // room for 24 vertices; only the first 4 are set for this test
	verts[0].set(0,0,0);
	verts[1].set(1,0,0);
	verts[2].set(1,1,0);
	verts[3].set(0,1,0);
	glEnableClientState(GL_VERTEX_ARRAY);
	glVertexPointer(3, GL_FLOAT, 0, (GLfloat*)verts);
	glDrawArrays(GL_POINTS, 0, 4);
	glDisableClientState(GL_VERTEX_ARRAY);
  

So to get a better idea of what was getting passed into OpenGL through glVertexPointer, I did this:

  
	verts[0].set(1,2,3);  
	verts[1].set(4,5,6);  
	verts[2].set(7,8,9);  
	verts[3].set(10,11,12);  
	  
	// reinterpret the ofxVec3f array as raw floats to inspect the memory layout
	float *floats = (float*)verts;
	  
	for (int i=0; i<16; i++)  
	{  
		float num = floats[i];  
		printf("%i: %f\n", i, num);  
	}  

That code outputs this:

	0: 0.000000
	1: 1.000000
	2: 2.000000
	3: 3.000000
	4: 0.000000
	5: 4.000000
	6: 5.000000
	7: 6.000000
	8: 0.000000
	9: 7.000000
	10: 8.000000
	11: 9.000000

So it seems there is an extra float stored in front of each ofxVec3f. I’m not a C++ expert, but I’m assuming there is a perfectly reasonable explanation for this. It didn’t seem like such a big hurdle either: I can set the vertex stride in glVertexPointer() to account for the extra float, and then provide an offset of one index in glDrawArrays to start at the right place. However, this did not work; I still got unexpected results on the screen. On top of that, it’s not an ideal setup anyway, since I’ll likely be passing texture coordinates, vertex colors, and normals in other arrays that may not need the same offset I’d be sending to glDrawArrays.

So does anyone know a proper way to use an array of ofxVec3f’s with glVertexPointer / glDrawArrays?

this might be helpful:

http://stackoverflow.com/questions/9377-…-bject-in-c

the compiler is entitled to insert padding space between data members to ensure that each data member meets the alignment requirements of the platform. Some platforms are very strict about alignment, while others (x86) are more forgiving, but will perform significantly better with proper alignment. So, even the compiler optimization setting can affect the object size.

you might want to take care in using ofxVec3f the way you’d like to – I suspect the stride will be different in different scenarios, which could be problematic. much better would be throwing the data into a float[][] so that it’s guaranteed to be laid out in memory the way you expect.
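
something like this, for example (untested, just to sketch the idea):

	// 4 vertices x 3 coords in a plain float[][] -- guaranteed to be contiguous
	float quad[4][3] = {
		{0, 0, 0},
		{1, 0, 0},
		{1, 1, 0},
		{0, 1, 0}
	};

	glEnableClientState(GL_VERTEX_ARRAY);
	glVertexPointer(3, GL_FLOAT, 0, &quad[0][0]);   // stride 0 = tightly packed
	glDrawArrays(GL_POINTS, 0, 4);
	glDisableClientState(GL_VERTEX_ARRAY);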

take care!
zach

Oooh yeah sounds like that could easily slip into a cross-platform disaster! :shock:

Yeah… I suppose I’ll just dump my verts into the traditional float array then.

[quote author=“zach”]this might be helpful:

you might want to take care in using ofxVec3f the way you’d like to – I suspect the stride will be different in different scenarios, which could be problematic. much better would be throwing the data into a float[][] so that it’s guaranteed to be laid out in memory the way you expect.

take care!
zach[/quote]

The underlying problem is that ofxVec3f has a vtable (it has a virtual destructor), which adds another member to the class: a pointer to the object’s vtable (where the function pointers to its virtual methods are stored).
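
You can see the effect with a quick stand-alone test (the struct names here are just for illustration; the exact sizes depend on the compiler and pointer width):

	#include <cstdio>

	struct PlainVec3   { float x, y, z; };                            // plain data, no vtable
	struct VirtualVec3 { float x, y, z; virtual ~VirtualVec3() {} };  // gets a hidden vtable pointer

	int main() {
		// PlainVec3 is just 3 floats; VirtualVec3 grows by (at least) one pointer,
		// so an array of it is no longer a tightly packed run of floats.
		printf("sizeof(PlainVec3)   = %u\n", (unsigned)sizeof(PlainVec3));
		printf("sizeof(VirtualVec3) = %u\n", (unsigned)sizeof(VirtualVec3));
		return 0;
	}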

Without the virtual keyword in ofPoint you can use ofxVec3f directly with glVertexPointer, glDrawArrays, etc. There may be issues on some specific platforms, as Zach pointed out, but with the proper compiler flags this is not a problem. OpenSceneGraph does this all the time and it works on a lot of platforms (Linux, OS X, Win32).
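
Just to sketch the idea with a plain struct (MyVec3 is only a stand-in for a non-virtual ofPoint, not the actual OF class):

	#include <cassert>
	#include "ofMain.h"   // pulls in the OpenGL headers in an openFrameworks project

	struct MyVec3 { float x, y, z; };   // no virtual functions -> no vtable pointer

	void drawQuadPoints() {
		// with a plain struct the floats are tightly packed, so the array
		// can be handed to glVertexPointer as-is
		assert(sizeof(MyVec3) == 3 * sizeof(float));

		MyVec3 verts[4] = { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0} };

		glEnableClientState(GL_VERTEX_ARRAY);
		glVertexPointer(3, GL_FLOAT, sizeof(MyVec3), verts);
		glDrawArrays(GL_POINTS, 0, 4);
		glDisableClientState(GL_VERTEX_ARRAY);
	}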

Cheers,
Stephan