Increase ofImage/ofTexture drawing speed with a lower bit depth?

Hi iOS-developers,

drawing some RGBA ofImages (8_8_8_8, i.e. 8 bits per channel) makes my iOS app slow.

Would the pixel types GL_UNSIGNED_SHORT_4_4_4_4 and/or GL_UNSIGNED_SHORT_5_5_5_1 improve drawing speed?

How would I convert an ofImage (RGBA, OF_IMAGE_COLOR_ALPHA, 8_8_8_8?) to a reduced ofImage/ofTexture (RGBA, GL_UNSIGNED_SHORT_5_5_5_1)?

I hope this makes sense.

Thanks

Mike

Hi…,

I’ve tried to allocate an ofTexture with GL_UNSIGNED_SHORT_4_4_4_4, but this does not seem to work:

ofTexture texColorAlpha;
unsigned char * colorAlphaPixels;

texColorAlpha.allocate(w, h, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, GL_UNSIGNED_SHORT);
colorAlphaPixels = new unsigned char[w*h*2];
texColorAlpha.loadData(colorAlphaPixels, w, h, GL_RGBA);

Any ideas?

Thanks

Mike

Hi…,

I am still stuck with iOS textures. My framerate is too low with ofImages, so I think I need to draw ofTextures with a lower bit depth (GL_UNSIGNED_SHORT_4_4_4_4 or GL_UNSIGNED_SHORT_5_5_5_1).

Does anyone have an idea or some code, or am I totally wrong about this?

Thanks

Mike

Hi @tgfrerer,

sorry to write to you directly, but I’ve read that you maintain the ofTexture class.

This was my latest attempt to set a pixel type (GL_UNSIGNED_SHORT_4_4_4_4), but no texture is drawn.

ofTexture texColorAlpha;
unsigned short * colorAlphaPixels;

texColorAlpha.allocate(w, h, GL_RGBA, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4);
colorAlphaPixels = new unsigned short[w*h];
... // fill colorAlphaPixels with some pixel data
texColorAlpha.loadData(colorAlphaPixels, w, h, GL_RGBA);

Can you please help? Thanks,

Mike

Hi all,

sorry again for bugging you over the last months with this thread; I am a beginner. :wink:

This is solved for me now; I had made some mistakes, sorry. The main mistake was writing my own sprite class. In its drawing function I did a lot of OpenGL state changes per sprite: binding the texture, ofPushMatrix, ofTranslate, ofRotate, ofScale, ofPopMatrix. When drawing many sprites, this adds up to a very long list of state changes, which slows down both the CPU and the GPU massively.

In the end I found the GLTextureAtlas example, where the texture atlas is bound once and all sprites are drawn in a single glDrawElements call:

myOFtexture.bind();
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

// interleaved array: 3 position floats + 2 texcoord floats per vertex (stride = 5 floats)
glVertexPointer(3, GL_FLOAT, 5*sizeof(GLfloat), pos_tex_all);
glTexCoordPointer(2, GL_FLOAT, 5*sizeof(GLfloat), pos_tex_all+3);

// draw all sprites using ONE single call
glDrawElements(GL_TRIANGLE_STRIP, NUM_SPRITES*6, GL_UNSIGNED_SHORT, indices_all);

glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, 0);
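
The indices_all part is not shown above. One common way to get the NUM_SPRITES*6 indices for a single GL_TRIANGLE_STRIP is to repeat the first and last vertex of every quad, which inserts degenerate (zero-area) triangles between the sprites. This is only a sketch of that idea; the actual layout in the GLTextureAtlas example may differ:

// Hypothetical sketch: each sprite is a quad of 4 vertices in pos_tex_all;
// repeating the first and last vertex of every quad chains the quads into
// one triangle strip via degenerate triangles -> 6 indices per sprite.
GLushort indices_all[NUM_SPRITES * 6];
for (int i = 0; i < NUM_SPRITES; i++) {
    GLushort first = i * 4;              // first vertex of sprite i
    indices_all[i*6 + 0] = first;        // repeated -> degenerate link to previous sprite
    indices_all[i*6 + 1] = first;
    indices_all[i*6 + 2] = first + 1;
    indices_all[i*6 + 3] = first + 2;
    indices_all[i*6 + 4] = first + 3;
    indices_all[i*6 + 5] = first + 3;    // repeated -> degenerate link to next sprite
}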

I thought my slowdown problem was caused by the big 32-bit textures, so I invested some time in creating ofTextures with 16-bit depth (GL_UNSIGNED_SHORT_4_4_4_4 or GL_UNSIGNED_SHORT_5_5_5_1). In OF 0.8.x I was not able to create these 16-bit textures with texture.allocate(…) or texture.loadData(…), so I ended up creating my own texture with OpenGL and linking it to an ofTexture with setUseExternalTextureID():

GLuint myTexture;
glGenTextures(1, &myTexture);
glBindTexture(GL_TEXTURE_2D, myTexture);
glTexImage2D(
  GL_TEXTURE_2D,
  0,
  GL_RGBA,
  textureWidth, textureHeight,
  0,
  GL_RGBA,
  GL_UNSIGNED_SHORT_5_5_5_1,   // 16-bit packed pixels
  colorAlphaPixelsPtr
);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glBindTexture(GL_TEXTURE_2D, 0);
myOFtexture.setUseExternalTextureID(myTexture);
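
One piece that is not shown above: colorAlphaPixelsPtr already has to hold the pixels packed as 16-bit values. This is a minimal sketch of how the 8-bit-per-channel RGBA data (e.g. from ofImage::getPixels()) could be packed into GL_UNSIGNED_SHORT_5_5_5_1 (the function and variable names are just placeholders):

// Hypothetical helper: pack 8-bit-per-channel RGBA into GL_UNSIGNED_SHORT_5_5_5_1.
// Layout per 16-bit pixel: RRRRR GGGGG BBBBB A (red in the top 5 bits, alpha in bit 0).
void packRGBA8888to5551(const unsigned char * srcPixels,   // w*h*4 bytes (RGBA)
                        unsigned short * dstPixels,        // w*h shorts
                        int w, int h) {
    for (int i = 0; i < w * h; i++) {
        unsigned char r = srcPixels[i*4 + 0];
        unsigned char g = srcPixels[i*4 + 1];
        unsigned char b = srcPixels[i*4 + 2];
        unsigned char a = srcPixels[i*4 + 3];
        dstPixels[i] = ((r >> 3) << 11) |   // keep top 5 bits of red
                       ((g >> 3) <<  6) |   // keep top 5 bits of green
                       ((b >> 3) <<  1) |   // keep top 5 bits of blue
                       ((a >> 7));          // 1-bit alpha
    }
}

The dstPixels buffer would then be what gets passed to glTexImage2D as colorAlphaPixelsPtr.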

Comparing 16-bit (GL_UNSIGNED_SHORT_5_5_5_1) with 32-bit textures was disappointing: CPU time is the same, and GPU time improves by only about 5%. :frowning:

Thanks for your time, and thank you for the great OF

Mike

P.S.: Perhaps knowing about VBOs would have pointed me in the right direction earlier. :wink:
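
In case it helps someone: this is roughly how I imagine the same one-draw-call idea would look with an ofVboMesh instead of raw client-side arrays. It is an untested sketch; spritePos, spriteTex and NUM_SPRITES are placeholders, and the texture coordinates are assumed to be normalized (0..1) atlas coordinates:

// Hypothetical sketch: build one ofVboMesh holding all sprite quads from the atlas,
// then draw it with a single call while the atlas texture is bound.
ofVboMesh spriteMesh;
spriteMesh.setMode(OF_PRIMITIVE_TRIANGLES);

for (int i = 0; i < NUM_SPRITES; i++) {
    ofRectangle pos = spritePos[i];   // screen position of sprite i (assumed)
    ofRectangle tex = spriteTex[i];   // normalized atlas coords of sprite i (assumed)

    int first = spriteMesh.getNumVertices();
    spriteMesh.addVertex(ofVec3f(pos.x,             pos.y,              0));
    spriteMesh.addVertex(ofVec3f(pos.x + pos.width, pos.y,              0));
    spriteMesh.addVertex(ofVec3f(pos.x + pos.width, pos.y + pos.height, 0));
    spriteMesh.addVertex(ofVec3f(pos.x,             pos.y + pos.height, 0));

    spriteMesh.addTexCoord(ofVec2f(tex.x,             tex.y));
    spriteMesh.addTexCoord(ofVec2f(tex.x + tex.width, tex.y));
    spriteMesh.addTexCoord(ofVec2f(tex.x + tex.width, tex.y + tex.height));
    spriteMesh.addTexCoord(ofVec2f(tex.x,             tex.y + tex.height));

    // two triangles per quad
    spriteMesh.addIndex(first + 0); spriteMesh.addIndex(first + 1); spriteMesh.addIndex(first + 2);
    spriteMesh.addIndex(first + 0); spriteMesh.addIndex(first + 2); spriteMesh.addIndex(first + 3);
}

// in draw():
myOFtexture.bind();
spriteMesh.draw();
myOFtexture.unbind();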