And then in the shader, to read an item from the vector:
int segX = int(texelFetch(tex_SegWeb, i).x); // float samplerBuffer fetch, hence the int() cast
Is there any way to optimise or improve on this? I am only sending ints, but this fetch returns floats, which then need to be converted back into ints - can I just send and read ints directly? And are there any other improvements I could make that would help with speed?
Sending the buffer as GL_STATIC_DRAW vs GL_DYNAMIC_DRAW has no clear effect on performance.
Allocating the buffer as GL_R16I doesn’t seem to work - the shader runs but the index doesn’t read correctly (although I know all the ints are under the 16-bit limit). So no savings to be had there? Or am I doing something wrong?
Allocating the buffer as GL_R32I as below does work, but when I call texelFetch in the shader I still need to wrap the result in int() or the shader doesn’t run. Which I don’t understand, as shouldn’t it be returning an int value from this format?
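If I’m reading the docs right, an integer internal format like GL_R32I also needs an integer sampler on the GLSL side, so here is a sketch of what I think the all-integer path should look like (buf and tex are placeholder names; tex_SegWeb is the sampler from my shader above) - untested, so treat it as a guess:

// C++ side: upload the ints into a buffer texture with an integer internal format
glBindBuffer(GL_TEXTURE_BUFFER, buf);
glBufferData(GL_TEXTURE_BUFFER, data_segWeb.size() * sizeof(GLint), data_segWeb.data(), GL_STATIC_DRAW);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R32I, buf);

// GLSL side: declare an isamplerBuffer, not a samplerBuffer, so texelFetch returns ivec4
uniform isamplerBuffer tex_SegWeb;
int segX = texelFetch(tex_SegWeb, i).x; // no int() conversion needed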
With all else the same, the shader runs but the output is full of errors and distortion. I thought maybe the integers were exceeding the 16-bit limit, but I’ve checked and the biggest is circa 1500, so that’s not it. Any suggestions as to what this might be caused by? I was hoping accessing 16-bit integers might be a bit faster…
Oh. It looks like GLSL doesn’t recognise the short format, which is what the GL_R16I setting provides? So maybe reading the texelFetch result into an int distorts the data or something?
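For what it’s worth, my understanding is that GL_R16I should still be readable - the 16-bit texels are widened to 32-bit ints on fetch - provided the sampler is the integer type and the CPU-side data is actually 16 bits wide. A sketch of what I mean, reusing the placeholder names from above (untested):

// narrow to 16-bit on the CPU side so the buffer layout matches GL_R16I
std::vector<GLshort> data16(data_segWeb.begin(), data_segWeb.end());
glBindBuffer(GL_TEXTURE_BUFFER, buf);
glBufferData(GL_TEXTURE_BUFFER, data16.size() * sizeof(GLshort), data16.data(), GL_STATIC_DRAW);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R16I, buf);
// the GLSL side is unchanged from the GL_R32I version: isamplerBuffer, texelFetch returns ivec4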
Ho hum. If anyone can advise on improvements I can make I’d be grateful; otherwise I guess I am not much further forward in terms of speed-ups.
Have you tried using non-texture-type buffers? Like packing them into a VBO and defining your own attribute layout - see the sketch below. I appreciate that might not play well with other parts, but you would at least have options around data types: https://www.khronos.org/opengl/wiki/Vertex_Specification#Component_type
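A rough sketch of the idea, assuming one int per vertex (the attribute location, vbo and data names are all made up):

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, data.size() * sizeof(GLshort), data.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(3);
glVertexAttribIPointer(3, 1, GL_SHORT, 0, (void*)0); // the I variant keeps integer values intact, no float conversion
// GLSL side: layout(location = 3) in int segX;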
Not sure if this will help, but a consideration: if your data_segWeb is small - less than 512 entries, for example - consider using an actual array (not a vector). Arrays can be sent directly to the shader as uniforms.
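Something like this (a sketch - segWeb, prog and the 512 cap are placeholders; the real ceiling is the GL_MAX_*_UNIFORM_COMPONENTS limit for your shader stage):

// GLSL side: uniform int segWeb[512];
GLint segWeb[512]; // plain C array, filled from your data
GLint loc = glGetUniformLocation(prog, "segWeb");
glUniform1iv(loc, 512, segWeb); // uploads the whole array in one call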