VBO Billboard Particle System

Hey guys, here's a test I'm working on for a piece. I'm trying to make a really fast particle system with a VBO that uses a texture.

The idea is to use one texture that is broken up into cells. Each cell holds an image that you want to display for a particle.

Here is an example of the texture:

The particle just shifts its texture coordinates to the cell it wants.
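The cell lookup is just arithmetic on the texture coordinates. A minimal sketch of that idea (the helper name, struct, and left-to-right/top-to-bottom numbering are my own illustration, not from the posted source):

```cpp
#include <cassert>

// Hypothetical helper: given a cell index and the atlas grid size,
// compute the normalized texture-coordinate origin and size of that cell.
// Cells are assumed numbered left to right, top to bottom.
struct CellUV { float u, v, w, h; };

CellUV cellTexCoords(int cellIndex, int gridCols, int gridRows) {
    CellUV c;
    c.w = 1.0f / gridCols;               // width of one cell in texture space
    c.h = 1.0f / gridRows;               // height of one cell
    c.u = (cellIndex % gridCols) * c.w;  // column offset
    c.v = (cellIndex / gridCols) * c.h;  // row offset
    return c;
}
```

A particle showing cell 5 of a 4x4 atlas would then map its quad from (u, v) to (u + w, v + h).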

Everything is working so far, but I'm having some trouble with the billboard aspect of the system. Each particle is built from a quad:

  
[0]----[1]  
 |      |  
 |      |  
[2]----[3]  

I need to figure out how to rotate the particle so that it is always perpendicular to the camera. I have found some tutorials online, but they are not going to work with a VBO.

http://www.lighthouse3d.com/opengl/bill-…-hp?billCyl

What I think I need to do is rotate the quad points, but I'm lost. Can anyone help?

I posted the code on my google SVN. grab it here
http://code.google.com/p/vanderlin/sour-…-0Particles

or download the zip here
http://toddvanderlin.com/OF/VBO%20Billb-…-ticles.zip

Here is an idea of what the camera is doing
[attachment: camera-01.png]

Hey, try using textured point sprites: each particle is not a quad, but is rendered with GL_POINTS, and OpenGL will render them as billboards. Then, using a vertex shader, you can try randomizing the texture coordinates. I haven't tried that myself, but it should work.

Billboarding is usually something you want to do in a shader. The fastest way would be to render only one point for each quad, and then expand that point to a quad in the geometry shader. An easier way, if you don't want to get into geometry shaders, is to send 4 vertices to the GPU and construct the quad in the vertex shader. (I posted a shader for this method on the forums a while back, but couldn't find it.)

The easiest way would be to use point sprites and scale the point size in the vertex shader. That would mean, though, that you also have to calculate the distance attenuation from the camera yourself.

If you want to do it on the CPU check out this tutorial:
http://www.lighthouse3d.com/opengl/billboarding/

The shader versions would basically do the same thing, only faster.


Ohh wow, so let me get this right…

Instead of my array of points x 4 for a quad, I'm going to just use a single point. (A point is always facing the camera, ya!)

I tried this, but you cannot have variously sized points; you can only call glPointSize(…) once per draw. Can I change the point size in the shader?

I guess I might be moving to a shader.

That's where a vertex shader is handy; check out this tutorial:
http://lumina.sourceforge.net/Tutorials-…-rites.html

It explains how to change the point size in the shader and also how to correct the distance attenuation to the camera.
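The attenuation itself boils down to scaling the base size by the inverse of the eye-space distance. A minimal CPU-side sketch of the same math (the function name and the 500.0 reference distance are my own tuning choices, not from the tutorial):

```cpp
#include <cassert>
#include <cmath>

// Sketch: point size attenuated by distance from the camera.
// A particle at refDist keeps its base size; nearer particles
// grow, farther ones shrink.
float attenuatedPointSize(float baseSize, float ex, float ey, float ez,
                          float refDist = 500.0f) {
    float dist = std::sqrt(ex * ex + ey * ey + ez * ez); // eye-space distance
    return baseSize * (refDist / dist);
}
```

In a vertex shader the same computation would write its result to gl_PointSize.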

I used the same approach for the particles in this vid:
http://vimeo.com/3324904

Using the geometry shader probably won’t be faster and will likely be slower. The geometry shader uses GPU cycles that would otherwise go to vertex and fragment processing. Generating geometry on the GPU is really expensive and almost never worth the extra expense.

Typically the bottleneck in billboarding systems is fill rate. Imagine you have thousands of particles. If your camera is in the middle of a large cloud of particles such that many of them nearly fill the screen, you will be fill rate bound before anything else. You can easily use tens of thousands of quad-based particles via dynamic VBOs on each frame with little worry for bandwidth between CPU and GPU or computation in the vertex or fragment shader.

In my experience, the best compromise between efficiency and flexibility is to use VBOs with quads where each point is repeated 4 times and some other piece of data (texture coordinates for example or a generic vertex attribute) provides an offset to make the quad in the vertex shader. You can also do the screen-aligned billboarding here by using the transpose of the modelview matrix. Here’s some example GLSL code:

  
attribute vec3 offset;  
uniform float scale;  
  
varying vec2 texcoord0;  
  
void main()  
{  
	//screen-aligned axes  
	vec3 axis1 = vec3(	gl_ModelViewMatrix[0][0],   
						gl_ModelViewMatrix[1][0],  
						gl_ModelViewMatrix[2][0]);  
						  
	vec3 axis2 = vec3(	gl_ModelViewMatrix[0][1],   
						gl_ModelViewMatrix[1][1],  
						gl_ModelViewMatrix[2][1]);  
  
  
	//offset from center point  
	vec3 corner = (offset.x*axis1 + offset.y*axis2)*scale + gl_Vertex.xyz;  
	  
	// position in clip space  
	gl_Position = gl_ModelViewProjectionMatrix * vec4(corner, 1.);  
  
	texcoord0 = vec2(gl_TextureMatrix[0] * gl_MultiTexCoord0);  
	gl_FrontColor = gl_Color;  
}  
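On the CPU side, that shader implies every particle center is written four times, with a matching corner offset in the generic attribute array. A sketch of that layout (buildQuadBuffers and the half-unit corner values are my own illustration, not from the posted source):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: build the arrays the vertex shader above expects.
// Every particle contributes 4 vertices, all at the particle's center;
// the per-vertex 'offset' attribute tells the shader which corner to
// expand to. Corners are listed in GL_QUADS perimeter order.
struct ParticleBuffers {
    std::vector<float> positions; // x,y,z per vertex (center, repeated 4x)
    std::vector<float> offsets;   // x,y,z per vertex (corner offsets)
};

ParticleBuffers buildQuadBuffers(const std::vector<float>& centers) {
    static const float corner[4][2] = {
        {-0.5f,  0.5f}, {0.5f,  0.5f},  // top-left, top-right
        { 0.5f, -0.5f}, {-0.5f, -0.5f}  // bottom-right, bottom-left
    };
    ParticleBuffers b;
    for (std::size_t p = 0; p + 2 < centers.size(); p += 3) {
        for (int c = 0; c < 4; ++c) {
            b.positions.push_back(centers[p]);
            b.positions.push_back(centers[p + 1]);
            b.positions.push_back(centers[p + 2]);
            b.offsets.push_back(corner[c][0]);
            b.offsets.push_back(corner[c][1]);
            b.offsets.push_back(0.0f); // z unused by the shader
        }
    }
    return b;
}
```

These two arrays would then be uploaded to the VBO and bound as gl_Vertex and the generic "offset" attribute, respectively.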

You don't need geometry shaders for this; just use point sprites (glEnable(GL_POINT_SPRITE); glDrawArrays(GL_POINTS, 0, numParticles);).
You're reducing the number of vertices processed by 4x, and you don't need to do any transformations to make them camera-facing, since point sprites are rendered flat anyway.

http://memo.tv/vertex-arrays-vbos-and-p-…-frameworks

You need the vertex shader to give each point a different size. From your app, call
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE); once; this overrides glPointSize and tells OpenGL that each point sprite will have a different size, as set by the vertex shader. Then in your vertex shader write to gl_PointSize, and that sets the size of that particular sprite. Hope that helps!

That's exactly what I said. The biggest drawback of that is that you have to compute the distance attenuation to the camera yourself, though.

Otherside's shader is pretty similar to what I use most of the time.

The geometry shader is probably not faster in practice (even though that's what it was designed for in theory; I'm pretty sure it will be gone pretty soon since tessellation shaders are coming, and the geometry shader is kind of like a hybrid between vertex and tessellation shader which can't do anything in practice :P), so I would stick with making a quad in the vertex shader in screen space.

OK, so this sounds great. I haven't gone down the road of shaders yet; I've been avoiding it. But since it sounds like the way to execute this, I have started.

Here is a simple example of a VBO and the shader running. I do not understand how to set the various point sizes that are stored per particle.

Is there a way to loop through all the data on the GPU? I'm not sure where to start…

http://toddvanderlin.com/OF/VBO%20Particle%20Shader.zip

The same way that you pass in vertex positions and colors with glVertexPointer and glColorPointer, you can also pass in generic properties with http://www.opengl.org/wiki/GlVertexAttribPointer

Then in your vertex shader you just set gl_PointSize to whatever you want it to be (gl_PointSize is a special variable; when you write to it in a vertex shader, it sets the size of that point).

  
  
// in your vertex shader  
attribute float myPointSize;  
void main() {  
	gl_PointSize = myPointSize;  
	gl_Position = ftransform();  
}  

// in your app
sizeLocation = glGetAttribLocation(myShader, "myPointSize");
glEnableVertexAttrib(sizeLocation);
glVertexAttribPointer(sizeLocation, 1, GL_FLOAT, GL_FALSE, 0, sizesArray); // stride 0 = tightly packed

P.S. I wrote the code above from memory, so there might be errors or something missing.

The code is fine as long as the GLSL version is less than 1.3; I don't think you have access to ftransform() from 1.3 and above (you need to pass in and multiply your matrices manually), either that or it's deprecated.

Also, there's the option of calculating gl_PointSize based on the distance from the camera in the vertex shader itself.

Hmm: glEnableVertexAttrib() does not exist for me?

glEnableVertexAttribArray(GLuint index)

might be what you’re after instead.

http://www.opengl.org/sdk/docs/man/xhtml/glEnableVertexAttribArray.xml

reading this now: http://www.lighthouse3d.com/opengl/glsl-…-lattribute

OK, so it is compiling, and glGetAttribLocationARB() is finally returning 1 and not -1, but nothing is happening. I am getting either a few little dots or a screen filled with yellow.

http://toddvanderlin.com/OF/VBO%20Parti-…-r%20V2.zip

This is the bit I added to ofxShader:

  
GLint   getAttributeLocation(const char * name) {  
	return glGetAttribLocationARB(shader, name);  
}  


Ahhh yes!

It's working! Here is the latest code:
http://toddvanderlin.com/OF/VBO%20Parti-…-r%20V3.zip

The next step is to fix the point size based on the distance from the camera. Not sure of the steps for that, but take a look at the code :slight_smile:

Thanks for sharing your source vanderlin, it's been my introduction to shaders :stuck_out_tongue:
I'm just having a problem applying a texture to the point sprites.
I keep getting a border of random garbage on both the left and bottom sides of the texture, as you can see in the picture here -> http://www.flickr.com/photos/ruimadeira/3984143897/sizes/o/
Yet if I draw the texture to the screen directly, it works. For the texture I'm using an ofImage with ARB textures disabled. Here's the code I'm using to draw the particles (I'm using vertex arrays instead of a VBO):

  
  
void testApp::draw() {  
	shader.setShaderActive(true); // Turn on the Shader  
	// Get the attribute and bind it  
	GLint pixel_loc = glGetAttribLocationARB(shader.shader, "partSize"); // must match the attribute name in the vertex shader  
	glVertexAttribPointerARB(pixel_loc, 1, GL_FLOAT, false, 0, pointSizes);  
	//printf("Pixel Location%i\n", pixel_loc);  
		  
	glDisable(GL_DEPTH_TEST);  
	smokeTexture.getTextureReference().bind();  
	glEnable(GL_POINT_SPRITE);  
	glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);  
	glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);		// Enable Vertex Points  
	// Enable the Vertex Array and PixelSize Attribute  
	glEnableClientState(GL_VERTEX_ARRAY);  
	glEnableVertexAttribArrayARB(pixel_loc);  
	glVertexPointer(3, GL_FLOAT, 0, pnts);  
	glEnable(GL_BLEND);  
	glBlendFunc(GL_SRC_ALPHA, GL_ONE);  
	glColor4f(1.0f, 1.0f, 1.0f, 1.0f);  
	glDrawArrays(GL_POINTS, 0, NUM_PARTICLES);				// Draw Just Points  
	smokeTexture.getTextureReference().unbind();  
	  
	// Clean up  
	glDisableClientState(GL_VERTEX_ARRAY);   
	glDisableVertexAttribArrayARB(pixel_loc);  
	shader.setShaderActive(false);  
	  
	// FPS Debug  
	ofSetColor(0xff0000);  
	ofDrawBitmapString(ofToString(ofGetFrameRate()), 20, 20);  
}  
  

here’s my shader code

vertex:

  
  
attribute float partSize;  
  
void main(void)  
{  
	gl_TexCoord[0] = gl_MultiTexCoord0;  
	vec4 eyeCoord = gl_ModelViewMatrix * gl_Vertex;  
	gl_Position = gl_ProjectionMatrix * eyeCoord;  
	float dist = sqrt(eyeCoord.x*eyeCoord.x + eyeCoord.y*eyeCoord.y + eyeCoord.z*eyeCoord.z);  
	float att = 500.0 / dist;  
	gl_PointSize = partSize * att;  
	gl_FrontColor = gl_Color;  
}  
  

and fragment

  
  
	  
uniform sampler2D tex;  
	  
void main (void) {  
  
	gl_FragColor = texture2D(tex, gl_TexCoord[0].st) * gl_Color;  
  
}  
  

any help would be greatly appreciated :slight_smile:

cheers

Rui

I think your image will have to be a power of two, or you'll have to adjust how the texture draws. The way we deal with non-power-of-two textures without ARB is that we upload to a sub-region, then set new t/u coordinates (instead of 0,0 to 1,1). I think you are seeing the texture being drawn the full way (0,0 to 1,1) instead of what we compute internally (and thus garbage).

For example, if you have a 320x240 image, we do:

  
w = ofNextPow2(320);  // 512  
h = ofNextPow2(240); // 256  
  
t =  320 / 512.0;  // 0.625  
u = 240 / 256.0;   // 0.9375  

so when you draw, we draw…

  
x,y,w,h   /  (0,0,0.625, 0.9375)  

but I think you are seeing (0,0,1,1), thus garbage…
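Putting the steps above together, a minimal sketch (nextPow2 and subRegionExtent are my stand-ins for what oF does internally with ofNextPow2, not the actual oF code):

```cpp
#include <cassert>

// Stand-in for ofNextPow2: smallest power of two >= n.
int nextPow2(int n) {
    int p = 1;
    while (p < n) p *= 2;
    return p;
}

// Normalized extent of the real image inside its power-of-two padded
// texture; pass these (instead of 1,1) as the max texture coordinates.
void subRegionExtent(int imgW, int imgH, float& t, float& u) {
    t = imgW / float(nextPow2(imgW)); // e.g. 320/512 = 0.625
    u = imgH / float(nextPow2(imgH)); // e.g. 240/256 = 0.9375
}
```

So the draw should sample from (0,0) up to (t, u) rather than all the way to (1,1).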

hope that helps !
zach