Compute Normal Map from Image

Hi,

I would like to revisit this fun little project:
http://vimeo.com/1600741

The problem is that the normals are not correct. To fix that, I would like to compute a normal map from the webcam image, which is my displacement map. I found a lot of software on the net that can transform images into normal maps, but nothing about the actual algorithm.
Does anybody know how to do this or have any useful links? I can’t use any extra software since I want to do it on the fly every frame.

Thanks

Well, if you have the data as triangles - or even quads - then it is quite straightforward to calculate the normals. ofxVec3f makes it quite easy.

make two vectors: one from A to B (on the triangle) and one from B to C
normalize the vectors using the normalize() function of ofxVec3f
then find the perpendicular using the ofxVec3f perpendicular() function.

ofxVec3f normal = vec1.perpendicular(vec2);

You then have the normal for the face of the triangle, which you can use for all the vertices of that face. This doesn’t give you the best lighting, but it works pretty well.
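
In code that bit could look something like this (rough sketch, untested - assumes the ofxVectorMath addon, where perpendicular() gives you the normalized cross product):

#include "ofxVectorMath.h"

// face normal for a triangle with vertices a, b, c (in winding order)
ofxVec3f faceNormal(const ofxVec3f& a, const ofxVec3f& b, const ofxVec3f& c) {
    ofxVec3f ab = b - a;          // vector from A to B
    ofxVec3f bc = c - b;          // vector from B to C
    ab.normalize();
    bc.normalize();
    return ab.perpendicular(bc);  // perpendicular to both = the face normal
}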

For even better results you then need to go through each vertex and make a normal that is the average of the normals of the faces it is touching. This can create much more realistic lighting - supposedly weighting each face normal by the area of the face also helps.

Then don’t forget to normalize! Or, if you don’t feel like doing it, use glEnable(GL_NORMALIZE);.
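
For the averaging, a sketch could look like this (again assuming ofxVectorMath; verts/tris/normals are made-up names. Handy trick: the unnormalized cross product’s length is twice the triangle’s area, so just summing raw cross products gives you the area weighting for free):

#include <vector>
#include "ofxVectorMath.h"

// per-vertex normals: sum the (area-weighted) face normals around each
// vertex, then normalize. tris holds 3 vertex indices per triangle.
std::vector<ofxVec3f> vertexNormals(const std::vector<ofxVec3f>& verts,
                                    const std::vector<int>& tris) {
    std::vector<ofxVec3f> normals(verts.size(), ofxVec3f(0, 0, 0));
    for (int i = 0; i < (int)tris.size(); i += 3) {
        int ia = tris[i], ib = tris[i + 1], ic = tris[i + 2];
        ofxVec3f ab = verts[ib] - verts[ia];
        ofxVec3f ac = verts[ic] - verts[ia];
        // unnormalized cross product - its length is 2x the face area
        ofxVec3f n(ab.y * ac.z - ab.z * ac.y,
                   ab.z * ac.x - ab.x * ac.z,
                   ab.x * ac.y - ab.y * ac.x);
        normals[ia] += n;
        normals[ib] += n;
        normals[ic] += n;
    }
    for (int i = 0; i < (int)normals.size(); i++) normals[i].normalize();
    return normals;
}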

Hope that helps!
theo

–edit

If you don’t have the data as triangles, try a triangulation function - like triangle++. I have also had good luck with this code from Paul Bourke: http://local.wasp.uwa.edu.au/~pbourke/p-…-iangulate/

Hey Theo, thank you. Well, I already know how to compute per-vertex normals and such, I just don’t really know how to correctly do that from an image. I also want to save the normal map as an image (for testing); later on I will just send it to my shader for texture lookup.

That is also the reason why the normals are not correct. The sphere also gets distorted in the vertex shader, so there is no way to get correct normals other than a normal map, if you know what I mean.

Something like that.

Hey Moka, normal maps simply store the components of the normal vector for each texel. Then when doing your lighting, instead of using the normals at the vertices and linearly interpolating across the faces, you use the normal from the normal map (with a shader that reads the normal map and uses those normals instead). Usually you’d create a high-resolution mesh in a 3D app, tell the 3D app to create a normal map, then use your low-resolution mesh in your realtime app with the normal map from the high-resolution mesh, so it appears you have all the details, nooks and crannies.

A displacement map is a bit different to a normal map. A displacement map is greyscale and is just the z displacement (perpendicular to the surface). A normal map is RGB and contains the actual normal vector.
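
So reading a texel back is just a remap from 0…255 to -1…1 - something like this (hypothetical CPU-side sketch; r/g/b are one texel’s channels):

// unpack one normal-map texel back to a vector in -1..1;
// a displacement texel, by contrast, is a single grey value = one height
void unpackNormal(unsigned char r, unsigned char g, unsigned char b,
                  float& nx, float& ny, float& nz) {
    nx = r / 255.0f * 2.0f - 1.0f;
    ny = g / 255.0f * 2.0f - 1.0f;
    nz = b / 255.0f * 2.0f - 1.0f;
}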

http://www.blender.org/development/rele-…-rmal-maps/

dunno if that is helpful in any way or not!

I could be totally wrong about this, but couldn’t you just convert the pixels to triangles, like:

  
  
  
pixels:

 1  2  3  4  5  6
 7  8  9 10 11 12
  
  

and have those pixels become triangles, like, 1-2-7, 2-8-7, 2-3-8, 3-9-8, etc?

use the values from the image to displace (i.e., brightness = height, or however you want to map it) and then from the displacement calculate triangles. If there are too many triangles, maybe resample the image down?
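
in code the indexing could be something like this (just a sketch - w/h are the image dimensions, and the index list holds 3 entries per triangle):

#include <vector>

// two triangles per pixel quad on a w x h grid, vertex index = y * w + x
// (0-based, so the 1-2-7 / 2-8-7 quads above become 0-1-6 / 1-7-6)
void gridToTriangles(int w, int h, std::vector<int>& tris) {
    for (int y = 0; y < h - 1; y++) {
        for (int x = 0; x < w - 1; x++) {
            int i = y * w + x;
            tris.push_back(i);     tris.push_back(i + 1);     tris.push_back(i + w);
            tris.push_back(i + 1); tris.push_back(i + w + 1); tris.push_back(i + w);
        }
    }
}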

just a guess…

take care!!
zach

ps: theo’s right to point out that libraries like triangle++ will create the mesh for you from a point set – but it seems like the pixels are in the order you want the triangles described anyway…

edit - fixed some typos :)

ah, i think i only just understood the question… you want to create your own normal map from a displacement map? Well, a displacement map is essentially a heightfield, so you can compute normals the same way you would for vertices (cross product of the vectors to the neighbours at N and E), and if you google heightfield normals I’m sure you’ll find loads. then map the -1…1 range to 0…255 for your rgb normal map.
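
for one texel that works out to something like this (untested CPU-side sketch; height() is a made-up helper returning a pixel’s brightness as a float, and perpendicular() is ofxVectorMath’s normalized cross product):

#include "ofxVectorMath.h"

float height(int x, int y);  // made-up helper: brightness of the displacement map at (x, y)

// normal for one heightfield texel, packed into an rgb byte triple
void texelNormal(int x, int y, unsigned char rgb[3]) {
    ofxVec3f de(1.0, 0.0, height(x + 1, y) - height(x, y));  // vector to the E neighbour
    ofxVec3f dn(0.0, 1.0, height(x, y + 1) - height(x, y));  // vector to the N neighbour
    de.normalize();
    dn.normalize();
    ofxVec3f n = de.perpendicular(dn);  // the texel's normal
    // map -1..1 into 0..255 for the rgb normal map
    rgb[0] = (unsigned char)((n.x * 0.5f + 0.5f) * 255.0f);
    rgb[1] = (unsigned char)((n.y * 0.5f + 0.5f) * 255.0f);
    rgb[2] = (unsigned char)((n.z * 0.5f + 0.5f) * 255.0f);
}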

edit: essentially what zach said actually…

thanks guys, that was actually what I had been thinking. I also found some threads at gamedev about this subject. I will try some of those techniques as soon as I have some spare time. I think correctly lit projects like that will get a lot more charm.

at zach:
you are actually totally right. I just have to make the displacement shader use the same pixels so that things will be correct.

Yes, I did it:

Video:
http://vimeo.com/2584246

I ended up converting this Sobel HLSL shader:
http://catalinzima.spaces.live.com/blog-…-!223.entry

to GLSL, which computes the normals from the webcam image on the fly.

moka,

this is sick (and also a little gross)! i would love to give it a try if you’re sharing!

jeremy

Hi Jeremy!

Glad you like it! Most of the time I am just sketching things real quick, so I don’t really want to share the whole source code, since there are still some things I don’t really get myself.

Actually, most of the important stuff happens in the shaders. I could share those if you want. Also, I might upload the application so people can play around with it.

hi moka,

i should have been more specific, i wouldn’t expect you to share the whole code. i’m more interested in testing out the shader. i’d tried to do this myself after reading the thread but didn’t make much progress.

are you using face (mouth) tracking for the application?

jeremy

no, I am only using the webcam image as the displacement map and computing a normal map from the webcam image too.

I will post some shader code tomorrow!

Here are some simplified shader examples:

compute normal map:

  
  
uniform sampler2D check;

float texelWidth = 1.0 / 320.0;   // size of one texel (assumes a 320-pixel-wide texture; the same step is used vertically)
float normalStrength = 12.0;

void main(void)
{
    float tl = abs(texture2D(check, gl_TexCoord[1].st + texelWidth * vec2(-1.0, -1.0)).x);   // top left
    float  l = abs(texture2D(check, gl_TexCoord[1].st + texelWidth * vec2(-1.0,  0.0)).x);   // left
    float bl = abs(texture2D(check, gl_TexCoord[1].st + texelWidth * vec2(-1.0,  1.0)).x);   // bottom left
    float  t = abs(texture2D(check, gl_TexCoord[1].st + texelWidth * vec2( 0.0, -1.0)).x);   // top
    float  b = abs(texture2D(check, gl_TexCoord[1].st + texelWidth * vec2( 0.0,  1.0)).x);   // bottom
    float tr = abs(texture2D(check, gl_TexCoord[1].st + texelWidth * vec2( 1.0, -1.0)).x);   // top right
    float  r = abs(texture2D(check, gl_TexCoord[1].st + texelWidth * vec2( 1.0,  0.0)).x);   // right
    float br = abs(texture2D(check, gl_TexCoord[1].st + texelWidth * vec2( 1.0,  1.0)).x);   // bottom right

    // Compute dX using the Sobel kernel:
    //   -1 0 1
    //   -2 0 2
    //   -1 0 1
    float dX = tr + 2.0*r + br - tl - 2.0*l - bl;

    // Compute dY using the Sobel kernel:
    //   -1 -2 -1
    //    0  0  0
    //    1  2  1
    float dY = bl + 2.0*b + br - tl - 2.0*t - tr;

    // build the normal (height axis in y), then map -1..1 into 0..1 for storage
    vec4 N = vec4(normalize(vec3(dX, 1.0 / normalStrength, dY)), 1.0);

    N *= 0.5;
    N += 0.5;

    gl_FragColor = N;
    //gl_FragData[0] = N;
}

Displacement Vertex Shader:

  
  
uniform sampler2D displacementMap;
uniform sampler2D normalMap;

varying vec3 norm;

void main(void)
{
    vec4 newVertexPos;
    vec4 dv;
    vec3 nm;
    float df;

    gl_TexCoord[0] = gl_MultiTexCoord0;

    dv = texture2D( displacementMap, gl_MultiTexCoord0.xy );
    nm = texture2D( normalMap, gl_MultiTexCoord0.xy ).xyz;

    // luminance of the displacement texel
    df = 0.30*dv.x + 0.59*dv.y + 0.11*dv.z;

    // push the vertex out along its normal, scaled by the luminance
    newVertexPos = vec4(gl_Normal * df * 80.0, 0.0) + gl_Vertex;

    // blend the mesh normal with the normal-map normal
    norm = normalize(gl_NormalMatrix * ((gl_Normal + nm) / 2.0));
    gl_FrontColor = gl_Color;
    gl_Position = gl_ModelViewProjectionMatrix * newVertexPos * vec4(1.0, -1.0, 1.0, 1.0);
}

PLEASE post source and shaders.

Hi Moka,
I’ve been looking at this stuff and I’m slightly confused by your shader code. I’m pretty new to GLSL so let me know if I’ve misunderstood.
The computation of the normal map happens in a fragment shader, right?
It doesn’t seem like the two snippets pass data about normals between them. The

  
varying vec3 norm;

(should this be defined as vec4?) doesn’t get used in the frag shader, so where does the texture data referenced by

  
uniform sampler2D check;  

come from and how is it related to what the vertex shader is doing?

Cheers
Chris

sorry i see what’s going on now.

is this method any more efficient than computing the normals on a per-vertex basis at the same time as the vertex displacement? i.e. both within the same shader.

ok. while moka’s example looks awesome, i discovered with my work that if you rotate the sphere then the lighting doesn’t work any more. this is because the normal calculation is only valid for a plane, where the original normals all point in the same direction (0,0,1), but it’s being wrapped onto a sphere, where all the normals point out from the centre of the sphere to the vertex on the sphere’s surface.

to fix this, if you rotate your newly calculated normal by the difference between the previous normal (the direction from the centre of the sphere to the un-displaced position on the sphere) and the object z-axis (0,0,1), then you get correct results for the whole sphere. there is still a simplification, but it is now assuming that only the local 3x3 neighbourhood of vertices is planar, rather than the whole sphere.

i’m simply calculating the axis and angle of rotation and converting it to a rotation matrix.

  
  
// compute new normal from displaced vector neighbourhood
// rotate calculated normal by gl_Normal i.e. rotation from (0,0,1) to gl_Normal
// axis = norm(gl_Normal x (0,0,1))
// angle = acos(gl_Normal . (0,0,1))
vec3 dispNorm;
vec3 axis = normalize(cross(gl_Normal, vec3(0., 0., 1.)));
float c = dot(gl_Normal, vec3(0., 0., 1.));
if (c > 0.99999) {
    // normal already points along +z: nothing to rotate
    dispNorm = nm;
} else if (c < -0.99999) {
    // normal points along -z: just flip
    dispNorm = -nm;
} else {
    float s = sin(acos(c));
    float t = 1. - c;
    // axis-angle rotation matrix; mat3() fills columns, so listing the rows
    // here builds the transpose - i.e. the rotation from (0,0,1) to gl_Normal
    mat3 transform = mat3(t*axis.x*axis.x + c,        t*axis.x*axis.y - s*axis.z, t*axis.x*axis.z + s*axis.y,
                          t*axis.x*axis.y + s*axis.z, t*axis.y*axis.y + c,        t*axis.y*axis.z - s*axis.x,
                          t*axis.x*axis.z - s*axis.y, t*axis.y*axis.z + s*axis.x, t*axis.z*axis.z + c);
    dispNorm = transform * nm;
}
norm = normalize(gl_NormalMatrix * dispNorm);
  

replaces

  
  
norm = normalize(gl_NormalMatrix * ((gl_Normal + nm) / 2.0));
  

obviously there’s a bit more calculation involved but it makes it work properly.