Equirectangular Projection Shader


#1

Hi all,

I was wondering if anyone has any experience with generating equirectangular projections from 3D scenes on the fly in GLSL. Imagine you have a camera inside a sphere and want to record the entire sphere into one 2D texture. It seems like this is something that would already exist.

The projections look something like these


Some more info on these.

There is also this library for generating the image from 6 cubemap textures, but it would be nice to go straight from the 3D scene to the projection without the cube map in between.

There’s lots of information about how to project an equirectangular texture onto a sphere, but very little that describes how the projection is generated in the first place. Anybody have any leads?

There is also this page by Paul Bourke about different transformations, and in particular the section on “Converting to and from 6 cubic environment maps and a spherical map” seems like it might offer some hints if you want to do it from a cube map.
http://paulbourke.net/geometry/transformationprojection/

Thanks!


#2

I think I found an answer for this using a cubemap. It would still be nice if you could bypass that step entirely, but maybe that’s not possible.

Using a shader found here. I only modified one line of it so that the texcoords work with oF.
https://www.shadertoy.com/view/XsBSDR#

And Andreas Muller’s ofxCubeMap

Frag Shader

uniform samplerCube envMap;

void main() {

    // remap oF's texcoords into 0..1 -- only line modified from the Shadertoy example
    vec2 tc = gl_TexCoord[0].st / vec2(2.0) + 0.5;
    // convert to longitude (-pi..pi) and latitude (-pi/2..pi/2)
    vec2 thetaphi = ((tc * 2.0) - vec2(1.0)) * vec2(3.1415926535897932384626433832795, 1.5707963267948966192313216916398);
    // spherical coordinates -> unit ray direction for the cube map lookup
    vec3 rayDirection = vec3(cos(thetaphi.y) * cos(thetaphi.x), sin(thetaphi.y), cos(thetaphi.y) * sin(thetaphi.x));

    gl_FragColor = textureCube(envMap, rayDirection);
}

Vert Shader:

void main() {
    // pass the texcoords through and apply the standard transform
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

cpp

// member variables (these would live in testApp.h in a normal oF project)
int w, h;
ofShader warpShader;
ofxCubeMap cm;

void testApp::setup(){
    w = 900;
    h = 600;
    ofSetWindowShape(w, h);
    
    warpShader.load("base");   // loads base.vert / base.frag from the data folder
    
    // the six cube map faces
    cm.loadImages("x-pos.jpg",
                  "x-neg.jpg",
                  "y-pos.jpg",
                  "y-neg.jpg",
                  "z-pos.jpg",
                  "z-neg.jpg" );
}

void testApp::draw(){
    
    cm.bind();                                   // bind the cube map (texture unit 0)
    warpShader.begin();
        warpShader.setUniform1i("envMap", 0);
        // draws a w x h quad with texcoords; the shader warps the whole cube map into it
        cm.drawFace(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, 0, w, h);
    warpShader.end();
    cm.unbind();
}

#3

This looks super interesting.

I was looking into something similar because we have a pan-tilt-zoom (PTZ) camera that I wanted to use to build a constantly updating 360° field of view.

Do you think it can be adapted to combine multiple images from different PTZ camera angles into the equirectangular image? This would allow me to build a spherical representation of what the camera can see.

Here are some attempts:

Thanks, stephan


#4

Did any of you figure out the direct shader approach? I am using the cubemap approach but would really want to do away with a single vertex shader that does this style projection.

Thanks


#5

@Jason_Oliver_Stone As I understand it, the cubemap approach is the best way to go unless you are rendering your scene by raymarching or some other technique where you’re doing all the 3D rendering inside a fragment shader.

One problem with doing things in the vertex shader is that the projection warps straight lines into curves, so in order to get good results you would need a very tessellated mesh. I’ve never seen this implemented, though I suppose it is possible.

If you don’t need a realtime method, the code I posted up above seems to work well. I think you could expand on it by rendering your scene into a cube map and then sending that to the shader, skipping the image loading step entirely (rough sketch below). You may want to check out this repo, which seems to build on the code I posted above and ofxCubeMap. There have also been a few more snippets posted on Shadertoy in the meanwhile.
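To make that concrete, here is a rough, untested sketch of rendering the scene straight into a cube map every frame and then warping it with the same frag shader. It uses raw GL calls for the cube map rather than ofxCubeMap, drawScene() is a stand-in for whatever draws your scene, and the output quad’s texcoords are put in the -1..1 range so they match the tc remap on the first line of the frag shader; the face up vectors may also need flipping depending on which cube map convention you’re after.

int faceSize = 512;          // resolution of each cube face (pick whatever you need)
GLuint cubeTex;              // the cube map we render into
ofFbo faceFbo;               // scratch FBO, reused for every face
ofShader warpShader;

void testApp::setup(){
    faceFbo.allocate(faceSize, faceSize, GL_RGB);
    warpShader.load("base");

    // allocate an empty cube map texture
    glGenTextures(1, &cubeTex);
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);
    for(int i = 0; i < 6; i++){
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB,
                     faceSize, faceSize, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
}

void testApp::draw(){
    // look directions and up vectors for the six faces (+X, -X, +Y, -Y, +Z, -Z)
    ofVec3f targets[6] = { ofVec3f( 1,0,0), ofVec3f(-1,0,0),
                           ofVec3f(0, 1,0), ofVec3f(0,-1,0),
                           ofVec3f(0,0, 1), ofVec3f(0,0,-1) };
    ofVec3f ups[6]     = { ofVec3f(0,-1,0), ofVec3f(0,-1,0),
                           ofVec3f(0,0, 1), ofVec3f(0,0,-1),
                           ofVec3f(0,-1,0), ofVec3f(0,-1,0) };

    ofCamera cam;
    cam.setPosition(0, 0, 0);
    cam.setFov(90);               // 90 degrees covers exactly one cube face
    cam.setAspectRatio(1.0);

    for(int i = 0; i < 6; i++){
        faceFbo.begin();
        ofClear(0);
        cam.lookAt(targets[i], ups[i]);
        cam.begin();
        drawScene();              // hypothetical: draw your scene here
        cam.end();
        // copy the freshly rendered face from the FBO into the cube map
        glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);
        glCopyTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0,
                            0, 0, 0, 0, faceSize, faceSize);
        glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
        faceFbo.end();
    }

    // warp the cube map into an equirectangular image with the shader above
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);
    warpShader.begin();
        warpShader.setUniform1i("envMap", 0);
        // full-window quad with texcoords in -1..1 (matches the shader's remap)
        ofMesh quad;
        quad.setMode(OF_PRIMITIVE_TRIANGLE_STRIP);
        quad.addVertex(ofVec3f(0, 0, 0));                          quad.addTexCoord(ofVec2f(-1, -1));
        quad.addVertex(ofVec3f(ofGetWidth(), 0, 0));               quad.addTexCoord(ofVec2f( 1, -1));
        quad.addVertex(ofVec3f(0, ofGetHeight(), 0));              quad.addTexCoord(ofVec2f(-1,  1));
        quad.addVertex(ofVec3f(ofGetWidth(), ofGetHeight(), 0));   quad.addTexCoord(ofVec2f( 1,  1));
        quad.draw();
    warpShader.end();
    glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
}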

There is also this command line tool cmft that I’ve used in the past. You may be able to pipe cubemap images directly to cmft (or just run it on a folder of images afterwards).


#6

Thanks for the reply.

I think I had a typo in my earlier post. I meant to say I want to do away with the cubemap approach and want it to be done on the fly using a vertex shader. I am trying to do this inside of Maya Viewport 2.0, so further restrictions come in.

I was able to get a “spherical” projection using a GLSL shader. It has the drawbacks you mentioned. What I really want is an equirectangular render like the one you get from a path-traced renderer like Arnold or PBRT.

In a way, a true latlong image maps every pixel in the rendered image to a corresponding 3D location. But in a vertex shader the domain is the incoming vertices. In a path tracer you visit every pixel, compute a spherical ray, and fetch the corresponding hit point from the ray. But how would you go about doing the reverse - that is, start with a spherically projected vertex and map it to the correct pixel, or to NDC space scaled by the render resolution?

So I doubt there is a way to map every pixel as an equirectangular image requires. Or do you think it can be done inside a vertex shader?

Thanks
Jason


#7

@Jason_Oliver_Stone AFAIK this isn’t usually done in the vertex shader. If you tried to move the vertices to their corresponding polar coordinates, there would be odd angles on your mesh where they were stretched. That’s why you would need a tessellation shader or something else to make your mesh much more dense for the translation. Someone else describes this problem here:

In a naive approach, a vertex shader that transforms the vertex positions not by matrix multiplication, but by feeding them through trigonometric functions may seem to do the trick. The problem is that this will not make straight lines “curvy”. You could use a tessellation shader to add sufficient geometry to compensate for this.
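Just to make that naive version concrete, a minimal (untested) sketch of such a vertex shader might look like this. maxDist is a made-up uniform used only to produce a crude depth value, and triangles that cross the 180° seam behind the camera will smear across the whole image, which is one more reason it only holds up on a very densely tessellated mesh:

// Hypothetical uniform: the farthest distance you expect, used for a crude depth value.
uniform float maxDist;

void main() {
    gl_TexCoord[0] = gl_MultiTexCoord0;

    // vertex position relative to the camera (eye space)
    vec3 p = vec3(gl_ModelViewMatrix * gl_Vertex);

    // direction from the camera to the vertex as longitude / latitude
    float lon = atan(p.x, -p.z);          // -pi..pi, 0 = straight ahead
    float lat = asin(p.y / length(p));    // -pi/2..pi/2

    // place the vertex directly in NDC: x spans 360 degrees, y spans 180 degrees
    gl_Position = vec4(lon / 3.1415926535897932,
                       lat / 1.5707963267948966,
                       length(p) / maxDist,   // crude depth so nearer geometry still occludes
                       1.0);
}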

One of those shadertoy links has an example of ray tracing the scene to a texture, but again, you’d have to render your whole scene that way, which isn’t really oF (or brain) friendly.


#8

While I am okay with using a tessellation shader, I can’t seem to get the 360° spherical projection to work the way I want. Like I explained in my earlier post, what I’m really after is to match the latlong render to that of a path tracer.

Suppose I want a render output of 2000x1000 pixels (Maya’s playblast in this case, which does a hardware/OpenGL viewport render).

So how can I ensure that, within the vertex shader, each incoming vertex maps to a pixel in view space according to my desired resolution? At the moment, I can get it to “unwrap” on screen like environment mapping, but that’s not really a latlong image that I can use for VR.

Is there a correct solution/technique that you know?


#9

@Jason_Oliver_Stone Ultimately, I’m not sure how you’d go about doing it the way you describe, sorry to say. I do remember that someone made a latlong lens shader for Maya. If you’re already using Maya, it might make sense to just use that, or perhaps there’s some source floating around that you could dig into.

I’ve never tried it though, so ymmv.