Ray Tracer implementation


#1

Hello, I’m following the Graphics Codex tutorials http://graphicscodex.com/projects/rays/index.html and I’m now at the point where I have to build a ray tracer in OF. Precisely:
“Backward” ray tracing algorithms iterate over the light rays that entered a camera and trace them back along the negative direction of propagation to discover the surfaces, and eventually light sources, that gave rise to them.

I need a GUI with a button that triggers a “render”, and this render will take some seconds or minutes, depending on the resolution, to render the Cornell Box.

My question is: once I’ve set up (in the setup method) the scene with the model (a loaded .obj file with the Cornell Box), defined the camera position and the light position, and initialized the GUI, where should I draw the rendered image? I assume not in the draw method, as what I need to draw is an image that gets updated progressively.
The same goes for the ray tracing loop: it should not go into the update method, right?

So, I think everything should go in the setup method, which restarts from the beginning when I click on the “render” button in the GUI.

Is this correct? Does someone have a suggestion about the skeleton of my application?


#2

The way I would do it is: get your GUI trigger to start an ofThread, then do the render calls (your raytracing loop) inside the ofThread, which will render your image to an FBO. This way your renderer is decoupled from the oF loop, so it does not matter how long it takes to render; you will see the progress when drawing the FBO from your draw() method.
Maybe other people have better ideas?


#3

Actually, the final implementation should be multithreaded. Can an ofThread have child threads? In that case I could do as you say.

But, to be honest, I would try to keep it as simple as possible at the beginning, and I would like to introduce threads later, when the minimal implementation works.


#4

Hi, did you solve this?
Unfortunately you cannot use an FBO in a thread other than the main thread; it is an OpenGL limitation, I think.
Take a look at this example: https://github.com/openframeworks/openFrameworks/tree/master/examples/threads/threadExample
It is pretty much what you need.
In order to distribute the workload across all the threads that can run at once (which depends on the number of cores your computer has), run several threadedObjects, but modify the class so you can specify which area of the total pixels each thread should handle.
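For example, splitting the image rows among the available cores could look something like this (just a rough sketch; splitWork and imageHeight are illustrative names, and the threadedObjects are whatever you have in your app):

#include <thread>

void splitWork(int imageHeight){
    // number of threads the hardware can run concurrently
    int nThreads = std::thread::hardware_concurrency();
    int rowsPerThread = imageHeight / nThreads;
    for(int i = 0; i < nThreads; i++){
        int yStart = i * rowsPerThread;
        // the last thread takes the leftover rows if the division is not exact
        int yEnd = (i == nThreads - 1) ? imageHeight : yStart + rowsPerThread;
        // hand the row range [yStart, yEnd) to threadedObject i
    }
}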
This might also be useful:
https://github.com/bakercp/ofxTaskQueue/
cheers


#5

Hello @roymacdonald, no, unfortunately I did not solve it. Also, I would like to focus on the ray tracer first, without using ofThread.
What I have now is:
a Cornell Box scene loaded through ofxAssimpModelLoader,
a camera position,
a light position,
a point on the virtual image plane (a point between the camera and the object through which the rays pass).

I want to trace the ray that goes from the camera to the object, passing through the point on the virtual image plane.


Image from http://graphicscodex.com/, by Morgan McGuire.

What I think I will do is:
1) In the setup method, I set up a GUI, a scene and an empty PNG image.
2) In the setup method, I start the ray casting. I still have to dig into the algorithm, but I think it belongs here and should not go in the update method.
3) Whenever I find a point on the surface of the object that intersects one of the rays, I calculate the light and the final color of the pixel, and then I save it into the PNG image.
4) The draw method (which is called 60 times per second) just displays the current state of the PNG, which is still being updated by the ray tracing loop launched in the setup method.
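
In code, something like this minimal skeleton (untested; raytrace() is just a placeholder for the tracing function I still have to write):

#include "ofMain.h"
#include "ofxGui.h"
#include "ofxAssimpModelLoader.h"

// placeholder for the actual ray tracing loop, to be written
void raytrace(ofxAssimpModelLoader& model, glm::vec3 cam, glm::vec3 light, ofImage& img);

class ofApp : public ofBaseApp {
public:
    void setup(){
        gui.setup();
        gui.add(renderButton.setup("render"));
        renderButton.addListener(this, &ofApp::onRender);
        model.loadModel("CornellBox.obj");
        image.allocate(640, 480, OF_IMAGE_COLOR);
    }
    void onRender(){
        // blocking call: the app freezes until the render is done, so the
        // progressive drawing will only work once this moves out of the main thread
        raytrace(model, cameraPos, lightPos, image);
    }
    void draw(){
        image.draw(0, 0);
        gui.draw();
    }
private:
    ofxPanel gui;
    ofxButton renderButton;
    ofxAssimpModelLoader model;
    glm::vec3 cameraPos, lightPos;
    ofImage image;
};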

Do you think it makes sense?
When I get this working I will start to see how I can improve the speed by adding ofThread, but for now I’ll leave it out.


#6

I see. I thought that you had the ray tracer solved. There is ofxRay, which kind of does what you intend to do. At least it will let you set up the camera that “shoots” rays. The approach you mention should work, but it will not have lights, just the color of the objects. To add lights, you’ll need to bounce the rays off the objects: depending on the surface properties, the light will get scattered by a certain amount, generating more rays. Then you check whether any of these rays arrives at a light, and from that you compute the final color of the pixel in the PNG.
I’m not sure if this is the standard approach, I’m just guessing.
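In rough pseudo-C++, the bouncing idea could look like this (untested; Scene, Ray, Hit and the material functions are made-up names, and colors are glm::vec3 in [0, 1]):

glm::vec3 trace(const Scene& scene, const Ray& ray, int depth){
    if(depth > maxDepth) return glm::vec3(0);          // stop bouncing at some point
    Hit hit;
    if(!scene.intersect(ray, hit)) return background;  // the ray escaped the scene
    glm::vec3 color = hit.material.emitted();          // nonzero only if we hit a light
    Ray bounced = hit.material.scatter(ray, hit);      // scatter depending on the surface properties
    // attenuate by the surface color and keep tracing the new ray
    return color + hit.material.attenuation() * trace(scene, bounced, depth + 1);
}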
Also, it might be more effective to run a kind of low-res render first and then, based on that one, make a higher-res one, taking into account what is in the low-res render in order to discard unused rays.

hope this helps.

best


#7

It took me a while to figure out how to do it, but here I am with a new question :wink:
I’ve written a rudimentary version of a Ray Tracer, with almost no features, but it works.
Now I would like to add multithreading to it.

I’ve read this:
http://openframeworks.cc/ofBook/chapters/threads.html

And I had a look at this example:

I assume I should find a way to run this function https://github.com/edap/ofxRayTracer/blob/master/src/ofxRayTracer.cpp#L11

in a thread. But I do not understand how to adapt that example to my code.
Does anyone have any suggestions?


#8

Make a threaded class in which you call the necessary stuff inside the threadedFunction().
You’ll need to set up each of these class instances to process only a part of the whole scene; then, when all of them are ready, you just put together the results to produce the whole scene render.

Also take a look at the threadedChannel example, which might be more suitable.
hope it helps.
best


#9

Hello Roy, thank you for your answer. “All the necessary stuff” is quite a lot: there are no features yet, so everything is necessary at the moment. I had a look at the threadedChannel example, and it looks clearer to me than the other example, but I still do not get how to adapt it to my ray caster. I see 2 ways to do things:

A)

Write 2 channels like this:

ofThreadChannel<ofxRTPinholeCamera, shared_ptr<ofImage>> toCast;
ofThreadChannel<ofxRTPinholeCamera, shared_ptr<ofImage>> casted;

Then, in the traceImage method:

void ofxRayTracerThread::traceImage(const ofxRTPinholeCamera& camera, shared_ptr<ofImage>& image) const{
    toCast->send(camera, image);
}

And in my threadedFunction:

void ofxRayTracerThread::threadedFunction(){
    const int width = int(image->getWidth());
    const int height = int(image->getHeight());

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            glm::vec3 P;
            glm::vec3 w;
            // Find the ray through (x, y) and the center of projection
            camera.getPrimaryRay(float(x) + 0.5f, float(y) + 0.5f, width, height, P, w);
            image->setColor(x, y, L_i(ofxRTRay(P, w)));
        }
    }
    image->update();

#if __cplusplus>=201103
    casted.send(std::move(image));
#else
    casted.send(image);
#endif
}

A quick aside: I do not understand this part of the example code:

ofPixels pixels;
while(toAnalyze.receive(pixels)){

Why does pixels need to be redefined before the while loop? Wasn’t it passed to the toAnalyze channel through the send method?

Where should my ofxRayTracerThread::threadedFunction() method take the references for the camera and image variables?

B)

The other thing I was considering is to have a threadedFunction for each pixel in the image. In this case each channel would receive something like:

// x and y are the image coordinates
// ray is the primary ray
toCast->send(x, y, ray)

This sounds more understandable to me, but I have no experience with multithreading and I do not know what to do.


#10

I think that in the example there is an unnecessary variable definition. @arturo, can you confirm this?

ofPixels pixels;
while(toAnalyze.receive(pixels)){

I’ve tried to comment out the pixels variable definition and it also works:

//ofPixels pixels;
while(toAnalyze.receive(pixels)){

This would clarify something for me.


#11

No, that’s correct. The pixels in the class are only for the main thread to access; the ones in that function are only for the analysis thread to access. If you used the same memory structure in both, you would have the main and auxiliary threads accessing those pixels simultaneously.

They should probably have different names to make it less confusing.


#12

But ofPixels pixels is already in the header file


#13

Yes, that’s what I mean: the pixels in the header are only for the main thread to access; if both threads accessed them it would be problematic. In the case of this example it probably wouldn’t crash, if I remember well, but you’d get tearing, because you would upload the pixels to the texture in the main thread while the analysis thread is writing to them.

The pixels declared locally in the threadedFunction are the only ones the thread is supposed to be touching; that’s why the variable is declared twice.


#14

So, to see if I got it right:

in the header

ofPixels pixels;

Is what the main thread reads and draws in the draw method. It is the variable that keeps track of all the pixels being updated by the threads.

in the .cpp file
void ImgAnalysisThread::threadedFunction(){
    ofPixels pixelsSent;
    while(toAnalyze.receive(pixelsSent)){ // these should be renamed, correct?
        // pixelsSent refers to the pixels that were sent with toAnalyze.send(pixels),
        // right?
        pixelsSent.setImageType(OF_IMAGE_GRAYSCALE);
        for(auto & p: pixelsSent){
            if(p > 80) p = 255;
            else p = 0;
        }

        // this replaces the value of the ofPixels pixels variable defined in the header
        // with the new computed values stored in pixelsSent
        analyzed.send(std::move(pixelsSent));
    }
}

#15

yeah that’s it.


#16

Well, arturo cleared some of your doubts.
I haven’t read your raycaster implementation, but the way of threading it should be independent of it. So, in the ofxRayTracerThread class, add a setup function (or traceImage, as you already have) where you pass the camera as well as the portion of the whole scene to render; for that, I think the best option is an ofRectangle.
I guess that it will be more efficient to make each ofxRayTracerThread instance have its own pixels object to render into, instead of passing an external object, as that would need some locking to avoid thread-related memory access problems.
Then define the ofThreadChannel objects as ofThreadChannel<ofPixels>.
So the h file should look something like:

class ofxRayTracerThread: public ofThread {
public:
    ofxRayTracerThread();
    ~ofxRayTracerThread();
    void traceImage(const ofxRTPinholeCamera& camera, ofRectangle rect, shared_ptr<ofImage> fullSceneImg);
    void update();
    bool isRayTracing(){ return bIsRayTracing; }
    bool isRayTracingFinished(){ return bIsRayTracingFinished; }

private:
    void threadedFunction();
    ofThreadChannel<ofxRTPinholeCamera> toCast;
    ofThreadChannel<ofPixels> casted;
    ofxRTPinholeCamera camera;
    ofRectangle roiRect;
    shared_ptr<ofImage> fullSceneImg;
    ofPixels renderPixels, castedPixels;
    bool bIsRayTracing = false;
    bool bIsRayTracingFinished = false;
};
and cpp:

void ofxRayTracerThread::threadedFunction(){
    ofxRTPinholeCamera cam;
    while(toCast.receive(cam)){
        for (int y = roiRect.y; y < roiRect.getMaxY(); ++y) {
            for (int x = roiRect.x; x < roiRect.getMaxX(); ++x) {
                glm::vec3 P;
                glm::vec3 w;
                // I'm not sure if the following function expects the full image size where roiRect.width is passed.
                cam.getPrimaryRay(float(x) + 0.5f, float(y) + 0.5f, roiRect.width, roiRect.height, P, w);
                renderPixels.setColor(x - roiRect.x, y - roiRect.y, L_i(ofxRTRay(P, w)));
            }
        }

#if __cplusplus>=201103
        casted.send(std::move(pixels));
#else
        casted.send(pixels);
#endif
    }
}
void ofxRayTracerThread::traceImage(const ofxRTPinholeCamera& camera, ofRectangle rect, shared_ptr<ofImage> fullSceneImg){
    bIsRayTracing = true;
    bIsRayTracingFinished = false;
    roiRect = rect;
    this->fullSceneImg = fullSceneImg;
    renderPixels.allocate(rect.width, rect.height, OF_IMAGE_COLOR_ALPHA);
    toCast.send(camera);
}
ofxRayTracerThread::ofxRayTracerThread(){
    startThread();
}

ofxRayTracerThread::~ofxRayTracerThread(){
    toCast.close();
    casted.close();
    waitForThread(true);
}

void ofxRayTracerThread::update(){
    if(bIsRayTracing){
        while(casted.tryReceive(castedPixels)){
            bIsRayTracingFinished = true;
        }
        if(bIsRayTracingFinished){
            // set the pixels of fullSceneImg according to castedPixels and roiRect:
            // you'll need to walk the pixels of castedPixels and place them in
            // fullSceneImg's pixels relative to roiRect.
        }
    }
}

I think that it is a better idea to use an ofPixels object in the threadedFunction rather than an ofImage.
You’ll need some kind of thread manager class, which sets up all the threads and then assembles the whole image once ready.
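Something like this, for instance (again untested; RayTracerManager is just an illustrative name, and it assumes the ofxRayTracerThread above):

class RayTracerManager {
public:
    void setup(int width, int height, int numThreads){
        fullSceneImg = make_shared<ofImage>();
        fullSceneImg->allocate(width, height, OF_IMAGE_COLOR_ALPHA);
        float bandHeight = float(height) / numThreads;
        for(int i = 0; i < numThreads; i++){
            // one horizontal band of the image per thread
            rects.push_back(ofRectangle(0, i * bandHeight, width, bandHeight));
            threads.push_back(make_unique<ofxRayTracerThread>());
        }
    }
    void render(const ofxRTPinholeCamera& camera){
        for(size_t i = 0; i < threads.size(); i++){
            threads[i]->traceImage(camera, rects[i], fullSceneImg);
        }
    }
    void update(){ // call this from ofApp::update()
        for(auto& t: threads){ t->update(); }
    }
private:
    vector<unique_ptr<ofxRayTracerThread>> threads;
    vector<ofRectangle> rects;
    shared_ptr<ofImage> fullSceneImg;
};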

BTW, I haven’t tested any of this, I just wrote it down here, but it should work, or at least give you a general idea.

hope it helps.
best.


#17

Hello Roy, thank you very much for this detailed answer!
I will give it a try and let you know.


#18

Hello @roymacdonald, I’ve another question.
If, as you said, “it will be more efficient to make each ofxRayTracerThread instance have its own pixels object to render into”, why does this pixels object need to be as big as the whole image?

renderPixels.allocate(rect.width, rect.height, OF_IMAGE_COLOR_ALPHA);

Would it not be enough to make it 1x1?

renderPixels.allocate(1, 1, OF_IMAGE_COLOR_ALPHA);

#19

There’s a mistake I spotted in

#if __cplusplus>=201103
        casted.send(std::move(pixels));
#else
        casted.send(pixels);
#endif
	}

It should be renderPixels and not pixels.

You need to allocate renderPixels before accessing it. If you make it 1x1, it will crash unless the rect passed in is 1x1.


#20

I see. I’ve sorted out the first part, using ofRectangle and writing the calculated color first to an ofPixels instance and then, at the end, to an image. Just this doubles the performance: setting a color on an ofPixels is way faster than setting a color on an ofImage.
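Roughly this pattern (a sketch; computeColor stands for my L_i call):

ofPixels pix;
pix.allocate(int(rect.width), int(rect.height), OF_IMAGE_COLOR);
for (int y = 0; y < int(pix.getHeight()); ++y) {
    for (int x = 0; x < int(pix.getWidth()); ++x) {
        pix.setColor(x, y, computeColor(x, y)); // cheap: just writes to memory
    }
}
image.setFromPixels(pix); // copy into the ofImage (and its texture) once, at the end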

But I still have to fix the threads. The problem I’m facing now is that, in my current implementation, the meshes, the materials and the lights are saved as instance variables.

And I need them when the thread has to calculate the incoming light and the intersections. I have two choices:

1 - Passing the meshes, lights and materials vectors (by const reference) to the toCast.send method. Basically, changing this:

toCast.send(camera);

to this:

toCast.send(camera, materials, meshes, lights);

2 - Having the materials, meshes and lights as instance variables in each thread, as happens for roiRect. This would mean that each thread has a copy of these 3 variables. They are small right now, but with a big scene this becomes a problem, I think.
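
For option 1, since an ofThreadChannel carries a single type, I guess I would have to bundle everything into a struct, something like this (untested sketch; the scene types are placeholders for the ones I actually use):

// hypothetical payload with everything a worker thread needs for one render job
struct RenderJob {
    ofxRTPinholeCamera camera;
    // const pointers to the scene data owned by the app: no copies, and safe
    // as long as nothing modifies the scene while the threads are rendering
    const vector<ofxRTMaterial>* materials;
    const vector<ofMesh>* meshes;
    const vector<ofxRTLight>* lights;
};

ofThreadChannel<RenderJob> toCast;

// sending a job:
RenderJob job{ camera, &materials, &meshes, &lights };
toCast.send(job);

This way every thread would read the same scene data without copying it, which should also avoid the memory problem of option 2.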