Adapting WebGL to OpenGL

Hi everyone,

I come to you today because I have to port a WebGL/JavaScript program to OpenGL, for stability reasons. The program is meant to run for quite long stretches during an installation (like 10-15 hours per day), and doing this in the browser is Just So Crappy.

Problem is, I’m a visual artist and I’m always learning and working at the same time, so I don’t have much time for development. On this project I only handled the JavaScript, built on top of existing shaders and WebGL programs. It is really more patchwork than programming…

So now I have five shaders and a lot of functions written with JavaScript’s WebGL methods that I have to bring into openFrameworks. I’m quite confused, because all the OpenGL tutorials I find mention GLEW, GLUT or some other library, and I don’t know where to begin. I often see questions about bringing OpenGL to WebGL (there’s apparently a whole library for that), but never the opposite.

If you’re interested, the whole project with all the code is there if you inspect the page.

The GL program generates waves over a video texture, and the shaders use them as error diffusion to dither it in real time.

If someone has experience with a similar case, I’d be glad to hear about it!

good afternoon ++

I have been working on the same thing: generally trying to bring Shadertoy code (built with WebGL) into openFrameworks.

Here is a small example of how I’ve accomplished this so far.

It does not use any outside libraries, and for convenience’s sake I have put the shader code inside the setup part of the openFrameworks program instead of in separate files. The passing of variables should be quite clear, as well as how they are used within each of the shaders.

It may help you understand some of the techniques used to write this kind of code. I also come from a visual arts background, and most of my code starts as an experimental hack. The link I provided was mostly a note of how I did something, so I can use it later.

Hello there. This is from what I remember; I haven’t programmed in a while.

I’ve ported a few WebGL programs to OpenGL before, and most of the problems were about porting the JavaScript to C++. WebGL and OpenGL are essentially the same thing: both are just calls to the hardware, and the hardware is already specified and cannot change much. The big difference is that WebGL is based on OpenGL ES 2.0, while desktop OpenGL goes up to 4.5.
So if you try to port a big shader with modern functionality to the old version, you might not be able to, or you will get different results or none at all. Conversely, getting an old GLSL shader to work on a modern system may tell you that a function was deprecated, or it may just silently ignore you.
Now, if you are talking about openFrameworks’ use of GLSL, you might bump into problems sending floating-point and integer textures, as well as the POT (power-of-two) texture problem. Remember, you are porting WebGL to a system that can handle more modern OpenGL.
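To make the version gap concrete, here is roughly what one and the same (made-up) fragment shader looks like in WebGL 1’s GLSL ES 1.00 versus desktop GLSL 150; the port is mostly mechanical renaming:

```glsl
// WebGL 1 (GLSL ES 1.00), what Shadertoy-era code usually looks like:
precision mediump float;          // required in ES; a no-op on desktop
varying vec2 vTexCoord;
uniform sampler2D uTex;
void main() {
    gl_FragColor = texture2D(uTex, vTexCoord);
}

// The same shader in desktop GLSL 150 (OpenGL 3.2 core):
#version 150
in vec2 vTexCoord;                // varying becomes in (in the fragment stage)
out vec4 fragColor;               // gl_FragColor is gone; declare an output
uniform sampler2D uTex;
void main() {
    fragColor = texture(uTex, vTexCoord);   // texture2D becomes texture
}
```

Going the other way (modern GLSL down to ES 2.0) is where you hit the wall described above, because newer built-ins simply have no equivalent in the old version.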
In any case, your problem is not exactly porting WebGL/JavaScript to OpenGL, but porting JavaScript to C++, in this case to openFrameworks. I did this many years ago: ports of the Orange Book examples for openFrameworks:

The code was made to do exactly the same thing.

ShaderToy was already ported by PatricioGonzales and later by me:

The functionality is rather outdated, as no video input was implemented; here is an awkward video demo I did of it a few months ago:

My old openFrameworks account had a lot more GLSL stuff too, including a test port of Cg to openFrameworks.

I’m not as active as I used to be, so that’s about as much as I can help. Good luck!

oooh, hey, thanks for the links.

I wish I had seen those when I was looking, especially as I was searching for methods that were video-driven and time-driven for cycling visuals.

I’m not sure any of us has helped answer the original poster’s question, except to leave a bunch of ‘try this, learn from what doesn’t work’ breadcrumbs (what I call my “code-test-curse-repeat” cycle).

An easy-to-understand method for demonstrating how to translate techniques and code from one platform to the other is probably still needed.

I will definitely check out the code you’ve generously shared… it’s nice to see it all in one place.

If there’s one technique I don’t remember being taught by the openFrameworks examples and docs, it’s the ping-pong buffer, but there are many examples of it now. My kaleidoscope example has it built in. Patricio Gonzales has a few other examples with it too; it is an extremely common and powerful concept. Check the kaleidoscope in my GitHub, since the forum isn’t letting me post the link (who flags free source code as spam?).
Hint: you usually need a ping-pong buffer and multiple passes over it to do blurring in almost anything.

You will not find a method to go from JavaScript shaders to C++ shaders, because the shaders themselves are already very similar. OpenGL calls cannot change much because they are made for the hardware, and GLSL code cannot change because it is compiled at runtime by your graphics driver for the graphics pipeline; so the problem comes down to porting the JavaScript to openFrameworks. If you go from three.js to openFrameworks you have one type of problem; from three.js to pure OpenGL there are others. Most of the problems I ran into were because OpenGL was made to be efficient, not easy to understand. Pixels are not pixels: textures used to have to be POT, texture coordinates are floating point, and colours are floating point too. From a graphics-architecture point of view, direct pixel access is a newer thing, found initially in ARB extensions, and I don’t think it is part of OpenGL ES, which is what WebGL uses.
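The “pixels are not pixels” point in practice: `texture2D` in WebGL-era GLSL takes normalized floating-point coordinates in 0.0–1.0, so any code that thinks in pixels has to divide by the texture size first. A made-up fragment for illustration (the uniform names are invented):

```glsl
uniform sampler2D uTex;
uniform vec2 uTexSize;   // texture dimensions in pixels, passed from the app
void main() {
    // gl_FragCoord is in pixels; texture2D wants 0.0-1.0 coordinates
    vec2 uv = gl_FragCoord.xy / uTexSize;
    // even "the pixel one to the left" must be expressed in UV space
    vec2 leftNeighbour = uv - vec2(1.0 / uTexSize.x, 0.0);
    gl_FragColor = 0.5 * (texture2D(uTex, uv) + texture2D(uTex, leftNeighbour));
}
```

Neighbour lookups like this are exactly what error-diffusion and blur shaders do constantly, so getting the pixel-to-UV conversion right is usually the first porting hurdle.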

I can’t really help much without seeing examples, but guessing at what the original post wanted: if the waves move over a video, it might need a ping-pong buffer, but I can only guess.