This seems to be some sort of super voxel storage/culling system
It also (presumably) has loads of hidden drawbacks and is alleged to be mostly hype (the whole project disappeared for the past year). At least, that’s what Mr. Minecraft has to say: http://notch.tumblr.com/post/8386977075/its-a-scam
off-topic: dear english native speakers: what kind of accent does this guy speak?
he speaks aussie-yanko-entrepreneur
Yeah it’s strangely evasive. But the promise of ‘unlimited power’ is easy to fulfil by redefining the playing field.
There are even examples of ‘unlimited power’ fractal worlds running in the browser (OpenGL shader based).
Obviously there’s some limitation somewhere. I presume that either procedural worlds or high repetition is required in order to achieve the LOD (otherwise you couldn’t store/enumerate the objects).
Even Mr Minecraft admits “Its a very pretty and very impressive piece of technology”, and goes on to point out other (not too dissimilar) examples of the same ideas being applied (with different promises). And his ‘scam’ accusation isn’t that the video is faked, but rather that the tech is oversold compared to its actual capabilities.
Given any sufficiently interesting new piece of graphics tech, the limitations and advantages offer new artistic opportunities. Even if nobody’s fooled by the sales rhetoric.
hehe maybe fractals will become en vogue in the VJ scene again. i would like that (but then, i also love tunnels, an admission for which you’ll probably get flogged on vjforums :D)
this caught my eye today: http://www.hardocp.com/article/2011/08/10/euclideon-unlimited-detail-bruce-dell-interview
It’s an interview and live demo of the tech. Apparently it all runs on the CPU in real time. An 8-thread mobile i7, but still, pretty impressive.
There’s obviously some pretty slick rendering going on there. At the end he claims that the models are converted to voxels at some insane resolution. It’s most likely a compressed octree data structure of some sort, and it seems like it would have to be rendered using raycasting. In this video, though, he specifically mentions that raytracing is not used and that there are no rays; then again, he also sounds like he has no idea what he is talking about half of the time.
I’ve written a realtime raycasting renderer for both CPU and GPU. The CPU version is hardly real-time, but if I had an 8-thread CPU I’m sure it would be a different story. As for the GPU, the lowly Nvidia ION core is sufficient to render 30fps at 640x480. This is all poorly written and unoptimized code too; I’m not even using octrees.
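For anyone curious what a CPU voxel raycaster like this boils down to, here’s a minimal sketch of the classic Amanatides–Woo DDA grid traversal. The `Grid`/`raycast` names and the flat-array storage are my own assumptions for illustration, not code from any of the projects mentioned here:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Dense n x n x n occupancy grid (no octree, like the unoptimized
// renderer described above). 1 = opaque voxel.
struct Grid {
    int n;
    std::vector<uint8_t> solid;
    bool at(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= n || y >= n || z >= n) return false;
        return solid[(z * n + y) * n + x] != 0;
    }
};

struct Hit { int x, y, z; };

// Walk a ray through unit-sized cells, stepping one cell at a time
// along whichever axis boundary the ray crosses next, and return the
// first solid voxel hit (or {-1,-1,-1} on a miss).
Hit raycast(const Grid& g, float ox, float oy, float oz,
            float dx, float dy, float dz, int maxSteps = 256) {
    int x = (int)std::floor(ox), y = (int)std::floor(oy), z = (int)std::floor(oz);
    int sx = dx > 0 ? 1 : -1, sy = dy > 0 ? 1 : -1, sz = dz > 0 ? 1 : -1;
    auto tTo = [](float o, float d, int c, int s) {
        if (d == 0.0f) return 1e30f;               // never crosses this axis
        float edge = s > 0 ? c + 1.0f : (float)c;  // next cell boundary
        return (edge - o) / d;
    };
    float tx = tTo(ox, dx, x, sx), ty = tTo(oy, dy, y, sy), tz = tTo(oz, dz, z, sz);
    float ddx = dx != 0 ? std::fabs(1.0f / dx) : 1e30f;
    float ddy = dy != 0 ? std::fabs(1.0f / dy) : 1e30f;
    float ddz = dz != 0 ? std::fabs(1.0f / dz) : 1e30f;
    for (int i = 0; i < maxSteps; ++i) {
        if (g.at(x, y, z)) return {x, y, z};       // opaque: stop at first hit
        if (tx < ty && tx < tz) { x += sx; tx += ddx; }
        else if (ty < tz)       { y += sy; ty += ddy; }
        else                    { z += sz; tz += ddz; }
    }
    return {-1, -1, -1};
}
```

An octree would let the ray skip large empty regions in one step instead of crawling cell by cell, which is presumably where most of the speedup in the fancier engines comes from.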
Basically, I’m guessing this is a really well written voxel raycasting engine + data compression scheme that can perform really well under the right circumstances, but I don’t see how it could be used for what we think of as a game engine without everything looking very repetitive.
As for the scam part, there is a reason everything is repeated geometry. There’s no way a world that size and that resolution could be unique objects all over; the amount of memory it would take is astronomical. So, for the sake of impressing investors, they have a handful of pretty models that are repeated to give the illusion of a vast landscape, when in reality the technology is nowhere near ready for a complex and dynamic world. Impressive still, but not game changing at this point. Hopefully they can do something about that with their newfound capital.
Maybe if they can do lower resolution voxels that are rendered with a nice smooth iso-surface for large scale geometry it would work. Who knows. Kudos to them for getting funding and making an amazing demo. I hope this pans out into something usable for the game industry; that would really shake things up and potentially let artists be more intuitive, especially what he says about making physical models out of clay and using 3D scanners to convert them to digital.
Sorry to bring this up again. Maybe people aren’t interested in the topic anymore.
I just saw an update on their work (from last year). It seems that the game industry was not the best target for them, and it looks like they’ve now found their market.
I’m a bit curious… I wish I knew what sort of encoding system they use. The magic must be in the way they store those voxels so they can be read on the fly at such speed…
Wow. If I had a hat I would have to eat it. Apparently I was dead wrong about it not being able to handle a large, non-repetitive scene.
I agree that the magic must be in the data structure, which I would guess is an SVO (sparse voxel octree). This can be used for quick frustum culling and voxel lookup.
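To make the SVO idea concrete, here’s a sketch of one common node layout (my own guess at the general technique, not anything from Euclideon): an 8-bit mask records which octants exist, and children are packed contiguously, so a child’s slot is found by counting the set bits below its octant. Empty space costs nothing, which is the “sparse” part.

```cpp
#include <cstdint>

// One SVO node: a bitmask of occupied octants plus the pool index of
// the first child. Children for set bits are stored back-to-back.
struct SVONode {
    uint8_t childMask = 0;   // bit i set => octant i has a child
    uint32_t firstChild = 0; // index of first child in a node pool
};

// Offset of octant's child within the packed children, or -1 if that
// octant is empty: count how many lower-numbered octants are occupied.
int childSlot(const SVONode& n, int octant) {
    if (!(n.childMask & (1u << octant))) return -1;
    int offset = 0;
    for (int i = 0; i < octant; ++i)
        if (n.childMask & (1u << i)) ++offset;
    return offset;
}
```

A raycaster descends this tree top-down, visiting only the octants the ray actually passes through; the absolute child index would be `n.firstChild + childSlot(n, octant)`.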
I have seen several other systems on youtube as well but now I can’t seem to find them.
One thing that dramatically speeds up most SVO rendering implementations is that all voxels are opaque. This means you only need to find one voxel per pixel of the final rendered scene. Volume raycasting, on the other hand, which allows you to see through areas of differing densities (skin/flesh/bone or whatever), typically needs to process multiple voxels as the ray passes through the volume, instead of just stopping at the first point it hits.
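Here’s a toy illustration of that difference on a 1-D ray of density samples (names and the 0.99 saturation cutoff are my own, just to show the idea): the opaque lookup stops at the first occupied voxel, while volume raycasting composites densities front-to-back until the accumulated alpha saturates.

```cpp
#include <vector>
#include <cstddef>

// Opaque SVO-style lookup: the first non-empty sample along the ray
// is the answer, so the cost is one voxel per pixel in the best case.
int firstHit(const std::vector<float>& samples) {
    for (size_t i = 0; i < samples.size(); ++i)
        if (samples[i] > 0.0f) return (int)i;
    return -1; // ray left the volume without hitting anything
}

// Volume raycast: accumulate opacity front-to-back through every
// sample, with early termination once the ray is effectively opaque.
float accumulate(const std::vector<float>& samples, int& stepsTaken) {
    float alpha = 0.0f;
    stepsTaken = 0;
    for (float s : samples) {
        ++stepsTaken;
        alpha += (1.0f - alpha) * s; // front-to-back compositing
        if (alpha > 0.99f) break;    // early ray termination
    }
    return alpha;
}
```

Even with early termination, the volume ray touches several voxels where the opaque ray touched one, which is why see-through volume rendering is so much more expensive per pixel.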
I was looking at this some days ago: http://www.openvdb.org/. It’s also some kind of sparse hierarchical structure. Most interestingly, it’s open source. It has lots of dependencies (which was a problem for me) and seems to be oriented towards offline rendering of high-quality models more than real-time rendering, but it can be interesting to look into.
OpenVDB is really cool, thanks for sharing!
At some point I’d like to convert ofxVolumetrics to use an SVO data structure, but I haven’t had the time to really dive in and understand the concepts behind this rendering method yet.
Some more interesting things I came across while developing ofxVolumetrics that I’ve never gotten around to implementing:
Hey @arturo and @TimS ,
Thank you very much for your input! This is very interesting! I’m actually trying to write some voxel classes for a project I’m doing. I’m rendering with Tim’s ofxVolumetrics, but I need more tools to load/store/process voxels. I also thought that it could end up becoming simple fluid simulation code… let’s see how far I get. Thanks for the links! I will study them!
I did a quick octree class to encode point clouds as voxels; it’s in this addon: https://github.com/arturoc/ofxDepthStreamCompression/tree/master/src
It’s not very clean code, as I was just trying to see how much I could compress depth by using a binary representation of the octree, but I finally went with another solution so I didn’t refine it further. It’s also kind of slow.
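For readers who haven’t seen the trick: a binary octree representation of a point cloud can be as simple as one child-mask byte per internal node, written breadth-first. This is my own minimal reconstruction of the general idea, not the actual ofxDepthStreamCompression code:

```cpp
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

// occupied(x, y, z, size): does the axis-aligned cube at (x,y,z) with
// side 'size' contain any points? The caller supplies this test.
using Occupied = std::function<bool(int, int, int, int)>;

// Breadth-first serialization: emit one 8-bit child mask per internal
// node; only occupied octants are recursed into, so empty subtrees
// cost nothing. That sparsity is where the compression comes from.
std::vector<uint8_t> encode(int rootSize, const Occupied& occupied) {
    std::vector<uint8_t> out;
    struct Cube { int x, y, z, size; };
    std::queue<Cube> q;
    q.push({0, 0, 0, rootSize});
    while (!q.empty()) {
        Cube c = q.front(); q.pop();
        if (c.size == 1) continue;      // unit leaves need no mask
        int h = c.size / 2;
        uint8_t mask = 0;
        for (int i = 0; i < 8; ++i) {   // bit i encodes octant i
            int cx = c.x + (i & 1) * h;
            int cy = c.y + ((i >> 1) & 1) * h;
            int cz = c.z + ((i >> 2) & 1) * h;
            if (occupied(cx, cy, cz, h)) {
                mask |= 1u << i;
                q.push({cx, cy, cz, h});
            }
        }
        out.push_back(mask);
    }
    return out;
}
```

For a mostly empty volume the stream stays tiny: a single point in a 4³ grid serializes to just two mask bytes, versus 64 cells for a dense bitmap.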