
How Does Voxel Quest Work Now? (August 2015 Update)

7/31/2015

Since its inception, the rendering technology behind Voxel Quest has undergone three major revisions. The first was isometric chunk rendering (every voxel computed at a pixel scale), and you can find an explanation of how that worked here. The next revision added perspective and meshed those voxels with a point cloud. The current and final revision only renders visible surface points, and it procedurally generates these in real time. Arguably the isometric rendering looked the best of all the methods, but I also spent over a year refining it. The current revision is only 3-4 months in, so give it time to mature. :)
The fact that VQ has undergone three tech revisions over two years probably seems a bit ridiculous, and maybe it is.  Something like this would normally kill a game.  That said, the point here is not just to make a game (plenty of people are doing that already), but to make a unique engine, and that could not happen in a vacuum. All I know is that I am finally happy with where the engine is at in terms of performance and flexibility, and I couldn't have gotten here without knowing everything I've learned since the start.

So the most common question I get, of course, is how does this stuff work? It is actually simpler than it might seem.

At a high level it is based on ray tracing, specifically a method known as Signed Distance Field ray marching (SDF ray marching).  But that is only part of what is going on.

Vanilla ray marching simply steps along a ray at fixed intervals until it hits something. This is costly with a small step size, and error-prone with a large step size. SDF ray marching instead computes the distance to the nearest object at the current point and advances along the ray by exactly that amount; since nothing in the scene is closer than that distance, the step can never pass through a surface.
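To make that concrete, here is a minimal GLSL sketch of the marching loop. The sceneSDF function, the iteration cap, and the epsilon values are placeholders for illustration, not VQ's actual code:

// Minimal SDF ray march ("sphere tracing") sketch.
// sceneSDF() returns the signed distance from point p to the nearest surface.
float sceneSDF(vec3 p) {
    return length(p) - 1.0;                  // placeholder scene: a unit sphere at the origin
}

// Returns the distance along the ray to the first hit, or -1.0 on a miss.
float rayMarch(vec3 rayOrigin, vec3 rayDir) {
    float t = 0.0;
    for (int i = 0; i < 128; i++) {          // hard cap on iterations
        float d = sceneSDF(rayOrigin + rayDir * t);  // distance to nearest surface
        if (d < 0.001) return t;             // close enough, call it a hit
        t += d;                              // safe to advance by exactly d
        if (t > 1000.0) break;               // ray left the scene
    }
    return -1.0;
}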

[Figure: SDF ray marching illustration (image credit: http://www.polygonpi.com/)]
This is very fast when the ray actually hits the object, but considerably slower if the ray "just misses" the object, as shown above. The interesting thing about SDF marching is that it works hand in hand with boolean geometry (constructive solid modeling). Inigo Quilez has a great primer on distance functions and operations located here. You can also find many great live examples of SDF ray marching at shadertoy.com.
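For reference, the standard distance-field boolean operators described in that primer are one-liners. A quick GLSL sketch:

// Boolean (CSG) operations on signed distances; d1 and d2 are the distances
// to two shapes evaluated at the same sample point.
float opUnion(float d1, float d2)        { return min(d1, d2); }
float opIntersection(float d1, float d2) { return max(d1, d2); }
float opSubtraction(float d1, float d2)  { return max(d1, -d2); }   // carve shape 2 out of shape 1

// Smooth union: blends the two surfaces together over a radius k.
float opSmoothUnion(float d1, float d2, float k) {
    float h = clamp(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0);
    return mix(d2, d1, h) - k * h * (1.0 - h);
}

Because each operator just combines scalar distances, arbitrarily complex solids stay cheap to evaluate at any sample point.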

The problem with naive SDF marching is that every ray at every pixel considers every object in the scene. This becomes insanely expensive with many objects. For example, imagine I have 1920x1080 pixels to render, which amounts to about 2 million rays to cast, 60 times per second. Say each ray computes the distance to 1,000 objects at each step, each ray travels 100 steps on average, and each distance calculation takes 100 instructions. That works out to 2,000,000 x 100 x 1,000 x 100 x 60, or roughly 1.2 quadrillion instructions per second.

The trick I use is aggressively reducing the number of objects each ray considers, using very straightforward, naive cell hashing.
[Figure: a ray (purple) marching through a labeled grid of cells containing three objects]
Here we have a ray (purple) intersecting 3 objects. The ray gradually marches through each cell, from position B3 to N13. If a cell is empty, it does nothing. If any objects overlap the cell, it adds them to the list of objects it should compute, without storing duplicate references. Using an analytic intersection test, it determines exactly whether or not the ray hit a given object's base primitive, but this is not a final result: if an object has a bumpy surface, or has holes in it, it's possible a ray can hit the primitive yet miss the object once the surface is rendered in detail. All of these candidate objects are gathered up, and then a traditional SDF pass is used to determine the details and exact hits.

For some further clarity, cells can contain multiple references. Cell H8 contains references to both the green rectangle and the red circle, but cell M11 contains only one reference (the orange pentagon). The way you implement the cell data is up to you; I use OpenGL's samplerBuffer (which is like a texture but functions more like an array).
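To make the two-phase idea a bit more concrete, here is a rough GLSL sketch of the gather step. The buffer layout (a header texel per cell plus a flat index list, both packed into samplerBuffers) is purely illustrative and is not VQ's actual data format:

uniform samplerBuffer cellHeaders;   // one texel per cell: x = first index, y = count
uniform samplerBuffer cellObjects;   // flat list of object ids referenced by cells
uniform ivec3 gridSize;

const int MAX_HITS = 8;
int hitList[MAX_HITS];
int hitCount = 0;

// Gather the objects referenced by one grid cell into hitList,
// skipping ids that have already been recorded (no duplicates).
void gatherCell(ivec3 cell) {
    int cellIndex = (cell.z * gridSize.y + cell.y) * gridSize.x + cell.x;
    vec2 header = texelFetch(cellHeaders, cellIndex).xy;
    int first = int(header.x);
    int count = int(header.y);
    for (int i = 0; i < count; i++) {
        int objId = int(texelFetch(cellObjects, first + i).x);
        bool seen = false;
        for (int j = 0; j < hitCount; j++) {
            if (hitList[j] == objId) { seen = true; break; }
        }
        if (!seen && hitCount < MAX_HITS) {
            hitList[hitCount++] = objId;
        }
    }
}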

In order to step through cells efficiently, I recommend using a supercover line algorithm, such that each cell is only considered once as the ray travels through it (image credit: http://lifc.univ-fcomte.fr/home/~ededu/projects/bresenham/).
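The sketch below shows the cell-stepping side of this. It uses the closely related Amanatides & Woo style 3D-DDA walk rather than the exact supercover variant linked above (the difference is only in how exact corner crossings are handled), and it assumes 1-unit cells, a ray direction with no zero components, and the gatherCell routine from the previous sketch:

// Step a ray through the grid one cell at a time, so each cell is visited once.
void traverseGrid(vec3 rayOrigin, vec3 rayDir) {
    ivec3 cell    = ivec3(floor(rayOrigin));
    ivec3 stepDir = ivec3(sign(rayDir));
    vec3 tDelta   = abs(1.0 / rayDir);            // ray length needed to cross one cell, per axis
    vec3 tNext    = (vec3(cell) + max(vec3(stepDir), 0.0) - rayOrigin) / rayDir;

    for (int i = 0; i < 256; i++) {               // cap on the number of cells visited
        gatherCell(cell);
        if (hitCount >= MAX_HITS) break;          // enough candidates collected
        // Advance into whichever neighboring cell the ray reaches first.
        if (tNext.x < tNext.y && tNext.x < tNext.z) {
            cell.x += stepDir.x;  tNext.x += tDelta.x;
        } else if (tNext.y < tNext.z) {
            cell.y += stepDir.y;  tNext.y += tDelta.y;
        } else {
            cell.z += stepDir.z;  tNext.z += tDelta.z;
        }
    }
}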
You will want to toy with cell sizes to hit the sweet spot: big enough so that one cell usually fits an entire object, but small enough such that there are not too many objects within a given cell.

Because VQ uses only one type of uber geometry (rounded boxes with super-ellipsoid-like corners), it can compute exact hits for every type of primitive using only one set of instructions (from cones to cylinders to spheres to rounded boxes to you name it), eliminating the need for branching in that area. This narrows the candidate list down to just the geometry the ray actually hits.
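For reference, the plain rounded-box distance function looks like this in GLSL; the super-ellipsoid-style corner shaping that VQ layers on top is not reproduced here:

// Signed distance to a box with half-extents b and corner radius r.
// With b = vec3(0.0) this degenerates to a sphere of radius r, which hints at
// why one "uber" primitive can stand in for many different shapes.
float sdRoundBox(vec3 p, vec3 b, float r) {
    vec3 q = abs(p) - b;
    return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0) - r;
}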

But it goes further than that. How many objects is a ray likely to miss? As it turns out, you can usually get away with considering only the first 4 to 8 objects and you will rarely get errors. In the illustration above I have shown just 3 objects, but imagine if that box were filled with hundreds of objects. Now, for each ray, I only need to consider the first few objects that get hit instead of hundreds of objects per ray. As I have shown in the past, you can render millions of objects using this method without any real slowdown; it tends to run at a pretty constant speed. You can optionally improve performance further through use of the flyweight pattern or templates: if I am using an object that will be repeated many times (for example, a hallway), I need only store the specification for this object once, and then each instance of it is just a reference plus a bit of data like position and orientation.
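As a sketch of how the flyweight idea can look on the GPU side: the template geometry is stored once, and each placed instance is just a small record fetched from a buffer. The layout below (two texels per instance holding position, orientation, and a template id) is hypothetical, not VQ's actual format:

uniform samplerBuffer instanceData;    // hypothetical layout: two texels per placed instance

struct Instance {
    vec3 position;
    vec4 orientation;    // quaternion
    int  templateId;     // which shared template ("flyweight") this instance points at
};

// Fetch the lightweight per-instance record; the heavy template geometry
// is stored once and shared by every instance that references it.
Instance fetchInstance(int i) {
    vec4 a = texelFetch(instanceData, i * 2 + 0);   // xyz = position, w = template id
    vec4 b = texelFetch(instanceData, i * 2 + 1);   // xyzw = orientation quaternion
    Instance inst;
    inst.position    = a.xyz;
    inst.templateId  = int(a.w);
    inst.orientation = b;
    return inst;
}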
Not every object is traditional geometry though. The terrain is composed of smooth chunks of earth, produced by sampling a volume texture with bilinear filtering. As it turns out, vanilla SDF marching doesn't work very well when the distance has to be estimated from filtered texture data. The trick I have found is to allow the march to overstep into a solid area, and then step backwards along the ray (correcting itself), back and forth a few times until it gets close to the surface. There are probably better ways of analytically computing the distance from a bilinearly filtered sample, but this works for now.
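One way to implement that correction step, as a minimal GLSL sketch; the density function, the 0.5 threshold, and the halving step schedule are assumptions rather than VQ's exact approach:

uniform sampler3D terrainVolume;   // filtered terrain density volume (assumed)

// Placeholder density: values above 0.0 mean the point is inside solid earth.
float density(vec3 p) {
    return texture(terrainVolume, p).r - 0.5;   // 0.5 threshold is an assumption
}

// After the coarse march oversteps into the terrain at distance t, walk back and
// forth along the ray with a shrinking step until we are close to the surface.
float refineTerrainHit(vec3 rayOrigin, vec3 rayDir, float t, float stepSize) {
    for (int i = 0; i < 8; i++) {
        stepSize *= 0.5;                                      // halve the step each pass
        if (density(rayOrigin + rayDir * t) > 0.0) {
            t -= stepSize;                                    // inside: back up
        } else {
            t += stepSize;                                    // outside: move forward
        }
    }
    return t;   // t is now close to the surface crossing
}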

Boom Boom Boom, let me hear you say wayoh... #gamedev pic.twitter.com/DQYKwGaiLs

— Gavan Woolery (@gavanw) July 31, 2015

What happens when we apply the ripple effect to land instead of water? #gamedev pic.twitter.com/vTnkn0bFZc

— Gavan Woolery (@gavanw) July 20, 2015

Wizard Pool Party!!! Part Deux. #gamedev pic.twitter.com/u11bpIZqmY

— Gavan Woolery (@gavanw) July 27, 2015

First person camera with collision #gamedev pic.twitter.com/xMo6YD1teb

— Gavan Woolery (@gavanw) July 19, 2015

Maybe a bit better than the last #gamedev pic.twitter.com/ivHRD8boOl

— Gavan Woolery (@gavanw) July 15, 2015
9 Comments
Marcos
7/31/2015 02:03:01 am

Nice work! I know it's not the main idea here, but I would love to see an RPG Ultima Online revival with these kind of graphics. Could be serious. Thanks for the images and info!

Reply
BinarySplit
7/31/2015 10:52:37 am

Does your engine make it possible to combine these rendering methods in the same frame?

Also, do you intend to do animated character models or high-res texturing (e.g. text on a signpost) in SDF? It sounds like that would be a lot more difficult with SDF than sorted point clouds.

Reply
BinarySplit
7/31/2015 11:08:30 am

Where are my manners! I should have led with this:

Thanks for the update. I really enjoy watching your engine develop. I was previously skeptical about sorted point cloud splatting and had dismissed SDF rendering entirely. I'm surprised and inspired by the ways you've found to make them work.

Reply
Semi Essessi
7/31/2015 11:15:07 am

Check out whittaker's method for ray marching .. it has advantages and disadvantages compared to sphere tracing - it's faster, constant cost always, less accurate per iteration, different edge artifacts etc

Reply
Procgen mike
8/3/2015 12:37:07 am

Love watching the explosions and first person!

Any chance that for the next update we can get something more on the gameplay, story, or AI? I think most of the updates give people the (false) sense that you are working on a graphics engine, not an emergent role-playing simulation.

Reply
Andi link
8/4/2015 02:34:42 am

Looking awesome man, gotta love the effects you can do with a simple shader, looks like it's not as expensive as I would've thought. Have you considered partnering with another game company to help you out on the dev time of the game you want to make? I'd love to help out!

Reply
Peter Petrov
11/11/2016 03:18:26 pm

How can I actually compile the codebase? I am interested in trying to understand how this works and how hard it would be to make it usable in a real game.

Reply


