Been working on many things - increased view distance and improved rendering, much better AI / pathfinding, and more. I will post a "real" update on these things soon, but for now here is a video and some images:
Since its inception, the rendering technology behind Voxel Quest has undergone 3 major revisions. The first was isometric chunk rendering (every voxel computed at a pixel scale), and you can find an explanation of how that worked here. The next revision added perspective, and meshed those voxels with a point cloud. The current and final revision only renders visible surface points, and it procedurally generates these in real time. Arguably the isometric rendering looked the best of all the methods, but I also spent over a year refining it. The current revision is only 3-4 months in so give it time to mature. :)
The fact that VQ has undergone three tech revisions over two years probably seems a bit ridiculous, and maybe it is. Something like this would normally kill a game. That said, the point here is not just to make a game (plenty of people are doing that already), but to make a unique engine, and that could not happen in a vacuum. All I know is that I am finally happy with where the engine is at in terms of performance and flexibility, and I couldn't have gotten here without knowing everything I've learned since the start.
So the most common question I get, of course, is how does this stuff work? It is actually simpler than it might seem.
At a high level it is based on ray tracing, specifically a method known as Signed Distance Field ray marching (SDF ray marching). But that is only part of what is going on.
Vanilla ray marching simply steps along a ray at fixed intervals until it hits something. This is costly with a small step size, and error-prone with a large one. SDF ray marching instead computes the distance to the nearest object at each step, and advances along the ray by exactly that distance - the largest step that is guaranteed not to pass through any surface.
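As a minimal sketch of that marching loop - using a hypothetical scene containing a single sphere, not VQ's actual code - the idea looks like this:

```python
import math

def sphere_sdf(p, center, radius):
    # Signed distance from point p to a sphere: negative inside, positive outside.
    return math.dist(p, center) - radius

def march(origin, direction, max_steps=100, eps=1e-4, max_dist=100.0):
    # Step along the ray by the distance to the nearest surface each iteration.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sphere_sdf(p, (0.0, 0.0, 5.0), 1.0)
        if d < eps:
            return t   # hit: within eps of the surface
        t += d         # safe step: cannot overshoot the nearest surface
        if t > max_dist:
            break
    return None        # miss

hit_t = march((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))  # ray straight at the sphere
```

A ray fired directly at the sphere converges in a couple of steps; a ray that "just misses" takes many small steps before escaping, which is exactly the slow case described below the image.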
(Image credit: http://www.polygonpi.com/)
This is very fast when the ray actually hits the object, but considerably slower if the ray "just misses" the object as shown above. The interesting thing about SDF marching is that it works hand in hand with boolean geometry or solid modeling. Inigo Quilez has a great primer on distance functions and operations located here. You can also find many great live examples of SDF ray marching at shadertoy.com.
The problem with naive SDF marching is that every ray at every pixel considers every object in the scene. This becomes insanely expensive with many objects. For example, imagine I have 1920x1080 pixels to render, which amounts to about 2 million rays to cast, 60 times per second. Each ray computes its distance to 1,000 objects at every step. Let's say each ray travels 100 steps on average, and each distance calculation takes 100 instructions. You are looking at about 1.2 quadrillion instructions per second, if I did my math right.
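The back-of-the-envelope arithmetic above checks out directly:

```python
rays = 1920 * 1080     # one ray per pixel, ~2 million
fps = 60               # frames per second
steps = 100            # average steps per ray
objects = 1000         # objects evaluated at each step
instructions = 100     # instructions per distance evaluation

total = rays * fps * steps * objects * instructions
# 1,244,160,000,000,000 - roughly 1.2 quadrillion instructions per second
```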
The trick I use is aggressively reducing the number of objects each ray considers, using very straightforward, naive cell hashing.
Here we have a ray (purple) intersecting 3 objects. The ray marches through the grid cell by cell, from position B3 to N13. If a cell is empty, it does nothing. If any objects lie within the cell, it adds them to the list of objects it should evaluate, skipping duplicate references. It then determines analytically whether the ray hit each object's bounding geometry - but this is not a final result. If an object has a bumpy surface, or holes in its surface, it's possible for a ray to hit the bounding geometry yet miss the detailed surface. All of the gathered objects are then handed to a traditional SDF pass, which determines the exact hits and surface details.

For further clarity, cells can contain multiple references: cell H8 references both the green rectangle and the red circle, while cell M11 contains only one reference (the orange pentagon). How you implement the cell data is up to you; I use OpenGL's samplerBuffer (which is like a texture but functions more like an array).
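A minimal 2D sketch of that gathering step - hypothetical data layout, using a Python dict where the real engine uses a samplerBuffer:

```python
from collections import defaultdict

def build_grid(objects, cell_size):
    # Hash each object's bounding box into every cell it overlaps.
    # Each object is (obj_id, (min_x, min_y, max_x, max_y)).
    grid = defaultdict(list)
    for obj_id, (x0, y0, x1, y1) in objects:
        for cx in range(int(x0 // cell_size), int(x1 // cell_size) + 1):
            for cy in range(int(y0 // cell_size), int(y1 // cell_size) + 1):
                grid[(cx, cy)].append(obj_id)
    return grid

def gather(grid, cells_along_ray):
    # Walk the cells the ray passes through, collecting unique object ids
    # in first-encountered order; duplicate references are skipped.
    seen, hits = set(), []
    for cell in cells_along_ray:
        for obj_id in grid.get(cell, ()):
            if obj_id not in seen:
                seen.add(obj_id)
                hits.append(obj_id)
    return hits
```

Only the objects in this short list are handed to the expensive SDF pass; everything else in the scene is never touched by this ray.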
In order to step through cells efficiently, I recommend using a supercover line algorithm, such that each cell is only considered once as the ray travels through it (image credit: http://lifc.univ-fcomte.fr/home/~ededu/projects/bresenham/).
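A sketch of that traversal, in the style of the classic Amanatides & Woo grid walk (a close cousin of supercover; ties at exact cell corners are resolved to one side here, where a strict supercover would visit both corner cells):

```python
def supercover_cells(x0, y0, x1, y1):
    # Visit, in order, each cell a segment passes through - each cell once.
    # Coordinates are in cell units (cell size == 1).
    cx, cy = int(x0), int(y0)
    end_x, end_y = int(x1), int(y1)
    dx, dy = x1 - x0, y1 - y0
    step_x = 1 if dx > 0 else -1
    step_y = 1 if dy > 0 else -1
    # Ray parameter t at which we cross the next vertical / horizontal grid line.
    t_max_x = ((cx + (step_x > 0)) - x0) / dx if dx != 0 else float("inf")
    t_max_y = ((cy + (step_y > 0)) - y0) / dy if dy != 0 else float("inf")
    t_delta_x = abs(1.0 / dx) if dx != 0 else float("inf")
    t_delta_y = abs(1.0 / dy) if dy != 0 else float("inf")
    cells = [(cx, cy)]
    while (cx, cy) != (end_x, end_y):
        if t_max_x < t_max_y:   # next crossing is a vertical grid line
            t_max_x += t_delta_x
            cx += step_x
        else:                   # next crossing is a horizontal grid line
            t_max_y += t_delta_y
            cy += step_y
        cells.append((cx, cy))
    return cells
```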
You will want to toy with cell sizes to hit the sweet spot: big enough so that one cell usually fits an entire object, but small enough such that there are not too many objects within a given cell.
Because VQ uses only one type of uber geometry (rounded boxes with super-ellipsoid-like corners), it can compute exact hits for every type of primitive - from cones to cylinders to spheres to rounded boxes to you name it - using a single set of instructions, eliminating the need for branching in that area. This narrows the candidate list down to the geometry the ray exactly hits.
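To show the idea of one primitive covering many shapes, here is the classic rounded-box distance function (after Inigo Quilez's distance-function primer linked above). VQ's actual super-ellipsoid-like corners differ; this is the spherically-rounded variant:

```python
import math

def rounded_box_sdf(p, half_extents, radius):
    # Distance to a box with the given half-extents, its edges and corners
    # rounded by `radius` (so the full shape extends half_extents + radius).
    # Shrink radius toward 0 for a sharp box; grow half_extents toward 0
    # and the shape degenerates toward a sphere - one primitive, many shapes.
    q = [abs(pc) - bc for pc, bc in zip(p, half_extents)]
    outside = math.sqrt(sum(max(qc, 0.0) ** 2 for qc in q))
    inside = min(max(q[0], max(q[1], q[2])), 0.0)
    return outside + inside - radius
```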
But it goes further than that. How many objects is a ray likely to miss? As it turns out, you can usually get away with considering only the first 4 to 8 objects and you will rarely get errors. The illustration above shows just 3 objects, but imagine that box filled with hundreds of objects. Now, for each ray, I only need to consider the first few objects hit instead of hundreds of objects per ray. As I have shown in the past, you can render millions of objects using this method without any real slowdown - it tends to run at a fairly constant speed. You can optionally improve performance further through the flyweight pattern or templates: if an object will be repeated many times (for example, a hallway), I need only store its specification once, and each instance is then just a reference plus a bit of data like position and orientation.
Not every object is traditional geometry, though. The terrain is composed of smooth chunks of earth, produced by sampling a volume texture with bilinear filtering. As it turns out, vanilla SDF marching does not work well when distances must be derived from sampled textures. The trick I have found is to let the algorithm overstep into a solid area, then step backwards along the ray (correcting itself), back and forth a few times until it converges on the surface. There are probably better ways to analytically compute the distance from a bilinearly filtered point, but this works for now.
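The back-and-forth correction amounts to a bisection along the ray. A minimal sketch, with a hypothetical sampled-density function and a hypothetical 0.5 solid threshold:

```python
def refine_surface_hit(density, t_outside, t_inside, iterations=8):
    # After the marcher oversteps into a solid sample, walk back and forth
    # along the ray, halving the step each time, to converge on the surface.
    # `density` maps ray parameter t to a filtered density sample;
    # values >= 0.5 are treated as solid (threshold is an assumption here).
    lo, hi = t_outside, t_inside
    for _ in range(iterations):
        mid = 0.5 * (lo + hi)
        if density(mid) >= 0.5:
            hi = mid   # still inside the solid: step backwards
        else:
            lo = mid   # back outside: step forwards again
    return 0.5 * (lo + hi)
```

Eight iterations shrink the bracketing interval by a factor of 256, which is usually well below a voxel in size.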
This is not a full update, just a recent peek at some improvements and performance. In my next full update I will release a video. If you are curious to know more about the planned development timeline, see the bottom of this post.
A constant question I get is something along the lines of "what is performance like?" The truthful answer was - I did not know! It looked like it was getting at least 60 FPS. Accurately measuring FPS in OpenGL is tricky without profiling software, and I have not yet gotten to the point where I am using that (I don't want to get sucked into tweaking the heck out of things this early on). It is not as simple as putting a timer between frames - see here for further details on why getting the FPS can be tricky.
In order to perform these frame tests, I did it the old-fashioned way - render a large number of frames without any timer constraints and measure how long that takes (in this case, how long it takes to render 10,000 frames).
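The measurement itself is just a wall-clock timer around the frame loop. A sketch, with `render_frame` standing in for the real draw call:

```python
import time

def measure_fps(render_frame, frame_count=10_000):
    # Render a fixed number of frames with no frame limiter, then divide
    # the count by the elapsed wall-clock time to get an average FPS.
    start = time.perf_counter()
    for _ in range(frame_count):
        render_frame()
    elapsed = time.perf_counter() - start
    return frame_count / elapsed
```

Averaging over thousands of frames smooths out the per-frame jitter that makes a naive between-frames timer unreliable.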
Some VERY important notes :D
The new rendering methods now support any amount of overlapping geometry (previously, each piece of geometry was constrained to one instance within its own cell). There is still more work to do before I do a full writeup, as the process is still changing slightly and there are more steps to finish. I am also doing a huge amount of work just to manage dynamically placing objects in the scene, saving your creations, and a lot of stuff along those lines, which I will also talk about in the next update.
I'm not going to type a ton about the development status of the game - I've already done that in the forums if you are curious (check recent threads). But let me just provide a brief summary of recent issues:
Edit: I am soon going to do a writeup on the rendering. I wrote this post in haste, so I have not had a chance to write much yet, but in as few words as possible: the new rendering is much faster because it only calculates visible surfaces instead of volumes. It is based on cell marching and distance field marching. More info to come soon!
Since the last update:
I have been fairly busy with "real life" issues, which seems to be the norm these days, but I have managed to get quite a bit done regardless.
Below is one of the earlier tests I put together - rendering terrain based on VQ's generated map.
Added in bilinear filtering to smooth out the water (sorry for all the twitter embeds, I am lazy):
And one more demo of the smooth water:
Since the last update:
I am still working on many aspects of the AI but only a day or two per week right now - I am trying to not get stuck in any one area as there are many tasks left to cover. I think this most recent set of changes opens up a lot more modding potential (in particular, for realtime games). I think it would be pretty easy to make a platformer (like Mario N64), an artillery game (like Scorched Earth or Worms), or many other options (at least, easier than it would have been with strictly unit/turn-based movement).
I came in with the intention of doing videos weekly but as it turns out monthly makes more sense - there are some weeks where not a lot happens visually. It also saves me time. Nonetheless I advise you to follow the @voxelquest (or @gavanw) account on Twitter or my Youtube channel for smaller updates in between.
Also, I posted this a bit late for April Fool's, but it is worth reposting I think. Since I added in first person view, I obviously need the obligatory hand swinging some weapon. (In this case it is one of the weapons from Hexen):
In spite of this being a joke, I really do dig this style (as did Twitter, apparently) and I would like to perhaps have something similar ultimately.
Hello! It has been over a month since the last update. I apologize for the delay, but as I have mentioned elsewhere there really was not much to look at, as the majority of my work was doing IO in a text-based console. I have been hard at work on the AI, and now have the logical portion of it mostly complete. It is capable of making deductions, as I have described before in the forums and elsewhere. This is a full "scripting" language with a compiler, although for the time being it is parsed using rules from JSON, which makes it somewhat unwieldy - I just wanted to get a parser working as quickly as possible. It contains many features, and is both a subset and an extension of the language Prolog, albeit with different syntax and grammar.
I use quotes on "scripting" language because it is not writing AI scripts like you might see in other games. Rather it is just specifying a knowledge base with various rules and relations. You can then pose queries to this knowledge base in order to make deductions and determine whether or not things are true or false. Additionally you can do more advanced proofs and solve statements for a given set of variables, finding the variables which result in a true set of facts. If you are interested in learning more about this, I recommend learning Prolog - there is a good free book I used here for reference.
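As a toy illustration of the knowledge-base idea - names, facts, and the API here are all invented, not VQ's actual language - a handful of facts plus one derivation rule might look like:

```python
# Facts are ("relation", subject, object) triples. Names are hypothetical.
facts = {
    ("parent", "ned", "robb"),
    ("parent", "ned", "arya"),
}

def siblings(facts):
    # Rule: sibling(X, Y) :- parent(P, X), parent(P, Y), X != Y.
    derived = set()
    for (r1, p1, c1) in facts:
        for (r2, p2, c2) in facts:
            if r1 == r2 == "parent" and p1 == p2 and c1 != c2:
                derived.add(("sibling", c1, c2))
    return derived

def query(kb, pattern):
    # Match a pattern against the knowledge base; None acts as a wildcard,
    # so ("parent", "ned", None) asks "who are ned's children?"
    return [f for f in kb if all(p is None or p == v for p, v in zip(pattern, f))]
```

Real Prolog also does unification and backtracking over rules with variables; this sketch only shows the flavor of stating facts and posing queries against them.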
So why not just use an existing solution like Prolog? Well, first of all, Prolog is relatively bloated. My language fits in under 1,000 lines of code if you ignore the JSON parsing (fewer still minus comments and whitespace). It is compiled to integer symbols, so it avoids string manipulation and comparison. I only need a subset of Prolog's functionality, so it would be wasteful to include the entire thing. Moreover, Prolog is not sufficient on its own: I must carefully modify the deduction algorithms to support things like score maximization, and thus I must fully understand the code involved. That would be much harder if I tried to modify an existing Prolog implementation, and more error-prone given the larger breadth Prolog has over my subset language.
The ultimate goal of this is to build an AI that can make logical deductions and perform surprising, emergent behavior based on things that are not explicitly scripted. The term I have been throwing around, as have some other games, is a "Game of Thrones" simulator - something capable of weaving together a complex plot in which every NPC has their own set of motivations.
I also added in some billboarding to show sprites in the placeholder boxes, which makes it much easier to figure out what a given object is. Also reorganized all the items and added in placeholder NPCs and monsters. Art is by 7soul and available for purchase in most game dev asset stores.
Also, as a quick (dumb) experiment, I messed with voxel warping. Just 2 lines of code, could have some interesting uses for spell effects and such.
Happy New Year!
It has been a while since the last update, I know. As I may have mentioned prior, aside from the holidays, I have been just trying to get something interesting to show, which is hard when the majority of my work the past few weeks has been on relatively boring back-end stuff.
Here is a video - sorry I was tired when I made it!
Since the last video update, we have:
- Walking around, with collision and auto-climbing/descent of stairs/hills (the power of technology!!!)
- Camera that follows the character, orients itself in your travel direction, and more
- Saving and loading of character poses
- A rendered (fully voxel-based / procedural) character has replaced the placeholder box object.
- Ability to assign materials to each character limb, or other objects
- Material editor has been migrated from the web client. For those who have not seen it, the web client looked like this. The web client had many problems, including performance (slowing down the main client), inability to effectively manage data over a websocket, requiring a web server to run (i.e. a localhost), being generally unwieldy, and more.
- Much more advanced GUI features, data binding, and data serialization. The GUI now supports multiple features to keep everything DRY (Don't Repeat Yourself), keep data and presentation separate, and more. This is important because many times it is not a 1:1 mapping of data to UI - for example the color picker has 6+ components which map to only 3 data values.
- Several more optimizations, in particular some UI optimizations that make it load faster and make it more responsive.
- Many more small fixes and improvements.
In the coming week I am going to work on inventory management, picking up items and dropping them, and rendering additional NPCs and making them walk around.
As usual feel free to ask me anything.
This week I did a pretty huge cleanup on the code (I deleted thousands of lines while retaining the same functionality and more). I also added in collision detection, a context menu (right click menu), and some functions for spawning/removing/editing entities - mostly for the purposes of quickly testing stuff but also useful for the eventual sandbox.
Making the collision work was not exactly trivial but it is a lot easier now that voxels are passed to the CPU and meshed. It looks at all the voxels within a given cell and determines (based on a given threshold) whether or not that cell is mostly solid, air, water, etc. More granular collisions can be done in screenspace, but this is necessary for offscreen or hidden collision testing. This coming week I will be working on camera movement (following an NPC or trackballing around it), in addition to registering collision and behaving accordingly (i.e. climbing stairs automatically if you collide with a step, etc). I will also try to work on managing movement and collision when other NPCs are present.
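A minimal sketch of that per-cell classification - the material tags and the 0.5 threshold are assumptions for illustration, not the engine's actual values:

```python
def classify_cell(voxels, threshold=0.5):
    # Decide whether a cell is mostly solid, water, or air, based on which
    # material makes up at least `threshold` of its voxel samples.
    # `voxels` is a flat list of material tags for the cell.
    total = len(voxels)
    for material in ("solid", "water"):
        if voxels.count(material) / total >= threshold:
            return material
    return "air"
```

Collision queries against the coarse cell grid then stay cheap, with the finer screenspace test reserved for visible geometry.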
Also, if you enjoyed the classic Master of Magic, you might want to check out my friend Aaron's game Worlds of Magic. He is running for Indie Game of the Year (in the top 100 now), so throw him a vote if you are feeling generous:
Also, David Sahlin, who I have known for almost a decade (since I worked on Genesis) sent me his ideas for a mod for VQ, drawing inspiration from the likes of Pokemon, Spore, etc - I think it is a pretty interesting idea and I would love to see someone make a mod like this: