Happy New Year! Ok, maybe a bit late for that, but I still want to do a review of the last year, what went right, what went wrong, and so forth. I will also try to preemptively address some of the common questions I get - please read the bottom of this post before you ask!
So, since the last update, there was so much stuff that I could not even cover it all in the video, and I won't cover most of it here. But here is a high-level list:
What Went Wrong:
What Went Right:
I don't have the greatest feedback mechanisms but it seems people are generally happy and impressed with my work so far. Somehow things are all coming together in the code, in ways I would not expect. I had never written a fluid simulator, yet it turned out well. I had never written a character animation system, but I was pleased with the results. I had never written a networking client or server (beyond trivial examples), yet somehow I did it (even though it is a simple TCP deterministic architecture, it was not trivial to write). I guess if nothing else, I have gained faith that I can tackle pretty much any task, given enough time.
Other than that, I'm not really sure what to put here. :) There are plenty of ups and downs, but I remain confident about where things are headed.
The character animation system presented several new challenges. First of all, how do you efficiently ray cast against hundreds of dynamic limbs moving around on screen? Well, limbs share a common trait: they are attached to a body, and hence stay in close proximity. So you only need to compute an intersection with one AABB (or sphere, if you prefer) to know whether you are anywhere near hitting any limb in that body. Then you need not march into every limb of the body; instead, do an analytical prepass to determine roughly which shapes get hit, collect those, and march into them with a more complex algorithm. In this case, everything reduces to a line-line distance (one line being the view ray, the other being the bone segment in question).
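As a sketch of that coarse prepass, here is a ray-vs-AABB slab test in Python. The names and structure are illustrative, not from the actual VQ code, and direction components are assumed nonzero for brevity:

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray (origin, with precomputed 1/direction) hit the box?

    Only if this whole-body test passes do we bother testing individual limbs.
    Assumes all direction components are nonzero (no division-by-zero handling).
    """
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        # Ray parameters where we enter and exit this axis's slab.
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        if t1 > t2:
            t1, t2 = t2, t1
        t_near = max(t_near, t1)
        t_far = min(t_far, t2)
    # The ray hits the box only if the slab intervals overlap in front of us.
    return t_near <= t_far
```

Only bodies whose AABB passes this test get the expensive per-limb line-line distance checks.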
I initially feared that passing a bunch of bone data to the GPU would choke performance, but really it's not so bad (no different in practice from passing bone data for a vertex mesh), and you can animate a pretty large crowd. It's not going to do Lord of the Rings scale battles, but that is OK (quality over quantity).
Some of the biggest challenges came from trying to wrangle a physics system. There is a dichotomy between how the physics world wants to behave and how responsively the user wants it to react. This was compounded by running complex animation systems, with dynamically oriented limbs, on top of a custom collider for the voxel terrain data.
I'm going to add a few other technical notes soon as well, when I get a moment.
Been working on many things - increased view distance and improved rendering, much better AI / pathfinding, and more - I will post a "real" update on these things soon but for now here is a video and some images:
Since its inception, the rendering technology behind Voxel Quest has undergone 3 major revisions. The first was isometric chunk rendering (every voxel computed at a pixel scale), and you can find an explanation of how that worked here. The next revision added perspective, and meshed those voxels with a point cloud. The current and final revision only renders visible surface points, and it procedurally generates these in real time. Arguably the isometric rendering looked the best of all the methods, but I also spent over a year refining it. The current revision is only 3-4 months in so give it time to mature. :)
The fact that VQ has undergone three tech revisions over two years probably seems a bit ridiculous, and maybe it is. Something like this would normally kill a game. That said, the point here is not just to make a game (plenty of people are doing that already), but to make a unique engine, and that could not happen in a vacuum. All I know is that I am finally happy with where the engine is at in terms of performance and flexibility, and I couldn't have gotten here without knowing everything I've learned since the start.
So the most common question I get, of course, is how does this stuff work? It is actually simpler than it might seem.
At a high level it is based on ray tracing, specifically a method known as Signed Distance Field ray marching (SDF ray marching). But that is only part of what is going on.
Vanilla ray marching simply steps along a ray at fixed intervals until it hits something. This is costly with a small step size, and error-prone with a large one. SDF ray marching instead computes the distance to the nearest object and advances along the ray by exactly that distance - the largest step guaranteed not to skip past any surface.
(Image credit: http://www.polygonpi.com/)
This is very fast when the ray actually hits the object, but considerably slower if the ray "just misses" the object as shown above. The interesting thing about SDF marching is that it works hand in hand with boolean geometry or solid modeling. Inigo Quilez has a great primer on distance functions and operations located here. You can also find many great live examples of SDF ray marching at shadertoy.com.
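A minimal sphere-tracing loop looks something like this - an illustrative Python sketch against a single hypothetical sphere SDF, not VQ's shader code:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to a sphere (negative inside)."""
    return math.dist(p, center) - radius

def march(origin, direction, sdf, max_steps=128, eps=1e-4):
    """Sphere tracing: step along the ray by the scene's distance estimate."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t   # hit: distance along the ray to the surface
        t += d         # safe step: the nearest surface is at least d away
    return None        # miss (or ran out of steps grazing a surface)
```

The "just misses" slowdown shows up in the step count: rays that pass close to a surface take many tiny steps before escaping.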
The problem with naive SDF marching is that every ray at every pixel considers every object in the scene. This becomes insanely expensive with many objects. For example, imagine I have 1920x1080 pixels to render, which amounts to about 2 million rays to cast, 60 times per second. Let's say each ray travels 100 steps on average, each step computes the distance to 1000 objects, and each distance calculation takes 100 instructions. You are looking at about 1.2 quadrillion instructions per second, if I did my math right.
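The arithmetic above can be checked directly:

```python
# Back-of-the-envelope check of the naive-SDF cost estimate.
rays = 1920 * 1080     # ~2 million rays per frame
steps = 100            # average marching steps per ray
objects = 1000         # distance evaluations per step
instructions = 100     # instructions per distance evaluation
fps = 60

total = rays * steps * objects * instructions * fps
print(f"{total:.2e} instructions per second")  # ~1.24e15, i.e. ~1.2 quadrillion
```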
The trick I use is aggressively reducing the number of objects each ray considers, using very straightforward, naive cell hashing.
Here we have a ray (purple) intersecting 3 objects. The ray marches through the grid cell by cell, from position B3 to N13. If a cell is empty, it does nothing. If any objects overlap the cell, it adds them to the list of objects to compute against, skipping duplicate references. It then determines analytically whether or not it hit a given object - but this is not the final result. If an object has a bumpy surface, or holes in it, it's possible for a ray to hit the bounding shape but miss the detailed surface when it renders. All of these candidate objects are gathered up, and then a traditional SDF pass determines the details and exact hits. For further clarity, cells can contain multiple references: cell H8 contains references to both the green rectangle and the red circle, but cell M11 contains only one reference (the orange pentagon). How you implement the cell data is up to you; I use OpenGL's samplerBuffer (which is like a texture but functions more like an array).
In order to step through cells efficiently, I recommend using a supercover line algorithm, such that each cell is only considered once as the ray travels through it (image credit: http://lifc.univ-fcomte.fr/home/~ededu/projects/bresenham/).
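As a sketch, here is an Amanatides & Woo style grid traversal in Python - a close cousin of the supercover algorithm (a true supercover also visits cells the line merely touches at corners). The object-collection step is a placeholder, and direction components are assumed nonzero:

```python
def traverse_grid(origin, direction, grid_w, grid_h):
    """Visit each 2D cell along a ray exactly once, in order.

    Classic voxel-traversal DDA: track the ray parameter at which the next
    vertical/horizontal cell boundary is crossed, and always advance the axis
    whose boundary comes first.
    """
    x, y = int(origin[0]), int(origin[1])
    step_x = 1 if direction[0] > 0 else -1
    step_y = 1 if direction[1] > 0 else -1
    # Ray parameter t at the next x/y cell boundary.
    next_x = x + (1 if step_x > 0 else 0)
    next_y = y + (1 if step_y > 0 else 0)
    t_max_x = (next_x - origin[0]) / direction[0]
    t_max_y = (next_y - origin[1]) / direction[1]
    # How much t advances per full cell crossed on each axis.
    t_delta_x = abs(1.0 / direction[0])
    t_delta_y = abs(1.0 / direction[1])
    cells = []
    while 0 <= x < grid_w and 0 <= y < grid_h:
        cells.append((x, y))  # here you would collect this cell's object refs
        if t_max_x < t_max_y:
            x += step_x
            t_max_x += t_delta_x
        else:
            y += step_y
            t_max_y += t_delta_y
    return cells
```

Each cell is visited exactly once, so each cell's object list is read at most once per ray.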
You will want to toy with cell sizes to hit the sweet spot: big enough so that one cell usually fits an entire object, but small enough such that there are not too many objects within a given cell.
Because VQ uses only one type of uber geometry (rounded boxes with super-ellipsoid-like corners), it can compute exact hits for every type of primitive - from cones to cylinders to spheres to rounded boxes, you name it - using a single set of instructions, eliminating the need for branching in that area. So the prepass narrows things down to the geometry the ray exactly hits.
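For reference, the plain rounded-box distance function (from Inigo Quilez's catalog of distance functions) looks like the sketch below; VQ's actual super-ellipsoid-like primitive is presumably a generalization of it:

```python
import math

def sd_round_box(p, b, r):
    """Signed distance from point p to a box with half-extents b and corner radius r.

    Standard rounded-box SDF: distance to the core box, minus the radius.
    Negative inside, positive outside.
    """
    q = [abs(p[i]) - b[i] for i in range(3)]
    # Distance when outside the core box (only positive components contribute).
    outside = math.sqrt(sum(max(c, 0.0) ** 2 for c in q))
    # Distance when inside the core box (largest, i.e. least-negative, component).
    inside = min(max(q[0], q[1], q[2]), 0.0)
    return outside + inside - r
```

Varying the half-extents and radius morphs the same formula between box-like, capsule-like, and sphere-like shapes, which is what makes a single branch-free code path possible.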
But it goes further than that. How many objects is a ray likely to miss? As it turns out, you can usually get away with considering only the first 4 to 8 objects and you will rarely get errors. In the illustration above I have shown just 3 objects, but imagine that box filled with hundreds. Now, for each ray, I only need to consider the first couple of objects hit instead of hundreds of objects per ray. As I have shown in the past, you can render millions of objects with this method without any real slowdown - it tends to run at a pretty constant speed. You can optionally improve performance further through the flyweight pattern or templates: if I am using an object that will be repeated many times (for example, a hallway), I need only store the specification for that object once, and each instance of it is then just a reference plus a bit of data like position and orientation.
Not every object is traditional geometry, though. The terrain is composed of smooth chunks of earth, produced by sampling a volume texture with bilinear filtering. As it turns out, vanilla SDF marching does not work that well when computing distances from texture samples. The trick I have found is to allow the algorithm to overstep into a solid area, then step backwards along the ray (correcting itself), back and forth a few times, until it gets close to the surface. There are probably better ways to analytically compute the distance from a bilinearly filtered point, but this works for now.
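The overstep-and-correct idea can be sketched like this; a hypothetical 1D density function along the ray stands in for sampling the filtered volume texture:

```python
def find_surface(density, t=0.0, step=0.5, max_t=100.0, refinements=8):
    """March until we overstep into the solid, then bisect back to the surface.

    density(t) is assumed to return < 0.5 in empty space and >= 0.5 in solid
    terrain (a stand-in for a filtered volume-texture sample).
    """
    prev = t
    while t < max_t and density(t) < 0.5:
        prev, t = t, t + step          # fixed steps until we land inside the solid
    if t >= max_t:
        return None                    # ray never entered the solid
    lo, hi = prev, t                   # the surface lies somewhere in [prev, t]
    for _ in range(refinements):       # step back and forth, halving the interval
        mid = 0.5 * (lo + hi)
        if density(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Eight refinements shrink the bracketing interval by a factor of 256, so even a coarse initial step converges to well under a voxel of error.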
This is not a full update, just a recent peek at some improvements and performance. In my next full update I will release a video. If you are curious to know more about the planned development timeline, see the bottom of this post.
A constant question I get is something along the lines of "what is performance like?" The truthful answer was - I did not know! It looked like it was getting at least 60 FPS. Accurately measuring FPS can be tricky in OpenGL without profiling software, and I have not yet gotten to the point where I am using that (I don't want to get sucked into tweaking the heck out of things to profile this early on). If you think it is as simple as doing a timer between frames, that is not true. See here for further details on why getting the FPS can be tricky.
In order to perform these frame tests, I did it the old-fashioned way - render a large number of frames without any timer constraints and measure how long it takes (in this case, how long it takes to render 10,000 frames).
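In miniature, the measurement looks like the sketch below (render_frame is a placeholder; note that for real GPU timing you would also need to synchronize, e.g. with glFinish, before stopping the clock, since GL calls return before the work completes):

```python
import time

def measure_fps(render_frame, n_frames=10_000):
    """Average FPS over a fixed number of uncapped frames."""
    start = time.perf_counter()
    for _ in range(n_frames):
        render_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed
```

Averaging over thousands of frames smooths out per-frame jitter that makes a naive frame-to-frame timer misleading.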
Some VERY important notes :D
The new rendering methods now support any amount of overlapping geometry (previously, each piece of geometry was constrained to one instance within its own cell). There is still more work to do before I do a full writeup, as the process is still changing slightly and there are more steps to finish. I am also doing a huge amount of work just to manage dynamically placing objects in the scene, saving your creations, and a lot of stuff along those lines, which I will also cover in the next update.
I'm not going to type a ton about the development status of the game - I've already done that in the forums if you are curious (check recent threads). But let me just provide a brief summary of recent issues:
Edit: I am soon going to do a writeup on the rendering. I wrote this post in haste, so I have not gotten a chance to write much yet, but in as few words as possible: the new rendering is much faster because it only calculates visible surfaces instead of volumes. It is based on cell marching and distance field marching. More info to come soon!
Since the last update:
I have been fairly busy with "real life" issues, which seems to be the norm these days, but I have managed to get quite a bit done regardless.
Below is one of the earlier tests I put together - rendering terrain based on VQ's generated map.
Added in bilinear filtering to smooth out the water (sorry for all the twitter embeds, I am lazy):
And one more demo of the smooth water:
Since the last update:
I am still working on many aspects of the AI, but only a day or two per week right now - I am trying not to get stuck in any one area, as there are many tasks left to cover. I think this most recent set of changes opens up a lot more modding potential (in particular, for real-time games). I think it would be pretty easy to make a platformer (like Super Mario 64), an artillery game (like Scorched Earth or Worms), or many other options (at least, easier than it would have been with strictly unit/turn-based movement).
I came in with the intention of doing videos weekly, but as it turns out monthly makes more sense - there are some weeks where not a lot happens visually. It also saves me time. Nonetheless, I advise you to follow the @voxelquest (or @gavanw) account on Twitter or my YouTube channel for smaller updates in between.
Also, I posted this a bit late for April Fool's, but it is worth reposting I think. Since I added in first person view, I obviously need the obligatory hand swinging some weapon. (In this case it is one of the weapons from Hexen):
In spite of this being a joke, I really do dig this style (as did Twitter, apparently) and I would like to perhaps have something similar ultimately.
Hello! It has been over a month since the last update. I apologize for the delay, but as I have mentioned elsewhere, there really was not much to look at, as the majority of my work was doing IO in a text-based console. I have been hard at work on the AI, and now have the logical portion of it mostly complete. It is capable of making deductions, as I have described previously in the forums and elsewhere. It is a full "scripting" language with a compiler, although for the time being it is parsed using rules written in JSON, which makes it somewhat unwieldy - I just wanted to get a parser working as fast as possible. It contains many features, and is both a subset and an extension of the language Prolog, albeit with different syntax and grammar.
I put "scripting" in quotes because it is not for writing AI scripts like you might see in other games. Rather, it specifies a knowledge base with various rules and relations. You can then pose queries against this knowledge base to make deductions and determine whether or not things are true or false. Additionally, you can do more advanced proofs and solve statements for a given set of variables, finding the bindings that make a set of facts true. If you are interested in learning more about this, I recommend learning Prolog - there is a good free book I used here for reference.
So why not just use an existing solution like Prolog? Well, first of all, Prolog is relatively bloated. My language fits in under 1000 lines of code if you ignore the JSON parsing (fewer still minus comments and whitespace). It compiles to integer symbols, so it avoids string manipulation and comparison. I only need a subset of Prolog's functionality, so it would be wasteful to include the entire thing. Moreover, Prolog is not sufficient on its own: I must carefully modify the deduction algorithms to support things like score maximization, and thus I must fully understand the code involved. That would be much harder, and more error-prone, if I tried to modify an existing Prolog implementation, given the larger breadth Prolog has over my subset language.
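To make the knowledge-base idea concrete, here is a toy example in Python - not VQ's actual language, just a small fact set, one Prolog-style rule, and naive forward chaining:

```python
# Facts are tuples of interned symbols: (relation, subject, object).
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def derive_grandparents(facts):
    """Apply the rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z)."""
    derived = set(facts)
    for (r1, x, y1) in facts:
        for (r2, y2, z) in facts:
            if r1 == r2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

kb = derive_grandparents(facts)
# Query: is tom a grandparent of ann? Never stated directly - it was deduced.
print(("grandparent", "tom", "ann") in kb)  # True
```

A real system would use unification and backward chaining rather than brute-force joins, but the principle is the same: conclusions fall out of rules plus facts, not hand-written scripts.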
The ultimate goal of this is to build an AI that can make logical deductions and perform surprising, emergent behavior based on things that are not explicitly scripted. The term I have been throwing around, as have some other games, is a "Game of Thrones" simulator - something capable of weaving together a complex plot in which every NPC has their own set of motivations.
I also added in some billboarding to show sprites in the placeholder boxes, which makes it much easier to figure out what a given object is. Also reorganized all the items and added in placeholder NPCs and monsters. Art is by 7soul and available for purchase in most game dev asset stores.
Also, as a quick (dumb) experiment, I messed with voxel warping. Just 2 lines of code, could have some interesting uses for spell effects and such.
Happy New Year!
It has been a while since the last update, I know. As I may have mentioned prior, aside from the holidays, I have been just trying to get something interesting to show, which is hard when the majority of my work the past few weeks has been on relatively boring back-end stuff.
Here is a video - sorry I was tired when I made it!
Since the last video update, we have:
- Walking around, with collision and auto-climbing/descent of stairs/hills (the power of technology!!!)
- Camera that follows the character, orients itself in your travel direction, and more
- Saving and loading of character poses
- Rendered (fully voxel-based / procedural) character has replaced the placeholder box object.
- Ability to assign materials to each character limb, or other objects
- Material editor has been migrated from the web client. For those who have not seen it, the web client looked like this. The web client had many problems, including performance (slowing down the main client), inability to effectively manage data over a websocket, requiring a local web server to run, being generally unwieldy, and more.
- Much more advanced GUI features, data binding, and data serialization. The GUI now supports multiple features to keep everything DRY (Don't Repeat Yourself), keep data and presentation separate, and more. This is important because many times it is not a 1:1 mapping of data to UI - for example the color picker has 6+ components which map to only 3 data values.
- Several more optimizations, in particular some UI optimizations that make it load faster and make it more responsive.
- Many more small fixes and improvements.
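To illustrate the non-1:1 mapping point about the color picker: six widget values (R, G, B plus H, S, V) can all bind to just three stored values, with the HSV side derived on demand. A hypothetical sketch using Python's colorsys:

```python
import colorsys

class ColorModel:
    """Data model holding only 3 values; 6+ picker widgets bind to it."""

    def __init__(self, r=0.0, g=0.0, b=0.0):
        self.rgb = (r, g, b)  # the only stored state

    # HSV "fields" are derived views, not duplicated state - keeping it DRY.
    @property
    def hsv(self):
        return colorsys.rgb_to_hsv(*self.rgb)

    @hsv.setter
    def hsv(self, value):
        self.rgb = colorsys.hsv_to_rgb(*value)
```

Editing either the RGB sliders or the HSV sliders routes through the same three stored values, so the two widget groups can never fall out of sync.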
In the coming week I am going to work on inventory management, picking up items and dropping them, and rendering additional NPCs and making them walk around.
As usual feel free to ask me anything.