Applied Modern Game Development
Friday, 8 July 2016
Realtime Editing of C++ via Runtime Compiling and HotLoading
Once upon a time, I was introduced to a scripting language called Lua, which promised to improve my productivity by reducing my edit-iteration times in return for a small share of my CPU time.
While trying not to sound like a fanatic, this did sound appealing, and in time I discovered that the promise was true. I could indeed expose my C/C++ code to the scripting environment, move some of my code's operations into the script world, and take advantage of the fact that we can edit scripts, save them, and reload them, without ever needing to close and restart our game application.
Given my background as an Assembly language programmer, I had always known that it was possible to compile new code, import it into a living application, and execute it. Essentially, the same promises can be made for C/C++ or any other compiled language: within user permissions, there is basically always a way to introduce hot code into any environment. While this sounds 'hacky', and it is, dynamically-loaded libraries are certainly not new, and represent perhaps the most legitimate way to achieve the 'impossible dream' for C/C++ programmers: hit save, and see the changes appear in the still-running host application.
Sounds like fantasy? Not at all - runtime compilation of code is the happening thing as far as I am concerned. Basically, there are three required pieces to this puzzle.
1 - our application needs to be able to monitor a set of (source code) files for changes
2 - when changed files are detected, our application needs to be able to trigger a recompile/relink of any code modules which depend on the affected files
3 - our application should be able to detect when compilation completes, whether it was successful, and if so, be able to dynamically reload the newly-compiled code module.
There's nothing in there which is particularly difficult, however the Devil is in the Details.
I've got all the pieces working, and should soon have them integrated. I have several contributing authors to individually thank for that, in addition to my own work - thanks guys, your names have been added to the credits. I am humbled by those who choose to do the impossible, as much now as I was back when the hardware limits of the Commodore 64 were overcome via timing glitches by a rank amateur. Game the system!
Saturday, 2 July 2016
CubeMap Visualization/Preview
I decided it would be very handy if I could actually inspect various textures at runtime from within the engine, and chose to use a gui approach to displaying them.
Of course, that's fine for 2D textures, but other types, not so much.
My CubeMap textures, for example, would appear blank, when treated as 2D textures, which they are not. So, how would one go about displaying them, short of applying them to a skybox?
OpenGL does not give us easy access to the 6 individual 'face textures' of a CubeMap texture - we can do it, but it involves copying the image data into a new texture or textures, which is very slow.
A nice fast approach was presented a while back by Scali, whose solution forms the basis of my implementation.
He suggests that we create a flat geometry which happens to have 3D texture coordinates to suit a cubemap, and render it using a shader that samples a cubemap.
Since I was already using a gui widget to display 2D textures, I decided to take a 'render to texture' approach. I noted immediately that since the projection is orthographic, there is no need for 3D positions - 2D positions with 3D texcoords are what I have used to render the tile geometry to a regular 2D texture, which is then handballed to the gui system for display.
In retrospect, given that the geometry is presented in NDC Space, there's no need for a projection transform either, so there is still room for some optimizing and polish. It works well enough for the purpose though, and I'm happy, since it performs well on dynamic cubemaps - I'm now able to view both static and dynamic cubemaps in realtime :)
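Scali's flat-geometry trick boils down to a per-tile mapping from 2D coordinates to 3D cubemap directions; the fragment shader then just samples `texture(cubeMap, dir)`. Below is a sketch of that mapping, following OpenGL's cubemap face conventions (the function name and Vec3 type are my own):

```cpp
struct Vec3 { float x, y, z; };

// Map a 2D tile coordinate (u,v in [0,1]) on face 'face' (GL order:
// +X, -X, +Y, -Y, +Z, -Z) to the 3D direction used as a cubemap texcoord.
Vec3 cubeFaceDir(int face, float u, float v) {
    float s = 2.0f * u - 1.0f;   // [-1, 1] across the tile
    float t = 2.0f * v - 1.0f;
    switch (face) {
        case 0:  return { 1.0f,    -t,    -s  };  // +X
        case 1:  return {-1.0f,    -t,     s  };  // -X
        case 2:  return {    s,  1.0f,     t  };  // +Y
        case 3:  return {    s, -1.0f,    -t  };  // -Y
        case 4:  return {    s,    -t,  1.0f  };  // +Z
        default: return {   -s,    -t, -1.0f  };  // -Z
    }
}
```

Bake these directions into the texcoords of six quads laid out flat (a cross, a strip, whatever suits the widget), and the cubemap shader does the rest - no per-face texture copies needed.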
Wednesday, 29 June 2016
PVS - John Carmack's infamous potentially-visible set
So, I have been working hard on realtime game editor stuff - being able to check and edit the state of the game engine, and things like that.
One major recent component to the game editor stuff is called Constructive Solid Geometry.
Realtime CSG editing is not completely new, but it is relatively unpopular, mostly due to John Carmack having created a visual game editor based on BSP-CSG.
John and I have some things in common, but there are things we disagree on.
I believe in realtime csg editing, but I think that BSP is not the only spatial partitioning system that can support CSG operations.
In fact, I might even publish my editor at some point, just to stick it to the man. The Man you might be, JC, but your stolen tech from the 1968 paper is not even relevant now, and neither are you.
I would further extend this sentiment to a somewhat more interesting person, did you guess who?
The games I make have no god complex, that should be all the clues needed, who holds the can?
Seriously though, I do have some things to say about PVS, and I promise to do so in the next post, it's interesting stuff, real philosophers would shit their pants.
Friday, 17 June 2016
Material Instancing on the GPU
It seems like a lifetime ago that I learned how to perform 'gpu instancing' by sending to the GPU a special VertexBuffer containing an array of ModelToWorld transforms, and telling the underlying renderer (OpenGL) that this buffer's content is not per vertex, it's per mesh instance (so it applies to all vertices).
We can issue one single draw call, but draw the current mesh subset multiple times, at different positions and orientations. I noted at the time that this meant we could render all instances of a mesh with as few as ONE DRAW CALL PER SUBSET, based on the premise that a mesh is divided into per-material subsets of faces (or triangles), and that we at least need to change Materials in between draw calls.
More recently, I expounded that I believed it to be quite possible to further reduce draw calls, by supplying a second 'Per Instance VertexBuffer' containing an array of Material data, and by tagging individual Triangles with a local material index.
Today, I'm adding a 'Tumbler' 3D Widget to the Editor interface - it will display the current camera view's orientation, and also act as a tool to select from multiple fixed views (top, front, etc.)
I've created a special mesh for this widget, which has SEVEN materials, identifying the (signed) major axes plus one extra 'arbitrary' perspective view.
Note that I only have one instance of this guy, but it contains seven materials - this implies that using the render technology described thus far, I would need seven draw calls to draw this ONE instance of this multi-material mesh.
This seems like a good time to experiment with the concept that we can use 'gpu instance buffers' to send *anything* that we need per instance, not just the Model transform, but ANYTHING.
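As a sketch of what such an instance buffer might look like - the struct and its fields are my own invention, not any engine's actual format - each instance here carries its transform plus a small material block, with every field declared as an instanced attribute via glVertexAttribDivisor:

```cpp
#include <cstddef>  // offsetof

// Hypothetical per-instance record: a model transform plus material data,
// streamed together in ONE instanced vertex buffer.
struct InstanceData {
    float modelToWorld[16];  // column-major 4x4 transform
    float baseColor[4];      // material: RGBA
    float roughness;
    float metallic;
    float pad[2];            // keep the stride 16-byte aligned
};

// With OpenGL, each field becomes an instanced vertex attribute, e.g.:
//   glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, sizeof(InstanceData),
//                         (void*)offsetof(InstanceData, baseColor));
//   glVertexAttribDivisor(loc, 1);  // advance once per INSTANCE, not per vertex
//
// Per-triangle material selection (as described above) would then be a small
// integer attribute in the regular per-vertex stream, indexing into an array
// of such material blocks.
```

Nothing in the API restricts the instanced stream to transforms; the divisor just says "step this attribute once per instance", whatever the data happens to be.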
Anyone done this already, or know of any references?
Tuesday, 7 June 2016
Immediate-Mode as a Programming Paradigm?
So, I needed to implement some sort of GUI (graphical user interface) in the FireStorm (v2) engine framework, and my previous effort had been to cobble together a custom solution, which was basically a classic Retained-Mode Gui, where we create a bunch of 'widget objects' upfront, which might be visible or might not be, and often hold lots of redundant state.
I was recently re-introduced to 'Dear ImGui', an immediate-mode gui library, which does things very differently - basically, under the immediate-mode paradigm, the items we work with never 'exist as objects', unless they are actually being referenced... conceptually, if we choose not to draw something, it does not exist, and contains no state either way.
Updating the logic for gui widgets under an immediate-mode system is not the responsibility of each widget, since they don't exist, but the responsibility of the system framework... when we 'draw' each widget, we can update its logic at the same time, and depending on the result, choose to do (whatever), including actually displaying it. The gui code becomes highly 'lexically-scoped', and is a really good fit with most modern scripting languages.
In fact, I found myself able to edit my gui script with itself, within minutes of working with a well-designed immediate-mode GUI.
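To make the paradigm concrete, here's a minimal immediate-mode widget in plain C++. UiContext and Button are illustrative names, not Dear ImGui's API - the point is that no widget object exists anywhere; the 'button' is just a call that updates its logic and reports its result in one place:

```cpp
#include <string>

// Per-frame input state; in a real gui this would come from the platform layer.
struct UiContext {
    int  mouseX = 0, mouseY = 0;
    bool mouseClicked = false;
};

// Returns true when the button was clicked this frame. The caller branches
// on the result right where the button is declared - no stored widget state.
bool Button(UiContext& ui, int x, int y, int w, int h, const std::string& label) {
    bool hovered = ui.mouseX >= x && ui.mouseX < x + w &&
                   ui.mouseY >= y && ui.mouseY < y + h;
    // (a real version would also submit a draw command for 'label' here)
    (void)label;
    return hovered && ui.mouseClicked;
}
```

Usage is the whole appeal: `if (Button(ui, 10, 10, 80, 20, "Save")) saveLevel();` - declaration, logic, and response in a single lexical scope.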
Which brings me to my point.
Classical GUIs are typically hierarchical, tree-structured things, which got me thinking: if immediate-mode works so well for GUIs, and allows us to break the shackles of the traditional model of retained-mode object hierarchies, what ELSE can we use it for?
Saturday, 4 June 2016
'Truly global' variables pay off
In order to facilitate Rapid Application Development, the FireStorm game engine implements the Lua scripting engine, and also the Dear ImGui immediate-mode gui library.
One of the 'problems' when interfacing C and Lua code is that Lua likes to 'own everything', and does not easily share its data with other languages (like C, the language in which it was written). Allowing Lua to own all our application's runtime variables is the easy option, but it has a terrible price. Furthermore, Lua is not designed with hardware multithreading in mind - sharing data between one Lua state and the engine is one thing, but with multiple Lua states, letting Lua own anything is just a really bad idea. Lua is not the center of the Universe.
To be clear, I found that I had two concepts of 'global data' - the Lua global variables stored in (each) Lua State, and the C 'global data', but rather than choose one, I created a third - the C 'blackboard' whose sole purpose is to share data across all interested parties regardless of language or thread.
I chose to keep all my data on the C (engine) side, as the engine is my hub, my nexus, no matter how many threads are independently running Lua engine instances.
This turned out to be a good idea, with respect to ImGui.
You see, the immediate-mode GUI requires that we provide Pointers to Typed Data, which it can use to display variable contents, and also to allow their live editing.
But values in Lua don't have addressable pointers, and we can't create them, so Lua variables are pretty much useless for interfacing with the ImGui library.
Luckily, it's no problem to get raw pointers to the data held by the engine's 'truly global' shared data.
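A minimal sketch of such a blackboard, assuming we're happy to trade type safety for simplicity (the class is my own illustration; a real version would store type tags alongside the pointers, and guard access across threads):

```cpp
#include <map>
#include <string>

// 'Truly global' shared data: the engine owns the storage and hands out
// raw typed pointers that Lua bindings and the gui can both consume.
class Blackboard {
    std::map<std::string, void*> entries_;
public:
    template <typename T>
    void publish(const std::string& name, T* value) {
        entries_[name] = value;
    }
    // Caller must ask for the same T the entry was published with.
    template <typename T>
    T* get(const std::string& name) {
        auto it = entries_.find(name);
        return it == entries_.end() ? nullptr : static_cast<T*>(it->second);
    }
};
```

With this in place, a Lua binding can hand `bb.get<float>("health")` straight to something like ImGui::SliderFloat, which then edits the engine's variable in place - no copying through the Lua state at all.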
As a result, I'm able to smash out quite complex Application GUIs with a fairly small amount of Lua Scripting. To illustrate how handy that is: I can use the GUI to edit its own script, and reload the edited script into the Engine, without blinking an eye ;)
Friday, 27 May 2016
CSG : Constructive Solid Geometry
Constructive Solid Geometry is a Modelling technique, and it's not a new one.
Basically you get to create or load 'CSG Brushes' which can be used to sculpt your game's level geometry. Further, you can apply boolean operations to sets of brushes in order to create more complex brushes - which is the basis of CSG Modelling.
A brush is nothing but a set of unique 3D Planes, each of which carries one or more Coplanar Polygons - typically, textured planar n-gons.
You can imagine a simple cube as a nice example of a Brush - it has six planes, each plane contains one polygon with four sides. We could use such a Brush to create a 'room', then shrink the brush (yes, we can manipulate brush geometry) and use it again to stamp out a doorway in one of the walls, as if you were wielding a cookie-cutter.
That's what CSG editing is like in practice, although our brushes can be much more complex and interesting... importantly, it's possible to construct a CSG brush from an existing mesh of arbitrary complexity... we can basically load brushes from existing meshes, or face-selections thereof.
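The brush-as-set-of-planes idea can be sketched in a few lines. This assumes a convex, closed brush (which, as it happens, is not guaranteed in general - more on that below the fold); the names are my own:

```cpp
#include <vector>

// A plane in the form n.p + d = 0, with the normal pointing outward.
struct Plane { float nx, ny, nz, d; };

// A convex brush is just its bounding planes; a point is inside when it
// lies behind (or on) every one of them.
bool pointInBrush(const std::vector<Plane>& brush, float x, float y, float z) {
    for (const Plane& p : brush)
        if (p.nx * x + p.ny * y + p.nz * z + p.d > 0.0f) return false;
    return true;
}

// Unit cube centred on the origin: six outward-facing planes.
std::vector<Plane> unitCube() {
    return { { 1, 0, 0, -0.5f}, {-1, 0, 0, -0.5f},
             { 0, 1, 0, -0.5f}, { 0,-1, 0, -0.5f},
             { 0, 0, 1, -0.5f}, { 0, 0,-1, -0.5f} };
}
```

The boolean operations build on this sort of classification: subtracting one brush from another means clipping its polygons against the other brush's planes.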
Less obviously, CSG can be used to create a BSP Tree (and by extension, a Sector-and-Portal Graph), rather than generating the BSP Tree from a large mesh (or set of mesh instances), and then extracting sectors and portals from the triangle soup.
It is noteworthy that CSG Brushes share some properties with BSP Leaf Nodes.
Both are composed of Planes and their Planar Polygons. One major difference is that there is no guarantee that the set of planes comprising a Brush forms a closed space, while BSP leaf nodes have an iron-clad guarantee that the surrounding planes form a watertight seal - a truly closed space.
They are, nonetheless, more similar than they are different, which is what got my attention.
We can in fact, generate a BSP Tree from a Brush, and if we wish, we can define CSG boolean operations as operations involving BSP Trees - however, this is not the only possible approach.
Now I have to be honest, my only interest in BSP Trees is their ability to spew forth a set of connected gamespaces, as well as information about the connections between the spaces - that is to say, we can extract a sector-and-portal graph from a conventional solid-leaf bsp tree.
Having said that, the cost of generating BSP Trees for entire levels grows steeply with the complexity of the input triangle soup. CSG Brushes, on the other hand, offer a rapid way to produce large and complex BSP Trees from much smaller and simpler subtrees, as well as a way to re-use those subtrees.
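Whichever structure ends up hosting the CSG operations, the primitive underneath is the same: classifying polygons against planes. A minimal sketch of that primitive (types and names are my own):

```cpp
#include <vector>

struct P3 { float x, y, z; };
struct SplitPlane { float nx, ny, nz, d; };  // n.p + d = 0

enum class Side { Front, Back, Straddle, Coplanar };

// Classify a polygon against a plane - the workhorse of both BSP tree
// construction and brush clipping. 'eps' absorbs floating-point noise.
Side classify(const SplitPlane& pl, const std::vector<P3>& poly,
              float eps = 1e-5f) {
    int front = 0, back = 0;
    for (const P3& v : poly) {
        float dist = pl.nx * v.x + pl.ny * v.y + pl.nz * v.z + pl.d;
        if (dist > eps)       ++front;
        else if (dist < -eps) ++back;
    }
    if (front && back) return Side::Straddle;
    if (front)         return Side::Front;
    if (back)          return Side::Back;
    return Side::Coplanar;
}
```

A straddling polygon is what forces a split during tree construction; keeping brushes small and simple keeps the number of straddles (and hence the cost) down, which is exactly the advantage claimed above.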
Most modern game engines have deprecated CSG within their respective toolsets, although there are notable exceptions. The reasons given generally mention that CSG has been more or less replaced by static meshes (with respect to MODELLING), which is a fair statement; that CSG Brushes are notoriously difficult and error-prone to implement for models of arbitrary complexity (also a fair call); and that there are better tools for modelling than an engine's CSG editing tools can provide - this last point is completely irrelevant, and really seems like a cop-out.
In my opinion, what is not being talked about is how modern game engines deal with static meshes with respect to spatial partitioning. Octrees and bounding-volume hierarchies are still the order of the day, while the best features of BSP trees (the properties of their leaves) remain valid. Even if we discard the tree, we still want some kind of sparse spatial mapping, and nothing I can think of does a better job of mapping mostly-empty space than a sector-and-portal graph.
I'm looking forward to implementing realtime in-game CSG editing tools, and I admit that I will be looking at the current crop of modelling tools for inspiration.