Friday 8 July 2016

Realtime Editing of C++ via Runtime Compiling and HotLoading


Once upon a time, I was introduced to a scripting language called Lua, which promised to improve my productivity by reducing my edit-iteration times in return for a small share of my cpu time.
While trying not to sound like a fanatic, the idea did sound appealing, and in time I discovered that the promise was true - I could indeed expose my C/C++ code to the scripting environment, move some part of my code into the script world, and take advantage of the fact that scripts can be edited, saved, and reloaded without ever needing to close and restart the game application.
Given my background as an Assembly language programmer, I had always known that it was possible to compile new code, import it into a living application, and execute it. Essentially the same promise can be made for C/C++ or any other compiled language: within user permissions, there is basically always a way to introduce hot code into an environment. While this sounds 'hacky' (and it is), dynamically-loaded libraries are certainly not new, and represent perhaps the most legitimate way to achieve the 'impossible dream' for C/C++ programmers: hit save, and see the changes appear in the still-running host application.

Sounds like fantasy? Not at all - runtime compilation of code is very much a happening thing as far as I am concerned. There are basically three required pieces to this puzzle:
1 - our application needs to be able to monitor a set of (sourcecode) files for changes
2 - when changed files are detected, our application needs to be able to trigger a recompile/relink of any code modules which depend on the affected files
3 - our application should be able to detect when compilation completes, whether it was successful, and if so, be able to dynamically reload the newly-compiled code module.
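
To make the third piece a little more concrete, here's a minimal sketch of a timestamp-based module reload on POSIX systems. It assumes the recompile step (piece 2) has already produced an updated shared library, and the library path and the "ModuleUpdate" entry point are placeholders of my own, not names from any real project:

    // Sketch of step 3: reload a freshly-built shared library whenever its timestamp changes.
    #include <dlfcn.h>
    #include <sys/stat.h>
    #include <ctime>
    #include <cstdio>

    typedef void (*ModuleUpdateFn)(void);

    static void*          gHandle   = nullptr;
    static ModuleUpdateFn gUpdate   = nullptr;
    static time_t         gLastTime = 0;

    // Call once per frame, e.g. PollHotReload("./module.so");
    void PollHotReload(const char* path)
    {
        struct stat st;
        if (stat(path, &st) != 0 || st.st_mtime == gLastTime)
            return;                               // missing or unchanged - nothing to do

        if (gHandle) dlclose(gHandle);            // drop the old code first
        gHandle = dlopen(path, RTLD_NOW);         // pull in the newly-compiled library
        if (!gHandle) { printf("reload failed: %s\n", dlerror()); return; }

        gUpdate   = (ModuleUpdateFn)dlsym(gHandle, "ModuleUpdate");
        gLastTime = st.st_mtime;
        printf("module reloaded\n");
    }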

There's nothing in there which is particularly difficult; however, the Devil is in the Details.
I've got all the pieces working, and should soon have them integrated - and I have several contributing authors to thank individually for that, in addition to my own work. Thanks guys, your names have been added to the credits. I am humbled by those who choose to do the impossible things, as much now as I was back when the hardware limits of the Commodore 64 were overcome via timing glitches by a rank amateur. Game the system!

Saturday 2 July 2016

CubeMap Visualization/Preview


I decided it would be very handy if I could actually inspect various textures at runtime from within the engine, and chose to use a gui approach to displaying them.
Of course, that's fine for 2D textures, but other types, not so much.
My CubeMap textures, for example, would appear blank when treated as 2D textures, which they are not. So how would one go about displaying them, short of applying them to a skybox?
OpenGL does not give us easy access to the six individual 'face textures' of a CubeMap texture - we can get at them, but it involves copying the image data into a new texture or textures, which is very slow.


A nice fast approach was presented a while back by Scali; my implementation follows his solution, and he's credited accordingly.
He suggests that we create a flat geometry which happens to have 3D texture coordinates to suit a cubemap, and render it using a shader that samples a cubemap.
Since I was already using a gui widget to display 2D textures, I decided to take a 'render to texture' approach, and noted immediately that, since the projection is orthographic, there is no need for 3D positions - 2D positions with 3D texcoords are what I have used to render the tile geometry to a regular 2D texture, which is then handballed to the gui system for display.
In retrospect, given that the geometry is presented in NDC space, there's no need for a projection transform either, so there is still room for some optimizing and polish. But it works well enough for the purpose, and it performs well on dynamic cubemaps - I'm now able to view both static and dynamic cubemaps in realtime :)
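
For illustration, here's roughly what that vertex layout and shader pair look like - the names, attribute locations and GLSL version are my own choices, not the engine's actual code:

    // 2D NDC positions with 3D texture coordinates, sampled from a samplerCube.
    struct CubePreviewVertex {
        float x, y;        // 2D position, already in NDC - no projection needed
        float u, v, w;     // 3D direction used to sample the cubemap face
    };

    const char* kVS = R"(
        #version 330 core
        layout(location = 0) in vec2 aPos;
        layout(location = 1) in vec3 aTexDir;
        out vec3 vTexDir;
        void main() { vTexDir = aTexDir; gl_Position = vec4(aPos, 0.0, 1.0); }
    )";

    const char* kFS = R"(
        #version 330 core
        in vec3 vTexDir;
        out vec4 FragColor;
        uniform samplerCube uCubeMap;
        void main() { FragColor = texture(uCubeMap, vTexDir); }
    )";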

Wednesday 29 June 2016

PVS - John Carmack's infamous potentially-visible set



So, I have been working hard on realtime game editor stuff - being able to check and edit the state of the game engine, and things like that.
One major recent component to the game editor stuff is called Constructive Solid Geometry.
Realtime CSG editing is not completely new, but it is relatively unpopular - mostly owing to John Carmack having created a visual game editor based on BSP-CSG.

John and I have some things in common, but there are things we disagree on.
I believe in realtime csg editing, but I think that BSP is not the only spatial partitioning system that can support CSG operations.
In fact, I might even publish my editor at some point, just to stick it to the man. JC, you might be the man, but your borrowed tech from the 1968 paper is not even relevant now, and neither are you.

I would further extend this sentiment to a somewhat more interesting person, did you guess who?
The games I make have no god complex, that should be all the clues needed, who holds the can?

Seriously though, I do have some things to say about PVS, and I promise to do so in the next post - it's interesting stuff; real philosophers would shit their pants.


Friday 17 June 2016

Material Instancing on the GPU


It seems like a lifetime ago that I learned how to perform 'gpu instancing' by sending the GPU a special VertexBuffer containing an array of ModelToWorld transforms, and telling the underlying renderer (OpenGL) that this buffer's content is not per vertex, but per mesh instance (shared by all vertices of an instance).
We can issue one single draw call, yet draw the current mesh subset multiple times, at different positions and orientations. I noted at the time that this meant we could render all instances of a mesh with as few as ONE DRAW CALL PER SUBSET, based on the premise that a mesh is divided into per-material subsets of faces (or triangles), and that we at least need to change Materials in between draw calls.
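
For anyone who hasn't set this up before, here's roughly what the per-instance transform buffer looks like in plain OpenGL - GLEW is assumed as the function loader, and the attribute locations (3..6) are placeholders, not the engine's real layout:

    #include <vector>
    #include <glm/glm.hpp>
    #include <GL/glew.h>

    void SetupInstanceTransforms(GLuint vao, const std::vector<glm::mat4>& transforms)
    {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, transforms.size() * sizeof(glm::mat4),
                     transforms.data(), GL_DYNAMIC_DRAW);

        glBindVertexArray(vao);
        // a mat4 attribute is really four vec4 attributes, one per column
        for (int col = 0; col < 4; ++col)
        {
            GLuint loc = 3 + col;
            glEnableVertexAttribArray(loc);
            glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                                  (void*)(sizeof(glm::vec4) * col));
            glVertexAttribDivisor(loc, 1);   // advance per INSTANCE, not per vertex
        }
        glBindVertexArray(0);
    }

    // Then one call draws every instance of the current subset:
    // glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0, instanceCount);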

More recently, I suggested that it should be quite possible to further reduce draw calls by supplying a second 'Per Instance VertexBuffer' containing an array of Material data, and by tagging individual Triangles with a local material index.

Today, I'm adding a 'Tumbler' 3D Widget to the Editor interface - it will display the current camera view's orientation, and also act as a tool to select from multiple fixed views (top, front, etc.)
I've created a special mesh for this widget, which has SEVEN materials, identifying the (signed) major axes plus one extra 'arbitrary' perspective view.

Note that I only have one instance of this guy, but it contains seven materials - this implies that using the render technology described thus far, I would need seven draw calls to draw this ONE instance of this multi-material mesh.

This seems like a good time to experiment with the concept that we can use 'gpu instance buffers' to send *anything* that we need per instance, not just the Model transform, but ANYTHING.

Anyone done this already, or know of any references?

Tuesday 7 June 2016

Immediate-Mode as a Programming Paradigm?


So, I needed to implement some sort of GUI (graphical user interface) in the FireStorm (v2) engine framework. My previous effort had been to cobble together a custom solution - basically a classic Retained-Mode GUI, where we create a bunch of 'widget objects' up front, which might be visible or might not be, and which often hold lots of redundant state.

I was recently re-introduced to 'Dear ImGui', an immediate-mode gui library, which does things very differently - basically, under the immediate-mode paradigm, the items we work with never 'exist as objects' unless they are actually being referenced... conceptually, if we choose not to draw something, it does not exist, and holds no state either way.
Updating the logic for gui widgets under an immediate-mode system is not the responsibility of each widget (since they don't exist), but of the system framework... when we 'draw' each widget, we can update its logic at the same time and, depending on the result, choose to do whatever we like, including actually displaying it. The gui code becomes highly 'lexically-scoped', and is a really good fit with most modern scripting languages.
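
To make that concrete, here's a minimal Dear ImGui snippet (window, labels and variables are just examples of mine): each widget is declared, updated and drawn in the same statement, every frame, with no retained widget objects anywhere.

    #include "imgui.h"

    void DrawDebugPanel(bool& wireframe, float& exposure)
    {
        ImGui::Begin("Debug");                       // the window 'exists' only while we draw it
        ImGui::Checkbox("Wireframe", &wireframe);    // declare + update + draw in one call
        ImGui::SliderFloat("Exposure", &exposure, 0.0f, 4.0f);
        if (ImGui::Button("Reload Script"))
        {
            // react immediately - no callbacks to register, no widget object to retain
        }
        ImGui::End();
    }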
In fact, I found myself able to edit my gui script with itself, within minutes of working with a well-designed immediate-mode GUI.

Which brings me to my point.

Classical GUIs are typically hierarchical, tree-structured things, which got me thinking: if immediate-mode works so well for GUIs, and allows us to break the shackles of the traditional retained-mode object hierarchy, what ELSE can we use it for?



Saturday 4 June 2016

'Truly global' variables pay off


In order to facilitate Rapid Application Development, the FireStorm game engine implements the Lua scripting engine, and also the Dear ImGui immediate-mode gui library.

One of the 'problems' when interfacing C and Lua code is that Lua likes to 'own everything', and does not easily share its data with other languages (like C, the language in which it was written). Allowing Lua to own all our application's runtime variables is the easy option, but it has a terrible price. Furthermore, Lua is not designed with hardware multithreading in mind - sharing data between one Lua state and the engine is one thing, but with multiple Lua states, letting Lua own anything is just a really bad idea. Lua is not the center of the Universe.

To be clear, I found that I had two concepts of 'global data' - the Lua global variables stored in (each) Lua State, and the C 'global data' - but rather than choose one, I created a third: a C 'blackboard' whose sole purpose is to share data across all interested parties, regardless of language or thread.

I chose to keep all my data on the C (engine) side, as the engine is my hub, my nexus, no matter how many threads are independently running Lua engine instances.
This turned out to be a good idea, with respect to ImGui.

You see, the immediate-mode GUI requires that we provide Pointers to Typed Data, which it can use to display variable contents, and also to allow their live editing.
But values in Lua don't have pointers, and we can't create them, so Lua variables are pretty much useless with respect to interfacing with the ImGui library.
Luckily, it's no problem to get raw pointers to the data held by the engine's 'truly global' shared data.
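
A minimal sketch of the blackboard idea (written in C++ here purely for brevity, and with made-up names): named, typed slots that live on the engine side, so both the Lua bindings and ImGui can hold stable pointers to the same values.

    #include <map>
    #include <string>

    struct Blackboard
    {
        std::map<std::string, float> Floats;   // one map per supported type, kept simple
        std::map<std::string, bool>  Bools;

        // std::map nodes don't move, so these pointers stay valid across insertions
        float* GetFloat(const std::string& name) { return &Floats[name]; }
        bool*  GetBool (const std::string& name) { return &Bools[name];  }
    };

    // Usage with ImGui - the pointer comes from the engine, not from Lua:
    //   ImGui::SliderFloat("Gravity", gBlackboard.GetFloat("gravity"), -20.0f, 0.0f);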

As a result, I'm able to smash out quite complex Application GUIs with a fairly small amount of Lua scripting. To illustrate how handy that is: I can use the GUI to edit its own script, and reload the edited script into the Engine, without blinking an eye ;)

Friday 27 May 2016

CSG : Constructive Solid Geometry


Constructive Solid Geometry is a Modelling technique, and it's not a new one.

Basically you get to create or load 'CSG Brushes' which can be used to sculpt your game's level geometry.  Further, you can apply boolean operations to sets of brushes in order to create more complex brushes - which is the basis of CSG Modelling.
A brush is nothing but a set of unique 3D Planes, each of which carries one or more Coplanar Polygons - typically, textured planar n-gons.
You can imagine a simple cube as a nice example of a Brush - it has six planes, and each plane contains one polygon with four sides. We could use such a Brush to create a 'room', then shrink the brush (yes, we can manipulate brush geometry) and use it again to stamp out a doorway in one of the walls, just as if we were wielding a cookie-cutter.
That's what CSG editing is like in practice, although our brushes can be much more complex and interesting... importantly, it's possible to construct a CSG brush from an existing mesh of arbitrary complexity... we can basically load brushes from existing meshes, or face-selections thereof.
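
As an illustration only (these are placeholder types, not the engine's actual classes), a brush's data might look something like this - a set of unique planes, each carrying one or more coplanar textured polygons:

    #include <vector>
    #include <glm/glm.hpp>

    struct BrushPolygon
    {
        std::vector<glm::vec3> Vertices;   // coplanar, convex n-gon
        std::vector<glm::vec2> TexCoords;  // one per vertex
        int                    MaterialID;
    };

    struct BrushPlane
    {
        glm::vec4                 Plane;     // (normal.xyz, distance)
        std::vector<BrushPolygon> Polygons;  // polygons lying on this plane
    };

    struct Brush
    {
        std::vector<BrushPlane> Planes;      // unique planes; not guaranteed to enclose a volume
    };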

Less obviously, CSG can be used to create a BSP Tree (and by extension, a Sector-and-Portal Graph), rather than generating the BSP Tree from a large mesh (or set of mesh instances), and then extracting sectors and portals from the triangle soup.

It is noteworthy that CSG Brushes share some properties with BSP Leaf Nodes.
Both are composed of Planes and their Planar Polygons; one major difference is that there is no guarantee that the set of planes comprising a Brush forms a closed space, while BSP leaf nodes have an iron-clad guarantee that the surrounding planes form a watertight seal - a truly closed space.
They are, nonetheless, more similar than they are different, which is what got my attention.
We can in fact, generate a BSP Tree from a Brush, and if we wish, we can define CSG boolean operations as operations involving BSP Trees - however, this is not the only possible approach.

Now I have to be honest, my only interest in BSP Trees is their ability to spew forth a set of connected gamespaces, as well as information about the connections between the spaces - that is to say, we can extract a sector-and-portal graph from a conventional solid-leaf bsp tree.

Having said that, the cost of generating BSP Trees for entire levels grows far faster than linearly with the complexity of the input triangle soup. CSG Brushes, on the other hand, offer a rapid way to produce large and complex BSP Trees from much smaller and simpler subtrees, as well as a way to re-use those subtrees.

Most modern game engines have deprecated CSG within their respective toolsets, although there are notable exceptions. The reasons given generally state that CSG has been more or less replaced by static meshes (with respect to MODELLING), which is a fair statement; that CSG Brushes are notoriously difficult and error-prone to implement for models of arbitrary complexity (also a fair call); and that there are better modelling tools than an engine's CSG editing tools can provide - this last point is completely irrelevant, and really seems like a cop-out.

In my opinion, what is not being talked about is how modern game engines deal with static meshes with respect to spatial partitioning. Octrees and bounding-volume hierarchies are still the order of the day, while the best features of BSP trees (the properties of their leaves) remain valid. Even if we discard the tree, we still want some kind of sparse spatial mapping, and nothing I can think of does a better job of mapping mostly-empty space than a sector-and-portal graph.

I'm looking forward to implementing realtime in-game CSG editing tools, and I admit that I will be looking at the current crop of modelling tools for inspiration.


Friday 20 May 2016

Merging of Coplanar and Adjacent Polygons - Leith's Algorithm


I wanted to find the Union of a set of coplanar polygons, so initially I tried to 'weld' the polygons together based on an exhaustive search for 'shared edges'. However, in my 'simple test case' I am dealing with hundreds of simple polygons at a time, and missing a special case or two made debugging absolute hell.
Despite the algorithm showing promise, its performance was bound to degrade as the polygon count increased - far worse than linearly, roughly quadratically for an exhaustive pairwise search.
I had to find a better solution, abandoning this pathway even though I had racked up several days trying to make it work.

The (much faster, and better-scaling) alternative I am trying right now is very similar to a half-edge implementation. I feed all the edges of all coplanar polygons into a map of Directed Edges, and then search the map for a reversed copy of each input edge. If two polygons refer to the same edge, but with reversed vertex order, then that edge is NOT a boundary edge - it's an internal feature of the bounded shape, and therefore not interesting. If we fail to find a matching reversed edge for a given input edge, then that edge must be a Boundary Edge, part of the bounds of the (possibly non-convex) bounded shape. In that case, I collect the edge, and at the end of the process I have a bunch of directed edges lying on the boundary of the shape - if the shape is non-manifold, I'll be able to discover each 'island' in exactly the same way.
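
Here's a compact sketch of that idea - it assumes the coplanar polygons are given as lists of welded vertex indices, which is my assumption for illustration rather than the engine's actual representation:

    #include <set>
    #include <utility>
    #include <vector>

    typedef std::pair<int, int> DirectedEdge;   // (from vertex index, to vertex index)

    std::vector<DirectedEdge> FindBoundaryEdges(const std::vector<std::vector<int>>& polygons)
    {
        // collect every directed edge of every polygon
        std::set<DirectedEdge> edges;
        for (const auto& poly : polygons)
            for (size_t i = 0; i < poly.size(); ++i)
                edges.insert(DirectedEdge(poly[i], poly[(i + 1) % poly.size()]));

        // an edge with no reversed twin is a boundary edge
        std::vector<DirectedEdge> boundary;
        for (const auto& e : edges)
            if (edges.find(DirectedEdge(e.second, e.first)) == edges.end())
                boundary.push_back(e);

        return boundary;
    }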

SourceCode is written, time to test the theory.


Monday 16 May 2016

Automatic Discovery of Cell-And-Portal Network Connections and Portal Geometries from a Triangle Soup - ten years, and nothing better is around


A decade ago, and in another life, I wrote about Nathan Whitaker's description of an informal algorithm for the discovery of portal geometries based on a BSP tree.
At the time, Alan Baylis had just gotten hold of it, and Al wrote up a piece on the subject - a description and sourcecode were published. The sourcecode is, by my own standards, not acceptable and not fit for production; however, it is reasonably clear, and at a glance it appears to work.

I can tell you now that Alan's code is not fit for human consumption, and should be excised from the internet, although his accompanying tutorial sheds some light.

What the hell am I talking about?

Imagine that you load up a 3D game level into a game engine or an art package, using some importer plugin to get the job done (we've all been there, right?) - you managed to load in the geometry, but lost all the structure, if any was ever there; it's a big old triangle soup.

Now imagine that you wish to cut up this worldspace into connected spaces - rooms and doorways, if you will, even if the scene is not indoors. One way to cut the space up is called Binary Space Partitioning, or BSP.

If you take the time to figure out how to write your own BSP Compiler (tree generator), that can take in a triangle soup, and emit a BSP Tree that contains all geometry in its leaf nodes, then you are already ahead of most of your peers.

Now we have a BSP Tree, and its Leaf Nodes, which we know represent 'convex subspaces' in the world - they are spaces that are separated from other spaces via planes - and not just planes, but planes with a 'hole in them', a hole that has a shape, a shape that we can discover.

We can discover the exits from each BSP leaf space to each other space - the doorways between halls and rooms - and we can find their exact shapes.

Imagine now, if we could discover the shape of a doorway, we would only need to draw what's in the same room as we are, plus what we can see through doorways (or other portals).

Portal engines have enjoyed a revival since the highly successful Portal game series, however those games are based on orthogonal worlds, and are not true examples of the power of portal-based game engines. Orthogonal portals are easy to place by hand, and easy to calculate. John Carmack's DOOM games might have used BSP, but they dealt with essentially orthogonal spaces, and as a result there seems to be a general impression that BSP is not suitable for non-orthogonal scenarios.
This is simply nonsense - they are good for any convex spatial system, provided that the artists are aware of what they need to avoid (placing curved shapes in the center of open spaces is a bad idea, while a simpler shape might actually be beneficial).


Merging of Adjacent Coplanar Polygons


It's a fairly common geometric case: one or more polygons (N-GONS) reside upon a specific 3D Plane - the polygons are coplanar, and are effectively 2D - we know that they are all convex, and that they share some Edges, implying that they could be Welded into (possibly non-convex) polygons by eliminating the shared edges.



I decided early on not to try to solve this via 2D projection onto a principal axis, but rather to take advantage of the circular nature of polygonal descriptors - find the matching edges, and describe the remaining polygon in pure 3D terms.

The first problem I encountered is that my polygons have no defined winding order - they are based on 'super planes' which span the worldspace, and there's no particular side from which they should be viewed. Therefore, I needed a way to determine whether two polygons have the same winding order or the reverse. I did not care whether the order was 'clockwise' or not, which only makes sense for a particular viewpoint, but I did care whether two polygons disagreed, given a fixed viewpoint.

Several days of research later, having investigated all manner of strange solutions including winged-edge structures, I came across a simple solution for discovering the surface normal of an arbitrary planar polygon. I'll post the source here in case it's helpful to someone else.
It's incredibly stable - it works for lots of special cases, perhaps ALL special cases - and as such, I believe this to be a true Code Gem and worth sharing with the world.

Given a polygon as a circular list of vertices, this code will return the surface normal. Even if the polygon is not quite coplanar, it will return a reasonable average normal, which can be construed as implying a winding order (the signed dot product of the calculated normal with some fixed base normal is either positive or negative, indicating counterclockwise or clockwise, I believe).

At any rate, I'm certainly one step closer to my current goal of simplifying portal geometry extracted heuristically from a triangle soup!

    // Newell Brothers' Method for calculating the SurfaceNormal of an Arbitrary Polygon
    // Works on both convex and concave planar polygons, and even works if the polygon is not quite planar,
    // or the polygon has colinear edges, or other weird corner-cases - seems to work in ALL cases.
    // I believe this was published in around 1990/1991
    //
    glm::vec3 GetNormal (PORTAL* Polygon)
    {
       glm::vec3 Normal(0); // accumulate edge contributions into a zero vector

       const size_t count = Polygon->Vertex.size();
       for(size_t i = 0; i < count; i++)
       {
          const VERTEX& va = Polygon->Vertex[i];
          const VERTEX& vb = Polygon->Vertex[(i + 1) % count]; // next vertex, wrapping around

          Normal.x += (va.y - vb.y) * (va.z + vb.z);
          Normal.y += (va.z - vb.z) * (va.x + vb.x);
          Normal.z += (va.x - vb.x) * (va.y + vb.y);
       }

       return glm::normalize(Normal); // return the unit-length normal
    }
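
And as a usage note, the winding check mentioned above can then be as simple as the sign of a dot product - this hypothetical helper is mine, not part of the original source:

    // Do two coplanar polygons agree on winding order?
    bool SameWinding (PORTAL* A, PORTAL* B)
    {
       return glm::dot(GetNormal(A), GetNormal(B)) > 0.0f;
    }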

Wednesday 4 May 2016

CharacterController, PlayerCharacter and NonPlayerCharacter


I have a physical 3D game world, so Let's Get Physical.

The CharacterController class implements a 'physics character controller' - it deals with the basic issues we tend to encounter when trying to control a physics-based object within a physics simulator: things like dealing with stairs and stair-like vertical height changes, getting stuck inside other stuff, ducking, laying down and jumping, and whether your character can even do those things, given the world around it. All that sort of stuff is just dealt with for you - nice, huh.
You can attach a CharacterController to any instance of a Bone-Animated mesh - if it's a mesh, and it has a skeleton, and you created an instance from it, then you can make it into a character.
Well, almost, as the CharacterController class is brainless and useless alone.

On top of the CharacterController, I have built PlayerCharacter and NonPlayerCharacter classes.
The PlayerCharacter class ensures that its owner Entity has an InputComponent, and sinks input device events, converting them into CharacterController Actions such as Jump or Walk.

NonPlayerCharacter implements the AI Component, containing PathFinding and BehaviorTree objects, and is pretty much explained in the previous blog entry, describing how Behavior Trees and NavMesh PathFinding all comes together.
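
The rough shape of that hierarchy, with illustrative method names rather than the engine's actual interface, looks something like this:

    struct CharacterController          // physics only: steps, jumping, not getting stuck
    {
        virtual ~CharacterController() {}
        virtual void Jump() {}
        virtual void Walk(float forward, float strafe) { (void)forward; (void)strafe; }
        virtual void Update(float dt)  { (void)dt; }    // resolve against the physics world
    };

    struct PlayerCharacter : CharacterController        // sinks InputComponent events
    {
        void OnInputEvent(/* device event */) { /* translate into Jump()/Walk() calls */ }
    };

    struct NonPlayerCharacter : CharacterController     // driven by the AI Component
    {
        // owns BehaviorTree + PathFinder, feeds the resulting goals into Walk()/Jump()
    };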

It would certainly be nice to hear that someone other than myself is reading and enjoying this!


A* PathFinding on a NavMesh


The NavMesh that I've implemented is 3D - its data is sourced from an input Mesh asset.
I'm using the 'half-edge' structure to build a 'HE' representation of an input triangle soup; the resulting structure can perform rapid 'geometric-connectivity' queries, telling me all about how my geometry is connected - I can tell which 'Features' (triangles and vertices) are connected to any other 'Feature'.
This is important, because my AI Agents are using A* PathFinding, and my chosen implementation of A* requires that, given some Feature-Based Node, I can provide Connected Features which will form the basis for new Nodes during expansion of the A* pathfinding 'search-frontier'.
(Edge MidPoints are not considered as being Features in terms of NavMesh Navigation, as these MidPoints will be intersected by the theoretical Path between the Origins of any pair of Neighboring Triangles).



To be specific, each 'A* Node' is tagged with its Position in WorldSpace (just the vertex position, or in the case of triangles, the centroid), a boolean indicating whether it represents a vertex or a triangle, a Feature Index (of a vertex, or a triangle), and a Slope value (basic walkability) which holds the angle between the Feature Normal (vertex or triangle) and the World Up vector.
That, and a pair of Start and Goal Nodes, is enough information for the A* Algorithm to operate.
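
As a sketch only, the per-node data described above might look like the struct below; the field names are mine, and the last three fields are just the usual A* bookkeeping rather than anything stated in this post:

    #include <glm/glm.hpp>

    struct AStarNode
    {
        glm::vec3 Position;        // vertex position, or triangle centroid
        bool      IsTriangle;      // false => vertex feature, true => triangle feature
        int       FeatureIndex;    // index of the vertex or triangle in the NavMesh
        float     Slope;           // angle between the feature normal and world up

        float     CostFromStart;   // standard A* bookkeeping
        float     EstimatedToGoal;
        int       ParentNode;      // node we expanded from (-1 for the start node)
    };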



The implication of allowing an 'A* Node' to represent either a Vertex or a Triangle is that the search frontier is able to expand more rapidly, and in a more intelligent 'direction', generally leading to a more quickly-located and 'smoother' pathway being chosen. The PathFinder is presented with more options per node during expansion, and because those options are directly based on the complexity of the input geometry, the potential node-tree's complexity scales automatically with the geometry it is based on - complex geometries are not problematic; rather, they represent 'opportunity'.
In this image, we can see that example vertex 13 connects to precisely 7 other vertices, and coincidentally, 7 triangles - a total of 14 features are connected to feature [Vertex] #13 - that's a lot of connections, a lot of possibilities for the search frontier to expand on.



My chosen A* implementation defers both the construction of new nodes and the calculation of their Travel Cost (or Walkability), giving me an opportunity to do that on the basis of the calling AI Agent. Some AI Agents won't care about Slope - they will happily walk up walls and even across ceilings, if that's what they should do - while most AI Agents will be put off by steep slopes and will seek an 'easier' path to their given Goal. What is Walkable is determined on a per-Agent basis, not just on the basis of the NavMesh and/or any related metadata.

The Goal of an AI Agent is selected through execution of a Behavior Tree.
Each AI Agent has such a Tree, and will update it prior to searching for a Path To Goal, in order to establish a Goal (should none exist) or to allow for choosing a 'better' Goal (given a dynamic environment, and dynamic knowledge of said environment).

The execution of a Behavior Tree causes changes ONLY in the 'context object' handed to the root Behavior Node by the calling AI Agent - an object which happens to be owned by that Agent, and which represents the Agent's Memory. It's basically a Blackboard object: an arbitrary list of named Variant containers, one of which identifies the Goal for PathFinding. The Behavior Tree can therefore DECIDE on (at least) its desired Goal (at least) once per Frame.

The NavMesh contains one more trick - it allows metadata to be associated with geometric features (on triangles only, for now).
The NavMesh owns a map of named Variant containers (just like the AI Agents do) which is further keyed by Triangle ID - basically, we can write arbitrary Named and Typed Variables to any triangles we wish, and the AI PathFinder is able to 'discover' them during its A* PathFinding steps.
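
An illustrative layout for that markup (where 'Variant' stands in for whatever tagged-union container the engine actually uses):

    #include <map>
    #include <string>

    struct Variant { /* tagged union of bool / int / float / string / vec3 ... */ };

    struct NavMeshMarkup
    {
        // TriangleID -> (variable name -> value)
        std::map<int, std::map<std::string, Variant>> PerTriangle;

        const Variant* Find(int triangleID, const std::string& name) const
        {
            auto t = PerTriangle.find(triangleID);
            if (t == PerTriangle.end()) return nullptr;
            auto v = t->second.find(name);
            return (v == t->second.end()) ? nullptr : &v->second;
        }
    };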
We have an opportunity to query the 'Memory' of the caller AI Agent to determine if the Agent has 'seen this Triangle recently', and if its new, to determine what action to take (or not to take) based on the State information provided by that Triangle.
At the VERY LEAST, the AI Agent can choose such a Triangle as a new Goal: its discovery prior to the Solution Node indicates a lower Travel Cost, and since A* is a 'directed search', the AI Agent has very likely discovered a node that is very much 'en-route' to its current Goal. If it chooses to travel there first, it will very likely settle on its original Goal once more, without any 'hysteresis' - we can forget the Original Goal, switch to the En-Route Goal, and simply facilitate the decisions made by the AI, instead of trying to be clever.
But the point of marking up triangles IS to be clever - we can tell our AI that here is a good place to lay an ambush, or that it can climb through this window, or jump from this cliff ledge, or sit on this chair, or anything else that you can think of.
And since the NavMesh markup mechanic is based on the same tech as the AI blackboard mechanic, it becomes easy to 'transfer knowledge' from the NavMesh to an AI Agent, or vice-versa.
Painting arbitrary data onto Triangles sounds like a pain to code or script, and I would generally agree; however, it should be easy to create 'brushes' representing a set of variables to be 'painted' onto the world, as an in-game editor feature.
The only question I find myself asking at this point, is whether I should be marking up Edges, in addition to or instead of Triangles.




Friday 29 April 2016

Characters and Behaviors


I've implemented a sort of hybrid RagDoll-meets-BoneAnimated-GPU-Skinned-Mesh based on two Controllers that target the same Skeleton - AnimationController and RagDoll are both optional sub-components of the AnimationComponent we can glue to any Entity.

The time came to think about other kinds of Controllers - a base CharacterController (implements enough physics to stop characters from getting stuck, etc.), and derived PlayerCharacter and NonPlayerCharacter controllers.

The PlayerCharacter is actually fairly easy - it needs an InputComponent (so it can sink events from input devices like joysticks, keyboard, mouse, etc.) which drives its base CharacterController methods.

The Non-Player-Character controller is driven by Artificial Intelligence - I'm using Behavior Trees to control general behavior, and I am now working on a hybrid NavMesh-Meets-Pathfinding solver which initially takes in a Triangle Soup and supports markups.

Sunday 17 April 2016

What's New?


It's been a while since my last post, however, development is far from stagnant!
Unfortunately, I lost my internet access for a while there, during which time I kept a Development Journal, which I expect to publish in due course.

So, what's been happening with the FireStorm DOD-ECS fork?

Support for Deferred Lighting was advanced a long way - including 'omni-directional' shadows for any number of input lights.

Support for GPU-Skinned Bone-Animated Meshes was added some time ago, and since then, work has been directed toward fusing two bodies of work: skeletal animation, and ragdoll physics.
The goal is to produce a hybrid of the two techniques with respect to a given model.

The implementation of skinned meshes is sophisticated, but credit where it's due, the architecture was based loosely on Microsoft's ID3DXSkinMesh controller thingy. Basically, we have an Asset object, which is our Mesh - it may optionally reference a Skeleton object, which is the container for any Animation keyframe data as well as the Bone Hierarchy that they usually drive.
The Mesh, and its Skeleton, are an Asset - if we create an Entity (instance) from such a Mesh, the resulting EntityID will have an associated AnimationController Component, which really is what represents our Instance from FireStorm's point of view.
We can use that component to play animations, initiate animation transitions, set up complex mixtures of animations, play complex blended sequences of animations, and more.
Each instance of a Skinned Mesh is represented by its AnimationController, and multiple Skinned Mesh instances may reference the same Mesh Asset, and can safely update independently, since no Skeleton or Mesh is affected by Updating a Controller.
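
The asset/instance split described above looks roughly like this - class and method names are illustrative only, not the engine's actual types:

    #include <memory>
    #include <string>

    struct Skeleton            // shared asset: bone hierarchy + animation keyframe data
    {
        // bones, bind poses, named animation clips ...
    };

    struct Mesh                // shared asset: vertices, subsets, optional skeleton
    {
        std::shared_ptr<Skeleton> Bones;   // may be null for rigid meshes
    };

    struct AnimationController // per-instance: owns its own pose, never writes to the asset
    {
        std::shared_ptr<Mesh> Source;      // many controllers can share one Mesh asset

        void Play(const std::string& clip)                      { (void)clip;                 /* start the named clip */ }
        void CrossFade(const std::string& clip, float seconds)  { (void)clip; (void)seconds;  /* blend toward it */ }
        void Update(float dt)                                   { (void)dt;                   /* advances this instance only */ }
    };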


Here are a few images from recent development: this one shows a SpotLight interacting with a scene that is otherwise lit by a much larger (green) point light.


The same scene, but up closer, and with some other settings changed - the floor texture here is bump-mapped (probably not a great example, but hey, it's coder art).

Here's a skinned mesh with a ragdoll attached to it; the ragdoll is being driven by the animated bones (which are not shown). The first half of the skinned-mesh-to-ragdoll connection is complete.