Friday 27 May 2016

CSG : Constructive Solid Geometry


Constructive Solid Geometry is a Modelling technique, and it's not a new one.

Basically you get to create or load 'CSG Brushes' which can be used to sculpt your game's level geometry.  Further, you can apply boolean operations to sets of brushes in order to create more complex brushes - which is the basis of CSG Modelling.
A brush is nothing but a set of unique 3D Planes, each of which carries one or more Coplanar Polygons - typically, textured planar n-gons.
You can imagine a simple cube as a nice example of a Brush - it has six planes, and each plane contains one polygon with four sides. We could use such a Brush to create a 'room', then shrink the brush (yes, we can manipulate brush geometry) and use it again to stamp out a doorway in one of the walls, as if you were wielding a cookie-cutter.
That's what CSG editing is like in practice, although our brushes can be much more complex and interesting... importantly, it's possible to construct a CSG brush from an existing mesh of arbitrary complexity... we can basically load brushes from existing meshes, or face-selections thereof.
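As a concrete illustration of the brush-as-planes idea, here is a minimal sketch in C++ - every type and function name below is hypothetical, not taken from my engine - showing how a convex brush reduces to a set of face planes, and how a point-in-brush test falls out of that representation:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct Plane {
    Vec3  normal;   // unit-length plane normal
    float d;        // points on the plane satisfy dot(normal, p) == d
};

// One face of a brush: the carrier plane plus its coplanar polygon(s).
struct BrushFace {
    Plane plane;
    std::vector<std::vector<Vec3>> polygons; // each polygon is a vertex loop on the plane
};

struct Brush {
    std::vector<BrushFace> faces;
};

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// For a convex brush, a point is inside when it sits behind (or on) every face plane.
bool PointInsideBrush(const Brush& brush, const Vec3& p) {
    for (const BrushFace& face : brush.faces)
        if (Dot(face.plane.normal, p) > face.plane.d)
            return false;
    return true;
}

// Convenience: an axis-aligned cube brush of half-extent h, centred on the origin.
Brush MakeCubeBrush(float h) {
    Brush b;
    const Vec3 normals[6] = {
        { 1,0,0}, {-1,0,0}, {0, 1,0}, {0,-1,0}, {0,0, 1}, {0,0,-1}
    };
    for (const Vec3& n : normals)
        b.faces.push_back({ Plane{ n, h }, {} }); // face polygons omitted in this sketch
    return b;
}
```

A real brush would also carry the per-face polygons (and their texturing); for a convex brush, though, the planes alone are enough for inside/outside classification.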

Less obviously, CSG can be used to create a BSP Tree (and by extension, a Sector-and-Portal Graph), rather than generating the BSP Tree from a large mesh (or set of mesh instances), and then extracting sectors and portals from the triangle soup.

It is noteworthy that CSG Brushes share some properties with BSP Leaf Nodes.
Both are composed of Planes and their Planar Polygons. One major difference is that there is no guarantee that the set of planes comprising a Brush forms a closed space, while BSP leaf nodes have an iron-clad guarantee that their surrounding planes form a watertight seal - a truly closed space.
They are, nonetheless, more similar than they are different, which is what got my attention.
We can in fact, generate a BSP Tree from a Brush, and if we wish, we can define CSG boolean operations as operations involving BSP Trees - however, this is not the only possible approach.

Now I have to be honest, my only interest in BSP Trees is their ability to spew forth a set of connected gamespaces, as well as information about the connections between the spaces - that is to say, we can extract a sector-and-portal graph from a conventional solid-leaf bsp tree.

Having said that, the process of generating BSP Trees for entire levels is extremely expensive, with a cost that grows much faster than linearly with the complexity of the input triangle soup. CSG Brushes, on the other hand, offer a rapid way to produce large and complex BSP Trees from much smaller and simpler subtrees, as well as a way to re-use those subtrees.

Most modern game engines have deprecated CSG within their respective toolsets, although there are notable exceptions. The reasons given generally mention that CSG has been more or less replaced by static meshes (with respect to MODELLING), which is a fair statement; that CSG Brushes are notoriously difficult and error-prone to implement for models of arbitrary complexity (also a fair call); and that dedicated modelling packages offer better tools than an engine's CSG editor can provide - this last point is completely irrelevant, and really seems like a cop-out.

In my opinion, what is not being talked about is how modern game engines deal with static meshes with respect to spatial partitioning. Octrees and bounding-volume hierarchies are still the order of the day, while the best features of BSP trees (the properties of their leaves) remain valid. Even if we discard the tree, we still want some kind of sparse spatial mapping, and nothing I can think of does a better job of mapping mostly-empty space than a sector-and-portal graph.

I'm looking forward to implementing realtime in-game CSG editing tools, and I admit that I will be looking at the current crop of modelling tools for inspiration.


Friday 20 May 2016

Merging of Coplanar and Adjacent Polygons - Leith's Algorithm


I wanted to find the Union of a set of coplanar polygons, so initially I tried to 'weld' the polygons together based on an exhaustive search for 'shared edges'. However, in my 'simple test case' I am dealing with hundreds of simple polygons at a time, and missing a special case or two made debugging absolute hell.
Despite the algorithm showing promise, its performance was bound to degrade as the polygon count increased - and worse than linearly, given the exhaustive pairwise search over edges.
I had to find a better solution, abandoning this pathway even though I had racked up several days trying to make it work.

The (much faster, and better-scaling) alternative I am trying right now is very similar to a half-edge implementation. I feed all the edges of all coplanar polygons into a map of Directed Edges, and then search the map for the reversed case of each input edge. If two polygons refer to the same edge, but with reversed vertex order, then that edge is NOT a boundary edge - it's an internal feature of the bounded shape, and therefore not interesting. If we fail to find a matching reversed edge for a given input edge, then that edge must be a Boundary Edge - part of the bounds of the (possibly non-convex) bounded shape. In that case, I'll collect the edge, and at the end of the process I'll have the set of directed edges on the boundary of the shape. If the shape is made of several disjoint 'islands', I'll be able to discover each island exactly the same way.

SourceCode is written, time to test the theory.


Monday 16 May 2016

Automatic Discovery of Cell-And-Portal Network Connections and Portal Geometries from a Triangle Soup - ten years, and nothing better is around


A decade ago, and in another life, I wrote about Nathan Whitaker's description of an informal algorithm for the discovery of portal geometries based on a BSP tree.
At the time, Alan Baylis had just gotten hold of it, and Al wrote up a piece on the subject - a description and source code were published. The source code is, by my own standards, not acceptable and not fit for production; however, it is reasonably clear, and at a glance it appears to work.

I can tell you now that Alan's code is not fit for human consumption, and should be excised from the internet, although his accompanying tutorial sheds some light.

What the hell am I talking about?

Imagine that you load a 3D game level into a game engine or an art package, using some importer plugin to get the job done (we've all been there, right?) - you managed to load in the geometry, but lost whatever structure was ever there. It's a big old triangle soup.

Now imagine that you wish to cut up this worldspace into connected spaces - rooms and doorways, if you will, even if the scene is not indoors. One way to cut the space up is called Binary Space Partitioning, or BSP.

If you take the time to figure out how to write your own BSP Compiler (tree generator), that can take in a triangle soup, and emit a BSP Tree that contains all geometry in its leaf nodes, then you are already ahead of most of your peers.

Now we have a BSP Tree, and its Leaf Nodes, which we know represent 'convex subspaces' in the world - they are spaces that are separated from other spaces via planes - and not just planes, but planes with a 'hole in them', a hole that has a shape, a shape that we can discover.

We can discover the exits from each BSP leaf space to each other space - the doorways between halls and rooms - and we can find their exact shapes.

Imagine now, if we could discover the shape of a doorway, we would only need to draw what's in the same room as we are, plus what we can see through doorways (or other portals).

Portal engines have enjoyed a revival since the highly successful Portal game series, however those games are based on orthogonal worlds, and are not true examples of the power of portal-based game engines. Orthogonal portals are easy to place by hand, and easy to calculate. John Carmack's DOOM games might have used BSP, but they described orthogonal spaces, and as a result there seems to be a general impression that BSP is not suitable for non-orthogonal scenarios.
This is simply nonsense - BSP trees are good for any convex spatial system, provided that the artists are aware of what they need to avoid (placing curved shapes in the center of open spaces is a bad idea, while a simpler shape might actually be beneficial).


Merging of Adjacent Coplanar Polygons


It's a fairly common geometric case: one or more polygons (N-GONS) reside upon a specific 3D Plane - the polygons are coplanar, and are effectively 2D - we know that they are all convex, and that they share some Edges, implying that they could be Welded into (possibly non-convex) polygons by eliminating the shared edges.



I decided early not to try to solve this via 2D projection onto a principal axis, but rather to take advantage of the circular nature of polygonal descriptors - find the matching edges, and describe the remaining polygon in pure 3D terms.

The first problem that I encountered is that my polygons have no defined winding order - they are based on 'super planes' which span the worldspace, and there's no particular way they should be viewed. Therefore, I needed a way to define whether two polygons have the same winding order, or the reverse order - I did not care if it was 'clockwise' or not, which only makes sense for a particular viewpoint, but I did care if they were not the same ordering given a fixed viewpoint.

Several days of research later, having investigated all manner of strange solutions including winged-edge structures, I came across a simple solution for discovering the surface normal of an arbitrary planar polygon. I'll post the source here in case it's helpful to someone else.
It's incredibly stable - it works for lots of special cases, perhaps ALL special cases - and as such, I believe this to be a true Code Gem and worth sharing with the world.

Given a polygon as a circular list of vertices, this code will return the surface normal. Even if the polygon is not quite coplanar, it will return a reasonable average normal, which can be construed as implying a winding order (the sign of the dot product of the calculated normal with some fixed base normal indicates counterclockwise or clockwise, I believe).

At any rate, I'm certainly one step closer to my current goal of simplifying portal geometry extracted heuristically from a triangle soup!

    // Newell Brothers' Method for calculating the SurfaceNormal of an Arbitrary Polygon
    // Works on both convex and concave planar polygons, and even works if the polygon is not quite planar,
    // or the polygon has colinear edges, or other weird corner-cases - seems to work in ALL cases.
    // I believe this was published in around 1990/1991
    //
    glm::vec3 GetNormal (PORTAL* Polygon)
    {
       glm::vec3 Normal(0); // accumulate the (unnormalized) normal here

       for(size_t i=0; i<Polygon->Vertex.size(); i++)
       {
          VERTEX& va=Polygon->Vertex[i];
          VERTEX& vb=Polygon->Vertex[(i+1) % Polygon->Vertex.size()];

          Normal.x += (va.y - vb.y) * (va.z + vb.z);
          Normal.y += (va.z - vb.z) * (va.x + vb.x);
          Normal.z += (va.x - vb.x) * (va.y + vb.y);
       }

       // Guard against degenerate polygons - normalizing a zero-length vector yields NaNs
       if (glm::dot(Normal, Normal) == 0.0f)
          return Normal;

       return glm::normalize(Normal); // return unit-length normal
    }

Wednesday 4 May 2016

CharacterController, PlayerCharacter and NonPlayerCharacter


I have a physical 3D game world, so Let's Get Physical.

The CharacterController class implements a 'physics character controller' - it deals with some of the basic issues we tend to encounter when trying to control a physics-based object within a physics simulator: dealing with stairs and stair-like vertical height changes, getting stuck inside other stuff, ducking and laying down and jumping, and whether your character can do those things, given the world around it. All that sort of stuff is just dealt with for you. Nice, huh?
You can attach a CharacterController to any instance of a Bone-Animated mesh - if it's a mesh, and it has a skeleton, and you created an instance from it, then you can make it into a character.
Well, almost, as the CharacterController class is brainless and useless alone.

On top of the CharacterController, I have built PlayerCharacter and NonPlayerCharacter classes.
The PlayerCharacter class ensures that its owner Entity has an InputComponent, and sinks input device events, converting them into CharacterController Actions such as Jump or Walk.

NonPlayerCharacter implements the AI Component, containing PathFinding and BehaviorTree objects, and is pretty much explained in the previous blog entry, describing how Behavior Trees and NavMesh PathFinding all comes together.

It would certainly be nice to hear that someone other than myself is reading and enjoying this!


A* PathFinding on a NavMesh


The NavMesh that I've implemented is 3D - its data is sourced from an input Mesh asset.
I'm using the 'half-edge' structure to build a 'HE' representation of an input triangle soup. The resulting structure can perform rapid 'geometric-connectivity' queries, telling me all about how my geometry is connected - I can tell which 'Features' (triangles and vertices) are connected to any other 'Feature'.
This is important, because my AI Agents are using A* PathFinding, and my chosen implementation of A* requires that, given some Feature-Based Node, I can provide Connected Features which will form the basis for new Nodes during expansion of the A* pathfinding 'search-frontier'.
(Edge MidPoints are not considered as being Features in terms of NavMesh Navigation, as these MidPoints will be intersected by the theoretical Path between the Origins of any pair of Neighboring Triangles).
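A full half-edge structure answers more queries than I can show here, but the triangle-adjacency part of the connectivity query can be sketched with a plain edge map (hypothetical types - the actual implementation is a proper half-edge structure):

```cpp
#include <array>
#include <map>
#include <utility>
#include <vector>

using Tri = std::array<int, 3>; // three vertex indices

// Build an undirected edge -> adjacent-triangles map, then answer the
// connectivity query: which triangles share an edge with triangle t?
std::vector<int> NeighbourTriangles(const std::vector<Tri>& tris, int t) {
    std::map<std::pair<int, int>, std::vector<int>> edgeToTris;
    for (int i = 0; i < (int)tris.size(); ++i) {
        for (int e = 0; e < 3; ++e) {
            int a = tris[i][e], b = tris[i][(e + 1) % 3];
            if (a > b) std::swap(a, b);       // canonical (undirected) edge key
            edgeToTris[{a, b}].push_back(i);
        }
    }

    std::vector<int> result;
    for (int e = 0; e < 3; ++e) {
        int a = tris[t][e], b = tris[t][(e + 1) % 3];
        if (a > b) std::swap(a, b);
        for (int other : edgeToTris[{a, b}])
            if (other != t)
                result.push_back(other);      // a triangle bordering t across this edge
    }
    return result;
}
```

The same map, keyed by vertex instead of edge, yields the triangles-around-a-vertex query that the Feature expansion below relies on.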



To be specific, each 'A* Node' is tagged with its Position in WorldSpace (just the vertex position, or in the case of triangles, the centroid), a boolean indicating whether it represents a vertex or a triangle, a Feature Index (of a vertex, or a triangle), and a Slope value (basic walkability) which holds the angle between the Feature Normal (vertex or triangle) and the World Up vector.
That, and a pair of Start and Goal Nodes, is enough information for the A* Algorithm to operate.
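A sketch of what such a node might look like, along with the Slope calculation - the names here are illustrative, not my engine's actual types:

```cpp
#include <cmath>

// One A* node: a NavMesh Feature (vertex or triangle) plus the data A* needs.
struct NavNode {
    float px, py, pz;   // world-space position: the vertex, or the triangle centroid
    bool  isTriangle;   // false -> vertex feature, true -> triangle feature
    int   featureIndex; // index of the vertex or triangle within the NavMesh
    float slopeDegrees; // angle between the feature normal and the world up vector
};

// Slope = angle between the feature normal and world up (0,1,0), in degrees.
float SlopeDegrees(float nx, float ny, float nz) {
    float len = std::sqrt(nx * nx + ny * ny + nz * nz);
    float cosAngle = ny / len;               // dot(normal, up) / |normal|
    if (cosAngle >  1.0f) cosAngle =  1.0f;  // clamp for floating-point safety
    if (cosAngle < -1.0f) cosAngle = -1.0f;
    return std::acos(cosAngle) * 180.0f / 3.14159265358979f;
}
```

A flat floor reports a slope of zero, a vertical wall ninety degrees - which feeds directly into the per-agent walkability decision described below.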



The implication of allowing an 'A* Node' to represent either a Vertex or a Triangle is that the search frontier is able to expand more rapidly, and in a more intelligent 'direction', generally leading to a more rapidly-located and 'smoother' pathway. The PathFinder is presented with more options per node during frontier expansion, and because those options are directly based on the input geometry, the potential node-tree's complexity scales automatically with the geometry on which it is based - complex geometries are not problematic; rather, they represent 'opportunity'.
In this image, we can see that example vertex 13 connects to precisely 7 other vertices and, coincidentally, 7 triangles - a total of 14 features are connected to feature [Vertex] #13. That's a lot of connections, and a lot of possibilities for the search frontier to expand on.



My chosen A* implementation defers both the construction of new nodes and the calculation of their Travel Cost (or Walkability), giving me an opportunity to do so on the basis of the calling AI Agent. Some AI Agents won't care about Slope - they will happily walk up walls and even across ceilings, if that's what they should do - while most AI Agents will be put off by steep slopes and will seek an 'easier' path to their given Goal. What is Walkable is determined on a per-Agent basis, not just on the basis of the NavMesh and/or any related metadata.
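The deferred, per-agent costing might look something like this - a sketch with assumed names; the point is only that the NavMesh stores geometry (distance, slope) while the walkability decision lives with the calling Agent:

```cpp
// Per-agent traversal preferences (hypothetical fields for illustration).
struct AgentProfile {
    float maxSlopeDegrees; // slopes steeper than this are impassable for this agent
    float slopePenalty;    // extra cost per degree of slope, for agents that care
};

// Deferred travel cost, evaluated at query time per agent.
// Returns a negative value to mean "impassable" - the node is rejected
// during expansion of the search frontier.
float TravelCost(const AgentProfile& agent, float distance, float slopeDegrees) {
    if (slopeDegrees > agent.maxSlopeDegrees)
        return -1.0f;
    return distance + slopeDegrees * agent.slopePenalty;
}
```

A wall-crawling agent would simply carry a maxSlopeDegrees of 180, accepting every surface, while a typical walker might cap out around 45 degrees.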

The Goal of an AI Agent is selected through execution of a Behavior Tree.
Each AI Agent has such a Tree, and will update it prior to searching for a Path To Goal, in order to establish a Goal (should none exist) or to allow for choosing a 'better' Goal (given a dynamic environment, and dynamic knowledge of said environment).

The execution of a Behavior Tree causes changes ONLY in the 'context object' handed to the root Behavior Node by the calling AI Agent - which happens to be owned by that Agent, and represents the Agent's Memory. It's basically a Blackboard object: an arbitrary list of named Variant containers, one of which identifies the Goal for PathFinding. Therefore, the Behavior Tree can DECIDE on (at least) its desired Goal (at least) once per Frame.
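Such a Blackboard - a named list of Variant containers - might be sketched like this, assuming C++17's std::variant and an illustrative payload set of int/float/string (my actual Variant type differs):

```cpp
#include <map>
#include <string>
#include <variant>

// Illustrative payload set; a real Variant would carry more types.
using Value = std::variant<int, float, std::string>;

// The Agent's Memory: arbitrary named, typed variables.
struct Blackboard {
    std::map<std::string, Value> vars;

    template <typename T>
    void Set(const std::string& name, T v) { vars[name] = v; }

    template <typename T>
    T Get(const std::string& name) const { return std::get<T>(vars.at(name)); }

    bool Has(const std::string& name) const { return vars.count(name) != 0; }
};
```

Because the NavMesh's per-triangle metadata (described below) uses the same container, copying knowledge between the world and an Agent's Memory is just a map operation.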

The NavMesh contains one more trick - it allows metadata to be associated with geometric features (on triangles only, for now).
NavMesh owns a map of named Variant containers (just like the AI Agents do), which is further keyed by Triangle ID - basically, we can write arbitrary Named and Typed Variables to any triangle we wish, and the AI PathFinder is able to 'discover' them during its A* PathFinding steps.
We have an opportunity to query the 'Memory' of the calling AI Agent to determine if the Agent has 'seen this Triangle recently', and if it's new, to determine what action to take (or not take) based on the State information provided by that Triangle.
At the VERY LEAST, the AI Agent can choose this Triangle as a new Goal: its discovery prior to the Solution Node indicates a lower Travel Cost, and since A* is a 'directed search', the Agent has very likely discovered a node that is very much 'en-route' to its current Goal. If it chooses to travel there first, it will very likely decide on its original Goal once more, without any 'hysteresis' - we can forget the Original Goal, switch to the En-Route Goal, and simply facilitate the decisions made by the AI instead of trying to be clever.
But the point of marking up triangles IS to be clever - we can tell our AI that here is a good place to lay an ambush, or that it can climb through this window, or jump from this cliff ledge, or sit on this chair, or anything else that you can think of.
And since the NavMesh markup mechanic is based on the same tech as the AI blackboard mechanic, it becomes easy to 'transfer knowledge' from the NavMesh to an AI Agent, or vice-versa.
Painting arbitrary data to Triangles sounds like a pain to code or script, and I would generally agree, however it should be easy to create 'brushes' representing a set of variables to be 'painted' to the world, as an in-game editor feature.
The only question I find myself asking at this point, is whether I should be marking up Edges, in addition to or instead of Triangles.