Friday, 20 May 2016
Merging of Coplanar and Adjacent Polygons - Leith's Algorithm
I wanted to find the Union of a set of coplanar polygons. Initially, I tried to 'weld' the polygons together based on an exhaustive search for 'shared edges'; however, even in my 'simple test case' I am dealing with hundreds of simple polygons at a time, and missing a special case or two made debugging absolute hell.
Despite the algorithm showing promise, its performance was bound to degrade as the polygon count increased - and quadratically so, given the exhaustive pairwise search.
I had to find a better solution, so I abandoned this pathway even though I had racked up several days trying to make it work.
The (much faster, and better-scaling) alternative I am trying right now is very similar to a half-edge implementation. I feed all the edges of all coplanar polygons into a map of Directed Edges, and then search the map for a reversed case of each input edge. If two polygons refer to the same edge, but with reversed order of vertices, then that edge is NOT a boundary edge - it's an internal feature of the bounded shape, and therefore not interesting. If we fail to find a matching reversed edge for a given input edge, then that edge must be a Boundary Edge: part of the bounds of the (possibly non-convex) bounded shape. In that case, I collect the edge, and at the end of the process I have a set of directed edges which lie on the boundary of the shape. If the shape has multiple disjoint 'islands', I can discover each island exactly the same way.
The source code is written - time to test the theory.
Monday, 16 May 2016
Automatic Discovery of Cell-And-Portal Network Connections and Portal Geometries from a Triangle Soup - ten years, and nothing better is around
A decade ago, and in another life, I wrote about Nathan Whitaker's description of an informal algorithm for the discovery of portal geometries based on a BSP tree.
At the time, Alan Baylis had just gotten hold of it, and Al wrote up a piece on the subject; a description and source code were published. The source code is, by my own standards, not acceptable and not fit for production; however, it is reasonably clear, and at a glance it appears to work.
I can tell you now that Alan's code is not fit for human consumption, and should be excised from the internet, although his accompanying tutorial sheds some light.
What the hell am I talking about?
Imagine that you load a 3D game level into a game engine or an art package using some importer plugin (we've all been there, right?) - you managed to load the geometry, but lost whatever structure was ever there. It's a big old triangle soup.
Now imagine that you wish to cut up this worldspace into connected spaces - rooms and doorways, if you will, even if the scene is not indoors. One way to cut the space up is called Binary Space Partitioning, or BSP.
If you take the time to figure out how to write your own BSP Compiler (tree generator), that can take in a triangle soup, and emit a BSP Tree that contains all geometry in its leaf nodes, then you are already ahead of most of your peers.
Now we have a BSP Tree whose Leaf Nodes, we know, represent 'convex subspaces' in the world - spaces separated from other spaces by planes. And not just planes, but planes with a 'hole' in them: a hole that has a shape, a shape that we can discover.
We can discover the exits from each BSP leaf space to each other space - the doorways between halls and rooms - and we can find their exact shapes.
Imagine now that we could discover the shape of a doorway: we would only need to draw what's in the same room as we are, plus whatever we can see through doorways (or other portals).
Portal engines have enjoyed a revival since the highly successful Portal game series; however, those games are built from orthogonal worlds, and are not true examples of the power of portal-based game engines. Orthogonal portals are easy to place by hand, and easy to calculate. John Carmack's DOOM games may have used BSP, but they were orthogonal spaces, and as a result there seems to be a general impression that BSP is not suitable for non-orthogonal scenarios.
This is simply nonsense: BSP trees are good for any convex spatial system, provided that the artists are aware of what to avoid (placing curved shapes in the center of open spaces is a bad idea, while a simpler shape might actually be beneficial).
Merging of Adjacent Coplanar Polygons
It's a fairly common geometric case: one or more polygons (N-gons) reside on a specific 3D plane - the polygons are coplanar, and effectively 2D. We know that they are all convex, and that they share some Edges, implying that they could be Welded into (possibly non-convex) polygons by eliminating the shared edges.
I decided early not to try to solve this via 2D projection onto a principal axis, but rather to take advantage of the circular nature of polygonal descriptors - find the matching edges, and describe the remaining polygon in pure 3D terms.
The first problem I encountered is that my polygons have no defined winding order - they are based on 'super planes' which span the worldspace, and there's no particular way they should be viewed. Therefore, I needed a way to determine whether two polygons have the same winding order, or the reverse. I did not care whether the order was 'clockwise' - that only makes sense for a particular viewpoint - but I did care whether two polygons shared the same ordering given a fixed viewpoint.
Several days of research later, having investigated all manner of strange solutions including winged-edge structures, I came across a simple solution for discovering the surface normal of an arbitrary planar polygon. I'll post the source here in case it's helpful to someone else.
It's incredibly stable - it works for lots of special cases, perhaps ALL special cases - and as such, I believe this to be a true Code Gem and worth sharing with the world.
Given a polygon as a circular list of vertices, this code will return the surface normal. Even if the polygon is not quite coplanar, it will return a reasonable average normal, which can be construed as implying a winding order: the sign of the dot product of the calculated normal with some fixed base normal indicates counterclockwise or clockwise ordering, I believe.
At any rate, I'm certainly one step closer to my current goal of simplifying portal geometry extracted heuristically from a triangle soup!
// Newell's Method for calculating the surface normal of an arbitrary polygon.
// Works on both convex and concave planar polygons, and even works if the polygon is not quite planar,
// or has colinear edges, or other weird corner cases - it seems to work in ALL cases.
// I believe this was published in around 1990/1991.
//
glm::vec3 GetNormal(PORTAL* Polygon)
{
    glm::vec3 Normal(0); // accumulate the unnormalized normal here
    for (size_t i = 0; i < Polygon->Vertex.size(); i++) // size_t: unsigned short would overflow past 65535 vertices
    {
        VERTEX& va = Polygon->Vertex[i];
        VERTEX& vb = Polygon->Vertex[(i + 1) % Polygon->Vertex.size()]; // next vertex, wrapping around
        Normal.x += (va.y - vb.y) * (va.z + vb.z);
        Normal.y += (va.z - vb.z) * (va.x + vb.x);
        Normal.z += (va.x - vb.x) * (va.y + vb.y);
    }
    return glm::normalize(Normal); // return the unit-length normal
}
Wednesday, 4 May 2016
CharacterController, PlayerCharacter and NonPlayerCharacter
I have a physical 3D game world, so Let's Get Physical.
The CharacterController class implements a 'physics character controller' - it deals with the basic issues we tend to encounter when trying to control a physics-based object within a physics simulator: handling stairs and stair-like vertical height changes, getting stuck inside other objects, ducking, lying down and jumping, and whether your character can do any of that given the world around it. All that sort of stuff is just dealt with for you - nice, huh?
You can attach a CharacterController to any instance of a bone-animated mesh - if it's a mesh, it has a skeleton, and you created an instance from it, then you can make it into a character.
Well, almost, as the CharacterController class is brainless and useless alone.
On top of the CharacterController, I have built PlayerCharacter and NonPlayerCharacter classes.
The PlayerCharacter class ensures that its owner Entity has an InputComponent, and sinks input device events, converting them into CharacterController Actions such as Jump or Walk.
NonPlayerCharacter implements the AI Component, containing PathFinding and BehaviorTree objects; it is pretty much explained in the previous blog entry, which describes how Behavior Trees and NavMesh PathFinding come together.
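As a sketch of how these pieces might fit together - all names here are illustrative, not the engine's actual classes - the PlayerCharacter simply maps input-device events onto actions of the brainless base controller:

```cpp
#include <map>
#include <string>

// Actions the base controller knows how to perform (illustrative subset).
enum class Action { Walk, Jump, Duck };

struct CharacterController
{
    virtual ~CharacterController() = default;
    void Perform(Action a) { lastAction = a; } // actual physics work elided
    Action lastAction = Action::Walk;
};

// Sinks input-device events and converts them to CharacterController Actions.
struct PlayerCharacter : CharacterController
{
    std::map<std::string, Action> bindings { {"Space", Action::Jump},
                                             {"Ctrl",  Action::Duck} };
    void OnInputEvent(const std::string& key)
    {
        auto it = bindings.find(key);
        if (it != bindings.end()) Perform(it->second); // unbound keys ignored
    }
};
```

A NonPlayerCharacter would derive from the same base, but call Perform from its AI update rather than from input events.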
It would certainly be nice to hear that someone other than myself is reading and enjoying this!
A* PathFinding on a NavMesh
The NavMesh that I've implemented is 3D - its data is sourced from an input Mesh asset.
I'm using the 'half-edge' structure to build an 'HE' representation of an input triangle soup. The resulting structure can perform rapid 'geometric-connectivity' queries, telling me all about how my geometry is connected - I can tell which 'Features' (triangles and vertices) are connected to any other 'Feature'.
This is important, because my AI Agents are using A* PathFinding, and my chosen implementation of A* requires that, given some Feature-Based Node, I can provide Connected Features which will form the basis for new Nodes during expansion of the A* pathfinding 'search-frontier'.
(Edge MidPoints are not considered as being Features in terms of NavMesh Navigation, as these MidPoints will be intersected by the theoretical Path between the Origins of any pair of Neighboring Triangles).
To be specific, each 'A* Node' is tagged with its Position in WorldSpace (just the vertex position, or in the case of triangles, the centroid), a boolean indicating whether it represents a vertex or a triangle, a Feature Index (of a vertex, or a triangle), and a Slope value (basic walkability) which holds the angle between the Feature Normal (vertex or triangle) and the World Up vector.
That, and a pair of Start and Goal Nodes, is enough information for the A* Algorithm to operate.
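A node carrying exactly those fields might look like this - an illustrative sketch, not the engine's actual types:

```cpp
#include <cmath>

struct Vec3f { float x, y, z; };

// One search node per NavMesh feature, tagged as described above.
struct AStarNode
{
    Vec3f position;     // vertex position, or triangle centroid
    bool  isTriangle;   // false: vertex feature, true: triangle feature
    int   featureIndex; // index of the vertex or triangle in the mesh
    float slope;        // angle (radians) between feature normal and world up
    float gCost;        // travel cost accumulated from the start node
    float hCost;        // heuristic estimate of cost to the goal
};

// Basic walkability from a feature normal: the angle between the normal
// and the world up vector (0 = flat floor, pi/2 = vertical wall).
// Assumes n is unit length and world up is +Y.
float SlopeFromNormal(const Vec3f& n)
{
    return std::acos(n.y);
}
```

Vertex nodes take the vertex normal; triangle nodes take the face normal, with the centroid as their position.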
The implication of allowing an 'A* Node' to represent either a Vertex or a Triangle is that the search frontier can expand more rapidly, and in a more intelligent 'direction', generally leading to a more rapidly-located and 'smoother' pathway. The PathFinder is presented with more options per node during pathway expansion, and because those options are directly based on the complexity of the input geometry, the potential node-tree's complexity scales automatically with the geometry on which it is based - complex geometries are not problematic; rather, they represent 'opportunity'.
In this image, we can see that example vertex 13 connects to precisely 7 other vertices, and coincidentally, 7 triangles - a total of 14 features are connected to feature [Vertex] #13. That's a lot of connections, a lot of possibilities for the search frontier to expand on.
My chosen A* implementation defers both the construction of new nodes and the calculation of their Travel Cost (or Walkability), giving me an opportunity to do both on the basis of the caller AI Agent. Some AI Agents won't care about Slope - they will happily walk up walls and even across ceilings, if that's what they should do - while most AI Agents will be put off by steep slopes and will seek an 'easier' path to their given Goal. What is Walkable is determined on a per-Agent basis, not just by the NavMesh and/or any related metadata.
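The deferred, per-agent costing could be sketched as a cost functor that the pathfinder invokes when it first expands a node - the names and signatures here are assumptions of the sketch, not the engine's API:

```cpp
#include <cmath>
#include <functional>
#include <limits>

// Per-agent travel costing: the pathfinder calls this lazily, per node.
using CostFn = std::function<float(float slope, float distance)>;

// A cautious walker: refuses steep slopes entirely, pays extra for incline.
inline CostFn MakeWalkerCost(float maxSlope)
{
    return [maxSlope](float slope, float distance) {
        if (slope > maxSlope)
            return std::numeric_limits<float>::infinity(); // not walkable at all
        return distance * (1.0f + slope); // steeper means pricier
    };
}

// A spider-like agent: slope is simply irrelevant, walls and ceilings included.
inline CostFn MakeSpiderCost()
{
    return [](float /*slope*/, float distance) { return distance; };
}
```

Returning infinity is one simple way to say 'not walkable' without a separate flag; the frontier just never expands through such a node.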
The Goal of an AI Agent is selected through execution of a Behavior Tree.
Each AI Agent has such a Tree, and will update it prior to searching for a Path To Goal, in order to establish a Goal (should none exist) or to allow for choosing a 'better' Goal (given a dynamic environment, and dynamic knowledge of said environment).
The execution of a Behavior Tree causes changes ONLY in the 'context object' handed to the root Behavior Node by the caller AI Agent - an object which happens to be owned by that Agent, and represents the Agent's Memory. It's basically a Blackboard object: an arbitrary list of named Variant containers, one of which identifies the Goal for PathFinding. Therefore, the Behavior Tree can DECIDE on (at least) its desired Goal once per Frame.
The NavMesh contains one more trick - it allows metadata to be associated with geometric features (on triangles only, for now).
NavMesh owns a map of named Variant containers (just like the AI Agents do) which is further keyed by Triangle ID - basically, we can write arbitrary Named And Typed Variables to any triangles we wish, and the AI PathFinder is able to 'discover' them during its A* PathFinding execution steps.
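The triangle markup, keyed first by Triangle ID and then by variable name, might be sketched like this - the Variant type list is illustrative, and the names are not the engine's:

```cpp
#include <string>
#include <unordered_map>
#include <variant>

// A Variant container and a named-variable map, the same shape of thing
// the AI Agents use as their Blackboard.
using Variant    = std::variant<bool, int, float, std::string>;
using Blackboard = std::unordered_map<std::string, Variant>;

// Per-triangle markup: arbitrary Named And Typed Variables on any triangle.
struct NavMeshMarkup
{
    std::unordered_map<int, Blackboard> perTriangle;

    void Set(int triangle, const std::string& name, Variant value)
    {
        perTriangle[triangle][name] = std::move(value);
    }

    // Returns nullptr when the triangle carries no such variable.
    const Variant* Get(int triangle, const std::string& name) const
    {
        auto t = perTriangle.find(triangle);
        if (t == perTriangle.end()) return nullptr;
        auto v = t->second.find(name);
        return v == t->second.end() ? nullptr : &v->second;
    }
};
```

Because the Blackboard type is shared, copying a variable from a triangle into an Agent's memory (or back) is a plain map assignment.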
We have an opportunity to query the 'Memory' of the caller AI Agent to determine if the Agent has 'seen this Triangle recently', and if it's new, to determine what action to take (or not to take) based on the State information provided by that Triangle.
At the VERY LEAST, the AI Agent can choose this Triangle as a new Goal, given that its discovery prior to the Solution Node indicates a lower Travel Cost. And given that A* is a 'directed search', the AI Agent has very likely discovered a node that is very much 'en route' to its current Goal; if it chooses to travel there first, it will very likely decide on its original Goal once more, without any 'hysteresis'. We can forget the Original Goal, switch to the En-Route Goal, and simply facilitate the decisions made by the AI instead of trying to be clever.
But the point of marking up triangles IS to be clever - we can tell our AI that here is a good place to lay an ambush, or that it can climb through this window, or jump from this cliff ledge, or sit on this chair, or anything else that you can think of.
And since the NavMesh markup mechanic is based on the same tech as the AI blackboard mechanic, it becomes easy to 'transfer knowledge' from the NavMesh to an AI Agent, or vice-versa.
Painting arbitrary data onto Triangles sounds like a pain to code or script, and I would generally agree; however, it should be easy to create 'brushes' representing a set of variables to be 'painted' into the world, as an in-game editor feature.
The only question I find myself asking at this point, is whether I should be marking up Edges, in addition to or instead of Triangles.
Friday, 29 April 2016
Characters and Behaviors
I've implemented a sort of hybrid RagDoll-Meets-BoneAnimated-GPU-Skinned-Mesh based on two Controllers that target the same Skeleton - AnimationController and RagDoll are both optional sub-components of the AnimationComponent we can glue to any Entity.
The time came to think about other kinds of Controllers - a base CharacterController (implements enough physics to stop characters from getting stuck, etc.), and derived PlayerCharacter and NonPlayerCharacter controllers.
The PlayerCharacter is actually fairly easy - it needs an InputComponent (so it can sink events from input devices like joysticks, keyboard, mouse, etc.) which drives its base CharacterController methods.
The Non-Player-Character controller is driven by Artificial Intelligence - I'm using Behavior Trees to control general behavior, and I am now working on a hybrid NavMesh-Meets-Pathfinding solver which initially takes in a Triangle Soup and supports markups.
Sunday, 17 April 2016
What's New?
It's been a while since my last post, however, development is far from stagnant!
Unfortunately, I lost my internet access for a while there, during which I've been keeping a Development Journal, which I expect to also publish in due course.
So, what's been happening with the FireStorm DOD-ECS fork?
Support for Deferred Lighting was advanced a long way - including 'omni-directional' shadows for any number of input lights.
Support for GPU-Skinned Bone-Animated Meshes was added some time ago, and since then, work has been directed toward fusing two bodies of work: skeletal animation, and ragdoll physics.
The goal is to produce a hybrid of the two techniques with respect to a given model.
The implementation of skinned meshes is sophisticated, but credit where it's due, the architecture was based loosely on Microsoft's ID3DXSkinMesh controller thingy. Basically, we have an Asset object, which is our Mesh - it may optionally reference a Skeleton object, which is the container for any Animation keyframe data as well as the Bone Hierarchy that they usually drive.
The Mesh, and its Skeleton, are an Asset - if we create an Entity (instance) from such a Mesh, the resulting EntityID will have an associated AnimationController Component, which really is what represents our Instance from FireStorm's point of view.
We can use that component to play animations, initiate animation transitions, set up complex mixtures of animations, play complex blended sequences of animations, and more.
Each instance of a Skinned Mesh is represented by its AnimationController, and multiple Skinned Mesh instances may reference the same Mesh Asset, and can safely update independently, since no Skeleton or Mesh is affected by Updating a Controller.
Here are a few images from recent development. This one shows a SpotLight interacting with a scene that is otherwise lit by a much larger (green) point light.
The same scene, but up closer, and with some other settings changed - the floor texture here is bump-mapped (probably not a great example, but hey, it's coder art).
Here's a skinned mesh with a ragdoll attached; the ragdoll is being driven by the animated bones (which are not shown). The first half of the skinned-mesh-to-ragdoll connection is complete.