Wednesday, 4 May 2016
A* PathFinding on a NavMesh
The NavMesh that I've implemented is 3D - its data is sourced from an input Mesh asset.
I'm using the 'half-edge' structure to build an 'HE' representation of an input triangle soup. The resulting structure can perform rapid 'geometric-connectivity' queries, telling me all about how my geometry is connected - given any 'Feature' (a triangle or a vertex), I can tell which other Features are connected to it.
This is important because my AI Agents use A* PathFinding, and my chosen implementation of A* requires that, given some Feature-based Node, I can provide the connected Features which will form the basis for new Nodes during expansion of the A* 'search frontier'.
(Edge MidPoints are not considered Features in terms of NavMesh Navigation, as these MidPoints will be intersected by the theoretical Path between the Origins of any pair of Neighboring Triangles.)
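For the curious, here's a minimal sketch of the kind of half-edge record and one-ring query I'm describing - the names (HalfEdge, connectedFeatures, and so on) are illustrative, not my actual implementation:

```cpp
#include <vector>

// A minimal half-edge record: each directed edge knows its opposite
// twin, the next edge around its triangle, and its origin vertex.
struct HalfEdge {
    int twin;   // oppositely-directed edge (-1 on an open boundary)
    int next;   // next half-edge, winding around the same triangle
    int vertex; // vertex this half-edge originates from
    int face;   // triangle this half-edge belongs to
};

struct HEMesh {
    std::vector<HalfEdge> edges;
    std::vector<int> vertexEdge; // one outgoing half-edge per vertex

    // Collect every vertex and triangle connected to vertex v by
    // walking the 'one-ring' of half-edges around it.
    void connectedFeatures(int v, std::vector<int>& outVerts,
                                  std::vector<int>& outTris) const {
        const int start = vertexEdge[v];
        int e = start;
        do {
            const HalfEdge& he = edges[e];
            outTris.push_back(he.face);                // incident triangle
            outVerts.push_back(edges[he.next].vertex); // far end of edge e
            // Rotate to the next wedge around v: the twin of prev(e).
            // In a pure triangle mesh, prev(e) == next(next(e)).
            const int prev = edges[he.next].next;
            e = edges[prev].twin;
        } while (e != -1 && e != start); // -1: walked off an open boundary
    }
};
```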
To be specific, each 'A* Node' is tagged with its Position in WorldSpace (just the vertex position, or in the case of triangles, the centroid), a boolean indicating whether it represents a vertex or a triangle, a Feature Index (of a vertex, or a triangle), and a Slope value (basic walkability) which holds the angle between the Feature Normal (vertex or triangle) and the World Up vector.
That, and a pair of Start and Goal Nodes, is enough information for the A* Algorithm to operate.
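In code terms, that node record might look something like the following sketch (the field names are my own, and the g/h/parent members are just the standard A* bookkeeping rather than anything described above):

```cpp
#include <cstdint>

struct AStarNode {
    float   position[3];  // WorldSpace position: the vertex, or the triangle centroid
    bool    isVertex;     // true = vertex Feature, false = triangle Feature
    int32_t featureIndex; // index of the vertex or triangle within the NavMesh
    float   slope;        // angle between the Feature Normal and the World Up vector
    float   g, h;         // cost-so-far and heuristic-to-goal (standard A*)
    int32_t parent;       // node we were expanded from, for path reconstruction
};
```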
The implication of letting an 'A* Node' represent either a Vertex or a Triangle is that the search frontier is able to expand more rapidly, and in a more intelligent 'direction', generally leading to a more rapidly-located and 'smoother' pathway being chosen. The PathFinder is presented with more options per node during pathway expansion, and because those options are based directly on the complexity of the input geometry, the potential node-tree's complexity scales automatically with the geometry on which it is based - complex geometries are not problematic; rather, they represent 'opportunity'.
In this image, we can see that example vertex 13 connects to precisely 7 other vertices and, coincidentally, 7 triangles - a total of 14 Features are connected to Feature [Vertex] #13. That's a lot of connections, and a lot of possibilities for the search frontier to expand on.
My chosen A* implementation defers both the construction of new nodes and the calculation of their Travel Cost (or Walkability), giving me an opportunity to do so on the basis of the calling AI Agent. Some AI Agents won't care about Slope - they will happily walk up walls and even across ceilings, if that's what they should do - while most AI Agents will be put off by steep slopes and will seek an 'easier' path to their given Goal. What is Walkable is determined on a per-Agent basis, not just on the basis of the NavMesh and/or any related metadata.
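Here's a sketch of what that deferral might look like - the solver asks the calling Agent to price each candidate node via a callback. The CostFn and TravelCost names are hypothetical, and this reuses the AStarNode sketch from above:

```cpp
#include <functional>

struct TravelCost {
    float cost;     // relative expense of entering this Feature
    bool  walkable; // hard reject: never expand this Feature at all
};

using CostFn = std::function<TravelCost(const AStarNode&)>;

// A ground-walker that refuses anything steeper than maxSlope...
CostFn makeWalkerCost(float maxSlopeRadians) {
    return [=](const AStarNode& n) -> TravelCost {
        if (n.slope > maxSlopeRadians)
            return { 0.0f, false };      // too steep: not Walkable for this Agent
        return { 1.0f + n.slope, true }; // mild preference for flatter ground
    };
}

// ...and a wall-crawler that simply doesn't care about Slope.
CostFn makeWallCrawlerCost() {
    return [](const AStarNode&) -> TravelCost { return { 1.0f, true }; };
}
```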
The Goal of an AI Agent is selected through execution of a Behavior Tree.
Each AI Agent has such a Tree, and will update it prior to searching for a Path To Goal, in order to establish a Goal (should none exist) or to allow a 'better' Goal to be chosen (given a dynamic environment, and dynamic knowledge of that environment).
The execution of a Behavior Tree causes changes ONLY in the 'context object' handed in to the root Behavior Node by the calling AI Agent. That object is owned by the Agent and represents the Agent's Memory - it's basically a Blackboard: an arbitrary list of named Variant containers, one of which identifies the Goal for PathFinding. The Behavior Tree can therefore DECIDE on its desired Goal (at least) once per Frame.
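The Blackboard itself needs nothing exotic - just named Variant containers. A sketch, using std::variant as a stand-in for the engine's own Variant type:

```cpp
#include <string>
#include <unordered_map>
#include <variant>

using Variant    = std::variant<bool, int, float, std::string>;
using Blackboard = std::unordered_map<std::string, Variant>;

// The Behavior Tree mutates only the context it is handed; here some
// hypothetical Behavior Node elects triangle 42 as the new Goal.
void decideGoal(Blackboard& agentMemory) {
    agentMemory["PathGoalTriangle"] = 42;
}
```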
The NavMesh contains one more trick - it allows metadata to be associated with geometric features (on triangles only, for now).
The NavMesh owns a map of named Variant containers (just like the AI Agents do), which is further keyed by Triangle ID - basically, we can write arbitrary Named and Typed Variables to any Triangles we wish, and the AI PathFinder is able to 'discover' them during its A* PathFinding execution steps.
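Continuing the sketch above, the markup container and the 'discovery' query could be as simple as this (names are again illustrative):

```cpp
// Per-Triangle markup: the same Blackboard type, keyed by Triangle ID.
using TriangleMarkup = std::unordered_map<int, Blackboard>;

TriangleMarkup markup;

// 'Painting' a variable onto a triangle...
void tagAmbushSpot(int triangleID) {
    markup[triangleID]["AmbushSpot"] = true;
}

// ...which the PathFinder can 'discover' while expanding a node.
const Blackboard* discover(int triangleID) {
    auto it = markup.find(triangleID);
    return it == markup.end() ? nullptr : &it->second;
}
```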
We have an opportunity to query the 'Memory' of the calling AI Agent to determine whether the Agent has 'seen this Triangle recently', and if it's new, to determine what action to take (or not to take) based on the State information provided by that Triangle.
At the VERY LEAST, the AI Agent can choose this Triangle as a new Goal, given that its discovery prior to the Solution Node indicates a lower Travel Cost. And given that A* is a 'directed search', the AI Agent has very likely discovered a node that is very much 'en route' to its current Goal: if it chooses to travel there first, it will very likely decide on its original Goal once more, without any 'hysteresis'. We can forget the Original Goal, switch to the En-Route Goal, and just facilitate the decisions made by the AI instead of trying to be clever.
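In terms of the sketches above, that en-route switch might amount to no more than this (a hypothetical helper, assuming g holds the cost-so-far of the expanded node):

```cpp
// If a marked-up Triangle turns up on the frontier at a lower cost than
// the current Goal's best-known cost, simply adopt it as the new Goal.
bool maybeAdoptEnRouteGoal(Blackboard& agentMemory, const AStarNode& node,
                           float costToCurrentGoal) {
    if (node.isVertex) return false;          // markup is per-Triangle, for now
    const Blackboard* data = discover(node.featureIndex);
    if (!data || node.g >= costToCurrentGoal) return false;
    agentMemory["PathGoalTriangle"] = node.featureIndex; // switch, no hysteresis
    return true;
}
```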
But the point of marking up Triangles IS to be clever - we can tell our AI that here is a good place to lay an ambush, that it can climb through this window, jump from this cliff ledge, sit on this chair, or anything else you can think of.
And since the NavMesh markup mechanic is based on the same tech as the AI blackboard mechanic, it becomes easy to 'transfer knowledge' from the NavMesh to an AI Agent, or vice-versa.
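In terms of the Blackboard sketch, that knowledge transfer is just copying entries from one container to the other:

```cpp
// NavMesh-to-Agent knowledge transfer: the Agent 'remembers' the markup.
void learnFrom(Blackboard& agentMemory, const Blackboard& triangleData) {
    for (const auto& [name, value] : triangleData)
        agentMemory[name] = value;
}
```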
Painting arbitrary data onto Triangles sounds like a pain to code or script, and I would generally agree; however, it should be easy to create 'brushes' representing a set of variables to be 'painted' onto the world, as an in-game editor feature.
The only question I find myself asking at this point is whether I should be marking up Edges, in addition to, or instead of, Triangles.