Friday, 29 April 2016

Characters and Behaviors


I've implemented a sort of hybrid RagDoll-meets-BoneAnimated-GPU-Skinned-Mesh, based on two Controllers that target the same Skeleton - AnimationController and RagDoll are both optional sub-components of the AnimationComponent we can glue to any Entity.

The time came to think about other kinds of Controllers - a base CharacterController (implements enough physics to stop characters from getting stuck, etc.), and derived PlayerCharacter and NonPlayerCharacter controllers.

The PlayerCharacter is actually fairly easy - it needs an InputComponent (so it can sink events from input devices like joysticks, keyboards and mice) which drives its base CharacterController methods.

The NonPlayerCharacter controller is driven by artificial intelligence - I'm using Behavior Trees to control general behavior, and I am now working on a hybrid NavMesh-meets-Pathfinding solver which initially takes in a Triangle Soup and supports markups.
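For the curious, a Behavior Tree boils down to composing small status-returning nodes. Here's a minimal sketch in C++ (illustrative names only - this is not FireStorm's actual API, and it's written against a C++11 compiler rather than MSVC 2010):

```cpp
#include <functional>
#include <vector>

// Each node ticks and reports one of three results.
enum class Status { Success, Failure, Running };

using Node = std::function<Status()>;

// A Selector tries children in order until one does NOT fail.
Node Selector(std::vector<Node> children) {
    return [children]() {
        for (const Node& child : children) {
            Status s = child();
            if (s != Status::Failure) return s;  // Success or Running wins
        }
        return Status::Failure;  // every child failed
    };
}

// A Sequence runs children in order until one does NOT succeed.
Node Sequence(std::vector<Node> children) {
    return [children]() {
        for (const Node& child : children) {
            Status s = child();
            if (s != Status::Success) return s;  // Failure or Running stops it
        }
        return Status::Success;  // every child succeeded
    };
}
```

Real trees add Running-state resumption, decorators, and blackboard access; the point is just that composite nodes nest trivially, which is what makes them attractive for NPC logic.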

Sunday, 17 April 2016

What's New?


It's been a while since my last post; however, development is far from stagnant!
Unfortunately, I lost my internet access for a while there, during which I kept a Development Journal, which I expect to publish in due course.

So, what's been happening with the FireStorm DOD-ECS fork?

Support for Deferred Lighting has advanced a long way - including 'omni-directional' shadows for any number of input lights.

Support for GPU-Skinned Bone-Animated Meshes was added some time ago, and since then, work has been directed toward fusing two bodies of work: skeletal animation, and ragdoll physics.
The goal is to produce a hybrid of the two techniques with respect to a given model.

The implementation of skinned meshes is sophisticated, but credit where it's due: the architecture was based loosely on Microsoft's ID3DXSkinMesh controller thingy. Basically, we have an Asset object, which is our Mesh - it may optionally reference a Skeleton object, which is the container for any Animation keyframe data as well as the Bone Hierarchy that those keyframes usually drive.
The Mesh and its Skeleton are an Asset - if we create an Entity (instance) from such a Mesh, the resulting EntityID will have an associated AnimationController Component, which is really what represents our Instance from FireStorm's point of view.
We can use that component to play animations, initiate animation transitions, set up complex mixtures of animations, play complex blended sequences of animations, and more.
Each instance of a Skinned Mesh is represented by its AnimationController, and multiple Skinned Mesh instances may reference the same Mesh Asset. They can safely update independently, since no Skeleton or Mesh is modified when a Controller is updated.
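As a rough sketch of that asset/instance split (hypothetical names, not FireStorm's actual classes), the shared Mesh and Skeleton stay read-only while each controller owns only its own time and evaluated pose:

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Shared, immutable asset data: the hierarchy and keyframes live here once.
struct Skeleton {
    std::vector<std::string> boneNames;  // stands in for bones + animation data
};

struct Mesh {
    std::shared_ptr<Skeleton> skeleton;  // optional: static meshes have none
};

// Per-instance state: each AnimationController owns its own evaluated pose,
// so updating one instance never touches the shared Mesh or Skeleton.
class AnimationController {
public:
    explicit AnimationController(std::shared_ptr<const Mesh> mesh)
        : mesh_(mesh), localTime_(0.0f),
          pose_(mesh->skeleton ? mesh->skeleton->boneNames.size() : 0) {}

    void Update(float dt) { localTime_ += dt; /* evaluate keyframes into pose_ */ }

    float LocalTime() const { return localTime_; }
    std::size_t BoneCount() const { return pose_.size(); }

private:
    std::shared_ptr<const Mesh> mesh_;  // shared asset, read-only
    float localTime_;                   // this instance's playback position
    std::vector<float> pose_;           // per-bone matrices in a real engine
};
```

Two controllers built from the same Mesh asset can then be updated on different threads or at different rates without any locking, which is the thread-safety property described above.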


Here are a few images from recent development. This one shows a SpotLight interacting with a scene that is otherwise lit by a much larger (green) point light.


The same scene, but from closer up, and with some other settings changed - the floor texture here is bump-mapped (probably not a great example, but hey, it's coder art).

Here's a skinned mesh with a ragdoll attached to it; the ragdoll is being driven by the animated bones (which are not shown). The first half of the skinned-mesh-to-ragdoll connection is complete.

Tuesday, 1 December 2015

DOD-ECS: A Post-Mortem

I'm pretty excited about this whole DOD-ECS concept, since the architecture offers so many benefits. For example, it solves several outstanding problems relating to concurrent programming in games, such as thread-safety guarantees, and does so without any formal mutexes (lock-free).
More interestingly, I think this architecture can easily be adapted to non-gaming / business applications, and even to general applications.
That is to say, the wider software development community could benefit from advances in this direction as much as game developers, or more.
The amount of sourcecode contained in this project gives no indication of the effort and number of iterations required to produce it.
This implementation of DOD-ECS is the product of a long series of failures and wrong turns, many resulting from my own assumptions, my programming habits, and just the way I've been conditioned to think about code in terms of objects instead of functions and data.
Having said all that, the resulting source is not too bad, considering it has to work on MSVC 2010, which has no support for things like variadic templates - features that would have helped me, had I chosen a more modern compiler - and the underlying design is sound in theory.
There were some points that came up late in the development where I realized that I could save on some internal lookups by deliberately allowing some data fields to belong to more than one component - to have some redundant copies of data can actually be good for performance.
I didn't have time to experiment with all of those ideas, so more performance can still be had, I'm sure of that.
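A contrived illustration of that redundancy idea (hypothetical names, not my actual components): duplicating the world position into the renderable's own data trades a little memory and one sync write for a cache-friendly linear pass in the render system, with no lookup into the transform store.

```cpp
// Both components carry a copy of world position, so the render system can
// iterate its own array linearly instead of chasing IDs into another store.
struct TransformComponent {
    float x, y, z;  // authoritative world position
};

struct RenderableComponent {
    int meshId;
    float x, y, z;  // redundant copy, refreshed when the transform changes
};

// When a transform changes, the write is propagated once, at write time,
// instead of paying a lookup on every read in the render loop.
void SyncRenderable(const TransformComponent& t, RenderableComponent& r) {
    r.x = t.x;
    r.y = t.y;
    r.z = t.z;
}
```

Writes are rarer than reads in a render loop, which is why paying at write time can come out ahead; I haven't benchmarked every variation of this, hence "more performance can still be had".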
There were entire systems omitted from this project that would be must-haves in any game engine; however, they were outside the scope I'd set for myself
in my original proposal: I wasn't setting out to create a game engine. The whole idea was to create a game logic system that can still play nicely within existing engines, leveraging them for things like resource handling, but not relying on them for frame processing.
In retrospect, my implementation smells suspiciously like the guts of Gamebryo - that engine is highly data-driven, and the data is chock-full of various IDs, I suspect that Gamebryo's internal design shares features with my implementation of DOD-ECS.
I'm glad I chose this project, as I've learned a lot about hardware-friendly and data-centric approaches to programming; furthermore, I can actually see myself using this stuff as an alternative to licensing a full engine. I found that making a DOD-ECS application was a lot like making a sandwich: you have to think about what fillings you want on it, what order you'd prefer them in, and whether any of your ingredients bring their own requirements (maybe some pickled tomato relish). The ability to slap together a very specific game engine with just the needed functionality, and to have everything in a game data-driven - including the engine itself - is just too enticing not to follow up.



Wednesday, 30 September 2015

MRT fails to write to one or more output textures - cause and solution


I had a strange issue while trying to write to multiple textures in a single shader pass (MRT): having activated several targets (glDrawBuffers), only the first one was getting any output. The others were not being changed, other than being cleared correctly to my desired clearcolor.

The problem turned out to be a combination of two things.
Firstly, I had Blending enabled, which meant that my current blendfunc would be applied to output pixels; secondly, my fragshader was outputting RGB vec3 values, leaving alpha unaltered.

If you're playing around with MRT, and one or more outputs refuse to be written to, double-check that you have blending disabled, and/or check your pixel alpha.
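In other words, the fix is two-fold: call glDisable(GL_BLEND) on the C side before the MRT pass, and write a full vec4, with explicit alpha, to every declared output. A hedged sketch of the shader half (GLSL 3.30 style, illustrative names):

```glsl
#version 330 core
// Every MRT output is declared vec4 and given an explicit alpha, so that
// even if blending sneaks back on, the blendfunc sees a defined value.
layout(location = 0) out vec4 outColor0;
layout(location = 1) out vec4 outColor1;

in vec2 uv;
in vec3 worldNormal;
uniform sampler2D diffuseTex;

void main() {
    // alpha = 1.0, not undefined, on both targets
    outColor0 = vec4(texture(diffuseTex, uv).rgb, 1.0);
    outColor1 = vec4(normalize(worldNormal), 1.0);
}
```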


Tuesday, 29 September 2015

Sometimes, in order to move forwards, you need to take a few steps backwards.

Here are a few screenshots showing some work from the current refactor of the FireStorm Renderer.

The first two images just show per-pixel (Phong) lighting being applied to a 3D textured cube, rendered courtesy of a regular forwards-rendering setup, with support for a single pass per light.





Here's another view.
The third image shows work on the Deferred Lighting pipeline, and is far more interesting to me.

Tex 1 is an input texture being applied to the geometry. The 'geompass' shader writes to three output color textures (2, 3 and 4) and also writes to one depth texture (5 - it's quite faint, but it's there).

Tex 2 holds the WorldSpace Position of the 3D surfaces as determined at each pixel.
If we had a software raycaster casting 3D rays from each pixel on the near plane of the camera's view frustum until they hit some surface in the world, the intersection point of each cast ray with the world's surfaces is exactly what we're recording per pixel - and we're doing so with no raycasting at all. The operation is laughably cheap in the geompass fragment shader.


Tex 3 is our regular diffuse texture output, that we'd get from forward-rendering with no lighting.
It carries the texture detail for all the objects in our scene for the current frame.

Tex 4 is similar to Tex 2 - but it holds per-pixel WorldSpace Surface Normals, instead of Surface Position.

Tex 5 is a Depth Texture - it's capturing depth info just like the regular depthmap, but in the form of a texture, so it can be read back in another shader pass, where the data from these textures is read in and the final lighting calculation is performed per pixel.

Tex 6 is waiting to capture the final results of the per-pixel lighting (baking) pass.
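Putting that together, the geompass fragment shader can fill Tex 2, 3 and 4 in a single pass; Tex 5 is filled automatically by the FBO's depth attachment. A hedged sketch (illustrative names, not my exact shader):

```glsl
#version 330 core
// One 'geompass' execution per pixel fills the whole G-buffer:
//   location 0 -> Tex 2: world-space position (the 'free raycast' result)
//   location 1 -> Tex 3: unlit diffuse color
//   location 2 -> Tex 4: world-space surface normal
layout(location = 0) out vec4 outPosition;
layout(location = 1) out vec4 outDiffuse;
layout(location = 2) out vec4 outNormal;

in vec3 worldPos;      // interpolated from the vertex shader
in vec3 worldNormal;
in vec2 uv;
uniform sampler2D diffuseTex;

void main() {
    outPosition = vec4(worldPos, 1.0);
    outDiffuse  = vec4(texture(diffuseTex, uv).rgb, 1.0);
    outNormal   = vec4(normalize(worldNormal), 0.0);
}
```

The later baking pass then samples these textures per pixel and evaluates the lighting equation once, regardless of scene complexity.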

Friday, 18 September 2015

Single-Pass OmniDirectional Shadows in a Deferred Shading Environment


Until recently, I believed that the only way to achieve omnidirectional shadows was to use six passes to generate the faces of a cubemap, or to use two passes (one with MRT) to generate the faces of a dual paraboloid.

It turns out that we can, in fact, attach a cubemap to a framebuffer such that we may draw to all six faces in a single pass, without using MRT. The trick is to use a Geometry Shader (in between the VS and FS shader stages) whose job is to transform the shadowcaster geometry we're drawing, via one of six virtual camera views, into one of six output FS executions(!)
To achieve these 'multiple projections' of the rendered geometry, the Geometry Shader sets a built-in GLSL variable called gl_Layer - a zero-based integer whose value can be read from the FS, indicating which CubeMap Face (or plane) is the target of the FS execution.
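A sketch of such a Geometry Shader (illustrative uniform names; this is the standard layered-rendering trick, not my exact shader, and it assumes the VS passes world-space positions through gl_Position): each incoming triangle is emitted six times, once per cubemap face, with gl_Layer selecting the destination.

```glsl
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;  // 6 faces * 3 vertices

uniform mat4 faceViewProj[6];  // one light-space view-projection per face
out vec3 worldPos;             // forwarded for distance-based depth in the FS

void main() {
    for (int face = 0; face < 6; ++face) {
        gl_Layer = face;  // route this copy to cubemap face 'face'
        for (int v = 0; v < 3; ++v) {
            worldPos    = gl_in[v].gl_Position.xyz;
            gl_Position = faceViewProj[face] * gl_in[v].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
```

The shadowcaster geometry is thus submitted to the GPU once, and the six projections happen on-chip.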

Unfortunately, unlike the Lighting stage, we can't use GPU Instancing to help with batching our lights - we can use it for drawing geometry to the shadow cubemap, but not to send the lights themselves.

So we have a choice, then: either perform a shadowing pass per light, or write several shadowmap shader variants that each cope with a specific number of lights, just like we'd be doing if this was a Forwards rendering environment.

Due to the cost of shadowcasting, and in order to reduce the amount of work involved, I've introduced a boolean switch in the Material class which denotes whether objects made of a given material can cast shadows. This will allow the user to make informed decisions about which elements of a scene may cast shadows, and reduce the average number of potential shadowcasters per frame.
If required, I can put an override in the RenderableComponent which will let the user allow or disallow shadowcasting at a per-renderable level, without regard to material settings.
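A minimal sketch of how those two switches could compose (hypothetical names, since the override is only the 'if required' part - it defaults to deferring to the material):

```cpp
// Per-material switch: can objects made of this material cast shadows?
struct Material {
    bool castsShadows;
};

// Optional per-renderable override, defaulting to the material's setting.
enum class ShadowOverride { UseMaterial, ForceOn, ForceOff };

struct RenderableComponent {
    const Material* material;
    ShadowOverride shadows;
};

// True when this renderable should be drawn into the shadow map.
bool CastsShadows(const RenderableComponent& r) {
    switch (r.shadows) {
        case ShadowOverride::ForceOn:  return true;
        case ShadowOverride::ForceOff: return false;
        default:                       return r.material && r.material->castsShadows;
    }
}
```

The shadow pass then simply skips any renderable for which CastsShadows() is false, which is where the per-frame saving comes from.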



Monday, 14 September 2015

Combining Light and Shadow - Can lighting passes be used to create shadow maps?


Shadow-Casting FragmentShaders are always written relative to a single LightSource.
They replace the usual 'camera view' - the virtual camera origin they choose is the position of the light source, and the direction they choose is the direction from the light to the fragment (or its reverse). That's the usual case, anyway.
Deferred Shaders can support multiple lights cheaply because most of the lighting equation's required components are output in a preliminary pass; later, a relatively cheap post-process pass takes in those components and calculates the final per-pixel lighting values (ignoring anything else we might do to improve the quality of our final image).
Another reason that Deferred Shaders are fast is that most things happen in Camera View Space.

Do you think that we might be able to write a shader that can operate in ViewSpace, transforming depth information (written during the preliminary pass) into per-output-texture space for, say, six output targets (shadow cubemap faces) - and, using GPU instancing, do this for one shape of light emitter, in one shader pass?

http://cg.siomax.ru/index.php/computer-graphics/10-one-pass-rendering-to-cube-map

Apparently, with the help of a Geometry Shader, we can indeed render our shadow cubemap in a single pass, at least for a single Light. This requires the use of a built-in GLSL variable called gl_Layer which, although technically introduced in OpenGL 4.1, is available on a wide range of OpenGL 3.3 hardware.

I am immensely curious as to how wide a range that is.