Wednesday 30 September 2015

MRT fails to write to one or more output textures - cause and solution


I ran into a strange issue while trying to write to multiple textures in a single shader pass (MRT): although I had activated several targets (glDrawBuffers), only the first one was getting any output. The others were not being changed, other than being cleared correctly to my desired clear color.

The problem turned out to be a combination of two things.
Firstly, I had blending enabled, which meant that my current blendfunc would be applied to output pixels; secondly, my fragment shader was outputting RGB vec3 values, leaving alpha unaltered. With the cleared alpha sitting at zero, a typical source-alpha blend factor scales the incoming colour down to nothing, so those targets never appear to change.

If you're playing around with MRT and one or more outputs refuse to be written to, double-check that you have blending disabled, and/or check your pixel alpha.
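As a minimal sketch of the fix (all names hypothetical, GLSL 330 assumed): a fragment shader that writes a full vec4 - explicit alpha included - to each attachment, paired with glDisable(GL_BLEND) on the application side before the pass.

```glsl
#version 330 core

in vec2 vTexCoord;
uniform sampler2D uDiffuseMap;

// One output per attachment activated via glDrawBuffers.
layout(location = 0) out vec4 outColor0;
layout(location = 1) out vec4 outColor1;

void main()
{
    vec3 rgb = texture(uDiffuseMap, vTexCoord).rgb;
    // Write a full vec4 with an explicit alpha - outputting only a vec3
    // leaves alpha undefined, which interacts badly with blending.
    outColor0 = vec4(rgb, 1.0);
    outColor1 = vec4(rgb * 0.5, 1.0);
}
```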


Tuesday 29 September 2015

Sometimes, in order to move forwards, you need to take a few steps backwards.

Here are a few screenshots showing some work from the current refactor of the FireStorm Renderer.

The first two images just show per-pixel (Phong) lighting being applied to a 3D textured cube, rendered courtesy of a regular forwards-rendering setup, with support for a single pass per light.





Here's another view.
The third image shows work on the Deferred Lighting pipeline, and is far more interesting to me.

Tex 1 is an input texture being applied to the geometry. The 'geompass' shader writes to three output color textures (2, 3 and 4) and also writes to one depth texture (5 - it's quite faint, but it's there).

Tex 2 holds the WorldSpace Position of the 3D surfaces as determined at each pixel.
If we had a software raycaster casting 3D rays from each pixel on the near plane of the camera's view frustum until they hit some surface in the world, the intersection point of each cast ray with the world's surfaces is exactly what we're recording per pixel - and we're doing so with no raycasting at all. The operation is laughably cheap in the geompass fragment shader.


Tex 3 is our regular diffuse texture output - what we'd get from forwards-rendering with no lighting.
It carries the texture detail for all the objects in our scene for the current frame.

Tex 4 is similar to Tex 2 - but it holds per-pixel WorldSpace Surface Normals instead of Surface Positions.

Tex 5 is a Depth Texture - it captures depth info just like the regular depthmap, but in the form of a texture, so it can be read back in another shader pass, where the data from these textures is combined and the final lighting calculation is performed per pixel.

Tex 6 is waiting to capture the final results of the per-pixel lighting (baking) pass.
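A minimal sketch of what a geompass fragment shader along these lines might look like (names and output locations hypothetical, GLSL 330 assumed) - one output per G-Buffer texture, with depth captured automatically by the attached depth texture:

```glsl
#version 330 core

in vec3 vWorldPos;    // interpolated from the vertex shader
in vec3 vWorldNormal;
in vec2 vTexCoord;

uniform sampler2D uDiffuseMap;

layout(location = 0) out vec4 outPosition; // Tex 2: WorldSpace position
layout(location = 1) out vec4 outDiffuse;  // Tex 3: diffuse/albedo
layout(location = 2) out vec4 outNormal;   // Tex 4: WorldSpace normal

void main()
{
    // The 'raycast' result comes for free: the rasterizer has already
    // interpolated the world-space position of this fragment.
    outPosition = vec4(vWorldPos, 1.0);
    outDiffuse  = texture(uDiffuseMap, vTexCoord);
    outNormal   = vec4(normalize(vWorldNormal), 0.0);
    // Depth (Tex 5) is written automatically to the attached depth texture.
}
```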

Friday 18 September 2015

Single-Pass OmniDirectional Shadows in a Deferred Shading Environment


Until recently, I believed that the only ways to achieve omnidirectional shadows were to use six passes to generate the faces of a cubemap, or to use two passes (one with MRT) to generate the faces of a dual paraboloid.

It turns out that we can, in fact, attach a cubemap to a framebuffer such that we may draw to all six faces in a single pass, without using MRT. The trick is to use a Geometry Shader (between the VS and FS shader stages) whose job is to transform the shadowcaster geometry we're drawing, via one of six virtual camera views, into one of six output FS executions.
To perform these 'multiple projections' of the rendered geometry, the Geometry Shader sets a built-in GLSL variable called gl_Layer: a zero-based integer which selects the CubeMap Face (or layer) that the resulting FS executions will target, and whose value can also be read from the FS.
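A minimal sketch of such a Geometry Shader (uniform names hypothetical; input positions assumed to already be in world space): each incoming triangle is re-emitted six times, once per cubemap face, with gl_Layer routing the output.

```glsl
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out; // 6 faces * 3 vertices

// One view-projection matrix per cubemap face (+X,-X,+Y,-Y,+Z,-Z).
uniform mat4 uShadowMatrices[6];

out vec4 gWorldPos; // passed on to the FS for distance-based depth

void main()
{
    for (int face = 0; face < 6; ++face)
    {
        gl_Layer = face; // route this triangle to cubemap face 'face'
        for (int i = 0; i < 3; ++i)
        {
            gWorldPos   = gl_in[i].gl_Position; // world space, per assumption
            gl_Position = uShadowMatrices[face] * gWorldPos;
            EmitVertex();
        }
        EndPrimitive();
    }
}
```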

Unfortunately, unlike the Lighting stage, we can't use GPU Instancing to help with batching our lights here - we can use it for drawing geometry into the shadow cubemap, but not for submitting the lights themselves.

So we have a choice, then: either perform a shadowing pass per light, or write several shadowmap shader variants that each cope with a specific number of lights, just as we'd do if this were a Forwards rendering environment.

Due to the cost of shadowcasting, and in order to reduce the amount of work involved, I've introduced a boolean switch in the Material class which denotes whether objects made of a given material can cast shadows. This allows the user to make deliberate decisions about which elements of a scene may cast shadows, reducing the average number of potential shadowcasters per frame.
If required, I can put an override in the RenderableComponent which will let the user enable or disable shadowcasting at the per-renderable level, without regard to material settings.



Monday 14 September 2015

Combining Light and Shadow - Can lighting passes be used to create shadow maps?


Shadow-Casting Fragment Shaders are always written relative to a single LightSource.
They replace the usual 'camera view': the virtual camera origin they choose is the position of the light source, and the direction they choose is the direction from the light to the fragment (or its reverse). That's the usual case, anyway.
Deferred Shaders can support multiple lights cheaply, because most of the components required by the lighting equation are output in a preliminary pass. Later, a relatively cheap post-process pass reads those components back in and bakes them into final per-pixel lighting values (ignoring anything else we might do to improve the quality of our final image).
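A minimal sketch of that final bake pass (texture and uniform names hypothetical; a single point light and a diffuse-only term shown): the fragment shader samples the G-Buffer and evaluates the lighting equation per pixel.

```glsl
#version 330 core

in vec2 vTexCoord;

uniform sampler2D uPositionTex; // world-space position
uniform sampler2D uDiffuseTex;  // diffuse/albedo
uniform sampler2D uNormalTex;   // world-space normal

uniform vec3 uLightPos;
uniform vec3 uLightColor;

out vec4 outColor;

void main()
{
    vec3 P      = texture(uPositionTex, vTexCoord).xyz;
    vec3 N      = normalize(texture(uNormalTex, vTexCoord).xyz);
    vec3 albedo = texture(uDiffuseTex, vTexCoord).rgb;

    // Everything the lighting equation needs was already written out
    // by the preliminary pass - this is just a cheap per-pixel bake.
    vec3 L      = normalize(uLightPos - P);
    float ndotl = max(dot(N, L), 0.0);
    outColor    = vec4(albedo * uLightColor * ndotl, 1.0);
}
```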
Another reason that Deferred Shaders are fast is that most things happen in Camera View Space.

Do you think we might be able to write a shader that operates in ViewSpace, transforming depth information (written during the preliminary pass) into the space of each of, say, 6 output targets (shadow cubemap faces) - and, using gpu-instancing, do this for one shape of light emitter, all in one shader pass?

http://cg.siomax.ru/index.php/computer-graphics/10-one-pass-rendering-to-cube-map

Apparently, with the help of a Geometry Shader, we can indeed render our shadow cubemap in a single pass, at least for a single Light. This requires the use of a built-in GLSL variable called gl_Layer which, although technically introduced in OpenGL 4.1, is available on a wide range of OpenGL 3.3 hardware.

I am immensely curious as to how wide a range that is.






Friday 11 September 2015

Light Components: High numbers of dynamic lights, GPU-Instancing of Multiple Light Volumes in a Deferred Shader


Lighting has been implemented in FireStorm as a Component - any Entity that supports Frustum Culling (i.e. has a Cullable Component) may also have a Light component, and therefore, any cullable entity can potentially be a light-caster.
The Light component adopts the Transform of its owner Entity, which means that light will appear to be cast from the origin of said Entity. If you wish to offset the transform of a light (for example, to position a headlight on the front of a vehicle), you'll need to create a separate (and likely simpler) entity for that light, and use Transform Parenting to attach the light entity to its parent.
Both pointlights and spotlights are supported, while directional lighting is handled separately (FireStorm only supports one Directional light, but any number of Spot or Point lights).

Not everything in a game engine should necessarily be a Component, but there was a good reason behind the decision to implement Lights as components of entities. Assuming there is a high number of possibly dynamic light sources in the game world, we'd like to cull the lights that have no effect on the rendered scene - which is easy if we treat them as cullable entities, given that a mechanism for performing visibility queries had already been implemented. FireStorm uses a Dynamic AABB Tree to track the spatial relationship of all entities in the game world, and uses this structure to accelerate camera visibility queries. While collecting the list of Renderable entities which intersect the camera's view frustum (our list of stuff to draw), we can also collect a separate list of Light-Caster entities which will be applied via deferred-shading, a subset of which may also be applied during shadow-casting passes.

FireStorm's new render pipeline leverages several features of modern (GL3+) graphics cards - Deferred Shading is implemented via Multiple RenderTargets (MRT), Deferred Lighting is implemented via GPU-Instancing in tandem with Stencil-Rejection, and work has begun on some new Shadow-Casting technology.
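A minimal sketch of how the instanced lighting pass might look on the GLSL side (uniform names and the per-light packing are hypothetical): one instanced draw call submits many light volumes, and gl_InstanceID selects each light's data.

```glsl
#version 330 core

layout(location = 0) in vec3 aPos; // unit light-volume mesh (e.g. a sphere)

// Per-light data, indexed by instance: xyz = world position, w = radius.
uniform vec4 uLights[64];
uniform mat4 uViewProj;

flat out int vLightIndex; // lets the FS fetch this light's colour, etc.

void main()
{
    vec4 light = uLights[gl_InstanceID];
    // Scale the unit volume by the light's radius and move it into place,
    // so a single instanced draw call covers every visible light.
    vec3 worldPos = aPos * light.w + light.xyz;
    vLightIndex = gl_InstanceID;
    gl_Position = uViewProj * vec4(worldPos, 1.0);
}
```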

Friday 4 September 2015

Some things should not be Components - but they can still benefit from being Data-Oriented!

So today was an interesting day.
I finally had a use-case for keeping Materials by (16-bit) ID, rather than by Pointer.
Although one does not necessarily exclude the other, I was not able to boil a 'pure pointer' (as allocated by the 'new' operator) down into a flat index into my global Materials container, since that container was based on STL's map. Basically, my Material class was one of a number of 'self-managing' classes that provided its own static map and managed that map internally.

Component Systems are all based on a (templated) class called ObjectPool - it's how Systems internally manage their data elements (or class objects). The ObjectPool class provides its own 'placement new' based allocation scheme, as well as code to convert between flat indices and pointers.

Changing my global Materials container from STL's map template to my own ObjectPool-based container was not problematic, but it did require that I code up support for STL-like iterators for my ObjectPool implementations. That was actually kind of fun, and productive: my Materials are now sure to occupy contiguous flat memory addresses, and can be referenced at runtime by their ID, which can be stored using fewer bits than a Pointer (a quarter of the size on a 64-bit system) but is effectively just as fast to access.

More interesting, and potentially more valuable: the template that powers ECS Systems now supports forwards-iteration of the active Components in any System.