Friday 10 July 2015

Cameras and Culling in a DOD-ECS


FireStorm ECS provides a Renderable component, managed by the RenderSystem.
In order to facilitate complex rendering scenarios, this component contains references to several NON-ECS objects: a RenderTarget surface, a source of Geometry, a 'ShaderPass Mask', and so on. A less obvious candidate is the Camera.
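
As a rough sketch, the component's shape might look something like this - the field names and handle types below are placeholders for the sake of discussion, not FireStorm's actual definitions:

    #include <cstdint>

    struct RenderTarget;   // non-ECS: a surface to render into
    struct Geometry;       // non-ECS: a source of vertex/index data

    // Placeholder layout, for discussion only.
    struct RenderableComponent
    {
        RenderTarget* target         = nullptr;
        Geometry*     geometry       = nullptr;
        uint32_t      shaderPassMask = 0;        // the 'ShaderPass Mask'
        // ...and, less obviously, some reference to a Camera (see below).
    };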

I spent a lot of time thinking about cameras - the previous engine iteration had a full-blown Camera object that carried its own Projection and View matrices, a Frustum that could be updated on demand, and so on. I wanted to support the notion of multiple active cameras, and view-directed renderables.

I recognized that Cameras typically have a Transform of their own, and usually support being 'attached' to other entities (realized through transform parenting).
'Other entities', eh? The clue was there - in an ECS, Cameras should not be OOP objects, nor should they be Components with their own glorious System to manage them - they should be Entities.

Let's look inside a Camera class, and see what a Camera really needs in terms of rendering or culling. At minimum, all Cameras need a Projection matrix and a View matrix, which will be passed (in some form) to our Vertex Shaders - these represent a camera's 'focal aspect' and 'positional aspect'. Note that both can be expressed as Transform components, since the Camera is now an Entity - and if we follow this path, then 'attaching' a Camera to any other entity in the game becomes trivial, since it only involves parenting the View transform.

Taking this logic to its extreme, the Camera might lead us toward a special kind of Component in the form of a VP (WorldSpace to ClipSpace) Matrix, which has two parents - Proj and View (the latter being a TransformComponent) - and which is itself Parent to the TransformComponents of any Renderables associated with a given Camera, allowing us to automate the full MVP baking of Renderables (unless otherwise indicated). But I digress...
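
Setting the digression aside, the core idea in rough code terms might look something like this - the component shapes and the helper below are invented for illustration, and don't reflect FireStorm's real layout:

    #include <cstdint>

    using EntityId = uint32_t;
    constexpr EntityId kNoParent = 0;

    struct Matrix4 { float m[16] = {}; };

    struct TransformComponent       // the camera's 'positional aspect' (View)
    {
        Matrix4  local;
        EntityId parent = kNoParent;    // baked world = parent's world * local
    };

    struct ProjectionComponent      // the camera's 'focal aspect' (Proj)
    {
        Matrix4 proj;
    };

    // 'Attaching' a camera to another entity is no longer a special case -
    // it is just transform parenting, exactly as for any other entity.
    void AttachCameraTo(TransformComponent& cameraView, EntityId carrier)
    {
        cameraView.parent = carrier;
    }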

What else does a Camera class usually contain?
It might also contain some Bounds derived from the Camera View, aka the Frustum.
The CullableComponent models spatial bounds, so Camera frustums can be treated as first-class citizens of the CullingSystem, potentially simplifying visibility queries per camera.
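
Guessing at the shape of such a component (the union layout below is purely illustrative), the camera entity would simply carry a CullableComponent whose bounds happen to be a frustum:

    #include <cstdint>

    struct Sphere { float cx, cy, cz, radius; };
    struct Plane  { float nx, ny, nz, d; };

    // Sketch: if the CullableComponent can describe more than one kind of
    // bounds, a camera's frustum becomes just another bounded entity that
    // the CullingSystem already knows how to handle.
    struct CullableComponent
    {
        enum class BoundsType : uint8_t { Sphere, Frustum };
        BoundsType type = BoundsType::Sphere;

        union Bounds
        {
            Sphere sphere;       // typical renderable bounds
            Plane  frustum[6];   // camera bounds: six frustum planes
        } bounds;
    };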

And it might contain some Direction Vectors, which are easily extracted from the latest View transform.
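
For example, assuming column-major matrices and a right-handed, -Z-forward convention (the rows and signs will differ under other conventions, and this assumes no scale in the View transform):

    struct Vec3    { float x, y, z; };
    struct Matrix4 { float m[16]; };    // column-major: m[column * 4 + row]

    // The rows of the View matrix's upper 3x3 are the camera's basis axes
    // expressed in world space.
    Vec3 CameraRight  (const Matrix4& view) { return {  view.m[0],  view.m[4],  view.m[8]  }; }
    Vec3 CameraUp     (const Matrix4& view) { return {  view.m[1],  view.m[5],  view.m[9]  }; }
    Vec3 CameraForward(const Matrix4& view) { return { -view.m[2], -view.m[6], -view.m[10] }; }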

There seems to be little reason to retain a full-blown OOP model of a Camera, given that all of its data requirements can be expressed in Component terms - HOWEVER! Camera Transforms need to be baked at some point during the rendering procedure, so it would seem there is an argument for a Camera System, even if it does not manage any Components of its own.

This doesn't immediately sit well with me, given that my existing System model presumes each System manages 'some' kind of data. So I'll eat my words, and find an excuse to create a Camera Component, even if it's just a neat way to locate all Camera Entities.
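
Something as thin as this would do - it's little more than a marker so the CameraSystem can enumerate camera entities (the layout is, of course, just an illustration):

    // A near-empty 'tag' component: its main job is to make camera entities
    // discoverable by the CameraSystem.
    struct CameraComponent
    {
        bool active = true;    // allows multiple cameras, toggled per frame
    };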

Here's what I'm proposing, and will be testing over the next few days:
With respect to View-directed renderables (Frustum Culling), I also spent a lot of time thinking about how best to handle potentially multiple cameras per Render Target while still shoving everything into one Octree and/or Quadtree. I decided that, now that Cameras are Entities, I can safely add a Camera Reference to the RenderableComponent in the form of an Entity ID - which is totally safe from any Component ID manipulation going on under the hood, making it safer than either a Pointer or a Component Index - such that each Renderable knows which Camera will be used for culling and drawing it.
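
In component terms, that amounts to one more field on the Renderable sketched earlier (the names remain hypothetical):

    #include <cstdint>

    using EntityId = uint32_t;
    constexpr EntityId kInvalidEntity = 0;

    struct RenderableComponent
    {
        // ...RenderTarget, Geometry, ShaderPass Mask as before...
        EntityId cameraEntity = kInvalidEntity;   // which Camera culls and draws this
    };
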
The SpatialSystem implements the spatial tree, where entity IDs are stored according to their Bounds and spatial transforms. The CameraSystem will be responsible for querying the visible set of entities for each Camera, based on the values in the RenderableComponent, CullableComponent and TransformComponent (which camera to use, the object's bounds, the object's transform), producing a stream of data for visible graphics objects - including cameraspace z depth and instance transforms - ready to be handed to the RenderGraph for further sorting.
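
A sketch of how that per-camera query might flow, end to end - the component storage, the SpatialSystem query result and all of the names here are assumptions, stand-ins for whatever FireStorm actually uses:

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    using EntityId = uint32_t;

    struct Matrix4 { float m[16] = {}; };            // column-major sketch

    struct RenderableComponent { EntityId cameraEntity = 0; };  // trimmed to the one field this sketch needs
    struct TransformComponent  { Matrix4 world; };   // already-baked world transform

    // One record per visible object, ready for the RenderGraph to sort.
    struct VisibleInstance
    {
        EntityId entity;
        float    cameraSpaceZ;     // depth key
        Matrix4  worldTransform;   // instance transform
    };

    // 'spatialHits' stands in for the SpatialSystem's octree/quadtree query
    // result for this camera's frustum.
    std::vector<VisibleInstance> CullForCamera(
        EntityId camera,
        const Matrix4& view,                                      // camera's baked View
        const std::vector<EntityId>& spatialHits,
        const std::unordered_map<EntityId, RenderableComponent>& renderables,
        const std::unordered_map<EntityId, TransformComponent>&  transforms)
    {
        std::vector<VisibleInstance> out;
        for (EntityId id : spatialHits)
        {
            auto r = renderables.find(id);
            if (r == renderables.end() || r->second.cameraEntity != camera)
                continue;                          // driven by a different camera

            const Matrix4& w = transforms.at(id).world;
            // Camera-space z of the object's origin: row 2 of View times the
            // world-space position stored in the transform's last column.
            float z = view.m[2] * w.m[12] + view.m[6] * w.m[13]
                    + view.m[10] * w.m[14] + view.m[14];
            out.push_back({ id, z, w });
        }
        return out;
    }

Each camera would produce its own stream this way, and the RenderGraph remains free to merge and sort those streams however it likes.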
