Friday 17 June 2016

Material Instancing on the GPU


It seems like a lifetime ago that I learned how to perform 'GPU instancing' by sending the GPU a special VertexBuffer containing an array of ModelToWorld transforms, and telling the underlying renderer (OpenGL) that this buffer's content advances per mesh instance rather than per vertex (so each entry applies to all vertices of one instance).
We can issue a single draw call, yet draw the current mesh subset multiple times, at different positions and orientations. I noted at the time that this meant we could render all instances of a mesh with as few as ONE DRAW CALL PER SUBSET, based on the premise that a mesh is divided into per-material subsets of faces (or triangles), and that we at least need to change Materials between draw calls.
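
For reference, here's a rough sketch of that setup in plain OpenGL (GLEW and GLM assumed, and a VAO plus index buffer for the mesh already bound; names like kTransformLoc, DrawSubsetInstanced and the subset parameters are placeholders, not my engine's actual code). The per-instance buffer holds one mat4 per instance, each of its four columns gets a vertex attribute divisor of 1, and a single glDrawElementsInstanced call draws the subset for every instance:

    #include <vector>
    #include <GL/glew.h>
    #include <glm/glm.hpp>

    constexpr GLuint kTransformLoc = 4;   // assumed free attribute slot for the mat4

    void DrawSubsetInstanced(const std::vector<glm::mat4>& modelToWorld,
                             GLsizei subsetIndexCount,
                             GLsizei subsetFirstIndex)
    {
        // Upload one ModelToWorld matrix per instance into its own VertexBuffer.
        GLuint instanceVBO = 0;
        glGenBuffers(1, &instanceVBO);
        glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
        glBufferData(GL_ARRAY_BUFFER,
                     modelToWorld.size() * sizeof(glm::mat4),
                     modelToWorld.data(), GL_DYNAMIC_DRAW);

        // A mat4 attribute spans four vec4 slots; a divisor of 1 tells GL the
        // data advances once per instance, not once per vertex.
        for (int i = 0; i < 4; ++i) {
            glEnableVertexAttribArray(kTransformLoc + i);
            glVertexAttribPointer(kTransformLoc + i, 4, GL_FLOAT, GL_FALSE,
                                  sizeof(glm::mat4),
                                  reinterpret_cast<void*>(sizeof(glm::vec4) * i));
            glVertexAttribDivisor(kTransformLoc + i, 1);
        }

        // One draw call renders this material subset for every instance.
        glDrawElementsInstanced(GL_TRIANGLES, subsetIndexCount, GL_UNSIGNED_INT,
                                reinterpret_cast<void*>(subsetFirstIndex * sizeof(GLuint)),
                                static_cast<GLsizei>(modelToWorld.size()));
    }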

More recently, I expounded my belief that it should be quite possible to reduce draw calls further, by supplying a second 'Per Instance VertexBuffer' containing an array of Material data, and by tagging individual Triangles with a local material index.
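
Here's a rough sketch of what that second buffer might look like, again in plain OpenGL with placeholder names (kMaterialLoc, kMaterialsPerInstance, BindInstanceMaterials): each instance supplies a fixed-size array of material colours as instanced attributes, each vertex carries its triangle's local material index as an ordinary per-vertex attribute, and the vertex shader would pick materials[index], so the whole multi-material mesh goes out in one draw call:

    #include <vector>
    #include <GL/glew.h>
    #include <glm/glm.hpp>

    constexpr GLuint kMaterialLoc          = 8;  // assumed free attribute slots
    constexpr int    kMaterialsPerInstance = 8;  // assumed fixed material budget

    void BindInstanceMaterials(const std::vector<glm::vec4>& instanceMaterials)
    {
        // instanceMaterials holds kMaterialsPerInstance colours per instance,
        // flattened into one contiguous array.
        GLuint materialVBO = 0;
        glGenBuffers(1, &materialVBO);
        glBindBuffer(GL_ARRAY_BUFFER, materialVBO);
        glBufferData(GL_ARRAY_BUFFER,
                     instanceMaterials.size() * sizeof(glm::vec4),
                     instanceMaterials.data(), GL_DYNAMIC_DRAW);

        // Each material slot is one vec4 attribute; a divisor of 1 steps the
        // whole array forward once per instance, not once per vertex.
        for (int i = 0; i < kMaterialsPerInstance; ++i) {
            glEnableVertexAttribArray(kMaterialLoc + i);
            glVertexAttribPointer(kMaterialLoc + i, 4, GL_FLOAT, GL_FALSE,
                                  kMaterialsPerInstance * sizeof(glm::vec4),
                                  reinterpret_cast<void*>(sizeof(glm::vec4) * i));
            glVertexAttribDivisor(kMaterialLoc + i, 1);
        }
    }

With both buffers bound, the transform uses four attribute slots and the materials use eight, which still fits inside the sixteen vertex attributes OpenGL guarantees; a larger material budget would probably be better served by a uniform or texture buffer indexed per instance.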

Today, I'm adding a 'Tumbler' 3D Widget to the Editor interface - it will display the current camera view's orientation, and also act as a tool for selecting from several fixed views (top, front, etc.).
I've created a special mesh for this widget, which has SEVEN materials, identifying the six (signed) major axes plus one extra 'arbitrary' perspective view.

Note that I have only one instance of this guy, but it contains seven materials - which implies that, with the rendering approach described so far, I would need seven draw calls to draw this ONE instance of this multi-material mesh.

This seems like a good time to experiment with the idea that we can use 'GPU instance buffers' to send *anything* we need per instance - not just the Model transform, but ANYTHING.

Anyone done this already, or know of any references?
