Monday, 14 September 2015
Combining Light and Shadow - Can lighting passes be used to create shadow maps?
Shadow-casting fragment shaders are always written relative to a single light source.
They replace the usual 'camera view': the virtual camera origin becomes the position of the light source, and the view direction becomes the direction from the light to the fragment (or its reverse). That's the usual case, anyway.
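For concreteness, here's a minimal sketch of such a depth-only shadow pass in GLSL 330, where the 'virtual camera' is simply the light's view-projection matrix. The uniform names (uModel, uLightViewProj) are my own placeholders, not from any particular engine.

// shadow_pass.vert - project each vertex as seen from the light source.
#version 330 core

layout(location = 0) in vec3 aPosition;

uniform mat4 uModel;          // object -> world
uniform mat4 uLightViewProj;  // world -> light clip space (the light plays the camera)

void main()
{
    gl_Position = uLightViewProj * uModel * vec4(aPosition, 1.0);
}

// shadow_pass.frag - no colour output; only the depth value is kept.
#version 330 core

void main()
{
}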
Deferred shaders can support multiple lights cheaply, because most of the components required by the lighting equation are output in a preliminary pass; a final, relatively cheap post-process pass then takes those components and calculates the final per-pixel lighting values (ignoring anything else we might do to improve the quality of the final image).
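As a sketch of that preliminary pass (the attachment layout and names here are assumptions, one common G-buffer arrangement rather than anything prescribed): the fragment shader simply writes the lighting equation's inputs out to multiple render targets, and the final pass samples them once per pixel per light from a full-screen quad.

// gbuffer.frag - write the lighting equation's components to multiple render targets.
#version 330 core

in vec3 vViewPos;     // fragment position in camera view space
in vec3 vViewNormal;  // surface normal in camera view space
in vec2 vTexCoord;

uniform sampler2D uAlbedoTex;

layout(location = 0) out vec4 gPosition;  // view-space position
layout(location = 1) out vec4 gNormal;    // view-space normal
layout(location = 2) out vec4 gAlbedo;    // surface colour

void main()
{
    gPosition = vec4(vViewPos, 1.0);
    gNormal   = vec4(normalize(vViewNormal), 0.0);
    gAlbedo   = texture(uAlbedoTex, vTexCoord);
}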
Another reason that deferred shaders are fast is that most of the work happens in camera view space.
Do you think we might be able to write a shader that operates in view space, transforming depth information (written during the preliminary pass) into the texture space of each of, say, six output targets (the shadow cubemap faces), and, with the help of GPU instancing, do this for one shape of light emitter in a single shader pass?
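The first step of such a pass would be recovering a view-space position from the stored depth, something like the sketch below (uDepthTex and uInvProjection are assumed names); re-projecting that position into each cubemap face's space would be the part still to prove out.

// reconstruct.frag - recover a camera-view-space position from the depth
// written during the preliminary pass (standard depth range [0,1] assumed).
#version 330 core

in vec2 vTexCoord;              // full-screen quad UV in [0,1]

uniform sampler2D uDepthTex;    // depth buffer from the preliminary pass
uniform mat4 uInvProjection;    // inverse of the camera's projection matrix

out vec4 outColor;

vec3 viewPositionFromDepth(vec2 uv)
{
    float depth = texture(uDepthTex, uv).r;
    vec4 ndc    = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 view   = uInvProjection * ndc;
    return view.xyz / view.w;   // undo the perspective divide
}

void main()
{
    // Visualised here as a colour; a real pass would re-project this position
    // into the light's space (or a cubemap face's space) instead.
    outColor = vec4(viewPositionFromDepth(vTexCoord), 1.0);
}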
http://cg.siomax.ru/index.php/computer-graphics/10-one-pass-rendering-to-cube-map
Apparently, with the help of a geometry shader, we can indeed render our shadow cubemap in a single pass, at least for a single light. This requires a built-in GLSL variable called gl_Layer which, although technically introduced in OpenGL 4.1, is available on a wide range of OpenGL 3.3 hardware.
I am immensely curious as to how wide a range that is.
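Roughly, the trick from that article looks like the geometry shader below (a sketch; uFaceViewProj is my name for the light's six per-face view-projection matrices): each incoming triangle is emitted six times, and gl_Layer routes each copy to one face of the cubemap render target.

// cubemap_shadow.geom - emit each triangle once per cubemap face, selecting
// the destination face with gl_Layer, so all six faces fill in a single pass.
#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;   // 6 faces x 3 vertices

uniform mat4 uFaceViewProj[6];  // light view-projection matrix per cubemap face

out vec4 vWorldPos;             // passed on for distance-based depth, if wanted

void main()
{
    for (int face = 0; face < 6; ++face)
    {
        gl_Layer = face;        // which cubemap face (layer) this primitive renders into
        for (int i = 0; i < 3; ++i)
        {
            vWorldPos   = gl_in[i].gl_Position;   // world-space position from the vertex shader
            gl_Position = uFaceViewProj[face] * vWorldPos;
            EmitVertex();
        }
        EndPrimitive();
    }
}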