Friday 23 July 2010

GBuffer Layout

The geometry buffer (or gbuffer) is where all the material properties for each object in your scene are stored. For each pixel this includes (but is not limited to) the position of the geometry, its surface normal, and its diffuse colour. All this information cannot fit into a single render target, so multiple render targets are drawn onto concurrently. The more information you want to store, and the more precision you want it stored at, the larger your gbuffer will have to be.

The choice of gbuffer format can be the most critical decision in the entire renderer design. The performance of deferred renderers is often bottlenecked by fillrate and texture read bandwidth. The gbuffer phase writes a lot more information than is usually written in a forward renderer, and this "fat" gbuffer is sampled for every pixel affected by every light. As such, it is a good idea to use as compact a gbuffer as possible. However, storing data at a lower precision can greatly affect quality, and storing less information will restrict what kind of materials you can implement in your lighting pass.

On DirectX9 class hardware (such as that targeted by XNA), every render target being drawn onto concurrently needs to have the same resolution and bit-depth. This poses some difficulties: if you want to use a high bit-depth target to precisely store position, you will also need to use equally large targets to store information you don't care so much about, such as diffuse colour. Designing the gbuffer layout then becomes a juggling act of getting just enough precision to look good on the important attributes, while not wasting bandwidth and memory where it is not needed.

As I am only planning on implementing simple Phong shading in my lighting pass, I need to store position, diffuse colour, normals, specular intensity and specular power in my gbuffer.

Position
The simplest way to store position is to put x, y, and z into the red, green, and blue components of a 32-bit R8G8B8A8 target. This is super cheap to encode and decode, as you barely need to do anything. It also leaves an extra 8-bit channel spare, which would do nicely for one of the specular attributes. However, the precision is terrible. If position is stored imprecisely, your lighting may show signs of banding. Not nice.

You could use a 64-bit target instead, which would give acceptable precision, but then you would have to use 64-bit targets for all the other attributes, which would double the size of the gbuffer. I would much rather stick to 32 bits if possible. Fortunately, you implicitly know the screen-space position of a pixel as you are shading it, so from knowing depth alone you can reconstruct the full 3D position. This means you only need to store depth, which can go in an R32F target with full 32-bit floating point precision. MJP has a great post comparing the quality of different ways of storing depth for the purpose of reconstructing position.
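To make that concrete, here is roughly what the reconstruction looks like in the lighting shader. This is a minimal HLSL sketch rather than code from this project: it assumes the full-screen quad's vertex shader outputs the view-space position of the matching far-frustum corner for each vertex, and that depth was written as view-space z divided by the far-plane distance.

    texture DepthTexture;
    sampler DepthSampler = sampler_state { Texture = <DepthTexture>; };

    // frustumRay is interpolated across the quad from its four corners.
    float3 ReconstructViewPosition(float2 texCoord, float3 frustumRay)
    {
        // Normalised linear depth: 0 at the camera, 1 at the far plane.
        float depth = tex2D(DepthSampler, texCoord).r;
        // Scaling the ray to the far-plane corner by that depth recovers the position.
        return frustumRay * depth;
    }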

Normals
Like position, the simplest way to store normals is to put x, y, and z into an R8G8B8A8 target. And like position, the quality is not good enough. Early deferred renderers observed that the normal is always a unit vector, and assuming that z cannot be negative (in view space, a visible surface should not be facing away from the viewer), you only need to store x and y. You can then reconstruct z as "sqrt(1 - x*x - y*y)". As far as I know, this was first popularised by Guerrilla Games with Killzone 2. However, you cannot rely on z always being positive, due to normal mapping and the perspective projection, as demonstrated by Insomniac Games.

Until recently, I was planning on using spherical coordinates, as suggested in another post by MJP. This is another way of encoding normals into 2 channels (since the normal is unit length, the radius is always 1, so only the two angles need storing), at fairly high quality. I was set on implementing this, until I found another page comparing various different normal storage formats. Here, Aras demonstrated what amazing quality you can get by storing normals with a spheremap transform. Not only that, but it is faster to encode than spherical coordinates. I had seen this before in a presentation by Crytek on CryEngine 3, but, for some reason, I had not thought much of it.
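For reference, the spheremap encode and decode are only a few instructions each. This sketch is adapted from Aras's comparison page and assumes unit-length view-space normals; it is not yet my tested implementation.

    // Spheremap transform (Lambert azimuthal equal-area projection).
    // Packs a unit normal into two channels; only degenerate at n.z == -1.
    float2 EncodeNormal(float3 n)
    {
        float p = sqrt(n.z * 8 + 8);
        return n.xy / p + 0.5;
    }

    float3 DecodeNormal(float2 enc)
    {
        float2 fenc = enc * 4 - 2;
        float f = dot(fenc, fenc);
        float g = sqrt(1 - f / 4);
        return float3(fenc * g, 1 - f / 2);
    }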

Diffuse
Diffuse information does not need to be very precise, and so storing r,g,b in an R8G8B8A8 target works just fine.

Specular
Like diffuse, the specular attributes don't need much precision, so a single 8-bit channel for each of the two attributes is all I want to spend on them.

Layout
In the end, I went with the following layout:
R32F:
    Linear Viewspace Depth
R10G10B10A2:
    Normal.X
    Normal.Y
    Specular Intensity
    Unused (2 bits)
R8G8B8A8:
    Diffuse.R
    Diffuse.G
    Diffuse.B
    Specular Power

This layout fits everything I wanted into three 32-bit targets, which is pretty compact. While I am "wasting" 2 bits in the alpha channel of the R10G10B10A2 target, there are no 3-channel render target formats, so it will have to do. If I later want to add an extra attribute, and those 2 bits aren't enough, then it should be easy enough to change this target to an R8G8B8A8 format instead, thereby trading some normal precision for an extra usable channel.
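Put together, the gbuffer pass's pixel shader writes to all three targets at once. The sketch below is illustrative rather than final: the parameter names are my own, specular power is assumed to be pre-scaled into the 0-1 range so it fits an 8-bit channel, and EncodeNormal is the spheremap encode from earlier.

    float FarClip;             // far-plane distance, used to normalise depth
    float SpecularIntensity;
    float SpecularPower;       // assumed already scaled to 0-1
    sampler DiffuseSampler;

    struct GBufferOutput
    {
        float4 Depth      : COLOR0; // R32F
        float4 NormalSpec : COLOR1; // R10G10B10A2
        float4 Diffuse    : COLOR2; // R8G8B8A8
    };

    GBufferOutput GBufferPS(float3 viewPosition : TEXCOORD0,
                            float3 viewNormal   : TEXCOORD1,
                            float2 texCoord     : TEXCOORD2)
    {
        GBufferOutput output;

        // Linear view-space depth, normalised by the far plane.
        // (With XNA's right-handed view space this may need a negation.)
        output.Depth = float4(viewPosition.z / FarClip, 0, 0, 0);

        output.NormalSpec = float4(EncodeNormal(normalize(viewNormal)), SpecularIntensity, 0);
        output.Diffuse = float4(tex2D(DiffuseSampler, texCoord).rgb, SpecularPower);
        return output;
    }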

Renderer - Phases

Most of the time that I'm drawing something, it is to create some resource which is used later on in the frame. For example, drawing the gbuffer, or a shadow map. Each time, you set a render target, draw things with a particular sort of shader, and then save the render target for use as a texture later.

I have encapsulated this within render "phases". Each phase defines a set of inputs and outputs. The renderer then topologically sorts all the phases, to ensure that resources have been created before a phase needs to use them as input. As a bonus, since the renderer knows when each resource is taken as input, it can work out when each resource is last used, and automatically mark that render target for re-use by a later phase.
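A stripped-down sketch of the idea, with hypothetical class and member names and a simple depth-first topological sort (the target re-use tracking, and any cycle detection, are omitted here):

    using System;
    using System.Collections.Generic;

    // A phase names the resources it consumes and the ones it produces.
    public abstract class RenderPhase
    {
        public abstract IEnumerable<string> Inputs { get; }
        public abstract IEnumerable<string> Outputs { get; }
        public abstract void Draw();
    }

    public static class PhaseSorter
    {
        // Orders phases so every resource is produced before it is consumed.
        public static List<RenderPhase> Sort(IList<RenderPhase> phases)
        {
            var producers = new Dictionary<string, RenderPhase>();
            foreach (var phase in phases)
                foreach (var output in phase.Outputs)
                    producers.Add(output, phase);

            var sorted = new List<RenderPhase>();
            var visited = new HashSet<RenderPhase>();

            Action<RenderPhase> visit = null;
            visit = phase =>
            {
                if (!visited.Add(phase))
                    return;
                foreach (var input in phase.Inputs)
                {
                    RenderPhase producer;
                    if (producers.TryGetValue(input, out producer))
                        visit(producer);
                }
                sorted.Add(phase);
            };

            foreach (var phase in phases)
                visit(phase);

            return sorted;
        }
    }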

Sunday 18 July 2010

Renderer - Materials

I'm about to start my final year at university, and this year the largest module is our individual project. We get to choose to do whatever we want for this project, provided that it is a non-trivial programming problem; so I am going to take this as an opportunity to write the deferred renderer I have been planning for some time - and have it contribute towards my degree. I'm pretty excited about writing this, but unfortunately a university project and a long report are synonymous.

I hate writing reports. It means I have to write notes on what I'm doing... that's almost like writing documentation! No matter how good my intentions are to document what I'm doing, when I finish a module or piece of functionality, I want to get right on into the next one. Writing it all up seems such a chore.

So, thinking of ways to make this bit a little more interesting, I thought I could keep a project log on this blog. I've learned probably most of what I know about game development from reading various blogs, so it feels good to give a little back to the community.

Materials
Everything you draw in XNA (and the last couple of versions of DirectX) has to be drawn with a shader. These shaders contain code which executes on the graphics card and ultimately calculates the colour of each rasterised pixel.

Shaders usually require a set of parameters. Some of these describe properties of the material being rendered, for example the diffuse texture which gives the surface its colour. Others are information about the object being rendered, like the world transform matrix, or the camera's view matrix.

The XNA framework represents shaders with the Effect class. By default, each effect in a model is pre-loaded with the material parameters for that model, but the rest need to be set by the game as things are drawn. This is fine if you always know exactly which shader parameters need to be set every time you use an effect, but it means you effectively have to hard-code which parameters are available to a particular model when it is drawn.

What I really wanted was some way of data-binding information the game is aware of into whatever parameters the shader wants. Conveniently, HLSL provides two ways of adding metadata to parameters: semantics and annotations. There is already a standard for using annotations to bind values to parameters, called Standard Annotations and Semantics (SAS). However, SAS is only really used in shader development programs like FXComposer and RenderMonkey. The annotations in SAS are a little verbose, and don't include many of the bindings which I'm likely to want in my renderer. Following this standard doesn't really make sense, then. Instead, I'm just going to use semantics, which are simple strings attached to the parameters. The engine can look at these semantics and look up the corresponding value to assign from a dictionary.
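In the shader, that just means tagging each parameter with whichever semantic string the engine knows how to fill in. The semantic names here are my own illustration rather than an established convention:

    // The engine reads these semantics (via EffectParameter.Semantic in XNA)
    // and binds each parameter to the matching value in its dictionary.
    float4x4 World          : WORLD;
    float4x4 View           : VIEW;
    float4x4 Projection     : PROJECTION;
    float3   CameraPosition : CAMERAPOSITION;
    texture  DiffuseTexture : DIFFUSEMAP;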

There are a couple of implementation difficulties with this system. Firstly, the Compact Framework running on the Xbox 360 has a crap (non-generational) garbage collector. That means that using a Dictionary of objects to store metadata in the game is not a good idea, as most of the data going into it will be value types (such as matrices), and so will be boxed. Allocating during gameplay on the Compact Framework will cause the GC to run, which can cause the game to stutter. So instead of a dictionary of objects, I use a dictionary of my own generic box class. This means that a value is manually boxed the first time something is assigned to that specific key, but from then on the box is re-used by simply replacing its internal value. This also allows shaders to cache the box, thereby avoiding the dictionary lookup.
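Something along these lines (a sketch with hypothetical names; the real class has a little more to it):

    using System.Collections.Generic;

    // Non-generic base so boxes of different types can share one dictionary.
    public abstract class Box { }

    // The value lives inside the box, so a value type is boxed exactly once.
    public class Box<T> : Box
    {
        public T Value;
    }

    public class RendererMetadata
    {
        private readonly Dictionary<string, Box> data = new Dictionary<string, Box>();

        // Shaders can hold on to the returned box to skip the lookup next frame.
        public Box<T> Get<T>(string key)
        {
            Box box;
            if (!data.TryGetValue(key, out box))
            {
                box = new Box<T>();
                data.Add(key, box);
            }
            return (Box<T>)box;
        }

        public void Set<T>(string key, T value)
        {
            Get<T>(key).Value = value;
        }
    }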

The second difficulty is that setting the value of a shader parameter can only be done through the SetValue methods. There is an overload of this method for each type which a shader parameter can store. The problem then is in calling the correct overload. We could look at the type of the value we are assigning, and then use reflection to find the correct method, but calling methods via reflection is far too slow. Instead, I wrote an adapter class for each overload, and the shader instantiates the correct adapter when it is constructed. This is not neat, but it works.
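For example, the adapter for matrix parameters might look something like this (a sketch, not the project's actual code; Box<Matrix> is the box class from above):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public abstract class ParameterAdapter
    {
        public abstract void Apply();
    }

    // One concrete adapter per SetValue overload; the shader picks the right
    // one once, when the effect is loaded, so nothing is reflected per frame.
    public class MatrixParameterAdapter : ParameterAdapter
    {
        private readonly EffectParameter parameter;
        private readonly Box<Matrix> source;

        public MatrixParameterAdapter(EffectParameter parameter, Box<Matrix> source)
        {
            this.parameter = parameter;
            this.source = source;
        }

        public override void Apply()
        {
            // Statically calls the Matrix overload of SetValue: no boxing, no reflection.
            parameter.SetValue(source.Value);
        }
    }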

With this in place, I can now draw things with any arbitrary shader, and the shader will automatically go out and grab whatever data it needs from the engine, without me needing to know at the time of writing what that data will be.