Friday, 23 July 2010

GBuffer Layout

The geometry buffer (or gbuffer) is where all material properties for each object in your scene are stored. For each pixel this includes (but is not limited to) the position of the geometry, its surface normal, and its diffuse colour. All this information cannot fit into a single render target, so multiple render targets are drawn onto concurrently. The more information you want to store, and the more precision you want it stored at, the larger your gbuffer will have to be.

The choice of gbuffer format can be the most critical decision in the entire renderer design. The performance of deferred renderers is often bottlenecked by fillrate and texture read bandwidth. The gbuffer phase writes a lot more information than is usually written in a forward renderer, and this "fat" gbuffer is sampled for every pixel affected by every light. As such, it is a good idea to use as compact a gbuffer as possible. However, storing data at a lower precision can greatly affect quality, and storing less information will restrict what kind of materials you can implement in your lighting pass.

On DirectX9 class hardware (such as that targeted by XNA), each render target being drawn onto concurrently needs to have both the same resolution and bit-depth. This poses some difficulties: if you want to use a high bit-depth target to precisely store position, you will also need to use equally large targets for information you care much less about, such as diffuse colour. Designing the gbuffer layout, then, becomes a juggling act of getting just enough precision to look good on the important attributes, while not wasting bandwidth and memory where it is not needed.

As I am only planning on implementing simple Phong shading in my lighting pass, I need to store position, diffuse colour, normals, specular intensity and specular power in my gbuffer.

Position
The simplest way to store position is to put x, y, and z into the red, green, and blue components of a 32-bit R8G8B8A8 target. This is super cheap to encode and decode, as you don't need to do anything. It also leaves us with an extra 8-bit channel spare, which would do nicely for one of the specular attributes. However, the precision is terrible. If position is stored imprecisely, your lighting may show signs of banding. Not nice.

You could use a 64-bit target instead, which would give acceptable precision, but then you would have to use 64-bit targets for all the other attributes, which would double the size of the gbuffer. I would much rather stick to 32 bits if possible. Fortunately, you implicitly know the screen-space position of a pixel as you are shading it, so if you know its depth, you can reconstruct the full 3D position. This means you only need to store depth, and so can stick it in an R32F target and get full 32-bit floating point precision. MJP has a great post comparing the quality of different ways of storing depth for the purpose of reconstructing position.
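
To make the reconstruction concrete, here is the idea in C# (a sketch only - in practice this happens in the lighting shader, and the frustumRay here is an assumption: the view-space ray through the pixel, scaled so that its z component is 1):

// Reconstruct a view-space position from linear view-space depth.
// frustumRay is the view-space ray through this pixel, scaled so that
// frustumRay.Z == 1 (typically interpolated from the far-plane corners).
Vector3 ReconstructViewSpacePosition(float linearDepth, Vector3 frustumRay)
{
    // With the ray's z normalised to 1, scaling by the stored depth
    // lands back on the original surface point.
    return frustumRay * linearDepth;
}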

Normals
Like position, the simplest way to store normals is to put x, y, and z into an R8G8B8A8 target. And like position, the quality is not good enough. Early deferred renderers observed that the normal is always a unit vector, and assuming that z cannot be negative (in view space, the normal cannot point away from the viewer), you only need to store x and y. You can then reconstruct z as "sqrt(1 - x*x - y*y)". As far as I know, this was first popularised by Guerrilla Games with Killzone 2. However, you cannot rely on z always being positive, due to normal mapping and the perspective projection, as demonstrated by Insomniac Games.

Until recently, I was planning on using spherical coordinates as suggested in another post by MJP. This is another way of encoding normals into 2 channels (since the length is always 1, only the two angles are needed), at fairly high quality. I was set on implementing this, until I found another page comparing various normal storage formats. Here, Aras demonstrated what amazing quality you can get by storing normals with a spheremap transform. Not only that, but it's faster to encode than spherical coordinates. I had seen this before in a presentation by Crytek on CryEngine 3, but for some reason I had not thought much of it.
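
For reference, this is the spheremap transform written out in C# (a sketch; the real encode and decode belong in the gbuffer and lighting pixel shaders, and the method names are mine):

// Encode a unit view-space normal into two [0, 1] channels.
Vector2 EncodeNormal(Vector3 n)
{
    float p = (float)Math.Sqrt(n.Z * 8f + 8f);
    return new Vector2(n.X / p + 0.5f, n.Y / p + 0.5f);
}

// Decode the two channels back into a unit normal, recovering z from
// the fact that the normal has unit length.
Vector3 DecodeNormal(Vector2 enc)
{
    Vector2 fenc = enc * 4f - new Vector2(2f, 2f);
    float f = Vector2.Dot(fenc, fenc);
    float g = (float)Math.Sqrt(1f - f / 4f);
    return new Vector3(fenc.X * g, fenc.Y * g, 1f - f / 2f);
}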

Diffuse
Diffuse information does not need to be very precise, and so storing r,g,b in an R8G8B8A8 target works just fine.

Specular
Like diffuse, the specular attributes don't need much precision, and so a single 8 bit channel for each of the 2 attributes would be preferred.

Layout
In the end, I went with the following layout:
R32F:
    R - Linear view-space depth
R10G10B10A2:
    R - Normal.X
    G - Normal.Y
    B - Specular intensity
    A - Unused
R8G8B8A8:
    R - Diffuse.R
    G - Diffuse.G
    B - Diffuse.B
    A - Specular power

This layout fits everything I wanted into three 32-bit targets, which is pretty compact. While I am "wasting" 2 bits in the alpha channel of the R10G10B10A2 target, there are no 3-channel render target formats, so it will have to do. If I later want to add an extra attribute and those 2 bits aren't enough, it should be easy enough to change this target to an R8G8B8A8 format instead, thereby trading some normal precision for an extra usable channel.
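
Creating the targets themselves is straightforward. Something along these lines (a sketch assuming the XNA 4.0-style RenderTarget2D constructor and SetRenderTargets; the variable names are made up):

// The three gbuffer targets, all at the back buffer resolution.
var depthTarget   = new RenderTarget2D(device, width, height, false,
                        SurfaceFormat.Single, DepthFormat.Depth24Stencil8);   // linear view-space depth
var normalTarget  = new RenderTarget2D(device, width, height, false,
                        SurfaceFormat.Rgba1010102, DepthFormat.None);         // normal.xy + specular intensity
var diffuseTarget = new RenderTarget2D(device, width, height, false,
                        SurfaceFormat.Color, DepthFormat.None);               // diffuse rgb + specular power

// Bind all three at once for the gbuffer phase.
device.SetRenderTargets(depthTarget, normalTarget, diffuseTarget);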

Renderer - Phases

Most of the time that I'm drawing something, it is to create some resource which is used later on in the frame. For example, drawing the gbuffer, or a shadow map. Each time, you set a render target, draw things with a particular sort of shader, and then save the render target for use as a texture later.

I have encapsulated this within render "phases". Each phase defines a set of inputs and outputs. The renderer then topologically sorts all the phases, to ensure that resources have been created before a phase needs to use them as input. As a bonus, since the renderer knows when each resource is taken as input, it can work out when each resource is last used, and automatically mark the render target for re-use by a later phase.
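
A sketch of what I mean (hypothetical names, not the actual classes):

// Each phase declares the resources it reads and writes by name.
abstract class RenderPhase
{
    public abstract IEnumerable<string> Inputs { get; }   // resources sampled as textures
    public abstract IEnumerable<string> Outputs { get; }  // render targets written
    public abstract void Draw();
}

// Order the phases so that every input has been produced before it is read.
// (Cycle detection omitted for brevity.)
static List<RenderPhase> SortPhases(IList<RenderPhase> phases)
{
    var producerOf = new Dictionary<string, RenderPhase>();
    foreach (var phase in phases)
        foreach (var output in phase.Outputs)
            producerOf[output] = phase;

    var sorted = new List<RenderPhase>();
    var visited = new HashSet<RenderPhase>();

    Action<RenderPhase> visit = null;
    visit = phase =>
    {
        if (!visited.Add(phase))
            return;
        foreach (var input in phase.Inputs)
        {
            RenderPhase producer;
            if (producerOf.TryGetValue(input, out producer))
                visit(producer);                           // visit dependencies first
        }
        sorted.Add(phase);
    };

    foreach (var phase in phases)
        visit(phase);

    return sorted;
}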

Sunday, 18 July 2010

Renderer - Materials

I'm about to start my final year at university, and this year the largest module is our individual project. We get to choose to do whatever we want for this project, provided that it is a non-trivial programming problem; so I am going to take this as an opportunity to write the deferred renderer I have been planning for some time - and have it contribute towards my degree. I'm pretty excited about writing this, but unfortunately a university project and a long report are synonymous.

I hate writing reports. It means I have to write notes on what I'm doing... that's almost like writing documentation! No matter how good my intentions are to document what I'm doing, when I finish a module or piece of functionality I want to get right on into the next one. Writing it all up seems such a chore.

So, thinking of ways to make this bit a little more interesting, I thought I could keep a project log on this blog. I've learned probably most of what I know about game development from reading various blogs, so it feels good to give a little back to the community.

Materials
Everything you draw in XNA (and the last couple versions of DirectX) has to be drawn with a shader. These shaders contain code which executes on the graphics card, which ultimately calculates the colour of each rasterised pixel.

Shaders usually require a set of parameters. Some of these describe properties of the material being rendered; for example, the diffuse texture describing its colour. Others are information about the object being rendered, like the world transform matrix or the camera's view matrix.

The XNA framework represents shaders with the Effect class. By default, each effect in a model is pre-loaded with the material parameters for the model. However, the rest need to be set by the game as things are drawn. This is fine if you always know exactly what shader parameters need to be set every time you use an effect, but it means you have to effectively hard-code what parameters are available to a particular model when it is drawn.

What I really wanted was some way of data binding information which the game is aware of into whatever parameters the shader wants. Conveniently, HLSL provides two ways of adding metadata to parameters: semantics and annotations. There is already a standard for using annotations to bind values to parameters, called Standard Annotations and Semantics (SAS). However, SAS is only really used in shader development programs like FXComposer and RenderMonkey. The annotations in SAS are a little verbose, and don't include many of the bindings which I'm likely to want to use in my renderer. Following this standard doesn't really make sense, then. Instead, I'm just going to use semantics, which are simple strings attached to the parameters. The engine can look at these semantics and look up the corresponding value to assign from a dictionary.
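
In rough terms, the binding looks something like this (a simplified sketch; the real version goes through the boxes and adapters described below rather than untyped SetValue calls):

// Walk the effect's parameters and push in whatever the engine has for each semantic.
// In HLSL a semantic is just the string after the colon, e.g. "float4x4 World : WORLDMATRIX;".
void ApplySemantics(Effect effect, IDictionary<string, object> engineValues)
{
    foreach (EffectParameter parameter in effect.Parameters)
    {
        if (string.IsNullOrEmpty(parameter.Semantic))
            continue;

        object value;
        if (!engineValues.TryGetValue(parameter.Semantic.ToUpperInvariant(), out value))
            continue;

        if (value is Matrix)
            parameter.SetValue((Matrix)value);
        else if (value is Texture2D)
            parameter.SetValue((Texture2D)value);
        // ...and so on for the other parameter types.
    }
}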

There are a couple of implementation difficulties with this system. Firstly, the Compact Framework running on the Xbox 360 has a crap (non-generational) garbage collector. That means that using a Dictionary to store metadata in the game is not a good idea, as most of the data going into it will be value types (such as matrices), and so will be boxed. Allocating during gameplay on the Compact Framework will cause the GC to run, which can cause the game to stutter. Instead of using a dictionary of objects, I used a dictionary of my own generic box class. This means that the values are manually boxed the first time something is assigned to a specific key, but from then on the box is re-used by simply replacing its internal value. This also allows shaders to cache the box, thereby avoiding the dictionary lookup.
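
The box itself is tiny. Roughly (names are mine):

// A non-generic base so boxes of different types can share one dictionary.
abstract class Box { }

sealed class Box<T> : Box
{
    public T Value;
}

class DataContext
{
    private readonly Dictionary<string, Box> boxes = new Dictionary<string, Box>();

    // Allocates a box the first time a key is seen; every later call returns the same box.
    public Box<T> Get<T>(string key)
    {
        Box box;
        if (!boxes.TryGetValue(key, out box))
        {
            box = new Box<T>();
            boxes.Add(key, box);
        }
        return (Box<T>)box;
    }

    // Overwrites the box's value in place - no per-frame garbage.
    public void Set<T>(string key, T value)
    {
        Get<T>(key).Value = value;
    }
}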

The second difficulty is that setting the value of a shader parameter can only be done through the SetValue methods. There is an overload of this method for each type which a shader parameter can store. The problem, then, is in calling the correct overload. We could look at the type of the value we are assigning, and then use reflection to find the correct method, but calling methods via reflection is far too slow. Instead, I wrote an adapter class for each overload, and the shader instantiates the correct adapter when it is constructed. This is not neat, but it works.
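
The adapters are nothing clever - roughly one small class per SetValue overload, picked once when the shader is loaded (a sketch, using the Box<T> from above; names are mine):

abstract class ParameterSetter
{
    public abstract void Apply(EffectParameter parameter, Box box);
}

sealed class MatrixSetter : ParameterSetter
{
    public override void Apply(EffectParameter parameter, Box box)
    {
        parameter.SetValue(((Box<Matrix>)box).Value);
    }
}

sealed class Vector3Setter : ParameterSetter
{
    public override void Apply(EffectParameter parameter, Box box)
    {
        parameter.SetValue(((Box<Vector3>)box).Value);
    }
}

// ...one per supported type, chosen by inspecting the parameter when the shader is constructed.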

With this in place, I can now draw things with any arbitrary shader, and the shader will automatically go out and grab whatever data it needs from the engine, without me needing to know at the time of writing what that data will be.

Sunday, 11 October 2009

Reflection - a lazy programmer's best friend

I've been working on a replacement text rendering system recently, and as part of that, I needed access to font information such as the source rectangles for each character. Unfortunately, the SpriteFont class does not expose these, so I needed my own font class. If only it were possible to specify a new replacement content reader... but no problem, I thought, I'll just have to implement a new processor.

Except that's a lot of work. And I'm lazy. So I'm just gunna reflect over a loaded SpriteFont and steal its data.
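
Something along these lines (a sketch; the private field names inside SpriteFont are an assumption and differ between XNA versions, so check with a debugger or Reflector first):

// Pull a private field out of a loaded SpriteFont via reflection.
static T StealField<T>(SpriteFont font, string fieldName)
{
    var field = typeof(SpriteFont).GetField(fieldName,
        BindingFlags.NonPublic | BindingFlags.Instance);
    return (T)field.GetValue(font);
}

// e.g. (field names assumed):
var glyphs     = StealField<List<Rectangle>>(font, "glyphData");
var characters = StealField<List<char>>(font, "characterMap");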

Thursday, 17 September 2009

Command Engine


The CommandEngine class contains the core of my command console. It is basically a simple script interpreter, which can execute methods and read/write properties. Methods can be nested, and can return any type and have any parameters.

You can make a static property or method available to the engine by sticking the [Command] attribute onto it. The engine will scan the assembly and pick out all these methods and properties.

[Command]
public static string Print(object o)
{
    var s = o.ToString();
    Debug.WriteLine(s);
    return s;
}


[Command]
public static string Hello { get; set; }


[Command]
public static int Framerate
{
    get { return fpsCounter.Value; }
}
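
Under the hood, the assembly scan could look roughly like this (a sketch with hypothetical names, not the actual CommandEngine internals):

// Hypothetical registries the engine would consult when executing a line of script.
private readonly Dictionary<string, MethodInfo> commands = new Dictionary<string, MethodInfo>();
private readonly Dictionary<string, PropertyInfo> options = new Dictionary<string, PropertyInfo>();

// Find every static member carrying [Command] and register it by name.
public void ScanAssembly(Assembly assembly)
{
    const BindingFlags flags =
        BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic;

    foreach (var type in assembly.GetTypes())
    {
        foreach (var method in type.GetMethods(flags))
        {
            if (method.GetCustomAttributes(typeof(CommandAttribute), false).Length > 0)
                commands[method.Name] = method;
        }

        foreach (var property in type.GetProperties(flags))
        {
            if (property.GetCustomAttributes(typeof(CommandAttribute), false).Length > 0)
                options[property.Name] = property;
        }
    }
}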

Adding instance methods and properties has to be done by calling the AddCommand or AddOption method on the engine instance respectively.

engine.AddOption(player, "Position", "Player.Position");


[Command]
public static Vector3 Vector3(float x, float y, float z) 
{ return new Vector3(x, y, z); }

Once you have some commands registered, you can call them with the Execute method:

engine.Execute("Print("Testing"));
engine.Execute("Print(Framerate)")
var helloString = engine.Execute("Hello").Result;
engine.Execute("Player.Position = Vector3(0, 10, 3)");


You can download the source here.

Wednesday, 16 September 2009

Materials and Render Phases

Materials and Render Phases are going to be the core of my renderer.

A material defines an effect, any render states it wants set, a set of constant effect parameters (e.g. a texture), and a set of public application-settable effect parameters (e.g. world transform matrix).

A render phase will usually represent drawing onto a new render target. For example, drawing a shadow map for a light will be a phase, and so will drawing the g-buffer.

When a render phase requests items to draw from the scene, the manager for the DrawableComponent entity components will walk through each component and, if it is determined to be visible, ask the component whether it can draw for this phase.

If it can, it will submit to the phase a render chunk (a piece of geometry to draw), together with a material to use for drawing this chunk in this phase, and a (mostly) pre-computed integer key. This key contains stuff like depth and material ID, and is used by the phase to sort the chunks for efficient rendering.

I still haven't decided on what this key is going to be, because exactly how you should order your draw calls can vary a lot depending on your scene (and hardware, but don't think about that too much!). What I'm thinking right now is to have a few bits (2 or 3) for depth, and then everything else for material. This just sticks things into a few foreground/background depth buckets and then sorts by material. I might put a more precise depth measurement on the end too, so materials within a bucket go back to sorting by depth.
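
For example, a packing along these lines would do it (just a sketch of what I'm describing, not a final decision):

// 2 bits of depth bucket, 16 bits of material ID, 14 bits of truncated depth.
// normalisedDepth is assumed to be in [0, 1].
static uint MakeSortKey(uint depthBucket, uint materialId, float normalisedDepth)
{
    uint fineDepth = (uint)(normalisedDepth * ((1 << 14) - 1));
    return (depthBucket & 0x3u)    << 30
         | (materialId  & 0xFFFFu) << 14
         | (fineDepth   & 0x3FFFu);
}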

Saturday, 29 August 2009

Scene graphs? Yuch!

So one of my friends has been nagging me for some time to start on the graphics library I have been planning on writing for a few months now. I always had other work I needed to do first, but now I'm all done with that, and I can finally start on it.

I was thinking about how the scene system would work, and how different behaviour would be added to different entities (ok, that's more an overall engine design point than a graphics one, but meh), and I can't for the life of me figure out why so many people try to do this through inheritance. It's like the idea of inheritance has been so ingrained in people as A Good Thing that it's heresy to do anything else.

But it makes things so hard. Imagine, say, you have a DrawableEntity class. From this you inherit an NpcUnit class. Now you want to add some AI. As you probably want all NPCs to have some AI, you could stick an AIUnit class in between DrawableEntity and NpcUnit. Alright so far.

Now let's say you have a door, which you make inherit from DrawableEntity.
One of the characters in your game is a talking door. It has the characteristics of a door, and it has AI. What do you do now? You could have Door inherit from AIUnit instead, but now all doors have AI...

The solution is to use composition. Have an Entity class, which contains a collection of EntityComponents, each of which represents a piece of functionality which you want to add to the entity.

As I was searching the net for what other people had done with this, I stumbled upon a post from Nick Gravelyn, in which he talks about such a system. In his implementation, if a component has a property whose type is another EntityComponent, the entity will automatically hook that property up with an instance of that type, if one is available in its components collection.

I loved his implementation, and decided that I would use that, but with one addition.

As I was thinking at the time about how my graphics library would handle the scene graph, and about what a bad idea scene graphs are, I thought I would modify Nick's entity system so that it could fully operate as a scene graph replacement.

All that needed adding was the idea of each EntityComponent type defining a manager to manage its instances. For example, when a PositionComponent is added to an Entity, the entity will stick the component into a manager dedicated to managing PositionComponents. This manager could implement a tree to represent parent/child spatial relationships (like a sword's position being relative to the player's hand). On the other hand, a StaticDrawableComponent would register itself with a manager which places it into an octree, for efficient visibility determination.
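
In code, the manager side could be as simple as this (a sketch with hypothetical names, sitting on top of Nick's entity system; Octree here is a made-up type):

// Every component type maps to a manager; the entity registers components with it on add.
interface IComponentManager
{
    void Register(EntityComponent component);
    void Unregister(EntityComponent component);
}

// A manager for static drawables, keeping them in an octree for visibility queries.
class StaticDrawableManager : IComponentManager
{
    private readonly Octree octree = new Octree();

    public void Register(EntityComponent component)
    {
        octree.Insert((StaticDrawableComponent)component);
    }

    public void Unregister(EntityComponent component)
    {
        octree.Remove((StaticDrawableComponent)component);
    }
}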

And there you go: we are no longer trying to stuff many entirely different relationships into the same messed-up, convoluted tree.