Tech Interview: Split/Second • Page 2

Digital Foundry gets technical with tech director David Jefferies.

Digital Foundry: Bearing in mind that some of the most successful racing games - the Forzas, the Gran Turismos, the Burnouts - ship at 60 frames per second, what was the decision-making process involved in going for 30?

David Jefferies: The only factor you need to take into account when choosing whether to run at 30FPS or 60FPS is which option will give the consumer the best possible experience. The answer to that question changes on a game-by-game basis.

Regardless of how optimised your engine is, you can, by definition, render twice as much detail at 30FPS as at 60FPS. For Split/Second we felt that the consumer experience would be best enhanced by having more detailed environments, more physics, more VFX and explosions, more lighting and more impressive Power Plays.

If done properly, a 30FPS game that stretches the console to its limits is more time-consuming to create than a 60FPS game that stretches the console to its limits. This is because if you're going to draw twice as much on-screen then your artists need to generate more assets at a higher level of detail.

We've released a lot of 60FPS games and we've released a lot of 30FPS games and we make the decision about frame-rate at the beginning of each project - 30FPS is definitely the right choice for Split/Second but it may be that in the future with a different type of game we run at 60.

Digital Foundry: Whether it's 30 or 60, consistent frame-rate is something that's a common factor in all the best racing titles. You achieved a pretty solid 30 with Pure: can you talk us through the challenges of achieving the same with Split/Second? Those set-pieces and enormous explosions must be hugely stressful in terms of both physics and GPU load...

David Jefferies: This was by far the biggest technical challenge on Split/Second. Our largest Power Plays can be 1.8km in length, have over a thousand joints, hundreds of physics objects, and dozens of lights and particles, and be rendered in an environment containing a couple of million visible polygons - not to mention the bloom, colour grading, anamorphic lens flare, HDR and motion blur.

If you were to add up all of the programmer time spent optimising the engine it would be measured in decades, and we've invented some cool new rendering paradigms along the way. Catch our talk "Screen Space Classification for Efficient Deferred Shading" at SIGGRAPH this year for some more in-depth insights.

Digital Foundry: Sustaining that frame-rate consistently throughout the game must involve a huge level of co-operation between designers, artists and programmers. What are the systems you have in place for keeping any given scene in budget, bearing in mind the huge variance that the number of cars and inclusion of set-pieces must introduce?

David Jefferies: Yes, optimisation is just as much about the artists as the programmers. One of the cool internal tools we provide for them is called Megabowles, and it runs through each environment in the game, activating each Power Play and measuring the game's performance at every point on the track.

It updates a huge database that contains information about where the game is going over its rendering budget and why. This information is then fed back to the artists who iterate over their assets until they are in budget.

We deal with the issue of the variance that the cars and set-pieces introduce by having separate budgets for the different components of the game. So the particles must always be within the particle budget, all the cars must render on screen within the vehicle budget, the Power Plays must be within the Power Play budget and so on.

All the budgets added together give 33ms (a single 30Hz update takes 33.33ms). So it doesn't matter what crazy stuff is going on screen - as long as all the components are within their budgets then the frame will finish rendering in time.
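The budgeting scheme described above can be sketched in a few lines. The component figures below are purely hypothetical - the interview doesn't give Split/Second's real per-system numbers - but they show why the sum of the budgets bounding the frame time is what guarantees the frame finishes on time:

```python
# Illustrative per-component frame budgets (the figures are
# hypothetical, not Split/Second's actual numbers).
FRAME_BUDGET_MS = 1000.0 / 30.0  # one 30Hz frame takes ~33.33ms

budgets_ms = {
    "environment": 14.0,
    "vehicles":     8.0,
    "particles":    4.0,
    "power_plays":  4.0,
    "post_fx":      3.3,
}

def frame_fits(budgets, frame_budget=FRAME_BUDGET_MS):
    """If every component stays inside its own budget, the sum of
    the budgets bounds the whole frame, so it renders in time
    regardless of what is happening on screen."""
    return sum(budgets.values()) <= frame_budget

print(frame_fits(budgets_ms))  # True: 33.3ms <= 33.33ms
```

The point of the structure is that no single system has to know about the others: as long as each team keeps its component inside its own slice, the whole frame is guaranteed.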

Digital Foundry: Let's talk Split/Second engine tech in more detail. What approach was used for the lighting? The deferred shading approach would imply certain limitations, particularly on consoles. The lighting of objects not directly in the sunlight looks good, especially when cars pass by.

David Jefferies: The Split/Second renderer is a gamma-correct, deferred shading renderer. The gamma-correct bit is very important. What it means is that we correctly convert the pixel shader input values into linear space before applying the lighting calculations using a high-precision render target.

If you don't do this then the lighting calculations are performed in gamma space, which is wrong. Halve the value 1.0 in gamma space and you get 0.5, but half brightness actually corresponds to a gamma-space value of about 0.73 - so the naive result displays far darker than intended.
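The arithmetic behind that 0.73 figure is easy to verify. A minimal sketch, assuming a simple power-law display gamma of 2.2 (the interview doesn't specify the exact transfer function the engine uses):

```python
GAMMA = 2.2  # assumed display gamma; real pipelines often use sRGB

def to_linear(g):
    """Decode a gamma-encoded value into linear light."""
    return g ** GAMMA

def to_gamma(l):
    """Encode a linear-light value back into gamma space."""
    return l ** (1.0 / GAMMA)

# Correct: decode to linear space, halve the light, re-encode.
correct = to_gamma(to_linear(1.0) / 2.0)   # ~0.73

# Naive: halve the gamma-encoded value directly.
naive = 1.0 / 2.0                          # 0.5 - displays far too dark

print(round(correct, 2), naive)
```

The gap between 0.73 and 0.5 is why gamma-space lighting maths is visibly wrong, particularly at low intensities where the curve is steepest.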

You can massage your pixel shader inputs (i.e. textures and lighting) to give roughly the right result in certain circumstances but you'll never get the right result all the time, especially with low-intensity pixel values.

Most games don't bother with gamma correctness because it takes many months to develop a full gamma-correct pipeline and you can get it 'nearly' right without. If you do take the time to develop the pipeline then you guarantee that your lighting calculations are always absolutely correct. It's this that you're seeing when you look at the objects not directly in sunlight in Split/Second - the low-intensity pixels are lit correctly rather than being fudged.

Another important factor here is the anti-aliasing. Essentially MSAA averages the colour of the sub-pixels on a polygon edge, so for 2xMSAA it does something like P = (Pa + Pb) / 2.0. As explained earlier, maths doesn't work as you'd expect in a non-linear space such as gamma space, so the equation will not give the correct results.
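Concretely, resolving an edge between a white and a black sub-pixel shows the error. A small sketch, again assuming a power-law gamma of 2.2:

```python
GAMMA = 2.2  # assumed display gamma

def to_linear(g): return g ** GAMMA
def to_gamma(l):  return l ** (1.0 / GAMMA)

# Two sub-pixels straddling a polygon edge: one white, one black.
pa, pb = 1.0, 0.0

# Naive 2xMSAA resolve, averaging the gamma-encoded values:
wrong = (pa + pb) / 2.0                                  # 0.5

# Gamma-correct resolve: decode, average in linear light, re-encode:
right = to_gamma((to_linear(pa) + to_linear(pb)) / 2.0)  # ~0.73

print(wrong, round(right, 2))
```

The naive resolve produces edge pixels that are too dark, which reads as a dark fringe along high-contrast edges - the artifact low-contrast lighting tends to hide.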

Most games ship with incorrect anti-aliasing; the developer either ignores the artifacts that result or uses low-contrast lighting, which makes the artifacts less prominent.

The lighting style of Split/Second called for very high-contrast lighting and so it was essential that we got this correct.

The deferred shading part of the renderer means we break the old connection between geometry and lighting. With a traditional renderer, when rendering an object you need to specify up front how many lights are going to affect it (usually four) because the lights are rendered during the 3D geometry pass.

With a deferred shading renderer we defer the lighting of the scene until after the 3D geometry pass. At this point we have all the geometry rendered with albedo colour and then we apply the lighting in screen space.

This means we can render as many lights as we have fill-rate for - in some of our night-time levels the number of visible lights runs into the hundreds.
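The shape of that lighting pass can be sketched simply. This is an illustrative CPU-side model, not the engine's actual code: the geometry pass fills a G-buffer (reduced here to just albedo), and the lighting pass then walks every light and accumulates its contribution per pixel, so the light count is bounded by fill-rate rather than by any per-object limit. The attenuation model and all the numbers are assumptions for the sake of the example:

```python
W, H = 4, 4

# G-buffer from the geometry pass: albedo per pixel
# (depth and normals omitted for brevity).
gbuffer_albedo = [[(0.5, 0.5, 0.5) for _ in range(W)] for _ in range(H)]

lights = [
    {"pos": (1, 1), "colour": (1.0, 0.8, 0.6), "radius": 2.0},
    {"pos": (3, 2), "colour": (0.2, 0.4, 1.0), "radius": 1.5},
]  # could just as easily be hundreds of lights

def shade(gbuffer, lights):
    """Deferred lighting pass: accumulate every light's contribution
    into each pixel in screen space, after geometry is resolved."""
    out = [[(0.0, 0.0, 0.0)] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            albedo = gbuffer[y][x]
            r = g = b = 0.0
            for light in lights:
                dx, dy = x - light["pos"][0], y - light["pos"][1]
                dist = (dx * dx + dy * dy) ** 0.5
                atten = max(0.0, 1.0 - dist / light["radius"])
                r += albedo[0] * light["colour"][0] * atten
                g += albedo[1] * light["colour"][1] * atten
                b += albedo[2] * light["colour"][2] * atten
            out[y][x] = (r, g, b)
    return out

lit = shade(gbuffer_albedo, lights)
```

Because the cost scales with lit pixels rather than with geometry, adding another light only costs the screen area it covers - which is what makes hundreds of visible lights feasible.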

On the PS3 the lighting pass is done on the SPUs to take some of the load from the GPU. This technique means that every explosion, every spark, every tunnel light is a true light source. It also allows the artists to place down hundreds of lights to simulate bounce light in the environment - these lights subtly affect the car as it moves through the scene and help ground it in the world.
