Tech Interview: Halo: Reach

An exclusive, in-depth chat with Bungie on the making of Reach.

Stop us if we're getting too technical... on the other hand, don't bother! When the opportunity arose to talk tech with Bungie on any topic of our choosing, let's just say that we didn't hold back. And luckily for us, neither did the studio. What we have here is a titanic 6000-word insight into the technical make-up of the Xbox 360's biggest exclusive of the year.

In this piece, we go into depth on a vast array of technical topics: we talk about the deferred rendering solutions employed by the Halo games, the enhancements made for Reach, anti-aliasing, atmospherics and alpha, plus we talk about how high-dynamic range lighting is handled in the new game.

It's fair to say that this is pretty high-level stuff, but our other discussions on performance, co-op/split-screen, artificial intelligence, animation, motion capture, the return of the Elites, plus the process by which Bungie polishes its game before release should be accessible to all. Also, we get to the bottom of the mystery of the deleted campaign level that had the player in control of a Covenant Scarab...

Joining us for this interview are graphics engineer Chris Tchou, character designer Chris Opdahl, community/writer Eric Osborne, senior animation lead Richard Lico and character engineer Max Dyckhoff. Thanks to each and every one of them for putting so much time and effort into what is one of the most expansive and comprehensive tech interviews we've ever published here at Digital Foundry.

The last hurrah for the 1.2TB of lossless HDMI captures Digital Foundry has of Halo: Reach. Here's the entire single-player campaign condensed into 16 minutes, 34 seconds. Um, spoilers...
Digital Foundry: Let's kick off by talking about upgrades to the renderer. You're able to handle far more light sources than previously - have you adopted a deferred model? What research did you undertake, and what solution did you eventually settle upon?
Chris Tchou

Halo 3 and Halo 3: ODST used a 'semi-deferred' two-pass rendering approach, except for small decoration objects like grass and pebbles, which were forward-rendered in a single pass for speed. Semi-deferred rendering allowed us to have cheap decals, but we didn't use it for deferred lighting; the lighting was rendered in the second pass over the geometry so we could have complex light-map lighting and nice-looking metallic shininess (i.e. something better than Phong specular). For Halo: Reach, we rebuilt the deferred buffers so they could better approximate our specular models, which let us use fast deferred dynamic lights everywhere without losing shininess.

On top of that, we also built a system to determine when objects were not taking advantage of the deferred path (i.e. they had no decals or complex deferred lights touching them) and switch those objects on the fly to the faster one-pass forward rendering. Yaohua Hu also spent a lot of time researching an improved light-map representation (it's better than spherical harmonics!) that gives us the same support for area light sources, improved contrast, fewer artifacts, a smaller memory footprint and much better performance. This helped to free up a lot of GPU time to use for the dynamic deferred lights and other graphical goodies.
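As a rough illustration of the on-the-fly path switch Tchou describes, a renderer can check each frame whether any decals or complex deferred lights touch an object and route it down the cheaper forward path when none do. This is a minimal sketch under that assumption; the class and field names are hypothetical, not Bungie's engine code:

```python
# Hypothetical sketch of per-object render-path selection, assuming the
# renderer already tracks which decals and deferred lights intersect each
# object's bounds. Not Bungie's actual code.
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    touching_decals: list = field(default_factory=list)
    touching_deferred_lights: list = field(default_factory=list)

def choose_render_path(obj: SceneObject) -> str:
    """Objects with no decals and no complex deferred lights gain nothing
    from the two-pass semi-deferred path, so render them in one forward pass."""
    if obj.touching_decals or obj.touching_deferred_lights:
        return "deferred"
    return "forward"
```

A pebble touched by nothing takes the forward path; a wall with a bullet-hole decal stays on the deferred path. The win is that the decision is re-evaluated every frame, so objects migrate between paths as lights and decals come and go.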

Digital Foundry: Previously, there was brief mention of being able to render many more particles in Reach, and in the first ViDoc we saw a fleeting look at a demo - what's new here and how is the tech used throughout the game?
Chris Tchou

We built a particle system to handle the specific case of numerous small transient particles - basically rock chips, dirt puffs, rain drops, splashes, sparks and that kind of thing. I am presenting it in more detail at the next GDC, but the neat part is that it can handle tens of thousands of collisions/bounces each frame by reading the depth and normal buffers, and the whole thing takes less than 0.3ms (about 1/100th of a frame), which looks pretty good compared to the seven standard particle collisions per frame allowed by the effects budget.

The new particle system allowed the effect artists to use huge numbers of these small colliding particles in their effects without worrying about performance at all. Oh and it's used for the rain too: if you watch the rain in slow-motion in the theatre mode you can follow a single rain drop as it falls until it splashes onto something!
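The core trick here - colliding particles against the depth and normal buffers rather than against scene geometry - can be sketched in a few lines. This is a heavily simplified CPU toy, not the GPU implementation Tchou describes: the "depth buffer" below is just a heightfield, and all function names are illustrative assumptions.

```python
# Toy sketch of screen-space particle collision: advance each particle, and
# if it would pass below the stored surface, reflect its velocity about the
# stored surface normal. Real versions run on the GPU for tens of thousands
# of particles per frame; everything here is a simplified assumption.

def reflect(v, n, restitution=0.4):
    """Reflect velocity v about unit normal n, damped by restitution."""
    dot = sum(vi * ni for vi, ni in zip(v, n))
    return tuple(restitution * (vi - 2.0 * dot * ni) for vi, ni in zip(v, n))

def step_particle(pos, vel, height_buffer, normal_buffer, dt=1.0 / 30.0):
    """Advance one particle one frame; bounce if it crosses the surface."""
    x = pos[0] + vel[0] * dt
    y = pos[1] + vel[1] * dt
    z = pos[2] + vel[2] * dt
    tx, ty = int(x), int(y)            # texel lookup into the buffers
    surface_z = height_buffer[ty][tx]
    if z < surface_z:                  # crossed into the surface: bounce
        vel = reflect(vel, normal_buffer[ty][tx])
        z = surface_z                  # clamp back onto the surface
    return (x, y, z), vel
```

Because the collision query is just a buffer read per particle, the cost scales with particle count rather than scene complexity - which is why thousands of bounces fit into a fraction of a millisecond on the GPU.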

Digital Foundry: How is HDR being handled this time? The dual framebuffer seemed to get a lot of flak in Halo 3 in terms of the resolution downgrade, but there wasn't much explained about it. Were other framebuffer formats (7e3/FP10 or INT16) just nowhere near comparable? Your previous GDC presentation only described the differences in terms of numbers, but the real-world comparison is difficult to visualise otherwise. What's the approach in Reach?
Chris Tchou

We use a single 7e3 buffer for our final render target in Reach. This results in a more limited HDR (about 8x over the white point, as opposed to 128x in Halo 3) but is much faster for transparents and post-processing. In practice, the difference between 8x and 128x HDR is slight - the main thing you may notice is that the bloom around bright areas loses its color more often, desaturating to white.

And yes, a single 7e3 buffer frees up more EDRAM for the final lighting pass, but the render resolution is still limited by the three buffers used in the main deferred pass. Halo 3's resolution was more constrained because we had to reserve EDRAM for dynamic shadows during the lighting pass, alongside the two HDR buffers and a depth buffer. With a single 7e3 buffer there's plenty of room left over for the shadows.
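The 7e3 (FP10) format mentioned above packs each colour channel into 10 bits: a 3-bit exponent and a 7-bit mantissa, with no sign bit. A sketch of the encode/decode maths - assuming the commonly documented bias-3 layout with denormals, not any Bungie-specific variant - shows where the limited headroom comes from:

```python
# Sketch of the 7e3 (FP10) float format: 3-bit exponent, 7-bit mantissa,
# no sign bit, exponent bias 3, denormals when the exponent field is zero.
# Illustrative only, based on the commonly documented layout.

def decode_7e3(bits: int) -> float:
    exp = (bits >> 7) & 0x7
    mant = bits & 0x7F
    if exp == 0:                                  # denormal range
        return (mant / 128.0) * 2.0 ** -2
    return (1.0 + mant / 128.0) * 2.0 ** (exp - 3)

def encode_7e3(value: float) -> int:
    """Round-to-nearest encode of a non-negative value (brute force over
    all 1024 codes, for clarity rather than speed)."""
    return min(range(1 << 10), key=lambda b: abs(decode_7e3(b) - value))
```

The largest representable value is (1 + 127/128) x 2^4 = 31.875, so the whole buffer spans only about five orders of binary magnitude - consistent with the "about 8x over the white point" figure above if the white point sits near 4.0.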

Digital Foundry: The set-ups used in Halo 3, ODST and Reach suggest that you're not fans of tiling out the EDRAM. What are your reasons here?
Chris Tchou

Multiple tiles are problematic: they either add too much controller latency (because predicated tiling delays the GPU start), or they result in too many passes over the geometry, eating up large amounts of CPU (basically rendering everything twice). Another factor is the 360's DAC, which has the super fancy up-sampling filter that hides any artifacts - we actually ran user tests on various resolutions and no one could tell the difference! So we chose to take the additional performance and reduced controller latency over the nearly imperceptible resolution increase.
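To see why tiling comes up at all, some back-of-the-envelope arithmetic on the 360's 10MB of EDRAM helps. The buffer counts and formats below are illustrative assumptions for the sake of the sum, not Reach's actual layout:

```python
# Back-of-the-envelope EDRAM budget check for the Xbox 360's 10MB of EDRAM.
# Buffer counts and bytes-per-pixel here are illustrative assumptions,
# not the actual layout used by any Halo title.
EDRAM_BYTES = 10 * 1024 * 1024

def edram_usage(width, height, buffers_bpp):
    """Total EDRAM bytes for a set of render targets at one resolution."""
    return sum(width * height * bpp for bpp in buffers_bpp)

def fits_without_tiling(width, height, buffers_bpp):
    return edram_usage(width, height, buffers_bpp) <= EDRAM_BYTES
```

For example, three 32-bit targets (say, two G-buffers plus depth) at full 1280x720 need about 10.5MB and so would force tiling, while the same three targets at 1152x720 come in just under 10MB. That trade - a slightly reduced resolution the hardware scaler then hides - is exactly the one described above.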