Digital Foundry: On a related issue, you've opted out of hardware multi-sampling anti-aliasing (MSAA) in favour of a temporal solution which sometimes adds a ghosting artifact - much reduced from the beta. We've seen MLAA, DLAA, edge detect/blur - what was the thinking behind a temporal solution and how exactly did you refine it post-beta?
Chris Tchou: The idea behind temporal anti-aliasing is fairly simple: the stuff you're rendering in a given frame is very likely to be nearly the same as the previous frame, so why not leverage all that work you did drawing the previous frame to help improve the current frame? Our particular approach does half-pixel offsets in the projection matrix every other frame, and does a selective quincunx blend between the last two frames.
It is designed to turn off the frame blending per-pixel, based on the calculated screen-space motion. That is, if the pixel has not moved, we blend it, and if it has moved, we don't blend it. On the static parts of a scene, it's much more effective than standard 2x MSAA because we do gamma-correct blending, which looks much better than the blending implemented in hardware, and we are using the quincunx pattern as well.
The downside is that motion switches it off, and although aliasing is less noticeable when you are moving around, you can still see it. Another disadvantage is that it can't handle multiple layers of transparency, where some layers are stationary and others moving. So any transparent surface has to decide whether it will overwrite the pixel's motion data or not, depending on how opaque it is. The huge advantage of temporal anti-aliasing is that it's nearly free - much cheaper than MSAA with tiling.
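Chris's description boils down to a per-pixel choice: blend with the previous frame when the pixel is static, and fall back to the current frame alone when it has moved. A minimal sketch of that decision, using a plain gamma-2.2 approximation for the "gamma-correct" blend and a hypothetical motion threshold - none of these names or values are Bungie's actual implementation:

```python
def srgb_to_linear(c):
    # approximate sRGB decode with a plain gamma 2.2 power curve
    return c ** 2.2

def linear_to_srgb(c):
    # approximate sRGB encode (inverse of the above)
    return c ** (1.0 / 2.2)

def temporal_blend(current, history, motion_px, threshold=0.5):
    """Resolve one pixel of temporal AA.

    current, history: this frame's and last frame's pixel value in [0, 1]
    motion_px:        calculated screen-space motion, in pixels
    """
    if motion_px > threshold:
        # the pixel moved: skip the blend to avoid ghosting
        return current
    # the pixel is static: 50/50 blend averaged in linear light,
    # i.e. the gamma-correct blending described above
    blended = 0.5 * (srgb_to_linear(current) + srgb_to_linear(history))
    return linear_to_srgb(blended)
```

Note how the static case averages in linear light before re-encoding - that is the gamma-correct blending being contrasted with the hardware MSAA resolve, and it produces a visibly brighter, more accurate result than naively averaging the stored values.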
The ghosting artifact in the beta was caused by the first-person-view geometry (your arms and weapon) not properly calculating screen-space motion, so they were failing to switch off the frame blending when they moved. We just fixed that bug and it worked.
Digital Foundry: The screen-space ambient occlusion (SSAO) seems to conveniently take the place of shadowmaps for objects that are further out compared to objects closer to the screen. Is that intentional, a coincidence, or just part of the algorithm?
Chris Tchou: The AO replacing the shadow-map is just a happy coincidence, but we'll take advantage of it, intentional or not. The algorithm is actually a heavily modified and optimised form of HDAO, so it's naturally a screen-space effect: the ambient shadow is a constant size, in screen pixels, no matter how far away you are. This means objects that are far away appear to have large AO shadows, and the nearby ones have only a slight contact shadow near their feet. The artists preferred the look over constant world-size shadows, and it was also more efficient, so we killed two birds with one stone.
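The constant screen-pixel kernel Chris describes can be illustrated with a toy depth-buffer version. This is a simplification for illustration, not HDAO itself; the function name, the fixed pixel offsets and the `strength` falloff are all assumptions:

```python
def screen_space_ao(depth, x, y, offsets, strength=0.1):
    """Occlusion at (x, y) from neighbours sampled at fixed *pixel*
    offsets. Because the kernel is constant-size in screen space, a
    distant object's AO shadow covers more of that object in world
    terms than a nearby one's does."""
    center = depth[y][x]
    occlusion = 0.0
    for dx, dy in offsets:
        neighbour = depth[y + dy][x + dx]
        # a neighbour closer to the camera than the centre occludes it
        if neighbour < center:
            occlusion += min(1.0, (center - neighbour) / strength)
    return occlusion / len(offsets)
```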
Digital Foundry: Motion blur adds to the fluidity of the game significantly. It was there in Halo 3, but it looks like you've upgraded the system significantly. What were your aims here and what were the key achievements in the final shipping solution?
Chris Tchou: It's actually almost exactly the same algorithm as Halo 3, but the appearance was improved by several changes. When we calculate the pixel motion/blur direction, we were clamping it to a square in Halo 3, and now we clamp it to a circle. Clamping to a square has the problem that fast motions always end up in the corners of the square, resulting in diagonal blurs that don't follow the actual direction of the motion. On top of that, the improved per-pixel motion estimation for the temporal anti-aliasing helped give better results for the motion blur as well. Oh, and the motion blur is no longer gamma-correct, which makes it less physically accurate, but also faster and more noticeable.
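The square-versus-circle clamp is easy to show directly. A sketch with illustrative names - clamping each component independently (the Halo 3 behaviour) versus clamping the magnitude (the Reach behaviour):

```python
def clamp_square(vx, vy, max_len):
    # Halo 3 style: clamp each component independently; any fast
    # motion lands in a corner, so the blur direction goes diagonal
    return (max(-max_len, min(max_len, vx)),
            max(-max_len, min(max_len, vy)))

def clamp_circle(vx, vy, max_len):
    # Reach style: clamp the magnitude, preserving the blur direction
    length = (vx * vx + vy * vy) ** 0.5
    if length <= max_len:
        return (vx, vy)
    scale = max_len / length
    return (vx * scale, vy * scale)
```

For a fast, mostly horizontal motion like (10, 4) with a clamp of 2, the square clamp returns (2, 2) - a 45-degree diagonal - while the circle clamp returns a vector of length 2 that still points along the true direction of motion.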
Digital Foundry: You've discussed many of your systems in public before for SIGGRAPH or GDC, but we've never heard much about your water tech. It's obviously been radically upgraded for Reach. What are the principles here - do you use the 360's tessellator, for example?
Chris Tchou: It's a pretty big topic, but in a nutshell, it basically calculates the waves in an offscreen texture as the superposition of many splash/wave particles. It uses the GPU tessellator to convert it into a mesh on screen, and runs a custom refraction/reflection/fog/foam shader to render it. For Reach we spent a lot of time optimising the heck out of it, so we could use it on a much grander scale. We sped up the shader several-fold, turning off things like refraction when you're far away, and stopped animating it when you weren't looking at it. The visual improvements were mainly the result of more polish in setting up the shaders.
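The "superposition of wave particles" idea can be sketched as a height field that simply sums each splash's contribution. This toy version uses distance-damped ring waves; the particle parameters and the falloff are illustrative assumptions, not Reach's actual formulation:

```python
import math

def wave_height(x, y, particles, t):
    """Height at (x, y) as the superposition of simple ring ripples.
    Each particle is (cx, cy, amplitude, wavelength, speed)."""
    h = 0.0
    for cx, cy, amp, wavelength, speed in particles:
        r = math.hypot(x - cx, y - cy)
        k = 2.0 * math.pi / wavelength
        # an outward-travelling ring wave, damped with distance
        h += amp * math.cos(k * (r - speed * t)) / (1.0 + r)
    return h
```

In the real system this evaluation happens on the GPU into an offscreen texture, which the tessellator then turns into the on-screen water mesh.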
Digital Foundry: You already had a pretty impressive draw distance with Halo 3, but you've taken it to a new level in Reach. What are the major achievements for you here?
Chris Tchou: The single biggest factor was our new system to automatically generate a low-LOD version of every object and piece of level geometry in the game. This will actually be presented by Xi Wang at GDC. To give you a short summary, it builds a very efficient vertex-shaded version of each object and piece of level geometry. These LOD models render extremely fast, can be batched, and look nearly the same at distance. And because it was an automatic process we didn't have to take time from the artists. We also improved our visibility culling algorithms and made use of amortised GPU occlusion queries to reduce the amount of stuff we had to consider each frame.
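The interview doesn't say how the engine picks between the full model and the auto-generated low-LOD one; a common heuristic, shown here purely as an assumption, is to switch based on roughly how many pixels the object covers on screen:

```python
def select_lod(object_radius, distance, fov_pixels, detail_cutoff=20.0):
    """Pick the full-detail model or the low-LOD one based on the
    object's approximate projected size in pixels. All names and the
    cutoff value are hypothetical."""
    if distance <= 0.0:
        return "full"
    projected_px = object_radius / distance * fov_pixels
    return "full" if projected_px >= detail_cutoff else "low_lod"
```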
Digital Foundry: One of the most immediately obvious elements of the new engine is the generous use of alpha and some superb atmospheric rendering. You talked a little about this at SIGGRAPH 09, but can you tell us more?
Chris Tchou: Thanks! I'm going to be presenting a little of this in my GDC talk as well. We created a low-resolution transparent rendering solution to get around the fill-rate/overdraw bottleneck and render a lot more transparent layers. It doesn't use the 360's MSAA fill rate trick, so it costs a little more, but you don't get the crunchy edges or up-sampling artifacts. I also chopped about 70 per cent off the cost of our patchy fog system, which gave the artists free rein to use it anywhere and everywhere; I think the only area that doesn't use it is the last half of Long Night of Solace, when you're flying around in space.
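The fill-rate saving from low-resolution transparency is straightforward to quantify: shading cost scales with pixel count times layer count, so rendering at half resolution in each dimension quarters it. A toy cost model (the function and the numbers are illustrative, not Reach's actual render budget):

```python
def overdraw_cost(width, height, layers, res_scale=1.0):
    """Pixel-shader invocations for `layers` full-screen transparent
    passes, optionally rendered at a reduced resolution."""
    w = int(width * res_scale)
    h = int(height * res_scale)
    return w * h * layers
```

Rendering eight layers at half resolution costs a quarter of the full-resolution figure - which is why the technique buys "a lot more transparent layers" for the same budget, at the price of an up-sampling pass.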
Digital Foundry: Were the updated dev kits with the 1GB RAM of any use? One of the older Bungie GDC talks mentioned something about unused memory in Halo 3...
Chris Tchou: Yes, the 1GB dev kits were pretty useful - they let us run debug versions of nearly full builds of the game, although the major beneficiaries were the artists and designers, who could load levels in editing mode but still see the high resolution textures of the final game.
And I believe you're talking about the back-buffer used by the 360's UI, which amounted to about 3 megabytes I think. When you launch a game it keeps the previous application's back-buffer around for one frame so you can do a fancy fade or transition if you want to. The original version of Halo 3 didn't free that memory, which meant you had 3 megabytes less memory available for streaming in high resolution textures. But one of the Halo 3 title updates fixed it, so now that memory is available for the game. The fix was in ODST and Reach from the start.