Digital Foundry: Can you run us through the relationship between the game's engineers and the content creators? What constraints and conditions do the creators have to work with? How do you gauge whether a new piece of game world is going to run smoothly in-game?
Eric Arnold: This was a very tight relationship by necessity. Even with all the custom tools it was still hard at times to figure out why something wasn't working due to the complexity of the engine. They did have one custom tool that would give them a good idea of what the performance of an asset would be before it got in-game, though. The tool loaded the building and ran a number of tests on it, both in the pristine and destroyed states, and gave them some metrics to look at. It was not as simple as "pass/fail" since a large part of the equation was how it was used in-game, but it gave them a good place to start. In the end there was a lot of back and forth that had to happen to bring their ideas of what would be cool in line with what the engine could realistically handle.
Dave Baranec: This is a classic game development problem, and is particularly difficult when you are dealing with a new engine. The simple fact is, you often don't know how the engine is going to perform until you have spent a lot of time developing it. But you have to keep your artists and designers moving in the meantime – so how do you do it? Well, they need time to work up their own ideas as the tech is coming together. No game designer in the world sits down and writes out the perfect design on the first try. So as the tech starts to trickle out and systems come together, art and design can refine their ideas.
Later on in the process when the tech is more mature, there are several important classes of tools. We provide tools so that individual art assets can be analyzed in a vacuum. How many polys does the model have? How many different materials? How fine-grained is the physics setup? How expensive is it to drop into a level, memory-wise? Can we assign an overall cost value to the asset? In the case of RFG, we developed a class system for buildings – we rated them from "one" to "five" in terms of intensity. This rating was an indicator to designers as to how much complexity using the building would add to the scene.
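As a rough illustration of how per-asset metrics might be folded into a single "intensity class" like the one Baranec describes, here is a minimal sketch. All of the metric names, weights and budget thresholds below are invented for illustration; they are not Volition's actual tooling or numbers.

```python
# Hypothetical sketch of a building "intensity class" rating from 1 to 5.
# Inputs mirror the metrics mentioned in the interview (poly count,
# material count, physics granularity, memory cost); the budgets and
# weighting are invented.

def intensity_class(polys, materials, physics_pieces, memory_kb):
    """Combine per-asset metrics into a single 1-5 cost class."""
    # Normalise each metric against an assumed "heavy asset" budget.
    score = (polys / 50_000
             + materials / 20
             + physics_pieces / 400
             + memory_kb / 4_096) / 4.0
    # Map the averaged, normalised score onto the five classes,
    # clamping so extreme assets still land in [1, 5].
    return min(5, max(1, 1 + int(score * 5)))

# A small shack rates cheap; a dense tower hits the top class.
shack = intensity_class(polys=2_000, materials=3, physics_pieces=20, memory_kb=256)
tower = intensity_class(polys=48_000, materials=18, physics_pieces=380, memory_kb=4_000)
print(shack, tower)  # 1 5
```

A single clamped score like this gives designers one number to budget against, at the cost of hiding which individual metric blew the budget.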
We provide numerous reporting tools for the level designers in the world editing tool. In particular, they have to keep a careful eye out for memory usage and streaming utilisation. They also need to make sure that they keep overall object counts under control (an object could be a chair, or a table, or something huge like a building, or even something more immaterial like a cover node, or a navpoint for AI).
Testing overall frame rate in-game is perhaps the most important thing we can do. To this end, we have a wide range of tools. There are very low level analysis tools for programmers to stare at all the threads and figure out where their code is taking time to execute. We have automated tools for flying through the world and collecting broad swaths of data about areas with poor overall frame rate. We have a range of in-game displays that can give feedback to designers about what exactly is expensive for a given view from both a simulation and a rendering perspective. We also co-opt QA as general frame rate reporting tools – they play the game more than anyone else so they are uniquely qualified to report when they find a problem area.
Digital Foundry: Can you take us through the basic principles of your destruction model?
Eric Arnold: What most games mean when they say "destruction" is "visual destruction" – things like tiles chipping off the wall while the wall remains intact beneath them, or a destroyed version of the object swapping in when enough damage is done. Our goal was always to fully realise "physical destruction" – if a section of building looks like a main structural support it should behave as such, and the building should realistically fall apart when it is taken out. That's where the stress system comes into play. It is constantly evaluating the structural stability of objects in the game as they take damage. It doesn't care if the object is a knee-high section of retaining wall or a bridge the size of a football field, it will run the same simulation on them so we get a consistent result.
The actual number crunching is done in a number of discrete steps so processing can be spread out over time. First we have to take into account any objects being supported by the object being analysed; these could be anything from an enemy tank to a sky bridge connecting two towers. After that is done, the stress code walks over the object from top to bottom, adding up the force generated by the mass above (along with the mass of supported objects) and comparing that to the strength of the material at each point. If the force is greater than the strength, the material breaks, which can result in a section completely breaking free and falling if that was its last connection.
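The top-to-bottom walk Arnold describes can be sketched in miniature. Here a structure is reduced to a vertical stack of segments, each with a mass and a material strength, and supported objects are lumped into one extra load. This is a toy model with invented numbers, not the shipping algorithm, which works over arbitrary 3D connectivity.

```python
# Toy version of the top-to-bottom stress walk described above.
# A "building" is a vertical stack of segments; each segment must
# bear the weight of everything above it. All numbers are invented.

G = 9.81  # gravitational acceleration, m/s^2

def find_breaks(segments, extra_load=0.0):
    """segments: list of (mass_kg, strength_n) tuples, ordered top to bottom.
    extra_load is the mass of supported objects (e.g. a tank on the roof).
    Returns the indices of segments whose material strength is exceeded."""
    broken = []
    mass_above = extra_load
    for i, (mass, strength) in enumerate(segments):
        force = mass_above * G          # weight pressing down on this segment
        if force > strength:
            broken.append(i)
        mass_above += mass              # this segment's own mass loads the next
    return broken

# Three-segment tower (top, middle, base): stable on its own...
tower = [(500, 30_000), (500, 30_000), (1_000, 40_000)]
print(find_breaks(tower))                    # [] -- every segment holds
# ...but park a 3-tonne vehicle on top and the middle segment snaps.
print(find_breaks(tower, extra_load=3_000))  # [1]
```

Because each step only needs the running total from the step before, a walk like this is easy to slice across frames, which matches the "spread out over time" point above.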
As all of this is going on we also play audio and visual cues to let the player know which areas are getting close to breaking. Beyond making the world more believable, they serve as a warning system that the structure is unstable and could collapse on the player's head if they aren't careful and hang around too long. This small addition took the system from being a neat tech demo to pulling the player into the game world and generating very real chills as they flee from a creaking, groaning building while tendrils of dust and debris rain down around them.
The end result is a world that physically reacts to the player in the same way that real objects would: snap off two support legs of a tower and it will tip over sideways; if there happens to be a building next to it, the tower will crush the roof and tear a hole in the wall; and if there happen to be enemy troops inside that building, they will wake up with a splitting headache, if they get up at all. And the best part of it all is that the engine is entirely player driven: players are given a set of tools, a list of goals to accomplish, and the freedom to solve them in any way they see fit. Rather than force premade solutions down their throats, we wanted to liberate them to devise their own battle plan and succeed or fail on their own terms. Thankfully some of the most memorable moments can come from spectacular failures, so rather than being frustrating, failure encourages the player to come back and try something new.
Digital Foundry: The splash screens tell us that you're using the Havok engine in RFG, but clearly we're seeing physics here way ahead of what we see in the usual Havok-licensed game. What bearing does the third-party tech have on the final game? Did you take it and enhance it, or is it being used for more mundane elements not related to the more crazy stuff your engine is handling?
Eric Arnold: We used Havok mainly for rigid body collisions, vehicle simulation, and ray casts. The entire destruction engine was custom built to sit on top of Havok, and we did have to customize a good bit of their internals (especially for the PS3 to get it all running fast on the SPUs). The guys at Havok were great to work with and joked that they all groaned when I sent them an email because we were stressing their code in ways no one else was coming close to, so the bugs I uncovered were particularly nasty. Together we were able to make our vision a reality and they keep telling us how impressed they are with how far we were able to take it.
Dave Baranec: The best way to think about it is that Havok is to Geo Mod 2.0 as DirectX is to the Unreal engine or Crysis. It provides some core functionality, but the engine itself is where all the fun stuff happens. Havok is an amazingly extensible piece of code. They provide all kinds of ways to enhance the core code (a Havok license gets you very nearly all of the source). Havok is essentially an extremely fancy bouncy-object simulator that lets you poke around at the objects at various points in the simulation. The core interaction the destruction system provides is a wrapper which lets us receive notifications from Havok about things like "X hit Y at such-and-such velocity" and respond to them at various stages of the simulation. What we developed was a model that allowed us to take a very large, complex object like a whole building, watch when collisions happen to it, modify the existing objects and spew out new objects. So when Havok tells us "X hit Y", we can respond and say "change X like so, change Y like so, and create Z and W flying off in these directions". The magic of the destruction system is all the internal logic which allows us to make those decisions from those simple inputs.
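That "X hit Y, so change X, change Y, and spawn Z and W" pattern can be sketched as a thin destruction layer sitting above a physics engine's collision callbacks. Everything here is invented for illustration – the class names, the damage model, the fragment logic – and Havok's real SDK interfaces look nothing like this Python.

```python
# Minimal sketch of a destruction layer driven by collision callbacks
# from a physics engine. All names and numbers are invented.

from dataclasses import dataclass, field

@dataclass
class Body:
    name: str
    health: float
    velocity: float = 0.0

@dataclass
class DestructionSystem:
    spawned: list = field(default_factory=list)

    def on_collision(self, x: Body, y: Body, impact_speed: float):
        """Called by the physics layer when 'X hit Y at such-and-such
        velocity'. We respond by damaging both bodies and, when one
        fails, spawning new debris bodies flying off in opposite
        directions -- the 'create Z and W' step."""
        damage = impact_speed * 10.0            # invented damage model
        for body in (x, y):
            body.health -= damage
            if body.health <= 0:
                self.spawned.append(Body(f"{body.name}_fragment_a", 10, +impact_speed))
                self.spawned.append(Body(f"{body.name}_fragment_b", 10, -impact_speed))

wall = Body("wall", health=50)
rock = Body("rock", health=500)
system = DestructionSystem()
system.on_collision(rock, wall, impact_speed=8.0)   # 80 damage: the wall fails
print([b.name for b in system.spawned])             # ['wall_fragment_a', 'wall_fragment_b']
```

The key design point survives the simplification: the physics engine only reports contacts, while all decisions about what breaks and what gets created live in the wrapper.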
A second non-trivial issue is making sure not to overload Havok. Internally, the destruction system is capable of modelling and processing buildings of super high complexity. But if you let a simulation of that fidelity run, it is very easy to get into a situation where you are just presenting the console hardware with too much work to do. So we spent a large amount of time balancing out extreme detail with what the hardware can reasonably do.
In tomorrow's concluding episode, we talk more in-depth about the physics and the simulation model in Red Faction: Guerrilla, the challenges of producing a cross-platform console project and touch briefly on the forthcoming PC version. Not only that, but we'll be talking DLC too...