Digital Foundry: Can you talk us through the decision to target 30FPS when the likes of Forza and Gran Turismo go for 60 frames? Isn't there a sense there that the optimum response for a physics-based experience like this should be full-fat 60FPS?
Bryan Marshall: That sort of judgement is always difficult, but we felt we wanted as much attention to detail as possible in all areas. We wanted to go very 'next-gen' in the original DiRT with post-processing, complex dynamic shadows and lighting, real-time deformable physics, 3D audio, the lot.
Take a close look at our games compared to the rivals... we have an awful lot of stuff going on on-screen (and in your ears!). The popular response to all our racing games so far has been extremely positive, so we believe we got the balance right for our games.
Digital Foundry: While DiRT and GRID have run at 30, there's a great deal of post-processing work going on in terms of motion blur etc. to give the impression that the game is smoother... they don't 'look' like 30FPS racers...
Bryan Marshall: Motion blur isn't just about making the game 'smoother', it also gives an atmospheric effect to our games that suits their style. It's about that attention-to-detail again that adds up to an overall impression of quality.
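The per-pixel technique behind this kind of post-process can be sketched roughly as follows. This is a generic illustration of velocity-based screen-space motion blur, not Codemasters' actual implementation, and the function and parameter names are invented; each pixel records how far it moved since the last frame, and the blurred colour averages samples taken back along that motion vector.

```python
# Illustrative screen-space motion blur (hypothetical sketch, not engine code):
# each pixel stores the screen-space velocity it moved since the last frame;
# the blurred colour is the average of samples taken along that velocity.

def motion_blur(image, velocity, num_samples=4):
    """Blur a 1D row of greyscale pixels along per-pixel velocities.

    image    -- list of floats (pixel intensities)
    velocity -- list of floats (pixels moved since the previous frame)
    """
    width = len(image)
    blurred = []
    for x in range(width):
        total = 0.0
        for i in range(num_samples):
            # Step backwards along the motion vector, clamping to the row.
            t = i / (num_samples - 1) if num_samples > 1 else 0.0
            sx = int(round(x - velocity[x] * t))
            sx = max(0, min(width - 1, sx))
            total += image[sx]
        blurred.append(total / num_samples)
    return blurred

# A bright pixel moving right leaves a smeared trail behind it.
row = [0.0, 0.0, 0.0, 1.0, 0.0]
vel = [0.0, 0.0, 0.0, 3.0, 0.0]   # pixel 3 moved 3 pixels this frame
result = motion_blur(row, vel)    # → [0.0, 0.0, 0.0, 0.25, 0.0]
```

In a real renderer this runs per-pixel on the GPU using a velocity buffer, but the sampling logic is the same idea in one dimension.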
Digital Foundry: Was there any kind of tech post-mortem when GRID shipped in terms of engine strengths and weaknesses, and what you wanted to address for the tech's deployment in the next game?
Bryan Marshall: As I said earlier, we always want to improve over the last game. Big pushes from GRID to DiRT 2 included increasing the number of polys per car, improving the quality of distant shadows, and improving particle performance, as examples. Car engine audio was completely re-written to give DiRT 2 some great-sounding engine growls.
Digital Foundry: There's a perception - right or wrong - that the DiRT games have been addressing a more mainstream audience than the old CMR titles. How would you address those comments from a tech perspective in terms of the physics, the handling model, etc?
Bryan Marshall: Interesting one. We certainly want our games to appeal to as many people as possible, loyal and new. The underlying physics model is a fundamentally detailed car simulation, and allows us to basically tune the cars in the same way an automotive engineer would. This sort of system requires some good car-tuning skills, so our car handling guys are all car nuts who love tinkering. What drives us mad is seeing references on the web to donkey's-years-old 'pivot point' physics. I'm afraid to tell the world that that stuff went out with the PS1 (or even before)!
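The distinction can be made concrete with a textbook sketch. The snippet below is not Codemasters' simulation; it contrasts a 'pivot point' model (yaw rate driven directly by steering input) with a minimal linear single-track ('bicycle') model, where tyre slip angles produce lateral forces whose moments yield yaw acceleration. Every parameter value here is invented for illustration, but quantities like cornering stiffness and axle positions are exactly the knobs an automotive engineer tunes.

```python
# Toy contrast: arcade 'pivot point' steering vs a force-based model.
# Hypothetical illustration; all parameter values are invented.

def pivot_point_yaw(steer_angle, yaw_rate_gain=2.0):
    """Old-school arcade model: yaw rate is just steering input times a gain."""
    return yaw_rate_gain * steer_angle

def force_based_yaw_accel(steer, speed, yaw_rate,
                          inertia=1500.0,            # yaw inertia (kg*m^2)
                          a=1.2, b=1.4,              # CoM-to-axle distances (m)
                          c_front=80000.0, c_rear=80000.0):  # stiffness (N/rad)
    """Linear single-track ('bicycle') model: slip angles at each axle give
    lateral tyre forces; their moments about the centre of mass give yaw
    acceleration. Ignores lateral velocity; small-angle approximation."""
    alpha_f = steer - (a * yaw_rate) / speed   # front slip angle
    alpha_r = (b * yaw_rate) / speed           # rear slip angle
    f_front = c_front * alpha_f                # lateral force, front axle
    f_rear = c_rear * alpha_r                  # lateral force, rear axle
    yaw_moment = a * f_front - b * f_rear      # rear force is stabilising
    return yaw_moment / inertia

# Steering 0.1 rad at 30 m/s from a straight line:
accel = force_based_yaw_accel(0.1, 30.0, 0.0)   # → 6.4 rad/s^2
```

Note how in the force-based model a growing yaw rate reduces the yaw acceleration (the rear tyres push back), which is the self-stabilising behaviour a pivot-point model simply cannot exhibit.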
Digital Foundry: As this generation is now effectively mature, just how much more do you think can be squeezed from PS3 and Xbox 360? Is there any mileage in the 360's tessellator, for example, bearing in mind that tessellation is part and parcel of the DirectX 11 spec?
Bryan Marshall: There's plenty more to be squeezed out of these consoles. They both have strengths, and yes, after our experience with DX11 tessellation, we'll definitely look closer at the 360 side. The SPUs have plenty of mileage left in them. Over time we are moving more and more CPU-based systems over to them, and more importantly some of the graphics processing too.
I think we'll see a second wind on the PS3 where the developers are really starting to get to grips with it. I think a lot of developers ported their PC engines over to PS3 and it really hurt them to begin with. Luckily, we started afresh from day one.
Digital Foundry: As racing specialists, what are your views on the practicality of motion control on driving games? I've yet to play a Wii game where wand-waggle beats a traditional control set-up, and the control experience in the Natal Burnout Paradise demo felt oddly detached and lacking in feedback when I tried it.
Bryan Marshall: I think it's quite simple: you have to play to the strengths of the control mechanism and build your games around that. It has to make sense within the game context and add to the overall experience.
Digital Foundry: Do you have any thoughts on where the platform holders can go from here, assuming there are new consoles in 2011/2012? As developers what would you most want architecturally from a new console?
Bryan Marshall: With my management hat on, I think the industry could do with an 'evolution' rather than a 'revolution' in terms of architectures. The cost of moving to completely new architectures was painful in the last gen, and I'm not sure the industry can stomach it again.
Certainly we should all expect more cores on the CPUs and more memory. With the publicised delay of Larrabee, it looks unlikely that it will be in a console in your given timeframe, so the GPUs will most likely be an evolution of the current generation of PC GPUs again. I think the interesting question will be whether they will ship with a drive or not, given the momentum of digital distribution.
At Codemasters we've prepared ourselves with EGO being fundamentally multi-core based and by being pro-active with new APIs like DX11. Having that knowledge now, very early on, will put us in very good standing for the future. We're also developing a whole new tools and asset pipeline that will allow us to handle much more data in a fast, iterative way during development. Data sizes always increase between generations, so we're doing the work now, and we'll get benefits in this generation in the process.
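The 'multi-core based' design principle can be sketched in miniature. This is a generic illustration, not the EGO engine's job system (which is not public), and the system names below are invented: independent per-frame systems are submitted as tasks to a pool, so the frame time shrinks as cores are added without the game code changing.

```python
# Illustrative only: a minimal task-parallel frame update in the spirit of a
# multi-core engine design. System names are invented for this sketch.

from concurrent.futures import ThreadPoolExecutor

def update_physics(dt):
    return f"physics stepped {dt:.4f}s"

def update_particles(dt):
    return f"particles stepped {dt:.4f}s"

def update_audio(dt):
    return f"audio stepped {dt:.4f}s"

def run_frame(dt, systems):
    # Independent systems run concurrently; the frame joins on all of them,
    # so adding cores shortens the frame without changing the game code.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn, dt) for fn in systems]
        return [f.result() for f in futures]

results = run_frame(1 / 30, [update_physics, update_particles, update_audio])
```

A real engine would use a work-stealing job scheduler with dependency tracking rather than one thread per system, but the decomposition into independently schedulable units is the same idea.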
Our tools are also all network-based, so when the next gen comes, we won't be worrying about porting them across. Our cross-platform strengths on the runtime side will also reap big benefits in being able to move across with limited pain (it'll always involve some pain!). With my pure tech hat on, our guys love the challenge of a new bit of kit, so we wouldn't be disappointed if it was a silly new architecture either!