Tech Interview: LucasArts' frame-rate upscaler

Digital Foundry talks in-depth with creator of the 60FPS "miracle" tech.

Digital Foundry: Isn't there a latency issue though? Surely you will have a 60FPS refresh but controller lag similar to a 30FPS game? Isn't game logic still updating at 30Hz?

Dmitry Andreev: With the one-frame-based solution, which you can see in the Xbox 360 demo, there is no extra latency added. Technically speaking, the latency is reduced, as you get the new result even before it has been constructed.

Another thing is that I only talked about the visual aspect of 60FPS rendering; going any further than that would make it a lot more confusing. Moreover, all the timing and delays are relative to the GPU, between rendering and gameplay, and there are ways to help with that. Racing games, for instance, run the simulation and gameplay at even higher frame rates, such as 120FPS, but render only at 30FPS.
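The decoupling Andreev mentions - simulation ticking faster than rendering - is the classic fixed-timestep loop. A minimal sketch, with names and structure assumed rather than taken from any shipped code:

```python
# Fixed-timestep loop sketch: gameplay simulation ticks at 120 Hz while
# rendering runs at 30 Hz. Illustrative only, not actual LucasArts code.

SIM_DT = 1.0 / 120.0     # simulation step (120 Hz)
RENDER_DT = 1.0 / 30.0   # render step (30 Hz)
EPS = 1e-9               # guards the comparison against float drift

def run(render_frames):
    """Return how many simulation ticks happen over `render_frames` frames."""
    sim_ticks = 0
    accumulator = 0.0
    for _ in range(render_frames):
        accumulator += RENDER_DT          # wall time covered by this frame
        # run as many fixed simulation steps as fit into the elapsed time
        while accumulator >= SIM_DT - EPS:
            sim_ticks += 1                # gameplay update would go here
            accumulator -= SIM_DT
        # the frame would be drawn here, once per outer iteration
    return sim_ticks
```

One second of game time (30 rendered frames) yields 120 simulation ticks, matching the 120FPS-simulation/30FPS-render split described above.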

The important thing is that the technique doesn't introduce any extra latency on top of what you already have. A two-frame-based solution would introduce an extra frame of latency, but not our current real-time implementation.
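The one-frame versus two-frame distinction can be made concrete with back-of-envelope timing. If frame N finishes rendering at time (N+1) x 33.3ms, a two-frame interpolator cannot show the in-between image until frame N+1 is also done, while a one-frame scheme synthesizes it from frame N alone. The figures and function names below are illustrative assumptions, not from the interview:

```python
# Earliest time the synthesized in-between frame (between frames n and n+1)
# can be displayed under each scheme, at illustrative 30FPS timings.

FRAME_MS = 1000.0 / 30.0   # ~33.3 ms per rendered frame at 30FPS

def ready_one_frame(n):
    # One-frame scheme: built from frame n plus its motion data,
    # so it is ready as soon as frame n itself is finished.
    return (n + 1) * FRAME_MS

def ready_two_frame(n):
    # Two-frame scheme: interpolates between frames n and n+1,
    # so it must also wait for frame n+1 to finish.
    return (n + 2) * FRAME_MS

extra_latency = ready_two_frame(0) - ready_one_frame(0)  # one full frame
```

The difference is exactly one rendered frame, which is the "extra frame of latency" a two-frame solution would add.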

Digital Foundry: Console games appear to have settled at 30FPS, but even here we have frame-drops and screen-tear often taking the performance level down. Aside from the exciting 60FPS usage, could the technique be used to "level out" performance in 30FPS games?

Dmitry Andreev: You are absolutely right. Making a 60FPS game is really tough, and you might think that making a 30FPS one is easier. But it turns out that even 30FPS games are not that easy to make while maintaining a solid 30FPS. Why? Well, that would be another good question, and it's one of the difficulties that I expect.

If someone told me, "We have a solid 30FPS game, can you make it 60?" Yes, "easily". When the frame-rate drops from 30 it does look bad, but when it drops from 60 it's even more noticeable, mostly because of the screen tearing. You don't really want that, but as a fallback mechanism I had to discuss it anyway. Again, it depends on the scene. If the frame-rate drops during an explosion or something, when the camera doesn't move much, screen tearing will be less noticeable. So it's still a good thing to do, but just keep it a solid 30.

Digital Foundry: Can you foresee any game types where it wouldn't be appropriate due to too much variance in the frames? Would it work in a fast motion FPS or driving game?

Dmitry Andreev: From the early tests I did, first-person shooters were the easiest case to handle, because the camera mostly rotates around itself. Then comes a third-person title with a more or less fixed camera, like a racing game: you don't usually control the camera, so things move more or less evenly. Then comes a third-person free camera without near alpha objects. Characters cause a few more problems, but they're still easy to manage.

Finally, in terms of difficulty, there's the third-person game with a free camera and a lot of alpha blending around the point of rotation. It is this alpha that causes a lot of issues unless you interpolate it separately and do a bunch of other tricks.
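To give a feel for why alpha is the hard case: the core idea behind this kind of interpolation is that each opaque pixel carries a motion vector, so the mid-frame can be synthesized by scattering pixels half-way along their motion, patching disoccluded holes from the source frame. Alpha-blended pixels have no single motion vector, which is why they need separate treatment. A toy one-dimensional sketch, entirely hypothetical rather than the shipped algorithm:

```python
# Toy 1D motion-vector reprojection for the in-between frame.
# Hypothetical illustration, not the actual LucasArts implementation.

def reproject_half(frame, motion):
    """frame: list of pixel values; motion: per-pixel displacement in
    pixels per frame. Returns the synthesized mid-frame by scattering
    each pixel half-way along its motion vector."""
    out = [None] * len(frame)
    for x, (color, dx) in enumerate(zip(frame, motion)):
        tx = x + round(dx / 2)        # half the motion = half-frame position
        if 0 <= tx < len(out):
            out[tx] = color           # later writes win, crudely modelling occlusion
    # holes (disocclusions) fall back to the source pixel
    return [c if c is not None else frame[i] for i, c in enumerate(out)]

frame = ['a', 'b', 'c', 'd']
motion = [2, 2, 0, 0]                 # 'a' and 'b' move right 2 px per frame
mid = reproject_half(frame, motion)   # moving pixels land 1 px to the right
```

Even this toy shows the failure modes Andreev alludes to: overlapping writes and holes need fix-up heuristics, and a translucent pixel blending two moving surfaces would have no single `dx` to scatter by.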

But all these things are easy to handle either on the rendering side, the art side or the gameplay side.

Digital Foundry: You describe the SPU version of your motion blur code being of a higher quality. Parallelised over five SPUs, it's also faster than the 360 solution. What is the quality advantage? Would this quality and speed increase also apply to the process of frame-rate upscaling?

Dmitry Andreev: The main bottleneck of the GPU implementation is the number of samples you're processing, not even the bandwidth itself, especially on the Xbox 360. But the extra quality from using the SPUs would apply to interpolation as well: you could do a lot more checks and fix-ups on the current data on the SPUs, as we did for our motion blur solution.

For instance, in the case of interpolation on the SPUs, the actual computation takes about 0.3ms or so out of 1.2ms, and those 0.3ms run in parallel with the 1.2ms. So the complexity of the algorithm could be increased four times without any extra performance hit, as it is memory-transfer bound anyway.
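The headroom claim is simple arithmetic on the two figures Andreev quotes: compute overlapped with the DMA transfer can grow until it matches the transfer time.

```python
# SPU job budget from the interview: the 0.3ms of arithmetic is hidden
# under the 1.2ms memory transfer, so the transfer time is the real limit.

transfer_ms = 1.2   # memory transfer per job (from the interview)
compute_ms = 0.3    # SPU computation overlapped with that transfer

headroom = transfer_ms / compute_ms   # factor the math could grow by
```

That ratio is the "four times" in the statement above: until compute exceeds 1.2ms, the job stays memory-transfer bound and the extra work is free.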

In the case of the motion blur, the quality advantage is... quality. The number of samples and other extra checks allows you to get a smoother blur. In the case of interpolation, that translates into fewer possible artifacts, as you can spend more time fixing them.
