On Saturday, Digital Foundry covered a SIGGRAPH 2010 presentation by LucasArts' Dmitry Andreev on the process of frame-rate upscaling. This intriguing concept promises all the advantages of rendering a console game at 30FPS, along with the visual smoothness and potentially even the crisper response of 60FPS gaming.
Almost five years into the lifespan of the Xbox 360, it's a fascinating insight into the kind of tricks, techniques and thinking that developers are employing in order to produce ever more impressive console titles.
If you've not downloaded the eye-opening footage of Andreev's demo showing the technique in play on the Star Wars: The Force Unleashed II engine, it's well worth a look, with both HD and standard-def versions available for download, and the original presentation also online for public viewing. Be sure to have an H.264 decoder installed on your machine so you can view the AVI files (Windows 7 includes one by default).
In this follow-up tech interview with Dmitry, we go over the basics, discuss the implementation in the tech demo in more detail and talk about the potential for the technique in future console titles.
Digital Foundry: Can you give us a basic outline of your technique in layman's terms? Are we really seeing an effective 60FPS with all the advantages of rendering at 30FPS?
Dmitry Andreev: The basic idea is to build an extra frame based on the previous one [using] new information available for the current, new frame, and present it in the middle of the 30FPS rendering cycle while still working on the current frame. This way, technically, we are seeing an effective 60FPS with all the advantages of rendering at 30FPS.
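As a rough illustration of the idea (not Andreev's actual implementation, which runs on the GPU against the game's own velocity buffers), the in-between frame can be sketched as a backward warp: for each output pixel, fetch the pixel from the previous frame that would have travelled here after half a frame of motion. The toy 1D "image" and the function name below are illustrative assumptions only.

```python
def extrapolate_frame(prev_frame, motion_vectors):
    """Synthesise a midpoint frame by warping the previous frame.

    prev_frame: list of pixel values from the last fully rendered frame.
    motion_vectors: per-pixel motion, in pixels per 30FPS frame.
    """
    width = len(prev_frame)
    mid = []
    for x in range(width):
        # Gather: look back half a frame's worth of motion.
        src = x - motion_vectors[x] // 2
        src = max(0, min(width - 1, src))  # clamp at the image borders
        mid.append(prev_frame[src])
    return mid

# The whole scene scrolls right at 2 pixels per 30FPS frame, so the
# midpoint frame shows the bright pixel shifted by 1 pixel.
prev = [0, 0, 9, 0, 0, 0]
print(extrapolate_frame(prev, [2] * 6))  # [0, 0, 0, 9, 0, 0]
```

Real implementations also have to handle disocclusions (pixels with no valid history), which is exactly where the artefacts discussed later come from.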
Digital Foundry: You talk in your presentation about the blind spots in the human eye - how are you using them to your advantage with this technique?
Dmitry Andreev: Well, one of the points that I was trying to make with this presentation is that quite complex things can actually be very simple once you start thinking about them. Working things out by analogy, trying to see why something happens: all of these have an indirect influence, and that's why it is harder to see a simpler solution. Very often people ask, "Why are you doing this or that? It has nothing to do with the problem whatsoever." The point is that it does. You might not find the solution to the original problem, but very often you find some other interesting things that can lead your curiosity further, and this way you can find unexpected solutions to various problems.
The human eye and the whole visual system are one big inspiration to me. It makes you think, it makes you wonder. A while ago, after reading a book called "On Intelligence" by Jeff Hawkins, I prototyped a few neural networks with feedback connections, simulating the missing input from the blind spot. In fact, the network can predict what you would see, as well as respond to certain optical illusions the way the real visual system does.
I also did a lot of experimenting, putting different patterns around the blind spot and observing what happens. That gave me the idea that it is definitely pattern-based, it is localised but not around the "edge", and it is not too wide. Then I thought about what it would look like in motion, and about the fact that it must not violate the predictions our brain makes about the image. We tend to notice things that change rather than things that don't.
It is a set of ideas like this that are used to our advantage.
Digital Foundry: Can you talk about the process in which you remove the characters from the scene? Why is this necessary? Is it about reducing the artefacting in the interpolation process on the characters?
Dmitry Andreev: This is not necessary in general. As I discussed in the comments, all the characters could be re-rendered at 60FPS, or could be rendered separately from the environment. It happens that they can move any way they want, so the easiest way is to have a version of the environment without the characters, and to use that version whenever we detect an artefact.
Now, with forward rendering it is easy to render the environment first, store it somewhere and then render the characters on top of it. But it becomes very difficult to do the same thing with deferred techniques. So when I was working on a console implementation I didn't even want to think about redoing most of our deferred rendering pipeline; it is already crazily optimised with all sorts of tricks. That's why I thought it would be easier to somehow remove the characters from the existing frame and then use that to remove the artefacts.
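The fallback Andreev describes can be sketched as a simple per-pixel select: wherever some detection pass has flagged the interpolated frame as unreliable (typically around fast-moving characters), substitute the stored character-free environment. The function name, the artefact mask and the toy pixel values below are illustrative assumptions, not the game's actual data.

```python
def resolve_artefacts(interpolated, environment_only, artefact_mask):
    """Replace flagged pixels with the character-free environment render.

    interpolated: the synthesised in-between frame.
    environment_only: the same view rendered without characters.
    artefact_mask: True where the interpolation is considered unreliable.
    """
    return [env if bad else pix
            for pix, env, bad in zip(interpolated, environment_only,
                                     artefact_mask)]

frame = [5, 5, 7, 5]               # 7 is a smeared character pixel
env   = [5, 5, 5, 5]               # environment rendered without characters
mask  = [False, False, True, False]
print(resolve_artefacts(frame, env, mask))  # [5, 5, 5, 5]
```

The point of the trade-off is that hiding the character under clean background for one synthesised frame is far less noticeable than letting the smear through.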
But I must note that it is about more than just character removal; it is also about removing all other problematic regions. In the demo that you've seen, though, it is only used for characters. This is what I mean when I say that the talk should not be understood literally.