The concept of native resolution is becoming less and less relevant in the modern era of games; instead, image reconstruction techniques are coming to the fore. The idea here is remarkably simple: in the age of the 4K display, why expend so much GPU power painting 8.3m pixels per frame when processing power can be directed at higher quality pixels instead, interpolated up to an ultra HD output? Numerous techniques have been trialled, but Kojima Productions' Death Stranding is an interesting example. On PS4 Pro, it features one of the best checkerboarding implementations we've seen. Meanwhile, on PC, we see a 'next-gen' image reconstruction technique - Nvidia's DLSS - which delivers image quality better than native resolution rendering.
Checkerboard rendering as found in Death Stranding is non-standard: it's the result of months of intensive work by Guerrilla Games during the production of Horizon Zero Dawn. Curiously, it does not use PS4 Pro's bespoke checkerboarding hardware. Base resolution is 1920x2160 in a checkerboard configuration, with 'missing pixels' interpolated from the previous frame. Importantly, Decima does not sample a pixel from its centre, but from its corners over two frames. By combining these results over time - in a specialised pass similar to the game's TAA, plus a unique FXAA pass - a 4K pixel grid is resolved and the perception of much higher resolution is achieved. According to presentations from Guerrilla, of the engine's 33.3ms per-frame render budget, 1.8ms is spent on the checkerboard resolve.
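To make the basic idea concrete, here's a minimal sketch of a checkerboard resolve in Python. It is purely illustrative and not Decima's implementation: the function names are invented, it works on a toy grid rather than a 1920x2160 buffer, and it fills the missing half of the pixels with a naive copy from the previous frame, whereas the real resolve reprojects using motion vectors and blends with a TAA-style pass.

```python
def rendered_this_frame(x, y, frame_parity):
    # In a checkerboard pattern, only half the pixels are natively
    # rendered each frame: those whose (x + y) parity matches the
    # frame's parity. The other half alternates on the next frame.
    return (x + y) % 2 == frame_parity

def checkerboard_resolve(current, previous, frame_parity):
    # current/previous: 2D lists of pixel values for this frame and
    # the last one. Missing pixels are taken from the previous frame,
    # a naive stand-in for Decima's motion-vector-aware resolve.
    h, w = len(current), len(current[0])
    return [[current[y][x] if rendered_this_frame(x, y, frame_parity)
             else previous[y][x]
             for x in range(w)]
            for y in range(h)]
```

The key property the sketch captures is that each frame only shades half the output grid, which is why the PS4 Pro base resolution is 1920x2160 rather than 3840x2160.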
Although it upsamples from a much lower resolution, DLSS works differently. There is no checkerboard, no pixel-sized holes to fill. Rather, it works more like accumulation temporal anti-aliasing, where multiple frames from the past are queued up and information from them is used to smooth lines and add detail to an image - but instead of refining an image at the same resolution, as TAA does, it generates a much higher resolution output. Alongside those past frames, per-object and per-pixel motion vectors are integral to DLSS working properly. How all of this information is used to create the upscaled image is decided by an AI model running on the GPU, accelerated by the tensor cores in an RTX card. So while DLSS has fewer base pixels to work from, it has access to a vast amount of compute power to help the reconstruction process.
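A rough sketch of the accumulation idea DLSS builds on, again illustrative only: each history pixel is reprojected along its motion vector, then blended with the new sample using a fixed weight. DLSS replaces this hand-tuned blend with a learned model (and outputs at a higher resolution), but it consumes the same class of inputs: past frames plus motion vectors. All names and the blend factor here are assumptions for the sketch.

```python
def temporal_accumulate(current, history, motion_vectors, alpha=0.1):
    # One step of accumulation-style TAA on toy 2D grids.
    # motion_vectors[y][x] is an (dx, dy) offset in pixels telling us
    # where this pixel was last frame; we fetch the history sample
    # from there (clamped to the image bounds) and blend a small
    # fraction 'alpha' of the new sample on top.
    h, w = len(current), len(current[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = motion_vectors[y][x]
            px = min(max(x - dx, 0), w - 1)
            py = min(max(y - dy, 0), h - 1)
            out[y][x] = alpha * current[y][x] + (1 - alpha) * history[py][px]
    return out
```

With zero motion vectors this converges on a stable, supersampled-looking result over time; with bad motion vectors it ghosts, which is exactly the failure mode TAA is notorious for.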
On an RTX 2080 Ti at 4K, DLSS completes in around 1.5ms - meaning it's faster than checkerboarding on PS4 Pro. However, the hardware comparison is obviously lopsided. The weakest capable GPU is the RTX 2060 (still significantly more powerful than the Pro) and DLSS has an overhead in excess of 2.5ms on this card. That's heavy, especially if you're targeting 60fps where the entire frame rendering budget is just 16.7ms. However, the core advantage is that the base resolution is so much lower. DLSS in Death Stranding comes in two flavours: the performance mode achieves 4K quality from just a 1080p internal resolution. Meanwhile, the quality mode delivers better-than-native results from a 1440p base image. In both cases, that's much lower than PS4 Pro's 1920x2160 core resolution. By running everything else in the GPU pipeline at much lower resolutions, the cost of processing DLSS is more than offset - to the point where mildly overclocking the RTX 2060 allows Death Stranding to deliver 4K gaming at 60fps.
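The back-of-the-envelope maths above can be laid out explicitly. The figures are the ones quoted in the article; the calculation simply shows why a 2.5ms fixed cost is worth paying when it cuts the shaded pixel count to a quarter.

```python
# Pixel counts for each rendering strategy discussed in the article.
NATIVE_4K = 3840 * 2160       # 8,294,400 pixels per frame
CHECKERBOARD = 1920 * 2160    # PS4 Pro base: exactly half of 4K
DLSS_PERF = 1920 * 1080       # DLSS performance mode internal res
DLSS_QUALITY = 2560 * 1440    # DLSS quality mode internal res

FRAME_BUDGET_60FPS = 1000.0 / 60.0  # ~16.7ms per frame
RTX_2060_DLSS_COST = 2.5            # ms, per the article

budget_left = FRAME_BUDGET_60FPS - RTX_2060_DLSS_COST
print(f"Performance mode shades {DLSS_PERF / NATIVE_4K:.0%} of native 4K")
print(f"Budget left for everything else at 60fps: {budget_left:.1f}ms")
```

Performance mode shades just 25 per cent of a native 4K frame's pixels, and even after paying the DLSS tax there's roughly 14.2ms of the 16.7ms budget left - which is how a modest RTX 2060 overclock gets Death Stranding to 4K60.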
In the video on this page, you'll see detailed comparisons of how Death Stranding's checkerboarding on PS4 Pro stands up against DLSS on PC - and it's fascinating to see what is effectively a state-of-the-art current generation reconstruction technique set against its next-gen equivalent. Despite running with a much lower pixel count, DLSS is undoubtedly sharper and delivers more detail and clarity than checkerboard rendering. Transparent elements like hair see checkerboard artefacts on PS4 Pro that are totally gone with DLSS. In motion, this translates to more temporal stability with DLSS, with the subtle flicker seen on the PS4 Pro version completely gone. In general, pixel crawl and popping is also much reduced. DLSS does have a weakness though: certain objects at a distance exhibit particle trails that are not visible on PS4 Pro, nor in native rendering. It's a small blemish and the only negative point in the presentation.
Comparisons between the two techniques are fascinating, but the big takeaway is that DLSS image reconstruction from 1440p looks cleaner overall than native resolution rendering. We've reached the point where upscaling is quantifiably cleaner and more detailed - which sounds absurd, but there is an explanation. DLSS replaces temporal anti-aliasing, and all flavours of TAA exhibit softening or ghosting artefacts that Nvidia's AI upscaling has somehow managed to mostly eliminate. And this poses an interesting question: why render at native resolution at all, if image reconstruction is better and cheaper? And what are the applications for next-gen consoles?
There's an important point of differentiation between Nvidia's hardware and AMD's, however. The green team is deeply invested in AI acceleration across its entire business, dedicating significant die space on its processors to AI tasks. AMD has not shared its plans for machine learning support with RDNA 2, and there is some confusion about its implementation in the next-gen consoles. Microsoft has confirmed support for accelerated INT4/INT8 processing for Xbox Series X (for the record, DLSS uses INT8) but Sony has not confirmed ML support for PlayStation 5, nor a clutch of other RDNA 2 features that are present for the next generation Xbox and on PC via DirectX 12 Ultimate support on upcoming AMD products.
Broadly speaking then, the Xbox Series X GPU has around 50 per cent of the RTX 2060's machine learning processing power. A notional DLSS port would see AI upscaling take 5ms to complete, rather than a 2060's circa 2.5ms. That's heavy, but still nowhere near as expensive as generating a full 4K image - and that's assuming that Microsoft isn't working on its own machine learning upscaling solution better suited to console development (spoilers: it is - or at least it was a few years back). In the meantime though, DLSS is the most exciting tech of its type - we're sure to see the technology evolve and for Nvidia to leverage a key hardware/software advantage. The only barrier I can see is its status as a proprietary technology requiring bespoke integration. DLSS only works as long as developers add it to their games, after all.
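The Series X estimate above is a straightforward linear scaling of the article's figures - a rough assumption, since real ML workloads rarely scale perfectly with raw throughput, but it makes the reasoning explicit:

```python
def scaled_upscale_cost(rtx2060_cost_ms, relative_ml_power):
    # Naive linear model: if a GPU has a fraction 'relative_ml_power'
    # of the RTX 2060's machine learning throughput, assume the same
    # workload takes proportionally longer. A rough estimate only -
    # memory bandwidth and precision support also matter.
    return rtx2060_cost_ms / relative_ml_power

# Article's figures: ~2.5ms on RTX 2060, Series X at ~50% of its ML power.
series_x_cost = scaled_upscale_cost(2.5, 0.5)
print(f"Estimated Series X upscale cost: {series_x_cost:.1f}ms")  # → 5.0ms
```

Even at 5ms, that's still far cheaper than shading four times the pixels natively - the trade-off holds, it's just tighter on console-class hardware.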
As exciting as the prospects for machine learning upscaling are, I also expect to see continued development of existing non-ML reconstruction techniques for the next-gen machines - Insomniac's temporal injection technique (as seen in Ratchet and Clank and Marvel's Spider-Man) is tremendous and I'm fascinated to see how this could evolve given access to the PS5's additional horsepower. Maybe the developer is leaning into this to achieve its 60fps mode for Marvel's Spider-Man: Miles Morales? Even with the arrival of Xbox One X and its 'true 4K' marketing - and even the focus on native 4K at the PS5 games reveal - the truth is that the concepts of dynamic resolution scaling, temporal supersampling and perhaps even checkerboarding may well persist into the next generation. Despite the move to next-gen hardware, GPU resources will still be finite - and it's likely that ultra HD will still be more of a 'destination' and the route taken to get there will vary very much on a game by game basis.