
Inside DLSS 3.5 and Cyberpunk 2077 Phantom Liberty: discussing the future of PC graphics

Updated with the DF deep dive into DLSS 3.5 in CP2077 2.0.

Cyberpunk 2077 Phantom Liberty key art
Image credit: CDPR

UPDATE: Digital Foundry has just completed its review of Cyberpunk 2077 2.0 on PC, with a strong focus on the new ray tracing innovations added to the game in the form of DLSS 3.5 ray reconstruction. This new technology aims to drastically improve denoising. With current forms of RT, rays are traced - but they are limited in number compared to reality, producing a 'noisy' image that requires processing to increase coherency. Different effects may require different denoisers and all of them are tuned by human input. Ray reconstruction aims to take that task out of the hands of the developers, relying on machine learning to do a better and faster job.
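To make the noise problem concrete, here's a minimal sketch in Python - a toy Monte Carlo estimate with only a couple of samples per pixel, cleaned up by a simple hand-tuned spatial filter. It's purely illustrative and bears no relation to Nvidia's actual ray reconstruction model, which replaces this kind of hand-tuned filtering with a learned one.

```python
# Toy illustration (not Nvidia's algorithm): why a low ray count produces noise,
# and how a hand-tuned denoiser trades that noise for blur.
import numpy as np

rng = np.random.default_rng(0)

H, W = 64, 64
# A smooth "ground truth" lighting gradient standing in for the true radiance.
true_radiance = np.tile(np.linspace(0.2, 1.0, W), (H, 1))

def trace_pixel(x, y, samples_per_pixel):
    # Monte Carlo estimate: average a handful of noisy ray samples.
    # Real-time budgets allow only a few rays per pixel, hence the noise.
    samples = true_radiance[y, x] + rng.normal(0.0, 0.3, samples_per_pixel)
    return samples.mean()

noisy = np.array([[trace_pixel(x, y, samples_per_pixel=2) for x in range(W)]
                  for y in range(H)])

def box_denoise(img, radius=2):
    # Hand-tuned spatial filter: the kernel size here was chosen by a human,
    # which is exactly the kind of tuning ray reconstruction aims to replace.
    out = np.zeros_like(img)
    padded = np.pad(img, radius, mode="edge")
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1].mean()
    return out

denoised = box_denoise(noisy)
print("RMSE before filtering:", np.sqrt(np.mean((noisy - true_radiance) ** 2)))
print("RMSE after filtering: ", np.sqrt(np.mean((denoised - true_radiance) ** 2)))
```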

The end result is a cleaner image with more detail, less ghosting and faster response when lighting conditions drastically change. Alex Battaglia calls it "a watershed moment for real-time ray tracing" while at the same time acknowledging that it's an emergent technology that still needs finessing - similar to DLSS 2.0 when it launched a few years back. Posterisation effects, oversharpening and some smearing artefacts are eerily reminiscent of the weaknesses we saw with DLSS 2.0, and they can surface here, particularly in low-light scenarios. If the tech follows the same course as DLSS 2.0, we would expect these artefacts to diminish and eventually resolve in due course.

We hope you enjoy the video and find it to be a useful addition to the DLSS 3.5 roundtable we published earlier in the week. The embargo for publishing video review content for Phantom Liberty lifts next week, and we'll be reporting back on that, looking more closely at the console releases.

With Cyberpunk 2077 2.0 now available and the DLSS 3.5 enhancements in place, here's Digital Foundry's 26-minute deep dive into ray reconstruction: how it works, why we need it, where it excels and where we hope to see improvement. Watch on YouTube

Original Story: How's this for a crossover episode? Alex Battaglia recently hosted a roundtable discussion on AI and the future of game graphics, sharing the stage with representatives from GPU maker Nvidia, Cyberpunk 2077 developers CD Projekt RED and the PCMR subreddit. The talk comes on the eve of the release of Cyberpunk 2077 2.0 and the game's Phantom Liberty expansion, featuring a newly expanded range of technologies including DLSS 3.5 ray reconstruction.

It's a fascinating chat well worth watching in full via the embedded video below, but I'd like to dig into one aspect of the conversation that I found particularly interesting: the idea that while image upscaling and frame generation techniques could be seen as crutches that let developers skip optimisation work, viewed another way they are tools that allow visuals to reach heights that wouldn't otherwise be possible.

Jakub Knapik of CDPR makes the point quite eloquently by comparing Cyberpunk's path-traced graphics with those of Big Hero 6, the 2014 release that was one of the first path-traced animated films - and another project that renders a dense cityscape in many of its scenes. Rendering each frame of the movie took several hours, but less than a decade later, consumer graphics cards are able to render a scene of roughly comparable graphical complexity in Cyberpunk 2077 at 60 frames per second.
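To put that gap in perspective, here's a quick back-of-envelope calculation. The two-hours-per-frame figure is an assumption for the sake of the arithmetic only, not a quoted production statistic.

```python
# Illustrative comparison of offline film rendering vs real-time rendering.
offline_seconds_per_frame = 2 * 60 * 60   # assumed ~2 hours per frame for an offline path-traced film
realtime_seconds_per_frame = 1 / 60       # 60 frames per second target in Cyberpunk 2077

speedup = offline_seconds_per_frame / realtime_seconds_per_frame
print(f"Roughly {speedup:,.0f}x less time per frame")  # ~432,000x under these assumptions
```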

Here's the full roundtable interview, featuring DF's Alex Battaglia alongside Bryan Catanzaro (vice president, applied deep learning research at Nvidia), Jakub Knapik (VP and global art director at CD Projekt RED), Jacob Freeman (GeForce evangelist at Nvidia) and Pedro Valadas (PCMR subreddit founder). Watch on YouTube
  • 00:00:00 Introduction
  • 00:01:10 When did the DLSS 3.5 Ray Reconstruction project start and why?
  • 00:04:16 How did you get DLSS 3.5 Ray Reconstruction up and running?
  • 00:06:17 What was it like to integrate DLSS 3.5 for Cyberpunk 2077?
  • 00:10:21 What are the new game inputs for DLSS 3.5?
  • 00:11:25 Can DLSS 3.5 be used for hybrid ray tracing titles and not just path traced ones?
  • 00:12:41 What is the target performance budget for DLSS 3.5?
  • 00:14:10 Is DLSS a crutch for bad performance optimisation in PC games?
  • 00:20:19 What makes machine learning specifically useful for denoising?
  • 00:24:00 Why is DLSS naming kind of confusing?
  • 00:27:03 What did the new denoising enable for Cyberpunk 2077's graphical vision?
  • 00:32:10 Will Nvidia still focus on performance without DLSS at native resolutions?
  • 00:38:26 What prompted the change internally at Nvidia to move away from DLSS 1.0 and pursue DLSS 2.0?
  • 00:43:43 What do you think about DLSS mods for games that lack DLSS?
  • 00:49:52 Where can machine learning go in the future for games beyond DLSS 3.5?

It's incredible progress and only possible thanks to these many 'cheats' - image upscaling, frame generation and now ray reconstruction - alongside significant progress in terms of hardware that accelerates many parts of the ray tracing pipeline.

As Jakub says, basically anything that allows for greater performance could be considered a cheat in the same way - even long-established technologies like level of detail (LOD), which simplifies distant geometry to save performance - so what really matters is how these tools are used.
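For readers unfamiliar with LOD, here's a minimal, engine-agnostic sketch of distance-based LOD selection. The distance thresholds and triangle counts are illustrative only and don't come from any particular engine.

```python
# Toy distance-based level of detail (LOD) selection - the kind of long-established
# "cheat" the roundtable compares upscaling and ray reconstruction to.
def select_lod(distance_to_camera: float) -> str:
    lod_table = [
        (10.0, "LOD0 (full detail, ~100k triangles)"),
        (50.0, "LOD1 (reduced detail, ~25k triangles)"),
        (200.0, "LOD2 (low detail, ~5k triangles)"),
    ]
    for max_distance, mesh in lod_table:
        if distance_to_camera <= max_distance:
            return mesh
    return "LOD3 (billboard / impostor)"

for d in (5.0, 30.0, 120.0, 500.0):
    print(f"{d:6.1f} m -> {select_lod(d)}")
```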

As we saw with Immortals of Aveum, it's possible to rely too heavily on image reconstruction: that game exhibited an overly soft image on consoles and high hardware requirements on PC... whereas Starfield didn't use all the tools that were available, arriving without DLSS or XeSS support and with subpar performance on Nvidia and Intel graphics cards.

Here's the initial DLSS 3.5 announcement video from Nvidia, featuring roundtable participant Bryan Catanzaro. Watch on YouTube

As Bryan and Jacob go on to mention, the development of these sorts of technologies is all about being smarter with how you render each frame, and while there can be differences in how effectively developers use the technologies available to them, the goal is to let both developers and end users attain a good balance of performance and fidelity.

There's plenty more discussed than I have time to cover here - including the development of different DLSS versions, how denoising improvements impacted the development of Cyberpunk's art and more general musings about what machine learning could be used for in game graphics beyond DLSS 3.5.

It's all fascinating stuff, so do check out the video embedded below. While there is a visual component, I found the talk also worked well as an audio-only podcast, so if you fancy an hour-long discussion to accompany you as you drive or do chores, you've got that option too.

Thanks to Bryan, Jakub, Jacob and Pedro for joining us for this one, and of course stick with us for more on the graphics of Cyberpunk 2077: Phantom Liberty. We'll be taking a much closer look at DLSS 3.5's innovations as presented in the 2.0 version of Cyberpunk, and of course we'll be examining PlayStation 5 and Xbox Series versions of the game in due course.
