
Intel's XeSS tested in depth vs DLSS - the Digital Foundry technology review

A strong start for a new, more open AI upscaler.

Intel's debut Arc graphics cards are arriving soon and ahead of the launch, Digital Foundry was granted exclusive access to XeSS - the firm's promising upscaling technology, based on machine learning. This test - and indeed our recent interview with Intel - came about in the wake of our AMD FSR 2.0 vs DLSS vs native rendering analysis, where we devised a gauntlet of image quality scenarios to really put these new technologies through their paces. We suggested to Intel that we'd love to put XeSS to the test in a similar way and the company answered the challenge, providing us with pre-release builds and a top-of-the-line Arc A770 GPU to test them on.

XeSS is exciting stuff. It's what I consider to be a second-generation upscaler. First-gen efforts, such as checkerboarding, DLSS 1.0 and various temporal super-samplers, attempted to make half-resolution images look like full-resolution images, which they achieved to varying degrees of quality. Second-generation upscalers such as DLSS 2.x, FSR 2.x and Epic's Temporal Super Resolution aim to reconstruct from quarter resolution. So in the case of 4K, the aim is to make a native-like image from just a 1080p base pixel count. XeSS takes its place alongside these technologies.

To do this, XeSS uses information from current and previous frames, jittered over time. This information is combined or discarded based on an advanced machine learning model running on the GPU - and in the case of Arc graphics, it runs directly on the XMX (matrix multiplication) units in its Xe cores. The Arc A770 is the largest GPU in the Arc stack, with 32 Xe cores in total. Arc's XMX units process the model in int8 format in a massively parallel way, making it quick. For non-Arc GPUs, XeSS works differently: they use a "standard" (less advanced) machine learning model, with Intel's integrated GPUs using a dp4a kernel and non-Intel GPUs using a kernel built on technologies enabled by DX12's Shader Model 6.4. That means there's nothing to stop you running XeSS on, say, an Nvidia RTX card - but as it's not tapping into Nvidia's own ML hardware, you shouldn't expect it to run as fast as DLSS. Similarly, as it uses the "standard" model, there may be image quality concessions in comparison to the more advanced XMX model exclusive to Arc GPUs. The performance and quality of XeSS on non-Intel cards is something we'll be looking at in future.
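To make that data flow a little more concrete, here's a minimal, purely illustrative Python sketch of the general idea behind this class of temporal upscaler: jittered low-resolution frames are accumulated into a higher-resolution history buffer, with a blend weight deciding how much old data to keep or discard. The resolutions, the fixed blend constant and the naive upsample are all assumptions for illustration - in XeSS the blend is decided per pixel by the ML model and resampling uses the jitter offsets and motion vectors, none of which is shown here.

```python
import numpy as np

# Hypothetical figures for illustration: performance-mode input for a 1080p output.
LOW_W, LOW_H = 960, 540
OUT_W, OUT_H = 1920, 1080

# High-resolution history buffer that accumulates detail over multiple frames.
history = np.zeros((OUT_H, OUT_W), dtype=np.float32)

def render_low_res(frame_index):
    # Stand-in for the game's jittered low-resolution render of one frame.
    rng = np.random.default_rng(frame_index)
    return rng.random((LOW_H, LOW_W), dtype=np.float32)

def upsample(img):
    # Naive nearest-neighbour upsample to output resolution; a real upscaler
    # resamples using the sub-pixel jitter offset and motion vectors instead.
    return np.repeat(np.repeat(img, OUT_H // LOW_H, axis=0), OUT_W // LOW_W, axis=1)

for frame in range(8):
    current = upsample(render_low_res(frame))
    # In XeSS this weight comes per pixel from the ML model; a fixed constant
    # here simply illustrates how history is kept or discarded.
    blend = 0.1
    history = blend * current + (1.0 - blend) * history
```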

Here it is - a 30-minute deep-dive into the quality of Intel's XeSS upscaler, stacked up against native resolution rendering and DLSS, tested across a range of resolutions. Watch on YouTube

Even though XeSS has a cost in its own right - even with XMX unit acceleration - its benefit comes in the performance it saves compared to native resolution rendering. We did the vast bulk of our XeSS tests using a build of Shadow of the Tomb Raider, with integration into the game carried out by Intel itself. Testing a 4K output, the performance mode increased frame-rate by 88 percent, balanced mode by 66 percent, quality mode by 47 percent and ultra quality by 23 percent. The amount of performance saved over native resolution rendering depends on the rendering load in question. In Tomb Raider with everything maxed at 4K, XeSS offers a good performance uptick in its performance or balanced modes.

If you reduce the output resolution, though, or reduce the settings quality, the GPU is less taxed and the gains will be less impressive. For example, at 1440p at those same settings, performance mode only increases performance by 52 percent. Conversely, the heavier the rendering load, the greater the savings. For example, in 3D Mark's XeSS test at 4K, with tons of ray traced reflections and more, XeSS in performance mode delivers 177 percent more performance than native rendering. Put simply, the more intensive the rendering, the bigger the gain from using XeSS.
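As a quick worked example of what those percentages mean in practice - using a purely hypothetical native frame-rate rather than a measured one - the arithmetic is simply:

```python
# Hypothetical native frame-rate purely for illustration; the uplift
# percentages are the ones measured in Shadow of the Tomb Raider at 4K.
native_fps = 40.0

uplift_percent = {
    "Ultra Quality": 23,
    "Quality": 47,
    "Balanced": 66,
    "Performance": 88,
}

for mode, uplift in uplift_percent.items():
    xess_fps = native_fps * (1.0 + uplift / 100.0)
    print(f"{mode}: {native_fps:.0f}fps native -> {xess_fps:.1f}fps with XeSS")
```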

I mentioned ultra quality mode and this is a setting not offered by competing upscalers, so here's a table showing how all of these scalers and their modes compare, with a short calculation after it. DLSS, for example, tops out with its quality mode, upscaling from a native 1440p for a 4K output. XeSS's ultra quality mode goes further, working from 1656p. With all of these scalers, the more raw data you provide via higher internal resolutions, the higher the quality of the output tends to be.

Upscaler and mode               1080p Output    1440p Output    2160p Output
AMD FSR 2.x Performance         960x540         1280x720        1920x1080
Nvidia DLSS 2.x Performance     960x540         1280x720        1920x1080
Intel XeSS Performance          960x540         1280x720        1920x1080
AMD FSR 2.x Balanced            1129x635        1506x847        2227x1253
Nvidia DLSS 2.x Balanced        1114x626        1486x835        2259x1270
Intel XeSS Balanced             1120x630        1493x840        2240x1260
AMD FSR 2.x Quality             1280x720        1706x960        2560x1440
Nvidia DLSS 2.x Quality         1280x720        1706x960        2560x1440
Intel XeSS Quality              1280x720        1706x960        2560x1440
Intel XeSS Ultra Quality        1472x828        1962x1104       2944x1656
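For a rough sense of what those internal resolutions mean in terms of rendering load, here's a small calculation using the XeSS figures from the table above - the per-axis factors and pixel shares are derived from those numbers, not official Intel specifications:

```python
# XeSS internal resolutions for a 3840x2160 output, taken from the table above.
output_w, output_h = 3840, 2160
xess_modes = {
    "Performance": (1920, 1080),
    "Balanced": (2240, 1260),
    "Quality": (2560, 1440),
    "Ultra Quality": (2944, 1656),
}

out_pixels = output_w * output_h
for mode, (w, h) in xess_modes.items():
    scale = output_w / w                       # per-axis upscaling factor
    pixel_share = (w * h) / out_pixels * 100   # share of the native pixel count actually rendered
    print(f"{mode}: {w}x{h} ({scale:.2f}x per axis, {pixel_share:.0f}% of native pixels)")
```

Run it and performance mode comes out at a quarter of the native pixel count, while ultra quality renders just under 60 percent of it, which is why the lighter modes save so much more GPU time.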

Let's talk about the actual test methodology. To a certain degree, there's only so far the written word can go when talking about image quality, so my advice would be to watch the video embedded at the top of the page and to take a look at the screenshot comparisons further on down. However, in putting these scalers through their paces, I primarily test bright scenes with no motion blur, so darkness and distortion don't impact the comparisons. I also typically test with sharpness sliders at the minimum - the issue here being that XeSS does not have a sharpness slider. Instead, the ML model is tuned to produce what Intel believes are the best results - something that may change in future. Also: I typically test 4K outputs in performance mode, 1440p in balanced mode and 1080p in quality mode. The lower the output resolution, the more raw image data reconstruction techniques typically require to produce a good image.

My tests start with a look at still imagery - where I would expect all upscalers to produce nigh-on perfect results. From there on out, it's about testing scenarios where temporally-based upscalers traditionally have problems: starting with flickering in fine detail and moiré patterns, along with associated challenges such as vegetation and hair rendering, and what happens with transparencies, including water and particle effects. The impact on post-process quality (for example, depth of field) is also tested and in the case of Shadow of the Tomb Raider, I looked at ray tracing quality too. RT is a challenge for these upscaling algorithms as the number of rays traced correlates with rendering resolution - and since we're upscaling, we should assess the impact on quality versus native resolution RT.

Another area to address concerns movement. The basic concept of these upscaling techniques is to re-use information from prior frames and inject it into the current one. However, fast camera movement, on-screen animation and 'disocclusion' (for example, the on-screen Lara suddenly moving to reveal new detail) pose a stern challenge, as new detail needs to be resolved with less data to work with.

[Comparison: 4K Native/TAA vs 4K XeSS Quality vs 4K DLSS Quality] XeSS in quality or performance mode at 4K can look vastly superior to native 4K with TAA in terms of detail.
[Comparison: 1080p Native/TAA vs 1080p XeSS Quality vs 1080p DLSS Quality] XeSS can still beat native with TAA at lower resolutions, but it is visibly less robust than DLSS in the same mode at times.
[Comparison: 4K Native/TAA vs 4K XeSS Performance vs 4K DLSS Performance] Extreme animation close to the camera holds up with impressive quality, with no fizzle or ghosting, and can be superior to DLSS.
[Comparison: 4K Native/TAA vs 4K XeSS Quality vs 4K DLSS Quality] Moiré artefacts are one of XeSS's issues - check out how they manifest on the tablet in the candlelight.

The video and screenshots go into much more depth, but I can summarise my overall findings, and the good news is that on its first attempt, Intel has delivered an upscaling technology that's comparable to DLSS and, like the Nvidia technique, can exceed the quality of native resolution rendering with standard TAA. There is still some work for Intel to do, though. For starters, the moiré effect stands out as one of XeSS's clearest weaknesses. Even DLSS is not completely immune to this, but XeSS clearly presents the artefact in more scenarios. This is definitely the biggest difference seen between XeSS, DLSS and native resolution rendering, and the area where I think Intel still needs to do the most work.

In terms of transparencies, differences between DLSS and XeSS are minor, with XeSS being a touch blurrier - though we don't know the extent to which DLSS sharpening may be influencing the comparison. Water is quite different and another major point of divergence, but I imagine this one is more of an integration issue than anything else. At native resolution and using DLSS, it works fine, but with XeSS it does not, causing the water to jitter, with the effect increasing the lower the input resolution. As of right now, it is a pretty distracting artefact, especially since it means increased aliasing for anything in the water itself when the camera moves. DLSS has its own issues in terms of clarity with water, but the jitter artefact is more off-putting.

Particle rendering can be challenging for these upscalers, but XeSS and DLSS both work well here. However, hair rendering has a pixellated look with XeSS that puts it behind both DLSS and native resolution rendering - in fact, owing to the game's standard TAA method, I think DLSS actually beats native in this respect. Meanwhile, the game's RT shadows in both DLSS and XeSS lack the clarity of native output, as the lower internal resolution means fewer rays are traced. Usually, XeSS and DLSS look the same, but occasionally some shadows with XeSS exhibit a slight wobble to their edges - almost a slight flicker - not visible on the DLSS side. But that was about it.

Rich Leadbetter and Alex Battaglia talk to Intel Fellow Tom Petersen about Arc graphics - what's happened in the last year, what we should expect at launch - and how the tech pushes machine learning and ray tracing features. Watch on YouTube

Interestingly, the areas where I expected XeSS to be most challenged show some good results against DLSS. Disocclusion is where I found AMD's FSR 2.0 to struggle most against native resolution rendering and DLSS, yet in my tests I found no discernible difference between native rendering, DLSS and XeSS. Disocclusion does not seem to cause large image discontinuities with XeSS, which is an excellent result. XeSS is also adept at handling extreme movement close to the camera, even beating out DLSS, with clearer image features in motion when Lara attacks.

Shadow of the Tomb Raider is an excellent test case owing to its high quality, high frequency visuals, ray tracing support and the fact that it's a third-person action game - the main character often revealing detail that could be hard for an upscaler to track. And having looked in extreme depth at literally hundreds of captures, I think I can draw an informed verdict on XeSS's quality in its debut showing - in this implementation, at least.

Firstly, I think the performance uplift is great, but that comes with the territory with these reconstruction techniques. I honestly think it is almost always worth it to use image reconstruction over native rendering, especially since XeSS will work on nearly all modern GPUs from any vendor. In regards to the quality of the resolve, I am generally very positive about XeSS as it did not seem to have issues in those areas that are notoriously hard to get right. Fast movement had a coherent look and did not alias or smear, disocclusion did not cause large fizzling and particles and transparencies generally look good, at least at 4K performance mode or higher. As a testament to its quality, I had to double and triple check my comparisons to make sure I was not mixing up DLSS and XeSS.

The God of War FSR 2.0 vs DLSS vs native resolution rendering video was DF's attempt to set the standard for assessing upscaling technologies and was the content we used to pitch Intel in getting a first look at XeSS. Watch on YouTube

But still, there are issues here, the biggest being the moiré artefacts, which manifest nearly every time tiling detail shows up, tarnishing an otherwise very good presentation. Similarly, there is the jittered water issue, which needs looking at - although, as I said earlier, I think this is an integration problem as opposed to some kind of fundamental weakness with XeSS itself.

Even with these issues, I would say XeSS is shaping up to be a great success. Like the best upscaling solutions, it can beat the look of native 4K in some scenarios - even in performance mode, which uses a 1080p base image. It is directly competitive with DLSS in scenarios that I consider to be 'hard mode' for image reconstruction techniques. That said, it must be mentioned that it is competitive with a slightly older version of DLSS in Shadow of the Tomb Raider, and a more recent title with a more modern integration could see differences. And yes, it goes without saying that our tests were based on just one implementation - many more are coming soon enough when Arc launches. We did get access to some other XeSS implementations though, and based on what I have seen in Diofield Chronicles and 3D Mark's XeSS test, it's clear that the Intel tech can produce great imagery, outshining native rendering at times.

We'll have more on XeSS soon, including how it runs on other GPUs. Yes, it does work - I tested Shadow of the Tomb Raider on an RTX 3070, for example - and of course, we can't wait to see it running in more games. Modern Warfare 2, for example, should include it on day one, while the tech will also find a good home in the likes of Death Stranding and Hitman 3, amongst many others. Plug-ins for Unreal Engine and Unity? They're already done, apparently. So despite the extreme depth of this review, in a sense the XeSS story has only just begun, and we look forward to sharing more in the future.
