We originally ran this article in May 2015, but with the release this week of Windows 10, we thought we'd revisit DirectX 12 on the launch version of the OS, using the latest drivers in order to update the benchmark data. We've also replaced the AMD A10 7800 benches with the same tests run on an FX 6300 - this is a more direct equivalent to the Core i3 4130. We also re-tested Call of Duty: Advanced Warfare and found that significant issues remain with AMD's DX11 performance on less capable processors on both Windows 8.1 and Windows 10.
There's a palpable air of excitement surrounding the arrival of Windows 10 and DirectX 12 - a sense that the PC will finally shrug off the shackles holding it back and that the cutting-edge components released by AMD, Nvidia and Intel will finally reach something approaching their full potential. We experimented with Windows 10 this week and came to a highly satisfying conclusion - DX12 offers huge advantages to virtually all PC owners, but it will be a boon to AMD in particular, perhaps going some way to restoring a degree of plurality to the PC hardware market.
In the here and now - in the era of DirectX 11 - life isn't particularly easy for AMD. Its problems in the CPU market are well documented. Its Bulldozer architecture bet the farm on numerous, slower cores in a world where DX11-driven gaming benefits more from fewer, faster cores, giving Intel a virtually unassailable advantage. AMD still produces 32nm and 28nm processors, while Intel is now down to 14nm, giving it power efficiency advantages on top of its inherent performance improvements.
In the graphics card market, AMD is more competitive - but it still faces significant challenges from its implacable rival, Nvidia. Thanks to some well-judged price adjustments and the recent arrival of 300 series graphics hardware, the red team has worthy hardware to compete with most of Nvidia's product line. But what's become increasingly apparent over the last nine months is that AMD's DirectX 11 driver is sub-optimal - particularly relevant for those looking to build a budget PC, an area where AMD offers the best theoretical price/performance in the market.
We first noticed the issue back in November 2014, when we tested Call of Duty: Advanced Warfare. A Core i3 and i7 run the game in a very similar manner if you have an Nvidia card, but if you're using an AMD GPU, performance collapses whenever the system is drawing a more complex scene. Advanced Warfare isn't a one-off scenario either. Tune your system to favour frame-rate over visual effects and you'll run into a CPU bottleneck on AMD hardware much faster than you will with the Nvidia equivalent. Take a look at this shot of The Crew. The R9 280 is a great piece of hardware and phenomenal value at £130-£140, but pair it with a Core i3 instead of a more capable quad-core processor, and a third of its performance vanishes in draw-intensive areas. Meanwhile, once again, the Nvidia equivalent card holds up effectively.
To cut to the chase - most PC hardware reviews will tell you that the AMD graphics cards aimed at the budget gamer are more capable than the Nvidia equivalents, and in a benchmarking scenario where the GPU is paired with a high-end CPU, that is undoubtedly the case. However, in CPU-limited scenarios, AMD's hardware is let down heavily by the sub-optimal driver, meaning that in many modern games (but we should stress - not all), Nvidia's less capable parts actually hand in more consistent performance. It's for this reason that our budget PC build features an Nvidia GeForce GTX 750 Ti, even though AMD offers a competing product often on sale for just a few pounds more - the R9 270X - which absolutely monsters it in terms of raw benchmarks.
So, what's going on? Well, before your graphics card renders any scene, the CPU needs to simulate the in-game world, then prepare the instructions for the GPU to draw the scene. The more complex the scene, the more 'draw calls' are prepared by the CPU. Frame-rate tanks on Call of Duty in more complex scenes - when there's more stuff to draw - then normal service resumes in less complex areas. It's the same with The Crew: frame-rates are fine outside of city scenes, but once you enter more complex environments, performance suffers. In short, Nvidia's driver is processing the same draw calls much more efficiently than its AMD equivalent, maintaining high frame-rates and leaving more CPU resources open to the actual game logic.
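To illustrate the effect, here's a toy model of our own devising - emphatically not Futuremark's methodology, and the millisecond costs are invented purely for the sake of the example. A CPU-limited frame can be thought of as fixed game-logic work plus per-draw-call driver overhead:

```python
# Toy model: frame time = game-logic cost + (draw calls x per-call driver cost).
# All costs below are invented for illustration, not measured values.

def frame_rate(draw_calls, per_call_cost_ms, game_logic_ms=8.0):
    """Frames per second for a purely CPU-limited frame."""
    frame_time_ms = game_logic_ms + draw_calls * per_call_cost_ms
    return 1000.0 / frame_time_ms

# A simple scene versus a draw-heavy city scene, with an efficient
# and an inefficient driver.
for scene, calls in (("simple scene", 2000), ("complex scene", 8000)):
    fast = frame_rate(calls, per_call_cost_ms=0.001)  # efficient driver
    slow = frame_rate(calls, per_call_cost_ms=0.004)  # inefficient driver
    print(f"{scene}: fast driver {fast:.0f}fps, slow driver {slow:.0f}fps")
```

The point of the sketch is that the slow driver's frame-rate collapses disproportionately as draw calls rise - exactly the pattern we see in Advanced Warfare and The Crew, where performance is fine in sparse areas and tanks in complex ones.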
We kept AMD fully up to date with our tests. Earlier this year the firm told us that "there's work going on behind the scenes", and at an AMD press event in Munich there was talk of DirectX improvements in the driver released alongside the 300 series graphics hardware - but the most recent results in the API tests here are still under-par. Bearing in mind that DirectX 11 is going nowhere and will continue to dominate gaming through 2015 at least, we hope that more wide-ranging improvements will follow. The good news is that a key component of DirectX 12 is radically more efficient draw call management, and benchmarks reveal that AMD's DX12 driver performance is looking exceptionally impressive. It's a game-changer - both for the firm's graphics cards and potentially for its processors too.
We know this because while there are no DX12 games out there right now, AMD and Nvidia's drivers for DX12 are ready, while benchmarking specialist Futuremark has updated its 3DMark tool with an API overhead measurement tool that floods the system with draw calls, allowing us to compare driver performance across AMD and Nvidia cards on DX11 and DX12. There's even support for AMD's now-defunct Mantle API, which illustrates that the firm was clearly aware of its DirectX issues and was looking towards more radical solutions, even while DX12 was in its genesis.
Looking at the results, some trends become clear. The sub-optimal nature of AMD's DirectX 11 driver, amplified here with a draw call-specific bench, is put into sharp relief. Firstly, not only is AMD's single-thread performance slower, but the driver has very little - if any - optimisation for multi-core CPU architectures. Nvidia is faster, and it can scale its load over three threads. The Core i5 - the processor with the fastest single-core performance in all of these tests - is the only chip capable of breaking the 1m draw call threshold on AMD hardware, somewhat vindicating our previous contention that enthusiast-level GPUs require an Intel quad-core CPU to get the most out of them. By contrast, Nvidia's draw call results still hold up well on less capable processors.
Remarkably, the results also suggest that Nvidia's driver is much better suited to AMD CPUs than AMD's own graphics cards are, particularly when it comes to the scalability of lower-end Nvidia GPUs on the eight-core FX 8350. Curiously, though, Nvidia's scaling over multiple threads isn't so effective on the less capable Core i3 4130 and AMD's FX 6300 - bearing in mind the great results seen on the FX 8350, the very similar FX 6300's poor showing on multi-threaded tests is puzzling. However, even without multi-threading, Nvidia's driver is still significantly faster at processing the same draw call load on a single core, as the single-thread tests demonstrate.
| Entry Level CPUs | GeForce GTX 750 Ti | Radeon R7 260X | GeForce GTX 970 | Radeon R9 290X |
| --- | --- | --- | --- | --- |
| i3 4130 DX11 Single Thread | 1.1m | 0.7m | 1.2m | 0.7m |
| i3 4130 DX11 Multi-Thread | 1.2m | 0.7m | 1.2m | 0.7m |
| i3 4130 Mantle | - | 7.6m | - | 7.9m |
| i3 4130 DX12 | 8.1m | 8.5m | 9.6m | 8.8m |
| FX 6300 DX11 Single Thread | 1.1m | 0.8m | 1.1m | 0.8m |
| FX 6300 DX11 Multi-Thread | 1.3m | 0.7m | 1.3m | 0.7m |
| FX 6300 Mantle | - | 10.1m | - | 10.1m |
| FX 6300 DX12 | 7.7m | 12.6m | 12.5m | 12.7m |
Once we move onto the Mantle and DirectX 12 results, AMD more than redeems itself. There are immense boosts to draw call throughput across the board on every processor tested, with the largest gain coming from the FX 8350, where the R9 290X receives a frankly monumental uplift in the order of 1,600 per cent when single-core DX11 and DX12 scores are compared. Remember, we are only benching one particular element of the rendering process - but regardless, the uplift is phenomenal. Also note that the FX 6300's DX12 results on the R7 260X, GTX 970 and R9 290X comprehensively beat the more expensive Core i3 4130.
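For reference, that figure falls straight out of the tables: taking the FX 8350/R9 290X pairing (0.9m draw calls on single-threaded DX11 versus 14.8m on DX12), the uplift works out at roughly 1,544 per cent:

```python
# FX 8350 + R9 290X figures from the benchmark tables (draw calls per second).
dx11_single_thread = 0.9e6
dx12 = 14.8e6

uplift = (dx12 - dx11_single_thread) / dx11_single_thread * 100
print(f"{uplift:.0f} per cent")  # prints "1544 per cent"
```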
The leap in performance applies to both of the more recent APIs, and we happily note that AMD's DX12 results actually improve over Mantle (its own tech, remember) in every test. Also worthy of comment is that AMD is clearly back in the game against Nvidia in terms of DX12 driver overhead - indeed, its lower-end GPUs actually process draw calls faster than their Nvidia equivalents (presumably a hardware limit on the GTX 750 Ti, bearing in mind the storming GTX 970 results). The good news is that every piece of hardware we tested sees a boost courtesy of DX12 - we're seeing far higher utilisation of both CPU and GPU. The figures demonstrate in particular how under-utilised the geometry engines on our GPUs are - what other areas of the graphics hardware could DX12 potentially unlock? The prospects are tantalising.
The good news doesn't end there. In typical DirectX 11 gaming, the Core i5 4690K is one of the fastest reasonably priced CPUs on the market and runs rings around the similarly priced FX 8350. We have to remember that draw call processing is just one element of the CPU workload, but the gap in this area closes significantly with DX12 and the AMD chip is much more competitive - not bad considering that we're comparing a 2014 Intel processor with an AMD rival that's actually two years older.
| Mainstream CPUs | GeForce GTX 750 Ti | Radeon R7 260X | GeForce GTX 970 | Radeon R9 290X |
| --- | --- | --- | --- | --- |
| i5 4690K DX11 Single Thread | 1.4m | 1.1m | 1.3m | 1.1m |
| i5 4690K DX11 Multi-Thread | 2.1m | 1.0m | 2.1m | 1.0m |
| i5 4690K Mantle | - | 13.0m | - | 13.2m |
| i5 4690K DX12 | 8.1m | 14.1m | 14.5m | 14.7m |
| FX 8350 DX11 Single Thread | 1.2m | 0.9m | 1.2m | 0.9m |
| FX 8350 DX11 Multi-Thread | 2.1m | 0.8m | 2.1m | 0.8m |
| FX 8350 Mantle | - | 12.9m | - | 13.3m |
| FX 8350 DX12 | 7.7m | 14.1m | 16.0m | 14.8m |
The data presented in this article should be put into context. Massively increasing draw calls is a fascinating metric, but it is only one small component of a typical game engine. It's going to take new engines built explicitly around the new API to see real gains in terms of denser, richer worlds, but the opportunities offered by the inevitable DX12 patches we'll see in the short term are still exciting: the PC experience is built around scalability, but as we've noted recently - particularly in the underperformance of the top-tier Titan X, GTX 980 Ti and R9 Fury X in certain scenarios - something is holding back PC gaming from making the most of its hardware advantage. We're fascinated to see if DX12 can make the difference.
But from a hardware perspective, all the signs are that DX12 is a key component in bringing more competition to the market. The figures on this page strongly suggest that AMD's many-core CPU strategy could finally start to pay off - and in combination with the recent announcement that its upcoming Zen architecture offers a 40 per cent improvement in instructions per clock, Intel may no longer be the default choice for gamers. We'll just have to see, but competition drives performance and we really want to see AMD back in the game.
In the graphics market, AMD has often been criticised for its lacklustre approach to driver support. In truth, both vendors have their issues, but in terms of DX11 driver efficiency, Nvidia is still significantly ahead. We would like to see parity between AMD and Nvidia on driver API overhead, but the benches strongly suggest that the groundwork is in place for the red team to be much more competitive on the software side once DX12 is the focus for PC development.
But the real question is how long we'll have to wait until that is the case. Microsoft is effectively giving away Windows 10 for free right now - a big boost to DX12 adoption, which should help the API take over sooner rather than later. In the short term, we could also see selected games arriving with both DX11 and DX12 support. However, low-level integration - where we'll see the largest gains - could be some way off yet. Games take years to develop, and key releases this year will almost certainly still target DX11. Indeed, prominent developers - DICE's Johan Andersson amongst them - are still considering whether to adopt DX12 as the minimum spec for next year's games. As Andersson says, clearly there are major benefits - so here's hoping the transition occurs sooner rather than later.