
How powerful is Nvidia's new 12GB Titan X?

All signs point to a new benchmark in PC graphics power.

UPDATE 18/3/15 1:24pm: We now know exactly how powerful Titan X is - check out the full Digital Foundry Titan X review for detailed performance metrics on demanding games at multiple resolutions.

Nvidia has done it again. The evidence suggests that, just like its predecessor two short years ago, the new Titan X will be the fastest single-chip graphics card on the market, loaded with enough RAM to cope with even the most demanding games - most likely with plenty to spare. And, just as before, we should expect the Titan X to be inordinately expensive - an aspect that only seemed to make the original more desirable to enthusiasts, if sales figures are anything to go by.

Right now, not much is known with absolute certainty about how powerful the Titan X is - but it was the GPU of choice for VR demos at last week's GDC 2015. Crytek used it for its Back to Dinosaur Island demo, while Epic showcased WETA Digital's Thief in the Shadows and its own Showdown demo on the new hardware. We can reasonably assume that it's more powerful than the current top dog, the GeForce GTX 980 - but by how much?

Actual figures on the technical make-up of the card are limited right now - full disclosure is planned for Nvidia's own GTC event a couple of weeks from now. Nvidia CEO Jen-Hsun Huang revealed the card at Epic's Unreal Engine 4 keynote at GDC last Wednesday, giving away just two facts about the product - firstly that it has 12GB of memory, and secondly that it packs eight billion transistors. Subsequently we learned that the new chip at the heart of the card is called GM200 - effectively confirming the same 28nm Maxwell architecture as the existing GTX 980, and making this the true successor to the original Titan's Kepler-based GK110 processor. Photography of the card by PCPer confirms six-pin and eight-pin power inputs, meaning that power consumption should be in the same rough ballpark as the original Titan.
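That last inference is straightforward to sanity-check: the PCI Express specification allows a board to draw up to 75W from the slot itself, 75W from a six-pin connector and 150W from an eight-pin one, so the Titan X's connector loadout caps it at the same 300W ceiling the original 250W-TDP Titan operated under. A minimal sketch of the arithmetic, using those published PCIe limits:

```python
# Board power ceiling from PCI Express connector limits (all values in watts).
# The per-connector limits come from the PCIe specification; the 250W TDP is
# Nvidia's published figure for the original GTX Titan.
PCIE_SLOT = 75    # delivered through the x16 slot itself
SIX_PIN = 75      # auxiliary six-pin connector
EIGHT_PIN = 150   # auxiliary eight-pin connector

ceiling = PCIE_SLOT + SIX_PIN + EIGHT_PIN
print(f"Maximum board power with 6+8-pin: {ceiling}W")                   # 300W
print(f"Headroom over the original Titan's 250W TDP: {ceiling - 250}W")  # 50W
```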

Video: A Boy & His Kite: An Animated Short | Unreal Engine
The Unreal Engine 4 kite demo, based on an open world with over 100 square miles of terrain, was rendered completely in real-time on one Titan X at the recent GDC 2015.

However, in terms of actual GPU performance, all we have to go on is the eight billion transistor figure. With the 28nm process and Maxwell architecture all but confirmed, we can compare the transistor count with the GTX 980's to get some ballpark idea of how much faster the new card is - after all, the vast majority of the extra space on the larger chip will be used to house extra CUDA processing cores. And that's where things get exciting, as potentially we're looking at something in the region of 50 per cent more processing power. Assuming that's the case, we might even think of the Titan X as a GTX 980 and a GTX 960 combined into one mammoth piece of silicon.
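To put numbers on that: GM204 in the GTX 980 comprises around 5.2 billion transistors, while GM206 in the GTX 960 comprises around 2.9 billion. A quick sketch of the arithmetic behind the '980 plus 960' comparison - bearing in mind that transistor count is only a rough proxy for shader resources, since caches and memory controllers don't scale the same way:

```python
# Rough scaling estimate from published transistor counts (in billions).
# Transistor count is only a proxy for CUDA core count - uncore logic,
# caches and memory controllers don't scale linearly - so treat the
# output as indicative rather than a performance prediction.
gm200_titan_x = 8.0   # Nvidia's stated figure for Titan X
gm204_gtx_980 = 5.2   # published GM204 figure
gm206_gtx_960 = 2.9   # published GM206 figure

extra = (gm200_titan_x / gm204_gtx_980 - 1) * 100
print(f"GM200 carries ~{extra:.0f}% more transistors than GM204")  # ~54%
combined = gm204_gtx_980 + gm206_gtx_960
print(f"GTX 980 + GTX 960 combined: {combined:.1f} billion")       # 8.1 billion
```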

The reveal of an overkill 12GB framebuffer also offers up more clues as to the technical make-up of the card. Short of any memory-partitioning shenanigans, it's almost certain that memory bandwidth will increase significantly compared to the GTX 980, via a 384-bit interface between the GM200 chip and the surrounding GDDR5 modules. Of more use to CUDA developers than to gamers, the vast framebuffer probably won't be maxed out by any gaming application for years to come. That said, in testing we recently carried out for our forthcoming GTX 970 RAM investigation, Assassin's Creed Unity with 8x MSAA at 2560x1440 could tap out the full 6GB allocation of the existing Titan (albeit at single-digit frame-rates). Regardless, in an era where nobody seems to know for sure how much memory is required to future-proof a current GPU purchase, a full 12GB of RAM is the equivalent of taking off and nuking the problem from orbit.
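For a sense of what that wider bus means, peak bandwidth is simply the effective data rate multiplied by the bus width. Here's a minimal sketch assuming Titan X retains the GTX 980's 7Gbps GDDR5 modules - an assumption on our part, as the memory clock hasn't been confirmed:

```python
# Peak memory bandwidth = effective data rate x bus width / 8 (bits -> bytes).
# The 7Gbps GDDR5 data rate is carried over from the GTX 980 as an
# assumption - Nvidia has not confirmed Titan X's memory clock.
def peak_bandwidth(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Return peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

gtx_980 = peak_bandwidth(7, 256)   # 224 GB/s on the confirmed 256-bit bus
titan_x = peak_bandwidth(7, 384)   # 336 GB/s on the expected 384-bit bus
print(f"GTX 980: {gtx_980:.0f} GB/s, Titan X (est.): {titan_x:.0f} GB/s")
print(f"Increase: {(titan_x / gtx_980 - 1) * 100:.0f}%")  # 50%
```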

Even with the meagre information available right now, it seems clear that Titan X will significantly outperform everything on the market - albeit at the same mammoth price-point as its predecessor (indeed, some rumours suggest the cost may even rise). With that in mind, the question is whether any GM200-based product will be released at a more realistic price. Realistically, that's not a question of if, but when. The GTX 980 chip - codenamed GM204 - is 398mm² in size, and based on recent figures for how many 28nm chips come out of fabrication free of defects, the sheer size of GM200 guarantees only a modest yield of perfect chips. That suggests Nvidia will almost certainly be sitting on a large cache of GM200 chips that don't make the grade as Titan X processors, but will find a use elsewhere.
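To illustrate why die size hits yields so hard, here's a minimal sketch using the classic Poisson yield model, Y = exp(-A·D). Both the defect density and the GM200 die area below are illustrative assumptions on our part, not disclosed figures:

```python
import math

# Classic Poisson yield model: Y = exp(-A * D), where A is die area in cm^2
# and D is defect density in defects per cm^2. The 0.2/cm^2 defect density
# and the ~600mm^2 GM200 die area are illustrative assumptions, not
# figures disclosed by Nvidia or TSMC.
def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Return the estimated fraction of dies that are entirely defect-free."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

D = 0.2  # assumed defect density for a mature 28nm process
print(f"GM204 (398mm^2): {poisson_yield(398, D):.0%} defect-free")            # ~45%
print(f"GM200 (~600mm^2, assumed): {poisson_yield(600, D):.0%} defect-free")  # ~30%
```

The dies that fail that test aren't wasted, of course - which is exactly where the cut-down products discussed below come in.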

Crytek's Back to Dinosaur Island - a VR showcase running on Titan X at GDC 2015. Suffice to say that a static image doesn't really do justice to the full virtual reality experience.

The obvious use for these less-than-perfect processors is to disable the CUDA cores in the defective areas, pair the chip with less GDDR5 memory and release it as a cut-down graphics card - exactly what happened with 2013's GK110 processor, where the higher-grade chips powered GTX Titan and Nvidia's compute-based Tesla products, with the rest of the yield used for the slightly less capable GTX 780. In the case of the Titan X, the real question is how long Nvidia wants to reserve GM200 technology for the high-end premium market. Our guess? That'll depend entirely on the power level of AMD's forthcoming replacements for the R9 290 and 290X - the two cards that brought prices crashing down on Titan-level rendering power at the tail-end of 2013.

AMD's next-generation high-end GPUs are almost certain to surface within the next couple of months, so it'll be fascinating to see just how well they stack up against the GTX 980 and the Titan X. The effectiveness of Nvidia's Maxwell architecture has caused real problems for AMD in the most lucrative sectors of the GPU market, and we hope to see some genuine competition from the red corner. We'll have full coverage of both AMD and Nvidia's upcoming behemoths as soon as samples become available.