
Nvidia GTX 1180/2080 specs and performance: what should we expect?

A new GPU flagship is coming - but how powerful will it be?

There's surely not long to wait now. After two years with Nvidia's 10-series GPUs based on the Pascal architecture, we're finally due an upgrade. With several sources strongly suggesting a July launch date, Nvidia's next 'GPU King' may well be just around the corner - but what specs will it have? How powerful will it be? In a market notorious for its often-accurate leaks, it's actually surprising how little we know, but there's enough information out there to at least give a broad overview of what we can expect.

First of all though, it's worth commending Nvidia on just how tightly its next-gen GPU line-up has been locked down. Even the names of the new series and its upcoming flagship remain unconfirmed. A GTX 1180 to kick off an 11-series product line-up? Or a GTX 2080 with a brand shift to a 20-series? In a world where specs often arrive months before release, that level of secrecy is remarkable. Certainly, there's been nothing like the Pascal production line leak that confirmed 2016's 10-series branding and gave us a pretty good idea of the Founders Edition industrial design weeks ahead of the official reveal.

Only one publication - Wccftech - has released tentative specs, and while there's a chance it's a genuine leak, there's enough ambiguity in there to suggest it's a series of best guesses based on downsizing Nvidia's enormous Titan V processor - the only GPU currently available built on the next-gen architecture. In this sense, the 'leak' may well be inaccurate, but it could still be in the right ballpark - after all, Nvidia has a proven formula whereby it starts with a big chip design aimed at its high-end datacentre customers, then shrinks the same architecture down for gamers, with reduced CUDA core counts and memory configurations depending on the market sector. The only problem here is that Titan V features bespoke elements that'll never be released for gamers in their current form, making guesstimates about smaller, gaming-orientated versions hard to pin down. So what can we say for sure about the next-gen architecture?

As divorced from gaming markets as it may be, there is still much to learn from Titan V, built using a new architecture dubbed Volta. However, no less a source than Reuters revealed that the new gaming cards use a 'Turing' architecture. Again, nothing is confirmed, but it's entirely logical and reasonable to believe that the two are one and the same - or rather that Turing is essentially just the GPU component of Volta, shorn of the AI/deep-learning 'tensor cores' that are part and parcel of the Titan V and other products using the GV100 processor. These elements are designed to further Nvidia's massive commercial success in non-gaming markets, but would be essentially dead silicon on a GeForce graphics card.

A video look at the facts we know about the next-gen Nvidia GPUs, along with assessments of the various specs leaks.

Something else we can take as read is that the next-gen Nvidia GPUs will be fabricated by Taiwanese chip giant TSMC on its 12nmFFN node - a production technology exclusive to Nvidia (indeed, the 'N' in FFN stands for Nvidia). As we understand it, this is actually more of a refinement of the 16nmFF process used in the current 10-series, and while we don't expect to see much reduction in transistor sizes, it should at the very least offer power efficiency advantages.

We can get a sense of the silicon budget required by looking at Titan V: it has 5120 CUDA cores vs the 3840 in the current Pascal Titan Xp - and even if we strip out GV100's non-gaming components, the die size would still be very large, significantly bigger than Xp's. So with that in mind, our best guess is that next-gen Nvidia GPUs will be larger chips than their Pascal counterparts, with the potential increase in power consumption offset to a certain extent by the advantages of the new 12nmFFN production process.
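To put some loose numbers on that: GV100 measures around 815mm2 against GP102's 471mm2. Even under a generous - and entirely made-up - assumption about how much of GV100 is given over to non-gaming silicon, the remainder still comfortably exceeds Pascal's biggest gaming chip. A quick sketch:

```python
# Rough die-area sanity check. The ~815mm2 (GV100) and ~471mm2 (GP102)
# figures are published die sizes; the fraction of GV100 spent on tensor
# cores, FP64 units and other non-gaming silicon is a made-up assumption
# purely for illustration.

GV100_MM2 = 815
GP102_MM2 = 471
NON_GAMING_FRACTION = 0.25  # assumption, not a measured figure

stripped = GV100_MM2 * (1 - NON_GAMING_FRACTION)
print(f"GV100 minus non-gaming silicon: ~{stripped:.0f}mm2 vs GP102's {GP102_MM2}mm2")
```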

We should also expect to see new memory technology in the new GPUs. Pascal pushed hard with GDDR5X, but Titan V utilises super-expensive HBM2 - and neither is a good fit for the hardware to come. Samsung and Micron have revealed that GDDR6 is in mass production now with a summer delivery date - just in time for next-gen Nvidia products. There's a huge increase in bandwidth with this new technology, meaning that a prospective GTX 1180/2080 could deliver 576GB/s (18Gbps per pin) over a mere 256-bit bus - the memory interface typically associated with Nvidia's x80 cards. That's higher than the 547GB/s in Titan Xp, and positively dwarfs GTX 1080's 320GB/s. Several flavours of GDDR6 are coming online though, so lower numbers may be possible. We're hopeful that Nvidia retains the top-end speed: 4K resolution gaming at 60 frames per second demands a lot of bandwidth.
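Those figures all fall out of the same simple formula - bus width multiplied by per-pin data rate - so they're easy to verify. A minimal sketch, with the 18Gbps GDDR6 grade for the next-gen card being the assumption:

```python
# Peak memory bandwidth falls out of a simple formula:
#   bandwidth (GB/s) = bus width (bits) x per-pin data rate (Gbps) / 8
# The 18Gbps GDDR6 grade on the prospective card is an assumption.

def bandwidth_gbs(bus_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_bits * pin_rate_gbps / 8

print(f"GTX 1080 (256-bit, 10Gbps GDDR5X):  {bandwidth_gbs(256, 10.0):.1f} GB/s")
print(f"Titan Xp (384-bit, 11.4Gbps G5X):   {bandwidth_gbs(384, 11.4):.1f} GB/s")
print(f"GTX 1180? (256-bit, 18Gbps GDDR6):  {bandwidth_gbs(256, 18.0):.1f} GB/s")
```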

Clearly though, the actual facts we have about the hardware design of the next-gen Nvidia GPUs are limited. It's a new architecture built on a refined process, and the timings of GDDR6 coming online are unlikely to be coincidental - using this technology should offer a big cost saving for Nvidia compared to using HBM2. Beyond that, there are some hints from Titan V that may inform us of what to expect from the new GPUs - specifically, a 33 per cent increase in CUDA cores vs the existing Pascal big-chip Titan, signifying that perhaps most of the basic performance increase will come from more shaders, meaning more silicon area required to house them.
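Mapping that 33 per cent increase onto the x80 line gives a feel for the scale involved. In the sketch below, the Pascal core counts and boost clocks are real; the scaled-up core count and the 1800MHz next-gen clock are pure guesswork:

```python
# Back-of-envelope scaling: apply Titan V's ~33 per cent core-count jump
# over Titan Xp to the GTX 1080, then compare theoretical FP32 throughput
# (cores x 2 FLOPs per clock x boost clock). The 1800MHz next-gen boost
# clock is an assumption, not a leak.

def tflops(cuda_cores: int, boost_clock_mhz: float) -> float:
    """Theoretical FP32 throughput in TFLOPS (one FMA = two FLOPs)."""
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

titan_xp_cores, titan_v_cores = 3840, 5120
scale = titan_v_cores / titan_xp_cores  # ~1.33x
guessed_cores = round(2560 * scale)     # GTX 1080's 2560 cores -> ~3413

print(f"guessed next-gen x80 core count: {guessed_cores}")
print(f"GTX 1080: {tflops(2560, 1733):.1f} TFLOPS")
print(f"Titan Xp: {tflops(3840, 1582):.1f} TFLOPS")
print(f"guess:    {tflops(guessed_cores, 1800):.1f} TFLOPS")
```

On those invented assumptions, the hypothetical card lands just ahead of Titan Xp - which is exactly the 'ballpark' the single-source leak describes.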

Ray-tracing is going to be a big deal for Nvidia - here's Alex Battaglia's assessment of what was revealed at GDC this year.

On top of that, benchmarks from GamersNexus (for our money, one of the best PC hardware sites out there) are worth checking out as they present compelling evidence that Nvidia has finally addressed its lacklustre support for a key DX12/Vulkan feature - asynchronous compute, the concept of occupying more of the GPU more of the time by giving dormant areas of the processor less time-sensitive tasks to carry out.
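To make the idea a little more concrete, here's a toy scheduling model: a GPU whose graphics phases rarely occupy every execution unit, with a batch of compute work either queued behind the frame or drained into whatever units each phase leaves idle. Every figure here is invented purely for illustration:

```python
# Toy model of asynchronous compute. Graphics phases are (units occupied,
# duration) pairs - shadow rendering and post-processing rarely saturate
# the whole chip. All numbers are invented for illustration only.

GPU_UNITS = 64
GRAPHICS_PHASES = [(64, 4), (24, 3), (40, 3), (16, 2)]
COMPUTE_WORK = 200  # total unit-slices of compute work to retire

def serial_time() -> float:
    # Without async compute: graphics first, then compute gets the GPU.
    gfx = sum(duration for _, duration in GRAPHICS_PHASES)
    return gfx + COMPUTE_WORK / GPU_UNITS

def async_time() -> float:
    # With async compute: work drains into each phase's idle units.
    remaining, elapsed = COMPUTE_WORK, 0
    for used, duration in GRAPHICS_PHASES:
        idle = GPU_UNITS - used
        remaining -= min(remaining, idle * duration)
        elapsed += duration
    return elapsed + remaining / GPU_UNITS  # leftovers run afterwards

print(f"serial: {serial_time():.1f} slices, async: {async_time():.1f} slices")
```

In this invented case, the same GPU finishes the combined workload around 20 per cent sooner simply by filling gaps the graphics work leaves behind - which is the whole pitch of the feature.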

A deeper focus on lower-level API support is long overdue, but if we're looking for evidence of more new features in the Volta architecture, we can look back to GDC 2018, where Nvidia revealed a new focus on what it calls RTX - hardware-accelerated ray-tracing - a very forward-looking feature. Plenty of demos were shown, the most notable being Epic's Star Wars render, running at 1080p and 24fps on four GV100 processors. However, a more realistic assessment of what the tech offers may be found in 4A Games' beautiful Metro Exodus demonstration, which uses ray-tracing for indirect lighting and ambient occlusion, with some excellent results.

Our understanding is that 4A's more strategic use of the tech ran on a single Titan V, suggesting that what we may think of as far-flung future tech could be coming to gaming PCs a lot sooner than expected. What remains is the question of how much support RTX will garner, what the performance hit is for using it, and the extent to which more mainstream GPUs - like the inevitable GTX 1160/2060 - will be able to tap into the technology.

So, as things stand, there's quite a lot about the next-gen Nvidia GPUs that we do know, but nothing concrete that answers the question of how powerful a new GTX 1180 would be - and more specifically, how it measures up against the established Pascal yardstick for performance leadership, Titan Xp. If you look at the single-source leaked spec or make your own guesstimate by mapping Titan V's boost in CUDA cores onto the x80 line - as sketched above - you would see the GTX 1180 inching ahead of Titan Xp, and potentially eclipsing it significantly in titles with extensive use of asynchronous compute. But before we move on, check out this roadmap of performance boosts between the x80 and x80 Ti GPUs from the Kepler, Maxwell and Pascal architectures. The initial impression is that Nvidia is gaining performance momentum with each generation (click play on the video to kick off the detailed analysis).

Assassin's Creed Unity: Kepler vs Maxwell vs Pascal

However, if the new architecture gains most of its performance boost from additional CUDA cores, you're more likely to get a gen-on-gen bump in line with the leap from Nvidia's Kepler to Maxwell, rather than the larger jump from Maxwell to Pascal. Look back to the release of the initial Maxwell flagship, GTX 980: its lead over the outgoing GTX 780 Ti wasn't exactly a game-changer (though continued driver support and the extra gig of RAM certainly helped a great deal in the years to follow). You can see as much from the benchmark embedded above, where GTX 980 is only 16 per cent faster than GTX 780 Ti. Meanwhile, the leap from GTX 980 Ti to GTX 1080 is in the region of 32 per cent - a remarkable achievement - while the GTX 1080 Ti's 25 per cent uptick over GTX 1080 is limited only by the resolution selected here.
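To put those two historical patterns side by side, here's the bracket they would draw around a prospective GTX 1180, starting from a purely hypothetical GTX 1080 Ti baseline:

```python
# Bracketing a prospective GTX 1180 against the outgoing x80 Ti flagship
# using the two historical patterns above. The 60fps baseline is a
# hypothetical round number purely for illustration.

baseline_fps = 60.0  # hypothetical GTX 1080 Ti result

maxwell_style = baseline_fps * 1.16  # GTX 980 vs GTX 780 Ti sized jump
pascal_style = baseline_fps * 1.32   # GTX 1080 vs GTX 980 Ti sized jump

print(f"Maxwell-style generation: {maxwell_style:.1f}fps")
print(f"Pascal-style generation:  {pascal_style:.1f}fps")
```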

Assuming we are looking at a Maxwell-style boost, even with ballpark Titan Xp+ performance from a prospective GTX 1180, the lack of AMD competition means that it will be the most powerful single-chip GPU on the market - but what if Nvidia wants to push harder? The alternative is that the firm allocates more silicon than expected to its new designs, or that we see the 12nmFFN process pushed to higher frequencies. As we saw when we compared the PS4 Pro and Xbox One X processors, relatively small but significant bumps in shader count and clocks can add up to a large increase in performance.
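That console comparison is easy to quantify: the two machines' published shader counts and clocks compound into a much bigger gap than either figure suggests on its own. A quick sketch:

```python
# The console example in numbers: Xbox One X pairs ~11 per cent more
# shaders with a ~29 per cent higher clock than PS4 Pro, compounding
# into roughly 43 per cent more theoretical throughput.

def tflops(shaders: int, clock_mhz: float) -> float:
    """Theoretical FP32 throughput: shaders x 2 FLOPs per clock x clock."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

ps4_pro = tflops(2304, 911)      # ~4.2 TFLOPS
xbox_one_x = tflops(2560, 1172)  # ~6.0 TFLOPS

uplift = (xbox_one_x / ps4_pro - 1) * 100
print(f"{ps4_pro:.1f} -> {xbox_one_x:.1f} TFLOPS (+{uplift:.0f}%)")
```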

Overall then, a smattering of facts and a fair amount of speculation pretty much sums up what we know about the next generation of Nvidia hardware. The bottom line is that we don't even know for sure what the product is actually called, or indeed when it's going to come out. On the latter point though, July/August does look likely: up until a couple of days ago, the schedule for the Hot Chips symposium, slated for mid-August, revealed that Nvidia would be talking about its next mainstream GPU architecture on the first day of the conference - something that would be unlikely to happen if the product hadn't at least been announced by that point. That said, revisiting the site today, that conference slot has been replaced by a nondescript 'TBD'. As mentioned earlier, Nvidia has been remarkably good at locking down leaks - but it shouldn't be too long until those full specs are finally revealed.
