AMD has revealed its first RDNA 3 graphics cards, the $1000 Radeon RX 7900 XTX and $900 RX 7900 XT. As the numbering scheme and price points indicate, these are high-end models designed to compete with Nvidia's RTX 4090 and 4080, but they come with more substantial changes than you might expect - including FSR 3, designed to counter Nvidia's DLSS 3 frame generation, and a brand new chiplet-based design.
Before we get into the features though, let's take a look at the cards themselves. The chiplet design breaks a traditional monolithic GPU into several interconnected sections. For RDNA 3, that's a single 5nm graphics compute die (GCD) measuring 300mm² and six 6nm memory cache dies (MCDs) of 37mm² each. This design means that only the most critical areas need to be made on a cutting-edge 5nm process, helping to improve yields and reduce costs - and ultimately consumer prices. However, it also requires a fast interconnect between the different chips, which runs at 5.3TB/s here. The same chiplet approach worked brilliantly with Ryzen, transforming AMD from an also-ran into a growing giant, so it'll be fascinating to see if it works similar miracles in the GPU space too.
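To put those die sizes in perspective, here's a quick back-of-envelope calculation (using only the figures above) showing how little of the total silicon actually needs the expensive 5nm process:

```python
# Back-of-envelope: how much of the RDNA 3 package needs 5nm silicon?
gcd_area = 300   # graphics compute die, mm², 5nm
mcd_area = 37    # each memory cache die, mm², 6nm
num_mcds = 6

total_area = gcd_area + num_mcds * mcd_area
fraction_5nm = gcd_area / total_area

print(f"Total die area: {total_area} mm^2")   # 522 mm^2
print(f"Share on 5nm:  {fraction_5nm:.0%}")   # 57%
```

In other words, well over 40 percent of the package can stay on the cheaper, more mature 6nm node - that's the economic argument for chiplets in a nutshell.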
Each of the dies is impressive in its own right. Each memory cache die contains a 64-bit memory controller and a slice of second-gen Infinity Cache, which combine to provide 2.7 times the peak bandwidth of RDNA 2 designs. Meanwhile, the graphics compute die offers unified RDNA 3 compute units with hardware for stream processing, AI acceleration and RT. The design also decouples shader and front-end clock speeds, at 2.3GHz for the former and 2.5GHz for the latter, which AMD says results in a more efficient design - with up to 25 percent power savings on the shader side and a 15 percent higher front-end clock frequency.
RT has long been an AMD bugbear, so its second-gen solution - which supports '1.5x more rays in flight', 'new dedicated instructions' and 'new ray box sorting and traversal' - should result in up to 50 percent more performance per compute unit. However, it doesn't look like AMD is accelerating some parts of the RT pipeline that Nvidia is, so we may not see as big a jump in RT performance relative to rasterised performance as you might otherwise expect.
The display engine built into these cards is pretty nutty, supporting DisplayPort 2.1 and up to 54Gbps of display link bandwidth, allowing for 8K 165Hz (!) or 4K 480Hz (!!) with 12-bit colour. Suffice it to say, we're some distance away from these sorts of displays, but it's an effective rebuttal of Nvidia's 40-series cards which are limited to DisplayPort 1.4. A dual media engine, meanwhile, should shore up AMD's weak reputation for streaming and media encoding, with AV1 encode/decode support, simultaneous encode/decode for AVC/HEVC and 'AI Enhanced Video Encode' which I look forward to hearing more about.
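It's worth noting that those headline refresh rates lean on Display Stream Compression - a quick sanity check on the numbers makes this clear. The calculation below deliberately ignores blanking intervals and link overhead, so it's a rough lower bound rather than a precise figure:

```python
# Rough check: can a 54Gbps link carry 4K 480Hz at 12-bit colour uncompressed?
# (Ignores blanking intervals and link overhead -- a deliberate simplification.)
width, height = 3840, 2160
refresh_hz = 480
bits_per_pixel = 3 * 12   # RGB, 12 bits per channel

needed_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
link_gbps = 54.0

print(f"Uncompressed: {needed_gbps:.1f} Gbps vs {link_gbps} Gbps link")
print(f"Compression needed: ~{needed_gbps / link_gbps:.1f}:1 (hence DSC)")
```

An uncompressed 4K 480Hz 12-bit signal needs roughly 143Gbps, so even DisplayPort 2.1's 54Gbps relies on compression of around 2.7:1 to hit that mode.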
| Model | CUs | Game clock | VRAM | Mem. bus | Board power | Launch MSRP |
| --- | --- | --- | --- | --- | --- | --- |
| RX 7900 XTX | 96 | 2.3GHz | 24GB | 384-bit | 355W | $999 |
| RX 7900 XT | 84 | 2.0GHz | 20GB | 320-bit | 300W | $899 |
| RX 6950 XT | 80 | 2.1GHz | 16GB | 256-bit | 335W | $1299 |
| RX 6900 XT | 80 | 2.0GHz | 16GB | 256-bit | 300W | $999 |
| RX 6800 XT | 72 | 2.0GHz | 16GB | 256-bit | 300W | $649 |
In terms of the cards themselves, there's a substantial gulf between the 7900 XTX and 7900 XT. VRAM is the most obvious difference - 24GB on the XTX versus 20GB on the XT, with a correspondingly narrower memory bus (384-bit vs 320-bit). The rated 'game clock' also drops from 2.3GHz to 2GHz, while the cut from 96 to 84 compute units is significant too. However, both cards sip power compared to the likes of the RTX 4090, with total board power rated at 355W for the XTX and 300W for the XT. Both cards support DisplayPort 2.1 and AV1 encode/decode.
In terms of expected performance, AMD provided frame-rate data for the RX 7900 XTX, but only against the RX 6950 XT. Here, AMD recorded a performance improvement of 50 to 70 percent for the new-gen card: 1.5x in COD: MW2, Watch Dogs Legion, Resident Evil Village (RT) and Metro Exodus (RT), 1.6x in Doom Eternal (RT), and 1.7x in Cyberpunk 2077. I expected to see more performance data than this - comparisons to Nvidia's cards, perchance? - but as always we'll need to wait until the cards reach reviewers to see how well these GPUs perform in real-world testing.
Finally, AMD announced FSR 3, promising up to double the frame-rate of FSR 2. Based on this wording - and the presentation endnote that references 'Fluid Motion Frames' - it seems a pretty safe bet that this is frame generation a la DLSS 3. This technology slightly increases input latency, but improves visual fluidity substantially as AI-generated frames are inserted between 'real' ones. As it doesn't appear that RDNA 3 hardware is used for this, the tech could be made available for older AMD GPUs as well - or even Nvidia/Intel models. That would give it a unique advantage against DLSS 3, which is exclusive to the expensive RTX 4090 (and its forthcoming 40-series counterparts) at present. FSR 3's release date was given as '2023', so presumably we'll see much more information about it over the next few months.
So - those were AMD's RDNA 3 announcements! It'll be fascinating to see how the new hardware stacks up as we approach the December 13th release date, as with a whole new architecture there's room for significant performance improvements - and the potential for some interesting edge cases as well.
AMD is certainly making the right noises to appeal to those turned off by Nvidia - substantially higher frame-rates, future-looking display standards, reasonable power targets and no 16-pin power connectors - but the performance and features will need to be in place too.
What did you make of the announcements? Let us know in the comments below.