When 11GB isn't enough —

AMD Radeon VII: A 7nm-long step in the right direction, but is that enough?

Sadly for AMD, the story doesn't end at "bigger specs, better card than Nvidia."

Specs at a glance: AMD Radeon VII
STREAM PROCESSORS 3,840
TEXTURE UNITS 240
ROPS 64
CORE CLOCK 1,400MHz
BOOST CLOCK 1,800MHz
MEMORY BUS WIDTH 4,096-bit
MEMORY BANDWIDTH 1,024GB/s
MEMORY SIZE 16GB HBM2
OUTPUTS 3x DisplayPort 1.4, 1x HDMI 2.0b
RELEASE DATE February 7, 2019
PRICE $699 directly from AMD

In the world of computer graphics cards, AMD has been behind its only rival, Nvidia, for as long as we can remember. But a confluence of recent events finally left AMD with a sizable opportunity in the market.

Having established a serious lead with its 2016 and 2017 GTX graphics cards, Nvidia tried something completely different last year. Its RTX line of cards arrived with near-equivalent power to the prior generation at the same prices (along with a new, staggering $1,200 card in its "consumer" line). The catch was that these cards' new, proprietary cores were supposed to enable a few killer perks in higher-end graphics rendering. But that big bet faltered, largely because only one truly RTX-compatible retail game currently exists—and Nvidia took the unusual step of warning investors about that fact.

Meanwhile, AMD finally pulled off a holy-grail number for its graphics cards: 7nm. That smaller fabrication process packs more components into the same silicon area, freeing room for other hardware and features (the Radeon VII's HBM2 RAM shares package space with the GPU itself). In the case of this week's AMD Radeon VII—which goes on sale today, February 7, for $699—that extra space is dedicated to a whopping 16GB of VRAM, well above the 11GB maximum of any consumer-grade Nvidia product. AMD also insists that its memory bandwidth has been streamlined to make that VII-specific perk valuable for any 3D application.

No proprietary nonsense there. An emphasis on more straight-up speed and power is better for everyone in the PC gaming space, right?

Not if Nvidia's own cards, full of proprietary perks, still manage to meet or exceed the Radeon VII at roughly the same price point. And that's where we are today. The AMD Radeon VII's architecture and design priorities offer an interesting peek at how the battle against Nvidia could heat up in the near future. For now, AMD has to settle for merely clawing its way back to within striking distance of its current $700 GPU rival.

Quick, to the benchmarks!

There's a lot to get excited about when reading the Radeon VII's stat sheet: its maximum boost clock of 1,800MHz, its 1TB/sec memory bandwidth, and its 16GB of HBM2 memory, the second generation of the High Bandwidth Memory standard AMD helped develop (and has previously described as a more power-efficient take on GDDR5). Those three stat points soundly surpass 2017's AMD RX Vega 64 card, not to mention anything Nvidia markets—though the RTX line favors GDDR6 memory.
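That headline bandwidth number is simple arithmetic on the specs above. As a quick sanity check, here's the math, assuming an effective HBM2 per-pin data rate of 2.0Gbps (our assumption based on typical HBM2 clocks; AMD only advertises the total):

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.
# Assumes the Radeon VII's HBM2 runs at an effective 2.0Gbps per pin
# (1,000MHz, double data rate) -- an assumption, not an AMD-published figure.
bus_width_bits = 4096   # four HBM2 stacks, 1,024 bits each
data_rate_gbps = 2.0    # assumed effective per-pin rate

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
print(f"{bandwidth_gbs:.0f} GB/s")  # -> 1024 GB/s, i.e. the advertised 1TB/sec
```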

In fact, the Radeon VII also beats the comparably priced RTX 2080 in pretty much every stat category, including stream processors (what Nvidia likes to call CUDA cores) and texture units. If this review were nothing more than a contest of numbers, AMD would come out on top. So let's cut to the chase:

Our gamut of game benchmarks includes a few lengthy runs through modern game engines, chosen because they make graphics cards sweat with alpha transparencies, sweeping views, and dense shadow maps. By and large, the Radeon VII—using a set of pre-release drivers provided to the press—merely nudges up against the comparably priced RTX 2080. (Our comparison card is a $699-MSRP model with an Nvidia-approved factory overclock; it's not necessarily the "best" RTX 2080 on the market, and Nvidia also sells a $799 "Founders Edition.")

What gives? AMD hasn't offered an in-depth explanation of how we should expect existing games to tap into the Radeon VII's impressive stats. Instead, the company offers a bevy of forward-thinking statements. For one, it points to the year-over-year growth in games' "recommended" VRAM footprints. The problem with this logic is that it assumes PC game makers are developing games whose content scales most notably with available VRAM—as opposed to games whose settings shrink and grow with clock speeds and SPs/CUDA cores in mind.

The current crop of PC games is scaling up from consoles with a respectable amount of VRAM—though both Xbox One and PS4 draw standard RAM and VRAM from a single shared memory pool. Which is to say, a "pro" console-equivalent video card can operate in the 2-4GB range of VRAM. AMD is clearly in a position to know what's coming from whatever we're calling the PlayStation 5 and Xbox One Point Two, so it's fair to assume that today's "ridiculous" amount of VRAM will probably seem quaint tomorrow.

For now, PC games aren't truly tapping into that aspect of the VII. They will, however, tap into its higher power demand. Get ready for a whopping 300W power draw, which edges just past the 295W TDP of the card's Vega 64 predecessor and well past the 215W draw of the RTX 2080.

AMD also points to a very important metric that gets lost when we run a benchmark, walk away, and jot down "average" numbers: frame-time spikes. A relatively high frames-per-second average is worthless when a graphics card's performance dips and chugs at random spots, and AMD insists that its 16GB framebuffer can make these annoyances vanish in current-gen PC games. But in my frequent watching and rewatching of identical benchmarks (and in my testing of general gameplay), I didn't see 4K game demos reach a consistently smoother rate when pushing the kinds of resolutions and settings a gamer might expect from a $699 GPU. If anything, I noticed the Radeon VII stuttering far more than an RTX 2080 at equivalent settings, especially while testing the brand-new Respawn shooter Apex Legends.
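AMD has a point about the metric itself, at least. Here's a toy illustration (the frame-time numbers are invented for the sake of the example, not measured data) of how a run with a higher average frame rate can still feel worse than a steady one once you look at frame times:

```python
# Toy illustration: a better fps average can hide much worse frame-time behavior.
# All numbers below are invented for illustration; they are not measured GPU data.

def summarize(frame_times_ms):
    avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
    worst_ms = max(frame_times_ms)
    # "1% low"-style metric: average of the slowest 1% of frames, a common stutter indicator
    slowest = sorted(frame_times_ms)[int(len(frame_times_ms) * 0.99):]
    one_pct_low_fps = 1000 / (sum(slowest) / len(slowest))
    return avg_fps, worst_ms, one_pct_low_fps

smooth = [16.7] * 1000                 # a steady ~60fps run
spiky  = [15.0] * 990 + [50.0] * 10    # higher average fps, but with periodic stutters

for name, run in (("smooth", smooth), ("spiky", spiky)):
    avg, worst, low = summarize(run)
    print(f"{name}: avg {avg:.1f}fps, worst frame {worst:.1f}ms, 1% low {low:.1f}fps")
# The "spiky" run wins on average fps (~65 vs. ~60) yet bottoms out at 20fps
# in its worst 1% of frames -- exactly the chug an average alone never shows.
```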

Context for AMD's provided numbers

Additionally, AMD served us its own revealing 4K gaming benchmarks on a platter. In a chart sent to press, the Radeon VII apparently enjoys significant gains versus the likes of AMD's Vega 64 and Nvidia's GTX 1080—not surprising, considering the age and specs of those cards. Yet that same chart also includes test results compared to the equivalently priced RTX 2080. Many of the tested games do not include their own bespoke benchmarks, so it's unclear exactly what kinds of gameplay sequences were compared to yield these results. But AMD didn't hide behind Radeon-friendly results when comparing against its price rival.

Here are some significant comparisons from AMD's officially distributed list, which traffics entirely in average frame rates:

Game Preset API Radeon VII RTX 2080 Margin
Call of Duty Black Ops 4 “very high” DX11 82.3fps 81.8fps (+0.5fps)
Deus Ex: Mankind Divided “high” DX12 53.2fps 50.8fps (+2.4fps)
Doom 2016 “ultra” Vulkan 91.6fps 91.5fps (+0.1fps)
Monster Hunter World “high” DX11 52fps 52.6fps (-0.6fps)
Resident Evil 2 remake “max” DX11 52.9fps 52.5fps (+0.4fps)
Star Control Origins “great” DX11 88.3fps 72.5fps (+15.8fps)
The Witcher 3 “ultra” DX11 55.2fps 59.9fps (-4.7fps)
Wolfenstein II: The New Colossus “mein leben” Vulkan 96.7fps 110.3fps (-13.6fps)

If you're wondering: Far Cry 5 was the only game whose frame-time performance AMD plotted out, and AMD's assurance that the Radeon VII was smoother than the RTX 2080 wasn't definitively borne out by my own tests (shown in the earlier benchmark gallery).

The above chart reflects AMD's 20-game list. The difference margin on that list typically falls in the 2 to 5 percent range, more often in favor of the RTX 2080. In some cases, AMD claims that switching to the DirectX 12 API yields gains against Nvidia's card, as with Battlefield V. But in other cases, like the last two Tomb Raider PC games, even a switch to DX12 fails to give the Radeon VII a leg up.
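For a sense of scale, converting the raw fps deltas from the chart excerpt above into percentages shows just how close most of these results are, and how much of an outlier the two big swings are:

```python
# Percentage margins computed from AMD's chart above:
# (Radeon VII fps - RTX 2080 fps) / RTX 2080 fps.
results = {
    "Call of Duty: Black Ops 4": (82.3, 81.8),
    "Deus Ex: Mankind Divided":  (53.2, 50.8),
    "Doom (2016)":               (91.6, 91.5),
    "Monster Hunter World":      (52.0, 52.6),
    "Resident Evil 2 remake":    (52.9, 52.5),
    "Star Control: Origins":     (88.3, 72.5),
    "The Witcher 3":             (55.2, 59.9),
    "Wolfenstein II":            (96.7, 110.3),
}

for game, (radeon, rtx) in results.items():
    margin = (radeon - rtx) / rtx * 100
    print(f"{game}: {margin:+.1f}%")
# Most results land within a few percent either way; Star Control: Origins (+21.8%)
# and Wolfenstein II (-12.3%) are the outliers.
```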

And in general gameplay tests for Battlefield V and Deus Ex: Mankind Divided in DX11 mode, Nvidia's RTX 2080 makes up the difference, equaling or surpassing the best the Radeon VII can muster. My run of the DE:MD benchmark, in particular, shows that a stock RTX 2080 in DX11 mode exceeds the frame rate and stability of the Radeon VII using either API. (In that game's case, the Radeon VII gets a 2.5fps boost by opting for DX12.)

AMD's admission of the wide performance gap in Wolfenstein II: The New Colossus made me do a double-take and install the game to double-check, particularly because it uses the Vulkan API instead of Direct3D. In the past, Vulkan games temporarily favored AMD, whose shader drivers let the software and GPU communicate more directly with each other, but Nvidia has since closed that gap.

With my test rig set to 4K resolution and nearly all settings maxed out (minus a slight downgrade from the maximum anti-aliasing option), the results were pretty clear: the factory-overclocked RTX 2080 absolutely trounced the Radeon VII. AMD's card clearly struggled with explosions and alpha particle effects, dropping frame rates into the 50s and even high 40s when these revved up in one indoor firefight. Loading the same sequence and playing through it on the Nvidia card rarely produced an equivalent deviation from its 60fps target.

Star Control: Origins has enjoyed AMD-specific updates and patches since its launch, so its appearance as a feather in AMD's cap isn't surprising. However, this late-2018 game isn't all that handsome and doesn't really push either video card to serious geometry limits; both turn in 80fps-and-up averages at "great" settings. But again, a roughly 20-percent gain for AMD in any game is good news for this card.

Still, AMD's list doesn't include any overclocks for the RTX 2080, which—as I already wrote last year—is painfully easy to overclock safely. EVGA's Precision X1 utility has since been patched up to make that process a stable cinch, eliciting a no-brainer 2-3fps bonus for just about any game you throw at it. The result is a list that should include far fewer "AMD wins" in its 20 chosen games.

Another GPU-future raffle?

Test system specifications
OS Windows 10
CPU Intel Core i5-6600K, OC'ed @ 4.1GHz
RAM 16GB G.Skill DDR4 @ 2,400MHz
STORAGE 1TB WD Black SSD
Motherboard MSI Z-170A
Power Supply EVGA Supernova 850-G2
Cooling Corsair liquid cooler
Monitor LG B6 4K/60Hz

As we're primarily a game-testing site, we didn't have a workflow ready to robustly test AMD's other claim: that the Radeon VII is a great choice for video encoding, rendering, and effects work.

AMD once again provided a series of its own charts for these kinds of workflows, which we won't post at length because we can't offer nearly as much insight there as we can about the gaming performance figures. One notable takeaway from AMD's results: Adobe Premiere's ability to encode video files with custom effects is more or less identical on the RTX 2080 and the Radeon VII. Meanwhile, AMD claims a substantial lead over the RTX 2080 in Luxmark's brute-force ray-tracing benchmark—a reminder that, when the RTX line's dedicated ray-tracing hardware isn't in play, that particularly intensive visual trick absolutely stands to benefit from the Radeon VII's beastly memory specs.

Ultimately, AMD didn't offer any rendering test numbers that swayed us to build a separate testing suite. If AMD had promised the rendering moon with the Radeon VII, we might have felt more pressure to confirm its claims.

The question, really, is this: which graphics card raffle ticket would you rather buy with your $700 (or more)? The most promising RTX perk is access to Nvidia's DLSS. This anti-aliasing solution lets Nvidia's newer cards render a game that already offers TAA at roughly 1440p, then "intelligently" upscale it to 4K with results that look darned good—better than PS4 checkerboarding, for sure. In some cases, the results look nearly as good as a pure 4K signal. But no major retail game has rolled this feature out yet, which makes us wonder whether Nvidia is still working out kinks (particularly artifacts left by failed upscaling).
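The appeal is easy to see with napkin math. Assuming an internal render resolution of 2560x1440 (our assumption; the exact figure varies per game), DLSS shades well under half the pixels of a native 4K frame:

```python
# Back-of-napkin math on DLSS's appeal: rendering at ~1440p and upscaling to 4K
# means shading far fewer pixels per frame than native 4K.
# (2560x1440 as the internal resolution is an assumption; it varies per game.)
internal = 2560 * 1440     # 3,686,400 pixels
native_4k = 3840 * 2160    # 8,294,400 pixels

print(f"DLSS shades {internal / native_4k:.0%} of a native 4K frame's pixels")
# -> 44%, which is why the upscale can buy back so much frame rate
```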

Otherwise, RTX has a few proprietary boosts as possible options in future games, but its dedicated ray-tracing hardware has so far proven pretty draining to a game's frame rates. Meanwhile, its GPU-fueled geometry-culling features only exist in spacey graphics demos, not real games. The latter would require some serious Nvidia-specific tuning of major in-game 3D geometry, so I don't buy its shot at major adoption. The former, I gotta say, looks so good in Battlefield V that I always toggle it on, frame rates be damned. But when the heck are we going to see a second major game take Nvidia's ray tracing for a spin?

So, that's one raffle ticket. The other, really, is the expectation that hugely boosted VRAM specs will pay off in future games. In many ways, this is the safer bet, because it invites developers of all types to jump into a massive, general-use pool. But handsome games with huge texture options already exist, and there's no telling what the holdup is for any of them to capitalize on the Radeon VII. AMD has famously left developers stuck waiting for driver updates to access its cards' best bits. It will take a serious publicity drive, aimed at game developers, to kick off AMD's dream revolution of a bigger-VRAM world.

As juicy as AMD's offer may sound, the safest move is to wait roughly a year before buying anything. That's the minimum amount of time before next-gen consoles begin truly expanding the average game's expectations of VRAM. By then, the Radeon VII—or its cheaper, slower successors—could be exactly what the GPU marketplace needs, now that Bitcoin mining is dying off and gaming performance is becoming a valid sales driver again. Until then, we're just gonna sit here with our Radeon VII, wondering what we can do with all this danged VRAM.
