MSI GeForce RTX 3080 Gaming X Trio 10G Review

By a nose, MSI’s GeForce RTX 3080 Gaming X Trio 10G is the fastest third-party version of the RTX 3080 we’ve seen so far, but its titanic body and excessive (triple!) PSU connector requirements demand some extra attention from shoppers.

PCMag editors select and review products independently. If you buy through affiliate links, we may earn commissions, which help support our testing.

MSI GeForce RTX 3080 Gaming X Trio 10G Specs

Name Value
Graphics Processor Nvidia Ampere GA102
GPU Base Clock 1440 MHz
GPU Boost Clock 1815 MHz
Graphics Memory Type GDDR6X
Graphics Memory Amount 10 GB
HDMI Outputs 1
DisplayPort Outputs 3
VirtualLink Outputs No
Number of Fans 3
Card Width triple
Card Length 12.3 inches
Board Power or TDP 340 watts
Power Connector(s) 3 x 8-pin

It’s been a wild few weeks in the world of graphics cards. The GeForce RTX 3080 debuted to wide acclaim; it promptly sold out almost everywhere; and it saw some drama around stability and performance. With the release of the $759 MSI GeForce RTX 3080 Gaming X Trio 10G, the hits (mostly) keep on coming. This is the most extreme of MSI’s several RTX 3080 designs, and given the Gaming X Trio lineage with previous high-end GPUs, the expected superlatives pile on: The card is huge, it’s brawny, and it packs a hell of a lot of power under its gigantic shroud. Its value proposition may not look as crystal-clear next to less-expensive (and arguably more elegant) options like the seminal Nvidia GeForce RTX 3080 Founders Edition, but it’s an irrepressible beast. And the story of this card is one common to every RTX 3080 we’ve tested thus far: If you simply have to be an early adopter, the RTX 3080 to buy is the one you can actually find.

Specs Compared: MSI GeForce RTX 3080 Gaming X Trio 10G vs. the World

To start, it’s only fair that we compare the MSI GeForce RTX 3080 Gaming X Trio 10G against Nvidia’s Founders Edition RTX 3080, as well as the "Turing" cards it succeeds in the GeForce RTX 20 Series, the RTX 2080 and the RTX 2080 Super.

On the specifications front, MSI hasn’t made a whole heap of changes from the Founders Edition of the RTX 3080, instead opting just to raise the boost clock from 1,710MHz to 1,815MHz in the Gaming X Trio. We’d imagine this is just the first of many variants, though, with our first hint being the "10G" at the end of the name, which denotes the 10GB of onboard VRAM. It wouldn’t be out of left field for MSI to launch cards that carry even more onboard VRAM at some point.

The Gaming X Trio has a bit of OEM flair on its shroud: a single, customizable LED strip on the front-facing side that runs almost the entire length of the card. And speaking of length, this card has plenty to spare, at a whopping 12.3 inches from front to back. If you were planning on installing it in anything smaller than an ATX-size case, you’ll either need to upgrade or get real creative with how things are organized inside your build. To gauge the space you need, don’t reach for a ruler; picture an XXL Philadelphia hoagie.

Similar Products

AMD Radeon RX 5700 XT

Nvidia GeForce RTX 2070 Super

Nvidia GeForce RTX 3080 Founders Edition

Asus TUF Gaming GeForce RTX 3080 OC

MSI GeForce RTX 2070 Armor

Nvidia GeForce RTX 2080 Super

Nvidia GeForce RTX 2080 Ti Founders Edition

AMD Radeon VII

Sapphire Pulse Radeon RX 5500 XT

Now, for the tricky part of discussing this and other third-party RTX 3080 cards. A minor kerfuffle arose in the first weeks after the RTX 3080’s release, with reports of some third-party cards crashing in certain games when the GPU was engaged in boost. An Nvidia driver pushed shortly thereafter seems to have cleared up the issue.

For those who missed that story as it developed: What looked like it might be a major issue died down with the driver release. Conjecture had it (based on a post from EVGA detailing its history with one of its FTW-family card designs, which was held back from release) that the issues were down to a certain kind of on-card capacitor known as a "POSCAP."

Without getting too deep into that fast-rising (and fast-deflating) story, rumor had it that certain RTX 3080 cards using a combination of POSCAPs and MLCC capacitors, or solely POSCAPs, had stability issues that needed to be addressed by the driver. Nvidia issued a statement noting that any given mix of capacitor types was not an indication of quality (or lack thereof), and that the OEMs collaborated closely with Nvidia on the designs. Indeed, while some vendors have stated that they used all-MLCC designs in certain of their cards, the driver fix seems to have calmed the waters. Outside the realm of computer-engineering expertise and very specialized equipment, it’s impossible to pinpoint any firm cause and effect in any of this. We’ll discuss the impact of those drivers in the performance section below, but we can say they do keep things stable.

Continuing on the discussion of the design of the card, eagle-eyed shoppers are probably looking at the picture below with a bit of trepidation.

...as they perhaps should. No, you’re not seeing triple. The MSI GeForce RTX 3080 Gaming X Trio 10G requires a staggering three eight-pin connectors to get all the juice it needs (it’s rated for a 340-watt TDP). Oddly, that’s the same TDP as the Asus TUF Gaming GeForce RTX 3080 OC, which makes do with just two eight-pin connectors.
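For context on why a third plug might be there at all, here’s a rough power-budget sketch (our own illustration, not MSI’s stated rationale), using the PCI Express spec’s nominal limits of 75 watts from the x16 slot and 150 watts per eight-pin connector:

```python
# Rough GPU power-budget sketch (illustrative only, not MSI's engineering
# rationale). Nominal limits per the PCI Express spec: 75 W from the x16
# slot and 150 W per eight-pin auxiliary connector.

SLOT_WATTS = 75
EIGHT_PIN_WATTS = 150

def power_budget(num_eight_pin: int) -> int:
    """Total nominal wattage available to the card."""
    return SLOT_WATTS + num_eight_pin * EIGHT_PIN_WATTS

def headroom(tdp_watts: int, num_eight_pin: int) -> int:
    """Nominal slack above the card's rated TDP."""
    return power_budget(num_eight_pin) - tdp_watts

# Two connectors cover the 340 W TDP with 35 W of nominal slack;
# a third raises that slack to 185 W for transient spikes and overclocks.
print(headroom(340, 2))  # 35
print(headroom(340, 3))  # 185
```

By this nominal math, two eight-pin plugs suffice (as Asus’s TUF design shows); the third connector simply buys extra slack.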

Unlike that card, though, the MSI GeForce RTX 3080 Gaming X Trio 10G features the same port layout as the RTX 3080 Founders Edition: three DisplayPort 1.4b ports and just one HDMI 2.1 output. (The Asus TUF card features a second HDMI 2.1 port.) One port from last-generation "Turing" days is conspicuously absent: the USB-C-lookalike VirtualLink. Nvidia has ditched the port due to lack of adoption, and its board partners look to be following suit.

Let’s Get Testing! Time to Play the Gaming X Trio

So, back to the card at hand. PC Labs ran the MSI GeForce RTX 3080 Gaming X Trio 10G through a series of DirectX 11- and 12-based synthetic and real-world benchmarks. Our spanking-new PC Labs test rig is Intel-based and employs a PCI Express 3.0, not 4.0, motherboard. It’s equipped with an Intel Core i9-10900K processor, 16GB of G.Skill DDR4 memory, a solid-state boot drive, and an Asus ROG Maximus XII Hero (Wi-Fi) motherboard. All cards below were retested on this rig. Given our tests with the Core i9-10900K and recent Ryzen 9 CPUs, this rig is the best reasonable configuration of the moment in 2020 to cut the CPU out of the frame-rate equation. (Read more about how we test graphics cards.)

For our testing, we focused some of the effort on the esports aspect of the Nvidia GeForce RTX 3080 with games like Counter-Strike: Global Offensive (CS:GO) and Rainbow Six: Siege. We also ran the card through the rest of our new standard benchmark regimen, which tests a card’s abilities to handle AAA games at the highest possible quality settings, as well as how it rides during synthetic benchmarks that stress the card in a variety of ways.

Also remember that almost every test we run (aside from the esports titles) is done at the highest possible quality preset or settings. If you have a high-refresh gaming monitor and you’re worried your card might not make the frame-rate grade, it could still get there with the right card and a combination of lower settings. Not only that, but many of these titles (including Death Stranding, Shadow of the Tomb Raider, and F1 2020) have DLSS or FidelityFX CAS with Upscaling integrated directly into the game, which can mean boosts of up to 40 percent more performance on top, depending on the setting and the card you’re playing with.

And so, onward to our test results. Note: If you want to narrow our results below to a specific resolution (say, that of the monitor you plan to game on), click the other two resolution dots in the chart legends to suppress them and see a single set of results. Our new benchmark list includes a mix of recent AAA titles like Red Dead Redemption 2 and F1 2020, as well as some older-but-still-reliable pillars of the benchmarker’s toolkit, like Shadow of the Tomb Raider and Far Cry 5.

Testing Results: Synthetic Benchmarks

Synthetic benchmarks can be good predictors of real-world gaming performance. UL’s circa-2013 Fire Strike Ultra is still a go-to as an approximation of the load levied by mainstream 4K gaming. We’re looking only at the test’s Graphics Subscore, not the Overall Score, to isolate card performance. Meanwhile, we also ran 3DMark’s Time Spy Extreme test, a good gauge of how well a card will do specifically in DirectX 12 games at 4K resolution. Finally, there’s 3DMark Port Royal, which is strictly a test for RTX cards right now, measuring how well they handle ray-tracing tasks (hence the blank results for the AMD cards on that one). Also here are a handful of GPU-acceleration tests (FurMark, V-Ray, LuxMark); more details on those at the "how we test" link.

Right off the line, the MSI GeForce RTX 3080 Gaming X Trio 10G tells a story: one where it’s almost always the winner versus the other third-party RTX 3080 we have tested (the Asus TUF). But while it’s clearly the faster of the third-party RTX 3080 cards we’ve tested thus far (and to be fair to Asus, that number now stands at a whopping two), it doesn’t consistently beat the Nvidia GeForce RTX 3080 Founders Edition, instead trading wins or lying in a virtual margin-of-error tie with it, depending on the test. (Of note, the 3DMark tests show a very slight loss to the Founders card.) None of this is by a perceptible enough amount that you’d be able to tell with the naked eye, mind you; it matters only in bench-chart land.

Testing Results: Recent AAA and Multiplayer Games

Now on to the real-world stuff. The following benchmarks are games that you can play. For the AAA games, we typically used the highest in-game preset and, if available, DirectX 12. The multiplayer-focused and esports titles (such as CS:GO and Rainbow Six) were set slightly below top detail settings to maximize frame rates.

Here the story remained much the same. The MSI GeForce RTX 3080 Gaming X Trio 10G is a consistently faster third-party performer than the Asus TUF card, but it traded blows with the Founders Edition RTX 3080, sometimes winning by a few frames, sometimes tying, sometimes a couple behind. Again: Nothing you’d notice outside the granularity of bench charts like these.

Now, here’s the rub: We got in some early testing on this card with the original 456.55 driver that Nvidia pushed when these cards were first released. On average, between Assassin’s Creed Odyssey and Far Cry 5 (the only two tests we completed before stability issues forced us to back off until the next driver version arrived), the MSI GeForce RTX 3080 Gaming X Trio 10G was running a few frames faster at each resolution. Not enough to make a substantial difference in your general play experience, but enough to likely narrow some gaps.

Again, the new driver smoothed out the experience, and we mustn’t make a mountain out of these differences. Even with the new driver, any RTX 3080 level of performance we have seen so far, whether from the Founders card, this MSI, or the Asus TUF, is still revolutionary compared with earlier Nvidia Turing cards, or really, earlier anything.

Testing Results: How About Some Legacy AAA Titles?

We also ran some quick tests on some oldies-but-goodies that still offer the AAA gaming experience. These legacy tests include runs of Hitman: Absolution, Tomb Raider (2013), and Bioshock: Infinite, the last being a game that has no business still being as well optimized as it is here in 2020.

Though it’s rarely a surprise to see a modern card do so well on games as old as these, we never get tired of seeing those numbers climb higher every year. On all but Hitman, as you can see, you can do some serious high-refresh play at 1440p. And you’ll get 60fps at 4K, easy.

Overclocking and Thermals: The Beast Hits the Ceiling

We ran a 10-minute stress test in 3DMark Port Royal on the MSI GeForce RTX 3080 Gaming X Trio 10G, and the card peaked at a temperature of 78 degrees C. That’s just one degree hotter than the ultra-engineered RTX 3080 Founders Edition hit in the same test, which goes to show that the Nvidia card’s advanced cooling system can be matched by brute-force solutions like the MSI’s three fans when Port Royal pushes the card as hot as it will go.

When it came time to overclock the card using EVGA’s Precision X1 utility, though, there was little joy in MSI-ville. Normally, our overclocking procedure is pretty straightforward: Start by boosting the core clock by 100MHz, boost the memory clock by 150MHz, tweak a little voltage here and a little temperature limit there, and usually, boom, we’re cooking with gas. If that doesn’t work, we scale back and take smaller steps. But the MSI GeForce RTX 3080 Gaming X Trio 10G was having none of it.

Despite our best efforts and the softest of touches, the card would not accept any overclocking profile that measurably improved real-world performance; any gains we saw could just as easily be attributed to the 1 or 2 percent margin-of-error sway you typically see from bench run to bench run. Bear in mind, of course, that the card already comes aggressively overclocked out of the box.
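To illustrate the reasoning, here’s a minimal sketch (our own illustration, not part of any benchmarking tool) of the test we’re applying: an overclock only counts if its average gain clears the run-to-run noise.

```python
# Minimal sketch of the margin-of-error test described above (our own
# illustration, not part of any benchmark tool): an overclock "counts"
# only if its gain exceeds typical run-to-run sway.

RUN_TO_RUN_NOISE = 0.02  # assume the worst of the typical 1-2 percent sway

def is_real_gain(stock_fps: float, oc_fps: float,
                 noise: float = RUN_TO_RUN_NOISE) -> bool:
    """True only if the overclocked result beats stock by more than noise."""
    return (oc_fps - stock_fps) / stock_fps > noise

print(is_real_gain(100.0, 101.5))  # False: within normal bench-to-bench sway
print(is_real_gain(100.0, 107.0))  # True: clearly above the noise floor
```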

One possibility comes down to the driver that addressed the stability issues: Whatever Nvidia had to do in its new drivers to keep the cards stable may well be preventing an overclock profile from pushing the card any further than is safe. To be clear, we could easily achieve an unstable overclock that returned higher results in isolated tests, but the likelihood of the card enduring any extended 4K-resolution benchmark fell off quickly as soon as even a 2 percent boost in core and memory clocks was applied.

That said: Reality-check time. The card is fast enough on its own, and accelerated out of the box (that’s what the extra cost is for), that the lack of overclocking headroom shouldn’t be a deal breaker. But it’s still something to keep in mind if you were hoping to buy the Gaming X Trio and nudge it just enough to definitively beat the Founders Edition. Tweaking may not deliver much joy, but what you get out of the box should.

Engage Beast Mode! (If You Can Find the Beast)

Got a huge PC case? Got a big budget and an even bigger urge to be an early adopter? Then go for it. The $759 MSI GeForce RTX 3080 Gaming X Trio 10G is a fine graphics card that, while slightly pricier than either the $699 RTX 3080 Founders Edition or the $729 Asus TUF Gaming RTX 3080 OC, brings the beast to 4K gaming or very-high-refresh 1440p play. The card is a brute, scores a few outright frame-rate victories, tops the third-party cards we have seen so far, and (taken in a vacuum) is worth the price of entry.

The problem for the Gaming X Trio, though, is that we don’t live in a vacuum, and the $699 RTX 3080 Founders Edition exists, at least in theory. That card is quite a bit more compact, and its performance is more or less identical. Still, the same thing we noted in our review of the Asus TUF Gaming RTX 3080 OC rings true here: In this market, the best RTX 3080 is the RTX 3080 you can actually find on sale. Founders Edition or OEM card, the RTX 3080 is very hard to snag on real or virtual shelves right now, and that supply shortage is rumored to continue until at least Q1 2021, if not beyond.

If the Gaming X Trio 10G turns out to be the only RTX 3080 you can find, rest assured that you’re leaving nothing on the table (except, presumably, another $60) by opting for it. Just check your power supply first for the three leads you’ll need, and perhaps be prepared to move your build into a roomier chassis to house this monster.

Gigabyte GeForce GTX 1660 OC 6G Review

Meet the Nvidia GeForce GTX 1660

Nvidia’s latest mainstream GPU, the GeForce GTX 1660, aims for gamers who need solid 60fps-plus performance with AAA games at 1080p, and even higher frame rates for esports titles. Gigabyte’s OC version hits those marks well.


Gigabyte GeForce GTX 1660 OC 6G Specs

Name Value
Graphics Processor Nvidia Turing TU116
GPU Base Clock 1530 MHz
GPU Boost Clock 1830 MHz
Graphics Memory Type GDDR5
Graphics Memory Amount 6 GB
DVI Outputs 0
HDMI Outputs 1
DisplayPort Outputs 3
VirtualLink Outputs No
Number of Fans 2
Card Width double
Card Length 8.8 inches
Board Power or TDP 130 watts
Power Connector(s) 1 x 8-pin

With the silicon from the just-launched GeForce GTX 1660 Ti still warm, Nvidia is keeping the fabrication lines rolling full tilt with another new mainstream GPU, the GeForce GTX 1660. Priced starting at $219 for base models, the GeForce GTX 1660 is $60 cheaper than the base versions of the GeForce GTX 1660 Ti ($279) but offers many of the same benefits. The key one? The muscle to play popular multiplayer titles at 1080p resolution above 60 frames per second (fps), even above 100fps in some cases. The first GTX 1660 card we tested, Gigabyte’s $219 GeForce GTX 1660 OC 6G, is the GTX 1660 in fairly stripped-down form, but it provides solid performance and makes a fine mainstream upgrade from previous-generation video cards like the GeForce GTX 960, GTX 1050, or GTX 1060.

Gamers on a Budget, Gather ‘Round

Like the GeForce GTX 1660 Ti, the GeForce GTX 1660 is a mainstream waypoint between low-end Nvidia cards like the GeForce GTX 1050 and the much pricier GeForce RTX line. The card carries all the benefits of Nvidia’s switch to the Turing architecture, while leaving behind the specialized Tensor and RT cores that give the GeForce RTX cards their higher prices.

Based on the same TU116 Turing chip as the GeForce GTX 1660 Ti, the GeForce GTX 1660 is, for the most part, a scaled-down version of the Ti card, with one notable difference: the use of GDDR5 memory instead of GDDR6. (Both cards have 6GB.) This downgrade doesn’t hurt throughput relative to the previous-generation GTX 1060 (both are rated at a steady 192.1GBps), but it is still a curious step back from the Ti’s GDDR6 for Nvidia to take.
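For the curious, that bandwidth figure is straightforward arithmetic: per-pin data rate times bus width, divided by eight bits per byte. A quick sketch (our own illustration) using the GTX 1660’s published 8Gbps GDDR5 on its 192-bit bus:

```python
# Peak memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8.
# Figures assumed here: the GTX 1660's published 8 Gbps GDDR5 on a 192-bit bus.

def bandwidth_gbps(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in gigabytes per second."""
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gbps(8.0, 192))  # 192.0, in line with the rated 192.1GBps
```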

Similar Products

MSI GeForce GTX 1660 Ti Gaming X 6G

XFX Radeon RX 590 Fatboy

Nvidia GeForce RTX 2060 Founders Edition

MSI GeForce RTX 2070 Armor

XFX Radeon RX 580 GTS XXX Edition

Beyond that, however, the GeForce GTX 1660 is functionally almost identical to the GeForce GTX 1660 Ti, save for some critical downgrades in the boost-clock frequency, the power consumption, and the number of CUDA cores available to render on.

Old Against New: Nvidia GeForce GTX 1660 vs. Nvidia GeForce GTX 1060

To give a frame of reference for how Nvidia has stepped up its game from the previous generation of Pascal GTX cards, here’s a 1:1 comparison of the Gigabyte version of the GeForce GTX 1660 I have on hand here for review against the reference card of the GeForce GTX 1060.

The GeForce GTX 1660 is launching for $30 less than its predecessor did back in 2016, but it features substantial architectural improvements that are reflected in significantly faster results in our benchmark tests below.

Nvidia Against AMD: GeForce GTX 1660 vs. Radeon RX 590

As Nvidia continues to steamroll through the GPU market with new Turing-based cards, it’s also shoring up a position as the value leader (AMD’s former sweet spot) with cards like the GeForce GTX 1660. Here’s a look at it versus the reference specs of the Radeon RX 590, AMD’s most comparable card. In terms of real-world pricing, though, the GTX 1660 at $219 or a little more falls between most models of the Radeon RX 590 and the older Radeon RX 580.

The GeForce GTX 1660 beats out the Radeon RX 590 both on price and in performance, despite having less available memory and a smaller memory bus width. For more detail on how memory plays a part in 3DMark results, head on over to our deeper-dive comparison of the GTX 1660 versus the AMD RX 590 for a full explanation. As you’ll see in our benchmark tests below, in a few tests the Radeon RX 590 manages to keep pace, but on the whole, Nvidia has cemented its cred as the leader in the value-centric mainstream GPU market (at least, in theory, until AMD’s next-gen "Navi"-based cards drop sometime later this year).

Gigabyte’s GeForce GTX 1660 OC 6G: The Walk-Around

As a base card, the Gigabyte GTX 1660 OC 6G is predictably minimal in every sense of the word, from its design to its specs to the lack of added lighting anywhere on the card.

The card itself measures 8.8 inches front to back, which makes it a good fit for most medium-size desktop chassis, and it could even fit into a MicroATX case if you laid all your cables right. It requires just one eight-pin power connector from your PSU.

On the trailing edge are three DisplayPort 1.4 ports, as well as one HDMI 2.0b output. No DVI or VirtualLink here, as Gigabyte aims to cut costs by trimming the number of connectors on the I/O backplane.

Let’s Get Benching!

PC Labs ran the Gigabyte card through a series of DirectX 11- and 12-based synthetic and real-world benchmarks. Our test rig is equipped with an Intel Core i7-8700K processor, 16GB of G.Skill DDR4 memory, a solid-state boot drive, and an Aorus Z370 Gaming 7 motherboard.

For our benchmark results, we wanted to focus some of our efforts (in an all-new testing regimen) on the esports abilities of the GTX 1660, as much of the press from Nvidia with this card and the GTX 1660 Ti has centered around its ability to push 1080p online multiplayer games (like those in the hyper-popular "battle royale" genre) to their highest possible frame rates.

We also ran the GTX 1660 through the rest of our standard benchmarks, which test a card’s abilities to handle AAA games at the highest possible quality settings. With the GeForce GTX 1660 being the budget-focused card it is, we weren’t expecting it to smash any records in this department, and graded it on a curve with that in mind.

Synthetic Benchmarks

3DMark Fire Strike Ultra

Synthetic benchmarks can be good predictors of real-world gaming performance. Futuremark’s circa-2013 Fire Strike Ultra is still a go-to for simulating 4K-based gaming. We’re looking only at the graphics subscore, not the overall score.

At first glance, the results we got from 3DMark’s Fire Strike Ultra test would make you think that, clearly, the Radeon RX 590 is the better bet. But keep reading, and you’ll see how that narrative soon gets flipped on its head.

3DMark Time Spy and Time Spy Extreme

This is Futuremark’s DirectX 12-enabled benchmark for predicting the performance of DirectX 12-enabled games. It uses major features of the API, including asynchronous compute, explicit multi-adapter, and multi-threading.

Again, Gigabyte’s version of the GTX 1660 doesn’t look to be pulling ahead in the areas it should, even getting beat out by the older (and cheaper) GTX 1060.

Unigine Superposition

Our last synthetic benchmark is Unigine’s 2017 release, Superposition. This benchmark does incorporate ray tracing, but it’s done in software, not hardware, and thus doesn’t utilize the ray-tracing (RT) cores of the RTX 20 series in these charts.

It’s here, in the Unigine Superposition test, that we finally start to create a baseline for what to expect in our real-world gaming tests. The GeForce GTX 1660 pulls ahead of its predecessor, the GeForce GTX 1060, and it also posts its first win versus the Radeon RX 590 at each tested resolution.

Real-World Gaming

The following benchmarks are games that you can actually play. The charts themselves detail the settings we used (typically the highest in-game preset and, if available, DirectX 12).

A quick note: Though most of our game tests are maxed out in graphical fidelity to push the cards to their limit, multiplayer gaming is all about balancing graphical fidelity against frame rate. As such, we tuned the test sequences for Apex Legends, CS:GO, and Rainbow Six: Siege to a sensible compromise of settings (higher anti-aliasing but lower shadows, for example), while trying to keep 1080p frame rates above that coveted 120fps or 144fps mark for high-refresh monitors. Playing at the more common 60Hz is a cinch with these games on a card of this caliber.
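Those refresh-rate targets translate directly into per-frame time budgets; the quick math (our own illustration):

```python
# A monitor's refresh rate implies a per-frame time budget: the GPU must
# deliver each frame within (1000 / Hz) milliseconds to keep the panel fed.

def frame_budget_ms(refresh_hz: int) -> float:
    """Milliseconds available to render each frame at a given refresh rate."""
    return 1000 / refresh_hz

for hz in (60, 120, 144, 240):
    print(f"{hz}Hz -> {frame_budget_ms(hz):.2f} ms per frame")
```

In other words, a 144Hz panel leaves the GPU under 7 milliseconds per frame, which is why we trim settings on the esports titles.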

Shadow of the Tomb Raider

Square Enix’s most recent title in the Tomb Raider franchise is our first real-world test. This game is well-optimized for the PC platform, but very demanding at its higher visual quality settings.

Because Shadow of the Tomb Raider is one of the few games that’s made to showcase Nvidia’s technology directly, it’s no surprise to see the AMD Radeon RX 590 get left way in the dust at 1080p on this test. (It was much closer at the two higher test resolutions.)

Rise of the Tomb Raider

The 2015 predecessor to Shadow of the Tomb Raider is still a great benchmark.

The scores shifted a bit in the Radeon RX 590’s favor on this trial, but the GeForce GTX 1660 still came out nontrivially ahead of the GTX 1060 Founders Edition.

Far Cry 5 and Far Cry Primal

The fourth and fifth installments in the Far Cry series are based on DirectX 11, but still demanding. We’re grouping the benchmark charts together here since the games perform similarly in a relative sense.

The Far Cry titles are where the GeForce GTX 1660 starts to settle into its lead as the superior GPU over AMD’s aging tech. The results are right on par with what Nvidia advertises as the benefits over the GeForce GTX 1060, and they also fall in line with the numbers posted by MSI’s overclocked GeForce GTX 1660 Ti.

Final Fantasy XV

We’ll take a respite from fps-based benchmarks for Final Fantasy XV.

As is usually the case, the contest between comparable AMD cards and the GeForce GTX 1660 wasn’t even close on this test, but that’s due to Nvidia working so closely with Square Enix during the process of porting FFXV from PS4 to PC. Like-priced Nvidia cards will probably always win this one, no matter what AMD serves up.

World of Tanks Encore

This is another non-fps-based benchmark that’s available as a free download. It’s not super-demanding, but it’s still a reliable test.

We were actually expecting things to be a bit closer on this test between the GeForce GTX 1660 and the older GeForce GTX 1070, but considering that the GTX 1070 also topped the GTX 1660 Ti in this test, it’s no surprise that the GeForce GTX 1660 fell behind by a similar margin.

Apex Legends

Apex Legends is the newest, most exciting battle royale on the planet, and it’s gaining adherents by the millions. So we figured, if you’re upgrading your rig or buying a whole new one to get in on the phenomenon, you should have a good idea of what kind of performance to expect on that first boot, right?

Though we weren’t able to hit the Holy Grail of 144fps during our Medium-preset tests of Apex Legends in 1080p, we’d imagine that, with just a few settings turned down from there, the GTX 1660 OC 6G would have no problem keeping up with your 144Hz monitor’s refresh rate at that same resolution. (For more detail around performance with this uber-popular game, see our analysis of The Best Graphics Cards for Apex Legends.)

Counter-Strike: Global Offensive

One of the oldest yet still most popular online games around the globe, Counter-Strike: Global Offensive is the latest in a long line of titles that have changed almost nothing about their core gameplay since 1999, and gamers wouldn’t have it any other way. The engine is considered one of the best-optimized in all of PC gaming, which makes it easy to spot major gaps between one card’s abilities and another’s.

No surprises here. Counter-Strike: Global Offensive will run at high frame rates on anything that isn’t an actual bread-burning toaster, so if you have a 240Hz monitor running at 1080p, the GeForce GTX 1660 OC 6G will still give you about 40 more frames of breathing room before it tops out.

Rainbow Six: Siege

Finally, there’s Rainbow Six: Siege. Despite the game languishing in bugs and server problems during its first year, Ubisoft has worked hard on its CS:GO competitor, making it one of the most-played games on the Steam platform, with 45 million players and growing as of early 2019.

This is another fast-paced game that’s been highly optimized by its developers, though even with their help, the GeForce GTX 1660 still wasn’t able to get to 240fps on the high preset in 1080p. That said, medium or low settings should have no problem boosting you up to wherever you want to be, in order to match your monitor’s refresh rate.

Just a Touch of Overclocking

Normally, this is the section where we’d discuss the overclocking potential of a card, but in the case of the Gigabyte GTX 1660 OC, we were chained to the company’s proprietary Aorus Engine utility to handle the task.

Aorus Engine proved buggy with this card. Simply trying to adjust the fan speed could freeze the whole program, and it even crashed PC Labs’ entire testbed to a BSOD on two occasions. When I could get an OC profile to save and apply, the app would again hang and prevent me from launching standard benchmarks such as Fire Strike Ultra in 3DMark or the Far Cry 5 benchmark.

That said, between the crashes and lockups, I did manage to get in one 3DMark run with an added 175MHz on the boost clock and 250MHz on the memory clock. The results were roughly 7 percent better than the original run, which is fairly standard for third-party cards that already ship with some overclocking out of the box.

A Bit of Temperature-Taking

The Gigabyte GeForce GTX 1660 OC 6G features two fans with triangular blade edges, which Gigabyte claims create a more even flow of air across the entirety of the heatsink. That is, when they’re running at all.

Gigabyte is confident enough in the heat-dispersion capability of its heat sink that unless you’re running a game or movie that really kicks the GPU into gear, both fans stay off until their services are required. This is designed to cut down on the noise level of the GPU, and the card was able to keep itself relatively chill (50 degrees C) during normal activity such as web browsing or typing out this very review.

For our more formal thermal testing, we ran the 3DMark Fire Strike Ultra Stress Test for 10 minutes, and tracked the progress of the temperatures throughout.

Things did get a bit hotter under the GPU collar than expected, with the card topping out at 71 degrees C during peak load times.

As you can see, most of the heat generated by the card was soaked up by the metal plate attached to the back and radiated outward in a smooth pattern from that central source.

The 1660: An Nvidia Win for the Mainstream

Just as the GeForce GTX 1660 Ti made AMD’s Radeon RX Vega 56 cards look like a poor value, Gigabyte’s GeForce GTX 1660 OC 6G puts the competitively priced AMD Radeon RX 590 in a pickle. That AMD card may come with a couple of good bundled games for a limited time, but AMD might be compelled to cut the RX 590’s price to keep pace. Once again, Nvidia is succeeding in filling out a holistic, balanced GPU line for gamers, and the race for second place isn’t all that close.

The Gigabyte GTX 1660 OC 6G could have been a bit more overclockable and just a little flashier in design for our tastes. But what it lacks in flair, it more than makes up for in reliable 1080p gaming performance on a budget.