The GPU Flashback Archive: NVIDIA GeForce 600 Series and the GeForce GTX 680


The GPU Flashback Archive arrives today at the NVIDIA 600 series, which debuted in the spring of 2012. The new range of cards showcased a new graphics architecture and marked the beginning of what we might describe as the Kepler era. Let’s take a peek at the changes the new design heralded, as well as a close-up look at the GeForce GTX 680, historically the most popular 600 series card among HWBOT members. Before we look at some notable scores made with the GeForce GTX 680, let’s first kick off with an overview of the innovations that arrived with the new Kepler architecture.


NVIDIA GeForce 600: Overview

If we cast our minds back to 2012 we can recall an era when NVIDIA and AMD were virtually neck and neck, with successive graphics card launches from each company swinging the performance crown back and forth. The arrival of Kepler in many ways represents the beginning of the end of the competitive duopoly that is so clearly absent today. Kepler helped NVIDIA push ahead of AMD in terms of graphics processor design, creating a performance lead which AMD still finds insurmountable, despite the arrival of their latest Vega-based cards. Let’s take a look at Kepler in a little detail.

GeForce 400 and 500 series cards had been based on a Fermi architecture that strove for raw performance, almost regardless of cost. The GeForce GTX 480 and GTX 580 had done a good job of keeping NVIDIA competitive against a quite ambitious and effective AMD. However, both the GF100 and GF110 GPUs had been monsters in terms of die size, power draw and heat, all issues which NVIDIA sought to address with Kepler. The Kepler architecture retains many of the same ideas as Fermi, but with an emphasis on improving overall efficiency. One key strategic decision was to remove the separate shader clock in favor of a single unified clock, compensating by adding considerably more CUDA cores.

The first Kepler GPU to arrive on the scene was the GK104, which packs 8 streaming multiprocessors (renamed SMXes) containing 1,536 CUDA cores in total. That is three times the 512 cores of the previous generation’s GF110, a substantial increase that essentially compensates for the removed shader clock: Fermi’s shaders ran at twice the core clock, so Kepler needs roughly double the cores just to match the same per-clock shader throughput. Yet, despite all these additional cores and a whopping transistor count of more than 3.5 billion, the GK104 is in fact a considerably smaller chip, measuring only 294 mm². If we compare that to the GF110, which packed 3 billion transistors into 520 mm², it’s easy to see that NVIDIA were chasing massive improvements in efficiency. In terms of power too, we see the flagship card drop from 244W to 195W. In terms of VGA card cooling designs, a drop of almost 50W is very significant. How was NVIDIA able to achieve this? As well as design changes, they were also able to exploit TSMC’s new 28nm manufacturing capabilities, a major factor in reducing die size and power draw.
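To put that trade-off in rough numbers, here is a minimal back-of-the-envelope sketch. It treats peak shader throughput as simply cores multiplied by shader clock, and assumes the GTX 580’s reference clocks (772MHz core, 1,544MHz shader), which are not quoted in this article:

  # Rough, simplified comparison of peak shader throughput (cores x shader clock).
  # Reference clocks only; real-world performance depends on far more than this.
  gf110_cores, gf110_shader_mhz = 512, 1544    # GTX 580: shaders ran at 2x the 772MHz core clock
  gk104_cores, gk104_clock_mhz = 1536, 1006    # GTX 680: unified clock, no separate shader domain

  gf110 = gf110_cores * gf110_shader_mhz
  gk104 = gk104_cores * gk104_clock_mhz

  print(f"GK104 vs GF110 raw shader throughput: {gk104 / gf110:.2f}x")
  # -> roughly 1.95x: three times the cores running at about 0.65x the old shader clock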

From a design perspective, the new GK104 is in fact more comparable to the last Fermi revision, the GF114, which arrived a year earlier on GeForce GTX 560 Ti cards. The GF114 was a more efficient version of Fermi, designed primarily for gaming applications as opposed to pure compute. Whereas the GeForce 500 series was launched with the GF110, a chip we can refer to as ‘Big Fermi’, the GeForce 600 series arrived sporting the GK104, a more gaming-oriented version of Kepler. ‘Big Kepler’ would arrive in due time in the form of the GK110, a 7 billion transistor beast that debuted on NVIDIA’s Tesla K20 compute cards. It would also adorn the first NVIDIA GTX Titan card.

Another salient point to mention about Kepler is the memory controller. Fermi’s memory controller was originally designed to reach beyond frequencies of 1,000MHz (4GHz effective), yet the GTX 580 shipped with its GDDR5 at precisely 1,002MHz. The GK104 features a massively improved memory controller that clocks 50% higher than its predecessor, which is why the series arrives with its GDDR5 ICs tuned to 1,502MHz (6GHz effective). The new architecture uses a 256-bit bus, narrower than the 384-bit bus we saw on Fermi 2.0.
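Running the numbers shows why the narrower bus was not a handicap. Below is a minimal sketch using the memory clocks and bus widths quoted above, plus the fact that GDDR5 transfers data four times per memory clock; the helper name is just for illustration:

  # Peak memory bandwidth = effective data rate x bus width / 8 (bits -> bytes)
  def bandwidth_gb_s(mem_clock_mhz, bus_width_bits):
      effective_mt_s = mem_clock_mhz * 4            # GDDR5 is quad-pumped
      return effective_mt_s * bus_width_bits / 8 / 1000

  print(f"GTX 580 (384-bit @ 1,002MHz): {bandwidth_gb_s(1002, 384):.1f} GB/s")  # ~192.4 GB/s
  print(f"GTX 680 (256-bit @ 1,502MHz): {bandwidth_gb_s(1502, 256):.1f} GB/s")  # ~192.3 GB/s

In other words, the 50% faster GDDR5 almost exactly cancels out the narrower bus, leaving peak bandwidth essentially unchanged from the GTX 580.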

One other major change from the perspective of clock speeds, however, is the introduction of GPU Boost. To further optimize efficiency NVIDIA took a leaf from Intel’s CPU design book and gave Kepler architecture GPUs the ability to dynamically boost the core clock to higher frequencies when needed. These boost clocks work in tandem with thermal sensor and power draw data to keep the GPU within acceptable, predetermined parameters. Overclocking your GPU now became a matter of steering the core clock by manipulating the GPU Boost power target and the GPU clock offset, just as we do today. At launch, EVGA’s Precision X utility was one of the first to expose these GPU Boost controls in software.
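To make the idea concrete, here is a highly simplified, purely illustrative sketch of the kind of control loop GPU Boost implements. The names, step size and limits are invented for illustration and this is not NVIDIA’s actual algorithm; raising the power target in a tool like Precision X effectively raises the power limit in such a loop, while a clock offset shifts the whole clock range upward:

  # Purely illustrative boost-style control loop: step the clock up while power and
  # temperature stay under their targets, step back toward base otherwise.
  # All names, bins and limits here are invented; this is not NVIDIA's actual algorithm.
  BASE_CLOCK_MHZ = 1006     # GTX 680 reference base clock
  CLOCK_BIN_MHZ = 13        # hypothetical boost step
  POWER_TARGET_W = 195      # board power target (the card's TDP)
  TEMP_LIMIT_C = 98         # hypothetical thermal limit

  def next_clock(current_mhz, power_w, temp_c):
      """Pick the next clock: boost when there is headroom, throttle when over target."""
      if power_w < POWER_TARGET_W and temp_c < TEMP_LIMIT_C:
          return current_mhz + CLOCK_BIN_MHZ
      return max(BASE_CLOCK_MHZ, current_mhz - CLOCK_BIN_MHZ)

  print(next_clock(1006, power_w=150, temp_c=70))   # headroom -> 1019
  print(next_clock(1058, power_w=200, temp_c=80))   # over the power target -> 1045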

As you can see in the NVIDIA slide above, the company actually marketed its GPU Boost feature (shown as a boost of 100MHz) as ‘Overclocking’.


The Most Popular NVIDIA GeForce 600 Card: The GeForce GTX 680

It’s time to take a look at the most popular NVIDIA 600 series cards in terms of submission numbers to the HWBOT database:

  • GeForce GTX 680 – 20.49%
  • GeForce GTX 670 – 16.80%
  • GeForce GTX 660 – 14.06%
  • GeForce GTX 660 Ti – 10.28%
  • GeForce GT 610 – 7.29%
  • GeForce GTX 650 – 4.72%
  • GeForce GTX 690 – 4.36%
  • GeForce GTX 650 Ti – 3.51%
  • GeForce GT 630 (DDR3) – 2.94%
  • GeForce GTX 650 Ti (Boost) – 2.25%

As with recent NVIDIA generations, the card that accounts for the most submissions is the flagship x80 card, in this case the GeForce GTX 680, which was used in 20.49% of all submissions. In comparison to the GeForce 500 series, however, we do see the cards that sit just below the GTX 680 take up a more substantial part of the pie. The GTX 670 (launched in May 2012) was used in 16.80% of all submissions, while the GTX 660 and the GTX 660 Ti (both launched in August 2012) also reach double figures. Without further ado, let’s look at the GTX 680 card in a little more detail.

The GeForce GTX 680 was officially launched on March 22nd, 2012 with a price tag of $499 USD. It was a PCIe 3.0 compliant, 2-slot card that offered a pair of DVI outputs, one HDMI output and (for the first time on an NVIDIA reference card) a DisplayPort output. Its GK104 GPU was clocked at 1,006MHz (boosting to 1,058MHz) and was paired with 2GB of GDDR5 clocked at 1,502MHz (6GHz effective). Thanks to its lower 195W TDP it needed only two 6-pin PCIe power connectors. In terms of cooling the design varied from the GTX 580 in that the fan was repositioned slightly higher up, away from the center of the PCB, to make room for the dual DVI ports which occupy the lower portion of the exhaust vent. The shroud that houses the aluminium heatsink no longer features the wedge-shaped design used on the GTX 580. One other thing to note is that the card is also half an inch shorter, at exactly ten inches.

Once we get past the reference design we find the GTX 680 being re-fashioned by NVIDIA partners with custom cooling solutions and boosted frequencies. Here’s an example of what ASUS were offering their customers: a 3-slot card with a 1,201MHz GPU boost clock taking its place front and center on the color box.

Interestingly perhaps, if we again refer to the submission data we find the dual-GPU GTX 690, which was launched in May of 2012, was used in only 4.36% of submissions. This card could almost be seen as foreshadowing the NVIDIA Titan series in that it retailed at launch for $1,000 USD. Dual-GPU cards often end up compromising, either on per-GPU clock speeds or with massive, noisy cooling solutions. With the more power-efficient Kepler GPUs, NVIDIA clearly felt they had a chance to push dual-GPU solutions into a more mainstream position in the market. With an asking price equal to that of two GTX 680s, however, it’s easy to see why it didn’t really work out the way they hoped. In terms of looks, the GTX 690’s silver shroud really does make it a candidate for a Titan forerunner.


NVIDIA GeForce 600 Series: Record Scores

We can now take a look at some of the highest scores posted on HWBOT using an NVIDIA GeForce GTX 680 card, the fastest single-GPU card in the 600 series lineup.


Highest GPU Frequency

Although technically speaking GPU frequency (like CPU frequency) is not a true benchmark, it remains an important metric for many overclockers. Looking through the database, we find that the submission with the highest GPU core frequency using a GeForce GTX 680 card comes from the legendary k|ngp|n (US). He pushed the GPU of an EVGA GeForce GTX 680 Classified card (which he may well have helped design) to 2,080MHz, an incredible +106.76% beyond stock settings. The rig used also included an Intel Core i7 3930K ‘Sandy Bridge-E’ processor clocked at 5,600MHz (+75.00%).
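For reference, the overclock percentages quoted on HWBOT are simply the achieved clock relative to the stock clock. The figures above can be reproduced as follows, assuming the GTX 680’s 1,006MHz reference clock and the i7 3930K’s 3.2GHz base clock as the stock values:

  def oc_percent(achieved_mhz, stock_mhz):
      """Overclock expressed as a percentage gain over the stock clock."""
      return (achieved_mhz / stock_mhz - 1) * 100

  print(f"GTX 680 GPU: +{oc_percent(2080, 1006):.2f}%")   # +106.76%
  print(f"i7 3930K:    +{oc_percent(5600, 3200):.2f}%")    # +75.00%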

You can find the Hardware First Place submission from k|ngp|n here on HWBOT: http://hwbot.org/submission/2320002_kingpin_3dmark11___performance_geforce_gtx_680_17047_marks


3DMark Vantage

The highest 3DMark Vantage score submitted to HWBOT using a single NVIDIA GeForce 680 card comes from Splave (US), an Elite overclocker currently ranked as No.5 in the world on HWBOT. He recently pushed a GTX 680 card to 1,650MHz (+64.02%) on the GPU and 1,900MHz (+26.50%) on the memory to hit a score of 64,358 marks. It’s certainly worth mentioning that the rig also used a very nicely pushed Intel Core i9 7980XE ‘Skylake-X’ processor clocked at 4,900MHz (+88.46%).

You can find the submission from Splave here on HWBOT: http://hwbot.org/submission/3702895_splave_3dmark_vantage___performance_geforce_gtx_680_64358_marks


Aquamark

In the classic Aquamark benchmark we find that Splave (US) is again the highest scorer with a single GeForce GTX 680 card. He holds the Hardware First Place record with an impressive score of 642,047 marks. The score was made just two weeks ago and will have benefited massively from being paired with an Intel Core i7 7740X ‘Kaby Lake-X’ chip clocked at 7,005MHz (+62.91%).

You can find the submission from Splave here on HWBOT: http://hwbot.org/submission/3709867_splave_aquamark_geforce_gtx_680_642047_marks

Thanks for joining us for this week’s episode of the GPU Flashback Archive series. Come back next week and check out the NVIDIA GeForce 700 series of graphics processors and cards.




