Today we find the GPU Flashback Archive delving into the not so distant past to focus on the NVIDIA 900 series of graphics cards, the first to use NVIDIA’s new Maxwell architecture, which had already seen the light of day in mobile GPU solutions, an indication of the direction that the company was taking at the time. Let’s take a look at the cards that were launched as part of the 900 Series, the improvements and changes that Maxwell brought, and some of the more memorable scores that have been posted on HWBOT.
The first question one may well have regarding the NVIDIA 900 series is simple - what happened to the 800 series? To answer the question fully, you must first look at the direction that NVIDIA was moving at the time: a move to expand its product offerings in order to compete in the quickly expanding mobile SoC market. The sudden ubiquity of Android-based smartphones around the globe was fuelled in part by the development of mobile SoCs from Qualcomm, Samsung, Mediatek, Marvell, Allwinner and others. The traditional feature phone was quickly being replaced by smartphones that now required improved multi-core CPU performance, HD display support and, importantly from NVIDIA’s perspective, decent enough graphics processing to actually play 3D games. Intel and NVIDIA were two companies with plenty of R&D and marketing budget who sought to enter a new market to help bolster revenues during an inevitable slowdown of desktop PC sales, a traditional cash cow for both.
The GPU Flashback Archive series continues today with a recap of the NVIDIA GeForce 700 series, a series refresh which heralded part two of the Kepler family of GPUs. We can also remember it as the time when NVIDIA launched their first ever GTX Titan card and, with it, a new pricing and retail strategy for truly high-end graphics card products. Let’s take a look at the new Kepler architecture GPUs, the cards that were popular with HWBOT members and some of the more memorable scores that have been posted since launch.
The 2011-2013 period saw NVIDIA implement a more regular cadence to their high-end product launches and refreshes, one that saw the company launch a new GPU architecture every two years, with new product lines arriving each year. This means deriving two product lines per architecture, with an improved version offered the second time out. This is what we saw with Fermi, an architecture whose potential was fully realized at the second attempt. With the GeForce 700 series, which arrived proper in May 2013 with both the GeForce GTX 780 and GTX 770, we have something different. The new cards used a much bigger version of the Kepler architecture compared to what we saw on the NVIDIA 600 series.
The GPU Flashback Archive arrives today at the NVIDIA 600 series that debuted in Spring of 2012. The new range of cards showcased a new graphics architecture design and the beginning of what we might describe as the Kepler era. Let’s take a peek at the changes that the new design heralded as well as a close-up view of the GeForce GTX 680 card, the most popular 6-series card with HWBOT members, historically speaking. Before we look at some notable scores that were made with the GeForce 680, let’s first kick off with an overview of what innovations arrived with the new Kepler architecture.
If we cast our minds back to 2012 we can recall an era when NVIDIA and AMD were virtually neck and neck, with successive graphics card launches from each company swinging the performance crown from side to side. The arrival of Kepler in many ways represents the beginning of the end of the competitive duopoly that is clearly absent today. Kepler helped NVIDIA push ahead of AMD in terms of graphics processor design, creating a performance lead which AMD still finds insurmountable, despite the arrival of their latest Vega-based cards. Let’s take a look at Kepler in a little detail.
This week the GPU Flashback Archive sets its sights on the GeForce 500 series from NVIDIA. Arriving in late 2010, the 500 Series was the second round of graphics cards based on the Fermi architecture, which had limped over the line in the previous generation, ostensibly due to fabrication and yield issues. The new flagship GTX 580 arrived with a more polished take on the Fermi design that helped NVIDIA combat the threat from AMD and their popular Radeon 5000 and 6000 series cards. As ever, let’s take a look at the new GPU, the new flagship card and a few of the outstanding scores that have been submitted to HWBOT.
To say that the NVIDIA 400 series graphics cards launch was less than smooth would be a total understatement. The GF100 Fermi architecture GPU in fact arrived six months late with a significant number of cores hacked off. Blame was laid at the door of fabricators TSMC and a 40nm manufacturing process that clearly hadn’t been optimally adapted for NVIDIA’s Fermi, a monster chip boasting 3 billion transistors and a 529mm² die. While cards such as the GTX 480 had actually done well to make NVIDIA competitive in performance terms, the GTX 580 and its GF110 GPU were rather quickly shoved out the door just eight months later as a revised and improved version of the original.
This week in our GPU Flashback Archive series we cast our minds back to a very popular and well loved graphics card series, the GeForce 400 series. NVIDIA launched the GeForce 400 series in March 2010 armed with a new Fermi architecture that it hoped would help it compete with the successful AMD Radeon 5000 series. Let’s look at the new features that Fermi offered, the cards that were popular and the scores that were submitted to HWBOT in this era.
Compared to previous product launches from NVIDIA, the GeForce 400 series launch did not go as smoothly as hoped. September 2009 saw AMD come out with their Radeon 5000 series, which made a solid case against NVIDIA’s 200 series offerings. It would be January before NVIDIA really started wooing tech media with tales of its forthcoming Fermi architecture lineup. It would be March 2010 before tech media actually got their hands on the new cards, and several weeks after that before enthusiasts would be able to actually buy one. This was not the typical NVIDIA launch. Reasons for the delay certainly seemed to lie with fabrication issues at TSMC, who were not providing the yields expected on their new 40nm process. This was a problem that particularly hurt NVIDIA because the new Fermi GPU, the GF100, was very large. When the GeForce 400 series finally arrived in the form of the GeForce GTX 480 and GTX 470, by most calculations they were six months late.
GIGABYTE has released a major revision of the AORUS Z370 Ultra Gaming motherboard. Revision 2.0 replaces the 7-phase CPU VRM of the original with a new 11-phase setup that uses stronger ferrite-core chokes that don't whine when stressed. The new revision will be part of the prize pool of the GIGABYTE HWBOT competitions this winter: the AORUS Winter OC Challenge, the currently running AORUS March Madness and the soon-to-be-announced April competition too!
The latest version of GPU-Z is now available from the guys at TechPowerUp. GPU-Z version 2.8.0 adds support for several AMD Vega-based mobile GPUs and improves stability with AMD ‘Raven Ridge’ APUs. As ever, the new release includes a bunch of significant bug fixes and optimizations:
To begin with, we've addressed driver-crash issues seen on AMD "Raven Ridge" APU iGPU enabled systems when using GPU-Z. The new DXVA 2.0 Features page in the "Advanced" tab is a ready-reckoner for all the video formats your GPUs provide hardware-acceleration for. We've made improvements to the accuracy of video memory usage readings on AMD Radeon GPUs, the rendering performance of the NVIDIA PerfCap sensor, and AMD power-limit readings in the "Advanced" tab.
Among the new GPUs supported are the Radeon RX 460 Mobile, RX 560 Mobile, RX 570 Mobile, RX 580 Mobile, and the RX 550 based on Baffin LE. Minor bug fixes include the NVIDIA PerfCap sensor drawing outside its area, the accuracy of temperature readings on AMD "Vega," a "BIOS reading not supported" error popping up on certain motherboards, and the driver digital signature reading getting truncated on high-DPI displays. Grab GPU-Z v2.8.0 from the link below.
Here’s the full changelog for version 2.8.0:
- Fixed crashes and other issues on AMD Ryzen Raven Ridge APU
- Added DXVA 2.0 hardware decoder info to Advanced Tab
- "Disable sensor" menu item now properly called "Hide"
- Improved VRAM usage monitoring on AMD
- Improved rendering performance of NVIDIA PerfCap sensor
- Improved AMD power limit reporting in Advanced Panel
- "MemVendor" is now included in XML dump output
- Fixed NVIDIA PerfCap sensor drawing outside its area
- Fixed "BIOS reading not supported" error on NVIDIA, on some motherboards
- Fixed HBM memory type detection in Advanced Tab on Fury X
- Fixed temperature misreadings on Vega
-Fixed "Digital Signature" label getting truncated on some hidpi screens
- Added support for RX 460 Mobile, RX 560 Mobile, RX 570 Mobile, RX 580 Mobile, RX 550 based on Baffin LE
A little while ago we noticed a flurry of very impressive 2D scores from US No.2 Splave. He used an octa-core Intel Core i7 7820X processor to make Global First Place scores in four of today’s multi-threaded 2D benchmarks: Cinebench R11.5, Cinebench R15, Geekbench 3 Multi-core and Intel XTU. Let’s have a peek at the rig used and also try to figure out what configuration he used:
In the two Cinebench benchmarks we find Splave pushing his Core i7 7820X under liquid nitrogen to 6,128MHz, which is a very satisfactory +70.22% beyond the chip’s stock settings. According to the CPU-Z screenshot he configured core voltage at 1.524 V. He also configured his DDR4 kit at 1,838MHz with 12-12-12-24 timings. His motherboard of choice is an ASRock X299 OC Formula. All of which helped him push the highest ever octa-core score in Cinebench R11.5 to 29.68 points, and in Cinebench R15 to 2,739 cb points. Both scores edge past the previous best from Sofos1990 (Greece).
When benching Geekbench 3 Multi-core we find the CPU cores of the same rig pushed slightly more conservatively to 6,115MHz (+69.86%) to hit a new octa-core high score of 50,725 points. In the Intel XTU benchmark the new Global First Place in the octa-core rankings also belongs to Splave with a score of 4,175 marks and his 'Skylake-X' architecture CPU clocked at 5,958MHz (+65.5%). Again, in both cases Sofos1990 loses out.
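The percentage figures quoted above are simply the achieved clock measured against the chip's stock clock. A quick sketch, assuming the Core i7 7820X's 3.6GHz stock base clock, reproduces each of the reported gains:

```python
# Overclock gain as reported on HWBOT: (achieved / stock - 1) * 100.
# Assumes the Core i7 7820X's 3.6 GHz stock base clock.
STOCK_MHZ = 3600.0

def oc_percent(achieved_mhz: float, stock_mhz: float = STOCK_MHZ) -> float:
    """Percentage gain of an achieved clock over the stock clock."""
    return (achieved_mhz / stock_mhz - 1.0) * 100.0

print(f"{oc_percent(6128):+.2f}%")  # Cinebench runs at 6,128 MHz -> +70.22%
print(f"{oc_percent(6115):+.2f}%")  # Geekbench 3 run at 6,115 MHz -> +69.86%
print(f"{oc_percent(5958):+.2f}%")  # XTU run at 5,958 MHz -> +65.50%
```

Running the three clocks through the formula lands exactly on the +70.22%, +69.86% and +65.5% figures in the article.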
GIGABYTE kicked off their 2018 season of OC contests here on OC-ESPORTS just a few weeks ago with the GIGABYTE AORUS Winter Challenge contest. With just one week left, it’s time to take a quick look at the current leaders, the CPUs and boards being used, the scoring and of course the great prizes up for grabs.
GIGABYTE AORUS Winter OC Challenge: Feb 1st – Feb 28, 2018
Running throughout the month of February 2018, the contest allows overclockers to use any Intel processor with six or fewer physical cores (disabled cores not allowed). However, to create a more level playing field, processors cannot use CPU core or cache frequencies above 5GHz. Contestants must use a GIGABYTE / AORUS motherboard and submit using the contest wallpaper to ensure only fresh submissions count. The contest spans three separate stages, each starting and ending on a different date to keep interest levels high throughout. Let’s take a look at the stages to get a snapshot of who looks in the frame for some great hardware prizes from GIGABYTE.
Stage 1: XTU (Feb 1st – 10th)
Stage 1 of the contest ultimately acts as a tiebreaker, with only one actual contest point awarded to all contestants that make a submission. Why do we need that? The way the contest is set up, with its max-5GHz, max-six-core limitations, we can expect plenty of overclockers to be hitting very similar scores. Stage 3 will act as the first tiebreaker, followed by Stage 2, with XTU scores from Stage 1 used as a final decider.
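The tiebreak order described above boils down to sorting contestants on a tuple of scores. A minimal sketch, with made-up names and scores purely for illustration, shows how ties cascade from Stage 3 down to the Stage 1 XTU score:

```python
# Illustrative sketch of the contest tiebreak order: Stage 3 decides
# first, then Stage 2, with Stage 1 XTU as the final decider.
# All names and scores below are invented for the example.
contestants = [
    {"name": "A", "stage3": 100, "stage2": 90, "stage1_xtu": 2825},
    {"name": "B", "stage3": 100, "stage2": 90, "stage1_xtu": 2810},
    {"name": "C", "stage3": 100, "stage2": 95, "stage1_xtu": 2700},
]

ranked = sorted(
    contestants,
    key=lambda c: (c["stage3"], c["stage2"], c["stage1_xtu"]),
    reverse=True,  # higher scores rank first at every level
)
print([c["name"] for c in ranked])
```

With all three tied on Stage 3, C wins on the Stage 2 score, and A edges out B only on the Stage 1 XTU decider.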
In Stage 1 we find Nik (Germany) at the top of the stable with a leading XTU score of 2,825 marks. As you would expect, he used an Intel Core i7 8700K, plus a GIGABYTE Z370N WIFI mini-ITX form factor motherboard. Crucially, he configured his DDR4 system memory at 2,036MHz (12-12-12-28), a configuration that may well have benefited from the smaller motherboard and shorter trace paths on the PCB itself. Every edge counts in a contest centered on memory and OS tweaking, which is what we’re looking at when CPU clocks are limited.
This week’s trip down memory lane centers on an interview that HWBOT conducted with three Elite overclockers on the issue of overclocking with liquid nitrogen. Der8auer (Germany), Vivi (S.Africa) and Rbuass (Brazil) sat down with Massman and Xyala for a general discussion on the topic of using LN2, the gear you need, the knowledge required and the benefits that it offers in terms of temperatures and performance. Here’s what we published on February 12th, 2014:
HWBOT - Why would an overclocker change to liquid nitrogen cooling?
Vivi - Overclockers always want more performance and a higher overclock. They know the only way to get it is with better cooling. To eliminate the cooling problem, you use liquid nitrogen as it can take the component to its coldest and/or best operating temperature. Then you are free to go for the highest possible overclock.
Rbuass - I also believe it is a quest for more performance. Many enthusiast overclockers feel that if they want to do better scores, they don’t want to be limited by the enthusiast-grade cooling anymore. So in search of the maximum, they gather all their courage and go extreme!
Der8auer - We all started as normal overclockers using air- or water cooling. However there is always the point where you hit the thermal limit of your setup. You can raise the voltage of your CPU or GPU but you won’t be able to clock higher. The conclusion is that you need a lower temperature to achieve better results. Participating in HWBOT rankings means competing with the rest of the world so in order to improve your ranking you have to step up to a better cooling solution such as dry ice or liquid nitrogen.
HWBOT - What aspects of an LN2 cooling solution do you believe are most important when considering a purchase?
Vivi - First make sure you have access to dry ice or liquid nitrogen, because a container is useless without active extreme cooling. Secondly, do some research for which pot is better for the cooling you will use. There are containers designed for dry ice and others designed for LN2. In most cases they are backwards compatible, though.
HWBOT - What are the things you look for in (extreme) cooling gear? Anything we should look for, or try to avoid?
Vivi - Surface area and weight is the most important for me. I prefer a heavy pot over a light one, because I have access to LN2 which has super-fast heat transfer capabilities, so it can cool down a heavy pot with good surface area fast. This is best for any light and heavy load benchmark. For dry ice, I would use a lighter pot with more surface area because dry ice can’t cool down a big pot quick enough during load.
Der8auer - Extreme overclocking is quite critical as there are a lot of side effects. High voltages or condensation water can easily kill your hardware if you are not well prepared. In terms of preparation it doesn’t matter which cooling solution you use, whether that’s dry ice, LN2 or a different pot: you always have to prepare your hardware carefully to achieve good results and have fun.
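Vivi's point about heavy pots needing LN2's fast heat transfer can be sanity-checked with a back-of-envelope heat-capacity estimate. The sketch below uses textbook constants for copper and liquid nitrogen, not measurements of any specific pot:

```python
# Rough estimate: how much LN2 boils off just to chill a copper pot
# from room temperature down to -196 C. Constants are textbook
# values; the 5 kg pot mass is an illustrative assumption.
CU_SPECIFIC_HEAT = 385.0     # J/(kg*K), specific heat of copper
LN2_LATENT_HEAT = 199_000.0  # J/kg absorbed per kg of LN2 boiled off
LN2_DENSITY = 0.807          # kg/L, liquid nitrogen at boiling point

def ln2_to_chill(pot_mass_kg: float, start_c: float = 20.0,
                 end_c: float = -196.0) -> float:
    """Litres of LN2 consumed just cooling the pot itself to end_c."""
    heat_j = pot_mass_kg * CU_SPECIFIC_HEAT * (start_c - end_c)
    return heat_j / LN2_LATENT_HEAT / LN2_DENSITY

print(f"{ln2_to_chill(5.0):.1f} L")
```

A heavy pot of around 5 kg soaks up roughly two and a half litres of LN2 before it even reaches full temperature, which is why a big thermal mass only makes sense when plentiful LN2 is on hand; dry ice simply cannot pull heat out of that much copper quickly enough under load.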
Last week saw the conclusion of Round #52 of the Rookie Rumble contest series and a first win for encrypted11, our first ever winner to hail from Singapore. German Rookie CSN7 finds himself in runner up spot again while Canada’s MeinFehr makes it into third place. Let’s take a look at the hardware and scoring that took place in a little detail:
Rookie Rumble #52: January 23 - February 15th, 2018 - Firstly, however, let me give you a quick reminder of what the Rookie Rumble series is all about. The central idea is to give Rookie-class HWBOT members a place where they can compete against each other on a level playing field. For this reason Enthusiast, Extreme and Elite overclockers are not eligible to compete. Round #52 of the contest was set up with three distinct stages featuring these three benchmarks: Intel XTU, Super 32M and Geekbench 3 (Single Core). Let’s examine each stage in isolation, starting of course with the ever popular XTU benchmark and Stage 1.
Stage 1: Intel XTU
The Intel XTU benchmark is without doubt the most popular benchmark with Rookie members on HWBOT, which is why it is no surprise to see 367 overclockers competing here in Stage 1. Its popularity is due largely to the fact that many newcomers experience overclocking for the first time through the XTU benchmark. Plus it has a simplified, integrated system tweaking UI and a very simple submission process. Unlike in previous contests, in Stage 1 of Round #52 we find that scores are not divided by core count, a fact that heavily favors the latest high-core-count Skylake-X processors.
The win in Stage 1 was taken by CSN7 (Germany), who used a custom water-cooled Intel Core i9 7920X processor that he pushed to a very impressive 4,980MHz, which is +71.72% beyond stock settings. His rig (pictured below) also featured an ASUS ROG Rampage VI Apex motherboard and a GeForce GTX 1080 Ti card. The winning score was 4,272 marks, quite a way ahead of second-placed stafel (US) with 4,040 marks using a Core i9 7940X clocked at 4,560MHz (+47.10%). Third place belongs to HailHappen with 3,681 marks using a moderately more affordable Core i9 7900X clocked at 4,630MHz (+40.30%).
We will be migrating the forums to Invision Power Board today. As all content will be migrated too, we expect this process to take a whole day. Commenting on submissions and news will not be possible as it is integrated with the forum.
Edit: come say hi in our new forums!
[Press Release] EK Water Blocks, the Slovenia-based premium computer liquid cooling gear manufacturer is releasing a new Socket TR4 based monoblock made for several GIGABYTE X399 motherboards. The EK-FB GA X399 GAMING RGB Monoblock has an integrated 3-pin RGB Digital LED strip which makes it compatible with GIGABYTE Fusion, thus offering a full lighting customization experience.
This is a complete all-in-one (CPU and motherboard) liquid cooling solution for two GIGABYTE AMD X399 Chipset based motherboards that support AMD Socket TR4 AMD Ryzen Threadripper processors. This monoblock is compatible with the following GIGABYTE motherboards: GIGABYTE X399 Aorus Gaming 7 (rev.1.0), GIGABYTE X399 Designare EX (rev.1.0).
Designed and engineered in cooperation with GIGABYTE, this monoblock uses a completely new cooling engine that ensures excellent CPU cooling performance. This water block directly cools the AMD Socket TR4 type CPU, as well as the power regulation module (VRM). Liquid flows directly over all critical areas, providing the enthusiasts with a great solution for high and stable overclocks. The additional included passive heatsink is used for the VRM and network chip components placed between the I/O shield and the memory DIMM slots.
This X399 platform based monoblock features a redesigned cold plate with a fin area that covers most of the Ryzen Threadripper IHS surface. The design also ensures that the monoblock cold plate is covering the entire Ryzen Threadripper processor IHS, thus enabling better thermal transfer. The base of the monoblock is made of nickel-plated electrolytic copper while the top is made of quality acrylic glass material. The main nickel plated mounting screws and brass screw-in standoffs are pre-installed so that the installation process is quick and easy.
Hot on the heels of the latest mandatory update to 3DMark Time Spy to version v2.4.4254 on February 5th, Futuremark have today released an update to version v2.4.4264. The update addresses problems with the submission process. Crucially, the new version of the 3DMark suite, along with the latest SystemInfo version 5.4, is now also mandatory for all HWBOT submissions:
3DMark v2.4.4264 Changelog
- Improved score validation checks. Result submits from previous versions will no longer be eligible for the 3DMark Hall of Fame.
3DMark v2.4.4254 Changelog
- The installer is now available in Japanese, Korean, and Spanish.
- To meet our improved score validation checks, hardware monitoring information is now required for competitive submissions to the 3DMark Hall of Fame.
- Restored the 3DMark splash screen when starting the application.
- Fixed a crash that could occur when the system returns unexpected values for the amount of video RAM.
Here’s an odd one for this week’s trip down memory lane, and it harks back to a day in February 2011 when we posted a story about a company called Corvalent, an industrial motherboard and systems manufacturer who attempted a pretty cool project that used treated water to submersion-cool an entire system. It’s not the most conventional approach to system cooling, but when you see the Core i3 processor literally simmering away inside the transparent chassis, it certainly is elegant. You can check out the video from 2011, which remains available on YouTube here. The following are notes from Corvalent that explain the rationale behind the project:
This is a video of an engineering experiment we conducted cooling a computer by completely submerging it in liquid. This liquid submersion cooling system is NOT using mineral oil, or any type of oil cooling. This experiment was done using a chemical made by 3M called Novec™ 7000. It has a low boiling point, and leaves no residue or any trace whatsoever behind on the motherboard. The board was equipped with an i3 processor, running at 100% load. Very interesting cooling results, and strange to see a computer processor without a heatsink, boiling liquid to keep cool.
The idea of cooling computers through liquid submersion has been around for about 50 years... but it has been generally reserved for the more exotic supercomputers and never really caught on with mainstream users. Perhaps it's because we in the technology world are all wired at an almost primal level to believe that: "Liquid + Computers = BAD". In any case, the concept is slowly catching on, particularly with some in the video gaming community who are using mineral oils as a non-conductive liquid to totally submerge a computer in. The mineral oil idea is interesting... but I can't imagine the unholy mess that comes about when it's time to upgrade or make a change, plus mineral oil isn't exactly the best for heat exchange.