Click on the competition images to go straight to the competition page, or click here for a more detailed overview at HWBOT.
World Tour 2017 and HWBOT X
Coming soon ...
Road to Pro 2017
Starts Feb 1, 2018
| Benchmark | Hardware | Frequency | Overclocker | Score | Points | Cups |
|---|---|---|---|---|---|---|
| GPUPI - 1B | Titan V | 1200/1080 MHz | H2o vs. Ln2 | 1sec 884ms | 100.5 pts | 0 2 |
| 3DMark11 - Performance | GeForce GTX 1080 Ti | 2556/1610 MHz | ikki | 43722 marks | 95.8 pts | 0 3 |
| 3DMark - Fire Strike | GeForce GTX 1080 Ti | 2075/1516 MHz | ikki | 47688 marks | 94.3 pts | 0 2 |
| 3DMark - Fire Strike Extreme | GeForce GTX 1080 Ti | 2075/1516 MHz | ikki | 37086 marks | 68.1 pts | 0 2 |
| Catzilla - 1440p | GeForce GTX 1080 Ti | 2190/1611 MHz | Bruno | 25797 marks | 51.4 pts | 0 0 |
| Geekbench4 - Single Core | Core i7 7700K | 6004 MHz | SAMBA | 8159 points | 49.8 pts | 0 0 |
| XTU | Core i7 7700K | 5750 MHz | SAMBA | 2001 marks | 48.4 pts | 0 0 |
| wPrime - 1024m | Pentium E2140 | 4235 MHz | nachtfalke | 10min 0sec 55ms | 47.5 pts | 0 1 |
| 3DMark - Fire Strike Extreme | GeForce GTX 1080 Ti | 2151/1624 MHz | jab383 | 16261 marks | 45.5 pts | 0 0 |
| wPrime - 32m | Pentium E2140 | 4425 MHz | nachtfalke | 18sec 848ms | 44.8 pts | 0 1 |
Today the GPU Flashback Archive delves into the not so distant past to focus on the NVIDIA 900 series of graphics cards, the first to use NVIDIA’s new Maxwell architecture, which had already seen the light of day in mobile GPU solutions - an indication of the direction the company was taking at the time. Let’s take a look at the cards that were launched as part of the 900 series, the improvements and changes that Maxwell brought, and some of the more memorable scores that have been posted on HWBOT.
The first question one may well have regarding the NVIDIA 900 series is simple - what happened to the 800 series? To answer it fully, you must first look at the direction NVIDIA was moving in at the time: a push to expand its product offerings in order to compete in the quickly expanding mobile SoC market. The sudden ubiquity of Android-based smartphones around the globe was fuelled in part by the development of mobile SoCs from Qualcomm, Samsung, Mediatek, Marvell, Allwinner and others. The traditional feature phone was quickly being replaced by smartphones that now required improved multi-core CPU performance, HD display support and, importantly from NVIDIA’s perspective, graphics processing decent enough to actually play 3D games. Intel and NVIDIA were two companies with plenty of R&D and marketing budget who sought to enter a new market to help bolster revenues during an inevitable slowdown of desktop PC sales, a traditional cash cow for both.
The GPU Flashback Archive series continues today with a recap of the NVIDIA GeForce 700 series, a series refresh which heralded part two of the Kepler family of GPUs. We can also remember it as the time when NVIDIA launched their first ever GTX Titan card and with it, a new pricing and retail strategy for truly high-end graphics card products. Let’s take a look at the new Kepler architecture GPUs, the cards that were popular with HWBOT members and some of the more memorable scores that have been posted since launch.
The 2011-2013 period saw NVIDIA implement a more regular cadence for their high-end product launches and refreshes, launching a new GPU architecture every two years with new product lines arriving each year - in effect, two product lines per architecture, with an improved version offered the second time out. This is what we saw with Fermi, an architecture whose potential was fully realized at the second attempt. With the GeForce 700 series, which arrived proper in May 2013 with both the GeForce GTX 780 and GTX 770, we have something different. The new cards used a much bigger version of the Kepler architecture compared to what we saw on the NVIDIA 600 series.
The GPU Flashback Archive arrives today at the NVIDIA 600 series that debuted in the spring of 2012. The new range of cards showcased a new graphics architecture design and the beginning of what we might describe as the Kepler era. Let’s take a peek at the changes that the new design heralded, as well as a close-up view of the GeForce GTX 680, historically the most popular 6-series card with HWBOT members. Before we look at some notable scores that were made with the GeForce GTX 680, let’s first kick off with an overview of the innovations that arrived with the new Kepler architecture.
If we cast our minds back to 2012 we can recall an era when NVIDIA and AMD were virtually neck and neck, with successive graphics card launches from each company swinging the performance crown from side to side. The arrival of Kepler in many ways represents the beginning of the end of the competitive duopoly that is clearly absent today. Kepler helped NVIDIA push ahead of AMD in terms of graphics processor design, creating a performance lead which AMD still finds insurmountable, despite the arrival of their latest Vega-based cards. Let’s take a look at Kepler in a little detail.
This week the GPU Flashback Archive sets its sights on the GeForce 500 series from NVIDIA. Arriving in late 2010, the 500 series was the second round of graphics cards based on the Fermi architecture, which had limped over the line in the previous generation, ostensibly due to fabrication and yield issues. The new flagship GTX 580 arrived with a more polished take on the Fermi design that helped NVIDIA combat the threat from AMD and their popular Radeon 5000 and 6000 series cards. As ever, let’s take a look at the new GPU, the new flagship card and a few of the outstanding scores that have been submitted to HWBOT.
To say that the NVIDIA 400 series graphics cards launch was less than smooth would be a total understatement. The GF100 Fermi architecture GPU in fact arrived six months late with a significant number of cores hacked off. Blame was laid at the door of fabricator TSMC and a 40nm manufacturing process that clearly hadn’t been optimally adapted for NVIDIA’s Fermi, a monster chip boasting 3 billion transistors and a 529mm² die. While cards such as the GTX 480 had actually done well to make NVIDIA competitive in performance terms, the GTX 580 and its GF110 GPU were rather quickly shoved out the door just eight months later as a revised and improved version of the original.
This week in our GPU Flashback Archive series we cast our minds back to a very popular and well loved graphics card series, the GeForce 400 series. NVIDIA launched the GeForce 400 series in March 2010 armed with a new Fermi architecture that it hoped would help it compete with the successful AMD Radeon 5000 series. Let’s look at the new features that Fermi offered, the cards that were popular and the scores that were submitted to HWBOT in this era.
Compared to previous product launches from NVIDIA, the GeForce 400 series launch did not go as smoothly as hoped. September 2009 saw AMD come out with their Radeon 5000 series, which made a solid case against NVIDIA’s 200 series offerings. It would be January before NVIDIA really started wooing the tech media with tales of its forthcoming Fermi architecture lineup, March 2010 before the tech media actually got their hands on the new cards, and several weeks after that before enthusiasts were able to actually buy one. This was not the typical NVIDIA launch. Reasons for the delay certainly seemed to lie in fabrication issues at TSMC, which was not providing the yields expected on its new 40nm process. This was a problem that particularly hurt NVIDIA because the new Fermi GPU, the GF100, was very large indeed. When the GeForce 400 series finally arrived in the form of the GeForce GTX 480 and GTX 470, by most calculations they were six months late.
Among the diverse group of people that make up the HWBOT membership roll, we have several members who seem to be involved in a race largely against themselves, which is exactly the way they like it. One such case involves China’s wytiwx and his work pushing older CPUs to new frequency heights. You may recall earlier this month we noted he pushed an Intel Mobile Celeron processor from its 1.2GHz base clock to beyond 4.5GHz, a massive and probably highest-ever percentage increase of +275%. Today we turn our attention to his latest project and a new Global First Place score using an Intel Mobile Pentium 4 532 processor.
Based on the Prescott architecture that Intel unleashed on the world in 2004, the Mobile Pentium 4 532 processor has a base clock of 3.06GHz, a pretty high default frequency by today’s standards. The processor belongs to the Socket 478 family of processors, which also includes Northwood and Gallatin offerings, an area of particular interest to wytiwx. He managed to push the P4 532 to a massive 6,617.65MHz, which is +115.84% beyond stock. This is not only the highest CPU frequency for a P4 532 processor, it also happens to be (according to the HWBOT database) the highest ever frequency of any Socket 478 family chip. The rig was based around an Intel P35 platform ASUS P5K-E/WIFI-AP motherboard with DDR2 configured at 575MHz (CL5-5-5-18). The score beats the previous highest frequency, which came from GRIFF (Italy) at 5,406.58MHz.
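The percentage gains quoted here follow from simple arithmetic. As a quick sanity check (a minimal sketch; the helper name is ours, and we assume the nominal 3.06GHz base clock is exactly 3,066MHz, which is what makes the quoted figures line up):

```python
def oc_gain_percent(stock_mhz: float, achieved_mhz: float) -> float:
    """Percentage increase of an overclocked frequency over stock."""
    return (achieved_mhz / stock_mhz - 1) * 100

# Mobile Pentium 4 532: nominally 3.06GHz, exactly 3,066MHz stock.
print(round(oc_gain_percent(3066, 6617.65), 2))  # 115.84, matching the article
print(round(oc_gain_percent(3066, 6243.05), 2))  # 103.62, the SuperPi run below
```

The same formula reproduces the +275% figure for the Mobile Celeron (1.2GHz to 4.5GHz).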
We find that wytiwx also managed to take down the Pentium 4 532 Global Ranked Score in the classic SuperPi 1M benchmark with a run of just 21sec 969ms, with the CPU clocked at a somewhat more conservative 6,243.05MHz (+103.62%). Once again we should note that the score is the highest SuperPi 1M submission using a Socket 478 processor. You may also be interested to note that wytiwx has been dabbling with running DDR3 system memory on Socket 478. Check out this submission, which involves 1GB of an OCZ DDR3 kit and an Intel Pentium 4 2.4GHz (Northwood, 200 FSB). This guy is having all the fun.
Check out all the score submissions in the links above, and also feel free to check in with wytiwx (China) here at his profile page. Thanks to Strunkenbold (Germany) for the heads up.
It’s been a while since Buildzoid last posted a live OC session on his Actually Hardcore Overclocking Youtube channel. After a decent vacation, he returns this week with a full-bore extreme OC session, complete with his trademark crazy hairdo and plenty of LN2. The mission this time around is to push a newly volt-modded AMD Vega 56 card.
The full run down includes a delidded Intel Core i9 7940X which is pushed on water cooling to 4.9GHz and above using an ASRock X299 OC Formula motherboard. In terms of memory, Buildzoid is using a Teamgroup T-Force Xtreem kit that promises frequencies of 3,733MHz with 12-12-12-28 timings. The Vega card itself is from Sapphire. The object of the OC session is to try and get some decent scores in 3DMark Fire Strike. As always it’s fun to see Buildzoid go through the motions of getting his sub-zero system to optimal performance levels. Whoever said that extreme overclocking is a series of blue-screens until you get it right will find plenty of evidence in this video.
You can find the OC live stream video from Buildzoid here on the Actually Hardcore Overclocking YouTube channel.
Steve Burke and the guys at Gamers Nexus continue to keep the flow of solid content coming. Most recently they did a report that looks into the murky and somewhat volatile area of DRAM pricing. DRAM, basically the system memory kits that we use in our rigs, tends to fluctuate considerably in price, depending largely on perceptions of supply and demand. Steve took a look into the pricing trends of DDR4 specifically and found that we are in fact paying the same price for our DDR4 kits as we were when the standard first arrived on the market. How is that possible? Gamers Nexus report:
While researching GPU prices and learning that GDDR5 memory price has increased by $20-$30 on the bill of materials lately, we started looking into the rising system memory prices. RAM pricing has proven somewhat cyclic over the past few years. We’ve reported on memory price increases dating back to 2012, and have done so seemingly every 2 years since that time. This research piece pulls five years of trend data, working in collaboration with PCPartPicker, to investigate why memory prices might be increasing, when we can expect a decrease, and more.
DRAM prices are crazy right now. We’ve driven that point into the ground over the past few years, but pinpointing a “when” and a “why” is a difficult proposition. With the help of PCPartPicker, we’ve identified some general trends that seem almost cyclic, and provide some relief in pointing toward an eventual downturn.
Read the full report from Gamers Nexus here. You may also want to check out this video from Steve which is actually quite eye-opening, making a compelling argument that something really isn’t quite right in the DRAM market right now. Nice report guys. Nice work.
Dennis Garcia and Darren McCain are back with their latest podcast. Hardware Asylum Episode 83, parts 1 to 3, offers an in-depth look at the CES 2018 show which took place in Las Vegas a week or two ago. The guys discuss pretty much every technology company that they caught up with at the show, plus a summary of the coolest things that they encountered that week:
CES or the Consumer Electronics Show is the first major trade show of the year and it happens down in Las Vegas. The show attracts 170,000 people annually and while it isn’t dedicated to PC hardware many vendors take advantage of the schedule and setup displays at the local hotels.
In this episode the duo talk about CES 2018 and some of the vendor meetings that they had. Believe it or not, this was Darren’s first time at CES, while Dennis could be considered a veteran, having attended the show annually since 2007. The show started on Day 0 with the Intel keynote, where they talked about data. There were some drones and even a drone-style taxi that flew indoors.
After that it was off to get some sleep and start the show. Vendors in this podcast include Asus and their day one Media Day, EVGA, Phanteks, AZIO and FNATIC. FNATIC might be an eSports team, but they have started branching out to offer apparel and now gaming keyboards and mice.
Check out all three Hardware Asylum podcasts right here.
The Cheapaz Chips Season 2 contest came to a conclusion around a week ago, with OGS (Greece) taking the win and a very nice GALAX GTX 1080 Ti HoF Lab Edition card (catch the full writeup here on OC-ESPORTS). The contest series is all about pushing cheaper graphics cards, and Season 2 focused on NVIDIA GT 1030 cards. Like all 1000 series cards, the GT 1030 and its GP108 graphics processor are based on the Pascal architecture, an architecture that presents a very specific set of challenges for many overclockers. The good news, however, is that US overclocker and Cheapaz Chips 4th place finisher niobium615 (US) has put together a really solid guide that specifically deals with pushing Pascal.
niobium615 (US) is a member of the /r/overclocking team on HWBOT, a team with a growing member list of enthusiastic overclockers. In fact the team had two representatives in the top five, a solid sign of their increasing pedigree. The /r/overclocking pages on reddit actually contain some great guides that span beginner to advanced levels. The Pascal guide from niobium615 contains lots of advice regarding things like voltage / frequency curves, throttling issues, the differences you will experience between ambient and extreme overclocking, driver interfaces and a whole lot more. Here’s a taste of what he has to say:
So, Pascal time. First thing to get out of the way; a custom BIOS can solve a lot of OC-related issues that Pascal has. Unfortunately, unless someone figures out how to sign a modified BIOS, that's not happening anytime soon. That leaves the NVAPI as the only other way to control the cards. Is it a bit limited? Yes, but you can still get them to behave much more nicely with the right commands.
One of the new features of Pascal is that clocks can now be set with voltage/frequency curves. In fact, that's the only way that clocks are handled on Pascal. Every card has a "stock" V/F curve defined in the BIOS, as was the case for Maxwell and Kepler, but this curve can now be directly modified from the OS. Offsets are still available, and can be set using the same NVAPI call as previous architectures. I have a feeling that this is for compatibility's sake as much as anything else. An important thing to note is that offsets are applied to the stock frequency curve, as was the case with Maxwell and Kepler, not the currently defined frequency curve. If an offset is applied to the card, the V/F curve will be reset to stock.
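The offset behaviour the guide describes is worth modelling, because it trips people up: an offset shifts the *stock* curve and silently discards any points you edited directly. The sketch below is an illustrative model only, not actual NVAPI code, and every voltage/clock value in it is invented for the example:

```python
# Illustrative model of Pascal V/F handling as the guide describes it:
# applying an offset resets any directly-edited curve to stock, then
# shifts the stock curve. Keys are voltages in mV, values clocks in MHz.
STOCK_CURVE = {800: 1544, 900: 1759, 1000: 1911, 1050: 1974}

def apply_offset(current_curve: dict, offset_mhz: int) -> dict:
    """Model the reset-then-offset behaviour: the current curve is
    deliberately ignored; the offset is applied to the stock curve."""
    return {mv: mhz + offset_mhz for mv, mhz in STOCK_CURVE.items()}

# Directly edit one point of the curve (as tools can do on Pascal)...
edited = dict(STOCK_CURVE)
edited[1000] = 2000
# ...then apply a +100 MHz offset: the direct edit is lost.
result = apply_offset(edited, 100)
print(result[1000])  # 2011 (stock 1911 + 100), not 2100
```

In other words, pick one mechanism per session: either edit the curve directly, or work with offsets, but don't expect the two to compose.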
Catch the full Pascal overclocking guide from niobium615 here on reddit.com. A big thanks to mickulty (UK) for bringing this to my attention.
This Thursday we revisit a day back in January 2013 when the guys at OverClocking-TV sat down to conduct an interview with a young up-and-coming overclocker going by the handle ‘der8auer’. Today of course Roman ‘der8auer’ Hartung is virtually a household name in extreme overclocking circles, however back in 2013 he was a University student looking to find his way into the tech industry. Let’s revisit his conversation with OC-TV:
der8auer - My name is Roman Hartung and I’m 23 years young. I live in a quite small town in the south of Germany, near Stuttgart. I’m a mechatronics student in the 4th semester and hope to get a job in the computer industry afterwards. I’m the team captain of the German HWBOT team PC Games Hardware and you can find me everywhere using the nickname “der8auer”, which is pronounced “der Bauer” and means “the Farmer”.
I started quite young as a typical online gamer and I always wanted to have the latest hardware, even though I didn’t have the money for it as a young student. So I was stuck for a long time with a cheap Athlon XP and GeForce 4 system until I could afford my first high-end system at the age of 16, using an AMD Athlon 64 4600+ and two 7800 GTX cards.
OC-TV - How did you discover overclocking?
der8auer - The first time I heard about it was in a German computer magazine about an overclocked Athlon XP 1700+.
OC-TV - How many years have you been overclocking?
der8auer - About 8 years. My original plan was to play Battlefield 2 on my new system, but then I came across 3DMark 2005. At first I just installed it to see the amazing graphics, but I became addicted to raising the Marks (3DMark’s point system). So I ended up overclocking my hardware to achieve a higher 3DMark score and also to have more FPS in Battlefield 2 when it came out.
Catch the full interview with Roman here at OverClocking-TV where he goes on to discuss his preferred cooling techniques, what he does when he’s not overclocking and more.
Despite having pushed out a new release of GPU-Z just a week or so ago, the guys at TechPowerUp have been busy chasing bugs and fixing imperfections, so much so that they have now launched v2.7.0 of the popular graphics card utility. The new version includes several important bug fixes and updates to the app’s internal modules. I’ll let btarunr explain in detail:
TechPowerUp today released the latest version of GPU-Z, the popular graphics subsystem information and diagnostic utility. Version 2.7.0 comes with a handful of important bug fixes and updates to its internal modules. To begin with, we've updated the NVFlash module that lets GPU-Z extract video BIOS from graphics cards, the newer NVFlash supports BIOS extraction from some of the newer NVIDIA graphics cards such as the GTX 1070 Ti. We've also fixed incorrect video memory amount reading on AMD Radeon RX Vega graphics cards. TMU and ROP counts, and OpenCL status on AMD "Polaris 21" GPUs is fixed, as is incorrect labeling of a memory clock sensor on NVIDIA GPUs. GPU-Z will no longer prevent system shutdowns and reboots on Windows 10 Fall Creators Update.
Here’s the full changelog for version 2.7.0:
Find the latest GPU-Z v2.7.0 utility from TechPowerUp here.
The second season of the Cheapaz Chips contest series on OC-ESPORTS just came to a head, with Greek overclocker OGS showing off his modding and GPU pushing skills to take the win. In fact OGS managed a maximum points haul after taking wins in all three stages of the contest, just ahead of Splave (US) and Chilli-Man (Australia), who took second and third spots respectively. Let’s take a look at the scores submitted and the modded graphics cards involved in a little more detail.
Cheapaz Chips Season 2: December 15th to January 20th, 2018
Before we get into the scoring and the overclockers involved, let’s first remind ourselves what the Cheapaz Chips contest is all about. Firstly, it’s crucial to note that the contest was proposed and created by the CCTF, the Community Competition Task Force at HWBOT (read more about the CCTF here).
The central idea of the Cheapaz Chips contest series is to encourage HWBOT members to put all of their ingenuity and passion into overclocking entry-level hardware components - parts that we wouldn’t cry over too much if we accidentally pushed them too hard and killed them. The series has also been designed to give overclockers a wonderful excuse to get down to some serious card modding. Graphics card modding can be a daunting task for anyone attempting it for the first time. It takes a steady and assured hand that knows how to solder with unerring accuracy, as well as detailed knowledge of exactly how the card and its power delivery design work. All of which means that you’re better off starting your modding adventure with a cheaper card.
Cheapaz Chips Season 2 kicked off in mid-December, ending on January 20th and featured stages suited to benching NVIDIA GT 1030 graphics cards, the subject matter of Season 2. All benchmarks were GPU-centric with CPUs limited to 5GHz. As an added incentive, GALAX were kind enough to contribute a GALAX GTX 1080 Ti HoF OC Lab Edition card for the winner. Let’s get stuck in to the details with Stage 1.
Check out the full roundup article regarding the Cheapaz Chips Season 2 contest here on OC-ESPORTS which also features a gallery featuring many of the modded graphics cards involved.
Believe it or not, there are folks out there who believe that multi-GPU setups are increasingly a thing of the past. Try telling that to Xtreme Addict (US) and k|ngp|n (US), two overclockers who have been proving that there’s still plenty of fun to be had with 4-way SLI systems. Both overclockers have broken World Records in the classic Catzilla 3D benchmark series using no less than four LN2-cooled GTX 1080 Ti cards. Let’s take a look at the scores and the rigs used:
Xtreme Addict kicked it all off at the weekend, uploading a bunch of impressive 3D scores including World Record breaking scores in Catzilla 4K and Catzilla 1440p. The new World Record score in the Catzilla 4K benchmark now stands at 47,495 marks. His GPUs of choice come from GALAX with four Nvidia Hall of Fame GeForce GTX 1080 Ti cards (featured in the pic on the left). These were pushed under LN2 with boost clocks raised to 1,973MHz, a boost of over +33%. The rig was based around an ASRock X299 OC Formula motherboard with the rig’s Intel Core i9 7980XE 'Skylake-X' processor cranked up to 5.6GHz, a massive overclock of +115% beyond stock. The same rig also managed to garner several silver and bronze cups in 3DMark Fire Strike Ultra, Fire Strike Extreme and Time Spy. It also managed to break the World Record in the Catzilla 1440p benchmark, hitting a score of 82,968 marks. However, the World Record belonged to XA for just a matter of hours before k|ngp|n arrived on the scene to snatch it back.
As you would expect k|ngp|n opted to use his very own EVGA Kingpin Edition Nvidia GTX 1080 Ti cards for his latest 4-way Pascal adventure. Just a matter of hours ago he posted a Catzilla 1440p score that edged past XA. The new World Record for Catzilla 1440p now stands at 85,280 marks thanks to the four GPUs being clocked at a pretty incredible 2,500MHz (+68.92%). Vince also used an Intel Core i9 7980XE, in his case pushed to 5.7GHz which is +119.23% beyond stock. The rig also used an EVGA X299 DARK motherboard. Interestingly with only two GTX 1080 Ti cards, he also managed to break the World Record in the Catzilla 720p benchmark which now stands at 117,566 marks. Nice going.
You can find the score submissions in the links above. You can also keep abreast of the action by checking out the Xtreme Addict and k|ngp|n profile pages. Indeed, there may well be more 4-way scoring to come.
Roman ‘der8auer’ Hartung and his colleagues at Caseking have come up with a new product that once again proves how dedicated they are to giving overclockers the gear we need to achieve absolute maximum performance. Just a few days ago they launched the ‘Skylake-X Direct Die Frame’, a patent-pending device that allows for direct-die mounting on all Socket 2066 motherboards. It essentially replaces the Independent Loading Mechanism (ILM) on your motherboard, allowing users to run a delidded CPU without the heat spreader. Here are the details:
The outer edge of the der8auer Skylake-X Direct Die Frame sits a mere 0.1 mm below the silicon chip itself, effectively preventing any unwanted tilting of the CPU cooler and protecting against damage. Furthermore, the black anodized coating insulates the aluminium of the Direct Die Frame so that it is no longer electrically conductive. As a result the SK-X DDF can be seated safely and securely against the contact area of the CPU.
Installation of the SK-X DDF requires that the Intel socket retention module first be removed; the bundled back plate is then attached to the reverse of the motherboard with adhesive pads, the CPU inserted, and the frame secured by means of four screws. The SK-X DDF is manufactured to extremely tight tolerances to ensure an equal distribution of downward force. This helps to maintain optimum contact between the motherboard and CPU while ensuring all devices, such as PCIe cards or RAM modules, are recognized correctly.
The Skylake-X Direct Die Frame is available now at Caseking for €69.90. You can find more information here on the Caseking website. You can also check out this video from Roman which covers his new creation and its ‘German Engineering Perfection’ in all its glory.