In general, efficiency ratings and rankings are a good way to keep overclockers busy, and many people therefore argue that HWBOT should award points for the best efficiency. In this short editorial I will analyze the efficiency rating mechanism and raise a few points of criticism. These points might be interesting for readers, as they hold at least part of the answer to why HWBOT does not award points based on this criterion. After all, as previously described, we do see efficiency ratings as a way to rank scores and overclockers by skill.
Introduction to efficiency ratings
Recently, I have spent my free time overclocking Clarkdale-based products. Not specifically to reach new heights in HWBOT’s Overclockers League, but rather to give myself a different challenge than just high-end hwboint-benching. I posted my exploits on two forums, the local forum here at HWBOT and the forum over at Xtremesystems, describing my personal challenge to reach the best possible scores within the limitations I had set for myself. Although the concept of this overclocking exercise was to see what score I could reach using single-stage cooling, the thread quickly turned into a discussion of how ‘efficient’ my scores were.
It has to be said, SuperPI 32M is not an ordinary benchmark. Next to 3DMark01, it is treasured by hardcore overclockers as a way to display overclocking ability at the most individual level: whoever manages the highest efficiency is the best overclocker. The reason behind this fixation is quite simple: unlike many other benchmarks, both SuperPI 32M and 3DMark01 are highly sensitive to overall platform tweaks. Raw power alone does not make the difference; optimizing the operating system and memory subsystem is vital for producing the best possible results. On an off-topic side note, rumor has it that 3DMark05 might just be the next 3DMark01.
In any case, due to the tweakable nature of the benchmark, overclockers have come up with the concept of low-clock challenges. These are designed to bring forward the most talented overclockers: they eliminate CPU frequency, the key variable in the SuperPI result equation, from the playing field, leaving OS and platform optimizations as the main differentiator between overclockers. I would argue, however, that most of the time these low-clock challenges are just for fun and hardly fueled by competitive spirit. Perhaps because almost no one dares to explicitly claim the title of ‘most efficient overclocker’, as that would make things competitive; or perhaps because, as long as we keep it quiet, we can trick ourselves into believing we are still top-class overclockers even in the absence of world-record-breaking results. One thing has to be said in favor of the low-clock challenges: they allow less fortunate overclockers, both those who lack the financial means to purchase high-end components and those who have the bad luck of ending up with poorly overclocking samples, to show off their tweaking abilities.
A few points of criticism
One of the main arguments in favor of efficiency-based rankings is that they allow literally anyone to compete and compare overclocking results. A valid argument: efficiency rankings take CPU frequency, the most dominant factor in most benchmark applications, out of the equation and allow people with bad hardware to compete against people with very good hardware. The problem, however, is that this merely shifts the requirement. Instead of a good CPU, you now need … good memory. Memory capable of running high frequencies combined with tight (sub-)timings does not come cheap; you need a high-end memory kit if you want to compete seriously in these low-clock challenges. In addition, the need for a good CPU is not completely removed, as the memory controller is also part of the efficiency equation.
It is also common knowledge that, using the current formulas, the lower the CPU frequency, the easier it is to reach an optimal efficiency. This can be explained by the non-linear scaling characteristic of most benchmark applications. When comparing the relation between result and CPU frequency, you will notice that at lower frequencies the effect of an increase in frequency on the end result is greater than at frequencies that are already high. In practical terms this means you can easily ‘fiddle’ with the efficiency simply by lowering the CPU frequency.
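To illustrate this, here is a minimal sketch in Python. The model and all constants are made up for illustration, and this is not the actual formula used anywhere: benchmark time is modeled as a frequency-dependent part plus a fixed overhead for memory- and OS-bound work, so any ‘per MHz’ style efficiency metric automatically improves as you clock the CPU down.

```python
# Toy model of SuperPI-like scaling: time(f) = work/f + overhead.
# 'work' and 'overhead' are invented constants chosen only so the
# numbers land in a plausible 32M range.

def bench_time(freq_mhz, work=1.2e6, overhead=100.0):
    """Simulated benchmark time in seconds at a given CPU frequency."""
    return work / freq_mhz + overhead

def naive_efficiency(freq_mhz):
    """A naive 'result per MHz' efficiency metric (higher = 'better').

    Because the overhead term makes time * frequency grow with
    frequency, this metric rewards running the CPU slower.
    """
    return 1.0 / (bench_time(freq_mhz) * freq_mhz)

for f in (2000, 3000, 4000, 5000):
    print(f"{f} MHz: {bench_time(f):6.1f} s, "
          f"efficiency {naive_efficiency(f):.3e}")
```

Running the loop rates the 2000 MHz configuration as the ‘most efficient’ even though it produces by far the slowest time, which is exactly the fiddling described above.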
A good way to work around this problem is to define efficiency rankings based on a fixed CPU frequency (e.g. a maximum of 4 GHz) or a fixed end result (e.g. 7 min flat). This forces people to run within a specific frequency range and presents the efficiency rating in an easier-to-understand ‘lower CPU frequency is better’ format.
Obviously, you can improve the formula’s accuracy in measuring efficiency by adding more variables to the equation. Currently, the ‘PP-formula’ (introduced to the community on 13 December 2006) consists of just two variables: core frequency and result. With the help of correlative research to determine the weight of each variable (QPI, memory, IMC, …), you can build a formula that describes efficiency far more accurately. It also takes many money-driven variables out of the picture: for instance, a high memory frequency that can only be obtained with expensive memory now becomes part of the efficiency formula. Sadly, the formula will never be perfect, as some aspects are hard to take into account. A prime example is memory subtimings, which are difficult to include in an equation but vital to an efficient result.
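As a sketch of what such a multi-variable formula could look like — the weights and the reference constant below are entirely invented placeholders, which in practice would come from the correlative research mentioned above:

```python
def weighted_efficiency(result_s, core_mhz, mem_mhz, qpi_mhz,
                        w_core=1.0, w_mem=0.35, w_qpi=0.15,
                        k=2.0e6):
    """Hypothetical multi-variable efficiency score.

    Predicts a benchmark time from a weighted sum of the clocks and
    compares it to the actual result: above 1.0 means the run beat
    the model's expectation. All constants are placeholders.
    """
    expected_s = k / (w_core * core_mhz + w_mem * mem_mhz + w_qpi * qpi_mhz)
    return expected_s / result_s

# Two runs with identical times: the run relying on pricier,
# higher-clocked memory is now rated as less efficient.
run_a = weighted_efficiency(420.0, core_mhz=4000, mem_mhz=800, qpi_mhz=3200)
run_b = weighted_efficiency(420.0, core_mhz=4000, mem_mhz=1100, qpi_mhz=3200)
print(run_a > run_b)  # → True
```

The point of the comparison at the bottom is the money argument: once memory frequency carries weight in the formula, simply buying faster memory no longer buys you a better efficiency rating.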
As a second point, an efficiency rating is only worth as much as the trust between provider and perceiver. No one can be certain that a score was run at the frequency shown in the screenshot, which is taken after the benchmark has completed, or whether the person downclocked the system before making the screenshot. The efficiency rating is based on the information in the screenshot; ‘improving’ the efficiency can therefore be done simply by reducing the perceived operating frequency.
The last point I want to bring up is perhaps the most important one: being efficient does not always mean you did a good job. It is one thing to optimize the platform to be strong clock-for-clock, but that is merely one step in the entire process of producing the best score possible with a given setup. ‘Crippling’ the system so much that you lose 100 MHz on the CPU but gain on the memory subsystem might produce a very efficient result, but in the end you did not do very well, because you did not squeeze the best result out of your system. In addition, the more accurately the efficiency rating is tuned, the more it also excludes the skill required to obtain high frequencies, which goes against the idea of increasing the value of skill.
The above arguments clearly show that using an efficiency rating to award points is not a very practical idea. A concept perhaps worth contemplating is the so-called primus inter pares idea, where points are increased based on your position relative to similar systems. This concept builds on the ‘find similar result’ feature proposed in the ‘Features & suggestions‘ subforum. Instead of using the entire range of overclocking results to determine efficiency, we can use a limited ranking based on similar results. And instead of awarding absolute point increases for the most efficient results, we can use a factor calculated from the position within the subgroup of similar results, which would serve as an error-correcting mechanism to give the ‘efficient’ scores an extra boost. Perhaps this would require upper and lower limits.
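A minimal sketch of how such a position-based factor could work — the factor range and the linear interpolation are my own assumptions for illustration, not an HWBOT proposal:

```python
def boosted_points(base_points, rank_in_group, group_size,
                   min_factor=1.00, max_factor=1.10):
    """Scale base points by position within a subgroup of similar results.

    rank_in_group is 1 for the most efficient score in the subgroup.
    The factor interpolates linearly from max_factor (rank 1, the
    'primus inter pares') down to min_factor (last place), so the
    upper and lower limits are built into the mechanism. The 1.00
    and 1.10 bounds are placeholder values.
    """
    if group_size < 2:
        return base_points  # nothing to compare against
    position = (group_size - rank_in_group) / (group_size - 1)  # 1.0 = best
    return base_points * (min_factor + position * (max_factor - min_factor))

print(boosted_points(100.0, rank_in_group=1, group_size=11))   # top of group
print(boosted_points(100.0, rank_in_group=11, group_size=11))  # bottom of group
```

Because the boost is a relative factor within a subgroup of comparable systems rather than an absolute reward, a run only gains points by beating its actual peers, which is the error-correcting behavior described above.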
Feel free to share your opinion on this topic in the HWBOT forums!