When did 8/10 become a bad score?

My apologies for posting a gaming-related article, but the content of the editorial is as relevant for the world of gaming as it is for the world of hardware. It's no secret I'm not too happy with the way certain so-called "hardware review sites" are dealing with awards (see the "[RANT] How to make sure your awards/reviews are totally meaningless: Hardwareheaven.net" thread), and it's clear similar thoughts come up elsewhere too.

Interesting read!

The first rule laid down to any new writer looking to publish a piece of work online should be this: do not take some of the comments to heart. It’s a briefing that everyone who writes regularly for the Internet has to go through.

That’s no slight against commenters, rather against the odd one or two who, er, take things to a more extreme level. Most comments, particularly those related to video games, are of the harmless variety. Some are even revelatory, containing unknown embellishments that spark off meaningful debate among the readership. This is a very good thing.

Yet, for as wonderful and varied as article comments are, it’s the negative ones which catch the eye. One or two aggressive disagreements can quickly turn a peaceful feed into a raging flame war, and in the latter half of 2011 one special flavour of murderous rage began to draw particular attention.

...

Newshound Pat Garratt believes that Metacritic itself is partially to blame for inflated review scores. He argues that a “need for very high Metacritic marks has led to a culture where games that carry sub-9 scores are no longer seen as true hits.” It’s a badly kept secret that big development studios reward their staff with bonuses for high ratings on Metacritic. This ethos filters down to PR representatives, who increase the pressure on reviewers to overstate their scores.

I’m inclined to agree with Pat. Most gaming outlets operate in a symbiotic relationship with publishers. Journalists rely on them for access to preview content and review discs, while publishers depend on the reliable marketing push a positive review will garner. This back-and-forth has led to a culture in which it is considered de rigueur to award good games a nine or ten.



Belgium Massman says:

What bothers me is the following:

It’s a badly kept secret that big development studios reward their staff with bonuses for high ratings on Metacritic. This ethos filters down to PR representatives, who increase the pressure on reviewers to overstate their scores.


I think this kind of bonus is not the right way to reward people, certainly not if there is a connection between the person who writes the review and the person who sends out the samples. Actually, from now on I'll ask whether or not the bonus depends on the review and if it does, I'll add a bogus 10/10 to each tested product just to screw with whoever made that rule.

If a marketer trusts me to test a product, I don't want MY article to lower their bonus at the end of the year :).

United Kingdom borandi says:

A lot of review companies are measured in terms of award performance. The more awards (even trivial awards, like 'OMG It's a Product Award') they get, the better these people do internally. This also puts pressure on them to get more awards the next year, then the next year - it's a vicious cycle.

But ultimately it's a struggle of advertising vs. exclusivity. You want to get the exclusive review to get the hits in, so you (may) pander to the requirements of a company so you are on their whitelist. But as a result, you may be slammed for giving everything a pseudo-meaningless score 'just to get the hits and advertising in'. What was the website you complained about giving an award to anything and everything under the sun a while back? You can't argue both ways dude :)

Belgium Massman says:

I'm not following the "you can't argue both ways" comment. Was it directed to me or in general?

Norway knopflerbruce says:

I kinda agree that the awards are quite useless, BUT most products are pretty good actually. If you want to give awards to... say... midrange memory kits (1600MHz high latency), how do you do that? Editors choice or 10/10 because the timings are 9-10-9 rather than 10-10-10? OOOH INSANE PERFORMANCE BOOST:D Or you give it because the heatspreaders look cool? In the end you have only two choices, nearly no awards, or a bunch of them - since the products are so similar. Same thing with SSDs, too. Mobos as well, although there are more ways to screw up a mobo than a stick of memory (layout for example).

United Kingdom borandi says:

Massman said: Their point-scoring methodology is pretty much useless (giving almost everything 9/10 or higher) and so are their awards. I sincerely hope no hardware vendors are bragging with those awards ... they just give them away for free it seems.


Massman said: Actually, from now on I'll ask whether or not the bonus depends on the review and if it does, I'll add a bogus 10/10 to each tested product just to screw with whoever made that rule.


Even if you disagree, to someone reading your words these two statements come across as connected and contradictory (even if you insist they are not). Let's equate scores and awards for a moment, as they will both be used by PR.

Who is to say HH don't take the same philosophy you describe in the second quote, while you, as a reader, react with the first quote? If you give something a 'bogus 10/10' to ensure your PR mate doesn't lose any bonus, you're misleading those who read your reviews if it is a bad product. Then you come into disrepute for giving bogus 10/10s across your review range.

United Kingdom borandi says:

knopflerbruce said: I kinda agree that the awards are quite useless, BUT most products are pretty good actually. If you want to give awards to... say... midrange memory kits (1600MHz high latency), how do you do that? Editors choice or 10/10 because the timings are 9-10-9 rather than 10-10-10? OOOH INSANE PERFORMANCE BOOST:D Or you give it because the heatspreaders look cool? In the end you have only two choices, nearly no awards, or a bunch of them - since the products are so similar. Same thing with SSDs, too. Mobos as well, although there are more ways to screw up a mobo than a stick of memory (layout for example).


Memory is always a sketchy area to review. If it works, it works. If it doesn't, it doesn't. There's also a price/warranty element. Every product could be rated out of two - a good price gives 1, and if it works as expected, give it another 1. If it's exceptionally cheap, perhaps another point or an award. If you want to add overclocking in there, or style, you could be considered nitpicking if you then mark it out of four. So then you could mark it out of 10 - price out of 4, use out of 4 (for memory, that's almost always a 4), style 1, overclocking 1. That means most kits will get a score of 4-9. Is that a fair representation?
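To put rough numbers on the rubric described above, here is a minimal sketch in Python. The function name, weights and example sub-scores are illustrative assumptions based on the post, not any site's actual methodology.

```python
# Illustrative sketch of the 10-point rubric above: price out of 4, use out of
# 4, style and overclocking worth 1 point each. Weights are assumptions.

def review_score(price: int, use: int, style: int, overclocking: int) -> int:
    """Combine the four sub-scores into a score out of 10."""
    assert 0 <= price <= 4 and 0 <= use <= 4
    assert 0 <= style <= 1 and 0 <= overclocking <= 1
    return price + use + style + overclocking

# A typical memory kit: fair price, works as rated, plain looks, no OC headroom.
print(review_score(price=2, use=4, style=1, overclocking=0))  # -> 7
# Because "works as expected" is nearly always 4/4 for memory, most kits end up
# in the 4-9 band mentioned above.
```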

Belgium Massman says:

Not if you clearly specify that the bogus award is there to help your contact get the bonus (s)he deserves regardless of the product quality. I couldn't care less what the people within the company do with the 10/10; the point is that a marketer should not be evaluated based on awards or scores. The whole point of a review is to evaluate the product, not the person who provided you with the product ... If HH do indeed apply this philosophy, they are lying to their readers, as in all their reviews they clearly link the score/award to the product evaluation.

Belgium Massman says:


//edit: Better yet (if they are evaluated based on avg score) ...

United Kingdom borandi says:

The Obligatory Award™, for all your PR needs.

United States sin0822 says:

Here is how I see what you are talking about, Massman: the scoring system has changed. When was the last time you saw a 6/10 or below? Not often, right? You barely ever see lower than a 7; I haven't seen one yet, and that might be the issue. So people automatically assume the lowest score given out will be a 6. An 8 thus becomes a not-so-great score, while a 9 for some reason jumps over a huge margin to "great product". Hardware Heaven, however, has devised its own little method whereby people who actually know the product and the industry don't read their reviews, or don't read more than one because the review was so misinformed they couldn't take any more. That then allows HH to give whatever scores they want because no one holds them accountable, and manufacturers don't mind the awards.

Below is a response from HH to an inquiry I made about their ASRock X58 Extreme 6 review (I wrote this back before I ever wrote my X58A-UD5 rev 2.0 review). My issues started with them saying it performed and overclocked better than the X58A-UD9, when the only things the reviewer told me the UD9 had over the Extreme 6 were 4-way SLI support and the 2oz copper PCB. I'm sorry, but I also bitched at them for saying the Marvell SE9128 was a top-notch SATA 6Gb/s controller.

This is the response I got:



If you don't want to read it all, here are some highlights:
"As well as matching and exceeding performance the ASRock version also ran cooler than the default config on the UD9 and overclocked higher"
"In July I tested the majority of high end boards based on X58. The Gigabyte UD9 was the best performer in that roundup (which included the best boards from Asus and MSI also). Using that same board to compare to the ASRock (and retesting it with the latest BIOS/drivers) we see in every test that the ASRock version matches or exceeds the UD9 in performance."
" This means that the only real area where the board is lacking when compared to the UD9 (which scored 10/10/8/10) was the lack of extra copper in the PCB and the ability to run 4way (which was noted in the article). This resulted in the lower build score for ASRock."
"I doubt you will find a motherboard review out there which goes as far as to test real world performance, which matters to consumers, and looks at aspects such as SATA3 (with RAID) and USB3 with appropriate components then follows it up with tests such as 3 screen gaming in CrossfireX and 3-way SLI … tests which fully cover the potential of the product in question."

I mean, "goes as far as to test real world performance"? There are many sites that do that and give fair scores. IMO you recommend the Marvell SE9128/3 and don't even realize that Intel's SATA 3Gb/s gives better 4K random speeds, which is more real-world than sequential. Pissed me off. Worst is when he said it OCed better than the UD9; maybe HH should come own the HWBot rankings for X58 then.

Anyways guys, according to most review sites we have zero idea about which boards OC higher and perform better.

But it's sad. IMO, 8/10 and 9/10 in my mind have always referred, and always will refer, to grades: 80%+ is a B, 90%+ is an A. A B is what you don't want to go under, but it's perfectly acceptable depending on the price. Now, however, it seems the scale starts at 7 and ends at 10 (see the small rescaling sketch at the end of this post).

BTW, the original HH review of the ASRock X58 Extreme6: http://www.hardwareheaven.com/reviews/1041/pg1/asrock-x58-extreme6-and-intel-core-i7-970-review-introduction.html
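As an aside, the "scale starts at 7" complaint above can be made concrete with a tiny rescaling sketch: if published scores only ever span 7-10, the real information is the position inside that band. The lower bound of 7 is taken from the posts above purely as an illustrative assumption.

```python
# Rough sketch: map scores from a compressed 7-10 band back onto a full 0-10
# scale to see what they actually communicate. The floor of 7 is an assumption
# taken from the discussion above, not measured data.

def rescale(score: float, low: float = 7.0, high: float = 10.0) -> float:
    """Linearly map a score from [low, high] onto [0, 10]."""
    return (score - low) / (high - low) * 10.0

for s in (7, 8, 9, 10):
    print(f"published {s}/10 -> effectively {rescale(s):.1f}/10")
# On a compressed scale an 8/10 carries roughly the information of a 3/10,
# which is why it now reads as "not so great".
```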

Canada Vinster says:

For reviews that have a score out of 10, 5 should be the score if the product tested did everything it set out to do... like memory... if it's rated at 1600MHz with 10-10-10-30 timings and it did just that, then it gets a 5/10, as it did what was expected... The score would go up if it performed at settings that exceeded its rated settings: get a higher memory speed or tighter timings while staying stable, and the score scales up accordingly (a rough sketch of this idea follows this post).

For MBs, same thing: if you want to compare two manufacturers, then it should be broken down to individual features, specifically ensuring that the conditions are identical, and then look at how simply each manufacturer brought that feature to the user... if it has a feature that doesn't work well or is flawed, then it loses points.

I don't understand why reviewers couldn't function under simple guidelines. It's almost like there should be a committee on testing regulations, ensuring everyone does things under the same guidelines so the reviews aren't bullshit.
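Below is a minimal sketch of the "meets spec = 5" idea from the post above. How much stable headroom is worth how many bonus points is an assumption added here only to make the scheme concrete, not a proposed standard.

```python
# Illustrative sketch of the baseline-5 scheme: a kit that only meets its rated
# spec earns 5/10; extra points come from stable headroom beyond the spec.
# The headroom-to-points scaling below is an assumption for illustration.

def memory_score(rated_mhz: int, stable_mhz: int, meets_spec: bool) -> float:
    """Score a memory kit out of 10, with 5 as the 'did exactly its job' baseline."""
    if not meets_spec:
        return 2.0  # failed to run at its rated settings
    headroom = max(stable_mhz - rated_mhz, 0) / rated_mhz  # fractional stable OC headroom
    bonus = min(headroom * 25, 5.0)  # e.g. a +20% stable OC earns the full 5 extra points
    return 5.0 + bonus

print(memory_score(rated_mhz=1600, stable_mhz=1600, meets_spec=True))  # 5.0 - did what was expected
print(memory_score(rated_mhz=1600, stable_mhz=1866, meets_spec=True))  # ~9.2 - big stable headroom
```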

United States sin0822 says:

Vinster said: For reviews that have a score out of 10, 5 should be the score if the product tested did everything it set out to do... like memory... if it's rated at 1600MHz with 10-10-10-30 timings and it did just that, then it gets a 5/10, as it did what was expected... The score would go up if it performed at settings that exceeded its rated settings: get a higher memory speed or tighter timings while staying stable, and the score scales up accordingly.

For MBs, same thing: if you want to compare two manufacturers, then it should be broken down to individual features, specifically ensuring that the conditions are identical, and then look at how simply each manufacturer brought that feature to the user... if it has a feature that doesn't work well or is flawed, then it loses points.

I don't understand why reviewers couldn't function under simple guidelines. It's almost like there should be a committee on testing regulations, ensuring everyone does things under the same guidelines so the reviews aren't bullshit.


5/10 means it sucked; that's the general perception. So if you do everything you should in a class, you get a 50%?

K404 says:

I see Vinster's point. If a product has a modest spec... and does its spec... and no more... and the price is normal... and the packaging is simple... there is nothing wrong with the product, and maybe it does PRECISELY what the company wants it to do... why give it 10/10? Are we supposed to give a company a standing ovation for shipping a product that passed internal QC and is not DOA? The product that has the option of performing miles beyond its spec, and is way cheaper than people expect, and is well packaged with a great warranty... THAT gets 10/10. There should be clear distinctions between "meh, whatever" stuff and "this is awesome, wrapped in fantastic" stuff.

United States sin0822 says:

I think, K404, what you described should get an 8/10 or 8.5/10, as most products aren't perfect; every product has issues and bugs.

Canada Vinster says:

5/10 has become a crappy score because too many idiots give 9/10 to products that do exactly what they were intended to do... How does a product get a score above average when average is all it is? As consumers, we've allowed this practice to become acceptable. I barely read reviews any more, solely because of the idiocy that has become the norm; I don't agree with many of the methods that are now considered acceptable...
