Are We Hearing the Whole Story in the Samsung Benchmark Issue?
About a week ago, we highlighted AnandTech's news item on Samsung enabling higher performance levels for certain applications on the global edition of the Galaxy S4. The set of applications includes a couple of popular Android benchmarks, such as AnTuTu and Quadrant. With the mobile world as hot as it is these days, AnandTech's article triggered a lot of responses, amongst them this editorial over at Decryptedtech.com by Sean Kalinich. In it, the author somewhat defends Samsung's position, arguing that the optimisations are part of a general performance-profile customisation rather than a means to purely inflate benchmark results.
While the author may be correct that the word "cheating" is a little over the top, he does conclude by pointing the finger at Samsung for its lack of transparency. Furthermore, he points out more general flaws in how the media covers this industry.
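To make the mechanism under discussion concrete: the reported behaviour boils down to a whitelist keyed on application identity rather than on workload. Below is a minimal sketch of how such an app-name-triggered profile could look; the class, profile names, and package strings are illustrative assumptions on our part, not Samsung's actual code.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: a hypothetical policy that grants a boosted
// performance profile to a hardcoded list of package names. This is not
// Samsung's actual implementation; all names here are assumptions.
public class BoostPolicy {

    // Hypothetical whitelist of applications that receive the boosted profile.
    private static final List<String> BOOSTED_PACKAGES = Arrays.asList(
            "com.antutu.ABenchMark",                       // AnTuTu (illustrative)
            "com.aurorasoftworks.quadrant.ui.professional" // Quadrant (illustrative)
    );

    public enum Profile { DEFAULT, MAX_PERFORMANCE }

    // Returns the performance profile to apply for the current foreground app.
    public Profile profileFor(String foregroundPackage) {
        if (BOOSTED_PACKAGES.contains(foregroundPackage)) {
            // Whitelisted app: pin CPU/GPU to their maximum operating points.
            return Profile.MAX_PERFORMANCE;
        }
        // Everything else runs under the normal thermally managed profile.
        return Profile.DEFAULT;
    }
}
```

The contentious part is precisely that the trigger is the application's name, not the nature of its workload, which is what the quoted editorial goes on to address.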
Samsung screwed up by enabling what looks like an extension for a profile that already existed in Google’s Android. Regardless of the fact that it does allow other apps to perform better, they should not have included benchmarks and tests by name. They should not have done this without some sort of explanation or disclaimer. There are a ton of developer features in Android (and in Samsung’s flavor), so why not point to those when running a benchmark and ask that developers put this information in an FAQ? This would allow someone to see performance in “stock” mode and then in a “benchmark” mode. One gives you the basic performance profile and the other allows the device to push its max limits. It is a simple solution that should have been thought of and allowed for.
Benchmark developers for mobile are also to blame here, as their code is not suited to the typical power and performance profiles on most phones. AnTuTu has already been shown not to have properly optimized code for all platforms, so there is a big possibility that other benchmarks are inefficient in this regard as well. It is possible that certain optimizations are needed to allow benchmarks to perform properly. It is not an easy task to develop code that works for all hardware builds and Android versions. We know that manufacturers have to spend a considerable amount of time reworking their OS offerings whenever Google pushes out an Android update, so why would we expect an app that is written for stock Google to work the same across all builds?
Lastly, there is the industry and tech press. This group is possibly more to blame than the others, as they have helped create the environment that allows for this behavior. The push to get content out as quickly as possible means that corners are sometimes cut and not all of the information is fully investigated. Reviews have also become shorter and less detailed. The focus is on synthetic and scripted tests. These can be performed without much user interaction, so the user experience has been removed from the process. There is very little attempt to do real-world testing as reviewers rush product in and out of the lab. This is further fueled by companies that have deadlines for products to be covered and published. They also know the value of the news cycle and want their product covered now for maximum impact. These two groups have been the driving force behind the change in the way the industry and products are covered. Instead of thorough and fair coverage, it is now about scandal and being the first publication to publish.
In the end, it comes down to a single question: "Are performance enhancements triggered for specific applications, designed to measure and compare performance, foul play?" I believe it is foul play, and with Samsung targeting specific benchmarks, they are at fault. What do you think?