[Video] Buildzoid Explores and Discusses the Nvidia Pascal Problem

Buildzoid has recently been dabbling with an Nvidia GeForce GTX 1070, trying his hand at overclocking a GPU architecture that he freely admits is fairly new to him (let’s be honest, he usually prefers to push AMD cards). His recent adventures have involved performing a few hardware modifications to the card in an effort to get it to reach higher clocks and better scores. The upshot of all his hard work is that the card is now actually performing worse than at stock settings. How is this possible? Buildzoid reveals all in a video entitled ‘The Pascal Problem’.

He kicks off by showing us what’s going on with the GTX 1070, starting with a GPU-Z readout which indicates that the GPU has been pushed to a boost clock of 2,075MHz – a 37.7% increase over stock. A boost like that, you would imagine, should yield an improved score in 3DMark Time Spy. Sadly, that is not the case. Running Time Spy at this configuration, we can see straight away that the fps readings are pretty low – around 25-30 fps throughout. Not the kind of performance you would expect. When he returns the card to default settings and runs the benchmark again, it achieves an average of 37.5 fps. Welcome to the Pascal problem.

Buildzoid goes on to discuss his theories about what is actually going on. He proposes that the GPU has a built-in mechanism which reacts to higher voltages and power draw, reducing the card’s performance while monitoring software still reports the higher frequencies. In short, the hardware is doing something at higher voltages that software is ultimately unable to detect.

As always you can catch the Pascal Problem video from Buildzoid here on his Actually Hardcore Overclocking YouTube channel. If anyone has something to divulge regarding Pascal overclocking, feel free to chime in with your comments.



Taiwan sdougal says:

Anyone have any advice or learnings regarding the Pascal problem?

Australia jaffers says:

From what I gathered, you see substantial gains from raising the minimum core clock via the frequency curve in MSI Afterburner. I gained large improvements in my score just by raising this, and I didn't even touch the slider.

As Buildzoid was explaining in one of his livestreams, the GPU boost algorithm can update many times faster than software can report, i.e. the GPU is bouncing around between the max boost and a lower clock. And from what I see results-wise, heavy computation is done at a lower frequency (I would assume to prevent crashing) while lighter workloads are done at a higher frequency. The GPU will still report in software that it's doing 2GHz, for example, but will actually be running anywhere from 1.6 to 1.8GHz when under heavy load.
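To make that concrete, here is a toy Python sketch of the effect jaffers describes. It is not Nvidia's actual boost algorithm – the clock values, the `heavy_load_fraction` parameter, and the assumption that polling tools tend to catch the GPU between heavy bursts are all illustrative assumptions for the sake of the example:

```python
# Toy model: the boost algorithm switches clocks far faster than
# monitoring software polls, so the polled "reported" clock can sit
# well above the clock the GPU actually averages under load.
# All figures are illustrative, not real Pascal behaviour.

BOOST_CLOCK = 2000  # MHz - peak clock a tool like GPU-Z would show (assumed)
LOAD_CLOCK = 1700   # MHz - clock during heavy computation (assumed)

def effective_clock(heavy_load_fraction):
    """Time-weighted average clock when the GPU drops to LOAD_CLOCK
    for heavy work and boosts back up between bursts."""
    return (heavy_load_fraction * LOAD_CLOCK
            + (1 - heavy_load_fraction) * BOOST_CLOCK)

def polled_clock():
    """Slow polling tends to sample between heavy bursts, catching
    the GPU at its boost clock rather than its working clock."""
    return BOOST_CLOCK

# A benchmark spends ~90% of its frame time on heavy computation:
print(polled_clock())        # 2000 - what software reports
print(effective_clock(0.9))  # 1730.0 - what the GPU actually averages
```

The gap between those two numbers is the "Pascal problem" in miniature: the reported clock stays pinned high while the effective clock – and therefore the benchmark score – falls.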

Although it's more for ambient OC, the frequency curve can also help with temperature stability (when the GPU changes frequency at certain temps due to the boost algorithm). I managed to stop it crashing when it hits 53°C, for example, by dropping the frequency curve in the middle of the graph (there isn't any indication of where the temperature points are – you'd have to play around with the graph). It gained me another 100 points in a few benchmarks.

I've yet to test this under LN2, although I'm pretty certain it won't matter as much, and you could probably just raise the base clock to a flat line, as I don't think the boost algorithm does much to the clock at really low temps.

TL;DR Use the MSI Afterburner frequency curve for more control over the boost algorithm and clocks.
