NVIDIA Kepler To Do Away with Hotclocks

Hey, Nvidia, stop messing around and just give us control over the shader clock frequency again. WE will decide if the card needs to run 1x, 2x or anything in between ... *wink*

Since the days of its very first DirectX 10 GPUs, NVIDIA has been using separate clock domains for the shaders and the rest of the GPU (the geometry domain). Over the past few generations, the shader clock has been set at 2x the geometry domain. 3DCenter.org has learned that with the next-generation "Kepler" family of GPUs, NVIDIA will do away with this "Hotclock" principle: the heavy number-crunching parts of the GPU, the CUDA cores, will run at the same clock speed as the rest of the chip.

3DCenter also reports that NVIDIA will run higher core clocks overall. The GK104, for example, is expected to be clocked "well above 1 GHz", yielding compute power "clearly over 2 TFLOPs" (3DCenter's words). It looks like NVIDIA, too, has some significant architectural changes up its sleeve with Kepler.
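As a back-of-the-envelope check on that TFLOPs figure, peak single-precision throughput is roughly cores x clock x 2 (one fused multiply-add, i.e. two operations, per core per cycle). A minimal sketch, assuming rumored numbers only: the 1024-core count and 1.05 GHz clock used for GK104 below are illustrative guesses, not confirmed specs; the GTX 580 figures are its known shipping clocks.

```python
def sp_gflops(cuda_cores: int, clock_ghz: float) -> float:
    """Peak single-precision GFLOPS, assuming one fused
    multiply-add (2 ops) per CUDA core per cycle."""
    return cuda_cores * clock_ghz * 2

# GTX 580 (GF110) for reference: 512 cores at the 1544 MHz hotclock.
print(sp_gflops(512, 1.544))   # ~1581 GFLOPS

# Hypothetical GK104 with a unified clock "well above 1 GHz":
# 1024 cores at 1.05 GHz would already clear 2 TFLOPs.
print(sp_gflops(1024, 1.05))   # ~2150 GFLOPS
```

Note that dropping the hotclock halves per-core throughput at a given base clock, which is exactly why the rumored core counts and clocks have to rise to keep the FLOPs number growing.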



Belgium Massman says:

No need for Nvidia to tell us how to run the shader clock frequency. We managed to figure it out on all the previous GeForce generations; I think we'll manage just fine on the next one ... :D

United States BenchZowner says:

Higher clocks ?
My source begs to differ, try lower, maybe much lower than the last 2 gens.
Long time to wait for the 780 though, Computex is so so far away :(

Belgium Massman says:

What source?

Sweden ME4ME says:

his ass :D

South Africa DrWeez says:

me4me said: his Ass :d


Haha! :D

South Africa Vivi says:

loool

United States Hondacity says:

:rofl:

"keps" sounds delicious

i can't wait for the lightning series :)

United States BenchZowner says:

Massman said: What source?


BSzila.org

Somebody who works for somebody who knows somebody.
Maybe some documentation from a hacked ftp :p
Who knows.
Do I have to tell ? :p

Take it or leave it :)

United Kingdom borandi says:

Let's see... Cores increasing from 512 to 1024 = an optimum 2x perf increase. If core speed goes up ~30%, that's another 30% increase. Perhaps some more IPC tweaks? 5-15%, maybe? 2.00 x 1.3 x 1.1 = 2.86x increase.

We've all seen 'leaks' (http://greenfacetech.com/show.php?id=10) suggesting a ~2x increase, so I'm more inclined to go with BenchZowner here: the core speed goes down, which cancels out the IPC tweaks, leaving a pure core-to-core perf increase. Until we all start bumping some clocks, that is.

Also, clock speed down = power down = trying to fit in a power envelope. Just sayin'
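borandi's multipliers above chain together as a quick calculation (every factor here is his speculation, not a confirmed spec):

```python
# Speculative Kepler-vs-Fermi scaling estimate, per borandi's post.
core_scaling  = 1024 / 512   # 2.0x from doubling the CUDA core count (rumor)
clock_scaling = 1.30         # ~30% clock bump (speculative)
ipc_scaling   = 1.10         # IPC tweaks, roughly mid-range of his 5-15% guess

estimate = core_scaling * clock_scaling * ipc_scaling
print(round(estimate, 2))    # 2.86
```

Dropping the clock and IPC factors (the "go with BenchZowner" scenario) leaves just the 2.0x core-count scaling, matching the ~2x figure in the leaks.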

United States BenchZowner says:

The new GPU differs a lot from the GF1xx architecture ;) What remains to be seen is how well production goes, whether they hit the desired clock speeds, and whether the design performs optimally. This "fight" can go both ways: AMD landing a winner (marginally or not), a tie, or nVIDIA ahead (marginally or even by a lot). All we need now is 3DMark2001SE 2012 Edition and Crysis 3 (good coding, and better graphics edition) :p
