The GPU Flashback Archive: NVIDIA GeForce 4 and the GeForce 4 Ti 4200 64MB Card


Today our GPU Flashback Archive series continues with a look at the GeForce 4 series, which arrived on store shelves in early 2002. It was another historically successful product launch from NVIDIA, one that helped consolidate the company’s position as one of only two major GPU vendors still standing. The GeForce 4 series arrived with a slew of new features and a broad range of price points, strengthening NVIDIA’s position as market leader. Let’s take a look at the technologies and innovations that arrived with the GeForce 4 series, the cards that were popular with HWBOT members, and some of the notable scores that we can glean from the database.


NVIDIA GeForce 4: Overview

At the heart of the GeForce 4 series we have a wholly new GPU design, the NV25, which offered significantly improved performance over the NV20 GPU used by the GeForce 3 series. It arrived in February 2002 with the launch of three new high-end cards: the flagship GeForce4 Ti 4600, the Ti 4400, and the Ti 4200, which followed a couple of months later. These three cards essentially replaced the previous generation GeForce 3 Ti 500 and Ti 200 cards, which by early 2002 had become pretty rare due to stock shortages.

At the heart of the new NV25 GPU was a revised architecture which included two Vertex Shaders, a first for NVIDIA. The NV20 had been the first NVIDIA GPU to feature a Vertex Shader, giving developers access to a Vertex Shader instruction set that allowed them to create and program much more realistic, life-like polygons instead of just simple blocky textures. The results were staggeringly impressive, offering much more life-like image rendering.

Whereas the GeForce 3 series debuted the first Vertex Shader and support for Microsoft’s new DirectX 8.0, it didn’t give most gamers and enthusiasts a tangible performance leap, as most games had yet to take advantage of DX8.0. With the NV25 and the arrival of the GeForce 4, we finally had a true performance leap to warm the hearts of tech reviewers everywhere.

Regarding the addition of a second Vertex Shader, I’ll let Anand Lal Shimpi do the talking:

“First and foremost, as it was pretty much expected from the start, the NV25 now features two vertex shader units akin to the Xbox’s NV2A GPU. The presence of two vertex shader units will help in games that make extensive use of vertex shading operations as they are inherently parallel in nature. As developers grow more ambitious with lighting and other vertex processing algorithms, the need for fast vertex shader units operating in parallel will increase. In the end both the pixel shader and vertex shader units will need to increase in performance in order to avoid any pipeline bottlenecks but you can expect to see the performance of pixel shaders improve tremendously while the future performance of vertex shaders will improve only a few fold over what we have today.”

Read the NVIDIA GeForce 4 review from Anandtech here.

As well as the inclusion of two Vertex Shaders, the new GPU design also had a few other tricks up its sleeve, one of which went by the marketing term ‘Lightspeed Memory Architecture II’. Known to many simply as LMA II, it debuted a feature known as Z-occlusion culling, which remedied the problem of overdraw. In layman’s terms, it allowed the GPU to check depth-buffer values to see whether a pixel actually required rendering, or whether it would be overdrawn by another pixel. This made rendering much more efficient, saving memory bandwidth and improving performance along the way.
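The idea behind this kind of occlusion culling can be sketched in a few lines of Python. This is purely an illustrative depth-test loop, not NVIDIA’s actual hardware logic; the function and variable names are hypothetical:

```python
# Illustrative sketch of depth-based occlusion culling (not real driver/hardware code).
# The depth buffer stores, per pixel, the depth of the nearest surface drawn so far.
# A new fragment is only shaded if it is closer than what is already stored,
# saving the work (and memory bandwidth) that would otherwise be wasted on overdraw.

def draw_fragment(depth_buffer, x, y, fragment_depth, shade):
    """Shade a pixel only if it would actually be visible."""
    if fragment_depth >= depth_buffer[(x, y)]:
        return False          # occluded: culled before any expensive shading
    depth_buffer[(x, y)] = fragment_depth
    shade(x, y)               # expensive texture fetches / blending happen here
    return True

# Toy usage: three overlapping fragments at the same pixel.
shaded = []
buf = {(0, 0): float("inf")}
draw_fragment(buf, 0, 0, 5.0, lambda x, y: shaded.append("far"))
draw_fragment(buf, 0, 0, 2.0, lambda x, y: shaded.append("near"))
draw_fragment(buf, 0, 0, 9.0, lambda x, y: shaded.append("behind"))  # culled
print(shaded)  # ['far', 'near']
```

The third fragment never reaches the shading step, which is precisely the bandwidth saving the feature was marketed on.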

Other features arriving with the GeForce 4 series included the Accuview AA Engine, which helped establish anti-aliasing as a standard feature in the mainstream market. Another interesting debut was nView, a replacement for TwinView, which allowed users to enjoy dual displays, a novel and rare thing in 2002. NVIDIA boldly made nView support standard across the entire range of GeForce 4 series cards. nView was considered the equal of ATI’s Hydravision and helped make multi-display support the standard feature we take for granted today. Multi-display support was missing from GeForce 3 cards, so it was good to see NVIDIA make this move with nView.

The NV25 graphics processor was manufactured using a 150nm process, featured 4 Pixel Shaders, 2 Vertex Shaders, 8 Texture Mapping Units (TMUs) and 4 Render Output Units (ROPs). It supported DirectX 8.1, OpenGL 1.3 and used a 128-bit memory bus with Ti cards available in both 128MB and 64MB DDR variants.


The Most Popular NVIDIA GeForce 4 Card: The GeForce 4 Ti 4200 64MB

As mentioned above, the GeForce4 Ti 4600 and Ti 4400 were the first high-end cards to arrive on the market. The flagship Ti 4600 carried a price tag of $399 USD, NVIDIA’s standard flagship pricing in 2002. On launch day, however, they were joined by a slew of more affordable MX branded cards: the GeForce4 MX 460 ($179), the GeForce4 MX 440 ($149) and the GeForce4 MX 420 ($99). Crucially, the more affordable price points did not come without pain points. All MX series cards used the older NV17 GPU, which lacked any kind of Vertex Shader and thus offered no support for DirectX 8.0 or OpenGL 1.3.

A passively cooled GeForce 4 MX 440 card (above) with single VGA out, plus S-Video.

Essentially the MX series offered performance similar to the flagship GeForce 2 series cards, with features such as nView, Accuview AA and LMA II tacked on for good measure. For many end users, however, these features were attractive enough, especially when you consider that few games at this point in time actually required DX8.1 support.

Enter the GeForce 4 Ti 4200, a card that arrived in April 2002 which boasted an NV25 GPU for a retail price of $199 USD. This made the upper tier MX cards virtually irrelevant. It also went on to be the most used GeForce 4 series card on HWBOT. Let’s take a look at the most popular GeForce 4 cards in terms of submissions to the HWBOT database.

  • GeForce4 Ti 4200 64MB – 18.25%
  • GeForce4 MX 440-8x – 15.64%
  • GeForce4 MX 440 DDR – 9.09%
  • GeForce4 Ti 4600 – 5.47%
  • GeForce4 MX 440 SE 128bit – 5.23%
  • GeForce4 MX 4000 64-bit – 5.21%
  • GeForce4 Ti 4200 128MB – 5.07%
  • GeForce4 MX 440-8x 64bit – 4.90%
  • GeForce4 Ti 4200-8x – 4.52%
  • GeForce4 Ti 4400 – 2.92%

With a GPU clock frequency of 250MHz compared to the flagship Ti 4600’s 300MHz, the Ti 4200 was never going to be the top performer. At half the price, however, it was enormously popular with enthusiasts at the time, as the numbers above indicate. It had the same number of Pixel Shaders, TMUs and ROPs, and supported the same latest APIs. It featured 64MB of DDR clocked at 250MHz (compared to 325MHz on the flagship model) and the same 128-bit memory bus.
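Since both cards share the same 128-bit bus, the practical gap comes largely from the memory clock. Peak DDR bandwidth is simply bus width × clock × 2 transfers per cycle; a quick back-of-the-envelope check using the stock clocks quoted above:

```python
def ddr_bandwidth_gbs(bus_bits, clock_mhz):
    """Peak DDR bandwidth in GB/s: bus width in bytes * clock in MHz * 2 transfers/cycle."""
    return (bus_bits / 8) * clock_mhz * 2 / 1000

print(ddr_bandwidth_gbs(128, 250))  # Ti 4200 64MB at 250MHz -> 8.0 GB/s
print(ddr_bandwidth_gbs(128, 325))  # Ti 4600 at 325MHz      -> 10.4 GB/s
```

That 30% bandwidth advantage for the Ti 4600 is also why memory overclocking mattered so much on the cheaper card, as the TechReport quote below makes clear.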

An MSI GeForce 4 Ti 4200 card (above) with DVI and VGA outputs and Samsung DDR memory.

Arguably the GeForce 4 series can be seen as the period when NVIDIA and its AIB partners really started flexing their marketing muscles in terms of product segmentation. The HWBOT database lists 13 different product models for the GeForce 4 series, compared to just four for the previous GeForce 3 series. Options abound, with 64MB and 128MB versions available for both NV25 and NV17 based cards. There are also later revisions such as the Ti 4800 SE, which used the newer NV28 GPU with support for 8x AGP bandwidth. Indeed, several cards were later reissued as ‘8x’ versions touting the same feature.

In terms of overclocking we again find that in 2002 it was possible to take a cheaper card (like the Ti 4200) and make it go faster, perhaps even to the performance levels of a much more expensive card. Here’s what Scott Wasson had to say on behalf of the TechReport in April 2002:

“Still, since NVIDIA initially announced the GeForce4 Ti at two speeds, 275MHz and 300MHz, we’re tempted to think nearly any GF4 Ti chip should hit 275MHz without much trouble. The late addition of the GF4 Ti 4200 is probably more about market demands than about binning chips. And memory overclocking, which is way more important than core speeds, will depend on the RAM that winds up on your card. Our review sample came with Hynix RAM rated at 4ns (250MHz).

“All of that said, when I showed fellow TR staffer Andy the test results for the Ti 4200 64MB, his first words were: “Wow, that thing overclocked like a mofo, didn’t it?” Why, yes. Yes, it did. We got it 100% stable at GF4 Ti 4400 speeds (275MHz core/550MHz memory) and were able to run 3DMark multiple times. It performed like you’d expect: like a Ti 4400. Then we went up from there. The card wasn’t 100% super-stable above Ti 4400 speeds, but it was stable enough at 300/580 that we could run some Q3A time demos. All of this with only a stock cooler and no heatsinks on the memory chips.”

Read the NVIDIA GeForce 4 Ti 4200 review from TechReport here.


NVIDIA GeForce 4: Record Scores

We can now take a look at some of the highest scores posted on HWBOT using NVIDIA GeForce 4 series cards.


Highest GPU Frequency

Although GPU frequency (like CPU frequency) is not technically a true benchmark, it remains an important metric for many overclockers. The submission with the highest GPU core frequency in the HWBOT database comes from Russian overclocker West-87. He pushed a GeForce4 MX 440-8x card to 513MHz (+86.55%) with the graphics memory pumped up to a very impressive 378MHz (+51.20%). His rig also included an Intel Pentium 4 3.0GHz (Prescott) clocked at 4,245MHz (+41.50%).
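The percentage figures quoted in these submissions are simply relative gains over stock clocks. Assuming stock clocks of 275MHz core and 250MHz memory for the MX 440-8x and 3,000MHz for the Pentium 4, the numbers check out:

```python
def oc_gain_pct(actual_mhz, stock_mhz):
    """Overclock expressed as a percentage gain over the stock clock."""
    return round((actual_mhz / stock_mhz - 1) * 100, 2)

print(oc_gain_pct(513, 275))    # GPU core:  +86.55%
print(oc_gain_pct(378, 250))    # memory:    +51.2%
print(oc_gain_pct(4245, 3000))  # Pentium 4: +41.5%
```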

You can find the submission from West-87 here on HWBOT: http://hwbot.org/submission/693456_west_87_3dmark2001_se_geforce4_mx_440_8x_12394_marks


3DMark2001 SE

The highest 3DMark2001 SE score submitted to HWBOT using an NVIDIA GeForce 4 card was made by Romanian overclocker sang. He pushed a GeForce4 Ti 4600 card to 411MHz (+37.00%) on the GPU core and (perhaps more importantly) 781MHz (+140.31%) on the graphics memory. With this configuration he managed a hardware first place score of 25,235 marks. The rig he used also featured an Intel Core 2 Extreme QX6800 ‘Kentsfield’ processor clocked at 4,243MHz (+44.66%).

Here’s a shot of the rig in action:

You can find the submission from sang here on HWBOT: http://hwbot.org/submission/2558236_sang_3dmark2001_se_geforce4_ti_4600_25235_marks


Aquamark

In the classic Aquamark benchmark we find GraduS from Russia to be the GeForce 4 leader. His score of 43,727 marks was made with an ASUS GeForce4 Ti 4200-8x card with the GPU clocked at 440MHz (+76.00%) and graphics memory at 380MHz (+46.15%). The CPU used was an Intel Pentium E5300 ‘Wolfdale’ chip clocked at 3,815MHz (+46.73%).

Here’s a shot of the GraduS rig in action:

You can find the submission from GraduS here on HWBOT: http://hwbot.org/submission/953202_gradus_aquamark_geforce4_ti_4200_8x_43727_marks

Thanks for joining us for this week’s episode of the GPU Flashback Archive series. Next week we will return with a look at the iconic NVIDIA GeForce FX series of graphics processors and cards.






sdougal (Taiwan) says:

Any of you guys still got love for the GeForce 4 series?
