This week in our GPU Flashback Archive series we cast our minds back to a very popular and well-loved graphics card series, the GeForce 400 series. NVIDIA launched the GeForce 400 series in March 2010, armed with a new Fermi architecture that it hoped would help it compete with the successful AMD Radeon 5000 series. Let’s look at the new features that Fermi offered, the cards that were popular, and the scores that were submitted to HWBOT in this era.
Compared to previous product launches from NVIDIA, the GeForce 400 series launch did not go as smoothly as hoped. September 2009 saw AMD come out with their Radeon 5000 series, which made a solid case against NVIDIA’s 200 series offerings. It would be January before NVIDIA really started wooing tech media with tales of its forthcoming Fermi architecture lineup. It would be March 2010 before tech media actually got their hands on the new cards, and several weeks after that before enthusiasts would be able to actually buy one. This was not the typical NVIDIA launch. The delay seemed to stem largely from fabrication issues at TSMC, whose new 40nm process was not delivering the expected yields. This problem hurt NVIDIA in particular because the new Fermi GPU, the GF100, was a very large chip. When the GeForce 400 series finally arrived in the form of the GeForce GTX 480 and GTX 470, by most calculations it was six months late.
We are treated this week to a look at the NVIDIA 200 series of graphics cards. As well as rejigged product nomenclature, the 200 series represents a new and improved architectural approach to GPU design from NVIDIA, who managed to come up with their largest graphics chip ever. The 200 series was the latest weapon in the fight against ATI, and one that proved to be fairly potent in terms of raw frame-rates. Let’s take a look at the new architecture, the graphics cards that were popular at the time with overclockers on HWBOT and of course, some of the more notable scores that have been made since its introduction.
We mentioned in the previous GeForce 9 series article how this period of history shows plenty of overlap in terms of GPU series. In April 2008 NVIDIA launched the 9 series and the G92 GPU (read all about the 9 series here), which was based on an improved but largely identical Tesla design. The 9 series served a purpose by bringing to market cheaper high-end enthusiast cards that could compete with ATI. It also eventually gave NVIDIA a chance to test out the 55nm manufacturing process from TSMC using a more familiar architecture. The GeForce 200 series initially launched on 65nm silicon, with later revisions taking advantage of the 55nm process.
This week the GPU Flashback Archive series turns its attention to the NVIDIA 9 Series of graphics cards that replaced the successful and much loved 8 series. Arriving in April 2008, the new series featured an updated GPU design that eventually found itself built on a new 55nm manufacturing process. The period also marks a time when ATI and NVIDIA were trading blows as equals - an era when taking the performance crown was all that mattered, proving just how beneficial healthy competition in an industry can be for consumers. Let’s go back in time and revisit the NVIDIA 9 Series, the cards that were popular on HWBOT and some of the more notable scores that have been submitted to the database.
The era of the NVIDIA GeForce 9 series is actually one of considerable overlap. When the 9 series became available in stores at launch on April Fools’ Day 2008, a full array of 8 series cards was still available in the retail channel. There’s nothing too odd about that, as the previous generation typically gets a price cut to help clear inventory. It is a little odd however when the next generation GTX 200 series arrived on shelves just three months later. Today we’ll try and keep things simple and just focus on exactly what the GeForce 9 series offered. The 9 series may always be compared to the second revision Tesla chips that followed it, but for now we’ll leave the GTX 200 series for next week’s edition.
This week’s trip down GPU memory lane is all about the NVIDIA 8 series of graphics cards, a series that marks the arrival of DirectX 10 and a wholly new GPU architecture. Arriving in late 2006, the NVIDIA 8 series remains a fondly remembered era for many enthusiasts and of course overclockers, especially the GeForce 8800 GTX, a card that is still a topic of conversation with some retro-minded HWBOT members today. Let’s take a look at the hardware associated with the GeForce 8 series era, the technology and features that arrived at that time, and some of the scores and submissions that were made using the popular GeForce 8800 GTX card.
The NVIDIA 8 series was officially launched on November 8th 2006 with the arrival of a new flagship card, the GeForce 8800 GTX. The card presented a new GPU to the world’s media, the NVIDIA G80, an entirely new design based on the Tesla architecture. The GPU itself was manufactured on a 90nm process and packed a groundbreaking 681 million transistors into a die measuring 484mm². The G80 was designed specifically with DirectX 10 in mind, taking advantage of many of the specific technologies and ideas introduced by Microsoft. One such feature is the implementation of unified shaders.
Today’s trip down GPU memory lane is all about the NVIDIA 7 series that arrived on the scene in June 2005. Where previous GPU designs had heralded major innovations and the introduction of entirely new technologies, the 7 series was more of an update by comparison. The new GPU arrived with a change in nomenclature and notably a change in the way that NVIDIA graphics cards were actually launched - NVIDIA and AIB partners had products on shelves the very same day that the press embargo was lifted. Let’s look at the GPUs and cards that arrived as part of the new 7 series launch, the cards that have since proved to be popular with overclockers on HWBOT and of course, the notable scores that grace our database to this day.
The NVIDIA GeForce 7800 GTX was launched on June 22nd 2005 as the company’s brand new flagship card offering. At launch the card was immediately available in the retail channel, literally the same day, which at the time was largely unheard of. This was seen as NVIDIA more or less giving ATI the proverbial finger, as previous ATI launches had tended to be prefaced with vague ‘coming soon... we hope’ messaging. The 7800 GTX was based on the G70, the successor to the NV4x series that had powered the GeForce 6 series. The change in naming scheme was apparently a marketing decision, with GeForce 7 being better represented by G70 than NV47. The G70 was largely based on the same architecture as the previous generation and the NV30 generation that preceded it. The G70 again used Shader Model 3.0 with support for DX9.0c and OpenGL 2.1. Nothing new there. The real interest is when you consider the rendering configuration.
Welcome back to another episode in our GPU Flashback Archive series. Following on from last week’s look at the GeForce FX series, we turn our attention to its successor, the NVIDIA GeForce 6 series. After rising to a position of relative dominance in the early years of GPU design, the GeForce 4 and subsequent FX series had seen NVIDIA lose ground to ATI, who had stolen a march with their highly popular Radeon 9000 series. The stage was set for a return with the launch of a new GPU design and a series of cards that required more space in your rig and additional power to deliver a truly next generation gaming experience. Let’s turn our minds back to 2004 and check out the technologies and features that debuted with the GeForce 6 series, plus the most popular cards of the era and the most notable scores that have been submitted here on HWBOT.
The NVIDIA GeForce 6 series arrived in tech reviewers’ hands in April of 2004, debuting with a new NV40 GPU and two graphics card models, the GeForce 6800 Ultra which commanded a price of $499 USD, and the GeForce 6800 (often referred to as the non-Ultra) for $299 USD. Let’s first consider the GPU itself, the NV40.
This week’s GPU Flashback Archive article continues with a look at the NVIDIA GeForce FX series, or GeForce 5 if you prefer to keep things somewhat tidier. In truth however, the FX series was perhaps one of the least tidy product launches that NVIDIA have produced. The GeForce FX series spanned two years in terms of graphics card releases, used a total of six different GPU designs, two manufacturing nodes, three bus interfaces and technically speaking three different kinds of memory. To keep things reasonably simple, we’ll look at the new features that the FX series debuted and the technologies that were introduced, while at the same time keeping our remit in focus with a look at the launch flagship GeForce FX 5800 Ultra, and the budget GeForce FX 5200, the most popular FX series card with HWBOT members.
The NVIDIA FX series replaced the previous generation of GeForce 4 series cards, at least in terms of product launch dates. In reality the two products overlapped during the period towards the end of 2002 and early 2003. Although the GeForce 4 series was a success, bringing the video game industry the hardware needed to make DirectX 8.0 a reality, the GeForce 4 MX series had left a sour taste in the mouth of many tech reviewers and hardcore gamers. Despite being branded as a fourth generation NVIDIA product, it entirely lacked DX8 compatibility.
Today our GPU Flashback Archive series continues with a look at the GeForce 4 series that arrived on store shelves back in early 2002. It was historically another successful product launch from NVIDIA, one that helped to consolidate the company’s position as one of only two major GPU vendors left standing. The GeForce 4 series arrived with a slew of new features and a broad range of price point options, strengthening NVIDIA’s position as market leader. Let’s take a look at the technologies and innovations that arrived with the GeForce 4 series, the cards that were popular with HWBOT members and some of the notable scores that we can glean from the database.
At the heart of the GeForce 4 series we have a new GPU design, the NV25, a GPU which offered significantly improved performance over the previous NV20 GPU used by the GeForce 3 series. It arrived in February 2002 with the launch of three new high-end cards: the flagship GeForce4 Ti 4600, the Ti 4400, and the Ti 4200 which arrived a few months later. These three cards were essentially replacing the previous generation GeForce 3 Ti 500 and Ti 200 cards, which by early 2002 were becoming pretty rare due to stock shortages.
This week’s GPU Flashback Archive article is all about the GeForce 3 series of graphics cards from NVIDIA, a company that by this stage in history was recognized as the industry leader in GPU development and innovation. The third iteration of its GeForce brand launched with a hiccup or two in early 2001 and enjoyed status as the company’s top tier offering for around a year before it was usurped by its successor, the mighty GeForce 4 series. Let’s take a peek at the new technologies and innovations that arrived with GeForce 3, the cards that proved to be most popular with overclockers on HWBOT and of course, the notable scores and benchmarks that it spawned.
First let’s set the scene. NVIDIA’s arrival on the graphics card market in the late nineties had been wholly disruptive. After its TNT and RIVA series cards, NVIDIA blew the doors off the industry with its first GeForce series and simply didn’t look back. By the time we arrive at the GeForce 3 series, we find that Matrox had retreated to focus on more niche segments, while S3 Graphics were hanging on by the skin of their teeth. NVIDIA eventually put an end to 3dfx and their classic Voodoo cards by buying the company out. Only ATI endured, and we all know what eventually happened to them.
We return for our next episode of the GPU Flashback Archive with another classic graphics platform from NVIDIA, the GeForce2 series. It was unleashed on the scene in early 2000 and proved conclusively that NVIDIA had become the number one graphics company on the planet. Let’s take a look at the GeForce2 series as a whole, the cards that were popular at the time and of course a few of the scores that have been submitted to the HWBOT database using GeForce2 cards.
With the launch of the NVIDIA GeForce 256 card series in late 1999, the company had truly announced its presence on the graphics card market. Competing cards from ATI, S3, Matrox and 3dfx could not compete with the GeForce 256 DDR. Based on the NV10 GPU, it was the first consumer card to offer a hardware solution for T&L (Transform and Lighting) tasks, delivering the fastest geometry processing yet seen and probably the best gaming experience that anyone could imagine. NVIDIA stayed true to their core company identity and continued to follow a pretty aggressive product launch cadence. The GeForce brand was expanded to include the GeForce2 series just six months later, in sharp contrast to the release schedule the company keeps today.