Nvidia & AMD to launch 28nm GPUs this year

Many of you will already be aware of this, and it's no wonder, since it's been quite a while now that we've been on 40nm GPUs. These new GPUs are obviously going to be better in most if not all respects, and run at much cooler temps than their predecessors. They're likely to be titled the GTX 600 series for Nvidia and the HD 7000 series for AMD. It makes me wonder, though: how long before we start seeing single-digit nm GPUs?

Lastly, I've got a question for you. Will you upgrade your current awesome graphics card just because of the newer technology?

Edit: Edited thread title from "Nvidia & AMD to launch 28nm GPUs by the end of this year." to "Nvidia & AMD to launch 28nm GPUs this year".

This is really great for hardcore gaming fans, but these cards are going to be very costly. I'm first trying to decide whether to upgrade my current desktop or build a new one that will support the best graphics cards.

Was about to buy one of Nvidia's 500 series, but I'm definitely going to keep using my AMD 5000 series instead of upgrading to the short-lived current gen :D


Starting next week, AMD is going to hold Tech Days in several cities around the globe, such as London and Paris, during which the company will present the 28nm Radeon HD 7000 series.

There are a lot of rumors flying around the web, some of which are spun by AMD themselves to sow confusion, as the Radeon HD 7000 series is going to mix the existing VLIW4 and VLIW5 architectures with the “Graphics Core Next” (GCN) architecture, introduced during June’s Fusion Development Summit held in Bellevue, WA.

Radeon HD 7000 Series with the old VLIW4 and VLIW5 Architectures

A couple of years ago, AMD and AMD-friendly media were all over NVIDIA for mixing different GPU architectures within the same product line. Then, with the Radeon HD 6000 series, all of a sudden nobody questioned why AMD mixed two distinct GPU architectures within a single series (the new VLIW4 architecture powered only three high-end parts). With the Radeon HD 7000 series, the situation is set to become even more complicated, with AMD mixing no fewer than three distinct GPU architectures within a single generation of products.

Given the recent cancellation of the 28nm Krishna and Wichita APUs, AMD will rebrand the Brazos 2.0 APU platform as the Radeon HD 7200 and 7300 series; for instance, a rebranded AMD E-Series APU will be powered by Radeon HD 7200 or 7300 series graphics (all based on the Evergreen GPU - VLIW5).

The higher-end Trinity APU, the heir to the successful Llano A-Series APU, will be powered by a Devastator GPU core, based on the contemporary “Northern Islands” VLIW4 architecture, featuring product names such as Radeon HD 7450(D), 7550(D), and so on.

When it comes to discrete parts, those codenamed Cape Verde (HD 7500, 7600, and 7700) and Pitcairn (HD 7800) are all based on the VLIW4 architecture. The “Graphics Core Next” architecture is reserved for the 7900 series alone. Desktop parts are codenamed after the Southern Islands, while mobile parts are codenamed after parts of London (read: Cape Verde becomes Lombok, Pitcairn becomes Thames, etc.).

If you compare the VLIW4-based HD 6900 and the upcoming HD 7800 series, there isn’t much difference between the two. According to our sources, HD 7800 “Pitcairn” is a 28nm die shrink of the popular HD 6900 “Cayman” GPU with minor performance adjustments. This will bring quite a bit of compute power into the price-sensitive $199-$249 bracket, and we expect a lot of headaches for NVIDIA in that respect.

Source & Full article here.


So, AMD is taking the lead again. I hope they’ve included DX11.1 support. And I wonder how ridiculous the prices are gonna be…


NVIDIA has revealed its latest graphics card, the GeForce GTX 680, using its new Kepler GPU architecture for improved performance and lower power consumption. The successor to NVIDIA’s Fermi, Kepler introduces a completely redesigned streaming multiprocessor, dubbed SMX, with a focus on efficiency, along with GPU Boost to dynamically adjust clock speed within power draw limits. Each SMX runs at the same base clock as the rest of the GPU and features 192 CUDA cores; with 1536 cores on the GPU, NVIDIA says, the GTX 680 “handily outperforms” the GeForce GTX 580.
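Purely to illustrate the boost idea, here's a toy sketch of how a mechanism like GPU Boost might step the clock up while power headroom remains. The step size, clock ceiling, and linear power model are made-up numbers for the sketch, not NVIDIA's actual algorithm; only the 195 W limit comes from the article, and the 1006 MHz base clock from the GTX 680's published spec:

```python
# Toy sketch of the idea behind GPU Boost (NOT NVIDIA's actual algorithm):
# raise the clock in small steps while measured power stays under the
# board's limit, and stop when the limit (or a clock ceiling) is reached.

BASE_CLOCK_MHZ = 1006      # GTX 680 published base clock
POWER_LIMIT_W = 195        # board power limit quoted in the article
STEP_MHZ = 13              # hypothetical boost step size

def boost_clock(measure_power, clock=BASE_CLOCK_MHZ, max_clock=1110):
    """Step the clock up while the next step keeps power under the limit."""
    while (clock + STEP_MHZ <= max_clock
           and measure_power(clock + STEP_MHZ) <= POWER_LIMIT_W):
        clock += STEP_MHZ
    return clock

# Hypothetical power model: draw grows linearly with clock.
power_model = lambda mhz: 150 + 0.05 * (mhz - BASE_CLOCK_MHZ)

print(boost_clock(power_model))  # plenty of headroom, so it hits the ceiling: 1110
```

The point is only that boost is a feedback loop between clock and power, which is why the same card can clock differently depending on workload and cooling.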


It’s power efficiency that NVIDIA is most proud of, however. “Compared to the original Fermi SM,” the company says, “SMX has twice the performance per watt. Put another way, given a watt of power, Kepler’s SMX can do twice the amount of work as Fermi’s SM.” The card demands just two 6-pin connectors, and draws at most 195 watts of power, NVIDIA says, compared to the 244 watts the GeForce GTX 580 sucks down. That also means a quieter graphics card, since less active cooling is necessary.
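Taking NVIDIA's quoted figures at face value, a quick back-of-the-envelope calculation shows what twice the performance per watt at a lower board power would imply (a rough estimate only; marketing ratios rarely translate directly into frame rates):

```python
# Back-of-the-envelope check of the quoted figures, in relative units:
# if Kepler does 2x the work per watt of Fermi, then at 195 W the GTX 680
# would deliver roughly 2 * 195 / 244 of the GTX 580's throughput.
gtx580_power_w = 244
gtx680_power_w = 195
perf_per_watt_ratio = 2.0   # NVIDIA's claim for SMX vs. the Fermi SM

relative_perf = perf_per_watt_ratio * gtx680_power_w / gtx580_power_w
print(round(relative_perf, 2))  # → 1.6
```

So even with a 20% lower power budget, the claim works out to roughly 1.6x the GTX 580's performance, which is consistent with the "handily outperforms" line above.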


Adaptive Vertical Sync, meanwhile, dynamically adjusts VSync to suit the game’s current frame rate. The NVIDIA GeForce GTX 680 itself has 2GB of GDDR5 memory and unsurprisingly supports DirectX 11, NVIDIA’s own PhysX physics engine, and 3D Vision stereoscopic 3D. There’s also NVIDIA Surround multi-monitor support from a single card, with two DVI, two Mini DisplayPort and one HDMI, for up to four simultaneous monitors.
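For what the "adaptive" part means in practice, here's a minimal sketch of the decision rule (my own simplification, not NVIDIA's driver logic): sync when the renderer keeps up with the display, and tear rather than stutter when it can't.

```python
# Minimal sketch of the Adaptive VSync idea (a simplification, not NVIDIA's
# actual driver code): with plain VSync, a frame rate that dips below the
# refresh rate falls hard to a divisor of it (e.g. 60 -> 30), which feels
# like stutter; adaptively disabling VSync below refresh trades that for
# some tearing instead.

REFRESH_HZ = 60  # assumed monitor refresh rate

def vsync_enabled(current_fps, refresh_hz=REFRESH_HZ):
    """Enable VSync only while the renderer keeps up with the display."""
    return current_fps >= refresh_hz

print(vsync_enabled(75))  # True: headroom, sync to avoid tearing
print(vsync_enabled(48))  # False: below refresh, unsync to avoid stutter
```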

If that’s not enough, there’s dual SLI or 3-way SLI support for an even more ridiculous gaming rig. PCs running the new NVIDIA GeForce GTX 680 are available to order today, with card partners including ASUS, Colorful, EVGA, Gainward, Galaxy, Gigabyte, Innovision 3D, MSI, Palit, Point of View, PNY, and Zotac, and the card itself is expected to be priced at $499.

Source: http://www.slashgear…r-gpu-22219574/


So, Nvidia’s coming back with more power apparently… And thus the competition continues forever… B)