Is the CPU dead?

As Intel rolls out its 32nm processors codenamed Westmere – employing the most compact and efficient processor cores ever created – you might think we’ve reached the pinnacle of computing technology. Certainly, Intel presents its new CPUs as the very heart of next-generation PC systems.

However, not everyone agrees. While Intel is touting Westmere, Nvidia is gearing up to launch new graphics hardware, codenamed Fermi, promising not only to deliver next-generation visuals but also to rival – and even shame – the computational power of the conventional CPU.

On the face of it, that sounds absurdly ambitious. But the company has persuasive technical arguments on its side, demonstrating how the power of a GPU can now be used for much more than gaming. Could 2010 be the year in which the CPU is overshadowed by graphics hardware?

Stand back, CPU

The immensely powerful CPUs typically found in modern PCs simply aren’t necessary for the majority of office and internet applications. Yes, if you want to crunch a huge database of numbers, or edit high-definition video, a fast CPU will help. But the rise of the netbook demonstrates that for many everyday tasks a processor as cheap and simple as the Intel Atom is perfectly adequate.

All the same, Atom-based netbooks typically fail to satisfy when it comes to visuals. They lack the processing power to decode and display high-resolution media files, and modern games are out of the question.


Nvidia believes you can get the best of both worlds by partnering a lightweight Atom with a discrete GPU that specialises in these specific tasks – in March 2009, it launched just such a hybrid platform under the name Ion. “Ion translates to smooth HD video [on an Atom system], including streaming video from YouTube or Hulu, with Flash 10.1, and support for popular games like The Sims and World of Warcraft,” claims marketing manager Ben Berraondo.

And here at PC Pro we’ve been impressed by the graphical capabilities of low-power Ion-based systems, including the recommended Samsung N510 netbook and the Asus Eee Box EB1501 nettop.

In the future, it’s easy to imagine that in this segment of the market the CPU might become almost irrelevant, with graphics hardware the more significant differentiator between models.

The general-purpose GPU

This, however, is only one part of the story. If a GPU can help out a CPU by decoding video and rendering 3D scenes, there’s no reason why its processing abilities can’t be turned to other purposes as well.

The concept of using a graphics processor for non-graphical calculations is known as general-purpose GPU computing (GPGPU), or just GPU computing.

And it makes a lot of sense. Intel’s top-of-the-line Core i7 processor presents eight execution cores to the operating system (four of which are virtual cores simulated via Hyper-Threading), while a cheap $30 graphics card offers ten times as many processing units – each one drawing a fraction of the power consumed by a CPU core.

Move up to the high end and you find cards such as the Nvidia GTX 295 integrating a massive 480 cores. It’s clear that by exploiting these devices developers can harness a level of parallel processing horsepower that a CPU can’t hope to compete with.

“Right now, the industry is seeing previously unheard of speed-ups by simply moving from the CPU to a GPU,” declares Berraondo. “Video encoding, for example, can be ten times faster, or more, on a GPU compared to a CPU.”

GPU computing has more serious applications, too. One example is Folding@home, a distributed computing project that seeks cures for medical conditions such as cancer, cystic fibrosis and Parkinson’s disease.

In 2008, Nvidia reported that, according to independent analysis, running the Folding@home calculations on the company’s GPUs yielded results “140 times faster than on some of today’s traditional CPUs”.

Come in CUDA

Nvidia’s secret weapon is its Compute Unified Device Architecture (CUDA), first unveiled in 2007, which extends familiar programming languages – including C, Java and Fortran – with functions that make it easier for developers to offload calculations onto an Nvidia GPU.
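To give a flavour of what that looks like in practice, here’s a minimal sketch of a CUDA program – our own illustration, not Nvidia’s sample code. A function marked with the `__global__` qualifier becomes a “kernel” that runs across many GPU threads at once, launched from ordinary C code with CUDA’s `<<<blocks, threads>>>` syntax:

```cuda
#include <stdio.h>

// Kernel: ordinary C, plus the __global__ qualifier and built-in
// thread-index variables. Each GPU thread adds one pair of elements.
__global__ void addVectors(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                 // guard threads that fall past the end of the array
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    size_t bytes = n * sizeof(float);

    float ha[1024], hb[1024], hc[1024];     // host (CPU) arrays
    for (int i = 0; i < n; i++) { ha[i] = i; hb[i] = 2.0f * i; }

    float *da, *db, *dc;                    // device (GPU) arrays
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch 4 blocks of 256 threads: all 1,024 additions run in parallel
    addVectors<<<4, 256>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("hc[100] = %f\n", hc[100]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

On a CPU the same work would be a loop running one addition at a time; here the loop disappears, replaced by an army of lightweight threads – which is exactly the style of problem the GPU’s hundreds of cores are built for.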

CUDA isn’t the only way to create GPU-based software: Microsoft’s DirectCompute API, for example, offers programming functions that can be run on any modern GPU, as does the OpenCL framework originally developed by Apple. Indeed, Nvidia’s rival AMD argues that it’s these open standards, rather than the closed world of CUDA, that represent the future.

But CUDA had a head start of almost two years over these more open interfaces. And as CUDA general manager Sanford Russell explained at the company’s 2009 GPU Technology Conference, CUDA is inherently a more appealing choice for developers, owing to its support for familiar languages, and C in particular.

“You go out there,” Russell suggested, gesturing through the window to the streets of Silicon Valley, “and you say ‘everybody who programs in C, stand on this side of the street. And everybody who uses an API, go stand over there.’ There’d be a few people on the API side and a whole bunch on the C side.”

Nvidia plans to make CUDA even more powerful with a new range of graphics cards based on its innovative Fermi architecture. Fermi cards will be the first to – in the words of CEO Jen-Hsun Huang – “treat both graphics and code as equal citizens”. Technical improvements include a shared L2 cache and an onboard thread scheduler that will help Fermi run code more efficiently than any existing GPU.

It all adds up to a vision of the future in which the CPU progressively becomes a lightweight, commoditised part of the overall PC architecture, while both visual and mathematical operations are handled by a massively parallel, highly programmable GPU. Is this where we’re heading? Are CPU manufacturers about to become bit players in the PC industry?

Limitations of the GPU

GPU technology still can’t compete with a traditional CPU in certain areas. Modern CPUs have huge instruction sets and advanced features, such as out-of-order execution and speculative execution, to ensure clock cycles aren’t wasted. The CPU is consequently far better at executing the complex single-threaded code that represents the bulk of applications.

And while large-scale data processing may be just what’s needed for research projects and enterprise applications, when it comes to home and business usage there aren’t many tasks that really benefit, outside of the familiar examples of video editing and transcoding.

This crucial point isn’t lost on Intel. When PC Pro asked the company’s product marketing engineer Mike Abel whether the company considered CUDA a threat, he visibly failed to shake in his boots. “In some cases, you might say you’re seeing CUDA delivering benefits over the CPU,” he acknowledged, “but when I look at DirectCompute, and things like that, I think ‘this is something that’s intended for high-end workstations’.

High-performance computing is a very different segment to mainstream computing, and for the mainstream there are many different ways for a software application to deliver performance, such as multithreading.”

Abel couldn’t talk in depth about GPU computing for high-performance applications, but this was perhaps to be expected: Intel is currently licking its wounds over the failure of its GPU-style card, Larrabee, the development of which was halted in December.

But that failure itself is highly suggestive of the limitations of GPU computing: Larrabee’s cores were more advanced than Nvidia’s stream processors, enabling them to perform more complex tasks – which also made them more expensive, more power-hungry, and harder to program.

For mainstream computing, though, Abel was happy to make clear that, so far as Intel’s concerned, the power of its CPUs is sufficient to make GPU computing irrelevant. “The CPU is at the forefront of the PC, and we believe it will remain there,” he affirmed.

“If someone wants to play games, they’ll get a discrete graphics card. But for transcoding, should I go buy a $200 card, or is my CPU good enough? From what we’ve seen, and what we’ve tested, the CPU is very competitive, delivering performance in some cases better than what Nvidia and AMD are able to deliver.”


There may be some value in DirectCompute, he conceded. “There may be special usages – niche areas where it might make sense. I’m not going to say ‘hey, that’s something that Intel will never support’. We think that today we have a very competitive solution, but we always try to best utilise all the resources on a processor. And if there may be creative ways to do that, we’ll certainly evaluate them.”

It’s a confident stance but, as Endpoint Technologies analyst Roger Kay notes, Intel can hardly say anything else. “Intel isn’t really in a position to produce a massively parallel processor,” Kay told PC Pro. “It needs more time to produce a chip with both the performance and power characteristics that will allow it to compete in the highly parallel computing space.”

The next round

It will be fascinating to see how Intel and Nvidia’s different strategies play out during 2010. But before we reach the endgame, both parties have a few more moves to play. Nvidia has already shown that CUDA is serious business on existing hardware, and Fermi is set to open up further GPU-based programming possibilities.

Intel, meanwhile, hopes to outflank GPU computing by continuing to beef up its processors, helping them handle the media tasks that are the bread and butter of discrete GPUs. “It was already announced that Sandy Bridge [Intel’s next-generation architecture, due for launch in 2011] will have Advanced Vector Extensions, which could greatly improve floating point performance,” confirmed Intel PR manager Radoslaw Walczyk.

“And we have some other stuff too, which we won’t talk about yet. But you can be sure that with new generations of hardware, Intel will introduce new technologies that will definitely benefit multimedia operations.”

Ultimately, though, whatever ends up on the desktop, there’s no doubt that GPU computing has changed the game forever. As Roger Kay concludes, “the man in the street won’t use GPU computing any time soon – but he’ll immediately enjoy the products of it: 3D animation effects done by major studios, drugs discovered using GPU computing and, since oil and gas companies will use it for exploration, perhaps even the price he pays for energy.”