3D card manufacturers shouldn't take this the wrong way, but it takes a lot to make us crawl out of the communal Eurogamer bed (yes, all the Eurogamer writers share a single large bed - we do it for frugality and communality, which remain our watchwords) and go to a hardware presentation. There's a nagging fear someone may talk maths at us and we'd come home clutching the local equivalent of magic beans. And then we'll be laughed at by our fellow writers and made to sleep in the chilly end where the covers are thin and Tom left dubious stains. That's no fun at all.
Then again, there are some things you can't help but go and have a gawk at. So when an invite claims, "All too often new hardware brings with it a small performance increase - maybe a 5-10 percent over the previous fastest thing. Wouldn't it be far more exciting to see a speed increase of x20 or even x100... well, we'll be happy to show just that on Friday," you have to wander along. Even though you suspect it may be a trap and they're going to attack you with ill-shaped blades, you have to find out what on earth they're talking about.
As we suspected, it wasn't quite what we were hoping for. Sure, there are programs which gain a x100 increase via the methods NVIDIA talks about on this particular Friday, but unless you're working in economics or astrophysics modelling, it's not exactly that relevant. However, something more quietly astounding was explained. Namely, that despite the fact that no-one you know bought a PhysX card, if you're a PC gamer with a relatively recent NVIDIA card, you've already got one. Or, at least, you will soon. Spooks.
The primary idea NVIDIA was trying to push was Optimised PC - the approach discussed in Rob Fahey's interview with Roy Taylor the other day. The idea being that the traditional PC approach, where you buy the fastest PC processor you can, doesn't actually yield the best results, at least in most situations. If you spend more on - predictably - a GPU-driven 3D card, you're going to get much higher performance in an increasing number of areas. If the program is using the GPU in a meaningful way, anyway. NVIDIA highlights areas like image-processing and HD video-encoding, as well as - natch! - games. You lose in single-threaded activities - like, say, just booting up a program - but NVIDIA argues a small delay in opening a Word document is less noticeable than dropped frames in games or similar.
Where it starts getting interesting is NVIDIA's development language, CUDA. The problem with all the threaded programming methods is that they're radically different to single-threaded ones (and, yes, we're getting into, "Why would anyone care about this but a programmer?" territory, but it's background for the key point later). It's hard to do, and CUDA is basically a way to make things more accessible.
NVIDIA claims anyone experienced in C or C++ will be able to get a grip on it (i.e. not us, but the aforementioned programmers). This means that anyone who codes in CUDA can program the GPU to do pretty much whatever they like; it's by turning the 3D card into a bank of processors that the financial analysts and the astrophysics guys are getting such impressive results. And impressive savings, as it's a lot cheaper to do it this way.
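For the aforementioned programmers, here's a minimal sketch of what that "C programmer turns the 3D card into a bank of processors" claim looks like in practice - our own illustrative example, not anything NVIDIA showed on the day. The kernel function below runs once per GPU thread, in parallel, adding two big arrays of numbers; the function and variable names are ours.

```cuda
#include <cstdio>

// A CUDA "kernel": this function body runs once per GPU thread, in parallel.
// That's the whole trick - write the per-element work, let the card fan it out.
__global__ void addArrays(const float* a, const float* b, float* out, int n)
{
    // Each thread works out which element of the array is "its" element.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                  // a million floats
    const size_t bytes = n * sizeof(float);

    // Allocate working buffers in the 3D card's own memory.
    float *a, *b, *out;
    cudaMalloc(&a, bytes);
    cudaMalloc(&b, bytes);
    cudaMalloc(&out, bytes);

    // Launch: 256 threads per block, enough blocks to cover all n elements.
    // The triple-angle-bracket syntax is CUDA's extension to plain C.
    addArrays<<<(n + 255) / 256, 256>>>(a, b, out, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    cudaFree(a);
    cudaFree(b);
    cudaFree(out);
    return 0;
}
```

The point being that, aside from the launch syntax and the memory shuffling, this is recognisably ordinary C - which is why the financial analysts and astrophysics types can pick it up without retraining as graphics programmers.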
Now, NVIDIA claims that the fact GPU solutions are cheaper is going to push better GPUs into more business machines. This will help push the idea that an okay CPU/good GPU machine gives better performance than a good CPU/okay GPU, leading to more machines with better GPUs... and so, making more PCs abstractly available for gaming. Or, at least, raising the bottom level of hardware that you can expect people to have.
In terms of more general uses, transcoding video can take hours. Later in July, all GeForce 8000+ cards will ship with Elemental HD, a program which manages to perform the odious task - in the words of NVIDIA - "in a matter of minutes". The software will also be available for people to download online, probably for a small fee, à la Quicksave, if they already have a GeForce card.
Point being: this CUDA malarkey isn't something that's just for future NVIDIA technology. It's something that allows the hardware many PC gamers already have to be repurposed.
For example, PhysX. NVIDIA's dedicated physics-card system was only ever supported in a minor fashion, as no-one would buy a card just to make explosions fancier, but with CUDA it can run on the GPU you already own. A proportion of the 3D card's power can be given over to running physics, giving those fancy PhysX-style interactions without actually having a specific card for it. The CUDA port of PhysX will become available to the public in July, but developers already have the tools.
You'll be able to - for example - manually, up front, decide to devote a proportion of your 3D card's power to PhysX. Alternatively, developers can commandeer it and do exactly the same thing. The new generation of cards which are about to be announced can deal with pretty much anything that exists on the highest setting with power left over, so that spare power can be given over to doing the job a dedicated PhysX card would.
And it goes further. Where previously you'd have just thrown out your old 3D card when you upgraded your PC to a new one, if you have a G8000+ 3D card already, you can keep it, and just set it to concentrate solely on PhysX tasks. This isn't a SLI situation where you need two of the same cards working in tandem - any post-8800 card, rather than being put out to digital pasture, can be given the job of deciding how bits of glass bounce off a skyscraper, or similar. NVIDIA claims it's talking to ATI to try and get them to use CUDA too, which... well, we'll see there, eh?
The potential is interesting. Demos shown include Natural Motion, whose Euphoria engine is heavily physics-dependent, allowing unique, convincing moments in games. A straight collision isn't enough, as straight ragdolls are ludicrous - the system involves AI (so the hit object will try and move limbs to protect itself and similar), leading to impressively naturalistic results. The first public sign of this was in Grand Theft Auto IV, but Natural Motion's own American football game, Backbreaker, is a fascinating example of what a physics-heavier approach to collisions can give games. And, with CUDA-esque use of GPUs to do this stuff, the PhysX-related boon is accessible to even more of us.
So they did talk some maths, then, but we survived.