What’s our final impression of the new cards? First of all, the parallel with the GeForce 7800 GTX is obvious: they build on an already-proven architecture, improving the few weak points that had been identified and significantly boosting processing power. So there are no unpleasant surprises as far as the architecture is concerned, with the possible exceptions of the absence of Direct3D 10.1 support and the slightly disappointing double-precision floating-point performance.
On the other hand, unlike today, at the time of the 7800 GTX Nvidia hadn’t yet put SLI to work in a dual-GPU card like the 9800 GX2, and that’s the drawback of this launch. Despite raw processing power that has nearly doubled compared to the earlier generation (even under real-world conditions, thanks to the gain in efficiency, although some of our synthetic tests didn’t go off the charts as we might have hoped), the GTX 280 didn’t fare well against the 9800 GX2, which beat it fairly regularly in gaming tests. And though it would be crazy to prefer the older card, which has only half the usable memory and much higher power consumption at idle, to cite only two of its now-obvious disadvantages, its results inevitably take some of the shine off the new ultra-high-end card. Still, it would be hard to fault Nvidia here: squeezing even one more ALU onto the gigantic die seems inconceivable, especially when you compare it to the competition’s performance.
There are several other slightly regrettable points, first among which is the very high noise level, which is hard to explain for the GTX 280 and the 260 given that their power consumption under load is lower than that of dual-GPU cards and extremely low at idle. Nor should we forget the lack of DirectX 10.1 support, which is obviously a political choice and one that will slow or even prevent adoption by developers, reprehensible in light of the Assassin’s Creed affair. The price of the GTX 280 ($650, available starting tomorrow), which puts it in the “high-high-end” category, is also problematic given the very aggressive price point of the “little” GTX 260: with 18% less performance on average than its big sister, it’s announced at nearly half the price, $400! As a result, its availability at launch, announced for June 26, is likely to be particularly tight, given that it uses the same GT200 GPU.
Finally, we can’t close without mentioning the very concrete prospects for CUDA applications. While a year and a half ago it was unlikely anybody would have pointed to CUDA as one of the positive aspects of the GeForce 8, all that has now changed, with the first three concrete applications ready or nearly ready for use: the BadaBOOM video transcoder and the Folding@Home GeForce beta client, both of which leave the CPU and Radeons in the dust, but also GeForce PhysX support, which has enabled many more developers (true, it wasn’t all that hard) to announce support for the technology in their upcoming games, even if it remains to be seen what difference the implementation will make. This considerably widens the field of application of CUDA-capable GeForce GPUs (from the GeForce 8 onward), provided that optimized software with broader appeal keeps coming and that AMD doesn’t manage to muscle its way onto the stage.