GeForce GTX Titan X – Expensive Big Maxwell
The GeForce GTX 980 managed to impress with its technical specifications. Still built on the 28 nm process, it outperformed the GTX 780 Ti and R9 290X while drawing much less power. This left clear headroom for a larger, better-performing chip on the same process. Just like two years ago, Nvidia has decided to use the fruits of its performance lead to release a pricey and powerful Titan X graphics card.
GeForce GTX Titan X is the latest graphics card from Nvidia, based on a single GM200 GPU. Built on the same Maxwell 2 architecture as the GM204 in the GTX 980, GM200 has also been known as "Big Maxwell". GM200 has 3072 CUDA cores (24 SMM units), 192 texture units, 96 ROPs, and a 384-bit memory bus, and consists of 8 billion transistors. In all of these specifications it is a 50% increase over GM204. Correspondingly, it also requires 50% more power, with a 250 W TDP. Interestingly, GM200 is also Nvidia's largest chip ever at 601 mm², larger even than the enormous chips powering the GTX 280 and GTX 480. In the Titan X, the chip is clocked at 1000 MHz (1075 MHz Boost, 12% lower than the GTX 980) and paired with a staggering 12 GB of VRAM. That amount of memory should be quite future-proof, even in an age when consoles have 8 GB of total RAM. The combination of unit counts and clocks means the Titan X should hold a 33% lead over the GTX 980, unless other constraints come into play. The price for top single-GPU performance remains as high as before – $999.
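The quoted 33% figure follows directly from the unit counts and clocks. A quick back-of-the-envelope sketch, assuming the GTX 980's reference 1216 MHz boost clock (roughly 12% above the Titan X's 1075 MHz):

```python
# Rough shader throughput scaling: CUDA cores x boost clock.
# Figures are the reference specs quoted above, not measured results.

def throughput(cuda_cores, clock_mhz):
    """Relative shader throughput in arbitrary units (cores x clock)."""
    return cuda_cores * clock_mhz

gtx_980 = throughput(2048, 1216)   # GM204: 2048 cores, 1216 MHz boost
titan_x = throughput(3072, 1075)   # GM200: 3072 cores, 1075 MHz boost

lead = titan_x / gtx_980 - 1.0
print(f"Theoretical lead: {lead:.0%}")   # ~33%: 50% more units x ~12% lower clock
```

In practice memory bandwidth, CPU limits, and boost behaviour can pull real-world results away from this idealised number in either direction.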
Unlike last time, however, the high price is harder to justify for consumers. Previous Titan cards had the fully unlocked double-precision compute capabilities of GK110, making them the cheapest compute cards available. To keep the transistor count and die size in check without the benefit of a smaller manufacturing process, Nvidia did not add more FP64 units to GM200. As a result, there is no double-precision performance to unlock. While some compute tasks get by with single precision, anybody wanting high FP64 performance across a wider variety of computational workloads has to either use the older Kepler Titans or pay much more for a Tesla and its error-correcting memory. Thus, the Titan X aims almost exclusively at gamers who are ready to pay a hefty premium to get as much performance as soon as possible.
In games, the Titan X maintains its 33%-or-higher performance lead over the GTX 980 as long as the CPU keeps up. Consequently, it outperforms the R9 290X by a wider margin still, sometimes by 50–70%. Even in Mantle-enabled titles, the Titan X is out of reach. Things favour the Titan even more at high resolutions and high VRAM usage, where the Titan X's effective bandwidth and memory capacity overshadow other cards. However, getting two R9 290X cards or a single liquid-cooled R9 295X2 is $300 cheaper than a Titan X, and two GTX 980s cost just $100 more. The current state of SLI and CrossFire means that only some games benefit properly from multiple GPUs, and memory capacity does not double in AFR mode. That leaves a single powerful GPU as the more reliable option. The future is a bit more interesting, however. With additional developer effort, Mantle and the upcoming DirectX 12 and Vulkan can put multi-GPU setups to more efficient use. This will not magically add proper multi-GPU support to every game, but top games and popular engines may get near-perfect multi-GPU scaling. Such a situation would leave the Titan much less appealing even to gamers chasing top performance.
Overall, Titan X is currently most expensive and powerful single-GPU gaming card available. It is interesting how long will it be able to keep this top performance spot and how long will Nvidia wait to introduce a cheaper, cut down version with 6 GB memory. Titan X's existence at $999 spot makes an enormous $700 price seem like a good deal, while once outrageous $550 is considered a bargain because of it. Titan X is a very convenient price anchor for Nvidia. As for AMD, lower prices on R9 2xx series keep them compelling, while they are working on their new generation. While they are facing similar engineering constraints as Nvidia, the early rumours suggest that we might get a very interesting GPU from them in the first half of the year. If you want to get the best performance at the lowest price, it is worth to wait a bit.