[hardware] Difference between NVIDIA Quadro and GeForce cards?

I'm not a 3D or HPC guy, but I've been tasked with doing some research into those fields for a possible HPC application. Reading benchmarks, comparisons and specs for NVIDIA Quadro and GeForce cards, it seems that for cards of a similar generation:

  • Quadro is 2x-3x the price of GeForce
  • hardware-wise, the differences are not that great
  • in benchmarks (3ds Max, Maya and some others) Quadro cards perform much better than GeForce ones

Does anyone know the exact technical differences that can cause such better performance? My speculation (and what can generally be read on the net), since the hardware is of similar spec, is that it's all in the drivers. If that's the case, what kind of functionality do the Quadro drivers offer that the programmers of 3ds Max and others have taken advantage of?

Of course, I'm not interested in marketing speak: higher business value, professionally oriented, better support, better QA, etc.

Tags: hardware, nvidia, hpc

Answers:


I have read that while the underlying chips are essentially the same, the design of the board is different.

Gamers want performance, and tend to favor overclocking and other tweaks that push frame rates up but occasionally burn out the hardware.

Businesses want reliability, and tend to favor underclocking so they can be sure that their people can keep working.

Also, I have read that the Quadro boards use ECC memory.

If you don't know what ECC memory is about: it's a [relatively] well-known fact that memory sometimes "flips bits" (experiences errors). This does not happen too often, but it is an unavoidable consequence of the underlying physics of the memory chips and the world we live in. ECC memory adds a small percentage to the cost and a small penalty to the performance, and has enough redundancy to correct occasional single-bit errors and to detect (but not correct) the somewhat rarer multi-bit errors. Gamers don't care about that kind of accuracy, because for them those are just very rare visual glitches. Companies do care, because those glitches would wind up as glitches in their products, or else would require double or triple checking (which winds up being a 2x or 3x performance penalty for some part of their business).
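To make the correction idea concrete, here is a toy sketch of the classic Hamming(7,4) code, the textbook ancestor of the SECDED codes that real ECC hardware applies to whole memory words (this is illustrative only, not how GPU ECC is actually wired):

```cpp
#include <cstdio>
#include <cstdint>

// Encode 4 data bits (b3..b0) into a 7-bit Hamming(7,4) codeword.
// Positions 1,2,4 hold parity bits; positions 3,5,6,7 hold data bits.
uint8_t encode(uint8_t d) {
    uint8_t b3 = (d >> 3) & 1, b2 = (d >> 2) & 1,
            b1 = (d >> 1) & 1, b0 = d & 1;
    uint8_t p1 = b3 ^ b2 ^ b0;  // parity over positions 3,5,7
    uint8_t p2 = b3 ^ b1 ^ b0;  // parity over positions 3,6,7
    uint8_t p4 = b2 ^ b1 ^ b0;  // parity over positions 5,6,7
    // Bit i of the byte is codeword position i+1.
    return p1 | (p2 << 1) | (b3 << 2) | (p4 << 3)
              | (b2 << 4) | (b1 << 5) | (b0 << 6);
}

// Recompute the parities; the 3-bit syndrome is the 1-based position of
// a single flipped bit (0 = no error). Fix it, then extract the data.
uint8_t decode(uint8_t c) {
    uint8_t s1 = ((c >> 0) ^ (c >> 2) ^ (c >> 4) ^ (c >> 6)) & 1;
    uint8_t s2 = ((c >> 1) ^ (c >> 2) ^ (c >> 5) ^ (c >> 6)) & 1;
    uint8_t s4 = ((c >> 3) ^ (c >> 4) ^ (c >> 5) ^ (c >> 6)) & 1;
    uint8_t syndrome = s1 | (s2 << 1) | (s4 << 2);
    if (syndrome) c ^= 1 << (syndrome - 1);  // correct the flipped bit
    return (((c >> 2) & 1) << 3) | (((c >> 4) & 1) << 2)
         | (((c >> 5) & 1) << 1) | ((c >> 6) & 1);
}

int main() {
    for (uint8_t d = 0; d < 16; ++d) {
        uint8_t cw = encode(d);
        for (int bit = 0; bit < 7; ++bit)        // flip each bit in turn
            if (decode(cw ^ (1 << bit)) != d)
                printf("FAILED: d=%u bit=%d\n", d, bit);
    }
    printf("all single-bit flips corrected\n");
    return 0;
}
```

Any one of the seven codeword bits can flip and the decoder still recovers the original four data bits; hardware ECC performs the same trick at memory-word granularity on every access.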

Another issue I have read about has to do with hooking the graphics card up to third-party hardware; in other words, sending the images to another card or to another machine instead of to the screen. Most gamers just use canned software that has no use for such capabilities, while companies that do use that kind of thing get order-of-magnitude performance gains from the more direct connections.
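The modern descendant of those "more direct connections" is CUDA's peer-to-peer API (and, for third-party devices, GPUDirect, which NVIDIA has historically enabled only on the professional lines). A minimal sketch, assuming a machine with two CUDA-capable GPUs:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Probe whether GPU 0 can address GPU 1's memory directly and, if so,
// copy a buffer card-to-card without staging through host RAM.
int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    if (n < 2) { printf("this demo needs two GPUs\n"); return 0; }

    int can01 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    printf("GPU0 -> GPU1 peer access: %s\n", can01 ? "yes" : "no");

    const size_t bytes = 64 << 20;  // 64 MiB test buffer
    void *src = nullptr, *dst = nullptr;
    cudaSetDevice(0); cudaMalloc(&src, bytes);
    cudaSetDevice(1); cudaMalloc(&dst, bytes);

    if (can01) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  // map GPU1's memory into GPU0
    }
    // With peer access this travels over PCIe/NVLink directly between the
    // boards; without it, the runtime silently stages it through host RAM.
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();
    printf("copied %zu MiB GPU0 -> GPU1\n", bytes >> 20);
    return 0;
}
```

Avoiding the staging hop through host memory is exactly where the performance gains from "direct connections" come from.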


Surfing the web, you will find many technical justifications for the Quadro's price. The real answer lies in the demand for reliable, task-specific graphics cards.

Imagine you run an architectural firm with many big projects on deadline, and your computers are used for only one specific CAD package. If the foundation of your business is supposed to rest on these computers, you would want to make sure that foundation is strong.

For such clients, NVIDIA engineered cards like the Quadro, providing what they call a "professional solution". And if you are among the targeted clients, you will really appreciate the reliability of these graphics cards.

Many believe GeForce cards have become powerful and reliable enough to take the Quadro's place. But in the end, it depends on the software you are mostly going to use and on how important reliability is to what you do.


It's called market segmentation, or something like that. NVIDIA produces one very configurable chip and then sells it in different configurations to different market segments; the configurations have essentially the same elements, and hence the same bill of materials, although one would usually expect to find components of higher quality on the more expensive Quadro boards.

What differentiates Quadro from GeForce is that GeForce usually has its double-precision floating-point performance severely limited, e.g. to 1/4 or 1/8 of that of the Quadro/Tesla GPUs. This limitation is purely artificial, imposed solely to differentiate the gamer/enthusiast segment from the professional segment. Lower DP performance makes GeForce boards bad candidates for things like scientific or engineering computing, and those are the markets the money streams from. Quadros also (arguably) have more display channels and faster RAMDACs, which allows them to drive more screens at higher resolutions, the sort of setup perceived as professional for CAD/CAM work.
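You don't have to take the spec sheet's word for it: a rough CUDA microbenchmark along these lines will show the FP64/FP32 throughput ratio on whatever card it runs on (a sketch only; exact ratios vary considerably by generation):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Dependent FMA chain: each thread runs ITERS fused multiply-adds whose
// result feeds the next, so timing reflects arithmetic, not memory.
template <typename T>
__global__ void fma_chain(T* out, T seed) {
    const int ITERS = 4096;
    T a = seed + threadIdx.x, b = (T)1.0000001, c = (T)0.0000001;
    for (int i = 0; i < ITERS; ++i)
        a = a * b + c;
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;  // keep the work live
}

template <typename T>
float time_kernel(const char* name) {
    const int blocks = 1024, threads = 256;
    T* out; cudaMalloc(&out, blocks * threads * sizeof(T));
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    fma_chain<T><<<blocks, threads>>>(out, (T)1.0);  // warm-up launch
    cudaEventRecord(t0);
    fma_chain<T><<<blocks, threads>>>(out, (T)1.0);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms = 0; cudaEventElapsedTime(&ms, t0, t1);
    printf("%s: %.2f ms\n", name, ms);
    cudaFree(out);
    return ms;
}

int main() {
    float f = time_kernel<float>("float  (FP32)");
    float d = time_kernel<double>("double (FP64)");
    printf("FP64/FP32 time ratio: %.1fx\n", d / f);
    return 0;
}
```

On a DP-limited GeForce the double-precision run takes many times longer than the float run; on a Quadro/Tesla part of the same generation the gap is far smaller.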

As Jason Morgan has pointed out, there are tricks that can unlock some of the disabled features in GeForce cards to bring them on par with Quadro. These usually involve soldering and void the warranty on the card. Since HPC puts a lot of stress on the hardware, and malfunctions occur more frequently than one would like them to, I would advise against using cheap tricks.


Hardware-wise, the Quadro and GeForce cards are often identical. Indeed, it is sometimes possible to convert some models from GeForce into Quadro simply by uploading new firmware and changing a couple of resistor jumpers.

The difference is in the intended market and hence cost.

Quadro cards are intended for CAD. High-end CAD software still uses OpenGL, whereas games and lower-end CAD software use Direct3D (aka DirectX).

Quadro cards simply have firmware and drivers optimised for OpenGL. In the early days OpenGL was better and faster than Direct3D, but now there is little difference. Gaming cards only fully support a limited subset of OpenGL, hence they don't run it very well.
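A concrete, easily testable example of that driver gating is quad-buffered stereo: Quadro drivers have historically exposed GL_STEREO contexts, while GeForce drivers refuse to create them. A minimal probe, assuming GLFW is available (any OpenGL context-creation library would do):

```cpp
#include <cstdio>
#include <GLFW/glfw3.h>

// Request a quad-buffered stereo OpenGL context, a feature the Quadro
// drivers have historically exposed and the GeForce drivers have not.
int main() {
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_STEREO, GLFW_TRUE);   // hard constraint in GLFW
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE); // no on-screen window needed
    GLFWwindow* win = glfwCreateWindow(64, 64, "stereo probe",
                                       nullptr, nullptr);
    if (!win) {
        printf("no quad-buffered stereo context (typical of GeForce)\n");
    } else {
        glfwMakeContextCurrent(win);
        printf("renderer: %s\n", (const char*)glGetString(GL_RENDERER));
        printf("stereo context created (Quadro-class driver)\n");
    }
    glfwTerminate();
    return 0;
}
```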

CAD companies, e.g. Dassault with SolidWorks, actively push high-end cards by not supporting DirectX at any usable level of performance.

Other CAD companies, such as Altium with Altium Designer, made the decision that forcing their customers to buy more expensive cards is not worthwhile when Direct3D is as good as, if not better than, OpenGL these days.

Because of the cost, there are often other differences in the hardware, such as less aggressive clocking, more memory, etc., but these have relatively minor effects compared with the firmware and driver support.