I am curious why you figure a "high end gamer card" wouldn't be good for fusion 360
I phrased that wrong. A high-end gamer card may well be perfectly acceptable for graphics-intensive apps: 3D CAD, rendering, photo imaging. If you already have a good card and gaming is your primary thing, will it work? Yes. For simpler models and low assembly counts you won't see any difference, which I'd guess covers 99% of the people on this forum. Gaming is refresh-intensive but involves a (relatively) smaller facet-processing load than graphics-intensive apps like 3D CAD, rendering, and photo-realism, even though those apps are more 'static'. So the graphics card architecture is oriented toward grinding through graphics command processing. Companies like NVIDIA offer cards in every price range and PC format, but they still roughly segregate them by primary task; the uber-expensive cards are for uber-intensive applications like medical imaging. Like I was saying, RAM is also important; the two complement one another for 'typical' tasks. Slicing a section through a 500-part assembly is different processing than rendering fur, which is different again from calculating lighting/shading/reflection. Generally the more RAM the better, because it's cheap these days. For CAD, a state-of-the-art fast processor can only go so far, which is more a function of how the math problem must be solved internally (largely sequentially) than of access to multiple cores.
The other thing is practical but important. The software itself is usually built around the 'likely' graphics cards suited for the task, whether we like it or realize it. Many vendors will actually recommend specific cards and/or driver release numbers. Probably the number one problem is graphics issues resulting from cards and drivers that are fighting the software in some way, ranging from just a bit slower to not working at all.
I'm no expert at this stuff, it's way above my pay grade, but there is lots of info out there if you want to delve into it. There is always going to be a dude who says "I'm running laptop X with integrated motherboard graphics and no issues." And that's fine, it obviously meets his needs. Until it doesn't. A better metric is to just run a standardized, documented benchmark test and record the time. If it's 40 seconds instead of 5, then we are comparing apples to apples. If he's OK with 40 seconds, then that's fine too. If the smoke came out, well, good time for an upgrade. The new stuff is always better LOL
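The "record the time and compare" idea can be sketched in a few lines of Python. This is just a minimal illustration of the method, with a hypothetical stand-in workload; a real comparison would use an actual CAD benchmark scene instead of `demo_workload`:

```python
import time

def run_benchmark(workload, repeats=3):
    """Run a workload several times and return the best wall-clock time.

    Taking the best of several runs reduces noise from background
    processes, which matters when comparing two machines.
    """
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    return min(times)

def demo_workload():
    # Hypothetical stand-in for a real benchmark scene
    # (e.g. a rebuild or render test in your CAD package).
    total = 0
    for i in range(100_000):
        total += i * i
    return total

elapsed = run_benchmark(demo_workload)
print(f"best of 3 runs: {elapsed:.3f} s")
```

Run the same documented test on both machines and the two numbers are directly comparable, which is the whole point.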
this link kind of lays out the principles
https://www.graitec.co.uk/hardware/cad-workstation-guide/workstation-vs-gaming-graphics
there are tons of real world tests like this
https://www.solidworks.com/sw/support/shareyourscore.htm
https://www.pugetsystems.com/labs/a...-Comparison-What-Is-the-Meaning-of-This-1112/
core processing
https://www.cadtek.com/solidworks-use-multiple-cores/