for an upcoming job i'm trying to determine if it's worth it to acquire a new graphics card for my (rather old, athlon-xp-based) sidekick system or if it's less hassle overall to get a complete machine on loan for the duration of the job. what i need is decent pixel shader 3.0 preview ability. there's no way in hell i'd replace my trusty quadro with some gamer card in my workhorse machine, so it has to be a card for the old box.
i'd need an AGP card, 128 or 256 mbytes - i don't care. ATI or nVIDIA? in this case i don't care either, it's only for running some engine viewer/editor for shader finetuning anyway. however, the final product might very well end up on xbox 360. it would be nice not to get a card with a fan that makes one deaf, or one that puts out insane heat and sucks an enormous amount of power. TV-out would be nice to have, too.
i was looking at the radeon X1600 Pro, would that be a good fit? or are the nVidia boards better for my kind of application? and if so, a GF6600 or what? afaik the high end nVidia boards are quite the monsters, probably too much to handle for this machine with a mainboard designed in the era of geforce 3/4s.
keep in mind i'm not going to use the board for anything else. as soon as the job is over i'll be most likely getting rid of it, hence it doesn't need to be future proof or whatever.
Replies
JKM: i know those, but it seems they have overheating problems unless you have a lot of airflow in the case.
this is an athlon xp system - it's warm in there. far hotter than in my much higher clocked dual-xeon work machine, actually. whatever AMD put into their CPUs...
And passive cooling on a video card? Come on, maybe it works for desktop use. MAYBE.
e.g. trackmania with sm3 runs pretty badly (15-20 fps), but with sm2 it works well... still, I got the card partly just to have sm3 at all.
I've heard good things about ATI's x line. I didn't realize they had an AGP version of the X1600.
maybe check out www.tomshardware.com
i want to keep my quadro for stability, dual DVI, ability to run maxtreme and its build quality, mostly. i'm aware that the consumer boards are just as fast (in d3d, which i dislike using in max).
if i had the budget, i'd get a shader 3.0-enabled quadro. but the overall performance difference isn't worth it in this generation.
i am surprised to see all those new models coming out in AGP versions as well, btw. even the 7800 from nvidia is available.
good times!
[ QUOTE ]
i want to keep my quadro for stability, dual DVI, ability to run maxtreme and its build quality, mostly. i'm aware that the consumer boards are just as fast (in d3d, which i dislike using in max).
if i had the budget, i'd get a shader 3.0-enabled quadro. but the overall performance difference isn't worth it in this generation.
[/ QUOTE ]
You wanted to say that consumer cards are several times faster than your quadro (it probably gets beaten by $300 cards in everything, as it must be a very old one if it has no SM3.0 - based on the fx5900 or something?).
BTW decent consumer cards come with dual-DVI these days.
BTW why would you use OpenGL in max? It has no shader support etc.
shader support? sorry, i want to work efficiently, not watch the app slow down and crash. shader support in max is incredibly buggy and resource hungry -> external viewer. the direct3d viewport on nvidia, from all i've seen, is fugly at best: flickering & z-fighting, dodgy wireframe overlays, shitty transparency and probably more i have forgotten by now. not acceptable at all.
animation timing and such will be done with sufficient performance on my workhorse anyway without shaders applied.
thanks.
I was about to post that PCIe has much higher bandwidth specs, and then came across this.
spec listings are often just there to deceive, or so it seems. there's a lot of difference between some theoretical peak performance you'll rarely hit and the everyday results in the real world.
unless it's at least regularly 2x the speed of my old kit, i'm not even going to bother thinking about upgrades for performance reasons anymore - it's just not a noticeable enough difference to me. graphics cards seem to have improved mainly on the shader side anyway; can't say i've seen a spectacular performance increase in raw wireframe, textured and shaded display modes since the days of the geforce3.
With edged faces on, I could clone a 200k-tri object 20 times and it remained completely usable.
For some reason the performance dies if a single object has a high tri count, though.
(tested under DX9 with an X1900XT)
Ah yes, now please show me the GF3 that can do that, k?