Hey guys
My 8800gts finally blew out today
I'm looking to get a new video card fast, for about $200.
I'm looking at this nvidia gtx 460 1gb right now for $190
http://www.newegg.com/Product/Product.aspx?Item=N82E16814130565
I see a bunch of these evga 460's though and I don't really get the differences between them. Anyone know what the deal is with that?
Do you guys think this is a good card for my money?
Thanks!
Replies
They have an article specifically about the GTX460 from last week:
EVGA GeForce GTX 460 FTW
And also, their best card for the money lists are good:
Best Graphics Card for the Money: December
I think the GTX460 is a great card though, just personally. Good price too.
I'd urge anyone who wants it for anything besides gaming to avoid it.
Nvidia crippled their entire 400 series of cards to such an extent that cards two generations old routinely outperform them when it comes to viewport performance.
I was going to buy a 460 myself, but luckily I saw this before it was taken off:
http://en.wikipedia.org/wiki/GeForce_400_Series#OpenGL_Problems
Then I searched around the net and found many people having similar problems, with absolutely no acknowledgement from Nvidia (a support rep is hardly an official spokesperson).
I would love though for someone to come forth and show that their viewport performance is stellar.
First, is it any and all OpenGL apps? If so, then that sucks, but I guess it's not that big a deal since most stuff is DX anyway. Which makes me further wonder:
Second, in 3dsmax, if you set the viewport to D3D, which is what I use usually, does it still have that problem?
Basically, are those cards crippled on purpose for 3D CG apps, or is it just their OpenGL that's borked?
Bigjohn, from what I've read it affects your viewport performance even if it's set to D3D.
I've seen posts about problems in many 3D apps, from Max to Maya to Blender.
Who knows, maybe they crippled it so that people would get Quadros, or maybe it's their new architecture that's the problem.
Have a look here:
http://www.polycount.com/forum/showpost.php?p=1259589&postcount=22
I guess I'll be the guinea pig though haha. I'll let you know firsthand if I have problems.
I am on the lookout for a new video card after my ageing 8800 finally bit the dust (literally, it appears), so now I am searching for something worthy to purchase and fill that empty PCIe slot that's been gathering dust in the meantime.
As an Nvidia devotee I figured I'd look up their latest models and get something that fit more or less into my price range. Immediately I noticed that the GTX 460 was moderately priced and a fairly competitive performer. I was overjoyed, since I figured my search was over after only fifteen minutes or so of searching (minus the coffee...). Then, just before I was to make my joyful purchase, I stumbled upon this:
http://en.wikipedia.org/wiki/GeForce_400_Series#OpenGL_Problems
which, to put it into words, crushed my Nvidia bubble and essentially left me searching for another suitable video card.
So, my question boils down to this: should I get an ATI card (HD 5xxx or 6xxx), or buy a pricey 200 series Nvidia card (say a 270)?
I am a bit hesitant to get an ATI card mainly for two reasons: they are less developer friendly (Nvidia has CUDA and PhysX), and they have a pretty lousy track record as far as drivers and 3D app performance go.
I really would appreciate any input on this, and if any of you own a 400 series
card, I would love to hear your experiences regarding viewport performance.
Thanks a bunch!
Polycount is a very reasonable place to talk about video cards; computer hardware threads often pop up in GD and Tech Talk, either of which is fine.
So I'm not really sure what you're getting at about not asking a question?
One person does mention that he can not get the "High Quality" viewport mode in Maya working with his GeForce 570.
From what I've seen over the years Maya has always been the most finicky 3D software when it comes to graphics cards and drivers. So I suppose if you are a Maya user you'd best complain to Autodesk to get their act together.
So, if you want a gamer card for some 3D work, don't consider buying an ATI, that's my recommendation.
BTW, ZBrush does not depend on the GPU.
I use a Quadro CX in my home machine and I think a Quadro 460? in my work machine. Both have been great, as were the multiple nvidia cards I used before them.
http://www.tigerdirect.com/applications/searchtools/item-details.asp?EdpNo=7104664&csid=_22
$50 off coupon: http://www.tigerdirect.com/email/WEM2538.asp?cm_re=Homepage-_-Spot%2001-_-CatId_email_COMPONENTS_CLEARANCE_2010
I made a post about it in another 400 series issue thread a while back
http://www.polycount.com/forum/showpost.php?p=1259589&postcount=22
Hope that helps
ZBrush doesn't do OpenGL (not sure if it even uses the GPU at all); 3dsmax's viewport supports OpenGL, but I think most people just use D3D. Games are all DX as well.
So when does that issue come into play?
Is Tiger Direct as respectable and safe to purchase from as newegg?
I've looked at the 200 series, and they are significantly slower in almost every aspect except for the core-clock. So, is a 400 card still okay to buy for 3d apps when ATI is out of the question (I've had bad experiences too, and all the reading I've done looks very bad for ATI) and 500s very likely could have the same problems as the 400 (but are just more expensive)?
I ask this because I'm considering the card XenoKratios had mentioned (the 465). Please let me know ASAP, I haven't got much time left!
The more I look, the worse things are looking for the 400 series... even the older 8000 and 9000 series handle 4-6x more polygons than a much newer card such as the 470 or 480... what about the newer 500 series? Anything on that, or on how the 200 series does?
I really don't want to pay $1-5k for a Quadro. More like way less.
I guess no one really has a definitive answer.
So, is it wise to make the switch to ATI?
:poly122: I just ordered a GTX 470 as my 8800GT died some time ago... I don't want ATI (never liked it, also drivers and game support).
Care to elaborate more on these 400 series problems? How bad are they?
That would stop the problem
Hey Makkon,
I have chucked about 2.5 million polys into Maya 2011 and it's ok. I guess it's quicker than the 8800, but it's prolly not a whole lot quicker. See, the problem is I have never used, or even seen, a Quadro or anything 'high end' in action, so I don't really know how good it gets. As for ATI, I'm not sure I would personally make the switch, there seems to be a lot of Nvidia goodness to give up, but maybe this range of cards does have a problem in 3D apps. I'll need to play more to find out, but it would be very annoying as it's the exact thing I bought the thing for!! :poly118:
http://area.autodesk.com/forum/autodesk-3ds-max/installation---hardware---os/asus-geforce-gtx-480-problems-in-3ds-max-2011/
An older 8800 is good with Maya still, it can apparently handle 5-10 million polygons just fine. How's the 200 series? It seems the problems didn't start until the 400s.
The 400 and 500 series are really just good for gaming...
Could I, for example, theoretically SLI an 8800 and a 470 in one rig, thereby taking care of the viewport etc. issues (8800) and still being able to play amazing games (470)?
I'd really like to know.
From SLI's wiki article.
I don't see how Nvidia could cripple only certain applications and not everything in one go (games included).
The problem is rendering double-sided lighting in OpenGL; if I turn off double-sided in Blender, the viewport speeds up and is super fast (see the sketch just below)... if they were crippling applications, surely they would use a more ingenious method to do this?
I'm still hoping for a driver update to fix this though... Nvidia must realise that game studios are not going to buy Quadro cards when the target platform is consumer GPUs.
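For anyone who wants to poke at this on their own card, below is a rough sketch of the OpenGL state being talked about: the fixed-function GL_LIGHT_MODEL_TWO_SIDE toggle, which is roughly what DCC viewports of this era mean by double-sided lighting. It isn't any application's real viewport code, just a standalone PyOpenGL/GLUT toy (assuming both are installed) that draws a dense quad grid so you can flip the flag and compare frame times yourself.

```python
# Minimal sketch, not real viewport code: draws ~40,000 immediate-mode quads so
# you can compare frame times with GL_LIGHT_MODEL_TWO_SIDE on vs. off.
# Assumes PyOpenGL and a GLUT implementation are installed.
import sys
import time
from OpenGL.GL import *
from OpenGL.GLUT import *

TWO_SIDED = True  # flip to False and rerun to compare

def init():
    glEnable(GL_DEPTH_TEST)
    glEnable(GL_LIGHTING)
    glEnable(GL_LIGHT0)
    # The state this thread is about: lighting back faces as well as front faces.
    glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE if TWO_SIDED else GL_FALSE)

def display():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    start = time.time()
    glBegin(GL_QUADS)
    for i in range(200):          # 200 x 200 = 40,000 quads
        for j in range(200):
            x, y = i / 100.0 - 1.0, j / 100.0 - 1.0
            glNormal3f(0.0, 0.0, 1.0)
            glVertex3f(x, y, 0.0)
            glVertex3f(x + 0.01, y, 0.0)
            glVertex3f(x + 0.01, y + 0.01, 0.0)
            glVertex3f(x, y + 0.01, 0.0)
    glEnd()
    glFinish()  # make the timing reflect the actual GPU work
    print("frame time: %.1f ms (two-sided=%s)" % ((time.time() - start) * 1000.0, TWO_SIDED))
    glutSwapBuffers()
    glutPostRedisplay()

def main():
    glutInit(sys.argv)
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
    glutInitWindowSize(640, 480)
    glutCreateWindow(b"two-sided lighting test")
    init()
    glutDisplayFunc(display)
    glutMainLoop()

if __name__ == "__main__":
    main()
```

If the theory in this thread is right, the two-sided run should be disproportionately slower on a 400-series card than on an older 8800 at otherwise comparable settings.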
Not true. My mate has a GTX 400-series card (not sure which) but uses my old 8800GT as his extra card, which handles shit like physics etc. perfectly.
On the two-sided lighting issue, they've got a benchmark that might be useful and informative to run.
Check references for this section:
http://en.wikipedia.org/wiki/GeForce_400_Series#OpenGL_Problems
@odium - That seems to contradict what's written in the literature, unless the core of the 400 line is very similar to the 8800. Could you elaborate on how he has it set up?
Since it doesn't seem to fit what's written here:
"There are rare exceptions for "mixed SLI" configurations on some cards that only have a matching core codename (e.g. G70, G73, G80, etc.), but this is otherwise not possible, and only happens when two matched cards differ only very slightly, an example being a differing amount of video memory, stream processors, or clockspeed. In this case, the slower/lesser card becomes dominant, and the other card matches. Another exception is the GTS 250, which can SLI with the 9800 GTX+, as the GTS 250 GPU is a rebadged 9800 GTX+ GPU."
Also if he does run it like that somehow then it still kinda sucks:
"In cases where two cards are not identical, the fastest card or the card with more memory - will run at the speed of the slower card or disable its additional memory. (Note that while the FAQ still claims different memory size support, the support has been removed since revision 100.xx of NVIDIA's Forceware driver suite."
Yeah, I'm just passing on what the Nvidia tech support have told me.
I quoted all the articles on the wiki page in my initial support request to Nvidia.
I spoke to them today and the support guys said
That's a whole different animal. You can hook up an Nvidia card (even a 9600) to be exclusively a PhysX card. That's not SLI though, that's just a dedicated PhysX card, which gives quite a significant performance boost for games using PhysX.
A common thing today is to get a 3-way SLI motherboard, hook up two identical cards in SLI, then a third cheaper one in the third slot just for PhysX.
None of this has to do with 3dsmax though.
Are you in Maya? Try turning two sided lighting off and backface culling on, see if that helps~:poly121:
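If it helps, here is a rough Python-for-Maya sketch of those two toggles, meant to be run from the script editor. The panel name and flag names are from memory, so treat them as assumptions and double-check them against your Maya version's docs.

```python
# Rough sketch only: applies the two viewport tweaks suggested above.
# 'modelPanel4' (the default perspective panel) and the flag names are
# assumptions from memory -- verify them in your Maya documentation.
import maya.cmds as cmds

panel = 'modelPanel4'  # adjust to whichever model panel you're working in
cmds.modelEditor(panel, edit=True, twoSidedLighting=False)  # two-sided lighting off
cmds.polyOptions(backCulling=True)  # backface culling on for the selected meshes
```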
Anyways, check out this thread over at cgtalk, it will explain everything.
http://forums.cgsociety.org/showthread.php?f=7&t=907510&page=1&pp=15
How did you figure out that the memory clock dropped?
I have been running Blender with 16 million polys but Afterburner doesn't register a memory clock drop (see the clock-logging sketch below).
Currently I'm getting 4fps on my GTX 460 with the latest beta drivers.
Now I am running the 266.35 and it's without any hiccups so far.
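On the memory-clock question above: if Afterburner isn't registering anything, here is a hedged alternative way to log the clocks while the viewport is under load. It uses NVML through the pynvml Python bindings, which is my own suggestion rather than anything from this thread, and it assumes your driver exposes NVML and that pynvml is installed.

```python
# Rough alternative to eyeballing Afterburner: poll the core and memory clocks
# once a second while you work in the viewport, to see whether they drop.
# Assumes the pynvml package is installed and the driver exposes NVML.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
try:
    for _ in range(30):  # sample for ~30 seconds
        core = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
        mem = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_MEM)
        print("core: %4d MHz   memory: %4d MHz" % (core, mem))
        time.sleep(1.0)
finally:
    pynvml.nvmlShutdown()
```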
The beta drivers didn't work for me either; you should give the Quadro drivers a shot. I have the same card, same issue, and the Quadro drivers fixed it for the most part. I still feel it should be faster, but it's definitely an improvement.
Thanks!