
RTX, DLSS 2 and Ampere right now


  • Justo
    Justo polycounter
    I'm not very knowledgeable on this matter. What third party models are you talking about? I thought all RTX-30XX products would come from nVidia only. Do other people take these products and alter them somehow, or do you refer to competitors (amd cards)? 
  • almighty_gir
    almighty_gir ngon master
    You know what i'd like?

    If i could get a 'NVidia Experience Premium" subscription, that costs me like.... i dunno, £75/month. And then NVidia just send me a new top of the line GPU once a year. Guaranteed stock, every time, subscriptions get priority stock.

    Kind of like a lease on a GPU or something. With the added bonus of when I have to return the old card, I know it will be responsibly recycled.
  • Axi5
    Axi5 interpolator
    almighty_gir said:
    You know what i'd like?

    If i could get a 'NVidia Experience Premium" subscription, that costs me like.... i dunno, £75/month. And then NVidia just send me a new top of the line GPU once a year. Guaranteed stock, every time, subscriptions get priority stock.

    Kind of like a lease on a GPU or something. With the added bonus of when I have to return the old card, I know it will be responsibly recycled.
    I like the idea but it doesn't make much sense for Nvidia to do this. Nvidia only release a new generation every 2 years anyway, you can just buy the latest and greatest on finance if that's what you want and sell it on when you're done, if only stock issues weren't a thing. When I got my 1080Ti a few years back I did it on finance because £30 a month for 2 years was an easier pill to swallow than £700ish in one go. I ended up paying it off within a year to save the small cost of interest anyway but I'm still glad I did it like that.
  • poopipe
    poopipe grand marshal polycounter
    Justo said:
    I'm not very knowledgeable on this matter. What third party models are you talking about? I thought all RTX-30XX products would come from nVidia only. Do other people take these products and alter them somehow, or do you refer to competitors (amd cards)? 

    Manufacturers (Asus, Gigabyte, MSI, etc.) take Nvidia and AMD reference designs and build custom cards based on them, with their own cooling solutions and PCB layouts.

    They usually offer 2-3 versions with mildly overclocked ones having ridiculous markup and a particularly stupid name.

    Nvidia used to sell the reference-design cards directly to consumers, but as I understand it the Founders Edition 30-series cards are different from the reference design and should really be considered a custom design.
    According to something I read last night, Nvidia will now be selling FE cards through Best Buy in the US, and those of us in Europe are fucked if we wanted one (which I did, cos they're pretty).

    Fwiw I believe the capacitor quality issue has been debunked now and it was really just that manufacturers didn't have time to test properly and allowed boost clocks to climb too high.

    It's all a bit of a moot point cos realistically nobody who didn't pre-order is going to see a 30-series card till the new year at this rate.

  • oglu
  • PolyHertz
    PolyHertz polycount lvl 666
    Supposedly the only reason those cards were even planned was because Nvidia was worried about 'Big Navi'. Sounds like either they're no longer worried, or they plan to do a proper refresh on 7nm next year instead of just shoving more vram on the existing cards.
  • Prime8
    Prime8 interpolator
    oglu said:
    I expected they would postpone the launch, but not that they would cancel it. I hope this rumor doesn't come true.
    AMDs cards are going to be announced next week, maybe that takes away a bit of the high demand for RTX30xx.
    For CUDA users that is not an alternative unfortunately.
  • PolyHertz
    PolyHertz polycount lvl 666
    So AMD just showed their new GPUs, and the 6900XT is apparently just as fast as a 3090, smaller, uses less energy, and costs $1000 (so $500 less than a 3090). Looking good for team red.
  • Prime8
    Prime8 interpolator
    PolyHertz said:
    So AMD just showed their new GPUs, and the 6900XT is apparently just as fast as a 3090, smaller, uses less energy, and costs $1000 (so $500 less than a 3090). Looking good for team red.
    I was quite surprised by AMD, didn't expect they would be able to deliver a competitor for the 3090 (performance wise). Independent reviews still have to confirm the claims though.
    The new Ryzen and RDNA2 are definitely a good match. Unfortunately some users depend on CUDA and are bound to Nvidia.

    In the end, the competition is great for us customers.
  • oglu
    oglu polycount lvl 666
    I like what AMD is doing. I'm bound to Nvidia, but it's great to have the competition.
  • almighty_gir
    almighty_gir ngon master
    A couple of points worth discussing:

    At the moment, the biggest defence NVidia has for the 3090, is that it will handle 8k resolutions better. It has more ram, and is basically bottlenecked by CPU speeds. The ram is almost certainly the deciding factor in whether or not the 6900 could compete at 8k.

    HOWEVER:
    Who the fuck games at 8k? Who even has a monitor capable of displaying 8k? That kind of resolution is *years* away from becoming even close to mainstream.

    Also: according to the AMD presentation, to reach those speeds you also need a 5000-series AMD CPU, in order to access some kind of shared memory buffer that lets the CPU pass things to the GPU more efficiently. It seems like they're basically leveraging tech they've created for the past two console generations for PC. Awesome.

    Still, it's great to see AMD pushing hard on both the CPU and GPU fronts, it's great to see more competition. And with rumours of a third company starting to make their own GPU's as well, this can only be a good thing for the market in general.
  • Prime8
    Prime8 interpolator
    8k Gaming is clearly Nvidia marketing bs... maybe some people are impressed by this.

    Today the RTX 3070 sale started; the shops are listing most custom boards for nearly the same price as 3080s to avoid running out of stock :lol:
    Sold out within seconds anyway, quite crazy.
  • Justo
    Justo polycounter
    Prime8 said:
    Sold out within seconds anyway, quite crazy.

    This has always been the case though with new releases of nVidia cards, no? :)
  • Prime8
    Prime8 interpolator
    Justo said:
    Sold out within seconds anyway, quite crazy.

    This has always been the case though with new releases of nVidia cards, no? :)
    Yes, but not this extreme and with such an increase in pricing imho.
    But thinking of other time limited online sales, like concert tickets, the problem with bots and scalpers is very similar.
  • wirrexx
    wirrexx quad damage
    If AMD RT baking works as well with Substance Painter as Nvidia's does, I'm staying with AMD (I currently own an RX 5700 XT, but I really want RT baking, and for UE of course).
  • ZacD
    ZacD ngon master
    Substance and UE4 use DXR for ray tracing so AMD cards should work out of the box with those features. 
  • PolyHertz
    PolyHertz polycount lvl 666
    So for those of you who are locked into Nvidia GPUs, what software are you using that requires CUDA or some other proprietary nvidia technology? I'd like to add an up-to-date list of such software to the PC building/Upgrading thread in tech talk.
  • ZacD
    ZacD ngon master
    I'm still using the older version of GPU lightmass until I get a GPU with DXR support (and until the new Epic GPUlightmass is feature complete/parity)
    https://forums.unrealengine.com/development-discussion/rendering/1460002-luoshuang-s-gpulightmass

    I believe Adobe has CUDA acceleration as well, but I don't know if that's moved to universally supported GPU acceleration.

    Iray in Substance Designer/Painter. I think there are some other RTX accelerations, but those might be generic DXR.


  • Prime8
    Prime8 interpolator
    PolyHertz said:
    So for those of you who are locked into Nvidia GPUs, what software are you using that requires CUDA or some other proprietary nvidia technology? I'd like to add an up-to-date list of such software to the PC building/Upgrading thread in tech talk.
    Would be interesting to see an overview, good idea.
    On the hardware side G-SYNC can be a reason to stick with Nvidia, though it's not needed for work obviously.
  • Justo
    Justo polycounter
    PolyHertz said:
    So for those of you who are locked into Nvidia GPUs, what software are you using that requires CUDA or some other proprietary nvidia technology? I'd like to add an up-to-date list of such software to the PC building/Upgrading thread in tech talk.

    Blender can use CUDA for rendering (and also OptiX for denoising renders, which requires RTX hardware), so as far as I know nVidia is the best choice for render speeds in that app currently.
  • Prime8
    Prime8 interpolator
    @Justo I was under the impression that Cycles works with OpenCL as well, are there any limitations? Performance wise CUDA seems to be better, but that might change with the new cards.
  • Justo
    Justo polycounter
    Prime8 said:
    @Justo I was under the impression that Cycles works with OpenCL as well, are there any limitations? Performance wise CUDA seems to be better, but that might change with the new cards.
    I haven't tried OpenCL (actually I did a little, but didn't see any performance gains, so I suspect my hardware or setup wasn't correct and needs further investigating), but there is definitely an option for that in Preferences :)
  • Obscura
    Obscura grand marshal polycounter
    From what I understand (I may be wrong though), OpenCL would not give a performance gain by itself. It just allows the same code to run on the CPU or the GPU. But if you want it to be efficient too, that's not how it should be done; some things are conceptually different on the GPU.

    For example, say you have a list of things you want to process. On the CPU, you iterate over them one by one. In a lot of cases the GPU can do this more efficiently by processing all of them at once. A texture is such a case: it's just a 2D list of pixels, and certain math on them can be done in one go, whereas doing the same on the CPU would require doing the math on each element separately. Because of this, very different code is needed on the GPU and the CPU, so running the same implementation on both will never be efficient.

    Fully parallel computation like this is called SIMD (single instruction, multiple data), and things need to be implemented in a very specific way to get it. This is ideal for GPUs, but CPUs can hardly do it.
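A minimal sketch of the contrast Obscura describes, using NumPy's whole-array operations as a CPU-side stand-in for GPU-style data parallelism (the function names and sizes here are made up for illustration):

```python
import numpy as np

# A "texture" is just a 2D grid of pixel values.
texture = np.random.rand(256, 256).astype(np.float32)

def brighten_loop(img, factor):
    # CPU-style: visit each pixel one at a time.
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = min(img[y, x] * factor, 1.0)
    return out

def brighten_parallel(img, factor):
    # SIMD-style: one operation applied to every pixel at once,
    # which is conceptually how a GPU processes a whole texture.
    return np.minimum(img * factor, 1.0)

# Same result, very different execution model.
assert np.allclose(brighten_loop(texture, 1.2), brighten_parallel(texture, 1.2))
```

The point is that the two versions are structured completely differently even though they compute the same thing, which is why running one implementation on both CPU and GPU is rarely efficient.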
  • thomasp
    thomasp hero character
    Rendering performance aside - over the years I've seen so many issues being brought up here, in other forums or in some software's bug tracker where the cause was simply that a non-Nvidia GPU was being used. All sorts of display issues, app crashes or functionality not working, you name it. AMD (ATI) wasn't always catching up performance-wise but their drivers appear to have been problematic for a long time, definitely since the start of the 2000's.

    That alone would seal the deal for me - no desire at all to troubleshoot the odd issue here and there because I felt like saving 50 bucks.

  • Prime8
    Prime8 interpolator
    Are AMD drivers still as bad as their reputation suggests, or have they improved? I personally have no experience with AMD GPUs; I've stuck with Nvidia since the Riva 128.
  • Blaizer
    Blaizer polycounter
    @Prime8 AMD drivers are still bad. A friend has an RX 5700 XT for gaming, and he gets black screens and crashes with the latest drivers.

    The RX 6800 with those 16GB looks pretty cool. The main issue I have with AMD is their drivers. Years ago I returned an RX 590 to Amazon due to performance issues, crashes and those black/grey screens.

    Nvidia have better stability, but those 8GB for a 3070... nope.
  • PolyHertz
    PolyHertz polycount lvl 666
    I heard they released a major driver update back in March to try and fix the black screen issue. Supposedly it fixed the issue for a lot of people, but looking into it recent drivers seem to have made it worse again.
  • Justo
    Justo polycounter
    Blaizer said:
    Nvidia have better stability, but those 8GB for a 3070... nope.

    I know nothing of GPU specs other than the article I just googled, but being a 3D artist I don't know how this translates to my daily activities of modeling/baking/rendering.

    Up to now my thought process had always been "well it's a new gen component so it's better", but reading about these new releases for the first time showed me that evidently this is not always the case. Is VRAM such a critical factor for our work? ( @PolyHertz perhaps this could be added to the What To Know info on GPUs in the Build Your PC sticky thread)
  • PolyHertz
    PolyHertz polycount lvl 666
    VRAM is primarily important for dealing with large amounts of texture data, such as when authoring 4K textures in Substance with many layers, or when rendering a large scene with many textures using the GPU. If you don't have enough VRAM the software will need to offload the data to main system RAM or your SSD/HDD and stream it back into VRAM in chunks as needed, which can bottleneck the card's performance.

    Added the above info about vram and the potential issues with AMD cards to the PC building sticky.
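As a rough illustration of the numbers involved (this assumes uncompressed RGBA8 textures at 4 bytes per texel; real apps use compressed formats and extra working buffers, so treat it as a sketch):

```python
def texture_vram_mib(width, height, channels=4, bytes_per_channel=1):
    """Uncompressed texture footprint in MiB (RGBA8 by default)."""
    return width * height * channels * bytes_per_channel / (1024 ** 2)

one_4k = texture_vram_mib(4096, 4096)
print(f"One 4K RGBA8 texture: {one_4k:.0f} MiB")  # 64 MiB

# A layered Substance-style project: say 20 layers, each keeping
# 5 channel maps (base color, roughness, metallic, normal, height)
# resident while you paint.
total_gib = 20 * 5 * one_4k / 1024
print(f"20 layers x 5 maps at 4K: {total_gib:.2f} GiB")  # 6.25 GiB
```

Once a working set like that exceeds the card's VRAM, the streaming back and forth described above is what eats your performance.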
  • Blaizer
    Blaizer polycounter
    @Justo The principal issue I have with VRAM is that if you don't have enough for GPU rendering (Arnold, Octane, or Blender), you won't be able to render complex/serious scenes or characters with displacement maps.

    11GB might be the sweet spot for gaming, but not for work. BTW, two 2080 Tis in SLI (11GB + 11GB) are great for rendering. Nvidia did well with the RTX 3090 and its 24GB, but I would have preferred two RTX 3070s with 16GB (it would have been cheaper and faster).
  • Obscura
    Obscura grand marshal polycounter
    11GB is not very future proof for games either, in my opinion. A lot of new games get close to that on high settings these days at 1080p, not to mention 4k.
  • Prime8
    Prime8 interpolator
    Hopefully AMD will now invest more into their software, after closing the gap on the hardware side.
    There is a new rumor about a 3080 Ti with 20GB to be announced in December, though that would make the 3090 kind of pointless.
  • Justo
    Justo polycounter
     PolyHertz said:
    VRAM is primarily important for dealing with large amounts of texture data, such as when authoring 4K textures in Substance with many layers, or when rendering a large scene with many textures using the GPU. If you don't have enough VRAM the software will need to offload the data to main system RAM or your SSD/HDD and stream it back into VRAM in chunks as needed, which can bottleneck the cards performance.

    If I understand this correctly, no matter how powerful a video card may be (how do you even measure this? By the number of its RT cores? Tensor cores? TFLOPS?), it will only be able to work in chunks as big as its VRAM size.

    It's kind of crazy to me that my 1070 has the same amount of VRAM as these lower-end models two generations after. That means that even if I get a 3070, when it comes to speed and better performance in texture-heavy apps like Substance Painter working at 4k, I will NOT see a significant boost?
  • Obscura
    Obscura grand marshal polycounter
    You will see some boost. VRAM just holds resources such as textures, while the GPU itself does the computation. So you can expect higher framerates in games, game engines, and DCCs. Substance would also finish its processes faster: bakes would be faster, for example, and adjusting heavy filters would spit out the result sooner. GPU-accelerated ray tracing would be significantly faster.
  • PolyHertz
    PolyHertz polycount lvl 666
    You would still see noticeable performance gains overall with a newer generation card, but that's only when all the texels that need processing at a given moment are in vram. So depending on how the texture is broken up in memory, switching between layers or moving the brush around to very different areas on the model could cause hitches / big frame drops (texels can't be processed unless they are in vram). The less often texels have to be loaded/unloaded from memory the more consistent performance will be.
  • Prime8
    Prime8 interpolator
    If you work with offline rendering like Blender Cycles, the lack of VRAM can get nasty, because it won't render at all if the data doesn't fit.
    At that point you need to spend time reducing the data size, which can be time consuming and frustrating.
  • Blaizer
    Blaizer polycounter
    Obscura said:
    11 gb is not very future proof for games either in my opinion. A lot of new games are close to that on high settings these days on 1080p not to mention 4k.
    Yeah, that's true, I have seen that with a couple of games. But it's like they reserve the memory.
  • EarthQuake
    Blaizer said:
    @Justo The principal issue i have with VRAM, is that if you don't have enough.. for GPU rendering (Arnold or Octane, or Blender), you won't be able to render complex/serious scenes or characters with displacement maps. 

    11GB might be the sweet spot for gaming, but not for work. BTW, 2 2080ti in sli (11gb+11gb) are great for rendering. Nvidia did well with the RTX 3090 and its 24GB, but i would have prefered 2 RTX 3070 with 16GB (it would have been cheaper and faster).
    Yes, this is worth repeating. GPU renderers need to fit all of the data into video memory. With 8 or 11GB, you may be limited if you have a complex scene or very dense geometry. With traditional CPU renderers the data goes to RAM, and most artists' workstations will have 32-64GB these days; even if that isn't enough, data can always be paged to disk. It's not feasible to page from VRAM to RAM or disk, so the more VRAM you have, the better. That's why these high-VRAM cards are exciting for artists.
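A hedged back-of-envelope version of that constraint (the per-vertex byte count and the 20% overhead reserve below are illustrative assumptions, not measured values):

```python
def mesh_mib(triangles, bytes_per_vertex=32):
    # Assume roughly one vertex per triangle and ~32 bytes per vertex
    # (position + normal + UV + tangent) as a ballpark.
    return triangles * bytes_per_vertex / (1024 ** 2)

def fits_in_vram(geometry_mib, texture_mib, vram_gib, overhead=0.2):
    # Reserve ~20% of VRAM for framebuffers, the BVH and the
    # renderer's own working memory.
    budget_mib = vram_gib * 1024 * (1 - overhead)
    return geometry_mib + texture_mib <= budget_mib

# A displaced character: 40M triangles plus 8 GiB of textures.
geo = mesh_mib(40_000_000)                    # ~1221 MiB of geometry
print(fits_in_vram(geo, 8192, vram_gib=8))    # False -> won't render
print(fits_in_vram(geo, 8192, vram_gib=24))   # True  -> fits on 24GB
```

Same scene, same renderer: an 8GB card simply can't hold it, while a 24GB card has room to spare, which is exactly the gap EarthQuake is pointing at.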
  • kolayamit
    kolayamit polycounter lvl 13
    No thank you, Nvidia can keep their 8 and 10GB VRAM GPUs. Do they expect us to be fools and waste $1500 for extra VRAM? For two generations Nvidia has been holding back VRAM; with the RTX 3080 they actually reduced it to 10GB. And for god's sake, it's the third time Nvidia has released an 8GB xx70 GPU. Grow up, please. Going with the 16GB AMD 6800 XT from a 1080 Ti; will update here about the performance when I have one.
  • poopipe
    poopipe grand marshal polycounter
    Assuming you're not planning to do any actual work with the card I imagine you'll have a lovely time with your AMD - provided they remember to build some working drivers this time 😂

    Has anyone done the sums on actual bandwidth?
    16GB of GDDR6 through a 256-bit pipe vs 10GB of GDDR6X through a 320-bit (or is it 384?) pipe probably isn't much different.

    Not that it makes any odds for me - I need CUDA and I'm not spending £1k on a GPU so it's a 3080 or nothing..
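The sums are straightforward if you take the commonly quoted per-pin data rates as assumptions (16 Gbps for the 6800 XT's GDDR6, 19 Gbps for the 3080's GDDR6X):

```python
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    # Peak memory bandwidth: bus width in bits times per-pin data
    # rate in Gbit/s, divided by 8 to get GB/s.
    return bus_width_bits * gbps_per_pin / 8

amd_6800xt = bandwidth_gbs(256, 16)   # GDDR6  -> 512.0 GB/s
rtx_3080 = bandwidth_gbs(320, 19)     # GDDR6X -> 760.0 GB/s
print(amd_6800xt, rtx_3080)
```

So on raw numbers the 3080's bus actually moves quite a bit more data per second; AMD partly offsets the narrower bus with RDNA2's 128MB Infinity Cache, so real-world results depend on cache hit rates.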
  • kolayamit
    kolayamit polycounter lvl 13
    I have seen this kind of thought process with Intel users way back when Ryzen 1st gen launched. Look at the CPU Market now.

    Please use one before typing these kinds of comments. The driver issues were fixed long ago, and having driver issues with a new product is common; Nvidia's RTX 3000 also had serious driver issues at launch. I am using two computers, with a 1080 Ti and a 5700 XT, and the 5700 XT works better with high-poly objects in Maya.

    I don't need CUDA. OpenCL and DirectX 12 Ultimate raytracing should work just fine. What's important for me is the 16GB of VRAM, and the new AMD GPUs are powerful enough.
  • ZacD
    ZacD ngon master
    A few things. I don't think it's smart to assume AMD's DXR performance is going to be better or worse than Nvidia's offerings; it's probably a good idea to wait for third-party benchmarks on rasterizing and raytracing performance. We're starting to see games and software take more and more advantage of raytracing: UE4, for example, is using DXR for its new GPU Lightmass.

    Running out of VRAM is a legit concern and can be a bottleneck for some people's workflows, but again, it's often people turning settings up to 8k and complaining, when there are workflows that deal well with low VRAM. Then again, the PS5 and XBSX have 16GB of GDDR6, which AMD's new GPUs match up with better.

    And there is an Nvidia bias among developers, which means Nvidia GPUs tend to have fewer software bugs, largely because software is developed for and by people who mostly use Nvidia hardware.


    It's best to wait for benchmarks and pick the GPU that best aligns with your wallet and use cases.
  • ZacD
    ZacD ngon master
    https://www.youtube.com/watch?v=oUzCn-ITJ_o

    TLDR: AMD GPUs are on par with Nvidia with ray tracing and DLSS off. Not so much otherwise.

    AMD only seems worth it if you REALLY need more vram over anything else. 
  • Obscura
    Obscura grand marshal polycounter
    Ehh, that's rough, especially considering the prices. I mean, 14 fps in Minecraft ray tracing?! That said, the Nvidia card isn't much better either with DLSS off, though it starts to become playable. I'm also not a fan of forcing 4k res so hard. Sometimes it's challenging to make complex scenes run nicely at 1080p already, especially with raytraced features enabled.
  • Prime8
    Prime8 interpolator
    From what I've read so far, OpenCL performance in Blender is still far behind CUDA; a 2080 Ti with RTX off is faster than a 6800 XT.
  • kolayamit
    kolayamit polycounter lvl 13
    Well, after the reviews I see no option other than Nvidia :( Will wait for a high-VRAM 3070 Ti / 3080 Ti.
  • Udjani
    Udjani interpolator
    DLSS is pretty much standard for new games, so I really hope AMD's Super Resolution is just as good; otherwise AMD cards have little value for someone building a PC for games today.
  • PolyHertz
    PolyHertz polycount lvl 666
    Nvidia is on their second generation of raytracing hardware (which they dedicate a significant portion of the GPU die to), while AMD is on their first generation of raytracing tech which doesn't even really get its own die space (they just attached a ray accelerator to each compute unit). So I'd say it was kind of a given the results would be like this. The bigger issue is their lack of an available DLSS equivalent, they need to get that up and running asap.

    Personally I think AMD should have priced the 6800/6800 XT $100 lower each, so they could tout more than just extra VRAM as a reason to buy their cards over Nvidia. That's how they beat Intel: undercut them on price while offering more of something (cores/VRAM).
  • poopipe
    poopipe grand marshal polycounter
    Not sure what it's like everywhere else, but in the UK most base-level third-party 3080 boards start at £50+ over RRP, with the OC boards starting at about £800 and going up to almost £1k.
    We can't get Founders Edition cards here at all, so they don't count.

    Assuming AMD's offerings roughly match RRP, they'll have a decent price advantage, so I'd expect them to do rather well in the consumer space over the next year or so.

  • ZacD
    ZacD ngon master
    As long as both GPUs are selling out, prices will be high and there will be continued demand for both. Although I'd be curious, if there were enough stock of both, how different the sales numbers would be.