Not sure what it's like everywhere else, but in the UK most base-level third-party 3080 boards start at £50+ over RRP, with the OC boards starting at about £800 and going up to almost £1k. We can't get Founders Edition cards here at all, so they don't count.
Assuming AMD's offerings roughly match RRP, they'll be at a decent price advantage, so I'd expect them to do rather well in the consumer space over the next year or so.
The shop I just went to had a 3080 box sitting on display in the window.
However, a look at the price lists revealed no 3xxx GPUs on offer at all. Best you could have gotten there was the 2070. Such a tease!
Anyway, I was there because my 2080 Ti seems to have shortened the lifespan of my somewhat aging power supply. Considering that card is now basically an eco-friendly, low-power model compared to the new generation, it would seem advisable to factor a decent new PSU into the budget to go with your shiny new electricity-guzzling unobtainable GPU.
Over-speccing your PSU is generally prudent - particularly if you tend to run machines for a long time. With a high-ish-end CPU and a 3080/90 you're looking at maybe 500 watts just to keep the lights on - to me that says get a posh 1 kW PSU.
Now that's indeed what I would classify as posh. That's a few hundred for one of the better models, which might come as a bit of a shock to some.
I am cheaping out at 650 watts but in my defense I had to act quickly since this machine started crapping out on me yesterday and I have a milestone coming up next week.
Provided you cover the power requirement and it's not a total bag of shit, you're fine - over-speccing basically gives you longevity and headroom for unexpected spikes.
PSUs run at varying efficiency depending on the load; a 1000W PSU working at 60% load most of the time can be quite inefficient, even if it has a Gold (or whatever) rating. They are significantly more expensive as well, so a bigger unit is not necessarily the best choice.
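To put rough numbers on that, here's a tiny sketch. The efficiency curve is illustrative only, loosely modeled on the 80 PLUS Gold thresholds (roughly 87/90/87% efficiency at 20/50/100% load at 115V); check your actual unit's review curves.

```python
# Illustrative sketch only: the efficiency curve below is hypothetical,
# loosely based on 80 PLUS Gold thresholds; real PSUs vary per model.

def wall_draw(dc_load_w, capacity_w, curve):
    """Estimate AC wall draw for a given DC load, using a
    {load_fraction: efficiency} map and nearest-point lookup."""
    frac = dc_load_w / capacity_w
    nearest = min(curve, key=lambda f: abs(f - frac))
    return dc_load_w / curve[nearest]

gold_ish = {0.20: 0.87, 0.50: 0.90, 1.00: 0.87}

# 500 W of components on a 1000 W unit sits at the ~50% sweet spot:
print(round(wall_draw(500, 1000, gold_ish)), "W from the wall")  # 556 W
```

The same 500 W load on a smaller unit pushes you up the curve toward full load, where efficiency typically drops off again.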
Would there be much difference between 10GB and 24GB of VRAM when doing standard 3D work (like Substance Painter texturing, baking, etc.)? My old 1060 struggles a lot with 4K in Substance Painter and I'm wondering if a 3090 is justified for 3D art (but not much rendering).
A 4K texture (meaning 4096 × 4096 pixels at 32-bit color, i.e. 8 bits across each of 4 channels) uses 64MB of VRAM. So assuming there was absolutely nothing else using VRAM, you could have up to 160 layers at 4K resolution in memory before going over 10GB. In reality, though, you'll obviously never have 100% of your VRAM free, as the system and other apps use it for many things, so it's safer to say you'll only get between half (80) and three quarters (120) of that many 4K layers into VRAM before running out.
If you're curious, the basic math for getting texture memory usage in kilobytes is: (x * y * bitdepth) / 8192
Also, I believe Substance Painter only keeps the active Texture Set's texels in VRAM and uses a baked/merged/flattened map for the inactive ones being displayed. So as long as you don't go overboard on the number of layers per set, or have a crazy number of sets, 10GB should be fine for most people working in SP.
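The math above can be sketched as a quick script (assuming a plain uncompressed RGBA texture; real apps add mipmaps, compression and scratch buffers on top, so treat it as a ballpark):

```python
# Quick VRAM math for uncompressed textures, per the formula above.
# bitdepth is bits per pixel (RGBA8 = 8 bits * 4 channels = 32).

def texture_kib(x, y, bitdepth):
    """Texture size in KiB: (x * y * bitdepth) / 8192."""
    return (x * y * bitdepth) / 8192

layer_kib = texture_kib(4096, 4096, 32)
print(layer_kib / 1024, "MiB per 4K RGBA8 layer")       # 64.0 MiB

# Layers that fit in 10 GiB if absolutely nothing else used VRAM:
print(int((10 * 1024 * 1024) // layer_kib), "layers")   # 160
```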
RAM alone won't fix the slowdowns. It's largely about how you build the layer stack and where you paint. If you move the working layer to the top, it's a lot faster than working in the middle of the stack. It also gets slower if you use the passthrough blending mode a lot, especially on large groups.
@ZacD you are right, thanks. PSUs with an 80 PLUS certificate especially shouldn't have this issue. Sorry for the misleading information. Still, beware of cheap PSUs.
So only ray tracing won't work? Just using SD/SP normally will work with an AMD 6800, correct?
"Not supported" usually means they won't guarantee that everything works as intended, and if you happen to run into issues they're not going to investigate when you're not running an officially blessed configuration. So it wouldn't be smart to buy such a card if SP is an application you actually rely on.
Painter/designer only use RTX to accelerate baking afaik so if you're coming from a Pascal card you won't notice it missing.
Wrt memory and performance - we've done comparative runs based on our general Designer workflow and they seem to suggest that memory capacity doesn't make much difference.
In our tests between a 2080 and a 2080 Ti there was no tangible difference (single-digit percentages). We suspect there was a bottleneck elsewhere in the system that prevented the faster bits of the Ti from doing their stuff, but it did demonstrate that on an otherwise equal system the extra memory didn't do much.
In contrast, when comparing the 1070 vs the 1080 there's a big difference in both Painter and Designer - practically speaking it means the difference between being able to work at 2K vs 4K resolution.
My theory is that it's down to the 1080's higher memory bandwidth (both cards have a 256-bit bus, but the 1080 pairs it with faster GDDR5X) - which is why I don't think the 3080 is going to be at a real disadvantage compared to the AMD cards in Painter and Designer.
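For reference, peak memory bandwidth is just bus width times per-pin data rate; the spec figures below are my assumption from public datasheets, so treat the sketch as a back-of-envelope comparison rather than gospel:

```python
# Back-of-envelope peak bandwidth: (bus width in bits / 8) * Gbps per pin.
# Card specs below are assumptions taken from public datasheets.

def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

gtx1070 = bandwidth_gbs(256, 8)   # 256-bit GDDR5  @ 8 Gbps  -> 256.0 GB/s
gtx1080 = bandwidth_gbs(256, 10)  # 256-bit GDDR5X @ 10 Gbps -> 320.0 GB/s
print(f"1080 advantage: {gtx1080 / gtx1070 - 1:.0%}")  # 25%
```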
I'm not really sure I understand why though... AMD has said their new cards just use the DX12 raytracing API, nothing else, nothing special. So saying their cards aren't supported is saying DX12 isn't supported?
Or am I missing something?
Anything that uses Nvidia-specific systems (i.e. CUDA or OptiX) wouldn't be trivial to port to a system with AMD-specific hardware acceleration support. I imagine MD uses CUDA rather than OptiX, so they would need to port to DirectCompute, which is the D3D equivalent of CUDA.
DXR is analogous to OptiX, so if apps or games only offer RTX-specific hardware acceleration features, it's likely they are using OptiX. Since AMD has only had cards with ray-tracing-specific hardware for a couple of weeks, there hasn't been much motivation (or even information on how) to support them until now. But we'll likely see more developers switching from OptiX to DXR in the future, especially if AMD's ray-tracing hardware gets better (it's not great at the moment).
I was hoping the availability and price of the 3080 would get a bit better after the AMD release, but it seems to be getting even worse. Twice I thought I'd been lucky and gotten a card at a reasonable price, but both times the shop sent a cancellation because they had taken too many orders.
Technically this should have been possible with the previous Quadros too... at least on paper there should have been no barriers, but these are definitely more powerful.
Bitcoin is in a death roll and it's taking other altcoins with it. Hopefully the miners will stop buying large numbers of GPUs at inflated prices now. They are the main reason for low stock and high prices due to bulk buys. The market should be flooded with used GPUs if Bitcoin drops even further. Only buy mining GPUs at a third of the price, given years of 24-hour stress even in summer heat. F miners.
Hmmm, yeah, I wouldn't buy a used GPU. The gamer models especially aren't built for 24/7 operation, and you never know what they were used for by the previous owner.
Anyway, are you saying Bitcoin is doomed? I wouldn't complain about that (you can tell who here holds no Bitcoin investments...) - but I only see some news about a dip. It would need to be banned in the dollar and euro markets to bring it down, IMO.
https://ipon.hu/shop/csoport/szamitogep-alkatresz/videokartya?156=11608
https://ipon.hu/shop/csoport/szamitogep-alkatresz/videokartya?156=11610
https://videocardz.com/newz/nvidia-official-geforce-rtx-3060-ti-performance-leaked
It's also possible to undervolt your GPU to save a lot of power usage/heat with a more mild impact on performance
https://bjorn3d.com/2020/10/undervolting-the-rtx-3080-and-the-rtx3090/2/#split_content
I'm planning on undervolting a 3080 on a 650-watt PSU, since I don't want to upgrade the PSU yet.
Also, NVLink lets you run two cards with 96 GB of VRAM combined - Virtual Production directors are going to love this!
https://www.youtube.com/watch?v=Dw4oet5f0dI&feature=emb_logo
https://youtu.be/kw78MUnOqIs?t=87
Nvidia's CES 2021 RTX event
https://www.rockpapershotgun.com/2021/01/12/watch-nvidias-ces-2021-rtx-event-right-here/
https://www.youtube.com/watch?v=oi8WpLMy3ZM&feature=emb_title