Well, I saw a video of this running on a Radeon Vega 64 card, and it looks dope, though no frame rate was posted so it's hard to come to a conclusion. Not this video, a different one, but this one is dope too.
It's good that Crytek made a hardware-agnostic realtime ray tracing technology, but until AMD adopts something like it in their GPUs, Nvidia RTX will always be superior in my opinion. It's not only the fact that you can use ray tracing: the RT cores Nvidia uses allow game companies to optimize their games for ray tracing and get over 60 fps on current hardware with it enabled (even with an RTX 2060, take a look at Battlefield 5), and they also speed up offline rendering software such as Arnold, V-Ray, etc.
Although I've come to like AMD a lot lately, especially regarding their CPUs compared to Intel (Threadripper, and the new 7nm models releasing in July while Intel has been stuck at 14nm for over 4 years and doesn't want to change), for GPUs Nvidia won this round.
It's really cool. NVIDIA's RTX cards, with their Tensor cores, are also great for AI. I imagine that's where they'll make the most money, especially because of this. However, I wonder how long it will be before Epic/Unity does something like this, or if they have a signed deal with NVIDIA.
I seriously doubt Epic is going to move away from DXR anytime soon. And it looks like AMD won't support DXR until it makes sense for their entry level GPUs. I'm expecting a long slow war over approaches to ray tracing.
Any modern GPU can do ray tracing. If you use Toolbag to bake your textures, your GPU is doing ray tracing! nVidia's RTX cards have specialized hardware specifically optimized for ray tracing, so they should be faster than your typical non-RTX card, but that doesn't mean that other cards can't trace the rays.
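To make that point concrete: at its core, tracing a ray is just solving an intersection equation, which any processor can do. Here's a minimal CPU sketch of a ray-sphere intersection test (the basic building block of any ray tracer) in plain Python, purely for illustration; real GPU tracers do this same math massively in parallel:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return distance t to the nearest hit along the ray, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2 for t (a quadratic).
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2*a)  # nearer of the two roots
    return t if t > 0 else None  # hits behind the origin don't count

# A ray shot down the z-axis at a unit sphere centered 5 units away
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Dedicated RT hardware accelerates exactly this kind of test (plus the scene traversal around it); it doesn't enable anything that shader cores couldn't already compute, just much slower.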
RTX is sort of like PhysX: yeah, your CPU or GPU can calculate physics stuff, but that card (and later NVIDIA cards, after they bought Ageia out) had dedicated hardware for it. It's difficult to develop for or rely on any of this hardware/manufacturer-locked stuff though. If you do RTX-only features, you're locking out the vast majority of users.
What I hope to see is a more agnostic approach to ray tracing in game engines, where all cards at a certain spec (let's say DX11 and up) can do it, but cards with specialized hardware will be more efficient. The GPU manufacturers are going to push hard for these exclusive systems though.
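The "agnostic with fast paths" idea above boils down to one entry point whose backend is chosen by capability, not vendor. A tiny sketch of that dispatch pattern, with entirely made-up names for illustration (real APIs like DXR push this choice down into the driver instead):

```python
# Hypothetical capability-based backend selection for a ray tracing feature.
# Every device gets *some* path; specialized hardware just gets the fast one.

def pick_raytracing_backend(device):
    if device.get("has_rt_hardware"):
        return "hardware"        # dedicated RT cores: fastest path
    if device.get("supports_compute"):
        return "compute_shader"  # generic GPU compute fallback: slower, but works
    return "cpu"                 # last resort: trace on the CPU

print(pick_raytracing_backend({"has_rt_hardware": True}))   # hardware
print(pick_raytracing_backend({"supports_compute": True}))  # compute_shader
```

The appeal for engine developers is that the game ships one code path and nobody is locked out, which is roughly what Crytek's demo argues for.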
Not to mention there have been ray tracing tests on older games before; I remember seeing one for ET:QW back in the day. Now I'm supposed to be wowed by seeing ray tracing done in Quake 2?
Re: GPU manufacturers, if you continue to support exclusive systems like G-Sync and RTX, you'll continue to get exclusive systems. Though NVIDIA did recently allow you to use FreeSync on your NVIDIA card, so there is that. Obviously most people don't care about the politics of these things and only want the best product, which is understandable, but perhaps not the best long-term strategy.
The point of DXR was to allow an "agnostic" approach: an abstraction that lets hardware vendors implement raytracing however suits their hardware. Metal also got raytracing in the API, and Vulkan is in the process of standardizing it. The recent drivers from NVIDIA do allow it to run on many Pascal cards.
While I am not directly involved in raytracing, working on rasterization tech for that company gives some perspective on how technological progress is achieved. Imo this is not at all the PhysX situation, which was proprietary; this time standards are used. MS regularly invites developers and hardware vendors to discuss future DX features. DXR was not dropped on the earth from nowhere; a lot of people across several companies were involved over a long time. When we got new shader features in the past, it just happened that the major hardware vendors were on faster and closer release cycles to each other (not like at the moment).
The ability to not use DXR, as Crytek has done, is not going away. I would even argue it also benefits from a focus on raytracing/compute-centrism in general (larger caches, divergence handling, scheduling etc.). We were just in a really long phase of almost no bigger feature steps. DXR is in its first incarnation; it won't be the last, and the same goes for DX's or VK's feature sets.
As Unity and Epic are quite active in the pro market, they are happy to use that technology, even if for gaming mass adoption we are still a bit away. A few years down the line we will have different ways to do things than we have today, but you have to start building "collective" experience somewhere. And as you guys know, the chips will not give us those big gains from the past anymore, diminishing returns. We have to come up with more clever ways, problem-specific optimizations etc. Personally I had wished for a more lower-level approach than what DXR provides, but because it is the first wider incarnation, one plays it safe with more abstraction, as the mechanisms underneath are easier to change. Once the hardware vendors find good "pillars" underneath that work for multiple vendors, we will see lower-level access beyond the current API design; it's the same as what happened with previous evolutions. Imo the next years are super exciting regarding technology: Intel's re-entry, new consoles etc., good momentum to drive progress.
Looking at the DXR performance of the Pascal-series GPUs, it makes sense why Nvidia initially locked it down to RTX GPUs only. A 1080 Ti can perform worse than an RTX 2060 in DXR games. The raytracing memes, jokes, and press would have been much worse for Battlefield and Metro.
Hi Christoph! Glad to see you here dropping knowledge bombs.
DXR is theoretically agnostic, or will be if/when AMD decides to support it. It looks like they aren't going to until their whole range has ray-tracing-specific hardware, which is a somewhat strange decision. In the meantime, this makes it NVIDIA-exclusive, which isn't NVIDIA's fault of course.
Another drawback of DXR is that it's DX12-exclusive, so no support for Win 7, which a lot of people still run. Though I have heard rumors that DX12 may come to Win 7? Not sure how realistic that is to expect. So really, at least today, the number of systems running OS/GPU combinations that can use DXR is quite small.
But yes, I agree, all of this has to start somewhere, and the development of silicon optimized for ray tracing is quite cool. My greater point was more that you can do ray tracing on any GPU, without needing a specific API or driver or OS or whatever. Of course, cards with dedicated hardware and optimized drivers will be more efficient. For games this is somewhat of a moot point: even though older-gen cards can do ray tracing, it would be too slow, and really even the latest cards are probably too slow to hit 60fps with ray tracing on in most games. But like you said, there are more interesting uses in the professional market, where "fast" has a different meaning.
It looks like nVidia just released some new drivers that enable RTX even on cards that don't have the specialized RTX hardware, which is great to see. I should try some of the demos on my 1070 Ti =D
Vulkan avoids the Win7 issue, though granted, raytracing isn't yet standardized there, so it's "truly" NV-exclusive for the moment. But given that the way it is done in VK is very similar to DXR, the cross-vendor version in VK likely won't be fundamentally different. DX12 on Win7 is indeed a reality, although it seems MS does that only for certain titles or so... (World of Warcraft got it). That said, I also don't know if that includes DXR or not. There is a big push on improving HLSL-to-Vulkan support (thanks to Google's Stadia running on Vulkan and to MS open-sourcing the HLSL compiler), which should further improve native Vulkan adoption (we do see good momentum).
To be a little smart-ass about your DX11 statement: DX11 spans a lot of different hardware, 10 years more or less. A lot has changed in how resources can be handled (bindless flavors) and how work can be managed (async compute etc.), so more recent DX12-class hardware does make implementing something like DXR much more viable.
About the current big performance impacts we see in games: to me it looks as if folks "reached for the stars" (relatively heavy scenarios in very feature-rich titles). This can be good, as a stress test gives back the most data (what works, what doesn't), but it also means the initial perf is challenged. I hope that with more hardware and experience around (like now with the new drivers), people will find other ways to use the API that aren't as heavy a scenario, so that it becomes more like an additional tool.

When I see things like the Minecraft project, which isn't using DXR, or Quake 2 RTX, they give a glimpse of the future. No one knows that future in detail, so I would not say raytracing triangles like in that API is exactly what we need in the long run, but some components of this new effort will be lasting longer for sure. Okay, enough of my propaganda.