I was watching this Nvidia presentation from last year where they talk about bringing iRay-rendered scenes to Virtual Reality, and something stood out to me.
https://www.youtube.com/watch?v=uFahmqmnKX0
For those unaware, iRay is a ray tracer/path tracer. It uses the CPU and GPU to render scenes, but only one frame at a time (so not real time). And because it's meant to work on a frame-by-frame basis and is ray traced, there is theoretically no polygon/texture/shader limit, etc., since the cost scales with the rays traced rather than directly with the amount of geometry.
However, Huang says that they rendered out light probes, and at 5:19 he says "everything is real time" and plays with the HTC headset.
https://www.youtube.com/watch?v=uAVJ3QsJ0fY
Keep in mind, the Virtual Reality we see in games right now requires a ton of polygon budgeting to keep up with the 90-120 fps requirement (roughly 8-11 ms per frame). So what is going on with the final output? Is the demo in fact playing back, in real time, the same polygon count and path-traced lighting of a standard iRay scene, or is it a video being delivered to both eyes (and if so, how would you interact with it)?
Replies
Basically, what he is demoing is a one-button export solution to "make a QuickTime 360 thingie" from a given iRay scene at a given vantage point. Great for archviz/interior design presentations, but not VR in the sense you are thinking.
TL;DR: It's just like a VR porn video.
Still, it has me excited for the future of CG. Instead of rendering stills from one camera spot, every viewpoint could be rendered at once. Being able to effortlessly translate through the entire scene so you can look around it in VR would be the next step.
BTW: this doesn't require a Vive or Rift at all. A Gear VR reaches the same quality level (mobile performance is well enough for such applications); even on a plain Android phone with Google Cardboard you can have this experience, though on Cardboard only with some quality tradeoffs.
At a basic level, iRay can take snapshots of both a left-eye and a right-eye image, which are used to make the 360-degree panorama. This is the stuff that runs on any headset and isn't performance taxing.
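To make that concrete, here is a minimal sketch (my own illustration in Python, not iRay's actual API; the function name and the 6.4 cm eye separation are assumptions) of how an omni-directional stereo panorama generates its rays: every pixel of the equirectangular image maps to a direction on the sphere, and the ray origin is shifted sideways by half the eye separation so the stereo effect holds in every viewing direction.

```python
# Sketch only: ray origins/directions for one eye of an omni-directional
# stereo (ODS) equirectangular panorama. Not iRay code; names and the
# 0.064 m interpupillary distance are assumptions for illustration.
import numpy as np

def ods_rays(width, height, eye=+1, ipd=0.064):
    """eye = +1 for the right eye, -1 for the left."""
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    lon = (xs + 0.5) / width * 2.0 * np.pi - np.pi    # azimuth, -pi..pi
    lat = np.pi / 2.0 - (ys + 0.5) / height * np.pi   # elevation, +pi/2 (up)..-pi/2 (down)

    # Viewing direction for each pixel on the unit sphere.
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)

    # ODS trick: offset each ray's origin sideways (tangent to a circle of
    # radius ipd/2) so left/right images differ in every viewing direction.
    tangent = np.stack([np.cos(lon), np.zeros_like(lon), -np.sin(lon)], axis=-1)
    origins = eye * (ipd / 2.0) * tangent
    return origins, dirs
```

You would path trace each (origin, direction) pair once, offline, and write the results into the left/right halves of an over/under panorama; after that, displaying it is just a texture lookup, which is why any headset can handle it.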
What Nvidia showed in the above videos was actually a bit more complex. It's using light field technology, which gives a bit more control/interactivity, such as toggling between lights, changing the color of objects interactively, and the ability to "teleport" around the scene. This was running off their Quadro cards, so in a way it is real time; just not enough for you to modify/interact with the entire scene.
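On the "toggle between lights" part: one common way to get that kind of interactivity out of an offline render (my guess at the general technique, not a claim about what Nvidia actually does) is to exploit the fact that light transport is linear. You path trace each light's contribution into its own layer ahead of time, and the viewer just sums scaled layers every frame, which is trivially real time.

```python
# Sketch only (hypothetical helper, not Nvidia's/iRay's API): real time
# light toggling by compositing per-light layers rendered offline.
import numpy as np

def composite(light_layers, intensities):
    """light_layers: dict name -> HDR image (H, W, 3) rendered with only that light on.
    intensities:     dict name -> user-controlled scale factor (0.0 = light off)."""
    out = np.zeros(next(iter(light_layers.values())).shape)
    for name, layer in light_layers.items():
        # Light transport is linear, so scaled contributions simply add up.
        out += intensities.get(name, 0.0) * layer
    return out

# Example: switch the ceiling light off and warm up the desk lamp.
# frame = composite({"ceiling": img_a, "lamp": img_b, "window": img_c},
#                   {"ceiling": 0.0, "lamp": 1.5, "window": 1.0})
```

Recoloring objects can work along the same lines with per-object layers; the expensive path tracing happens once, and the per-frame work is just a weighted sum.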
There's one more thing I could try for myself (since I am an iRay developer): there is apparently a way to render scenes in real time for VR using the ActiveShade mode within 3DS Max. This is what I was interested in in my first post, because I wanted to know if you can take the exact same scene that was made for a CGI/offline render and render it again in real time for Virtual Reality.
You might be wondering, "why not just use Unreal Engine 4/Marmoset to view such projects in real time?" Originally, this was something I was going to do. But because I already have everything made in 3DS Max using the iRay renderer, I would still have to make certain changes when porting assets over and re-rendering them for real time. Since solutions already exist where iRay can render out scenes and have them displayed in VR, this saves me a lot of going back and forth moving assets into Unreal Engine 4 just to achieve the same goal of a VR-ready image!
Sources:
https://forum.nvidia-arc.com/showthread.php?14883-nvidia-IRAY-VR
http://www.tomshardware.co.uk/nvidia-iray-vr-gtc-2016,news-52753.html
http://www.nvidia.com/object/iray-for-3ds-max.html
https://blogs.nvidia.com/blog/2015/12/02/architects-use-nvidia-iray/
(See: it all makes much more sense as soon as the word "realtime" is banned altogether from the description.)
Bottom line: if you want to make "sphere" VR images, use any renderer that can render to a sphere camera. If you want to make a proper explorable VR environment, use a game engine. Nothing new here.
For example, after I read all those links, I went and updated my software and found that the previous real time render mode was dramatically improved. Why is this important? Before the update, the real time mode was barely usable, so the only practical alternative was sticking with the slow, frame-by-frame path tracer. Maybe it's just a coincidence, but after Nvidia started making their big push into VR, they revisited this part of iRay and gave it better support for real time shaders. Certain materials like glass or SSS are no longer broken, and a scene that used to render for hours I can now explore instantly, with a bit of noise here and there.
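That "instantly, with a bit of noise here and there" behaviour is classic progressive rendering. As a rough sketch of the general idea (not iRay internals), the viewer keeps a running average of cheap 1-sample-per-pixel passes: the first frames are noisy, the image cleans up the longer you hold still, and any camera or material change just resets the average.

```python
# Sketch only: progressive accumulation of noisy path-traced passes
# (generic technique, not iRay internals).
import numpy as np

class ProgressiveBuffer:
    def __init__(self, height, width):
        self.accum = np.zeros((height, width, 3))
        self.samples = 0

    def add_pass(self, noisy_frame):
        """noisy_frame: one 1-sample-per-pixel render of the current view."""
        self.samples += 1
        # Incremental mean: accum += (x - accum) / n
        self.accum += (noisy_frame - self.accum) / self.samples
        return self.accum  # display this; it gets cleaner every pass

    def reset(self):
        """Call whenever the camera moves or a material changes."""
        self.accum[:] = 0.0
        self.samples = 0
```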
It's still the same software, but even traditional ray tracers are now starting to borrow from real time technology and use it to push both mediums forward. See what I'm getting at now?
https://www.youtube.com/watch?v=_rAHM70jfzo
Nvidia is working on something similar, as well as Otoy.
But that's all starting to blur now. Even Pixar is starting to develop a new hybrid renderer that emphasizes real time.
http://www.cgchannel.com/2017/08/pixar-unveils-renderman-22-and-renderman-xpu/
Looks like the race for real time ray tracing is now on. I feel like I bet at the right time to pursue this technology back in 2016, and instead of waiting, I've been keeping pace with Nvidia's and Pixar's solutions, especially with VR now being available on shelves.