With PBR having become the norm these days (and doing a pretty good job of mimicking reality, I might add), I want to know: what's next?
For example, I've seen more and more that VR is starting to get its claws into full-length titles (Fallout and Doom, for example) versus titles that resemble tech demos. One of the major issues I see with realistic rendering techniques for VR is that normal maps don't hold up well enough: they essentially fake extra geometry, but in VR you have actual depth perception, which breaks that illusion. Do you see something completely new coming along to replace the normal map, or will it come down to being able to throw more polygons into scenes to make those illusions more imperceptible?
Another example: smarter rendering with machine learning. Nvidia debuted a technique where they use "deep learning" to help their ray tracing engine fill in the gaps and 'guess' what certain pixels should be, in order to lower rendering times.
https://www.youtube.com/watch?v=1tbHkWmOuAA
Will the next step rely more on better hardware specs (being able to pack more on screen) or more on revolutionary code-writing? The simple answer is obviously both, but I'm hoping to get some more specific insight on what technologies the people here think are starting to mature to the point where we may be seeing them in the games of tomorrow.
Replies
https://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf
And also Nvidia's real-time global illumination is really cool. I can see that being improved and used in games in the future. And I imagine that as computers get more powerful, it will be able to do more reflections in real time.
I don't think I've seen the Nvidia GI thing you were talking about. I'll need to look into it.
I give it 5 years until the PC master race are competing to see who has the least noise.
On the topic of AI: noise, as it happens, is exactly what researchers are working on right now - clever renderers that prioritise only what matters to the clarity of the image (as opposed to treating all of it equally), etc.
edit: watched the OP's video - yeah exactly they're moving in this direction fast.
edit: here's otoy's renderer - unity's new renderer - running realtime 2 years ago:
https://www.youtube.com/watch?v=FbGm66DCWok
or five years ago:
https://www.youtube.com/watch?v=aKqxonOrl4Q
With the right settings, current top end machines run it fine. With developer support as engines start to integrate this stuff, gpu manufacturers have no option but to respond and performance will rise.
I'm not sure I "get" what the difference is between that nVidia denoiser demo and current routines, or what makes it faster, but it looks promising.
Arc System Works did a presentation on how they got their game to look like traditional anime while being 3D; I'm hoping we get to see more technology like that.
https://www.youtube.com/watch?v=Qx-ISmfOF4g&t=3s
https://www.youtube.com/watch?v=Eqm_MiONvtU
I'll be happy when the two videos I posted make headway for providing artists with new tools.
As far as the normal map VR thing goes, I understand the mechanics of it and why it still works as it should; I was merely providing an example of what I was talking about, but I agree that it was not a great one.
@JordanN I think what you're describing is that you want more stylized shaders, which tell the PBR system how to react to lighting and all that other stuff; I believe a PBR system would work well for any stylized application.
http://www.pbrt.org/
in 2004
wow...that was really cool, had no idea anything like that existed.
Between TB3's GI and the super experimental stuff buried in CryEngine, I'm excited to see Voxelized RT GI move into a deployable state for games. Seeing specular reflections and lighting being generated on the fly based off of voxelized scenes that can run on a consumer GPU really is something. It's not quite ray tracing, but it's way more feasible in the near term and produces promising results.
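To make the cone tracing part a bit more concrete, here's a very rough toy of the idea in Python (entirely my own simplification, not TB3's or CryEngine's actual code - the grid, resolutions and cone aperture are made up): the scene is pre-voxelized into a mipmapped occupancy grid, and a cone is marched by sampling coarser and coarser mips as its footprint grows.

import numpy as np

def build_mips(voxels, levels=4):
    # Average-pool the base occupancy grid into a mip chain, mimicking how
    # the engines pre-filter the voxelized scene.
    mips = [voxels.astype(float)]
    for _ in range(levels - 1):
        v = mips[-1]
        n = v.shape[0] // 2
        mips.append(v.reshape(n, 2, n, 2, n, 2).mean(axis=(1, 3, 5)))
    return mips

def cone_trace_occlusion(mips, origin, direction, aperture=0.5, max_dist=1.0):
    # March along the cone; as its footprint grows, sample coarser mips and
    # accumulate occlusion front-to-back. This is the trick that makes it so
    # much cheaper than firing lots of individual rays.
    direction = direction / np.linalg.norm(direction)
    base_res = mips[0].shape[0]
    occlusion, dist = 0.0, 1.0 / base_res
    while dist < max_dist and occlusion < 1.0:
        radius = aperture * dist
        level = min(len(mips) - 1, max(0, int(np.log2(radius * base_res + 1e-6))))
        res = mips[level].shape[0]
        p = np.clip(((origin + direction * dist) * res).astype(int), 0, res - 1)
        occlusion += (1.0 - occlusion) * mips[level][tuple(p)]  # front-to-back
        dist += radius                                          # step grows with the cone
    return min(occlusion, 1.0)

# Toy scene: a solid block floating above the query point; tracing straight up
# should report heavy occlusion.
grid = np.zeros((32, 32, 32), dtype=bool)
grid[12:20, 20:28, 12:20] = True
print(cone_trace_occlusion(build_mips(grid), np.array([0.5, 0.3, 0.5]), np.array([0.0, 1.0, 0.0])))

Roughly speaking, doing the same accumulation with injected radiance instead of plain occupancy is what gives the on-the-fly diffuse and specular GI.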
https://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf <--- where PBR in the games industry was directly copied from.
https://www.youtube.com/watch?v=87E6N7ToCxs
The movie it was developed for. The very first iteration of a fully PBR production.
I said in my first post, I'm interested in what Arc System Works did. You can find a presentation they did and it goes into detail about it.
I also think people are either deliberately ignoring this sentence, or need to read it again.
"I want to see art branch out so that future games and movies don't have to mimic one common artstyle just so they can claim to be realistic."
There are styles which profit from PBR and others where there is no gain from using it.
There are more things than PBR out there, and rendering technique is absolutely important to art style and stylized art; there were a lot of things that weren't possible before PBR. Personally I love what it did for stylized art, but it's by no means an all-around solution.
@CrackRockSteady
It absolutely does lock you into a certain style, just as the old specular system did, for that matter. Things like Sin City or Prince of Persia 4 would be 100% impossible to do with PBR.
Now, I don't think Jordan's point was "this is better than that", but rather "the more the better". That's at least what I'm going to take out of it :)
A realistically styled FPS like CoD and a heavily stylized game like Sunset Overdrive or Overwatch all use PBR. If you're going to try to tell me that these games are all the same art style, you're batty.
Edit: I also was not saying that anything and everything can (or should) be done with PBR
That being said!
There's one thing that clearly doesn't work well in VR at this time: realtime reflections. They end up being rendered independently for each eye (which makes sense), but the result is a constant shimmering effect because of the small differences between the left and right eye. It's not noticeable in real life, but in VR it is very jarring. I really hope this gets addressed soon.
Next gen is path tracing guys, there's no two ways about it. That's the next step. That, and leveraging machine learning to make it - and any post FX we have in the meantime - less costly.
I give it a maximum of 5 years before the PC master race are competing with each other for speed, and from there not long before Nvidia/AMD respond properly. In the meantime PowerVR hardware is speeding ahead - mobile hardware can currently do it better than desktop, but it's not a case of far-off future tech, it's just that we're on hardware not designed _for_ it.
https://home.otoy.com/otoy-and-imagination-unveil-breakthrough-powervr-ray-tracing-platform/
That isn't to say we'll have perfect offline-style renders on average gamer hardware this year - it's gonna be noisy and slow if it's all turned up - but if all you want is proper reflections, proper transparency etc. on normal hardware, then it's here already and working fast with no noisiness whatsoever.
I suspect devs will start to offer it as an optional setting and let the PCMR push hardware to improve.
Although PVR ain't so bad - we might see PowerVR consoles again; the Dreamcast was one.
Dual top end Nvidias will do a fairly noise free beautiful 60fps render right now though, all on the GPU, so we're not terribly far off games.
Unity have actually been playing with it for years; I've seen a number of demos that run in the thousands of FPS when it's just the reflections you want. I've even seen mobiles do it absolutely fine.
A 2-3 year old demo of what all of us are getting this year:
https://www.youtube.com/watch?v=FbGm66DCWok
From Otoy's own slides - and whilst I understand a company will market its tech, I've seen it myself.
On the software side, the clever people doing research right now are working on renderers that minimise the work that needs to be done by intelligently deciding which parts of a render actually need samples, etc., as opposed to the brute-force methods of the last 10 years.
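As a very rough illustration of what "prioritising what matters" means in practice, here's a toy adaptive sampler in Python (my own toy, not any particular research renderer - the scene and thresholds are made up): take a few samples everywhere, then spend the expensive extra samples only on pixels whose estimate is still noisy.

import numpy as np

rng = np.random.default_rng(0)

def noisy_pixel_sample(x, y):
    # Stand-in for one path-traced sample of a pixel: a ground-truth value
    # plus noise, with some screen regions much noisier than others.
    truth = np.sin(x * 0.3) * np.cos(y * 0.3)
    noise = 1.0 if (x // 16) % 2 else 0.05   # "hard" vs "easy" columns
    return truth + rng.normal(0.0, noise)

def adaptive_render(width, height, base_spp=4, extra_spp=32, var_threshold=0.05):
    # Pass 1: a few samples everywhere, keeping track of per-pixel variance.
    samples = np.zeros((height, width, base_spp))
    for s in range(base_spp):
        for y in range(height):
            for x in range(width):
                samples[y, x, s] = noisy_pixel_sample(x, y)
    mean = samples.mean(axis=2)
    var = samples.var(axis=2)

    # Pass 2: only pixels whose estimate is still noisy get the expensive
    # extra samples -- the "prioritise what matters" part.
    refine = var > var_threshold
    for y, x in zip(*np.nonzero(refine)):
        extra = [noisy_pixel_sample(x, y) for _ in range(extra_spp)]
        mean[y, x] = (mean[y, x] * base_spp + sum(extra)) / (base_spp + extra_spp)
    return mean, refine.mean()

image, refined_fraction = adaptive_render(64, 64)
print(f"spent extra samples on {refined_fraction:.0%} of pixels instead of all of them")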
https://en.wikipedia.org/wiki/PVR
My money is on plant variety rights
Here is some footage of it in action: it takes ~6 seconds to resolve a complex frame that isn't a mess of noise, which is very fast compared to a traditional CPU renderer, but it needs to be roughly 350x faster (even more than that if we consider this is only rendering on half the screen) for real-world end-consumer use. And again, this is rendering a canned cinematic, not dynamic gameplay.
https://www.youtube.com/watch?v=RxoH_Cwvwe0
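To put a number on the speedup needed (my arithmetic, just frame time versus a 60fps frame budget):

# Rough sanity check on the "~350x" figure (my numbers, not the exact ones above):
frame_time_now = 6.0            # seconds to resolve one clean frame in the demo
frame_budget_60fps = 1.0 / 60.0 # ~0.0167 s available per frame for real-time use
speedup_needed = frame_time_now / frame_budget_60fps
print(round(speedup_needed))      # ~360x for a full frame at 60fps
print(round(speedup_needed * 2))  # roughly double again if only half the screen was traced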
Otoy's light field solution is, as far as I can tell, a light baking system which for all I know may be very fast, but is not real time path tracing.
For example:
Style Transfer
https://www.youtube.com/watch?v=0ueRYinz8Tk
Animation
https://www.youtube.com/watch?v=Ul0Gilv5wvY
The tech is still a bit too heavy (like the raytracing denoiser) to be used in games today, but down the line, say 2-5 years from now, we will have more of this "magic" usable in the mainstream.
Likewise, raytracing-like effects already dominate games today (screen-space reflections, voxel tracing, etc. are all forms of "tracing"). That trend will continue.
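For reference, this is roughly what a screen-space reflection trace does, boiled down to a toy in Python (my own simplification - real implementations march in clip space with thickness checks and a refinement step):

import numpy as np

def screen_space_reflection(depth, start_xy, start_depth, step_dir, max_steps=256):
    # March a reflected ray across the depth buffer until it ends up at or
    # behind the depth stored at that pixel, i.e. it "hit" on-screen geometry.
    h, w = depth.shape
    x, y, z = float(start_xy[0]), float(start_xy[1]), float(start_depth)
    dx, dy, dz = step_dir
    for _ in range(max_steps):
        x, y, z = x + dx, y + dy, z + dz
        if not (0 <= int(x) < w and 0 <= int(y) < h):
            return None              # ray left the screen: SSR has no answer here
        if z >= depth[int(y), int(x)]:
            return int(x), int(y)    # hit: reuse that pixel's colour as the reflection
    return None

# Background at depth 10 with a closer object (depth 6) in the middle of the frame.
depth = np.full((128, 128), 10.0)
depth[:, 60:90] = 6.0
print(screen_space_reflection(depth, start_xy=(20, 64), start_depth=5.0,
                              step_dir=(1.0, 0.0, 0.05)))   # lands on the closer object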
We also get more and more games that use alternative ways to render their scenes (distance fields)
https://www.youtube.com/watch?v=q6flyIrvKCA
When I see the E3 videos today and compare them to, say, 10 years ago, it's amazing how far things have come - the craftsmanship, the amount of detailing, etc. That's why, imo, we will not see huge overnight jumps anymore, because we already have incremental improvements on so many fronts. Another aspect is that we have to work with "incrementally" changing hardware as well, so it's not like we get infinitely more/different power accessible overnight. High-end capabilities take a few generations to trickle down.
Even more than in being applied as effects, machine learning will help a lot in content creation - clean-ups, deriving data from paint/photo/video etc. - and therefore drive the cost of rich environments down. There will not be a "make art" button, but we will get much closer to automating processes. So yes, I expect a lot of movement on tools built around this.
All the major research studios in games/film etc. are looking into this technology, so I suspect in the next few years we will see a lot of movement there.
Therefore, as usual, exciting times ahead. Maybe even more than before.
Traditionally, replacing all your rasterisation code with raytracing code so that every pixel is drawn by emitting rays into the scene, like a full offline renderer (30 rays per pixel, etc.), is going to be slow for now.
However, we can use it in other ways, for example a hybrid renderer whereby you submit geometry to the path tracer, build a database of the scene, and use shaders to define ray behaviour for much better shadows, true reflections, transparency - hell, even AI and sound propagation.
In the case of Unity's deferred renderer, the G-buffer can be reused as the input for the raytracer. For every pixel of the G-buffer, you have the properties of the surface that was visible to the camera - normal, position, colour and material ID - and you can use these to generate primary rays. Rather than emitting rays from a camera, you emit rays from the surface defined by the G-buffer.
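In rough Python pseudocode (my own sketch of the idea, not Unity's actual API - the array layout and the hemisphere sampling are placeholders):

import numpy as np

def rays_from_gbuffer(gbuffer_position, gbuffer_normal, rng=np.random.default_rng(0)):
    # The rasteriser has already resolved visibility, so instead of shooting
    # primary rays from the camera we start rays directly from the surface
    # point stored in each G-buffer texel.
    origins = gbuffer_position.reshape(-1, 3)
    normals = gbuffer_normal.reshape(-1, 3)

    # Random hemisphere directions around each normal, as you might use for a
    # diffuse bounce; a mirror reflection would reflect the view vector instead.
    dirs = rng.normal(size=origins.shape)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    flip = np.sign(np.sum(dirs * normals, axis=1, keepdims=True))
    flip[flip == 0] = 1.0
    dirs = dirs * flip                       # keep directions above the surface

    # Offset slightly along the normal to avoid self-intersection; these
    # (origin, direction) pairs are what would be handed to the path tracer.
    return origins + normals * 1e-3, dirs

# Fake 4x4 G-buffer: a flat plane facing up.
pos = np.zeros((4, 4, 3)); pos[..., 0] = np.arange(4)[None, :]
nrm = np.zeros((4, 4, 3)); nrm[..., 1] = 1.0
o, d = rays_from_gbuffer(pos, nrm)
print(o.shape, d.shape)   # (16, 3) (16, 3)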
It's not a dichotomy - you do not have to have only one or the other. Path tracing can coexist with traditional rendering and go beyond it into all sorts of other uses, like sound, which is something that excites me a lot. That's what you were seeing in that noise-free Nvidia demo I shared, AFAIK. That's why it wasn't 5fps and full of noise like your video - not because of endless graphics cards, but because it's a hybrid render - all the benefits like reflections etc.
Lots of power coming here!
The GDC Vault might have a talk from 2013 or '14 on this, but I don't have access so I can't share it.
I am a mere artist at the end of the day, quoting researchers I've seen at conventions, so I'm going to have a talk with my more knowledgeable programmer brother tonight or tomorrow about this topic and see what I can glean.
The cost of tracing is just one part of the problem; the other is shading the things you hit. That can be substantial, and it's not trivial to manage efficiently, as rays may diverge in the types of surfaces they hit, etc.
There is another interesting technique that relates to this future, called decoupled shading, or texture-space shading. Basically it's like doing all shading in "lightmap" space, but the trick is that you only shade those texels that you actually need in the frame. This includes only updating the appropriate mipmap level of the shading texture. The benefit is that you also get implicit shader anti-aliasing, because we now sample the shading texture for the final result, and texture sampling "smooths" things out and is temporally stable across frames.
Finding out which texels to shade is typically done with a geometry pass and rasterization, but you can imagine another pass collecting all the indirectly required texels by any means that is "fast enough".
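A toy version of that in Python, just to show the shape of it (my own sketch; the "visible texel" list stands in for what the geometry pass would produce, and the mipmap-level selection is left out):

import numpy as np

def expensive_shade(u, v):
    # Stand-in for a heavy lighting/shading computation for one texel.
    return np.sin(u * 20) * np.cos(v * 20) * 0.5 + 0.5

def texture_space_shade(visible_uvs, tex_res=256):
    # The geometry pass told us which texels of the object's shading texture
    # are actually visible this frame, so only those get the heavy shader;
    # the frame itself then just samples the resulting texture.
    shading_tex = np.zeros((tex_res, tex_res))
    needed = np.zeros((tex_res, tex_res), dtype=bool)
    for u, v in visible_uvs:
        needed[int(v * (tex_res - 1)), int(u * (tex_res - 1))] = True

    for y, x in zip(*np.nonzero(needed)):
        shading_tex[y, x] = expensive_shade(x / (tex_res - 1), y / (tex_res - 1))
    return shading_tex, needed.mean()

# Pretend only a small patch of the object's UV space is on screen this frame.
rng = np.random.default_rng(1)
visible = np.clip(rng.normal([0.3, 0.7], 0.02, size=(500, 2)), 0.0, 1.0)
tex, fraction = texture_space_shade(visible)
print(f"shaded {fraction:.2%} of the texture instead of 100%")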
In the end, the programmer's toolbox for improving anti-aliasing and indirect lighting effects keeps growing.
Neural Network (Screen Space) Ambient Occlusion
http://theorangeduck.com/media/uploads/other_stuff/nnao.pdf
http://theorangeduck.com/page/neural-network-ambient-occlusion
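Very loosely, the idea is to learn the mapping from a local depth/normal neighbourhood to an occlusion value instead of hand-coding it. Here's a Python toy that is only inspired by that (my own synthetic data and a tiny two-layer network - the actual paper uses a purpose-built descriptor and its own trained network):

import numpy as np

rng = np.random.default_rng(0)

def fake_ao_target(depth_deltas):
    # Crude proxy for what classic SSAO estimates: "occlusion" grows when
    # neighbours are closer to the camera than the centre pixel.
    return np.clip(np.maximum(-depth_deltas, 0.0).mean(axis=1) * 4.0, 0.0, 1.0)

n_features = 16
X = rng.normal(0.0, 0.25, size=(4096, n_features))   # depth deltas of 16 neighbours
y = fake_ao_target(X)

# Tiny two-layer MLP trained with plain gradient descent on mean squared error.
W1 = rng.normal(0, 0.3, (n_features, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.3, (32, 1));          b2 = np.zeros(1)
lr = 0.05
for step in range(2000):
    h = np.maximum(X @ W1 + b1, 0.0)                  # ReLU hidden layer
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    g_pred = (2.0 / len(y)) * err[:, None]            # dMSE/dpred
    gW2 = h.T @ g_pred;        gb2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (h > 0)
    gW1 = X.T @ g_h;           gb1 = g_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# If training worked, the MSE should land noticeably below var(y),
# i.e. beat always predicting the mean occlusion.
print("MSE:", float(np.mean(err ** 2)), "vs var(y):", float(y.var()))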
I'm still interested to see what the next big thing for artists to learn will be, the way normal maps and PBR were. I'm expecting to see a lot more semi-procedural applications to speed up asset creation, like Quixel and Substance, but hopefully in other areas too.
https://www.youtube.com/watch?v=jkhBlmKtEAk
https://www.youtube.com/watch?v=J3ue35ago3Y
https://www.youtube.com/watch?v=LXo0WdlELJk
https://www.youtube.com/watch?v=81E9yVU-KB8
Can't wait to light and render my portfolio material with this technology soon!
Besides the obvious fact that UE4 is real time - if employers are mostly looking at static images first, would they even be able to tell the difference?
These ray tracing demos are running on $80,000 workstations, so couldn't anyone just claim their offline work is real time, given enough render power?