After poring over the
Quixel Megascans page, I want to learn the process behind capturing real-world surfaces.
"...each pixel of every map has been physically scanned, and every material component has been fully separated. Every reflectance value has been measured, and each normal reveals how light interacts with the surface and subsurface."
I'm familiar with capturing diffuse, and with using polarizing filters to capture and derive the specular contribution, but that's just careful use of a DSLR. Creating normal maps from photos is hit or miss from there, and generally not considered best practice.
They hint at some proprietary technical wizardry without coming right out and claiming so: http://www.youtube.com/watch?v=8alYZgkwClM
Pretty sweet Quixel-branded PCB though.
TL;DR
My question is:
How do you capture (rather than derive) these maps?
- Specular
- Gloss
- Normal
- Displacement
- Ambient Occlusion
- Translucency
- Scatter
- Transparency
Are we talking about some crazy laser/radar/holographic projection/scanning tech, or just clever use of a nice camera?
Thanks in advance to anyone with experience capturing these materials who can point me in the right direction.
Replies
http://www.zarria.net/nrmphoto/nrmphoto.html
It may not be exactly the same, but it gives you a good idea of how Ready At Dawn are doing it.
Hopefully they'll go into depth a bit more when megascans is released. I'd really like to construct a DIY rig to produce my own (crappy) megascans.
Creating accurate normals from photos has been around for a while;
http://www.zarria.net/nrmphoto/nrmphoto.html
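To make the multi-light idea from that tutorial concrete, here's a minimal sketch of deriving a normal map from four photos of the same surface lit from the left, right, top, and bottom. It assumes Python with numpy and imageio available; the file names are placeholders, and this is a rough approximation of the technique, not anyone's production pipeline:

```python
import numpy as np
import imageio.v3 as iio

def load_gray(path):
    """Load an image as a float grayscale array in [0, 255]."""
    img = iio.imread(path).astype(np.float32)
    return img.mean(axis=-1) if img.ndim == 3 else img

def normals_from_four_lights(left, right, top, bottom):
    """Derive a tangent-space normal map from four directionally lit photos."""
    l, r = load_gray(left), load_gray(right)
    t, b = load_gray(top), load_gray(bottom)

    # The brightness difference between opposing lights approximates
    # the surface slope along that axis.
    nx = (r - l) / 255.0
    ny = (t - b) / 255.0
    nz = np.ones_like(nx)  # assume the surface mostly faces the camera

    # Normalize to unit vectors, then remap from [-1, 1] to [0, 255].
    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    return ((n * 0.5 + 0.5) * 255.0).astype(np.uint8)

iio.imwrite("normal.png", normals_from_four_lights(
    "lit_left.png", "lit_right.png", "lit_top.png", "lit_bottom.png"))
```

The photos have to be pixel-aligned (camera and surface locked down, only the light moves), which is presumably a big part of why a controlled rig helps.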
Splitting specular and diffuse lets you measure the specular intensity and know how much light is being reflected.
http://filmicgames.com/archives/233
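The core of that split is just an image subtraction: a cross-polarized photo contains (mostly) only diffuse reflection, while a parallel-polarized photo contains diffuse plus specular, so the difference isolates the specular term. A minimal sketch, assuming two aligned, identically exposed photos (file names are placeholders):

```python
import numpy as np
import imageio.v3 as iio

# Ideally convert both photos to linear light first (see the sRGB snippet
# further down); the subtraction is only physically meaningful on linear values.
cross = iio.imread("cross_polarized.png").astype(np.float32)        # diffuse only
parallel = iio.imread("parallel_polarized.png").astype(np.float32)  # diffuse + specular

specular = np.clip(parallel - cross, 0.0, 255.0).astype(np.uint8)

iio.imwrite("specular.png", specular)
iio.imwrite("diffuse.png", cross.astype(np.uint8))
```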
The Fox Engine tech video talks about creating linear textures from photos:
http://imgur.com/a/08z8k or https://www.youtube.com/watch?v=17nje72VnPE (the full talk)
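For anyone trying this at home, "linear" here means undoing the camera's sRGB gamma before doing any capture math. The standard piecewise sRGB decode is easy to write yourself; a small sketch (this is the IEC 61966-2-1 curve, not anything specific to Fox Engine):

```python
import numpy as np

def srgb_to_linear(srgb):
    """Piecewise sRGB decode (IEC 61966-2-1); input and output in [0, 1]."""
    srgb = np.asarray(srgb, dtype=np.float32)
    return np.where(srgb <= 0.04045,
                    srgb / 12.92,
                    ((srgb + 0.055) / 1.055) ** 2.4)

# usage: linear = srgb_to_linear(photo_uint8 / 255.0)
```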
If you think of a "reverse" process for SSbump (like normal maps), you can imagine using the right lighting setup to capture ambient occlusion from photos the same way you capture normal maps (possibly by comparing harsh angled lighting against soft angled lighting and taking the difference? A rough sketch of that idea follows below).
https://developer.valvesoftware.com/wiki/$ssbump
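Sketching out that speculation: average several harshly angled-light photos (crevices stay dark from every light direction), then divide out an evenly lit shot to cancel the albedo, and what remains roughly approximates occlusion. This is a guess at a DIY approach, not a confirmed pipeline, and all file names are placeholders:

```python
import numpy as np
import imageio.v3 as iio

def load_gray(path):
    """Load an image as float grayscale in [0, 1]."""
    img = iio.imread(path).astype(np.float32) / 255.0
    return img.mean(axis=-1) if img.ndim == 3 else img

angled = [load_gray(p) for p in
          ("hard_left.png", "hard_right.png", "hard_top.png", "hard_bottom.png")]
soft = load_gray("soft_even.png")

# Averaging the hard-light shots keeps directional shadows only where they
# agree (i.e. in cavities); dividing by the soft shot cancels the albedo.
shading = np.mean(angled, axis=0)
ao = np.clip(shading / np.maximum(soft, 1e-4), 0.0, 1.0)

iio.imwrite("ao.png", (ao * 255.0).astype(np.uint8))
```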
In the Quixel video you can see that they lined the inside of their box with LEDs, and I'm guessing they did that so they can tightly control the lighting from many different angles easily. When you split specular and diffuse, you're really just taking data from one lighting condition and another and pulling out the difference. It seems to me that you could use very similar techniques for stuff like AO, translucency, etc.
Lots of details here! They pretty much break down their whole setup, but it's just for albedo/normals. They are using generalized BRDFs for the reflectance of stuff like cloth, rather than trying to measure it, I think.
Course notes: http://blog.selfshadow.com/publications/s2013-shading-course/rad/s2013_pbs_rad_notes.pdf
Some slides from their PowerPoint: http://imgur.com/a/DYv30
Full slides: http://blog.selfshadow.com/publications/s2013-shading-course/rad/s2013_pbs_rad_slides.pptx
I think this pretty much confirms that this is possible with a very simple, low-cost setup (excepting the camera itself) - and no crazy lasers are required!
- Neutral Density: Light Craft Workshop 77mm Fader ND MKII ND Filter LCW
- Circular Polarizer: HOYA 77mm CIR-PL Slim