
Creating a Prototype MegaScanner

QuantumTheory polycounter lvl 6
Hello,

Being curious and inspired by the work of the Quixel team, I decided to see if I can mimic some of their results and try my hand at constructing my own prototype "megascanner."

It's a 1-square-meter wooden box with an open bottom and a hinged top. Each interior side has a set of white LEDs, each wired to a dimmer switch and an external portable power source. The top of the box has a hole cut out for a camera lens to see into. The camera sits on a clever vertical mount that aims it straight down into the box. Total cost: ~$300.

The camera is a Nikon D5100 with a 35mm lens and a polarization filter to reduce reflections. The 35mm lens doesn't capture the box's whole surface area; ideally I'd use a 20mm lens.

In the beginning I wasn't sure this was going to work. There's a lot to consider when photographing a surface: capturing detail while excluding shadows, for starters. Would the normals come through? Would the albedo be flat enough? What about AO and spec/gloss/metal/roughness?

It's still early work, but the initial results are promising and revealing. I only have progress on the albedo and normal so far. I still have to run tests to attempt a natural spec/gloss, even though it's likely impossible. Even Quixel's Megascans are a gross approximation, so I don't feel like I have a huge mountain to climb in terms of quality.

[Image: B_0g-raWIAEyejv.jpg]

Albedo

So the attempt at an albedo is on the left. The image came out pretty flat, which is nice. But looking closer you can see some artifacts I have to work out. The LEDs are obviously reflected in the smooth plastic of the buttons and D-pad. Those reflections can probably be taken care of with some diffusion paper covering the LEDs.

Shadowing from the cables onto the floor might be reduced by taking bracketed exposures and creating a tonemapped image. Need to test that more.
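For anyone curious what the tonemapping step would actually do, here's a minimal numpy toy of the "well-exposedness" weighting that exposure-fusion tools (e.g. OpenCV's MergeMertens) are built on. Real implementations also weight by contrast and saturation and blend in a multi-scale pyramid, so treat this as an illustration only:

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Mertens-style exposure fusion sketch: weight each bracketed shot by
    how well-exposed each pixel is, then blend.  `images` is a list of
    float grayscale arrays in [0, 1], all the same shape."""
    # pixels near mid-grey (0.5) get the highest weight
    weights = [np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)) for img in images]
    total = np.sum(weights, axis=0) + 1e-8
    # per-pixel convex combination of the brackets
    return sum(w * img for w, img in zip(weights, images)) / total

# toy example: an under- and an over-exposed version of the same gradient
base = np.linspace(0.0, 1.0, 5)
under = np.clip(base * 0.5, 0, 1)   # shadows crushed
over = np.clip(base * 1.5, 0, 1)    # highlights blown
fused = fuse_exposures([under, over])
```

The idea for the scanner would be that shadowed areas take their values mostly from the longer exposures, flattening the cable shadows before the maps are derived.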

Normal

The normal map is generated by taking four images of the subject and compositing them. The basic technique is described here:

http://www.zarria.net/nrmphoto/nrmphoto.html

But I'm in the process of tweaking this approach. The main difference is that I add a fifth light source: the LEDs on the camera's side stay on, which reduces the directional shadows from the other lights. Each image is taken with the lights on one interior side (left, right, top, or bottom) switched on.
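The compositing itself can be sketched in a few lines of numpy. This is my own rough reading of the difference-of-opposing-lights idea from the tutorial above, not the author's exact recipe, and the scaling is arbitrary:

```python
import numpy as np

def normals_from_four_lights(left, right, top, bottom):
    """Estimate a tangent-space normal map from four grayscale shots, each
    lit from one side of the box (float arrays in [0, 1], same shape).
    Function name and scaling are my own guesses at the technique."""
    nx = right - left          # brighter on the right => surface tilts right
    ny = top - bottom          # brighter on top => surface tilts up
    nz = np.ones_like(nx)      # start with a flat z, then renormalise
    length = np.sqrt(nx ** 2 + ny ** 2 + nz ** 2)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # pack [-1, 1] into the usual 8-bit normal-map encoding
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)
```

A perfectly flat, evenly lit surface comes out as the familiar uniform lavender (roughly 127, 127, 255), which is a quick sanity check for the rig's light balance.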

The result is promising but there are issues:
  1. Shadowing gets baked into the normal map, producing a "lump" effect wherever there was occlusion. The area around the cables is a prime example. I'm going to try tonemapped images from bracketed exposures for this.
  2. Obviously the lights contributed to the relief on the D-pad. They also distort the Xbox button.
  3. Rendering normals using this technique doesn't produce a proper z vector (blue channel); it comes out all white. I have some work to do to correct that.
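The all-white blue channel should be fixable in post: since a tangent-space normal is a unit vector, z can be recomputed from x and y. A small numpy sketch, assuming the usual 8-bit [0, 255] to [-1, 1] encoding:

```python
import numpy as np

def rebuild_z(normal_map):
    """Given an 8-bit RGB normal map whose blue channel came out flat
    white, recompute z per pixel so the vector has unit length:
    z = sqrt(1 - x^2 - y^2)."""
    n = normal_map.astype(np.float32) / 255.0 * 2.0 - 1.0   # decode to [-1, 1]
    xy_sq = n[..., 0] ** 2 + n[..., 1] ** 2
    # clip guards against x^2 + y^2 > 1 from noise or strong slopes
    n[..., 2] = np.sqrt(np.clip(1.0 - xy_sq, 0.0, 1.0))
    return ((n * 0.5 + 0.5) * 255.0).astype(np.uint8)
```

This is the same "renormalize" operation most texture tools offer, so a dedicated filter could do it too; the sketch just shows the math.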

Other Issues for Future Consideration:
  1. Tiling. Quixel just blends. It's evident on their leafy material. Some hand-tiling might work in some situations.
  2. Perspective distortion. As the surface gets closer to the edge of the viewable area, there is that parallax effect. Taking multiple shots as I move the box in increments of ~1m might not reduce the effect. A lens that can capture the entire area would help. That, and a larger box ;) And then there's:
  3. Stitching image sets of the same material to make one large material. Autopano Pro is great for my panoramas. For the hell of it, I used it for textures. Worked great, so I might try it for this when the time comes.
  4. Heightmaps. Quixel's heightmaps look like approximations/filtered from other images. I'll have to test.
  5. AO. Same as #4.
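On the tiling point (#1): the simplest "just blend" approach is to cross-fade a band of each edge over the opposite edge, trading a little texture area for a seamless wrap. A quick numpy sketch of that idea, not Quixel's actual method:

```python
import numpy as np

def make_tileable(tex, overlap=16):
    """Cross-fade an `overlap`-pixel band of the left/top edges over the
    strip past the right/bottom edges, so the result tiles without a hard
    seam.  Output shrinks by `overlap` pixels in each dimension."""
    h, w = tex.shape[:2]
    t = tex.astype(np.float32)

    # blend the left band over the strip that continues past the right edge
    a = (np.arange(overlap) / overlap)[None, :]      # 0 -> 1 across the band
    if tex.ndim == 3:
        a = a[..., None]
    hseam = a * t[:, :overlap] + (1 - a) * t[:, w - overlap:]
    out = np.concatenate([hseam, t[:, overlap:w - overlap]], axis=1)

    # same thing vertically
    a = (np.arange(overlap) / overlap)[:, None]
    if tex.ndim == 3:
        a = a[..., None]
    vseam = a * out[:overlap] + (1 - a) * out[h - overlap:]
    return np.concatenate([vseam, out[overlap:h - overlap]], axis=0)
```

Because the fade reuses content that continues past the opposite edge, the wrapped borders line up exactly; the visible cost is some ghosting in the blend band, which is exactly the smearing you can spot on leafy materials.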

The big question on my mind is how to derive a reasonable spec/gloss. The Digital Emily project gives me some clues, but the white paper is far over my head. It seems they take two images for the spec/gloss: one with the polarization filter on and one without, then subtract one image from the other. When I do that in Photoshop, I get different results, even with their images. If anyone can shed some light on the subject, please do! Here is the link:

http://gl.ict.usc.edu/Research/DigitalEmily/
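For what it's worth, my current understanding is that the subtraction has to happen in linear light, which may be why a straight Photoshop subtract of the sRGB JPEGs gives different results. `separate_specular` below is my own hypothetical helper sketching the general cross-polarization technique, not the Digital Emily pipeline itself:

```python
import numpy as np

def srgb_to_linear(c):
    """Undo the sRGB transfer curve (input in [0, 1])."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def separate_specular(unpolarized, cross_polarized):
    """Rough diffuse/specular split from a polarized image pair, done in
    linear light.  The cross-polarized shot blocks the specular
    reflection, so subtracting it from the unpolarized shot leaves an
    approximation of the specular component."""
    lin_all = srgb_to_linear(unpolarized)
    lin_diffuse = srgb_to_linear(cross_polarized)
    specular = np.clip(lin_all - lin_diffuse, 0.0, 1.0)
    return lin_diffuse, specular
```

If the result is meant to go back into an 8-bit texture, it would need re-encoding to sRGB afterwards; subtracting the gamma-encoded pixels directly mixes two different curves and won't match.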

This thread will be a living doc. I'll post more results as I go, and I'd appreciate any help and feedback!

Replies

  • radiancef0rge ngon master
    You can't simply subtract the images; diffuse is 2*X and specular is X-Y.

    Also, they are stored on the camera in sRGB, so you must first convert to linear space. It's best to do this in shader code.

    For gloss, just solve for G using the BRDF shader code.
  • PlateCaptain
    This is really cool.

    Is the box mobile enough that you could easily move it around? Outside, for example?
  • QuantumTheory polycounter lvl 6
    > You can't simply subtract the images; diffuse is 2*X and specular is X-Y.
    >
    > Also, they are stored on the camera in sRGB, so you must first convert to linear space. It's best to do this in shader code.
    >
    > For gloss, just solve for G using the BRDF shader code.

    I'm not entirely sure what you mean by the first and last sentences ;) I'm trying to generate the spec/gloss textures, not solve for them.
  • mystichobo polycounter lvl 12
    Nice! I've been really wanting to try something like this for a while, but time and space have been an issue.

    Have you seen the Ready at Dawn SIGGRAPH 2013 course notes? They go over the construction and calibration of their textile scanner in there (page 9 onwards).
    http://blog.selfshadow.com/publications/s2013-shading-course/rad/s2013_pbs_rad_notes.pdf
  • Daemon Vanderpool
    I haven't used Megascans yet, but I was wondering: what exactly is involved when one wants to make a megascan? Has Quixel ever described their exact process, or will they? If they did, that would be awesome. Since they use "HDR scanning," I assume they use some sort of 3D scanner; any ideas on what equipment might be involved?

    I tried looking into some of the best high-quality scanners out there: http://www.artec3d.com/hardware/artec-spider/ ("3D resolution, up to 0.1 mm").
    Not exactly within my budget at the moment, but I'm keeping my eye on this one for when I have sufficient funds.

    I understand that Quixel might want to keep quiet on the details of their megascan process, but I think it'd be pretty cool if they had a tutorial for those interested. It would also be cool if they were open to accepting and reviewing contributions to their material library. Part of wanting to know how to do this is similar to how http://www.polycount.com/forum/showthread.php?t=153601&highlight=make+megascans felt about using someone else's content. For some people, myself included, it certainly feels better knowing you created everything from scratch.