Is there a way to get a Z-depth image (like a Z-depth pass from 3D rendering), but from a "normal" 2D image taken with a camera (specifically a photographed texture, like rocky ground etc.)?
And if yes, can I also adjust "the focal point", i.e. where the "blacks" and the "whites" start/end in the image?
I think it might be possible to do somehow in Photoshop, maybe using the "in focus" (sharper) and "out of focus" (blurrier) parts of the image to estimate some kind of Z depth of the image?
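For example, something along these lines is the kind of sharpness-based estimate I mean (a very rough sketch, assuming Python with OpenCV and NumPy rather than Photoshop; the file names and numbers are just placeholders):

```python
# Very rough "depth from defocus" sketch: assume sharper regions sit at a
# different depth than blurry ones and use local sharpness (smoothed Laplacian
# response) as a stand-in for depth. This is an experiment, not an accurate
# Z pass; assumes OpenCV (cv2) and NumPy are installed.
import cv2
import numpy as np

img = cv2.imread("rocky_ground.jpg", cv2.IMREAD_GRAYSCALE)

# Local sharpness: absolute Laplacian response, blurred so it becomes a
# per-region measure instead of per-pixel noise.
lap = np.abs(cv2.Laplacian(img.astype(np.float32), cv2.CV_32F))
sharpness = cv2.GaussianBlur(lap, (0, 0), sigmaX=15)

# Normalize to 0..255 so it can be saved as a grayscale "Z depth" image.
depth = cv2.normalize(sharpness, None, 0, 255, cv2.NORM_MINMAX)

# "Adjusting the focal point": remapping where the blacks/whites fall is just
# a levels-style curve on the normalized values (a gamma curve as an example).
gamma = 0.6
depth = 255.0 * (depth / 255.0) ** gamma

cv2.imwrite("fake_zdepth.png", depth.astype(np.uint8))
```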
I would like to experiment with it as a displacement map, for example...
So, is it possible to get a Z-depth image from a 2D image taken with a real-life camera?
Thank you
Replies
As of now I only know of the photogrammetry/parallax-based approach.
P.S. Years ago I tried to use smoke to capture the depth of a scene. It worked so-so.
(used instead to create a stereoscopic image or a refocusable depth of field in the final image... the Lytro light field camera, for example)
Apparently this light field tech may be the holy grail for AR/VR immersion (variable-focus headsets that let you shift focus much like the natural human eye does).
https://www.avegant.com/
https://www.technologyreview.com/s/610458/vr-is-still-a-novelty-but-googles-light-field-technology-could-make-it-serious-art/
When Paul Debevec puts his legendary CG research acumen towards advanced VR tech experiments...
one may assume some deliciously good immersive tech has got to be just around the corner?
https://www.blog.google/products/google-ar-vr/experimenting-light-fields/
Tools like CrazyBump generally look for gradients across features in the image, then try to reconstruct depth from those. So they work OK-ish if the photo is of rounded river rocks, or plain bricks, or something without a lot of surface detail.
But even then the depth is not anywhere near accurate.
An early example from Ryan Clark, developer of Crazybump.
http://www.zarria.net/nrmphoto/nrmphoto.html
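To illustrate the gradient-based idea, here is a hedged sketch of the general approach (this is not CrazyBump's actual algorithm, just the rough concept, assuming Python with OpenCV and NumPy; the file name is a placeholder):

```python
# Treat image intensity gradients as if they were surface slopes and integrate
# them back into a height field with the Frankot-Chellappa method. The core
# assumption (brightness gradient == slope) is what makes single-image "depth"
# only OK-ish and nowhere near a rendered Z pass.
import cv2
import numpy as np

img = cv2.imread("bricks.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Guess the surface slopes from the intensity gradients.
p = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)   # dz/dx guess
q = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)   # dz/dy guess

# Frankot-Chellappa: find the integrable surface whose gradients best match
# (p, q), solved in the Fourier domain.
h, w = img.shape
wx = np.fft.fftfreq(w) * 2.0 * np.pi
wy = np.fft.fftfreq(h) * 2.0 * np.pi
u, v = np.meshgrid(wx, wy)
denom = u ** 2 + v ** 2
denom[0, 0] = 1.0  # avoid division by zero at the DC term

Z = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
height = np.real(np.fft.ifft2(Z))

# Normalize to a grayscale height/"depth" image.
height = cv2.normalize(height, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("estimated_height.png", height.astype(np.uint8))
```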
But that requires the camera to be moving, because it uses parallax to calculate how distant objects are. If you just have one static image, the algorithm has no way to know which pixels are distant and which ones are close. There is also software that can generate a depth pass from two stereo images, but in my experience the results turn out quite noisy.
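For what it's worth, a minimal sketch of that two-image stereo approach using OpenCV's StereoSGBM matcher (assuming an already rectified left/right pair; file names and tuning values are placeholders, and as noted the output tends to be noisy):

```python
# Compute a disparity map from a rectified stereo pair with semi-global block
# matching, then save it as a grayscale image. Larger disparity means closer,
# so the normalized result already reads like an inverted depth pass.
import cv2

left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize is the matching window.
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,
    blockSize=7,
    P1=8 * 7 * 7,
    P2=32 * 7 * 7,
)

# compute() returns fixed-point disparity scaled by 16.
disparity = stereo.compute(left, right).astype("float32") / 16.0
depth_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("stereo_depth.png", depth_vis.astype("uint8"))
```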
Crazy Bump and B2M really only calculate the contour and curvature of surfaces, and as such don't give particularly good depth results, especially if you need something like a rendered Z pass.