Is there a way to get a z-depth image (like the Z-depth pass from a 3D render), but from a "normal" 2D image taken with a camera (specifically a photographed texture, like the texture of rocky ground, etc.)? And if yes, can I even adjust "the focal point", etc., i.e. where the "blacks" and where the "whites"…
There are ways to generate a depth pass from video, like here: http://www.yuvsoft.com/stereo-3d-technologies/depth-propagation/ But that requires the camera to be moving, because it uses parallax to calculate how distant objects are. If you just have one static image, the algorithm has no way to know which pixels are…
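To make the parallax idea concrete: here's a rough sketch (not YUVsoft's actual algorithm, just the same underlying principle) using OpenCV's block matcher to turn two frames from a sideways-moving camera into a disparity map. Disparity is proportional to inverse depth, so nearby objects show up brighter. The filenames and parameters are made up for illustration, and real footage would need the frames rectified first.

```python
import cv2
import numpy as np

# Two frames from a moving camera (hypothetical filenames). Block
# matching assumes rectified input, i.e. corresponding points sit
# on the same scanline in both frames.
left = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("frame_010.png", cv2.IMREAD_GRAYSCALE)

# For each pixel, find how far a patch shifted horizontally between
# the two frames -- that shift is the parallax.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Nearby objects shift more than distant ones, so disparity ~ 1/depth.
# Normalize to 0..255 for a viewable grayscale "depth pass".
depth_pass = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depth_pass.png", depth_pass)
```

With a single static image there are no shifted patches to match, which is exactly why this approach can't work there.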
Plenoptic cameras record depth information, but I haven't heard of anyone leveraging that to create a depth channel. (Instead it's used to create a stereoscopic image or refocusable depth of field in the final image; the Lytro light field camera, for example.) Apparently this light field tech may be the holy grail for AR/VR…
The output from filtering an existing single photo is pretty sucky. Try it and you'll see. Photoshop even has this now: Filter > 3D > Bump something something. These filters generally look for gradients across features in the image, then try to reconstruct depth from those. So it works OK-ish if the photo is of rounded river…
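For a sense of why the single-image results are so rough: the filters more or less treat smoothed brightness as height. Here's a minimal illustration of that heuristic (not Photoshop's actual filter, and the filename is hypothetical). The assumption "brighter == closer" is exactly what breaks on anything that isn't rounded and evenly lit.

```python
import cv2
import numpy as np

# Single photo (hypothetical filename).
img = cv2.imread("rocky_ground.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Heavy blur so broad shading, not fine texture, dominates.
# The (unreliable) assumption: brighter pixels are closer/higher.
height = cv2.GaussianBlur(img, (0, 0), sigmaX=15)

# Normalize to a 0..255 grayscale "depth" map. Remapping this range
# afterwards is effectively the "where the blacks and whites fall"
# adjustment the original question asked about.
depth = cv2.normalize(height, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("fake_depth.png", depth)
```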