I am learning more about rendering and found out that I lack knowledge about two things: are shadows part of post-processing or not, and how was lighting computed before ray tracing, back in pure rasterization times?
I'm not an expert on computer graphics, but from what I understand of the subject:
- Raytracing still produces a rasterized image in the broad sense (you start from numerical data, like polygon meshes, and create a pixel image from that), although "rasterization" as a technique usually means projecting triangles onto the screen and filling the pixels they cover, which is a different way of getting to that image than tracing rays.
- Shadow is the absence of light, so another way to think of shadowed surfaces is "surfaces receiving zero light". Therefore, if you have a way of knowing whether a point on a surface is receiving light (be it direct, coming straight from the light source, or indirect, having bounced one or more times off other surfaces), then any point receiving little to no light can be considered shadowed. Raytracing finds where light is present in the scene, and implicitly finds the shadows too, since those are the places the light didn't reach (see the shadow-ray sketch after this list).
- Using raytracing is a more accurate way to tell how much light the surface point behind each pixel on screen is receiving, because following rays through the scene is closer to how light actually behaves.
- Without raytracing you need another way to find how much light the surface points behind each pixel on screen are receiving. A mathematical model like the Phong reflection model tells you how much light a surface point reflects toward the camera, given the light direction, the surface normal and the view direction (not to be confused with Phong shading, which interpolates the vertex normals across each triangle and evaluates that lighting per pixel, to smooth out the lighting on the mesh); see the reflection-model sketch after this list. But such a model knows nothing about occlusion, so you need a separate technique to explicitly find the points on a surface that are blocked from the light sources. The two major techniques are shadow mapping and shadow volumes, of which the latter has largely fallen out of use. Shadow mapping means rendering a depth image of the scene from the viewpoint of the light, then, while rendering from the camera's viewpoint, projecting each visible surface point into the light's view and comparing depths: parts the light can see are lit, and parts it cannot see are shadowed.
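To make the "a point is shadowed if the light can't reach it" idea concrete, here is a minimal C++ sketch of a shadow-ray test against a scene made of spheres. Every name in it (Sphere, isInShadow and so on) is made up for illustration; a real renderer would trace against BVHs of triangles instead.

```cpp
// Shadow-ray sketch: a point is shadowed if the straight path from it to the
// light is blocked by any geometry in between.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 center; float radius; };

// True if the ray origin + t*dir (0 < t < maxT, dir normalized) hits the sphere.
static bool hitsSphere(const Sphere& s, Vec3 origin, Vec3 dir, float maxT) {
    Vec3 oc = sub(origin, s.center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return false;
    float t = -b - std::sqrt(disc);      // nearest intersection along the ray
    return t > 0.0f && t < maxT;
}

// Cast a "shadow ray" from the surface point toward the light; if anything is
// hit before reaching the light, the point is in shadow.
bool isInShadow(const std::vector<Sphere>& occluders, Vec3 point, Vec3 lightPos) {
    Vec3 toLight = sub(lightPos, point);
    float dist = std::sqrt(dot(toLight, toLight));
    Vec3 dir = {toLight.x / dist, toLight.y / dist, toLight.z / dist};
    const float bias = 1e-3f;            // nudge off the surface to avoid self-shadowing
    Vec3 origin = {point.x + dir.x * bias, point.y + dir.y * bias, point.z + dir.z * bias};
    for (const Sphere& s : occluders)
        if (hitsSphere(s, origin, dir, dist - bias)) return true;
    return false;
}
```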
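And here is a minimal sketch of the Phong reflection model evaluated at a single surface point, again with made-up names and parameters. Notice that nothing in it checks whether the light is actually blocked, which is exactly why a separate technique like shadow mapping is needed to get cast shadows.

```cpp
// Phong reflection model sketch: ambient + diffuse + specular for one light,
// at one surface point. Returns a scalar intensity; a real renderer would do
// this per color channel.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 a) {
    float len = std::sqrt(dot(a, a));
    return {a.x / len, a.y / len, a.z / len};
}

float phong(Vec3 point, Vec3 normal, Vec3 lightPos, Vec3 eyePos,
            float ka, float kd, float ks, float shininess) {
    Vec3 N = normalize(normal);
    Vec3 L = normalize(sub(lightPos, point));        // direction to the light
    Vec3 V = normalize(sub(eyePos, point));          // direction to the camera
    float diff = std::max(0.0f, dot(N, L));          // Lambertian diffuse term
    Vec3 R = sub(scale(N, 2.0f * dot(N, L)), L);     // L mirrored about the normal
    float spec = diff > 0.0f ? std::pow(std::max(0.0f, dot(R, V)), shininess) : 0.0f;
    return ka + kd * diff + ks * spec;               // ambient + diffuse + specular
    // Note: no occlusion test anywhere, so this gives lighting but no cast shadows.
}
```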
I don't think this counts as a post-process effect, because by "post-process" people usually mean something that is done after the camera view is completely rendered, like color grading, film grain, or some other effect applied on top of the final image. Since shadow mapping needs low-level information about the scene (instead of just the final image, like color grading does), I'd say it isn't part of post-processing.
Adrian Courreges has some awesome breakdowns of frame rendering; in them you can see that shadowing happens around the middle of the pipeline rather than at the end, where a post-process effect would sit.
It is kind of a post-process, but not in the standard sense. If we count anything that gets applied to the image after the main passes have been computed, then yes, you could call it a post-process. Non-ray-traced shadows live in a buffer (a dynamic texture / render target). Visually it looks like an image captured from the light's perspective: an omnidirectional light uses a cubemap for this (made of six standard images), while a directional light uses a single standard image. That texture contains depth information from the light's view and is projected back onto the scene. Any point whose depth, seen from the light, is greater than the value stored in the texture is shadowed - the stored value means something else is closer to the light, so that point is not visible to the light source.
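As a rough illustration of that depth comparison, here is a small C++ sketch written as plain CPU code rather than a real shader; the names (ShadowMap, lightViewProj, isShadowed) are made up for illustration. The point is projected into the light's view and its depth is compared against the depth the light recorded in that direction.

```cpp
// Shadow-map lookup sketch: transform a world-space point into the light's
// clip space and compare its depth with the depth stored in the shadow map.
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

static Vec4 mul(const Mat4& M, Vec4 v) {
    return {M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
            M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
            M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
            M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w};
}

struct ShadowMap {
    int width, height;
    const float* depth;                          // depths rendered from the light's viewpoint
    float sample(float u, float v) const {       // nearest-neighbour lookup, no filtering
        int x = std::clamp(int(u * width), 0, width - 1);
        int y = std::clamp(int(v * height), 0, height - 1);
        return depth[y * width + x];
    }
};

bool isShadowed(const ShadowMap& map, const Mat4& lightViewProj, Vec3 worldPos) {
    Vec4 clip = mul(lightViewProj, {worldPos.x, worldPos.y, worldPos.z, 1.0f});
    float u = (clip.x / clip.w) * 0.5f + 0.5f;   // light-space NDC -> [0,1] texture coords
    float v = (clip.y / clip.w) * 0.5f + 0.5f;
    float pointDepth = (clip.z / clip.w) * 0.5f + 0.5f;
    const float bias = 0.002f;                   // small offset to avoid "shadow acne"
    // If the point is farther from the light than whatever the light saw in this
    // direction, something is in between, so the point is shadowed.
    return pointDepth - bias > map.sample(u, v);
}
```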
Not sure if you are talking about surface shading or cast shadows though.
- https://www.scratchapixel.com/