Ok so I was thinking about voxel cone-tracing for GI, especially now that UE4 is implementing a version of it, and I have a few inquiries as well as misgivings that I hope can be cleared up.
Foremost, is using the averaged, lower-res voxel level really a way to describe diffuse lighting properly? How does the approach take into account the BRDF of the material if it is using evenly averaged values sampled around a point, and how can it be accurate unless the 'cone' is a full hemisphere?
The main way I could see the tech working for diffuse is if each point were sampled over a weighted hemisphere. Say I sample a point whose normal faces completely to the right, perpendicular to the camera: given Lambert shading, we know that light sources from the front would have very little influence (the contribution falls toward black approaching grazing angles), since diffuse lighting loses influence as the surface normal becomes perpendicular to the light vector. A surface point facing the light source head-on, by contrast, gets the full diffuse value. (And light from behind the object would arrive at a glancing angle and mostly get picked up by specular/reflectivity.)
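To make the Lambert argument above concrete, here is a minimal sketch of the cosine falloff I'm describing; the function name and vectors are just illustrative:

```python
# Lambert's cosine law: diffuse contribution scales with max(0, N . L),
# so it is 1.0 when the normal faces the light and falls to 0.0 as the
# normal becomes perpendicular to (or faces away from) the light vector.
def lambert(normal, light_dir):
    """Diffuse factor for a unit normal and unit direction toward the light."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, d)

# Normal facing "completely to the right" (+X):
n = (1.0, 0.0, 0.0)
print(lambert(n, (1.0, 0.0, 0.0)))   # light head-on -> full diffuse, 1.0
print(lambert(n, (0.0, 0.0, 1.0)))   # light from the front -> 0.0
print(lambert(n, (-1.0, 0.0, 0.0)))  # light from behind -> clamped to 0.0
```

So an unweighted average over the hemisphere would overcount the near-grazing directions that Lambert says should contribute almost nothing.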
So unless I'm misunderstanding, a weighted hemisphere would be necessary to sample for any point, but using a voxel octree structure you'd be getting averaged, unweighted values, right (if you're just sampling lower, i.e. higher up, in the voxel octree)? If the tech really is using a weighted hemisphere, then that's great news, especially if you are able to manipulate the weighting and therefore manipulate the diffuse reflectance model of the object, but I haven't read anything that describes that.
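For what it's worth, the published cone-tracing formulation (Crassin et al. 2011) is usually described as tracing a handful of wide cones over the hemisphere and weighting each cone's traced result by the cosine between the cone axis and the normal, which would give exactly the Lambert weighting I'm asking about. A rough sketch, with the cone directions and weights purely illustrative rather than from any particular engine:

```python
# Hypothetical sketch: approximating a cosine-weighted hemisphere with a
# few wide cones. Assumes the surface normal is +Y, so each cone axis's
# y component is its cosine with the normal. One cone straight along the
# normal plus four tilted ~60 degrees; tilt angles/count are illustrative.
CONE_DIRS = [
    (0.0, 1.0, 0.0),                        # along the normal, cos = 1.0
    (0.866, 0.5, 0.0), (-0.866, 0.5, 0.0),  # tilted 60 deg, cos = 0.5
    (0.0, 0.5, 0.866), (0.0, 0.5, -0.866),
]

def diffuse_gi(trace_cone):
    """Cosine-weighted sum of per-cone radiance; trace_cone(dir) -> float."""
    weights = [d[1] for d in CONE_DIRS]  # cos with +Y normal = y component
    total = sum(weights)
    return sum(w * trace_cone(d) for w, d in zip(weights, CONE_DIRS)) / total

# Sanity check: under a constant environment, the weighted average
# returns that constant, as a normalized estimator should.
print(diffuse_gi(lambda d: 1.0))  # -> 1.0
```

The averaged voxel values then describe what each individual cone sees through the prefiltered octree, while the Lambert weighting lives in the per-cone weights, not in the voxel data itself.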