
Theoretical LOD system?

ScottJ polycounter lvl 13
Hi everyone, I am primarily a 3D artist and texturer with a big interest in technical stuff that relates to art.

Some of you may know my dissertation on batching - I found the research highly interesting and came across a lot of material along the way that was unrelated and thus didn't make it in.

This included the transform and fill bottlenecks (being transform bound or fill bound) and LOD systems.

When an object has too many vertices/polys for the screen space it covers (its set of pixels), or too few, performance is affected.

Something I have also looked into a lot in the past is LOD systems, with the understanding that they work on distance - a quick and simple calculation.

So I have just been talking to a friend about LOD systems and was thinking how great it would be if they also handled objects that scale in real time (so an object might be big at one point and small at another, yet be the same distance from the camera).

To alleviate the number of polys in view, and also the transform and fill bottlenecks, I thought of a different system. This is especially relevant in games that now use tessellation, with tessellation amounts based on per-object distance.
The only reason LOD works is that when an object is in the distance it takes up fewer pixels, making it less noticeable when it changes LOD - something that scaling down also does!


So with all this in mind, I thought about a new system that would be based not on distance but on the percentage of screen space an object covers. There is only one aspect of it that I am not sure about in performance terms, which I hope someone may be able to rule in or out.

My theory is to take the rendered scene stripped of textures and lights (re-using the already transformed geometry), assign each object a number, then flat-fill each object with a colour from a 32-bit greyscale range, allowing for over 2 billion objects per level. Jigger it up to 64-bit if you ever need higher numbers (or assign IDs dynamically only to the objects in view, though that may be more costly - but I can never imagine wanting 2 billion objects in view at any given time, so this should be future proof).

This is the part whose performance cost I am not sure about:
Each object now has its unique reference number/colour (filled in and used like a post-process effect). Using this greyscale image of the scene, with each object's unique number, apply the same/similar maths that Photoshop uses for the magic wand tool - picking by colour with a tolerance of 0 - to calculate how many pixels that object is taking up on screen. (Taking an average game now running at 1080p, that is a maximum of 2,073,600 pixels to check; anyone running higher resolutions than that will have a higher-end GPU.)

With this information - the pixel count for each object colour/reference - base its LOD on that figure, so:
low pixel count = low LOD
high pixel count = high LOD

If an object is taking up less than a pixel (in case it still needs to be rendered because AA may affect the surrounding pixels), render it at its lowest LOD if it is in view (no different to a standard LOD system, which renders the lowest LOD at the furthest distance).
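
Very roughly, the counting and selection step I'm imagining would be something like this (a rough sketch in C - all names and thresholds are invented purely to show the idea, and I'm sure a real programmer would do it properly on the GPU):

    #include <stdint.h>

    /* Pick an LOD index from how many pixels an object covers.
       The thresholds are invented purely for illustration. */
    static int lod_from_pixel_count(uint32_t pixels)
    {
        if (pixels < 64)    return 3; /* lowest LOD, tiny or sub-pixel objects */
        if (pixels < 1024)  return 2;
        if (pixels < 16384) return 1;
        return 0;                     /* highest LOD, large on screen */
    }

    /* idBuffer holds one object ID per pixel (the flat-colour render),
       width*height pixels in total; counts must have numObjects entries.
       A single pass over the image builds a histogram of pixels per object,
       rather than a per-object magic-wand selection. */
    static void count_object_pixels(const uint32_t *idBuffer,
                                    int width, int height,
                                    uint32_t *counts, uint32_t numObjects)
    {
        for (uint32_t id = 0; id < numObjects; ++id)
            counts[id] = 0;

        for (int i = 0; i < width * height; ++i) {
            uint32_t id = idBuffer[i];
            if (id < numObjects)      /* skip background / invalid IDs */
                counts[id]++;
        }
    }

An object that is in view but ends up with a count of 0 would simply fall into the lowest LOD, as above.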

This could even lead to a better LOD system that automatically adjusts the LOD based on an object's known poly/vertex count and its pixel usage, to relieve the transform or fill bottleneck (the model itself would still need to be made with these limits in mind as usual... if people even do that in industry).

The only potential issue I can see is transparent objects, like windows; someone much cleverer than me could probably factor in an alpha cut-out - though that is possibly a theory killer again...

What do people think? Not being a programmer I have little real-world experience/knowledge of really technical stuff like this, and was just curious to see whether this would work - and how costly it would actually be.

Replies

  • trebor777 polycounter lvl 10
    :) We have a tool like that at work.
    But you have to consider animated stuff (especially characters) as well, and make sure that the LOD works well for the silhouette under different camera angles. Rather than rendering the full scene you could look into simple screenshots with black/flat lighting (to get the silhouette in the viewport); the tool then calculates an average coverage ratio across the different poses, which tells the optimizing tool how much to decimate.
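    Very roughly, the ratio part boils down to something like this (a simplified sketch, not our actual code, and the names are made up):

    #include <stdint.h>

    /* Fraction of a screenshot covered by the silhouette, treating any
       non-background pixel as part of the character. */
    static double silhouette_ratio(const uint32_t *pixels, int width, int height,
                                   uint32_t backgroundColour)
    {
        int covered = 0;
        for (int i = 0; i < width * height; ++i)
            if (pixels[i] != backgroundColour)
                covered++;
        return (double)covered / (double)(width * height);
    }

    /* Average the ratio over screenshots of several poses/angles so the
       decimation target isn't biased by one extreme pose. */
    static double average_silhouette_ratio(const uint32_t *const *poses, int numPoses,
                                           int width, int height,
                                           uint32_t backgroundColour)
    {
        double sum = 0.0;
        for (int p = 0; p < numPoses; ++p)
            sum += silhouette_ratio(poses[p], width, height, backgroundColour);
        return numPoses > 0 ? sum / numPoses : 0.0;
    }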
  • passerby polycounter lvl 12
    There already are some engines that LOD based on the % of screen an object takes up, like the Source engine.
  • RC-1290 polycounter lvl 7
    This is a concept I thought a bit about before. To me, it seemed most useful as a tool to use like a profiler, rather than a realtime LOD system. Using it to determine the amount of detail you should use for various assets.
    passerby wrote: »
    There already are some engines that LOD based on the % of screen an object takes up, like the Source engine.
    Source engine levels are quite static though.
  • ScottJ polycounter lvl 13
    This sounds very interesting. While I was writing that post it did give me the idea of using it as a profiler, to show artists the maximum amount of detail that is needed.

    I am not sure why this method isn't used more - it seems to have more benefits than the old distance-based LOD.
  • trebor777 polycounter lvl 10
    Just that calculating a distance is easier and faster in a game engine than determining the size of an object on screen, which is what you need in order to do the switching.
  • gray
    an interesting idea.

    i think that trebor777 is right tho. its just one calculation of the distance formula.

    there is perhaps a more significant problem. speaking in general about the graphics pipeline: to get into 'screen space' (ie device coordinates) you have to push the geometry through the whole pipeline. so by the time you can actually do the calculation it's too late to swap out the geometry - it is baked into device coordinates, so to speak. whereas the distance calculation is done at the beginning, in world coordinates, so you can choose which geometry to push through the pipeline.

    edit:
    there is also another issue. lod is inherently based on depth, not scale. let's say you have a relatively small object close to the screen and a huge object far off in the distance. even tho the distant object is much larger than the close object in screen space, you want the huge distant object to be lower resolution and the small object at high resolution.
  • alfalfasprossen
    Really interesting idea, but the problem would be that you always have to render the complete scene (without LODing?) every frame and then do some really heavyweight per-pixel calculation. Checking how many pixels on the screen have the same colour will impact your performance (your worst case would be checking all the pixels again for every object). Then stuff like anti-aliasing comes into play, which will make it even more complicated.

    The way you normally check the screen-size coverage of an object is to project its bounding box into screen space. That gives you a rough estimate of how big or small your object is and can be integrated into the LOD calculation.
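    Something like the following, for example (a rough sketch assuming a column-major view-projection matrix like OpenGL's; the names are mine):

    #include <float.h>

    typedef struct { float x, y, z; } Vec3;

    /* Rough screen coverage of an axis-aligned bounding box: transform the 8
       corners into clip space, take the 2D rectangle around them in pixels
       and return its area. It overestimates, but it is cheap and good enough
       to drive LOD selection. */
    static float aabb_screen_area(Vec3 bmin, Vec3 bmax, const float vp[16],
                                  float screenW, float screenH)
    {
        float minX = FLT_MAX, minY = FLT_MAX, maxX = -FLT_MAX, maxY = -FLT_MAX;

        for (int i = 0; i < 8; ++i) {
            float x = (i & 1) ? bmax.x : bmin.x;
            float y = (i & 2) ? bmax.y : bmin.y;
            float z = (i & 4) ? bmax.z : bmin.z;

            /* clip-space position = vp * (x, y, z, 1), column-major layout */
            float cx = vp[0]*x + vp[4]*y + vp[8]*z  + vp[12];
            float cy = vp[1]*x + vp[5]*y + vp[9]*z  + vp[13];
            float cw = vp[3]*x + vp[7]*y + vp[11]*z + vp[15];
            if (cw <= 0.0f)       /* corner behind the camera: skip it */
                continue;

            /* perspective divide, then NDC [-1,1] to pixel coordinates */
            float sx = (cx / cw * 0.5f + 0.5f) * screenW;
            float sy = (cy / cw * 0.5f + 0.5f) * screenH;

            if (sx < minX) minX = sx;
            if (sy < minY) minY = sy;
            if (sx > maxX) maxX = sx;
            if (sy > maxY) maxY = sy;
        }

        if (maxX < minX || maxY < minY)   /* box fully behind the camera */
            return 0.0f;
        return (maxX - minX) * (maxY - minY);
    }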
  • ScottJ polycounter lvl 13
    So, to come back on a couple of the points made:

    First, regarding having to render the full-res LOD for the calculation: unless the silhouette is completely different from LOD to LOD it shouldn't matter - if the lower LOD starts taking up too many pixels for its detail level, just step it up to the next LOD.

    Second, regarding pushing everything through the pipeline again: as the frame is already rendered and you don't want to recalculate it with the new LOD, just defer the swap to the next frame. One frame at a higher LOD isn't going to matter, as the optimisation comes on the next frame - a one-frame lag at worst, which I doubt would be noticeable.

    With regard to performance, this was my initial concern.
    However, for complex games nowadays, could this system potentially save more resources than it costs to calculate, compared to the simple distance check?

    I think gray may have stumbled upon the real problem: large objects may take up most of the screen at any distance from the camera, causing this method to fail - though aren't most games now built from smaller modular objects? That would definitely depend on the type of game you are making, though.
  • gray
    in general, if you look at most algorithms that get widespread adoption in graphics, it is because they cover the highest % of use cases. for instance you can do depth sorting in a variety of ways, but the z-buffer is used 99.9% of the time because it is simple, efficient and works 100% of the time. it's sort of a Darwinian survival of the fittest. so if a distance calculation works in a larger percentage of use cases and is more than likely faster, it will be adopted more.

    another thing to think about is the complexity of the algorithm itself. calculating the distance formula runs in constant time, ie it takes the same number of cycles every time. a screen-space pixel approach would be highly variable, in that the computation per object per frame is in constant flux, which would be very hard to profile in any simple way. it also grows at least linearly with the number of pixels, so the cost climbs as resolution increases, whereas the distance formula is resolution independent.

    as a final point let's look at the guts of distance in c.
    distance = sqrt(pow(x2 - x1, 2) + pow(y2 - y1, 2));

    obviously there will be some more computation (a third axis, for a start) but that's about it. my guess is that it will be near impossible to do anything in gl or dx on an image buffer that comes anywhere close to that in terms of cycles to compute.

    but to actually know for sure you would have to whip out your compiler and get cracking on the code and do a proper comparison and benchmark.
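
    for example, the whole per-object decision at run time can be as small as this (thresholds made up, and comparing squared distances even skips the sqrt):

    /* pick an LOD index from the squared distance between the camera and the
       object's center, all in world space. constant time per object. */
    static int lod_from_distance(float camX, float camY, float camZ,
                                 float objX, float objY, float objZ)
    {
        float dx = objX - camX, dy = objY - camY, dz = objZ - camZ;
        float d2 = dx*dx + dy*dy + dz*dz;

        if (d2 > 200.0f * 200.0f) return 3;  /* lowest LOD  */
        if (d2 > 100.0f * 100.0f) return 2;
        if (d2 >  40.0f *  40.0f) return 1;
        return 0;                            /* highest LOD */
    }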
  • CrazyButcher polycounter lvl 18
    The GPU provides ways that allow you to implement the calculation of visible pixels per object. You could do this either similarly to histogram generation on your id-buffer, or by incrementing a counter for each of an object's pixels that passes the depth test. That is still not super cheap however, and it does mean you need to have that z-pass.

    Simply approximating the projected area of an object based on its bounding box/sphere is way faster, and hence popular. I'd hope most engines take the object's size in pixels into account and not just do simple range-based decision making.
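    For example, the bounding-sphere version can be as simple as this (a rough sketch assuming a symmetric perspective projection; the names are made up):

    #include <math.h>

    /* Approximate radius of a bounding sphere on screen, in pixels.
       fovY is the vertical field of view in radians, distance is from the
       camera to the sphere center. Crude, but plenty for choosing an LOD. */
    static float sphere_screen_radius(float sphereRadius, float distance,
                                      float fovY, float screenHeight)
    {
        if (distance <= sphereRadius)    /* camera inside or touching the sphere */
            return screenHeight;
        return sphereRadius / (tanf(fovY * 0.5f) * distance) * (screenHeight * 0.5f);
    }

    Compare that radius against a few pixel thresholds and you get the same kind of decision as range-based switching, just expressed in screen size.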
  • Denny polycounter lvl 14
    UDK uses screen space for characters / skeletal meshes. You can see the screen-space value in the AnimSet Editor when setting the LOD value. At least that's how it behaves, as distance seems irrelevant to when the LOD changes level.
  • gray
    i suspect that even if you project a bounding box and work out the size from that, it's still far more efficient than using image buffers. and it would be resolution independent.