
Unlimited Detail

Replies

  • EarthQuake
    RexM wrote: »
    Okay, so procedural generation is more a solution for streaming textures.

    What textures? With a purely voxel world, you do not have textures.

    It also wouldn't be a very efficient way to stream either, as generating a procedural texture/model is fairly processor intensive. For streaming you want fast access to data; generating data on the fly is the opposite of that.

    IMO it seems like you're just throwing out buzzwords, and have little knowledge of any of this on a technical level.
  • EarthQuake
    RexM wrote: »
    Nah, Notch said the entire world would take something like 500 Petabytes.... xD

    I think he was gauging their implementation against his implementation too much.

    Please provide sources. Links to articles, links to PDFs, quotes from the people you say made these claims. Without this it is very hard to have a conversation with you.

    Drop the he said she said stuff.
  • RexM
    http://notch.tumblr.com/post/8386977075/its-a-scam

    lol, I remembered it out of context.


    Now you and everyone else who posts need to provide sources too.
  • EarthQuake
    And what Carmack said, and all of the people in this thread that also said it?
  • RexM
    Sorry, not going to go digging around just to appease you...
  • EarthQuake
    What? You're the one who is making claims; if you want to be taken seriously, you need to actually, you know, back up what you're saying.

    Not just "Oh jimmy said this, that guy is an idiot".
  • RexM
    You have made claims that need sources too... stop trying to turn this one-sided.
  • Fuse
    I heard John Carmack has a 1.21 jiggawatt CPU
  • r_fletch_r
    Notch said a 1 km island of unique detail would take 500 petabytes. UD demonstrated a tiny dataset: just a few trees, a statue, a few bits of grass and a few rocks. Hardly unlimited.
  • vargatom
    To be fair, Notch made that calculation with regular voxels; a sparse octree data structure would cut that down significantly. Because it's, well, sparse ;)

    Also, in reality you wouldn't need 4 points per millimeter, although at least 1 per centimeter would be required for a real-world scenario.


    Again, here's Nvidia's cathedral demo as a practical example:
    http://research.nvidia.com/publication/efficient-sparse-voxel-octrees
    That scene requires 2.7 GB of data, after compression. I think we can agree that it's a very small scene compared to the average FPS level, but let's be fair and say that it'd only be 10 times as big, and let's have 20 levels of these indoor scenes. That'd be about 200 times 2.7 GB, or 540 GB of level data already.

    Now how much space would Manhattan in Crysis 2 require?
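    For reference, that back-of-envelope math in code form (the 10x level size and 20-level count are the post's own assumed extrapolation factors, not measured values):

```python
# Estimate from the post: NVIDIA's cathedral scene is ~2.7 GB
# compressed; assume an average FPS level is 10x that size and a
# game ships 20 such indoor levels.
cathedral_gb = 2.7
level_scale = 10    # assumed: one level ~10x the cathedral scene
num_levels = 20     # assumed: 20 indoor levels per game

total_gb = cathedral_gb * level_scale * num_levels
print(f"{total_gb:.0f} GB of level data")  # 540 GB
```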
  • PolyMonstar
    Ok, let's do some estimation based on one of the only voxel asset creation pipelines out so far, stupid as it may be.

    I hate the interface, but 3dcoat may be a good representation of what the issue with voxels is.

    1 model, looks like shite, but it's 573,844 voxels (surface voxels); storage-wise, this is approximately 7.5 megs.

    Now, the island they're using is actually 21,062,352,435,000 voxels, as they claim. AND YES, IT'S VOXELS; stop trying to argue anything different just because they don't want to market it as what it is.

    Now, if the island were made out of that many UNIQUE voxels, UNTEXTURED as is my cheap test model, then you're looking at 275,279,776 megs of data, or 275.3 terabytes of storage in RAM. This is not a vague guess; this is based on a very good representation of the tech and assets.

    I'm considering grabbing UDK assets and resampling them in this just to see how many assets it would take, *looking good*, to fill up 4 gigs of storage, as per the average computer/console for some time now.

    Again, 275 terabytes would probably be needed, IF NOT MORE, because of stored data for normals/spec/diffuse RGB/alpha/transmittance/emissive just to pretend it's able to render like a game engine, based on everything that is lacking in what I quickly threw together.
  • RexM
    First off, what modern game uses 100% unique assets...?

    Secondly, you fail to consider compression technologies. How much texture space would textures take up in memory if no compression was available?
  • Brendan
    RexM wrote: »
    First off, what modern game uses 100% unique assets...?

    Secondly, you fail to consider compression technologies. How much texture space would textures take up in memory if no compression was available?

    Well, I tried this on the game I'm working on, and the raw textures are usually 4x as big as the compressed ones (DXT1/DXT5 mostly). Maybe a bit higher ratio for the lightmaps (40 MB from 180 MB or so).




    Also, with this voxel thing, why couldn't it just filter the scene into cubes of smaller and smaller sizes, and for each cube that has nothing in it, mark it as a nothing cube?

    So it would hold the scene basically like those images that load at a shitty resolution, then a better one, then a much better one, then a final one. If most parts of the scene are air (and let's face it, everything is going to be hollow), then you really only need the surface data, right? Everything else can be killed and not stored, in the biggest chunks of air possible?

    Yes, that will sound nooby, and I'm not quite sure what's going on here, but it seems the most logical way to not have to use a gigabyte to store the data for a cubic meter of air.
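    That "nothing cube" intuition is essentially a sparse voxel octree. A minimal illustrative sketch (a toy model, not any engine's actual implementation) of pruning empty space:

```python
# Minimal sketch of the "nothing cube" idea: recursively split a cube
# into 8 children and prune any child containing no surface points.
# Coordinates are integers in [0, size); size is a power of two.

def build_octree(points, origin=(0, 0, 0), size=8):
    if not points:
        return None              # "nothing cube": costs no storage at all
    if size == 1:
        return "voxel"           # leaf: one occupied voxel
    half = size // 2
    children = []
    for dx in (0, half):
        for dy in (0, half):
            for dz in (0, half):
                o = (origin[0] + dx, origin[1] + dy, origin[2] + dz)
                sub = [p for p in points
                       if all(o[i] <= p[i] < o[i] + half for i in range(3))]
                children.append(build_octree(sub, o, half))
    return children

def count_voxels(node):
    if node is None:
        return 0
    if node == "voxel":
        return 1
    return sum(count_voxels(c) for c in node)

# A hollow 8^3 "room": only the 6 faces are occupied, so the interior
# air collapses into None children instead of costing memory.
shell = [(x, y, z) for x in range(8) for y in range(8) for z in range(8)
         if 0 in (x, y, z) or 7 in (x, y, z)]
tree = build_octree(shell)
print(count_voxels(tree))  # 296 surface voxels, not 512
```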
  • PolyMonstar
    RexM wrote: »
    First off, what modern game uses 100% unique assets...?

    Secondly, you fail to consider compression technologies. How much texture space would textures take up in memory if no compression was available?

    Compression HOW exactly? Maybe I'm a bit feeble-minded, but doesn't it mean that while smaller on disk, in order to be processed it has to be re-expanded into RAM in order to be shown?

    My point about the uniqueness was based on your parroting of the earlier claims. There's no way for it to be UNLIMITED detail. And that's also why I said I would need to convert other assets; by taking them to a "nice" level of voxel detail, I can make some very good observations on how many unique assets you could get away with. It was purely a question of immediate storage for an entire island.

    I would love for Jon Olick to show up and throw down some knowledge on the compression side of things, as well as the animation.
  • RexM
    The compression would have to be similar to DXT compression, which allows compressed textures to stay compressed even in video memory.

    Also, the whole point is unlimited geometry. They never said anything about being able to support unlimited amounts of unique detail.
  • PolyMonstar
    DXT is like that because it's already supported in hardware... It's also lossy, not something I'd want delegating mesh details.

    It also isn't rendering unlimited detail. It's screen-resolution dependent. But "Limited Limited Detail" doesn't really sell as a technology pitch.
  • RexM
    Compression doesn't have to be lossy, and even better texture compressions (that compress 4 times more than DXT) have been developed. The one I am talking about comes from the Milo Kinect demonstrations. On a .pdf tech sheet for that game, they talked about a compression technique that they developed which was as good as DXT in quality but 1/4 the size.

    I think that Milo uses mega texture tech though, so it looks like the new texture compression is actually from the development of the mega texture method.
  • eld
    RexM wrote: »
    Compression doesn't have to be lossy, and even better texture compressions (that compress 4 times more than DXT) have been developed. The one I am talking about comes from the Milo Kinect demonstrations. On a .pdf tech sheet for that game, they talked about a compression technique that they developed which was as good as DXT in quality but 1/4 the size.

    I think that Milo uses mega texture tech though, so it looks like the new texture compression is actually from the development of the mega texture method.

    One part of the secret of megatexture is that because everything is unique, the compression can be done from the perspective of the actual player, with far-away areas and hard-to-see places shrunk down on the texture. It becomes a bit hard when everything is instanced.

    But then again, unlimited detail would save a ton of space and could stream in "mips" of meshes if they were using sparse voxel octrees, but you said they weren't :(

    One thing is for certain though: while one of Bruce Dell's faithful workers might have a graphics background, he himself does not, has constantly been factually incorrect in whatever technical term he chooses to use, and has himself admitted to renaming voxels for the sake of misleading people.
  • ambershee
    RexM wrote: »
    Compression doesn't have to be lossy, and even better texture compressions (that compress 4 times more than DXT) have been developed. The one I am talking about comes from the Milo Kinect demonstrations. On a .pdf tech sheet for that game, they talked about a compression technique that they developed which was as good as DXT in quality but 1/4 the size.

    I think that Milo uses mega texture tech though, so it looks like the new texture compression is actually from the development of the mega texture method.

    Milo did use a form of megatexture-style compression, but that doesn't mean that the compression is lossless. It was still very lossy; it was, however, adequate for the purpose. Textures are much more easily compressed than geometry, since loss of fidelity in a flat image is usually much harder to see, especially when it does not have clearly defined edges. Loss of geometry fidelity is much more obvious because the silhouette is very easily picked up by the eye, and inaccuracies will be jarring.
  • RexM
    Compression doesn't always have to be lossy.
  • eld
    This once again goes back into another area where big companies have spent tons of time and money on R&D; it's unlikely that Euclideon will be doing anything groundbreaking. More likely, they use known techniques.
  • vargatom
    Texture compression heavily exploits the various aspects of the human visual perception system. Color and luminosity information are separated, stored at different resolutions, and with repeating blocks. Still, I think everyone can spot the artifacts in a heavily compressed JPEG.

    If you don't use lossy compression, there are very, very hard limits on how much disk space you can cut. Depending on the type of data it can be from 5% to 99% (when the same data is repeated over and over again). It's a question of the amount of unique information you have.
    I'll look up Olick's presentation later to see what kind of data sizes and compression levels he's been able to get.


    As for unique geometry, we've already discussed why it's necessary. With repeating tiles you get a Super Mario world, which would be ridiculous. Basically this, but in 3D:
    [image: SuperMarioBros3-World5-Area3.png]
  • vargatom
    Also, you can't keep the data compressed in memory because there's no hardware support to address it and use it.

    Runtime memory issues can be overcome with a streaming system. Again Olick's paper has info on this one so we have a point of reference. Although I do remember it wasn't 100% perfect, there were some issues with it...
  • vargatom
    RexM wrote: »
    Also, the whole point is unlimited geometry. They never said anything about being able to support unlimited amounts of unique detail.

    Understand that raycasting can work on triangles just as well as on voxels. You could render this "unlimited detail" with polygons as well. The main difference is in how the renderer works compared to rasterization.

    The reason some people suggest using voxels is that they offer a few things over polygons in this case:
    - unique geometry and texturing at the same time
    - faster authoring (in theory) because no need for highpoly/lowpoly workflow, no UV maps
    - you can store the data itself in a sparse octree; putting triangles in such an octree wouldn't be as efficient... in short the acceleration structure is also the main data structure

    The reasons it still hasn't been implemented in a game are:
    - less efficient approach to store data compared to textured polygons, especially when combined with displacement; leads to huge datasets and this is the main problem for now as there is no solution on current hardware
    - octree is static, can't use it for dynamic geometry; this can be solved by using a hybrid renderer
    - less flexibility in general (geometry and color information are linked, at the same density, can't compress as well etc)


    It's actually a really simple thing, and most people can easily understand the drawbacks and why this tech still needs 5 to 10 years before it can become feasible.
    Also, rasterization and polygons won't stand still in those 5 to 10 years either...
  • vargatom
    Brendan wrote: »
    Also, with this voxel thing, why couldn't it just filter the scene into cubes of smaller and smaller sizes, and for each cube that has nothing in it, mark it as a nothing cube?

    So it would hold the scene basically like those images that load at a shitty resolution, then a better one, then a much better one, then a final one. If most parts of the scene are air (and let's face it, everything is going to be hollow), then you really only need the surface data, right? Everything else can be killed and not stored, in the biggest chunks of air possible?

    Yes, that will sound nooby, and I'm not quite sure what's going on here, but it seems the most logical way to not have to use a gigabyte to store the data for a cubic meter of air.

    Actually that's almost exactly what sparse voxel octrees are :)
  • vargatom
    I've re-read Jon Olick's presentation; here are a few bits of info:

    A single voxel requires 8 bytes of data: RGB, an XYZ normal, and spec intensity, plus 1 byte for the data structure itself. After compression, exploiting redundancies and such, it gets down to about 1 byte per voxel.
    However it's not easy to calculate how many voxels you actually need to store an object. On average, he says, it's close to what a very high-poly object (think ZBrush source art) with similar compression would require.
    (This also proves that using tessellation and displacement maps is a more efficient way to store geometry data, especially if you're already using unique virtual texturing.)

    At runtime the data is expanded to 52 bytes per voxel in his implementation, using additional memory for the data structures.
    So obviously any practical implementation will absolutely require a streaming system that will only load the detail levels necessary to render the scene. This is one of the more complicated elements of the system and has to be highly optimized to minimize background loading.
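    As a rough illustration of what those per-voxel sizes imply for streaming (the byte counts are the figures quoted above; the 512 MB runtime budget is an assumed number purely for illustration, not from Olick):

```python
# Per-voxel sizes quoted from Olick's presentation: ~1 byte compressed
# on disk, ~52 bytes expanded at runtime. The RAM budget is assumed.
DISK_BYTES, RUNTIME_BYTES = 1, 52
ram_budget = 512 * 1024 ** 2                 # assumed: 512 MB for voxel data

resident = ram_budget // RUNTIME_BYTES       # voxels that fit in RAM at once
expansion = RUNTIME_BYTES / DISK_BYTES       # disk -> RAM blow-up factor
print(f"{resident:,} voxels resident, {expansion:.0f}x expansion")
```

    A 52x expansion from disk to RAM is exactly why only the detail levels needed for the current view can be kept resident.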
  • ambershee
    RexM wrote: »
    Compression doesn't always have to be lossy.

    If you compress something like an image, you lose data. There is no other way around this.
  • vargatom
    Err, there are lossless compression formats like PNG and TIFF with LZW; I think GIF is another example (although it's of course palettized). They're obviously nowhere near as efficient as JPEG, though.
  • ambershee
    Lossless compression is limited though; you're never going to get the kind of compression that's being suggested as possible without some kind of approximation algorithm. PNG is lossless compression, but that only works because we can make a lot of assumptions about the constitution of the image. If the image has the potential to be completely random, PNG compression won't actually help.
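    This is easy to demonstrate with any general-purpose lossless coder, e.g. zlib (illustrative only):

```python
import os
import zlib

# Lossless coders only shrink data with exploitable structure: a
# repetitive buffer compresses massively; random bytes of the same
# size do not shrink at all (they typically grow slightly).
structured = b"\x00\xff" * 50_000      # 100 KB with an obvious pattern
random_data = os.urandom(100_000)      # 100 KB of incompressible noise

c_structured = zlib.compress(structured, 9)
c_random = zlib.compress(random_data, 9)

print(len(c_structured))   # a few hundred bytes: huge savings
print(len(c_random))       # ~100 KB: essentially no savings on noise
```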
  • RexM
    There are no textures with voxels and point clouds.

    Texturing is achieved by coloring each atom/pixel/voxel/point. No bitmap images used.
  • ambershee
    I think you've completely missed the point. Quite a lot.

    I'm beginning to wonder if you're more than just a proponent of Euclideon.
  • Calabi
    Just had a thought, probably unrelated and stupid, but couldn't we have fractal geometry?

    I mean so each object has self-similarity, so that you zoom in but it's smaller instances of the larger objects, or something like that.

    Perhaps instead of things having to be in memory, and it knowing what colour that point cloud/voxel has to be, it just guesses using algorithms from known knowledge, which comes from a smaller, more realistic amount of memory.
  • RexM
    ambershee wrote: »
    I think you've completely missed the point. Quite a lot.

    I'm beginning to wonder if you're more than just a proponent of Euclideon.

    No, I think it is you who is missing the point.

    Voxels do not use texture images for textures, so you don't have a point to make in this instance.


    Hahahaha... thinking I work for Euclideon because I think there may be something to their work.
  • vargatom
    Calabi wrote: »
    Just had a thought probably unrelated and stupid, but couldnt we have fractal geometry?

    I mean so each object has self similarity, so that you zoom in but its smaller instances of the larger objects or something like that.

    Search 'gigavoxels' on YouTube; it's about two years old.
    Looks crap though ;) certainly not something you'd want in most games.
  • Esprite
    RexM wrote: »
    No, I think it is you who is missing the point.

    Voxels do not use texture images for textures, so you don't have a point to make in this instance.


    Hahahaha... thinking I work for Euclideon because I think there may be something to their work.

    Are you an 8-year-old?
  • RexM
    Please post about the thread topic, or don't bother posting in here at all.

    Childish, immature comments like yours are not welcome here.
  • Esprite
    I disagree, sir. The moderators will determine if I am not welcome here. I'll call you out on your behavior and argumentative flaws until I get tired of it. :)

    Calling me a child for calling you a child. Someone better call Leo.
  • Esprite
    RexM wrote: »
    The compression would have to be similar to DXT compression, which allows compressed textures to stay compressed even in video memory.

    Also, the whole point is unlimited geometry. They never said anything about being able to support unlimited amounts of unique detail.

    You use texture compression as a comparison here, yet when other people bring up the flaws of texture compression in terms of data loss, which could also occur with voxels, your response is that voxels don't use textures.

    Your willingness to ignore evidence and redirect to an unrelated point is the trademark of someone who wishes to remain ignorant. No amount of arguing or discussion is going to change your opinion.

    You even mock someone for being confused as to why you are doing this, trying to spin it in a way that gives credence to your arguments in favor of Euclideon.

    So IMO you are either a troll or have the mindset of an 8-year-old and need to grow up.
  • Neox
    Calabi wrote: »
    Perhaps instead of things having to be in memory, and it knowing what colour that point cloud/voxel has to be, it just guesses using algorithms from known knowledge, which comes from a smaller, more realistic amount of memory.

    sounds like an awesome plan, just think of the possibilities for characters! :poly122:

    [image: fractal_characters.gif]
  • RexM
    Esprite wrote: »
    You use texture compression as a comparison here, yet when other people bring up the flaws of texture compression in terms of data loss, which could also occur with voxels, your response is that voxels don't use textures.

    Your willingness to ignore evidence and redirect to an unrelated point is the trademark of someone who wishes to remain ignorant. No amount of arguing or discussion is going to change your opinion.

    You even mock someone for being confused as to why you are doing this, trying to spin it in a way that gives credence to your arguments in favor of Euclideon.

    So IMO you are either a troll or have the mindset of an 8-year-old and need to grow up.


    How is it unrelated??

    There are no texture compression issues due to lossy quality to deal with as there are no textures.

    How is that difficult to understand?

    There are compression algorithms that are great and are also not lossy. Plenty of them.

    You're the person who needs to grow up in this instance. Instead of wanting to discuss the topic at hand, your only goal with coming into this topic was to slander and insult me.
  • Esprite
    You ignored the evidence people provided that textures don't really compress enough; the key point being that the lossless compression methods used for textures don't save enough data. Just because they are voxels doesn't mean those issues magically disappear.

    I'm done. You are a troll. I won't waste my breath on you anymore.
  • vargatom
    I've already posted Olick's results on the compression of voxel octrees.

    Important elements:
    - Geometry won't get distorted, because you don't store world-position data at all; it's implicitly defined in the octree itself. The position of a voxel in the game scene is derived from its position in the octree, so there's no need to store it again.
    - Color values are stored at every level of the octree, as that is the implicit LOD system. Think of it like MIP mapping for geometry and textures at the same time. So child nodes only have to store an offset from the parent node's color, which is of course an average of its children's colors. Same goes for specular.
    - Normals weren't discussed, so I can't comment on that.
    - The rest of the data (1 byte out of the 8 bytes stored) is related to the data structure itself, but statistical analysis shows repeating patterns which compress pretty well.
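    The "position is implicit in the octree" point can be illustrated with a small sketch. This shows the general SVO idea only, not Olick's actual code, and the child-index bit layout (one bit per axis) is an assumption for the example:

```python
# Walking an octree path of child indices (0..7, one bit per axis)
# from the root recovers a voxel's coordinates, so XYZ is never
# stored per voxel: position falls out of the tree structure itself.
def position_from_path(path, root_size):
    x = y = z = 0
    size = root_size
    for child in path:
        size //= 2
        x += size * ((child >> 0) & 1)   # bit 0: +x half
        y += size * ((child >> 1) & 1)   # bit 1: +y half
        z += size * ((child >> 2) & 1)   # bit 2: +z half
    return (x, y, z)

# Path [3, 0, 5] in an 8^3 octree: (+x,+y), then origin, then (+x,+z).
print(position_from_path([3, 0, 5], 8))  # (5, 4, 1)
```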

    The 1:8 ratio is a relatively conservative value, and 1 byte per voxel with lossless compression is quite good indeed.

    Then again if you run some calculations it's still a lot of space.

    Let's say we want to do Rage with voxels. A 2 km by 2 km outdoor area with 1 voxel per inch is about what they have for texture resolution for the environment (but this would actually make the geometry look like it's melted).
    That's ~40 voxels per meter, or 6.4 billion voxels for the area with zero height. Let's multiply that by 10, which is an extremely conservative value (hills and walls would require 40 times as many voxels per meter of height), and we get about 60 GB of compressed data for this area.
    So we're already at 3 times as much data as the entire game on the X360, and I'd like to repeat that the smallest geometry element is 1 inch thick in this case, which is obviously problematic and quite cubist. And Rage has far more content than a single 2 km by 2 km level, and a lot of objects and characters on top of it as well.

    So obviously voxels are going to need a LOT of disk space even with compression. And then we can start to argue about the amount of data, as Rage also encodes surface qualities in the megatexture, and the 1-byte specular value isn't much either (although I don't know how id handles specularity... they probably use 1 byte as well). So 8 bytes might not be enough for a real game after all, and that may increase disk space requirements even further.
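    The arithmetic above can be checked directly; all inputs are the post's own figures (2 km x 2 km, ~1 voxel per inch, a 10x height factor, ~1 byte per voxel compressed):

```python
# Rage thought experiment: 2 km x 2 km at ~40 voxels/m, a 10x factor
# for vertical detail, ~1 byte per voxel after compression.
voxels_per_m = 40           # ~1 inch voxel pitch
side_m = 2000
height_factor = 10          # "extremely conservative" vertical multiplier
bytes_per_voxel = 1         # compressed size, per Olick's figure

flat_voxels = (side_m * voxels_per_m) ** 2   # 6.4 billion at zero height
total_bytes = flat_voxels * height_factor * bytes_per_voxel
print(f"{total_bytes / 1024**3:.1f} GB compressed")  # ~59.6 GB
```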
  • commander_keen
    Carmack's speech at QuakeCon mentioned the problems they ran into while trying to get the game compressed enough to be distributable. He said they had a really hard time doing it. They added another layer of compression on top of the DXT compression for disk storage, and they also only stored full-resolution data where the player could actually get to (super low res on the roof of a building, for instance).

    From how he was talking, I got the idea that they totally stripped out the normal and specular data on environments and got rid of dynamic world lighting (maybe except some muzzle flashes).
  • vargatom
    Dynamic lighting was never there. Just think about it: because of the unique texturing you can get lightmaps as detailed as the color textures, so you can bake some pretty high-quality stuff.
    I'm not sure if they're using normals at all, but all the character and dynamic-object textures are supposedly stored in the same texture atlas as the game world, so maybe they're not stripped out. But yeah, they have tools to determine where the player can go, and everything else gets low detail.

    As for the compression issues, that probably was a real problem. I hear that anything but a full HDD install on the 360 has some streaming issues and texture pop-in as the engine tries to load higher MIP levels.
  • commander_keen
    Well, you generally still want at least some dynamic world lighting in almost any game. Stripping out normal and spec would save tons of memory and streaming time; leaving them in would only help for dynamic lighting and specular. I kinda find it hard to believe that they kept normal and spec data when the only benefit is specular, especially with the problems they were having. Maybe there are some screenshots or videos showing specular on the world, but I didn't notice any.

    I also don't see any reason for them to store character/item textures in a megatexture, but they could have normal and specular data for only the tiles they want, I guess.
  • vargatom
    Well there was a presentation on the engine and it had screenshots of the texture atlas that included characters and dynamic objects. Might not be combined with world geometry textures though.

    AFAIK they've promised editors with the PC version so sooner or later we'll find out ;)
  • ZacD
    They said the PC version has a lot of cool debug tools; can't wait to check them out. I hope the PC sales of Rage are great.
  • EarthQuake
    A normal map is used to replace the vertex normals of a mesh. I think it would be a fair assumption that every unique asset in Rage that had a high-res source is using a normal map, and likely every texture in the game. It's pretty uncommon to do spec without normals, and removing normals from many assets would simply break the lighting.

    So really, I'm struggling to see why anyone would think normal maps were removed from Rage.

    Pre-baked radiosity/AO/whatever doesn't have any effect on normal maps or make them redundant. That isn't how it works. Normals are required for proper specular, which is a dynamic effect. Even without point lights, you likely have some ambient light affecting specular at all times.
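    The point that specular is inherently normal-dependent can be sketched with a standard Blinn-Phong specular term (a generic textbook illustration, not Rage's actual shader):

```python
# The Blinn-Phong specular term depends directly on the surface
# normal: tilt the normal and the highlight changes or disappears,
# which is why stripping normal maps would visibly break specular.
def blinn_phong_spec(n, l, v, shininess=32):
    def norm(a):
        m = sum(c * c for c in a) ** 0.5
        return tuple(c / m for c in a)
    n, l, v = norm(n), norm(l), norm(v)
    h = norm(tuple(l[i] + v[i] for i in range(3)))          # half vector
    n_dot_h = max(0.0, sum(n[i] * h[i] for i in range(3)))  # normal term
    return n_dot_h ** shininess

# Normal aligned with the half vector: full highlight.
print(blinn_phong_spec((0, 0, 1), (0, 0, 1), (0, 0, 1)))  # 1.0
# Tilt the normal 45 degrees: the highlight all but vanishes.
print(blinn_phong_spec((1, 0, 1), (0, 0, 1), (0, 0, 1)) < 0.001)  # True
```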
  • commander_keen
    Yes, that was my point. The lighting is baked into the diffuse megatexture, and that includes normal-map shading, so the only reason to have normal info is for specular shading or some other view-dependent thing. I haven't seen any screenshots or videos that actually show noticeable specular on world objects.

    The amount of data that would be freed is very high and seems like the first thing they would go for when trying to get it on a disk.

    Not only do you need 3 normal channels and 3 spec channels (hopefully), you also need info about where the light is coming from to cast the specular. That could be done at runtime from actual realtime lights, but in that case you also need to know what lights hit what pixels and at what intensity (EDIT: I guess realtime or prebaked shadow maps could be used for this, but that's still quite a bit of work, especially with lots of lights in view, plus it only works on direct lighting). The alternative is to store an RNM for the entire thing. That is a massive amount of data for a single shading feature, which seems to either not be very visible (if it's there), or not be very important (if it's not).

    EDIT:
    There is definitely specular on at least some parts of the environments:
    [image: rage-20_837_522_90.jpg]

    Maybe it's an option per map or per object?
  • pior
    Yeah, I think if Rage had no spec whatsoever it would be pretty obvious in every video shown! It would simply look off, since any specular highlight is view dependent... I am 99% sure that a megatexture stamp holds at least diffuse, normals and greyscale spec...