No flaming, I just wanted to show everyone how fucking awesome this tech is. At TED
this guy demonstrated his product SeaDragon with PhotoSynth. SeaDragon is a zoomable environment with the super-sweet feature of rendering gigabytes of image data flawlessly and smoothly.
His technique (SeaDragon) is sheer brilliance and I'm pretty sure it's similar to what Carmack is using in his new engine.
Check out PhotoSynth for yourself.
I'm just sayin' that this is a monumental achievement in realtime computer graphics: it's not only revolutionary for games and CG, but for any piece of software that needs to display an assload of data in realtime (just watch the TED demo I linked above!). I can't wait for this tech to become widely adopted and put to use everywhere (and where wouldn't we want to render massive amounts of data in realtime?). Pretty cool stuff, right?
Replies
http://boards.polycount.net/showflat.php...true#Post211722
</cough>
I got too excited and didn't bother to search... and... sorry
As i said, i haven't tested it because my hardware is four years old. However, when i described it (in more detail) to the programmers i work with (i work at a gamedev company here in Greece), they said that it should work.
I hope to get new hardware soon so i can test that kind of stuff :-/
another thing is that often you want more than a single texture, and multiple UV sets. I think one would saturate multiple rendertargets quickly...
mostly I think it's some clever texture packing (using an offset texture or something similar). But well, once Enemy Territory is released, we all will know
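Just to illustrate the offset-texture idea (this is pure guesswork about how they do it, and every name, table size, and page size below is invented), here's a tiny CPU-side sketch in Python of how an indirection table could remap a huge virtual texture into a small physical atlas:

```python
# Sketch of an indirection ("offset") texture lookup for virtual texturing.
# All names and sizes here are invented for illustration only.

PAGE_SIZE = 128   # texels per side of one physical page
TABLE_SIZE = 4    # indirection table covers 4x4 virtual pages (toy case)

# Indirection table: virtual page (px, py) -> physical atlas page (ax, ay).
# In a real renderer this would be a small texture sampled in the shader;
# here it's just a dict with an arbitrary remapping.
indirection = {(px, py): (py, px) for px in range(TABLE_SIZE)
                                  for py in range(TABLE_SIZE)}

def virtual_to_atlas(u, v):
    """Map a virtual UV in [0,1) to a texel coordinate in the physical atlas."""
    px, py = int(u * TABLE_SIZE), int(v * TABLE_SIZE)  # which virtual page
    ax, ay = indirection[(px, py)]                     # where that page lives
    # fractional position inside the page
    fu = u * TABLE_SIZE - px
    fv = v * TABLE_SIZE - py
    return (ax * PAGE_SIZE + fu * PAGE_SIZE,
            ay * PAGE_SIZE + fv * PAGE_SIZE)

print(virtual_to_atlas(0.5, 0.25))  # -> (128.0, 256.0)
```

The point is that the surface only ever stores normalized virtual UVs; the indirection lookup decides which resident page actually gets sampled, so pages can be swapped in and out behind the scenes.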
Also yes, multiple textures with different UVs would need many render targets. On the other hand, from what i've seen in the id Tech 5 video, they don't seem to use multiple textures per surface (i'm not counting normal/specular/bump/other maps there, since those can use the same UVs and can likewise be broken into submaps, much like the diffuse maps).
Since this may be considered a limitation (one diffuse, one normal, one specular, etc., all with the same normalized UVs per *pixel*), you can think of it as a "limitation exchange": you accept that restriction, but you get "therionmaps" (as i like to call mine :-p).
Ah, i wish i had a GF8800...
...and a SLI motherboard...
(and a CPU for that mobo, and some compatible memory too and while we're there, let's get a new hard drive too - i don't *need* that, but my disk is full... and if i'm going to buy the above stuff, getting a new disk won't make much of a difference in the total cost)
the mipmap is dead..
this would also mean you could create heightmaps of near infinite complexity?
Heightmaps depend on the usage. For terrain geometry generated from heightmaps, i think a similar method (creating sub-heightmaps) would work. For usage in shaders (e.g. for parallax or relief mapping), they behave just like any other texture.
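To show what i mean by sub-heightmaps (just a toy sketch, the tile size and names are made up), you'd chop the big map into fixed-size tiles so only the ones near the camera need to stay resident:

```python
# Toy sketch: break one big heightmap into fixed-size sub-heightmaps.
# Tile size and names are invented for illustration; a real terrain
# system would stream tiles in and out based on camera distance.

TILE = 4  # tile edge length in samples (tiny, just for the example)

def split_heightmap(heights):
    """heights: square 2D list; returns dict (tx, ty) -> TILE x TILE tile."""
    n = len(heights)
    tiles = {}
    for ty in range(0, n, TILE):
        for tx in range(0, n, TILE):
            tiles[(tx // TILE, ty // TILE)] = [
                row[tx:tx + TILE] for row in heights[ty:ty + TILE]
            ]
    return tiles

# an 8x8 ramp heightmap
hm = [[x + y for x in range(8)] for y in range(8)]
tiles = split_heightmap(hm)
print(len(tiles))        # -> 4 tiles
print(tiles[(1, 0)][0])  # -> [4, 5, 6, 7]
```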
SGI had that in their high-end hardware in the old days
http://www.sgi.com/products/software/performer/presentations/clipmap_intro.pdf
This is great because you can have as many different texture layers, effects, and multi-pass maps (normal, spec, bump, etc.) in the entire scene... but it'll only cost you one drawcall. For a console game this would not be an issue. For PCs, if I remember correctly, Windows has a limit on how many drawcalls it can process in a window of time. That's why videocards often aren't really being used to their full potential: they're simply so much faster than what Windows can feed them. They basically sit there twiddling their thumbs waiting for Windows to give them something to do.
The only downside is you then run into issues of running out of texture memory. This method could easily run you into the high hundreds of megabytes, even gigabytes, of texture memory usage.
Pretty soon we'll probably have video cards the size of desks hanging off our PCs.
You shouldn't run out of texture memory if the scene is being sampled in screen-space: then it only streams the relevant textures from the HDD into memory. At that point you're talking about HDD speeds limiting the graphics quality.
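The key insight of screen-space sampling is that a surface never needs more texel resolution than the pixels it covers. A rough sketch of that idea (the function name and the exact heuristic are invented for illustration, not how any particular engine does it):

```python
import math

# Hedged sketch: estimate which mip level of a huge texture a surface
# actually needs, given how many screen pixels it spans. Only pages at
# (or near) that level would be streamed from disk. Names are invented.

def needed_mip(texture_size, screen_pixels):
    """Pick the mip whose resolution roughly matches screen coverage."""
    if screen_pixels <= 0:
        return int(math.log2(texture_size))  # off-screen: coarsest mip
    ratio = texture_size / screen_pixels
    return max(0, int(math.log2(max(ratio, 1.0))))

# A 32768-texel-wide surface seen at 512 pixels wide only needs mip 6,
# since 32768 / 2**6 == 512:
print(needed_mip(32768, 512))  # -> 6
```

So even a multi-gigabyte texture only forces a few megabytes of the right pages into memory per frame, which is exactly why the bottleneck shifts from VRAM size to disk streaming speed.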