I've been messing with different ways of doing this over the last few days and got great results (thanks for input from EQ as well), but I never went as far as the guys from EVE Online did. They created some rather nifty shader stuff that goes well beyond what I was doing.
http://www.eveonline.com/devblog.asp?a=blog&bid=724
I was wondering in particular if anybody understands what they were doing with the city light shader setup. They don't go into a lot of detail about it: the blog just shows UV chunks from random bits, UV'd in a wonky fashion, being mapped to a square texture. It seems like something was left out.
In-game, their results are fantastic. No texture stretching anywhere, and everything is nice and sharp. They even have storms in the planet cloud cover and such - it looks stunning.
Replies
That's some pretty awesome stuff, thanks for pointing it out, Vassago.
Anyway, I understand that the texture on the left is basically spherically mapped to the first UV set of the sphere, and that it contains data pointing to a place on an atlas texture. My question is about the baking procedure: was a custom sampling method created, or was one of Maya's default methods used? And what sort of method was it? I'm having trouble picturing it, but I'm thinking that maybe a series of quads was placed where cities would be, and they were mapped to somewhere on the atlas texture, then somehow UV-space info was transferred from the quads onto the UV set of the sphere. Is this somewhat accurate, or am I on the wrong track?
As for baking - my guess would be: lay out UV quads on an atlas sheet and RTT their UVs as a bitmap. Using this texture, place the quads all over the sphere surface, then project that sphere and RTT the diffuse.
Then we swapped the 'city light texture' with one that is an RGB encoding of the UVs. I had to create that texture using a cgfx shader, because some color correction/gamma was probably messing things up in Photoshop. Bake everything into the sphere's UVs, done.
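For what it's worth, here's a minimal sketch of what such a bake shader could look like - all names here are mine, not CCP's, so treat it as an illustration of the idea rather than their actual code:

```
// Hypothetical Cg/HLSL fragment program for the bake pass:
// write the incoming atlas UV of each fragment out as raw colour,
// so the render-to-texture result is a map whose RG channels
// literally store UV coordinates. No gamma or colour correction
// should touch this output.
float4 bakeUVAsColor(float2 atlasUV : TEXCOORD0) : COLOR
{
    return float4(atlasUV, 0.0, 1.0);
}
```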
I guess I can look for that UV color texture if needed.
Yeah, it's neat - too bad it's not mine ^^ Some guy at CCP dug that up from a paper, I think.
What's funny is that there were more artifacts with high-res textures. I guess trilinear filtering does a better job on a low-res texture than DXT compression does - UVs, in this case, are very linear information. The mask had to be high-res though, so they might have ended up in two different textures, or fetched from a lower mipmap; I don't really know about the final implementation.
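If I read the blog right, the runtime side is then just an indirect texture lookup. A rough sketch, with every sampler name being my assumption, that also shows why the filtering of the UV map matters - the hardware blends UV values rather than colours:

```
// Hypothetical runtime lookup. The sphere's first UV set samples a
// low-res 'indirection' map whose RG channels store a position on
// the city-light atlas; that position then fetches the detail.
sampler2D indirectionMap; // RG = baked atlas UVs
sampler2D cityAtlas;      // square atlas of city light patches
sampler2D cityMask;       // high-res mask for where lights appear

float4 cityLightsPS(float2 sphereUV : TEXCOORD0) : COLOR
{
    // DXT-compressing indirectionMap distorts the stored UVs,
    // which is (presumably) why a small uncompressed texture
    // filtered trilinearly held up better.
    float2 atlasUV = tex2D(indirectionMap, sphereUV).rg;
    float4 lights  = tex2D(cityAtlas, atlasUV);
    float  mask    = tex2D(cityMask, sphereUV).r;
    return lights * mask;
}
```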
UVs... are they even always linear? ... Maybe.
I couldn't find the file, so I made a new one the same way:
http://www.mentalwarp.com/~moob/show/polycount/uvColor.png
I didn't test it, even though I don't see anything that could really go wrong.
(A 2048 PNG of a lossless gradient is just 22 KB? Nice ^^)
A 256-pixel gradient should be able to store the full range of 8 bits (one texel per value), so I also rendered a 128-bit DDS. But that one seems to be in linear space, and I can never get the exact same image when I convert it to 8 bits - another interesting mystery. Anyway, I can't tell which one is more accurate... but it shouldn't matter at all.
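My guess on that mystery: if the converter treats the gradient as colour and runs it through the sRGB transfer curve on the way down to 8 bits, every value shifts. This is just the standard sRGB formula, nothing specific to the EVE setup:

```
// Standard linear-to-sRGB transfer function. If a DDS converter
// applies this to UV data stored as linear floats, the 8-bit
// output will not match a gradient authored directly in 8 bits.
float linearToSRGB(float c)
{
    return (c <= 0.0031308) ? 12.92 * c
                            : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
}
// Example: linearToSRGB(0.5) is about 0.735, so 8-bit 128 would
// come out as roughly 188. For UV data the whole chain needs to
// stay linear, which is what baking through cgfx sidestepped.
```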