Ok, so there has been some disagreement between me and a friend of mine *cough* oniram *cough* about a technique I used to create a normal map: adding floating pieces. I had seen this work extremely well on other normal maps, so I figured, "why wouldn't it work with bigger objects?" because in the end it's all going onto a plane anyway.
Here is the piece of the floor texture I'm making. It's a sci-fi floor, btw.
Here is the image from a side perspective view, where you can see that the details are floating. Everything except the main grate body (the box around it and the X shape in the middle) is floating.
With the exception of one issue around the corners of the grate at the bottom (easily Photoshopped out), I had no problems rendering it.
I'm trying to find out if there is truly a drawback to this when using it for floors. It's a quick and easy way to stack detail without having to worry much about topology. Oh, and excuse the screen grabs.
Replies
If you looked at some of the source highpolys for Doom3's textures (made in Lightwave), they were stacked up a good 10-20 units out from each other and rendered as a flat normalmap using "renderbumpFlat" in Doom3.
I'll have to try that out.. nice!
http://www.iddevnet.com/quake4/ArtReference_CreatingModels#head-3400c230e92ff7d57424b2a68f6e0ea75dee4afa
If I have gross-as-fuck messy shading/meshflow in the interior of an inset area, I will frequently put a plane or whatever in the bottom of it to hide that. I've been doing that in a production environment for three years and it has been nothing but useful. Hope this helps.
Seriously, end result is what matters, not workflow. If you sacrifice a goat to get your normal maps working and it does it consistently and efficiently, then sacrifice goats or use floaters or whatever. The only time a method isn't correct is when it doesn't give the desired output.
Hmm, I don't get it. What errors does this solve? Supersampling gives antialiasing around the edges of floaters, but it doesn't fix the problem where floaters don't cast good AO because they're too far away.
I thought EQ had a nice workaround for this.
You probably knew that already :)
Here's what he was trying to say.
Mental Ray naturally captures better line detail than scanline does without supersampling. The preset Ambient Occlusion (MR) option in Render To Texture will leave shadows in areas where you absolutely do not want them. The light tracer pretty much filters that out when you render a complete map with scanline. Supersampling is (sometimes) required just to account for the sharply filtered pixels.
On a more awesome note, you need to post a tutorial on the cage-import/export-ma-thingy so I can see and render out the AO for the foot locker (which I finished).
This way you get to keep the Falloff and Spread settings, giving you nice cavity shading that the light tracer can't do.
So what happens to the rest of the objects that are meant to create AO from further away than the max distance you set?
Why would you float geometry that is supposed to be providing you with cavity/intersection shading? That makes no sense. If it's supposed to be providing an indent, then float it; if not, just mate it with the surface.
Not necessarily floating geometry. Here's an example: you have geometry floating 1 unit above your mesh. If you set the max distance to 1, will the AO cut off at 1 for the entire mesh, even if you have other pieces of the mesh (not necessarily floating) that create AO from a distance further than 1?
I'm not sure I get how you're conceptualizing this, so I'll just explain it as I understand it.
For every pixel rendered, a cone of rays is shot from its world-space position. These rays won't return hits for faces more than the max distance away, so faces further than the max distance from the pixel will not cause occlusion shading. The end result is that corners and contact areas still get shadowed, but the floaters won't.
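To make that concrete, here's a toy per-pixel occlusion tally in Python. It's a sketch of the max-distance idea only, not a real baker: the `hit_distances` list stands in for the ray hits a renderer would actually trace against the high-poly mesh, and the `ambient_occlusion` helper is hypothetical.

```python
def ambient_occlusion(hit_distances, max_distance):
    """Fraction of sample rays that count as occluded for one pixel.

    hit_distances: one entry per ray in the sample cone; the distance to
    the nearest face that ray hit, or None if it escaped to the sky.
    Hits beyond max_distance are ignored, which is why a floater hovering
    further than max_distance above the surface casts no AO.
    """
    occluded = sum(
        1 for d in hit_distances
        if d is not None and d <= max_distance
    )
    return occluded / len(hit_distances)

# A pixel in a tight corner: most rays hit nearby faces, so it shadows.
corner_rays = [0.2, 0.4, None, 0.3, 0.5, None, 0.1, 0.25]

# A pixel directly under a floater hovering 1 unit up: every hit is at 1.0.
floater_rays = [1.0, 1.0, None, 1.0, None, 1.0, 1.0, 1.0]

print(ambient_occlusion(corner_rays, max_distance=0.5))   # corner still occluded
print(ambient_occlusion(floater_rays, max_distance=0.5))  # floater filtered out
print(ambient_occlusion(floater_rays, max_distance=2.0))  # larger max distance picks it back up
```

So the cutoff is per-ray, not per-mesh: nearby contact shading survives at every pixel, and only hits past the max distance get dropped.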
I'm not saying you're wrong, I'm just interested in why it's done this way.
Fake is exactly why. Because of the polygon limits in games, it's impossible to capture all the detail from a high-poly mesh. Baking the AO as an "extreme" is just a way to fake the shape of something so that it looks a lot more detailed than it really is. It also helps (speaking for myself here, dunno if many other people think this way) as a base for what to do with a texture.