Didn't want to resurrect the old "Approach to techy stuff" thread so I'll post it here. Please do add your ideas and views on this subject.
I was tempted to try this shape myself to see what would work best. Baked with Maya's Transfer Maps.
So what really is the best approach for a surface like this?
Replies
I would not bevel something like the one 2nd from the left unless it's going to be really huge or in the player's face all the time. You don't get a much better normal map result, in fact it can be harder to UV-map too, and you waste polys.
There is a time and a place for geometry like that, though.
If all you were doing was beveling the edges, I wouldn't use a normal map for that. Adding bevels to the geometry isn't nearly as costly as it once was, especially on big objects like the edges of buildings. If you can eliminate one texture from going through the pipeline by adding a few polys to the geometry, then it's a smarter move resource-wise.
hahaha I wasn't commenting on which would be best. Oops! I agree, far right for that. Or possibly the second from the left, depending on the object's place and prominence in the world.
I was trying to say I would only use the far left as a high poly if I planned to take it into a sculpting app; it's a huge waste to sub-d something that much just for a few bevels... and it's even more of a waste to normal map something just for a bevel, when you might be able to get away with tossing a few more polys on the object.
But yeah, to actually answer the question, far right. But make the most out of your normal map, toss in more stuff than just a weak bevel!
But if you plan on taking the object into the UT3 Engine, those hard edges with smoothing groups applied across them wouldn't light properly. So the two middle ones would work best for that.
If you have all edges soft (1 smoothgroup) you're going to get smoothing errors with the one at far right - what you posted here looks like a render rather than realtime.
If you bevel edges like the 2nd one from the left, you won't get many smoothing errors, however the mesh will still look a bit funky in a lot of engines.
If you have hard edges (multiple smoothgroups), you're best off using the image below, and you'll have to break the UVs into islands to avoid seams (the added edges are unlikely to affect performance, see the quick vert-count sketch below, and it still doesn't hurt to have a clean mesh).
Hard-surface meshes with normal mapping are cool, but smoothing errors unfortunately hinder things.
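A quick way to see why splitting the UVs at those edges costs so little: GPUs already split a vertex whenever any of its attributes differ, so an edge that is hard (split normals) and a UV seam at the same time only pays that cost once. A toy Python sketch with made-up numbers, nothing engine-specific:

```python
def count_gpu_verts(corners):
    """corners: (position, normal, uv) per face-corner; a GPU splits a vert whenever any attribute differs."""
    return len(set(corners))

# One shared corner of an edge, seen from the faces on either side:
soft_shared_uv = [((0, 0, 0), (0, 0, 1), (0.5, 0.5))] * 4           # soft edge, shared UVs
hard_shared_uv = [((0, 0, 0), (0, 0, 1), (0.5, 0.5)),
                  ((0, 0, 0), (1, 0, 0), (0.5, 0.5))] * 2           # hard edge, shared UVs
hard_split_uv  = [((0, 0, 0), (0, 0, 1), (0.5, 0.5)),
                  ((0, 0, 0), (1, 0, 0), (0.9, 0.5))] * 2           # hard edge + UV seam

print(count_gpu_verts(soft_shared_uv),   # 1 vert
      count_gpu_verts(hard_shared_uv),   # 2 verts - the hard edge already splits it
      count_gpu_verts(hard_split_uv))    # 2 verts - adding the UV seam costs nothing extra
```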
They are using a default Lambert material, meaning no spec. One directional light is lighting all of them.
Oh, and of course the 3 to the right each have a bump2D node + file node, with normal maps generated by comparing them to the far left one.
#3 isn't absolutely terrible, but given the cost of extra polys and the worse results, I wouldn't really recommend using it. You'd have to add a lot more edges to get a result comparable to using smoothing groups, and at that point it really isn't worth it.
EDIT: Touche, EQ.
I personally blame the widespread misconception that models with OS maps can't be animated or deformed; both claims are incorrect.
Flag system as in making it possible to reuse texture areas with overlapping UVs? Like a flag for each UV shell?
We've discussed this a few times here, and I think there are a few different ways to do this. One of the simplest, imo, is to just offset the mirrored UVs out of the 0-1 range, making it easy for the shader to "tag" them as mirrored (rough sketch below).
Another would be to use a different material on the mirrored half that carries a flip tag? This wouldn't be very efficient.
And there's a system I've talked to CrazyButcher about a few times, a sort of hybrid tangent/OS system where you use the tangents to figure out the direction/rotation/etc. I think that could be a very robust system if it's possible.
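A rough sketch of the offset idea, with made-up data and the shader side written as plain Python just to show the logic:

```python
def tag_mirrored_shells(uv_shells):
    """uv_shells: list of dicts like {'uvs': [(u, v), ...], 'mirrored': bool}.
    At export time, push every mirrored shell one unit along U as the 'tag'."""
    for shell in uv_shells:
        if shell['mirrored']:
            shell['uvs'] = [(u + 1.0, v) for (u, v) in shell['uvs']]
    return uv_shells

def shader_side(u, v):
    """What the shader would do per pixel with the tagged UVs."""
    mirrored = u > 1.0                       # the tag: shell lives outside 0-1
    u_lookup = u - 1.0 if mirrored else u    # wrap back for the texture fetch
    flip = -1.0 if mirrored else 1.0         # flip handedness/bitangent for mirrored pixels
    return u_lookup, v, flip

print(shader_side(1.25, 0.5))   # -> (0.25, 0.5, -1.0)
```

So the texture fetch is identical for both halves; the shader just flips the handedness for anything it finds outside the 0-1 range.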
I mean, some of the ones sharing the same smoothing areas look like crap due to the lack of polygons and extreme angles, right? When these are compared to a perfectly smoothed high-res version, shouldn't the normal map neutralize the smoothing errors on the low-res version by adding the correct color variation?
With more accurate, slower offline rendering (scanline in Max for example), yes, that is the case. But the way most realtime engines do it, no, it just doesn't work out how you would want/expect. Welcome to the biggest annoyance of dealing with NM crap.
I blame the damned programmers.
Yeah I figured this was the case, reality kicking theory's ass as usual.
EarthQuake, I remember seeing assets you've made using OS maps, but I've never seen mirrored OS - how would you say that's working out? Are there any noticeable seams around the mirrored areas?
And if not, why the hell isn't that becoming mainstream? I would assume a lot of companies would benefit from it, since it saves polycount and eases up the work for their artists.
Still looking quite good imo. Not using Maya's "high quality rendering" - it won't do anything for you with the CGFX shaders.
EQ: What app did you bake / display those meshes in? It looks like the baker and/or shader aren't calculating things correctly. As you can see from the Maya bakes and shaders, the output normals are much stronger, and the previews are much less blobby (which implies it's calculating tangent basis better). You can see a couple of small issues on the far right mesh but not hugely noticeable.
Basically if you can't display a normalmap like that in an engine then something is probably wrong with your shaders, or your normal map baker.
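For reference, the per-triangle tangent derivation most bakers and shaders start from is roughly this (a numpy sketch; tools mostly agree on this part and then diverge on how they average per vertex, orthogonalize against the vertex normal, and pick handedness, which is exactly where bakes and viewports fall out of sync):

```python
import numpy as np

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    """Per-triangle tangent/bitangent from positions + UVs."""
    e1, e2 = p1 - p0, p2 - p0
    du1, dv1 = uv1 - uv0
    du2, dv2 = uv2 - uv0
    r = 1.0 / (du1 * dv2 - du2 * dv1)        # blows up on degenerate UVs, fine for a sketch
    tangent   = (e1 * dv2 - e2 * dv1) * r
    bitangent = (e2 * du1 - e1 * du2) * r
    return tangent, bitangent

p  = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
uv = [np.array(v, float) for v in [(0, 0), (1, 0), (0, 1)]]
print(triangle_tangent(*p, *uv))   # -> tangent (1,0,0), bitangent (0,1,0)
```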
Funny, in all the engines I've worked with, I had smoothing errors like EQ pointed out.
In the Doom 3 engine you guys have been working with, the tangent support is a bit more advanced, as the engine treats separate UV islands kinda like smoothing groups.
Also note that kodde removed polys from the bottom and I believe one of the sides, which eases the smoothing errors a bit.
EarthQuake's mesh is fully closed, and in most situations you would deal with a lot of closed meshes with sharp corners.
The other solution too, which I think a lot of people overlook, is to manually edit your normals. A little tweaking here and there can produce a perfectly nice normal bake with no extra overhead. No time lost either; the time you'd spend adding cuts or edge loops covers rotating/unifying a few vertex normals.
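In Maya that kind of tweak is just a couple of polyNormalPerVertex calls, something like this (object and vertex indices are made up, adjust to your mesh):

```python
import maya.cmds as cmds

# Point two problem verts along the same direction so the bake stays clean
# across that edge, instead of adding a support loop there.
for vtx in ('lowpoly.vtx[12]', 'lowpoly.vtx[13]'):
    cmds.polyNormalPerVertex(vtx, xyz=(0.0, 0.0, 1.0))
```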
Also, we no longer use the id-style "unsmoothedTangents" which you refer to, Chai - our game model formats now use vertex normal data directly from Maya, so if it looks correct in Maya then it looks correct in the game.
In the meantime I wrote a Maya script that does what unsmoothedTangents does - it just makes all UV island borders hard. It produces excellent bakes for most objects, and is mostly useful on mechanical/hard-surface things.
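The gist of a script like that would be something like the following. This is a rough reconstruction, not the actual script; the mesh name is a placeholder and the seam test is just a common heuristic:

```python
import maya.cmds as cmds

def harden_uv_borders(mesh='lowpoly_mesh'):
    """Soften everything, then set every UV-island border edge hard."""
    cmds.polySoftEdge(mesh, angle=180, ch=False)              # start all soft
    border_edges = []
    for i in range(cmds.polyEvaluate(mesh, edge=True)):
        edge = '%s.e[%d]' % (mesh, i)
        # Heuristic: an edge sitting on a UV seam converts to more than 2 UVs.
        uvs = cmds.ls(cmds.polyListComponentConversion(edge, fromEdge=True, toUV=True),
                      flatten=True) or []
        if len(uvs) > 2:
            border_edges.append(edge)
    if border_edges:
        cmds.polySoftEdge(border_edges, angle=0, ch=False)    # harden the UV borders
```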
Yes, I know that the object space map shouldn't differ whether the lowpoly has soft or hard edges. But since what works in theory doesn't always work in practice, I'm not taking any chances.
These are once again screengrabs from Maya 2009 viewport. Regular Lambert materials using Bump2D nodes, no specular. High quality rendering enabled to be able to view normalmap in realtime.
The UV seam is along the edges pointing at the camera. Notice the errors in the Tangent Space Soft Edges version.
I can share the shape in the latest picture. It has quite nasty smoothing when using 1 smoothing group / softening all edges.
Included is:
-Lowpoly version
-Highpoly version
-Tangent Space Normal Map, All soft edges
-Tangent Space Normal Map, All hard edges
-Object Space Normal Map
Also try generating your own normal maps.
Please do post results here.
Here's a tangent map rendered from Max compared to kodde's:
Here's a scanline render of the same scene:
It's amazing how much better a result you get with scanline :poly105: It would seem to me that the built-in realtime shaders in Maya (that is what you're using, right kodde?) produce a more accurate result than Max's. I dunno which one is more representative of how it would appear in game. I guess it varies from engine to engine.
Some of the examples shown in this thread look surprisingly good for how low-res the low poly is, but I am sure you will run into problems when rendering in a proper engine: it'll look soft, there will be waviness, and you'll just hate it.
This also kind of goes along with why I'm pretty much opposed to using object space normal maps. See, if I build my low poly tight enough (a decent amount of support edges, nothing crazy high) and I use tangent normal maps, I can reuse chunks of that normal map over and over. I can break off part of a desk to make a window trim, or use a chunk of a pillar as a floor tile. All I have to do is build another low poly mesh and unwrap it to my previous texture. If my normal maps are in object space, or my normal map is fucking crazy wavy from supporting some super low poly mesh, I can't do this.
oh BTW, I use xnormal to render all my normals so max and maya can eat it!
http://dl.getdropbox.com/u/499159/nmtest.rar
Also, if you guys are viewing a normal map from Maya in Max, or vice versa, make sure you invert the green channel. It looks like the main difference in NZA's result is that the right one doesn't have the green channel flipped as it should.
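If you'd rather not flip it by hand in Photoshop every time, a quick Pillow/numpy snippet does it (filenames are placeholders):

```python
import numpy as np
from PIL import Image

# Flip the green channel to swap between +Y and -Y normal map conventions.
img = np.array(Image.open('normal_in.png').convert('RGB'))
img[..., 1] = 255 - img[..., 1]                 # invert G only
Image.fromarray(img).save('normal_out.png')
```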
If Maya's or Max's viewports are not representative of how most game engine normal map implementations work, then what other app should an artist use to preview normal maps? That is, if the artist isn't already working directly with a specific engine at a games company.
xNormal's 3D viewer perhaps? Any ideas?
If you can get a plugin for xnormal to read the exact bi/tangents from your file format, that can be a huge plus too.
EDIT: err yea what EQ said.
- Avoid hugely expensive rendering techniques you don't find in realtime games.
- Stick to standard point lights.
- Don't use photometric lighting; it could give you drastically different results.
- Don't use final gather, global illumination, radiosity, or just about anything that calculates bounce lighting or ambient occlusion at render time.
- Don't use 50 lights to flood the scene and wash out shadows, faking some kind of sky light, when you'd never get that many lights around a character in game.
I'd also toss in that, even if you preview it in engine, the lighting setup in your test scene might be different than what players will finally see... so really, even testing it in engine isn't going to be as good as actually testing it in game =/
Ah, I'm not saying that it's not beneficial to delve into this as deeply as possible. I'm actually saying that a lot of the tests previously posted in this thread pretty much stop at viewing the results in Max or Maya and saying "yep this works" or "this looks like ass", and that isn't a very good test, because Max/Maya rendering is not accurate, especially if you render using scanline/mental ray (these are actually super best-case scenarios, because somehow they manage to make shit look perfect no matter how low or high poly the low-res mesh is).
Also, even if a dev provides an artist with a .fx shader for Max, the chances of it looking different in engine are still pretty high, just because of the way different devs handle importing geometry and generating all of the additional mesh info that we artists never see.
I would also say that the low poly methods that look the best in one engine are probably going to look different in another engine. Heck, I know that even in UE3, using different lighting methods on a mesh will reveal different problems with normal mapping. Vertex lighting, for example, will be much less forgiving of super low poly normal-mapped assets than light map lighting.
Now as far as UE3 goes, and a few other engines like idtech, don't these engines ship with their own normal map generators? You would think that you're going to get the best possible results using these custom-made apps, simply because they use the correct model format, with the correct bi-tangents/tangents etc.
I'm curious, Jordan, but is there a reason you don't use the Epic tool for generating normals? I've never done anything with UE3, so I may be missing something obvious here.
Do you mean image-based lighting when you say photometric? Because that's pretty common in games these days. And really, whether you're using a point light or convolved (is that the right word, lol?) lighting from a cube map, the problems we're talking about here are artifacts from smoothing errors. So really, if your normal is facing the wrong direction, you're going to get similar errors whether you're using a point light, PRT, image-based lighting, etc.
These days I use xNormal to render my normals. It's quick as hell and I don't have to worry about running out of memory. Max has become such a pain in the ass for rendering models, especially if they're in the multi-millions of polys, let alone over 10 mil.
This is why you are getting nasty-ass seams on your hard normal instance.
I would definitely favour the hard edge method out of your examples; the kind of wavy/multi-coloured interpolation in your soft edge cases is bad juju in my opinion.
A sexy smooth lavender colour is going to behave better.
-Things may look fine in a simple example like this, but if you deform your asset it could totally change the way the normals are interpolated across a face. You may also want to delete part of the asset later, which will also ruin the result.
A hard edge (flatter normal map) will not be affected so much and is more flexible.
-Also if your mesh is not triangulated I suppose the end result in game could be triangulated differently and not match.
-These are very simple shapes just inheriting bevels from their high poly brethren.
Things would be different if you had more shapes (screws, vents, whatever) modelled in the high poly. With soft normals across 90-degree edges, your only option is to have the surface transfer rays fire out interpolated around the edge.
This may warp and distort the shapes on the surface. If the edge is hard, the rays can be made to fire straight and parallel on flat areas and not distort the forms.
With a single bevel the 90-degree angle has been turned into 45, which is still going to interpolate across the flat areas and give some slightly sketchy results.
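A toy example of what those interpolated ray directions do near a corner (made-up normals and weights):

```python
import numpy as np

def ray_dir(bary, vertex_normals):
    """Bake ray direction at a surface point = barycentric blend of its vertex normals."""
    n = sum(w * np.array(vn, float) for w, vn in zip(bary, vertex_normals))
    return n / np.linalg.norm(n)

face_normal = (0.0, 0.0, 1.0)                      # a flat top face
soft = [(0, 0, 1), (0, 0, 1), (0.707, 0, 0.707)]   # 3rd vert averaged across a 90-degree edge
hard = [face_normal] * 3                           # hard edge: every corner keeps the face normal

weights = (0.1, 0.1, 0.8)                          # a sample point close to that 3rd vert
print(ray_dir(weights, soft))   # leans sideways -> details get warped near the edge
print(ray_dir(weights, hard))   # stays (0, 0, 1) -> rays fire straight and parallel
```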
To echo the point JordanW made about tangent space maps over object space ones: having this kind of interpolation over flat areas means you can't use the same flat area of the map on some other geometry. It becomes bespoke and locked down to the original sample geometry.
Sure, I remember people complaining about it long ago.
I generally use xNormal when I can. At 8ml, one of our programmers used the SDK to create an import plugin for xNormal so that we could read the exact normals, tangents/bi-tangents, etc. from our in-game file format. This helped us get results as accurate as possible with the tech we had.
So, what sort of format do you use in xNormal? Do you export SBM from Max? I guess if you get all of that info from Max when you export to Unreal, and the same info is being exported from Max when you export an SBM, you wouldn't really have any problems.
I'm just curious if anyone has brought up these issues with your tech team, or ever really had the need to (you would notice some differences between what you get in xNormal and what you get in game).