I don't know if the current xNormal has this option, or if it's even possible, but maybe add a way to set up multiple meshes in a queue? Usually with complicated meshes I have to bake out smaller pieces individually to avoid errors, and it can get super slow. Thanks! You're a god.
That's looking great! How is the viewport planned to work? It would be cool to have an "activate viewport" button, so you can simply start up the program without having to wait while it draws a (possibly) very, very high-poly mesh, because normally I don't use the viewport at all; I just bake my maps, and I love that the program starts up very fast. And cool that you're working on v4.0 again!
Instead of doing a batch on multiple pieces and then having to combine them all later, you can just create an "exploded" scene where you move your objects apart so they aren't clipping together. Then you can render your normals/AO all in one go and save a massive amount of time worrying about batching.
For normal maps that's good, but how is the exploded workflow good for AO? How will the exploded high-poly pieces occlude each other (since they are far away from each other) to create the shadows?
@rollin: Yep, I'll add a "disable/freeze viewport" button.
@ErichWK: You'll be able to define which meshes you want to use for baking. For example, when you press the "Render" button a dialog will appear; there you select which low-poly meshes you want to use, and the same for the high-poly meshes.
For the AO, a cast shadows / receive shadows option will be enabled, so you could define which high-poly meshes cast shadows onto other meshes.
I could add an automatic explode option too.
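A per-mesh cast-shadows flag like the one jogshy describes is, at heart, just a filter inside the occlusion ray loop. Here's a minimal Python sketch of that idea; every name is hypothetical, and each high-poly mesh is stood in for by a bounding sphere rather than real triangles:

```python
import math
import random

def ray_hits_sphere(origin, direction, center, radius):
    # Standard quadratic ray/sphere test; returns True on a forward hit.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so a == 1
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return t > 1e-6

def ambient_occlusion(point, meshes, n_rays=256, seed=0):
    # Fraction of random-direction rays blocked, counting *casting* meshes only.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rays):
        d = [rng.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in d)) or 1.0
        d = [x / norm for x in d]
        for m in meshes:
            if not m["cast_shadows"]:
                continue  # this mesh is excluded from occlusion
            if ray_hits_sphere(point, d, m["center"], m["radius"]):
                hits += 1
                break
    return hits / n_rays
```

Flipping `cast_shadows` off for a mesh removes its contribution without moving anything, which is essentially the "explode without exploding" behaviour discussed below.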
Maybe there is some way to achieve the same thing, but it would be awesome if you could exclude high-poly objects from baking on a per-low-poly-object basis, maybe as a right-click option on the low-poly objects. Then you could render the whole AO map in one go and get proper shadows from all high-poly objects, but only bake from the objects relevant to each low-poly object. Kind of like an explode, except without moving anything. I guess this automatic explode might be capable of the same thing, though.
http://wiki.polycount.net/Ambient_Occlusion_Map
I know that, but it isn't the same as the multiple high-poly elements casting shadows on each other. That method uses the low poly to cast them, which is useful but not what I am talking about. Essentially, exploding the high poly results in inaccurate AO of the high poly.
Oh dear! Thanks for the responses, and jogshy, super excited for xN4! It's an absolute lifesaver, considering Maya's baking process can be temperamental at times.
Man, the next xNormal looks super sweet. I love the idea of the viewer too, so I could look at my low poly to see if the normal map was seamless, and in real time. I was hoping you would upload a video of a character in the viewport to see how the new viewer will play out. Keep it up, man!
I'm currently playing a bit with DX11's tessellation and new techniques like face mapping (Ptex, etc.). I would like to hear what you (the artists) think about these new techniques. Are you using them currently in your pipeline? What problems are you finding using them? What new functionality would you require to use them effectively?
I'm using xNormal to bake my diffuse, AO, etc. maps for use in CryEngine 2. But if I want to use normal maps in CE2 without seams in the normal map where the UV seams are, I need to use a feature from 3ds Max's Render To Texture: it's called "output as normal bump" and is a checkbox pretty much at the end of the Render To Texture dialog. The problem is I can't find such an option in xNormal. Is there such a feature in xNormal? If not, could you integrate one? Then I wouldn't need an extra normal-map bake in 3ds Max when I want seamless normal maps in CE2.
Cheers, sebi3110
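For what it's worth, those seams usually come down to the baker and the engine disagreeing about the tangent basis: a tangent-space normal is just the surface normal expressed in the per-vertex (tangent, bitangent, normal) frame, so if the two programs build that frame differently, the map no longer lines up across UV splits. A toy sketch of the basis change, assuming an orthonormal TBN (both function names are made up for illustration):

```python
def to_tangent_space(n_world, t, b, n):
    # Express a world-space normal in the (T, B, N) frame; with an
    # orthonormal basis the change of basis is just three dot products.
    dot = lambda a, c: sum(x * y for x, y in zip(a, c))
    return (dot(n_world, t), dot(n_world, b), dot(n_world, n))

def encode_pixel(v):
    # Remap a -1..1 component into the 0..255 range stored in the map.
    return round((v * 0.5 + 0.5) * 255)
```

A surface normal equal to the vertex normal comes out as (0, 0, 1), i.e. the flat blue of an untouched normal map; any mismatch in how T and B are derived shifts those dot products and shows up as a seam.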
While it's true you can end up with "double AO" when doing the high + low AO method, for all practical purposes it's a much better workflow. Sure, it isn't 100% "correct", but I've never had a situation where the quality of an asset suffers from it. Plus you get these benefits:
1. It works with the standard workflow you should be using to bake normals.
2. "Low AO" is actually more accurate to the in-game mesh you'll be using. A good example is a 32-sided cylinder intersecting another object in the high but only an 8-sided cylinder in the low; this can often create an "AO shape" that doesn't match the low poly well and points out the jaggedness of the low even more.
3. Separating your AO into low and high passes gives you more flexibility in editing. A very good example: you can easily mask mesh chunks that rotate, move, etc. and expose an area that was otherwise occluded, which is good for anything that needs to animate. With the standard high-only workflow this is a much more tedious process, as you have to work around the "high-freq" AO as well.
4. And of course, no need to do excessive editing to fix ray intersections. Even if the software could avoid these ray intersections for you, you still wouldn't get any of the above benefits, and many assets would still require a lot of rework on the AO side.
[edit] Lol, I just realized this post was about a billion years old. Oh well, still good information I hope!
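Combining the two passes at the end is a single multiply, exactly what a "Multiply" blend layer does in an image editor; keeping them separate until this step is what makes the editing flexibility above possible. A tiny sketch (grayscale values in 0..1, row-major lists standing in for image pixels):

```python
def combine_ao(low_ao, high_ao):
    # Multiply the two grayscale AO passes pixel by pixel, the same result
    # as stacking them as "Multiply" layers in an image editor.
    return [[a * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(low_ao, high_ao)]
```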
I decided to tackle tessellation and began researching it lately. There seems to be very little out there as far as an artist-centric pipeline and techniques go. As any new tech develops, surely there should be equal input from a large artist community. (If any in-depth pipeline discussion already exists, I'd sure appreciate a pointer in that direction.)

What I have tried so far (an often-used strategy with new tech) is to just substitute my assets into the whitepaper demo sample files and eyeball the results. However, when trying the same with the June 2010 DirectX SDK from MSDN, it appears that such a pipeline may not be as straightforward as many have assumed, since density-based and LOD-distance methods are what make adaptive strategies "fast".

The mapping needed to support these techniques (within the SDK samples at least) involves two parts: a "height" map, consisting of a normal map carrying that information, and an edge-definition map, which encodes edge integrity as red on black (the edge "density" in the red channel). In similar pipelines (Unigine? things I've seen on YouTube?) I have noted workflows where the red map seems to contain both height and edge information. I'm not sure whether the samples in the SDK support subdivision-surface displacement, vector displacement, or PN triangles. However...
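If that edge-definition map really is just a per-edge density painted into the red channel, then turning it into a tessellation factor amounts to sampling the red values along each UV edge and scaling the average. A speculative sketch of that step; nothing here is taken from the SDK samples, and the names and the 1..max_factor mapping are assumptions:

```python
def edge_tess_factor(density_map, uv0, uv1, max_factor=16, samples=8):
    # Average the red "edge density" along the UV edge and scale it into a
    # tessellation factor; density_map is a row-major grid of 0..1 values.
    h, w = len(density_map), len(density_map[0])
    total = 0.0
    for i in range(samples):
        t = i / (samples - 1)
        u = uv0[0] + (uv1[0] - uv0[0]) * t
        v = uv0[1] + (uv1[1] - uv0[1]) * t
        x = min(w - 1, int(u * w))
        y = min(h - 1, int(v * h))
        total += density_map[y][x]
    return 1.0 + (max_factor - 1.0) * (total / samples)
```

An all-black edge stays untessellated (factor 1), a fully red edge gets the maximum factor, and everything in between scales linearly.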
This method seems to be reinforced by the DirectX 11 tessellation features recently exposed in UDK, where:
Ok, I am judging this based on 5 minutes of quick inspection as I haven't had a chance to find out about this feature officially. So take it with a grain of salt and rest assured we will have complete documentation of it in the near future on UDN.
That input takes in a 2-component vector I believe (X and Y, or R and G...however you want to look at it). The first component defines the tessellation factor for the edges and the second component defines the tessellation factor for the interior. It should also require the D3D11TessellationMode property of the base Material node (in the D3D11 section) to be set to something other than MTM_NoTessellation. Most importantly, it only works if your video card supports DirectX 11.
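Read that way, the material just has to output two numbers per point. Here's a guess at what feeding that input could look like as a distance-based falloff; the near/far/max values and the quadratic interior falloff are my own assumptions for illustration, not anything documented for UDK:

```python
def tess_factors(distance, near=5.0, far=50.0, max_tess=16.0):
    # Two-component output like the input described above: edge factor
    # first, interior factor second; both fall off with camera distance.
    t = max(0.0, min(1.0, (far - distance) / (far - near)))
    edge = 1.0 + (max_tess - 1.0) * t
    interior = 1.0 + (max_tess - 1.0) * t * t  # interior drops off faster
    return (edge, interior)
```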
In which case, I imagine this might be a good direction to start? If only I knew that the edge-density map was something that could easily be baked and subsequently previewed within a baking application's DX viewer (along with all its tessellated glory). Perhaps in xNormal 4? Um... please? Anyone more tess-savvy than me? Please hip me to the goods.
Another nervous question, unanswered, regarding tessellation as an LOD "substitute": I am wondering if, for such a strategy to work, the assumption is that the only geometry to exist would be the lowest geometry possible (what you currently use for your lowest LOD level), expecting tessellation mapping alone to carry out all the magic of re-introducing the original high detail. From the examples I have seen so far, the tessellated outcome at times seems to vary wildly. (I wouldn't trust displacement to find the best silhouette from my lowest LOD; maybe from a 4000-tri model, but a 600-tri model??) I wonder if a hybrid approach will exist, using both methods (original LOD handling for extreme distances and tessellated solutions at medium to close range?). Or perhaps it is assumed that the next generation of consoles will have the power to handle high-poly, tessellation-friendly LOD bases.
Finding the perfect shape of the high poly is pretty easy with multiple tessellation levels, depending on how the tessellation works.

If it tessellates like TurboSmooth, I must admit it's hard (you'd have to render your maps from a TurboSmoothed version of your low-poly model).
+ height is adjustable, so one can bulge up the model or something
- uncanny workflow

But if it just tessellates without moving the generated vertices, the position information would depend only on the maps you're authoring.
+ common workflow
+ more or less exact recreation of the source mesh shape
- the shape of the realtime-rendered mesh is fixed, so no tweaking is possible
- each different mesh needs a scale value for the heightmap, otherwise the mesh would look weird
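The "tessellate without moving the vertices" variant really is a one-liner per vertex, with the per-mesh scale value as the only knob. A minimal sketch, taking 0.5 in the height map as "no displacement" (that midpoint convention is an assumption, not a standard):

```python
def displace(vertices, normals, heights, scale):
    # Push each vertex along its normal by the sampled height, remapped so
    # 0.5 means "no displacement", then multiplied by the per-mesh scale.
    out = []
    for (px, py, pz), (nx, ny, nz), h in zip(vertices, normals, heights):
        d = (h - 0.5) * scale
        out.append((px + nx * d, py + ny * d, pz + nz * d))
    return out
```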
Pretty interesting would be the two-phase displacement again: the engine has a very low-poly base mesh, good for animation but without visible detail like a nose and such; this gets tessellated and displaced to the base shape of the intended character, then tessellated again and displaced for the details. That way the production pipeline would require no rigging, just the low proxy mesh with UVs, and the artists can create their maps from their high-poly models; different models would be like skins in the '90s.
edit: plus, morphing from one mesh into another would just be texture blending
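And the morphing idea in the edit really is that cheap: blend the two height maps and run the same displacement. A sketch of the blend step (heights as flat lists of samples):

```python
def morph_heights(heights_a, heights_b, t):
    # Linear blend of two height maps; displacing with the result morphs
    # the surface from shape A (t=0) to shape B (t=1), which is all the
    # "morphing is just texture blending" idea amounts to.
    return [a * (1.0 - t) + b * t for a, b in zip(heights_a, heights_b)]
```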
I like some of the UI changes mentioned by Rick Stirling, like having a whole asset slot that lets you choose the high, the low and the texture; this would be good for batch baking too, putting in multiple whole assets to bake. But having the 3D view as part of the main interface? Really, I use xNormal because I can throw in a high poly with millions of polys without waiting for that crap to draw on screen. And as for the material editor, well, aren't most people going to be targeting a game engine? I'm not going to bother trying to set up an approximate material in xNormal when I can just pop into UDK or Marmoset and do the material there.
That's gotta be so discouraging, I'm not happy when I forget to save after 30 minutes! Best of luck on getting back on track and looking forward to xN4!
If xN4 needs to be monetized then make it like that, no problem, but don't stop developing it. Please! We need cool features like a rounded-edge baker. I can't imagine how hard it is to support a free tool with such a massive audience.
I didn't know it was this close either. Can we have a look at the interface? It's my biggest issue with xNormal (eyes bleeding, you know). Is it still being developed? Thx