If you use a cube with hard edges as the lowpoly model, xNormal 3.17.0 ***WON'T*** break the cage. It will use an averaged one by default ( that is, without discontinuities over the cage ).
However, if you need the cage's faces broken-by-smooth-group ( which is rare ) then you need to:
1. Load the 3D viewer
2. Show cage->Edit cage
3. Click on the "Break" button
4. Click on the "Save meshes" button and answer "Yes" to auto-assign the meshes.
( there's no need to select all the vertices... if no vertex is selected then the break operation will be performed over the whole mesh ).
Why do you need to load the 3D viewer, and why shouldn't this be done "in one click without loading the 3D viewer"? Because some break/weld operations require interpolation. That interpolation can produce undesired effects, so xN needs visual confirmation from the artist.
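To make the distinction concrete: an averaged cage uses one projection direction per vertex (the normalized sum of the adjacent face normals), while a broken cage keeps separate per-face directions across hard edges. A rough sketch of the averaged case, purely illustrative and not xNormal's actual code:

```python
import numpy as np

def averaged_cage_directions(positions, faces):
    """One projection direction per vertex: the normalized sum of the
    adjacent (area-weighted) face normals. Because every face sharing a
    vertex contributes to the same direction, there are no discontinuities
    across hard edges -- the 'averaged' cage described above."""
    directions = np.zeros_like(positions)
    for f in faces:
        a, b, c = positions[f[0]], positions[f[1]], positions[f[2]]
        face_normal = np.cross(b - a, c - a)  # length ~ 2x triangle area
        for idx in f:
            directions[idx] += face_normal
    lengths = np.linalg.norm(directions, axis=1, keepdims=True)
    return directions / np.maximum(lengths, 1e-12)
```

On a flat square split into two triangles every vertex ends up with the same (0, 0, 1) direction; on a hard-edged cube each corner gets the average of its three face normals instead of three separate ones.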
The cage no longer breaks the edges by default; however, *using* the cage is still a slow and overly complicated process. The default method should simply be: load your mesh into the lowpoly tab, set a ray-cast distance, and render with an averaged projection mesh; as it stands there are more steps than that to use the cage, period. There should also be an option to set the projection mesh back to mesh normals instead of averaged.
Honestly you should only ever need to go into the 3d editor, and manually set up a cage for very specific cases, not every model.
Why do you need to load the 3D viewer, and why shouldn't this be done "in one click without loading the 3D viewer"? Because some break/weld operations require interpolation. That interpolation can produce undesired effects, so xN needs visual confirmation from the artist.
This really isn't the case. An artist does not need to set up the cage manually for every asset, and I'll say it again: this must be done for every small change to the mesh, which is very, very *very* slow when you're dealing with test bakes, iterations and change requests on an asset. I've been using xNormal for YEARS, and rarely have I ever needed to visually see the cage. I'm sure some people do it this way, and the functionality is already there to do that, but the process of just loading in a mesh and baking should be as easy and painless as possible.
I just checked 3.17.0 Beta 5 (what I happen to have installed here).
The default rendering method in xN will render broken edges. Using a cage is an entirely different process and cannot be considered the "default behavior", as it requires a different workflow.
In comparison: load up Maya, go to Transfer Maps, add your high and low, and you will get a correct, averaged projection mesh created and assigned for you automatically.
Load up Max, set up an RTT for your lowpoly, and a cage will automatically be applied to your mesh and used.
In both of these apps you can do further tweaks to your heart's content, but they will *work* as soon as you assign your high and low meshes and hit render. This is not the case with xN.
New users have no idea you can even use xNormal to get a nice, averaged-cage result. They just think the default broken-edge method is how it works. This leads to lots of them simply assuming "this is how it works". Which is a problem.
I'm not trying to break your balls here Santiago, you know how much I respect you and the work that you've done with xN. I'm simply looking at this from an objective viewpoint as someone who has been around the block more than a few times with all the major baking tools, and it's clear to me that xN has the worst implementation when it comes to this problem.
I know what you mean, Earth. I should make it all more intuitive, easy to use and productive.
Keep in mind that xn3 is the sum of TONS of patches... and it has really become huge and difficult to manage. There is also a lot of confusion about the uniform ray distances, cages, ray blockers and the MatchUV feature.
xn3 is a huge mess, I admit it and I own it. It's currently too complex and big to change some things.
xn4 is a different thing because I have the opportunity to create all from zero ( == if you had the chance to change the things... which ones would you change? My answer: A lot :poly136: ).
Just some related things which are planned for xn4:
1. Kill the uniform ray distances. Why? Because I really don't like numeric/parametric things you cannot really visualise in an intuitive way.
2. All will be done visually using the averaged cage ( with the option of using the surface/geometry normals ), there won't be any break in the cage.
Good news: I found an algorithm to compute the cage automatically for 99% of the cases. You won't ever have to move the slider ( although I will allow you to move it and also to edit the cage's faces/vertices manually ).
3. Integrated viewer ( which btw, could be disabled to use less resources ) with on-screen manipulators ( to move vertices/faces, etc... ).
4. New load/save scene model instead of the load-everything-every-time one. You'll have a scene. You import things into the scene. The scene is loaded/saved completely, without external file references. Things will load much, much faster.
5. Auto-exploding. That's more a promise than a fact currently, but well.. :poly124:
6. Object lists ( yes, object enumeration in a list box so you can select things, etc... )
The point is that I could patch and patch and patch xn3 until I die... but I really prefer to accept that xn3 is not perfect and concentrate on xn4 instead, so I can change the core things which were a bad idea in xn3 :poly142:
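The automatic cage from point 2 isn't described in detail, but the idea can be sketched: offset every lowpoly vertex along its averaged normal by a fraction of the mesh's bounding-box diagonal, so a usable cage exists before anyone touches a slider. This is my own guess at the mechanism, not the actual xn4 algorithm:

```python
import numpy as np

def auto_cage(positions, vertex_normals, factor=0.02):
    """Hypothetical auto-cage: push each lowpoly vertex outward along its
    averaged vertex normal by 'factor' times the bounding-box diagonal.
    Scaling by the diagonal makes the default behave the same on a tiny
    prop and a huge building, which is the point of automating it."""
    diagonal = np.linalg.norm(positions.max(axis=0) - positions.min(axis=0))
    return positions + vertex_normals * (factor * diagonal)
```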
I'm looking to start using xNormal, but since it doesn't support material IDs I'd have to explode my low poly model to bake it.
Correct me if I'm wrong, but won't exploding my low poly break up my smoothing groups and mess up the bake results? I'm baking a model that's all one smoothing group.
You explode separate mesh chunks, not chop off individual sections from a continuous mesh. So no, what you're worried about is not a problem.
What do you mean, explode separate mesh chunks?
For example, I have a lot of characters that need to be continuous; let's say something like an army soldier, which has a lot of straps, etc...
If I tried to just use xNormal, the raytrace wouldn't work because there are too many distances to consider when projecting a small part onto the whole mesh, so I'd have to do it 1:1.
Straps high to straps low, etc... and take apart the lowpoly mesh. Or is there another way?
You can manually tweak the cage, render 2 maps (one with a large ray distance, one with a small one) and composite them together, or do some tests and tweak your mesh so that you don't have extreme differences between areas, i.e. make your low match up with your high a bit better.
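The two-map composite suggested above can also be automated outside Photoshop. A minimal NumPy sketch, assuming both bakes were rendered with the same known background color wherever the rays missed:

```python
import numpy as np

def composite_bakes(near_bake, far_bake, background):
    """Prefer the small-ray-distance bake; wherever its ray missed (the
    pixel still equals the configured background color), fall back to the
    large-ray-distance bake. Both bakes are HxWx3 uint8 arrays."""
    missed = np.all(near_bake == np.asarray(background, dtype=near_bake.dtype),
                    axis=-1, keepdims=True)
    return np.where(missed, far_bake, near_bake)
```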
If your straps are modeled as solid chunks welded to the character you really shouldn't have any problems; if it's sloppy and just intersected into the mesh you'll have issues, but you should reconsider your topology at that point.
Things like straps that would hang separately, bags or other items should simply be separate mesh chunks, which are easy to explode, and if they aren't separate you really shouldn't have issues with intersecting or other problems like that.
Taking apart the low should never be an option (on a continuous mesh), as you will break your mesh normals and introduce all sorts of smoothing errors.
A while ago I posted my problem in this thread and received a few suggestions, but I still get results like this:
Unfortunately, I tried all the advice.
Currently I'm using 128-256 rays, a 140 spread angle (tried 140-180), xNormal 3.17, HP model exported normals (tried averaged too), bias 0.08 (tried 0.2), and the baked AO still looks like this:
I think there could be a small difference between this problem in the past and now. Currently I can't improve the AO quality by increasing the subdivision of the HP geometry; before, I think I could make this posterization denser.
I'm running into a very frustrating problem, something I have never encountered before. I baked a nice-looking normal map for a character, but the only problem was some flipped normals on the high res. Other than that the bake was pretty sweet.
So I fixed the flipped normals and then reimported my high res OBJs and now I keep getting results like this:
Seems like rays are not hitting some areas on my high res, but no matter what I increase my frontal ray distance to I keep getting the same problem. My high res meshes look fine and are lined up perfectly with my low res. Any idea as to why this is happening?
This is a really stupid suggestion, but based on your image I'd say: are you sure you didn't export both your high AND low to your highpoly file?
Go into the 3d viewer and hide everything but the highpoly, then see if there isn't a lowpoly in there as well.
Currently I'm using 128-256 rays, a 140 spread angle (tried 140-180), xNormal 3.17, HP model exported normals (tried averaged too), bias 0.08 (tried 0.2), and the baked AO still looks like this:
128-256 rays is preview quality really. Put that to 1024-2048 with 4AA for production quality.
And spread angle==140 seems too much. 160 is fine.
Use the cosine distribution.
To remove banding you can also try enabling the jittering.
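For context on those two settings: a "cosine distribution" weights the AO rays toward the surface normal (where they contribute most), and "jittering" replaces the regular sample pattern with a randomized one, trading visible banding for less objectionable noise. A hedged sketch of both ideas, not xNormal's internals:

```python
import math
import random

def cosine_sample_hemisphere(u1, u2):
    """Map two uniform numbers in [0,1) to a unit direction on the +Z
    hemisphere with cosine-weighted density: more rays near the normal,
    fewer at grazing angles, so AO converges with fewer samples."""
    r = math.sqrt(u1)
    theta = 2.0 * math.pi * u2
    return (r * math.cos(theta),
            r * math.sin(theta),
            math.sqrt(max(0.0, 1.0 - u1)))

def jittered_samples(n, rng=random.random):
    """One random offset inside each of n strata. The regular pattern
    (i / n) produces banding; the jitter breaks it up into noise."""
    return [(i + rng()) / n for i in range(n)]
```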
I baked a nice-looking normal map for a character, but the only problem was some flipped normals on the high res. Other than that the bake was pretty sweet.
So I fixed the flipped normals and then reimported my high res OBJs and now I keep getting results like this:
As MightyPea said, it looks like you exported the lowpoly attached to the highpoly.
Every time it says the number of vertices/topology doesn't match, even though it does. I've even gone as far as converting it to an Edit Mesh so it's triangulated and exporting that, and it still doesn't work. Anyone have a solution?
Remember you cannot add/remove vertices on the cloned mesh. You can only move vertices.
Be sure also that all the lowpoly mesh's faces have UVs ( do not delete UVs in your Unwrap UV tool ).
If you're performing a manual triangulation be sure your sequence is:
1. Triangulate the lowpoly mesh
2. Clone the lowpoly mesh.
3. Move vertices on the lowpoly mesh
4. Save the cloned mesh as external cage
and NOT
1. Clone the lowpoly mesh
2. Move the vertices
3. Triangulate
(because some programs use the face normals to decide the triangulated edges... so the topology won't match)
Avoid the max2obj exporter ( use the gw:Obj one if you need to output as .OBJ).... but I would save as .SBM to avoid problems.
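The reason for the triangulate-first order can be shown with a toy exporter that splits each quad along its shortest diagonal (a hypothetical rule, but of the geometry-dependent kind jogshy describes). Moving a cage vertex first can flip which diagonal is shorter, so the clone triangulates differently from the lowpoly and the topology no longer matches:

```python
def shortest_diagonal_triangulation(quad):
    """Split quad (a, b, c, d) along its shorter diagonal, mimicking an
    exporter that derives the split from the geometry. Editing cage
    vertices BEFORE triangulating can flip the chosen diagonal -- exactly
    the topology mismatch described above."""
    a, b, c, d = quad

    def dist2(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    if dist2(a, c) <= dist2(b, d):
        return [(0, 1, 2), (0, 2, 3)]  # split along diagonal a-c
    return [(0, 1, 3), (1, 2, 3)]      # split along diagonal b-d
```

A flat unit square splits along a-c; push vertex a up by two units (as a cage edit might) and the same quad splits along b-d instead.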
Remember you cannot add/remove vertices on the cloned mesh. You can only move vertices.
Be sure also that all the lowpoly mesh's faces have UVs ( do not delete UVs in your Unwrap UV tool ).
If you're performing a manual triangulation be sure your sequence is:
1. Triangulate the lowpoly mesh
2. Clone the lowpoly mesh.
3. Move vertices on the lowpoly mesh
4. Save the cloned mesh as external cage
and NOT
1. Clone the lowpoly mesh
2. Move the vertices
3. Triangulate
(because some programs use the face normals to decide the triangulated edges... so the topology won't match)
Avoid the max2obj exporter ( use the gw:Obj one if you need to output as .OBJ).... but I would save as .SBM to avoid problems.
What I tried to do was convert my edit poly to edit mesh, then cloned it.
Then I used projection to make a cage and clicked export cage.
So I can't use the export cage option, I have to use a different model ?
Well, I got it working great using the xNormal SBM exporter. There is a tutorial on the xNormal site on how to set it up; super easy, and the cage works great using it.
Basically just setup the stack in max as follows
+Projection
+Xform
+Edit Mesh
+Edit Poly
Export using SBM, import into xNormal and check the "use cage" box. Magic!
128-256 rays is preview quality really. Put that to 1024-2048 with 4AA for production quality.
And spread angle==140 seems too much. 160 is fine.
Use the cosine distribution.
To remove banding you can also try enabling the jittering.
Ohh man, thanks for this jittering tip. I should have figured it out by myself ^^ Well... now I should rebake all my stuff again.
1024-2048? ;o I don't have an i7 980X :P
btw: jogshy, please add a real (separate) limit ray length for AO. It's really annoying and makes using xNormal impossible in some cases.
Currently you can set up a cage and check the "Limit ray distance" option; I'm not sure if that fits your needs. xn4 will also allow you to control which objects cast/receive AO shadows.
This is something that I know Max can do, but I'm not sure if xNormal can.
In Max, let's say I have a complex object of some sort;
let's say it's a gun with 3 parts.
Normally, for proper bakes, you would need to explode your mesh into those 3 chunks prior to exporting to xNormal, so they do not touch one another and their normals bake properly per chunk.
Now, in Max there is an option to set a material ID for each section on both the high poly and the low poly, and an option when baking to only let rays hit the selected material ID.
So if chunk 1 low poly and chunk 1 high poly both share the same material ID, then chunk 2 low and high and chunk 3 low and high will not affect it, even if the mesh is in its non-exploded form.
This is great for hard-edge stuff, but my question comes down to this:
with ZBrush, if I'm sculpting something complex (an organic/hard-edge based thing),
I would need to sculpt/export each chunk separately, bake each part one by one and then combine all the maps in Photoshop for a proper normal map (though baking all the chunks/OBJs at once is OK for an AO map, since they're all intersecting each other properly).
Is there a way in xNormal to set it up so that a lowpoly chunk only affects the highpoly chunk with the same material ID, naming convention or something in that manner?
I've been reading around and I'm not sure if this is something xNormal can do or not.
If this is possible I really would like to know; something like this would really help our workflow at our job.
If I'm not explaining this well I can draw a quick diagram.
Speaking of which, I remembered at one point you mentioned multiplatform support for 4.0. If not already planned, can you add support for internet rendering? I have a server box at a different location running Ubuntu enterprise that I can steal some rendering power from when it's idle. Hell, maybe even make its address open for some users (if I can dedicate just one or two cores without affecting the rest of the system's processes). If I get a cheapo ATI card, since it can fit one, I could add OpenCL support to it as well.
I remembered at one point you mentioned multiplatform support for 4.0. If not already planned, can you add support for internet rendering?
xn4's renderer should operate over TCP/IP ( agent+coordinator model ), so you'll be able to render through the Internet, yep. In fact, I'm currently investigating the Amazon EC2 service.
Now, in Max there is an option to set a material ID for each section on both the high poly and the low poly, and an option when baking to only let rays hit the selected material ID.
Well, currently xn3 does not support MultiSubObj materials.
xn4 should support what you describe, plus the ability to render multiple UV chunks to different files in one pass. You'll also be able to write custom shaders, so you could output any information to "multiple render targets" or define a heuristic to skip certain hits.
Some cards have reduced accuracy in order to gain some speed. In fact, many pre-DX10 GPUs use a 16/24-bit floating-point model which, together with texture compression/automatic mipmap generation, can affect the normal map's representation.
AFAIK, in 32-bit color mode with a DX10 GPU you should not have problems displaying normal maps. Using OpenEXR/TIFF 16/FP can improve the appearance if the application can load 16-bit images ( like Photoshop... for games, 16 bits is absolute overkill ).
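To see why the bit depth matters, you can quantize a unit normal the way an integer texture stores it and measure the angular error. A toy measurement of my own, not how any particular GPU rounds:

```python
import math

def quantize_normal(n, bits):
    """Remap each component from [-1, 1] to an integer with 2^bits levels
    (as an unsigned-normalized texture would), decode it back and
    renormalize -- the round trip a stored normal actually survives."""
    levels = (1 << bits) - 1
    q = [round((c * 0.5 + 0.5) * levels) / levels * 2.0 - 1.0 for c in n]
    length = math.sqrt(sum(c * c for c in q)) or 1.0
    return [c / length for c in q]

def angular_error_deg(a, b):
    """Angle in degrees between two unit vectors."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))
```

At 8 bits per channel an oblique normal lands a fraction of a degree off; at 16 bits the error is orders of magnitude smaller, which is where smooth gradients stop banding.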
I have a request that might be a big parenthesis in the grand scheme of things, but I was doing a big batch render earlier and thought it would be nice to have Growl/Growl for Windows push notification support, so that when everything is done a notification is pushed out and my phone gives me a little beep about it.
Not essential, but useful in an office environment when I often set something to render and step away to discuss the latest issue with a co-worker.
I have a request that might be a big parenthesis in the grand scheme of things, but I was doing a big batch render earlier and thought it would be nice to have Growl/Growl for Windows push notification support, so that when everything is done a notification is pushed out and my phone gives me a little beep about it.
Not essential, but useful in an office environment when I often set something to render and step away to discuss the latest issue with a co-worker.
Interesting, thx! I'll take a look into Growl, it's a new thing to me :poly136:
Curious how Xnormal 4.0 is going, any cool news for us Jogshy?
I had to restart the development from zero one month ago. I've changed from wxWidgets to Qt... and it seems the 3.17 versions never end.
Not essential, but useful in an office environment when I often set something to render and step away to discuss the latest issue with a co-worker.
Wow, Growl ROCKS! I really like it!
Ok, I have added Growl support for the upcoming 3.17.2. I've placed three events currently:
1. Map rendering finished
2. Conemap finished
3. Simple AO tool finished
which are the most time-consuming tasks. Do you need any other?
Hey everyone, I'm having a problem; figured I'd put it in the xNormal thread since it's related...
I have a mesh I started in this newfangled sculpting app, Sculptris, and it's got crazy topology, so I retopologized a low poly version, unwrapped it, and now I want to get a displacement map from the highpoly version out of xNormal, so I can continue working in another sculpting app. First off, can xNormal even do that? I have been at this for days, scouring the internet, seeing people say "oh yeah, displacement maps? xNormal does that fine!" without giving so much as a clue as to how to do it...
Well I've only used xnormal to make normal maps and ao maps, but displacement is giving me so many problems. I have been using the height map, perhaps out of ignorance. Here is a screenshot of my predicament.
This height map result was obtained after bringing in a subdivided version of my lowpoly mesh (as the internet recommended) and this high poly, which is basically all triangles. Apart from the hand, there isn't too much difference in proportion, and there are no overlapping or inverted UVs.
Well I've only used xnormal to make normal maps and ao maps, but displacement is giving me so many problems.
Are you using the auto-normalise heights option? or the manual one?
Is your object more or less centered at (0,0,0) with a ResetXform or Freeze transforms applied?
Have you setup the cage in a way that covers completely the lowpoly mesh or are you using the Match UV feature?
I reset both Xforms, they are on top of each other nicely, I'm using the auto-normalize setting for heightmaps, and I wasn't using a cage... but I just made one using the Projection modifier in Max and pushed it so it covered the highpoly... the resulting xNormal height map didn't change at all.
I reset both Xforms, they are on top of each other nicely,
But be sure the models are near (0,0,0). It's VERY important, or the radius for the heights' auto-normalization won't be calculated properly.
I just made one using the Projection modifier in Max and pushed it so it covered the highpoly... the resulting xNormal height map didn't change at all...
Remember the object needs to be triangulated ( Edit Mesh ) before applying the Projection modifier. The SBM exporter cannot export quad-based cages.
Also be sure you have checked the "Export cage" option in the SBM exporter and the "Use cage" option in the corresponding lowpoly slot inside xNormal.
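The earlier warning about staying near (0,0,0) can be illustrated with a toy version of height auto-normalization, assuming (my guess at the mechanism, based on the advice above) that raw heights are divided by a bounding radius measured from the origin:

```python
import math

def auto_normalize_heights(heights, positions):
    """Toy auto-normalization: divide raw ray-hit heights by the mesh's
    bounding radius measured FROM THE ORIGIN. Translate the mesh far from
    (0,0,0) and the radius balloons, crushing every normalized height
    toward zero -- hence the advice to ResetXform first."""
    radius = max(math.sqrt(x * x + y * y + z * z) for x, y, z in positions) or 1.0
    return [h / radius for h in heights]
```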
Well, since I'm here... how do you render a VECTOR displacement map?
In the same way as a normal map, AO map or height map, except that you'll need an output image format with alpha support ( like TGA or TIFF... so RGB==xyz direction, alpha==height ).
I recommend using the MatchUV feature to get more precision ( which means the lowpoly==highpoly at subdiv 0, so the UVs match ) and to remove the need for a cage.
See the spiked ball example ( make the highpoly visible, it's hidden by default ).
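That RGB==direction, alpha==height layout can be written as a small encode/decode pair. My own illustration of the packing, where 'max_len' is an assumed fixed scale shared by writer and reader, not an xNormal parameter:

```python
import math

def encode_vector_displacement(dx, dy, dz, max_len):
    """Pack a displacement vector into RGBA: RGB is the unit direction
    remapped from [-1, 1] to [0, 1], alpha is the displacement length
    divided by an agreed maximum ('max_len', an assumption here)."""
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    if length == 0.0:
        return (0.5, 0.5, 0.5, 0.0)
    ux, uy, uz = dx / length, dy / length, dz / length
    return (ux * 0.5 + 0.5, uy * 0.5 + 0.5, uz * 0.5 + 0.5, length / max_len)

def decode_vector_displacement(r, g, b, a, max_len):
    """Invert the packing above."""
    length = a * max_len
    return ((r * 2.0 - 1.0) * length,
            (g * 2.0 - 1.0) * length,
            (b * 2.0 - 1.0) * length)
```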
So... yeah, I get that a vector displacement map has to be a TIFF or TGA... but I mean, how do you actually do it? I don't see a "vector displacement map" in the xNormal maps list.
SBM = simple binary mesh = the native format of xNormal. It has several advantages over the .OBJ format.
xNormal includes some exporters for Maya and 3dsmax.
I don't see a "vector displacement map" in the xNormal maps list
Direction map ( it was named that way because it can also be used to see distortions in the cage ).
I had to restart the development from zero one month ago. I've changed from wxWidgets to Qt... and it seems the 3.17 versions never end.
Wow, Growl ROCKS! I really like it!
Ok, I have added Growl support for the upcoming 3.17.2. I've placed three events currently:
1. Map rendering finished
2. Conemap finished
3. Simple AO tool finished
which are the most time-consuming tasks. Do you need any other?
Wow, I haven't looked back here since I didn't expect anything to come from my request so soon. That's awesome! I think those three are probably good enough for me; if someone else can think of something more, I'm sure they'll speak their mind. Really cool, nice one!
Tomorrow I'll probably unveil an R&D GPGPU project ( not related to xNormal, but definitely related to 3D, rendering and graphics ) I've been developing for some time... don't ask :poly124: Surprise, surprise. I'll create another thread for it and I hope it becomes as monstrous as this one with time.
It looks absolutely wonderful - greatly look forward to the standalone xn4!
In the meantime, the past few xN versions seem to be lacking Optix... was it removed? I was really enjoying it, but only for a couple of models before updating, and now I can't find it anymore... do I have to download the plugin separately? Or is it no longer available?
EDIT: I found a post on your blog that explains why I can't see it... but it's strange because I remember using it in a previous version. Still on XP with a single GPU... so I am confused.
In the meantime, the past few xN versions seem to be lacking Optix... was it removed? I was really enjoying it, but only for a couple of models before updating, and now I can't find it anymore... do I have to download the plugin separately? Or is it no longer available?
EDIT: I found a post on your blog that explains why I can't see it... but it's strange because I remember using it in a previous version. Still on XP with a single GPU... so I am confused.
Indeed, if you are under WinXP and you own only one GPU then Optix will be disabled. The reason: I simply cannot disable WinXP's watchdog.
You've two options to solve it:
1. Migrate to Vista/7, because I can disable the watchdog for those OSs.
2. If you want to stay with XP ( which I strongly discourage due to the lack of VRAM virtualization and the bad x64 and multicore support ), just buy an extra GPU, go to xN's plugin manager->Optix->Configure and check the "Ignore this device" option for the GPU connected to the monitor. The watchdog only activates if the desktop is attached to the GPU.
And yep, you've probably seen a 3.17 beta under WinXP with the Optix renderer enabled with only one GPU... but I bet WinXP's watchdog will abort the rendering process after 5s, forcing you to reboot via a nice BSOD or VPU recover :poly136:
And yep, you've probably seen a 3.17 beta under WinXP with the Optix renderer enabled with only one GPU... but I bet WinXP's watchdog will abort the rendering process after 5s, forcing you to reboot via a nice BSOD or VPU recover :poly136:
That must have been it! I never had any problems, though my XP install is a bit gutted. I guess I will just have to upgrade to Win7 soon. Thanks for the response; quite looking forward to the surprises and new features as always. Even if I cannot use them with my current system, they look amazing. :P
edit: I have some onboard video - is it possible to use that? It is the GeForce 6100 chipset... but with a graphics card in the PCI-E slot it does not even appear in my device manager, so it's probably not accessible by xN.
edit: I have some onboard video - is it possible to use that? It is the GeForce 6100 chipset... but with a graphics card in the PCI-E slot it does not even appear in my device manager, so it's probably not accessible by xN.
If it does not appear in the device list, I'm afraid it cannot be used ( it probably lacks some feature I need, like OpenGL 3.2 or some CUDA feature ).
Any ideas, people? I've been noticing this on a few of my models recently: the UV islands aren't exactly matching in Max when displaying textures in DX, while the Max-baked ones are much smoother. I hadn't noticed this in the past; or possibly I'm just being more observant now.
This is with 3.17.0.5088
I saw 3.17.1, but it looks like it's for unrelated bug fixes.
Replies
If you use as lowpoly model a cube with hard edges then xNormal 3.17.0 ***WON'T *** break the cage. It will use an averaged one by default ( that is, without discontinuities over the cage ).
However, if you need the cage's faces broken-by-smooth-group ( which is rare ) then you need to:
1. Load the 3D viewer
2. Show cage->Edit cage
3. click on the "Break" button
4. click on the "save meshes" button and answer "yes" to the auto-assign meshes.
( there's no need to select all the vertices... if no vertex is selected then the break operation will be performed over the whole mesh ).
Why you need to load the 3D viewer and why this should not be done "in one click without loading the 3D viewer"? Because some break/weld operations require interpolation. That interpolation can produce some undesired effects so that's why xN needs visual confirmation from the artist.
Honestly you should only ever need to go into the 3d editor, and manually set up a cage for very specific cases, not every model.
This really isn't the case, an artist does not need to set up the cage manually for every asset, and i'll say it again, this must be done for every small change to the mesh, which is very very *very* slow when you're dealing with test bakes, iterations and and change requests on an asset. I've been using xnormal for YEARS, and rarely have i ever needed to to visually see the cage. I'm sure some people do it this way, and the functionality is already there to do that, but the process of just loading in a mesh and baking should be as easy and painless as possible.
The default rendering method using xn will render broken edges. Using a cage is an entirely different process and can not be considered the "default behavior" as it requires a different workflow to use.
In comparison, load up maya, go to transfer maps, add your high and low, and you will get a correct, averaged-projection mesh created for you and assigned correctly automatically.
Load up maya, set up a RTT for your lowpoly, a cage will automatically be applied to your mesh and used.
In both of these apps you can do further tweaks until your heart is content, but they will *work* as soon as you assign your high and low meshes and hit render. This is not the case with XN.
New users have no idea you can even use xnormal to get a nice, averaged-cage result. They just think the default broken-edge method is how it works. This leads to lots them simply assuming "this is how it works". Which is a problem.
I'm not trying to break your balls here Santiago, you know how much i respect you and the work that you've done with XN. I'm simply looking at this from an objective viewpoint as someone who has been around the block more than a few times with all the major baking tools, and its clear to me that XN has the worst implementation when it comes to this problem.
Think that xn3 is the sum of TONS of patches... and it has really become huge and difficult to manage. There is also a lot of confusion about the uniform ray distances, cages, ray blockers and the MatchUV feature.
xn3 is a huge mess, I admit and I assume it. It's currently too complex and big to change some things
xn4 is a different thing because I have the opportunity to create all from zero ( == if you had the chance to change the things... which ones would you change? My answer: A lot :poly136: ).
Just some related things which are planned for xn4:
1. Kill the uniform ray distances. Why? Because I really don't like numeric/parametric things you cannot really visualise in an intuitive way.
2. All will be done visually using the averaged cage ( with the option of using the surface/geometry normals ), there won't be any break in the cage.
Good news: I found an algorithm to compute the cage automatically for the 99% of the cases. You won't ever have to move the slider ( although I will allow you to move it and also to edit the cage's faces/vertices manually )
3. Integrated viewer ( which btw, could be disabled to use less resources ) with on-screen manipulators ( to move vertices/faces, etc... ).
4. New load/save scene model instead of load-all-the-time one. You'll have an scene. You import things into the scene. The scene is loaded/saved completely without having to use external file references. Things will load much much much faster.
5. Auto-exploding. That's more a promise than a fact currently, but well.. :poly124:
6. Object lists ( yes, object enumeration in a list box so you can select things, etc... )
The point is that I could patch and patch and patch xn3 until I die... but I really prefer to assume xn3 is not perfect and concetrate in xn4 instead, so I can change the core things which were a bad idea in xn3 :poly142:
Correct me if I'm wrong, won't exploding my low poly, break up my smoothing groups and mess up the bake results? I'm baking a model that has all 1 smoothing group.
What do you mean explode seperate mesh chunks?
For example, I have a lot of characters that need to be continuous, lets say something like a army solider which has alot of straps etc...
If I tried to just use xnormal the raytrace wouldnt work cause there are too many distances to consider projecting a small part onto the whole mesh so I'd have to do it 1:1
Straps high to straps low etc... and take apart the low poly mesh.
Or is there another way?
If your straps are modeled as solid chunks welded to the character you really shouldn't have any problems, if its sloppy and just intersected into the mesh you'll have issues, but you should reconsider your topology at that point.
Things like straps that would hang separately, bags or other items should simply be separate mesh chunks, which are easy to explode, and if they aren't separate you really shouldn't have issues with intersecting or other problems like that.
Taking apart the low should never be an option(on a continuous mesh) as you will break your mesh normals and introduce all sorts of smoothing errors.
Unfortunately I checked all advices.
Actually I'm using 128-256 rays, 140 angle (checked 140-180), 3.17 xNormal, HP model exported normals (checked average too), bias 0,08 (checked 0,2), and still baked AO looks like this:
I think, there could be small difference between this problem in past and in present. Now i can't improve quality of AO, increasing subdivision of HP geometry. I think before I could make these postering more dense.
So I fixed the flipped normals and then reimported my high res OBJs and now I keep getting results like this:
Seems like rays are not hitting some areas on my high res, but no matter what I increase my frontal ray distance to I keep getting the same problem. My high res meshes look fine and are lined up perfectly with my low res. Any idea as to why this is happening?
Go into the 3d viewer and hide everything but the highpoly, then see if there isn't a lowpoly in there as well.
And spread agle==140 seems too much. 160 is fine.
Use the cosine distribution.
To remove banding you can try also to enable the jittering.
As MightyPea said, it looks as if you exported the lowpoly attached to the highpoly.
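For the curious, here's roughly why the cosine distribution plus jittering helps with banding. A hedged Python sketch of cosine-weighted hemisphere sampling (Malley's method), not xNormal's actual implementation:

```python
import math
import random

def cosine_weighted_dir(u1, u2):
    """Map two uniform samples in [0,1) to a cosine-weighted
    direction on the hemisphere around +Z (Malley's method):
    sample a disk, then project up onto the hemisphere."""
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi),
            r * math.sin(phi),
            math.sqrt(max(0.0, 1.0 - u1)))

def ao_sample_dirs(n_rays, jitter=True):
    """A fixed sample pattern produces the same ray fan at every
    pixel, so the quantization error lines up into visible bands.
    Jittering randomizes each sample within its stratum, turning
    the banding into (much less objectionable) noise."""
    dirs = []
    for i in range(n_rays):
        u1 = (i + (random.random() if jitter else 0.5)) / n_rays
        u2 = random.random() if jitter else (i * 0.618) % 1.0
        dirs.append(cosine_weighted_dir(u1, u2))
    return dirs
```

Cosine weighting also concentrates rays where they contribute most to the AO integral, which is why it usually looks cleaner than a uniform distribution at the same ray count.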
Every time, it says the number of vertices/topology doesn't match, even though it does.
I've even gone as far as converting it to an Edit Mesh so it's triangulated, exporting that, and it still doesn't work.
Anyone have a solution?
Be sure also that all of the lowpoly mesh's faces have UVs (do not delete UVs in your UV unwrap tool).
If you're performing a manual triangulation be sure your sequence is:
1. Triangulate the lowpoly mesh
2. Clone the lowpoly mesh.
3. Move vertices on the lowpoly mesh
4. Save the cloned mesh as external cage
and NOT
1. Clone the lowpoly mesh
2. Move the vertices
3. Triangulate
(because some programs use the face normals to decide the triangulated edges... so the topology won't match)
Avoid the max2obj exporter ( use the gw:Obj one if you need to output as .OBJ).... but I would save as .SBM to avoid problems.
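The diagonal-flip problem behind that ordering rule can be sketched like this. A toy Python model assuming an exporter that splits each quad along its shorter diagonal (the actual heuristic varies per program, face normals being another common choice):

```python
def triangulate_quad(verts):
    """verts: four (x, y, z) corners of a quad.
    Hypothetical triangulation rule: split along whichever
    diagonal is shorter. Returns two index triples."""
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    if dist2(verts[0], verts[2]) <= dist2(verts[1], verts[3]):
        return [(0, 1, 2), (0, 2, 3)]   # diagonal 0-2
    return [(0, 1, 3), (1, 2, 3)]       # diagonal 1-3

# Original lowpoly quad, and the same quad after a cage edit
# pushed one corner outward (as when inflating a cage).
flat_quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
edited    = [(0, 0, 0), (1, 0, 0), (1, 1, 2), (0, 1, 0)]
```

Triangulate first and both meshes inherit identical indices; move vertices first and `triangulate_quad(edited)` picks the other diagonal, so the cage's topology no longer matches the lowpoly and the "topology doesn't match" error appears.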
What I tried to do was convert my edit poly to edit mesh, then cloned it.
Then I used projection to make a cage and clicked export cage.
So I can't use the export cage option, I have to use a different model?
Basically just set up the stack in Max as follows:
+Projection
+Xform
+Edit Mesh
+Edit Poly
Export using SBM and import into xNormal and check use cage box.
Magic!
Oh man, thanks for the jittering tip. I should have figured it out by myself ^^ Well... now I have to rebake all my stuff again.
1024-2048? ;o I don't have an i7-980X :P
BTW: jogshy, please add a real (separate) ray length limit for AO. It's really annoying and makes using xNormal impossible in some cases.
This is something that I know Max can do, but I'm not sure if xNormal can.
In Max, let's say I have a complex object of some sort, say a gun with 3 parts.
Normally, for proper bakes, you would need to explode your mesh into those 3 chunks prior to exporting to xNormal, so they don't touch one another and their normals bake properly per chunk.
Now, in Max there is an option to assign a material ID to each section on both the highpoly and the lowpoly, and a bake option so the rays only hit the selected material ID.
So if chunk 1 lowpoly and chunk 1 highpoly both share the same material ID, then chunk 2 (low and high) and chunk 3 (low and high) will not affect it, even when the mesh is in its non-exploded form.
This is great for hard-edge stuff, but my question comes to this:
With ZBrush, if I'm sculpting something complex (an organic/hard-edge based thing), I would need to sculpt/export each chunk separately, bake each part one by one, and then combine all the maps in Photoshop for a proper normal map (though baking all the chunks/OBJs at once is fine for an AO map, since they're all intersecting each other properly).
Is there a way in xNormal to set things up so that a lowpoly chunk only affects the highpoly chunk with the same material ID, naming convention, or something in that manner?
I've been reading around and I'm not sure if this is something xNormal can do or not.
If this is possible I really would like to know; something like this would really help our workflow at our job.
If I'm not explaining this well, I can draw a quick diagram.
OpenCl support sooner than 4.0?
Speaking of which, I remember at one point you mentioned multiplatform support for 4.0. If it's not already planned, can you add support for rendering over the Internet? I have a server box at a different location running Ubuntu enterprise that I can steal some rendering power from when it's idle. Hell, maybe even make its address open for some users (if I can dedicate just one or two cores without affecting the rest of the system's processes). If I get a cheapo ATI card, since it can fit one, I could add OpenCL support to it as well.
xn4's renderer should operate over TCP/IP (agent + coordinator model), so you'll be able to render through the Internet, yep. In fact, I'm currently investigating the Amazon EC2 service.
Well, currently xn3 does not support MultiSubObj materials.
xn4 should support what you describe, plus the ability to render multiple UV chunks to different files in one pass. You'll also be able to write custom shaders, so you could output any information to multiple render targets, or define a heuristic to skip certain hits.
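The kind of hit-skipping heuristic discussed here could look roughly like this. A hypothetical Python sketch (not xn4's real shader API): each ray cast from a lowpoly chunk only accepts intersections with the highpoly chunk carrying the same ID.

```python
def first_valid_hit(low_chunk_id, hits):
    """hits: list of (distance, high_chunk_id) ray intersections,
    sorted nearest-first. Accept only hits whose highpoly chunk has
    the same ID as the lowpoly chunk that cast the ray, so chunk 2's
    highpoly can never 'pollute' chunk 1's bake even when the meshes
    overlap in their unexploded pose."""
    for dist, high_id in hits:
        if high_id == low_chunk_id:
            return dist, high_id
    return None  # the ray misses every matching chunk
```

With `hits = [(0.02, 2), (0.05, 1), (0.09, 3)]`, a ray from a chunk-1 lowpoly face skips the nearer chunk-2 intersection and lands on the chunk-1 highpoly instead, which is exactly what Max's material-ID filtering achieves.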
http://boards.polycount.net/showthread.php?t=72445&page=3
AFAIK, in 32-bit color mode on a DX10 GPU you should not have problems displaying normal maps. Using 16-bit/FP OpenEXR/TIFF can improve the appearance if the application can load 16-bit images (like Photoshop... for games, 16 bits is absolute overkill).
Fixed the TGA bug and the crash loading dotXSI/Collada files.
I have a request that might be a big parenthesis in the grand scheme of things, but I was doing a big batch render earlier and thought it would be nice to have Growl / Growl for Windows push-notification support, so that when everything is done a notification is pushed out and my phone gives me a little beep about it.
Not essential, but useful in an office environment, where I often set something rendering and step away to discuss the latest issue with a co-worker.
Interesting, thx! I'll take a look into Growl, it's a new thing to me :poly136:
Wow, Growl ROCKS! I really like it!
Ok, I have added Growl support for the upcoming 3.17.2. I've placed three events currently:
1. Map rendering finished
2. Conemap finished
3. Simple AO tool finished
which are the tasks that consume the most time. Do you need any others?
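For anyone curious how Growl integration works under the hood: its network side (GNTP) is plain text over TCP port 23053. A rough Python sketch (the header names follow the GNTP spec, but the surrounding details are simplified; real setups also need a prior REGISTER request and, for remote hosts, password-based authorization):

```python
import socket

def build_gntp_notify(title, text, app="xNormal",
                      event="Map rendering finished"):
    """Assemble a minimal unauthenticated GNTP/1.0 NOTIFY request."""
    return (
        "GNTP/1.0 NOTIFY NONE\r\n"
        f"Application-Name: {app}\r\n"
        f"Notification-Name: {event}\r\n"
        f"Notification-Title: {title}\r\n"
        f"Notification-Text: {text}\r\n"
        "\r\n"
    )

def send_gntp(msg, host="localhost", port=23053):
    """Growl for Windows listens on TCP 23053 by default;
    returns the server's plain-text response."""
    with socket.create_connection((host, port), timeout=2) as s:
        s.sendall(msg.encode("utf-8"))
        return s.recv(1024).decode("utf-8", "replace")
```

Forwarding from the desktop to a phone is then handled by Growl itself (or a bridge service), not by the sending application.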
I have a mesh I started in this newfangled sculpting app, Sculptris, and it's got crazy topology, so I retopologized a lowpoly version, unwrapped it, and now I want to get a displacement map from the highpoly version out of xNormal, so I can continue working in another sculpting app. First off, can xNormal even do that? I have been at this for days, scouring the Internet, seeing people say "oh yeah, displacement maps? xNormal does that fine!" without giving so much as a clue as to how to do it...
Well, I've only used xNormal to make normal maps and AO maps, but displacement is giving me lots of problems. I have been using the height map, perhaps out of ignorance. Here is a screenshot of my predicament.
This height map result was obtained after bringing in a subdivided version of my lowpoly mesh (as the internet recommended) and this highpoly, which is basically all triangles. Apart from the hand, there isn't too much difference in proportion, and there are no overlapping or inverted UVs.
Can someone help me out?
Is your object more or less centered at (0,0,0), with a ResetXForm or Freeze Transforms applied?
Have you set up the cage so that it completely covers the lowpoly mesh, or are you using the Match UV feature?
Just gotta up the contrast a bit more and paint out the hands, and I'm good to go!
Remember the object needs to be triangulated (Edit Mesh) before applying the Projection modifier. The SBM exporter cannot export quad-based cages.
Also make sure you have checked "Export cage" in the SBM exporter and "Use cage" in the corresponding lowpoly slot inside xNormal.
In the same way as a normal map, AO map, or height map, with the exception that you'll need an output image format with alpha support (like TGA or TIFF... so RGB == xyz direction, alpha == height).
I recommend using the MatchUV feature to get more precision (which means the lowpoly == the highpoly at subdiv level 0, so the UVs match) and to remove the need for a cage.
See the spiked ball example ( make the highpoly visible, it's hidden by default ).
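Given the "RGB == xyz direction, alpha == height" layout described above, applying one texel of such a map to a vertex might look like this. A toy Python sketch (the exact channel encoding and remap are an assumption, not xNormal's documented format):

```python
def apply_vector_displacement(pos, texel, scale=1.0):
    """pos: (x, y, z) base vertex position.
    texel: (r, g, b, a) channels in [0, 1]; RGB is assumed to encode
    the displacement direction (remapped from [-1, 1]) and alpha the
    displacement height, per the layout described above."""
    r, g, b, a = texel
    # Undo the usual [0, 1] -> [-1, 1] remap on the direction channels.
    direction = (2.0 * r - 1.0, 2.0 * g - 1.0, 2.0 * b - 1.0)
    height = a * scale
    return tuple(p + d * height for p, d in zip(pos, direction))
```

Unlike a scalar height map, the direction channels let the displacement push sideways as well as outward, which is what makes overhangs reproducible.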
So... yeah, I get vector displace, it has to be a TIFF or TGA... but I mean, how do you actually do it? I don't see a "vector displacement map" in the xNormal maps list.
xNormal includes some exporters for Maya and 3dsmax.
The direction map (it was named that way because it can also be used to see distortions in the cage).
http://www.polycount.com/forum/showthread.php?t=73997
As xn4 was using a lot of OpenCL, I used that to improve my OpenCL skills :poly124:
In the meantime, the past few xN versions seem to be lacking Optix... has it been removed? I was really enjoying it, but only got to use it for a couple of models before updating, and now I can't find it anymore... Do I have to download the plugin separately? Or is it no longer available?
EDIT: I found a post on your blog that explains why I can't see it... but it's strange, because I remember using it in a previous version. I'm still on XP with a single GPU... so I am confused.
You've got two options to solve it:
1. Migrate to Vista/7, because I can disable the watchdog on those OSs.
2. If you want to stay with XP (which I strongly discourage due to the lack of VRAM virtualization and the bad x64 and multicore support), just buy an extra GPU, go to xN's plugin manager -> Optix -> Configure, and check the "Ignore this device" option for the GPU connected to the monitor. The watchdog only activates if the desktop is attached to the GPU.
And yep, you've probably seen a 3.17 beta under WinXP with the Optix renderer enabled with only one GPU... but I bet the WinXP watchdog will abort the rendering process after 5s, forcing you to reboot via a nice BSOD or VPU recover :poly136:
edit: I have some onboard video; is it possible to use that? It's the GeForce 6100 chipset... but with a graphics card in the PCI-E slot it doesn't even appear in my Device Manager, so it's probably not accessible by xN.
This is with 3.17.0.5088
I saw 3.17.1, but it looks like it's for unrelated bug fixes.