I hear you can open the 3D viewer and force a weld on the cage to solve the issue, but as EQ said... every time?
If you use 3dsmax, use the SBM exporter with the projection modifier.
If not, enter the 3D viewer, adjust the cage and Save meshes... not every time... just each time you modify the mesh's topology.
Newest drivers for Quadro fx 1800 are 191.87
I have Win7.
I'm afraid then you need to wait until NVIDIA updates the drivers to 197.XX or above. The 191.X does not include the required features for CUDA-OpenGL interop.
You should tessellate your lowpoly model a bit, beveling those corners.
Another option is to break the cage so hard edges are rendered better.
I'm not sure about ray fails, as you don't provide a wireframe-and-ray-fails map, nor the cage setup/distances.
If you use 3dsmax, use the SBM exporter with the projection modifier.
If not, enter the 3D viewer, adjust the cage and Save meshes... not every time... just each time you modify the mesh's topology.
I don't mean to be a dick but it's still a bit fiddly to use the 3d editor. Do you think in future versions there can be a tool to do this in the list of source meshes, or somewhere like that?
Although I guess with your new interface the mesh will always show, so it won't be as much time waiting for the viewer to start.
I don't mean to be a dick but it's still a bit fiddly to use the 3d editor.
The xn3 UI grew without any control. The 3D viewer was initially designed just to preview normal maps on a model... but it had to be patched... and patched... and patched... and patched again... and again... until it turned into a monster that's very hard to manage and use.
Do you think in future versions there can be a tool to do this in the list of source meshes, or somewhere like that?
xn4 will include vertex/cage/models manipulators, a tree with the scene objects, etc...
Hey Jogshy! Thanks, as always, for continuing development on this tool...
I've finally started using the Optix renderer and it's fantastic. It's very pleasing to have one's maps render so quickly. But I'm wondering... will there be any documentation on the settings for ambient occlusion?
The old tutorial appears quite dated. With the settings now I can only really fiddle a small amount and either get some sort of AO bake, or a pure white texture. I don't really understand what's going on.
I've finally started using the Optix renderer and it's fantastic. It's very pleasing to have one's maps render so quickly. But I'm wondering... will there be any documentation on the settings for ambient occlusion?
Well, actually it's an alpha version. A lot of features aren't available yet and it's quite buggy. After the implementation is complete it should work exactly like the CPU-path AO, and then I can add the final documentation.
Documenting it in its alpha state is a bit laborious because everything can change in two days.
1. There is no adaptive sampling now. I removed it because it was causing bad quality. So now it's equivalent to using a very low threshold (like 0.0000001).
2. There is no weighting option. If you use the cosine distribution it will be weighted automatically. If you use the uniform distribution there won't be any weighting.
3. Now you can control the occluded/unoccluded colors to adjust the contrast of the AO.
If you're getting a too-white AO:
1. Use the uniform distribution
2. Turn off the attenuation (set the coefficients to 1.0/0.0/0.0).
3. Use a big spread angle (e.g., 178 degrees).
4. Set the occluded color to pure black (rgb 0,0,0) and unoccluded color to pure white (rgb 255,255,255)
5. Turn off the ray distance limitation.
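For anyone curious what the distribution choice in point 2 and the color pair in point 3 actually mean, here is a minimal sketch (not xNormal's code, just the standard textbook math) of hemisphere sampling and the occluded/unoccluded blend:

```python
import math

def sample_hemisphere(u1, u2, cosine_weighted):
    """Map two uniform numbers in [0,1) to a direction on the unit
    hemisphere around +Z (the surface normal).

    cosine_weighted=True clusters rays toward the normal; each ray
    then already carries its cos(theta) importance, which is why the
    cosine distribution needs no extra weighting option. The uniform
    distribution spreads rays evenly over the hemisphere instead."""
    phi = 2.0 * math.pi * u2
    cos_t = math.sqrt(u1) if cosine_weighted else u1
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

def ao_value(hits, total, occluded=0.0, unoccluded=1.0):
    """Point 3 above: the final AO is just a blend between the
    occluded and unoccluded colors by the fraction of blocked rays."""
    frac = hits / total
    return occluded * frac + unoccluded * (1.0 - frac)
```

With `occluded=0.0` and `unoccluded=1.0` a fully unblocked texel comes out pure white and a fully blocked one pure black, which is exactly the max-contrast setup in step 4.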
I've got a little problem with baking the base texture:
When I'm baking from 2 separate objects it gives 2 colors,
but when I bake from 2 separate objects and one of them has 2 different materials, I can't bake it into 3 colors. Is there any chance to bake colors from separate materials? It's easier in the high poly to just assign a different material rather than make a separate object.
But when I bake from 2 separate objects and one of them has 2 different materials, I can't bake it into 3 colors. Is there any chance to bake colors from separate materials?
xn3 does not support MultiSubObj materials, so you must assign one texture for each mesh file.
What you can do (and what I have generally done with xNormal) is create a texture with all of your colors on it, UV your cage mesh (just a very quick planar map, nothing complicated), make selections as you normally would to apply the different materials, but move/scale them to fit into the corresponding color cell in your texture. Then just load that texture onto your mesh and do a diffuse bake.
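A minimal sketch of that color-grid texture, if you'd rather generate it than paint it. The 2x2 layout, cell size, and colors below are just placeholders (any image tool works equally well); it writes a plain uncompressed 24-bit TGA, one flat color per material cell:

```python
import struct

def write_color_grid_tga(path, cell_colors, cell_size=64):
    """Write an uncompressed true-color TGA laid out as a grid of
    flat color cells, one per material. Move/scale each material's
    UVs into its cell, then diffuse-bake as described above."""
    cols = 2                          # illustrative 2-column grid
    rows = (len(cell_colors) + cols - 1) // cols
    w, h = cols * cell_size, rows * cell_size
    # 18-byte TGA header; image type 2 = uncompressed true-color.
    # Row order doesn't matter here since every cell is a flat color.
    header = struct.pack("<3B 2H B 4H 2B",
                         0, 0, 2,     # no ID, no colormap, type 2
                         0, 0, 0,     # colormap spec (unused)
                         0, 0, w, h,  # origin and size
                         24, 0)       # bits per pixel, descriptor
    pixels = bytearray()
    for y in range(h):
        for x in range(w):
            idx = (y // cell_size) * cols + (x // cell_size)
            r, g, b = cell_colors[idx % len(cell_colors)]
            pixels += bytes((b, g, r))   # TGA stores BGR
    with open(path, "wb") as f:
        f.write(header + pixels)
```

Usage: `write_color_grid_tga("matcolors.tga", [(255, 0, 0), (0, 255, 0), (0, 0, 255)])` gives you three flat color cells to park UV islands in.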
Is there, or will there be, a way to retain the high polygon meshes in memory after a bake is done so it doesn't have to re-load them all after just a few minor tweaks? This would be pretty handy, with a manual way of flushing the memory at the click of a button.
Is there, or will there be, a way to retain the high polygon meshes in memory after a bake is done so it doesn't have to re-load them all after just a few minor tweaks? This would be pretty handy, with a manual way of flushing the memory at the click of a button.
I totally agree that'd be nice; sometimes you just do very small tweaks and it has to load them all again.
Another thing I wondered about: is there a way to batch render a bunch of models? So I import my lowpoly, let's name it low_01.obj, import all highpolies that belong to it, and use that as a preset. Then I load low_02.obj and assign another bunch of models, and do that with every part of the lowpoly model, connecting it to its highpolies.
Set up low_01.obj to render its stuff to texturename_01.tga, low_02.obj to render into texturename_02.tga and so on, hit render, and go do something else.
It would be a huge timesaver, as I normally never bake a whole object at once and always break it into smaller chunks that don't intersect, pretty much like what exploding would do.
Is there, or will there be, a way to retain the high polygon meshes in memory after a bake is done so it doesn't have to re-load them all after just a few minor tweaks? This would be pretty handy, with a manual way of flushing the memory at the click of a button.
Currently that would require too many changes for xn3. Saving all as .SBM should help to load faster.
For xn4 I'm adopting import-> "load/save scene" policy instead.
I just loaded a low poly model and a high poly one, trying to render a normal map.
Parameters are the default ones.
The first time it works, but the second time it creates an error!
After that, creating a cavity map from the normal map works every time; there's no problem.
xn3 does not support MultiSubObj materials, so you must assign one texture for each mesh file.
xn4 will support MultiSubObj mats.
thnx man,
So till xn4 I'll use select-by-material and separate exports. Maybe a little slower, but I still like xNormal. Too bad I can't use the color bake from Maya 7, but it seems it doesn't match the normal/occlusion bakes from xNormal.
1. Are you using the 3.16.13 or the 3.17 beta? Are you using the Optix map renderer or the Default Bucket Renderer?
2. Are xNormal or your mesh files placed in a folder with Kanji characters? I have not tested that very well, so perhaps it's due to that. Please try with ASCII folders to see what happens.
Hi guys, I'm new to Max and I just want to ask: how am I going to export my mesh into xNormal? I mean, what are the right settings and what format? Can you please direct me to a tutorial about it, because I can't find any. Thanks!
Hi guys, I'm new to Max and I just want to ask: how am I going to export my mesh into xNormal? I mean, what are the right settings and what format? Can you please direct me to a tutorial about it, because I can't find any. Thanks!
You can either export your mesh as a .sbm file or (my preferred method) as a .obj.
I'm attempting to create a standardised art pipeline where my intention is to use Xnormal to bake my normal maps from high resolution meshes created in ZBrush. Xnormal typically works faster, crashes less and produces better results than when baking maps in Maya.
I have recently run into a problem where Xnormal produces a distortion in the normal map which is hard to see on the normal map itself, but shows up when viewing it on the model inside Maya:
As you can see there are dark areas appearing on the upper part of the objects, these areas do not appear when baking the normal map inside Maya. I have tried any combination of normal settings for the high and low resolution meshes in Xnormal with no better result. I'm using Xnormal version 3.17.
Somehow I "fixed" this problem. The image I showed earlier was a testbake from a corner piece of a book - I had the whole highpoly book but only a low poly corner. When I tried to bake the whole low poly book, it turned out perfect!
Could the problem be that I only baked an offset piece of the highpoly?
This is great because with MEL script I can render pictures in my workspace and do everything I want, running other programs via the system command.
Does anyone know if something like this exists for xNormal too? A library of DOS commands for xNormal?
Here are some comparison screenshots of the normal maps baked in xnormal and Maya. I'm wondering if anyone else has encountered the same problem and found a way to fix it.
Xnormal doesn't seem to sufficiently compensate for the averaged normals on the low polygon model, which creates shading distortion when the normal map is applied to the model in Maya.
Any chance to bake a base light? For example, like a scene I prepared in Maya? Or a simple ambient at 80% plus a light above 20%? It sometimes helps a lot when you mix it a bit with the diffuse color.
On the other hand: any chance to set up the Maya/Modo renderer so its bakes would match the xNormal bakes? From what I tried, those bakes don't work together.
That's one of the new things I wanna add for xn4, yep.
For xn3 it's almost impossible without changing the complete program.
Any chance to set up the Maya/Modo renderer so its bakes would match the xNormal bakes? From what I tried, those bakes don't work together.
I don't know about Modo, but Maya should work: if you export the models from Maya using the SBM format (with the export-tangent-basis option enabled), Maya's tangent basis will be used (yep, I'm exporting Maya's vectors directly).
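For anyone wondering why the exported tangent basis matters: the renderer has to rebuild the same per-vertex tangent frame the baker used, or the tangent-space map shades wrong. A common textbook per-triangle tangent computation (not necessarily the exact math Maya or xNormal uses) from positions and UVs looks like this:

```python
def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    """Solve the triangle's edge/UV-delta system for the tangent:
    the direction in 3D space along which U increases. Different
    apps average, orthonormalize, and split these per-vertex in
    different ways, which is why baker and renderer must agree."""
    e1 = [p1[i] - p0[i] for i in range(3)]
    e2 = [p2[i] - p0[i] for i in range(3)]
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    det = du1 * dv2 - du2 * dv1
    if det == 0.0:                  # degenerate UVs: pick a fallback
        return (1.0, 0.0, 0.0)
    r = 1.0 / det
    t = [(dv2 * e1[i] - dv1 * e2[i]) * r for i in range(3)]
    n = sum(c * c for c in t) ** 0.5
    return tuple(c / n for c in t)
```

Two apps can both run this exact math and still disagree once they average across shared vertices or flip handedness, which is exactly the mismatch the SBM tangent-basis export avoids.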
Yep, for both ray tracing and dual parameterization methods.
Floating point precision is also supported using OpenEXR/TIFF FP/Raw/HDR formats.
See the spiked ball example; I used that to perform DX10 tessellation with bent spikes/mushrooms/ears without having to perform Catmull/Loop subdivision.
Hi, I'm curious about one thing. In xNormal 3.15 I didn't use a cage in the Low Definition Mesh options, and I only corrected the length of the rays.
In 3.16.12 RC2, if I didn't check the cage, I had bugs on the normal map.
Now I installed 3.17.0 beta 5 and I'm using these same models and options, and if I check the cage I get bugs (other than the previously mentioned ones), which disappear when I uncheck the cage.
Now I installed 3.17.0 beta 5 and I'm using these same models and options, and if I check the cage I get bugs (other than the previously mentioned ones), which disappear when I uncheck the cage.
Have you set up your cage in a way that the highpoly model is completely covered?
Have you rendered a "wireframe and ray fails map" to see if there are rays failing?
Are you using the old 3dsmax max2obj exporter? Are you exporting the meshes as .SBM directly? Have you performed a triangulation + ResetXForm/Freeze transforms before exporting?
I didn't touch the cage, and I didn't perform (on purpose) a triangulation or ResetXForm/Freeze transforms.
The HP isn't completely covered.
I didn't render a "wireframe and ray fails map".
I am not using export selected from 3ds max 2009 and mudbox 2009.
I don't have a problem with that, just curious why once I should check cage in xNormal, and another time I don't have to.
Hi everybody, I'm a new xNormal user. Great tool! I have a quick question: is there a tutorial for generating a radiosity normal map, i.e. how to change the light setup, number of lights in the scene, etc.? Because right now, when I bake an RNM, it looks like a big AO map.
How to change the light setup, number of lights in the scene, etc.? Because right now, when I bake an RNM, it looks like a big AO map.
Yep, well... the default settings for the RNM perhaps look a bit dark, so you should play a bit with the contrast parameter.
Number of lights: unfortunately (all is relative!) xNormal only supports one light, so...
Light setup: there is no built-in support in xNormal to preview RNM maps (that's one of the things on the TODO list)... So you'll need Hammer or another Valve editor. Sorry, I don't know any other program to preview RNM maps (perhaps the 8monkey Labs Toolbag?).
On the other hand, I've not tested the RNM maps very much ... perhaps there are some bugs there.
My xNormal keeps crashing every time I try to bake an AO map. I'm trying to get a 1024*1024; my PC is good enough to not crash (Core i7, 6GB RAM, etc.), but it just keeps crashing my comp. Anyone got any ideas?
My xNormal keeps crashing every time I try to bake an AO map. I'm trying to get a 1024*1024; my PC is good enough to not crash (Core i7, 6GB RAM, etc.), but it just keeps crashing my comp. Anyone got any ideas?
Can you render an AO map for the Smiley example without problems?
Why in xNormal 3.17 don't I have the antialiasing threshold, ambient occlusion adaptive threshold, adaptive interval, or enable-weighting options? I read about them in the tutorial:
Why in xNormal 3.17 don't I have the antialiasing threshold, ambient occlusion adaptive threshold, adaptive interval, or enable-weighting options? I read about them in the tutorial:
Thanks for fast answer.
What's up with the antialiasing threshold? It's written there that it can improve quality. Though I'm not complaining about any quality problems (oh, maybe the padding sometimes looks terrible, but that's probably my fault), if something could be better, why not try?
My xNormal keeps crashing every time I try to bake an AO map. I'm trying to get a 1024*1024; my PC is good enough to not crash (Core i7, 6GB RAM, etc.), but it just keeps crashing my comp. Anyone got any ideas?
When you say crash, do you mean BSOD, hard reset or freezing?
When using the CUDA renderer I was getting what I thought was crashing, as during rendering everything would become unresponsive.
I would go through the process of resetting the PC and trying again with the same results, but later found out that it was not actually crashing: if I left it to render, after a few minutes everything would go back to normal.
I'm guessing the GPU is using such a massive amount of processing that Windows isn't redrawing at a normal speed, which looks like everything has frozen.
Is this information from the Polycount Wiki still applicable?
Unfortunately xNormal splits the cage wherever the model has hard edges, causing ray misses in the bake. You can fix the hard-edge split problem, but it involves an overly complex workflow.
Not really. The newest versions use an averaged cage by default.
But yep, you have the option to break/weld the cage manually inside the 3D viewer.
Unless you can set up a cage in one click, the above statement still applies.
You can fix the hard edge split problem but it involves an overly complex workflow.
I would still say that this is accurate, if the same workflow to create and save cages remains. By default the cage may be averaged, but using a cage is far from default behavior for the app. So, by default you get the same broken edges from hard-edge splits, unless you load the viewer, adjust the cage, save, make sure everything is loaded correctly, and rinse and repeat for any change, no matter how small, to your mesh... It's the same discussion again, unless I've missed some new features?
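To illustrate what an averaged cage buys you over a split one, here is a rough sketch (illustrative only, not xNormal's actual implementation): normals of vertices that share a position are averaged into one push direction, so the cage stays welded across hard edges instead of splitting along the shading normals:

```python
def averaged_cage(positions, normals, distance):
    """Build cage vertices by pushing each vertex out along the
    average normal of every vertex sharing its position. Split
    vertices on a hard edge get the same push direction, so the
    cage has no gaps for rays to escape through."""
    groups = {}
    for pos, n in zip(positions, normals):
        key = tuple(round(c, 6) for c in pos)   # weld by position
        groups.setdefault(key, [0.0, 0.0, 0.0])
        for i in range(3):
            groups[key][i] += n[i]
    cage = []
    for pos in positions:
        avg = groups[tuple(round(c, 6) for c in pos)]
        length = sum(c * c for c in avg) ** 0.5 or 1.0
        cage.append(tuple(pos[i] + avg[i] / length * distance
                          for i in range(3)))
    return cage
```

With per-vertex (unaveraged) normals, the two copies of a hard-edge vertex would be pushed apart and open a seam in the cage, which is exactly the ray-miss problem described above.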
Replies
About the batch rendering: you can save your settings as multiple XML files (for example set1.xml, set2.xml, etc.) to a folder, for example to G:\myWork.
Then, from the command line, run each of them (PS: it's g:\myWork, but I couldn't write the path above because the forum turns it into smileys...).
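Since the exact command line got eaten by the forum, here is a rough sketch of how such a batch could be driven from Python. The install path and the settings-file-as-argument invocation are assumptions on my part; check `xNormal.exe -?` for the real switches:

```python
import subprocess  # only needed if you actually launch the bakes

# Assumed install path - point this at your own xNormal.exe.
XNORMAL = r"C:\Program Files\xNormal\x64\xNormal.exe"

def batch_commands(settings_files):
    """Build one command line per saved settings file
    (set1.xml, set2.xml, ...), one bake per file."""
    return [[XNORMAL, path] for path in settings_files]

# To actually render the whole batch unattended, uncomment:
# for cmd in batch_commands([r"G:\myWork\set1.xml", r"G:\myWork\set2.xml"]):
#     subprocess.run(cmd, check=True)
```

Each settings XML pairs one lowpoly with its highpolies and output name, so the loop reproduces the low_01/low_02 workflow described above without touching the UI.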
What's that... my computer is new and has a good configuration...
It happened when I went to render my normal map a second time.
Then I need to shut down the program and launch it again to render a second time T_T.
About the details of my config:
■ Intel Core i7 860 (quad core / 2.80GHz rated / up to 3.46GHz with Turbo Boost / 8MB L3 cache / Hyper-Threading)
■ NVIDIA GeForce GTX 285 video card (1GB / PCI Express 2.0)
■ 4GB memory (DDR3 SDRAM PC3-10600 / dual channel)
■ 350GB HDD (SATA II)
■ DVD super multi drive
■ Intel P55 Express chipset ATX motherboard
■ Windows 7 Home Premium 64-bit preinstalled
For some reason the generated normal map gets waaaay too intense. Does it have something to do with the ray distance?
How do I fix that?
If you're new to xNormal, try this link:
http://www.antonkozlov.com/Tutorials/xNormal.html
Are you using a tangent-space normal map or an object-space one?
What output format are you using?
And what xNormal version are you using, please?
http://www.renderingsystems.com/downloads.php
1. Open a DOS console.
2. Type this: xNormal.exe -?
But it's currently limited to the simple AO tool and map rendering.
Yeah, basically I need to do that to get the help first:
Haha! I see you caught me over there:
http://eat3d.com/forum/questions-and-feedback/batch#comment-1009146
Then let's continue the conversation at the source!
And thank you for the comments!
Normal map comparison:
- Vertex color support for dotXSI files
- Improved AO quality
- Max/Maya 2011 support
- Tons of bugs corrected
Anyone know why this is happening?
Thanks!
Martin
http://donaldphan.com/tutorials/xnormal/xnormal_occ.html
The "enable weighting" was an AO misconception. I was not taking into consideration the probability functions that the hemisphere sampling should use.
Btw, I've just uploaded the final 3.17.0.
Do you have any HDD activity during your crash?