I'm sure this is completely my fault, since I don't really have any experience exporting, but could someone help explain why I get http://i5.tinypic.com/14ipx95.jpg whenever I try to test xNormal's 3D model viewer? Tested with both 3ds and obj, on both a head mesh of mine and a default box.
1) You have not marked the meshes as "visible". Go to the highpoly / lowpoly tab and make sure the "visible" checkbox is checked ( it is by default tho )
2) You are trying to view a void file. For example, the file contains cameras, lights, etc... but no geometry at all. Note too that some programs that use NURBS ( for example Rhinoceros ) export the data as NURBS and not as quad/triangle meshes. You need to convert the NURBS to meshes.
3) The highpoly model must contain vertex positions and optionally vertex normals. The lowpoly model must contain vertex positions and UVs, and optionally normals.
4) It is possible that the meshes you are trying to import contain too many degenerate faces ( zero-area faces because one index is repeated ) or tons of duplicated vertices/faces. xNormal removes these, so after cleaning the meshes the triangle set can end up void.
5) Don't trust the max2obj exporter in 3dsmax. It is really old and problematic.
6) The 3ds importer is a bit rough atm. I am working to improve it.
Since all this can sound too complicated, feel free to send your problematic model to granthill76 [at] yahoo.com and I will take a look if you want!
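The cleanup in point 4 can be sketched like this ( a minimal illustration in Python, not xNormal's actual code ):

```python
# Sketch of the cleanup described in point 4: a face is degenerate when one
# of its three indices repeats (zero area), and exact duplicate faces are
# dropped too. Note that the sorted-tuple key also treats a reversed-winding
# copy of a face as a duplicate -- a simplification for this sketch.

def clean_triangles(faces):
    """faces: list of (i0, i1, i2) vertex-index triples; returns the survivors."""
    seen = set()
    cleaned = []
    for f in faces:
        if len(set(f)) < 3:        # a repeated index => zero-area face
            continue
        key = tuple(sorted(f))     # same face in any order counts as a dup
        if key in seen:
            continue
        seen.add(key)
        cleaned.append(f)
    return cleaned
```

If nothing survives this filtering, the triangle set really is void, which is exactly the error described above.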
This time I corrected some bugs, added an ASE importer, an x64 version ( bye bye memory limits ) and shadows in the 3D viewer, optimized the graphics driver a lot, improved the documentation, added automatic rendering ( without user intervention, using the command line ) and other minor things!
[ QUOTE ]
New shadows look awesome, I just tested it
Thanks Santy!
[/ QUOTE ]
Thx! However they have some problems... They aren't adaptive ( if you move the light far away the shadows degrade a lot ), they need manual biasing, and they can be slow... If you have performance problems with shadows, try setting the size to 512x512 or 256x256 in the graphics driver configuration dialog ( in the plugins manager )
I was trying to implement shadows without bias and with area/penumbra... but they only ran well on an X1900/GF7800 with SM3.0, so I implemented another method so they can run on a modest Radeon 9700... Perhaps when everybody has SM3.0 cards or above I can bring them back hehe... And yep yep, MoP, I destroyed your artwork again haha. I should find some environment models and textures for that "pink" room
Btw, has anyone tested the new x64 version? I tried it on Windows Vista Beta 2 x64 with 4Gb of RAM to see if I could avoid the 3Gb memory limit of 32-bit Windows XP, and apparently it worked. If you have any feedback on the x64 build I will be very pleased, thx!
I was really amped to check this out but I get the following error when trying to install...
--
This installation package is not supported by this processor type. Contact your product vendor.
--
Any fixes for this, or am I just fubar'd? I have an error log also if that'd help.
[ QUOTE ]
I was really amped to check this out but I get the following error when trying to install...
--
This installation package is not supported by this processor type. Contact your product vendor.
--
Any fixes for this, or am I just fubar'd? I have an error log also if that'd help.
[/ QUOTE ]
That's probably because you downloaded the x64 version and you're trying to install it on 32-bit Windows.
If you are using Windows XP/Vista 32-bit you need to download and install xNormal_3_7_2_win32.zip
If you are using Windows XP/Vista 64-bit you need to download and install xNormal_3_7_2_win64.zip
( Or I messed up the versions when I uploaded them! ) Hope this helps.
Solved some bugs, optimized/improved the compatibility of the graphics driver ( mainly for the new NVIDIA instrumented drivers ), and re-enabled dual-core and hyperthreading support. You can get a 200% speed increase if you have a Pentium 4 Extreme Edition, Pentium D, Athlon64 X2 or Intel Core Duo CPU, so I highly recommend you download this new patch
I'm having trouble with a massive file. It loads to about 90% and then errors out. One time it said the mesh was void, though it will usually just say "Error loading the models: Can't import Wavefront OBJ file ****". The file is around 520 mb and 6 million triangles.
I'm on an Athlon X2 3800+ with 2 gigs of RAM and a 7600. I'm wondering if more RAM would even help in this case; I seem to remember asking you this before and you saying Windows wouldn't even allocate it right.
Also a minor annoyance: the new way you select file formats (export image, import mesh, etc.) bothers me because you have the word xNormal before all the formats, so I can't hit Tab and then just hit T on the keyboard for Targa or A for Alias Wavefront OBJ, for example.
[ QUOTE ]
The file is around 520 mb and 6 million triangles
[/ QUOTE ]
You can try exporting only the vertex positions for the highpoly model. The normals and texture coordinates won't be used, so you can save tons and tons of space.
Also, if you can, set the decimal precision to 3 digits when exporting the OBJ. Some programs set it to 6 by default, which puts excessive data into the file.
What kind of file do you use? OBJ? What program did you use to export it? Modo, 3dsmax, Maya7, Blender?
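For OBJ specifically, stripping a highpoly file down like that can be scripted in a few lines ( a rough Python sketch, not part of xNormal; it assumes plain v/vt/vn/f statements ):

```python
# Hedged sketch of the advice above: keep only what the highpoly importer
# actually uses (positions + faces), round positions to 3 decimal digits,
# and drop the "vn"/"vt" lines entirely.

def slim_obj(lines):
    out = []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            # round the x y z coordinates to 3 decimals
            coords = ["%.3f" % float(c) for c in parts[1:4]]
            out.append("v " + " ".join(coords))
        elif parts[0] == "f":
            # keep only the vertex index of each "v/vt/vn" corner
            corners = [p.split("/")[0] for p in parts[1:]]
            out.append("f " + " ".join(corners))
        # "vn", "vt", comments etc. are dropped
    return out
```

On a 520 mb OBJ, dropping normals/UVs and halving the digit count shrinks the file dramatically before xNormal ever sees it.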
[ QUOTE ]
Also a minor annoyance: the new way you select file formats (export image, import mesh, etc.) bothers me because you have the word xNormal before all the formats, so I can't hit Tab and then just hit T on the keyboard for Targa or A for Alias Wavefront OBJ, for example.
[/ QUOTE ]
Yep yep, I was already thinking about that. Probably in the next version I will remove all those xNormal prefixes.
[ QUOTE ]
More RAM would work if you installed the 64bit version of Windows.
[/ QUOTE ]
Absolutely. With a 64-bit OS you can use much, much more memory. 32 bits is no longer enough because Windows can only manage 2Gb of RAM per process ( 3Gb with a nasty trick, or 8/16Gb for the server versions )
To finish... do you see the RAM indicator on the left going down fast? You can also check the xNormal_debugLog.txt file and see how much memory the program is consuming for the "acceleration" structures.
I will make some tests with very high polygon meshes to see if I can improve this a bit
ps: I detected that in Windows 2000 some of the UI controls are not painted well ( image crops, bad autosize in file labels, etc... ). I'm working to solve all this too. Thx for the feedback
TBH EQ if your model is that poly-heavy, you can probably just drop a sub-div level or two in Zbrush, bake out a heightmap, and use that as an overlaid "fine detail" map. Should be able to bypass any problems with uber-dense meshes that way.
Or use masking in ZBrush to split up the mesh into useable chunks, just mask it across places where seams won't be obvious.
Either that or you're being wasteful with your polys
Changed the plugin file filters so you can select them quickly with the keyboard as suggested, changed some internal structures to allow bigger meshes while consuming less memory, and corrected some bugs under Windows 2000.
Thanks a lot for this. We're hoping to switch to xNormal completely here at 8monkey Labs for all of our normal mapping. One of the big things holding us back was problems generating maps for high-poly meshes. As EarthQuake previously mentioned, he was trying to generate a map from a 6-7 million poly head I gave him and it would fail. I just got it to work with 3.7.4. w00t.
We're interested in developing an exporter for our Marmoset Engine mesh file format. Any comments before we jump in?
Keep up the great work. We really appreciate the quick updates, too.
One thing I forgot to mention: ever since I've been using this dual-core system, calculating ambocc maps takes 100% of both cores on my CPU. Which means I can't do anything else while rendering ambocc maps, not even listen to mp3s in Winamp without it skipping. I don't seem to recall this happening in earlier versions, or possibly it just didn't happen on my single-core system.
Oh, I guess I should mention that when I try to use that high-poly mesh, xNormal gives me an error: "Unexpected error. Can't construct highpoly raytracer"
Then I have to switch to the "memory conservative" raycaster and it works. I have 2GB of RAM, which seems like enough, and when it's calculating stuff, I still have half my RAM left. So it doesn't seem like a RAM issue, but switching raycasters does fix the problem.
[ QUOTE ]
We're interested in developing an exporter for our Marmoset Engine mesh file format. Any comments before we jump in?
[/ QUOTE ]
In theory it will be easy to write a mesh importer for xNormal using its C++ SDK and Visual Studio. You just need to implement the IMeshImporter class and output a Geometry with the positions and texture coords. Optionally you can also output the vertex normals, tangent basis and cage information. You can include other vertex data too, like skinning info or vertex colors, using the OtherVertexData property, and other triangle data using the OtherTriangleData property, so you can re-export that information after calculating the normal maps if you need to. See the SDK documentation for more info.
If you have any doubts just send me an email and I will try to answer you asap. I am trying to integrate a 3D engine with xNormal atm, so any comments or feedback will be appreciated.
[ QUOTE ]
Then I have to switch to the "memory conservative" raycaster and it work
[/ QUOTE ]
About the error with high polygon meshes: version 3.7.4 is optimized to manage big meshes better. If you get errors with the fast raytracer, try the memory-conservative one. The fast one can use up to 20 times the memory of the conservative one!!! See the free RAM indicator on the left and the xN_debugLog.txt file to know how much memory is used.
Probably, to use really big meshes, you should consider a 64-bit operating system with more than 2Gb of RAM ( for example Windows Vista Beta 2 x64 or Windows XP Professional x64 Edition )
Take into consideration too that only the >>free<< RAM counts, not the installed RAM.
Windows can, in fact, take like 500Mb of memory at startup if you have a few applications, antivirus, tools, etc. installed... Try disabling some services and closing unwanted applications to reduce the memory used.
Another way to reduce memory consumption is to launch xNormal as a "command line" application instead of in "user mode". Just set the settings, save them and then launch xNormal.exe [mySettingsFile]. For example, if your settings are called reallyBigMeshes.xml, launch xNormal.exe c:\test\reallyBigMeshes.xml
On the other hand, I really don't know the mesh limits of xNormal yet. With my poor computer I can't manage meshes with more than 650k polys in 3DSMAX, so I can't really test hehe...
[ QUOTE ]
Ever since i've been using this dual core system here calculating ambocc maps will take 100% of both cores on my cpu.
[/ QUOTE ]
Well, atm I'm using a feature of Windows NT called "thread pool / worker threads". I can't really control the priority of the threads because this is done completely by the operating system ( all the threads use "normal" priority ). In theory Windows should distribute all the tasks well, but in practice only very few OS routines are optimized for the new multicore CPUs.
Btw, are you using a Hyperthreading CPU or a multi-core one? I heard HT can be really annoying sometimes... Also, if you are using Windows with a dual-core CPU there is a patch that solves some problems: see http://forum.notebookreview.com/showthread.php?t=60416 and http://www.presence-pc.com/forum/ppc/Har...jet-20896-1.htm ( use Babelfish or something to translate ), and also http://www.amdzone.com/files/WinXPdualcorehotfix.exe and http://www.hardforum.com/showthread.php?t=983781
As a temporary "patch" you can do this... Launch xNormal and begin rendering the maps... Then press CTRL+ALT+DEL and select xNormal.exe. Then right-click and manually downgrade its priority from normal to below normal.
In a future version I will try to improve this a bit more, but I need to investigate a bit more and get a new multi-core CPU to test properly hehe!
It seems like with the raycasters you could bake things other than normals into a low poly object. Like, say, any texture? So could you bake diffuse textures from a high to low poly? Because that is Teh Future. Have you thought about doing this?
[ QUOTE ]
It seems like with the raycasters you could bake things other than normals into a low poly object. Like, say, any texture? So could you bake diffuse textures from a high to low poly? Because that is Teh Future. Have you thought about doing this?
[/ QUOTE ]
Yep yep, I could calculate lighting ( lightmaps + PRT ) using the highpoly and then bake the illumination into the lowpoly... but 3DSMAX/Maya/XSI/Lightwave can do this much better than I could afford ( because they use über shadows and radiosity and I'm gimp! ).
PRT has tons of problems atm... I think at Siggraph 2006 a new method based on wavelets will be presented, but it is still very experimental.
Atm I am using a modified version of xNormal in a 3D engine to bake the lightmaps too hehe! ( called xEditor :P ) but it is still not finished!
I think he's referring more to baking diffuse maps exported from ZBrush, with their funky auto-mapping applied to a highres model, onto the lowres model to use as a base for texture work. We're using max to do this currently, but it would be great to get everything set up in one program.
[ QUOTE ]
I think he's referring more to baking diffuse maps exported from ZBrush, with their funky auto-mapping applied to a highres model, onto the lowres model to use as a base for texture work. We're using max to do this currently, but it would be great to get everything set up in one program.
[/ QUOTE ]
So you apply a base texture to the highpoly and it gets baked to the lowpoly, or...??? Or diffuse texture = base * diffuse lighting? I don't understand the concept hehehe
yeah, if the highpoly mesh has a base texture on it, and the lowpoly mesh just has some good UV-coords, then somehow the raycaster samples the pixel colour on the highpoly and bakes that down to the lowpoly UVs, with or without the ambient occlusion (could be an option - fullbright, or with AO).
kinda like what 3dsmax7/8 do with Render to Texture - they can take all surface information (diffuse, spec, whatever) from the highpoly mesh and bake it all down to the lowpoly.
Yeah, the concept is that you sample the diffuse (or whatever texture) directly, with no lighting, and bake that to the low poly. It's the same idea as normal maps, but instead of a normal you sample the texel the high poly's UVs point at, and render that to the low poly.
So what you can do is create a fully textured and modeled film-quality high-poly object. Then you model a low poly and bake everything. It's awesome because you always have the highest quality there and if you ever need to reuse the content for a sequel or better engine, you just model a better low poly and press 'bake'. Plus your low poly object will likely look better.
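In raycaster terms the transfer itself is tiny. A toy Python sketch of the colour lookup ( illustrative only: nearest-neighbour sampling, no filtering, helper names made up ):

```python
# The raycaster gives you: the highpoly triangle that was hit, the hit's
# barycentric weights, and the highpoly UVs at the triangle's corners.
# Interpolate the UV at the hit, then fetch the highpoly base texture.

def sample_nearest(texture, u, v):
    """texture: 2D list of texels indexed [row][col]; u, v in [0, 1]."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

def bake_hit(texture, tri_uvs, bary):
    """tri_uvs: highpoly UVs at the 3 corners; bary: barycentric weights."""
    u = sum(b * uv[0] for b, uv in zip(bary, tri_uvs))
    v = sum(b * uv[1] for b, uv in zip(bary, tri_uvs))
    return sample_nearest(texture, u, v)
```

The returned colour is written into the lowpoly texel that cast the ray, exactly like a normal-map bake but transferring colour instead of normals.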
3ds max and Microwave are the only apps that I know do this well. We ran screaming from 3ds max years ago and Microwave is out for Maya, but we're currently running screaming from that, too. So it'd be great if xNormal could be our savior.
Ok, I will investigate that for the upcoming versions. The only problem I see is that I'm not supporting materials in the mesh importers... so I bet I should implement that to achieve this, or am I wrong? Also I'm not sure if it will be worth the effort next to great render-to-texture programs like Maya, XSI or 3dsmax... Also, I can reveal I'm preparing a new tool... a free 3D engine with an editor. This editor can bake radiosity, shadows and everything xNormal can atm... In 30 years I will finish it
However, any volunteers to send me some nice example to test this a bit, please? I'm lazy and my programmer art sux haha
[ QUOTE ]
Just got an error; apparently you only support textures up to 2048? I'm trying to use a 4096x1024 in the 3d preview and it won't let me.
[/ QUOTE ]
Yep, atm the max texture is 2048x2048. That's because some old ATI cards only support 2k x 2k max and I need to maintain compatibility with all the cards, sorry. Damn Earthquake, you always wanna pass the limits hahah!
Perhaps I should change this and query the real maximum available texture size instead of setting a hard-coded limit... yep!
No, no need to support materials. I just want to load my high-poly, choose a targa texture, load my low-poly, and hit bake. I feel it would be a great addition to your app because those other "great render-to-texture programs" aren't as great as you think they are. Plus the fact that xNormal is free is a big help.
I can send a test for you, no prob. I mean it's up to you, I don't want to pressure you. It's just that you seem to be very competent and on the ball with xNormal. I just want to help make it awesome.
[ QUOTE ]
No, no need to support materials. I just want to load my high-poly, choose a targa texture, load my low-poly, and hit bake.
[/ QUOTE ]
Ok! So I just need to add a "select your highpoly texture" option in the high polygon model menu, good! The texture can be really big if you want... like 32k x 32k... I am already implementing a virtual memory manager to handle things from disk.
[ QUOTE ]
I can send a test for you, no prob.
[/ QUOTE ]
Sure, much thanks, all the help is welcome!
[ QUOTE ]
I mean it's up to you, I don't want to pressure you.
[/ QUOTE ]
Beeep beeep, I'm a bot! I can tell you this is going to be implemented in a few hours, np! Super easy if no multi-materials are needed in the highpoly model.
Wow, awesome. This will be much appreciated. By the way, my zip file is being uploaded as I speak. For some reason the connection is getting slower and slower.... I'm using FileFront, maybe it's that.
Hey, Jeff is working on a mesh importer but having problems still. Right now he's looking for a way to read Unicode text to debug the log output. Unicode seems to be a pain. Any suggestions?
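Unicode logs usually just need the right decode. One robust approach, sketched in Python ( assuming the log carries a byte-order mark, which is typical for Windows Unicode text; check the actual bytes of xNormal_debugLog.txt to be sure, and the helper names here are made up ):

```python
# Sniff the BOM to pick the decoder, falling back to UTF-8 for plain text.
import codecs

def decode_log_bytes(raw):
    """Decode raw log bytes by looking at the byte-order mark."""
    if raw.startswith(codecs.BOM_UTF16_LE) or raw.startswith(codecs.BOM_UTF16_BE):
        return raw.decode("utf-16")        # the utf-16 codec honours the BOM
    if raw.startswith(codecs.BOM_UTF8):
        return raw.decode("utf-8-sig")     # strips the UTF-8 BOM
    return raw.decode("utf-8")             # plain ASCII / UTF-8 fallback

def read_log(path):
    with open(path, "rb") as f:
        return decode_log_bytes(f.read())
```

The same BOM-sniffing idea works in C++ with a few byte comparisons at the start of the file.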
Jeff is getting an error when trying to generate a normal map with our custom mesh importer. He loaded in the custom mesh file and once he tries to generate a normal map, it loads the high poly, then the low, then spits out an error about the mesh set being void. The mesh set isn't void, though.
[ QUOTE ]
Jeff is getting an error when trying to generate a normal map with our custom mesh importer. He loaded in the custom mesh file and once he tries to generate a normal map, it loads the high poly, then the low, then spits out an error about the mesh set being void. The mesh set isn't void, though.
[/ QUOTE ]
Sounds like you forgot to call theGeometry->CalculateAABBAndRadio() at the end, so the mesh set is "void" because it doesn't occupy any space ( the radius will be 0.0f if you don't call this )
Yes, I know I missed this in the documentation hehe, will solve it asap
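For anyone hitting the same thing: conceptually, CalculateAABBAndRadio() just computes the mesh's axis-aligned bounding box and bounding-sphere radius. A rough Python sketch of what it must do ( my reading of the note above, not the real SDK code ):

```python
# Compute the AABB and a bounding-sphere radius around its center.
# If this never runs, the radius stays 0.0 and the mesh "occupies no
# space" -- hence the misleading "void mesh set" error.
import math

def calculate_aabb_and_radius(positions):
    """positions: list of (x, y, z) tuples; returns (lo, hi, radius)."""
    if not positions:
        return None, None, 0.0
    lo = [min(p[i] for p in positions) for i in range(3)]
    hi = [max(p[i] for p in positions) for i in range(3)]
    center = [(lo[i] + hi[i]) * 0.5 for i in range(3)]
    radius = max(math.dist(center, p) for p in positions)
    return lo, hi, radius
```

An importer would call this once, after all vertices are loaded.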
Ok, so here's the deal with our custom mesh importer. It seems to work now if we don't load our own tangents and bi-tangents, but it doesn't work if we do try to load our own. Jeff made sure that the length is correct as you recommended, but it's still not working. So he's assuming it's some sort of bug now. Also, we might be having problems with scaling thanks to how Maya handles units for its own OBJ format (duh). Would it be hard to add a scale parameter for the high and low poly? This would also help solve the problems you have with objects that are less than 1 unit in size.
[ QUOTE ]
It seems to work now if we don't load our own tangents and bi-tangents, but it doesn't work if we do try to load our own. So he's assuming that it's some sort of bug now.
[/ QUOTE ]
Let me see if I have any error here... but I'm importing tangents/binormals in the OVB format in the acid and wall examples and it appears to work... but anything is possible, so don't discard a possible bug in xNormal; I need extra fingers to count the millions of lines now
[ QUOTE ]
Also, we might be having problems with scaling thanks to how Maya handles units for its own OBJ format (duh). Would it be hard to add a scale parameter for the high and low poly? This would also help solve the problems you have with objects that are less than 1 unit in size.
[/ QUOTE ]
I was thinking about this too... Perhaps I should autoscale small objects internally... Meanwhile, you can scale your data manually in the custom importer, or implement the "Configure" plugin feature to show a dialog and ask for the scale ( I've seen some importers doing this, I don't remember where... perhaps in 3dsmax or in the FarCry editor? )
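The autoscale idea could look something like this ( a hypothetical sketch, not a current xNormal feature; the function and its parameters are invented for illustration ):

```python
# If the mesh's largest AABB extent is below some threshold, uniformly
# rescale it to a comfortable working size and return the factor, so the
# importer could scale results back afterwards.

def autoscale(positions, target=100.0, threshold=1.0):
    """positions: list of (x, y, z); returns (scaled_positions, factor)."""
    lo = [min(p[i] for p in positions) for i in range(3)]
    hi = [max(p[i] for p in positions) for i in range(3)]
    extent = max(hi[i] - lo[i] for i in range(3))
    if extent == 0.0 or extent >= threshold:
        return positions, 1.0            # big enough, leave it untouched
    factor = target / extent
    scaled = [tuple(c * factor for c in p) for p in positions]
    return scaled, factor
```

This is exactly the kind of thing a "Configure" dialog could expose: the threshold and the target size as user-editable scale parameters.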
Btw, I sent Jeff the source code of the new SBM format that I'm implementing in 3.8.0 ( a binary mesh format that is superfast to load/save ) which includes tangents and binormals in the same way you need.
ps: btw, I detected a bug loading OBJ files with vertex positions + normals but without texture coordinates. It will be solved in a few days when I release 3.8.0. I just discovered it playing with the new projective diffuse texture baking hehe
Okie! I have the base projective texture thing working. Thx torncanvas for the internal test model ( I'm not allowed to use it publicly as an example because it is a model from their unfinished game ). Now I am looking for some example to include in the program to show this feature... any volunteers please?
Thank you for implementing that feature. It is greatly appreciated. We can begin switching our pipeline over to texturing the high-poly assets thanks to you.
We might have a less important asset somewhere we could donate as an example. I'll see if I can dig one up. Meanwhile someone else could donate something.
Implemented the "bake highpoly base texture on the lowpoly model" feature, re-enabled the COLLADA importer, added two new amazing examples, fixed tons of bugs ( also I'm sure tons of new bugs were added ), made drawing the highpoly model in the 3D viewer 3x faster, implemented a new virtual memory system capable of managing really big mesh assets, reduced memory consumption 2x, improved dual-core support, added the new SBM binary mesh format for superfast loading and less disk space, changed a few things to use UNICODE files and settings, revamped the acid example, and some other minor changes.
I really want to thank Kevin "Ironbearxl" George ( http://ironbearxl.deviantart.com/ ) for his wonderful Sylia example ( the girl )! The xN thingy is just my ugly programmer's art, made with zero skills in ZBrush 2, to show the new bake feature
Here is the new preview window with the "texture bake" new feature:
Replies
As usual, download it at http://www.santyesprogramadorynografista.net/projects.aspx and the changes list at http://www.santyesprogramadorynografista.net/archives/xNormal_changes.txt
The blog is at http://santyhammer.blogspot.com
Feel free to test, comment, blame or whatever
thx
[ QUOTE ]
...added ASE importer...
[/ QUOTE ]
Yay. Much better format to work with when using Blender 3D.
Try keeping the task manager open and look at the memory usage of the program.
Actually, watching it, I don't think it's a RAM problem at all. It only gets up to about half usage.
We're interested in developing an exporter for our Marmoset Engine mesh file format. Any comments before we jump in?
[/ QUOTE ]
In theory will be easy to do a mesh importer for xNormal using its C++ SDK and Visual Studio. You just need to implement the IMeshImporter class and output a Geometry with the positions and texture coords. Optionally you can output too the vertex normals, tangent basis and cage information. You can include too other vertex data like the skinning info, vertex colors using the OtherVertexData property and other triangle data using the OtherTriangleData property, so you could re-export the information after calculating the normal maps if you need. See the SDK documentation for more info.
If you have any doubt just send me an email and I will try to answer you asap. I am trying to integrate a 3D engine with xNormal atm , so any comment or feedback will be appreciated.
[ QUOTE ]
Then I have to switch to the "memory conservative" raycaster and it work
[/ QUOTE ]
About the error with high polygon meshes: version 3.7.4 is optimized to handle big meshes better. If you get errors with the fast raytracer, you can try the memory-conservative one instead. The fast one can use up to 20 times the memory of the conservative one! See the free RAM indicator on the left and the xN_debugLog.txt file to see how much memory is being used.
To work with really big meshes you should probably consider a 64-bit operating system with more than 2Gb of RAM ( for example Windows Vista Beta 2 x64 or Windows XP Professional x64 Edition ).
Take into account too that only the >>free<< RAM counts; the installed RAM doesn't really matter.
Windows can, in fact, take like 500Mb of memory at startup if you have a few applications, antivirus, tools, etc. installed... Try disabling some services and closing unwanted applications to reduce the memory used.
Another way to reduce memory consumption is to launch xNormal as a "command line" application instead of in "user mode". Just configure the settings, save them and then launch xNormal.exe [mySettingsFile]. For example, if your settings file is called reallyBigMeshes.xml, launch xNormal.exe c:\test\reallyBigMeshes.xml
On the other hand, I really don't know the mesh limits of xNormal yet. With my poor computer I can't manage meshes with more than 650k polygons in 3dsmax, so I can't really test hehe...
[ QUOTE ]
Ever since I've been using this dual-core system, calculating ambocc maps takes 100% of both cores on my CPU.
[/ QUOTE ]
Well, atm I'm using a Windows NT feature called "thread pools / worker threads". I can't really control the priority of the threads because that is handled completely by the operating system ( all the threads use "normal" priority ). In theory Windows should distribute the work well, but in practice only very few OS routines are optimized for the new multi-core CPUs.
Btw, are you using a HyperThreading CPU or a multi-core one? I've heard HT can be really annoying sometimes... Also, if you are using Windows with a dual-core CPU there is a patch that solves some problems...
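For illustration, here is a rough sketch (not xNormal's actual code, and using modern std::thread rather than the Win32 thread pool API the text describes) of why a bake saturates both cores: each worker thread processes a contiguous band of texture rows, and the OS scheduler decides which core runs each thread.

```cpp
#include <functional>
#include <thread>
#include <vector>

// Stand-in for raycasting one row of the output map.
void ProcessRows(std::vector<int>& rows, int first, int last) {
    for (int y = first; y < last; ++y)
        rows[y] = y * 2;
}

// Split the rows into one contiguous band per thread, run them all,
// then wait for every band to finish. Thread priority is left at
// "normal", exactly as the post describes, so the scheduler is free
// to load every core to 100%.
void RenderParallel(std::vector<int>& rows, unsigned threadCount) {
    std::vector<std::thread> pool;
    int per = (int)rows.size() / (int)threadCount;
    for (unsigned i = 0; i < threadCount; ++i) {
        int first = (int)i * per;
        int last  = (i + 1 == threadCount) ? (int)rows.size() : first + per;
        pool.emplace_back(ProcessRows, std::ref(rows), first, last);
    }
    for (auto& t : pool) t.join();
}
```
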
See http://forum.notebookreview.com/showthread.php?t=60416
and
http://www.presence-pc.com/forum/ppc/Har...jet-20896-1.htm
( use Babelfish or something to translate )
And also
http://www.amdzone.com/files/WinXPdualcorehotfix.exe
http://www.hardforum.com/showthread.php?t=983781
As a temporary "patch" you can do this: launch xNormal and start rendering the maps... then press CTRL+ALT+DEL and select xNormal.exe. Right-click it and manually lower its priority from normal to below normal.
In a future version I will try to improve this a bit more, but I need to investigate further and get a new multi-core CPU to test it properly hehe!
thx for the feedback!
[ QUOTE ]
It seems like with the raycasters you could bake things other than normals into a low poly object. Like, say, any texture? So could you bake diffuse textures from a high to low poly? Because that is Teh Future. Have you thought about doing this?
[/ QUOTE ]
Yep, yep, I could calculate lighting ( lightmaps + PRT ) using the highpoly and then bake the illumination into the lowpoly... but 3dsmax/Maya/XSI/Lightwave can do this much better than I could afford ( because they use über shadows and radiosity and I'm a gimp! ).
PRT has tons of problems atm... I think a new method based on wavelets will be presented at Siggraph 2006, but it is still very experimental.
Atm I am using a modified version of xNormal ( called xEditor :P ) in a 3D engine to bake the lightmaps too hehe! But it is still not finished!
[ QUOTE ]
I think he's referring more to baking diffuse maps exported from ZBrush, with their funky auto-mapping applied to a highres model, onto the lowres model to use as a base for texture work. We're using Max to do this currently, but it would be great to get everything set up in one program.
[/ QUOTE ]
So you apply a base texture to the highpoly and it gets baked onto the lowpoly, or...? Or diffuse texture = base * diffuse lighting? I don't understand the concept hehehe
Kinda like what 3dsmax 7/8 does with Render To Texture - it can take all the surface information (diffuse, spec, whatever) from the highpoly mesh and bake it all down to the lowpoly.
That'd kick ass if it was possible!
So what you can do is create a fully textured and modeled film-quality high-poly object. Then you model a low poly and bake everything. It's awesome because you always have the highest quality available, and if you ever need to reuse the content for a sequel or a better engine, you just model a better low poly and press 'bake'. Plus your low poly object will likely look better.
3ds max and Microwave are the only apps I know that do this well. We ran screaming from 3ds max years ago, and Microwave is out for Maya, but we're currently running screaming from that too. So it'd be great if xNormal could be our savior.
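The core of the bake described above is simple once the ray casting is done: a ray from a lowpoly texel hits a highpoly triangle, the hit's barycentric coordinates interpolate the highpoly UVs, and the highpoly texture is sampled there. Here is a minimal sketch of just those two steps; the ray casting itself is omitted, and the names are illustrative, not xNormal's API.

```cpp
#include <vector>

struct Vec2 { float u, v; };

// Interpolate the hit highpoly triangle's UVs at barycentric
// coordinates (w0, w1, w2), which sum to 1 at the hit point.
Vec2 InterpolateUV(Vec2 uv0, Vec2 uv1, Vec2 uv2,
                   float w0, float w1, float w2) {
    return { w0 * uv0.u + w1 * uv1.u + w2 * uv2.u,
             w0 * uv0.v + w1 * uv1.v + w2 * uv2.v };
}

// Nearest-texel sample from a width x height single-channel texture;
// the sampled value is then written into the lowpoly's output map.
float SampleTexture(const std::vector<float>& texels,
                    int width, int height, Vec2 uv) {
    int x = (int)(uv.u * (width  - 1) + 0.5f);
    int y = (int)(uv.v * (height - 1) + 0.5f);
    return texels[y * width + x];
}
```

Repeating this for every texel of the lowpoly's UV layout transfers the whole highpoly texture down, which is exactly the "press bake" workflow being requested.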
However, any volunteers to send me a nice example to test this with, please? I'm lazy and my programmer art sux haha
[ QUOTE ]
Just got an error; apparently you only support textures up to 2048? I'm trying to use a 4096x1024 in the 3D preview and it won't let me.
[/ QUOTE ]
Yep, atm the max texture size is 2048x2048. That's because some old ATI cards only support 2k x 2k max and I need to maintain compatibility with all the cards, sorry. Damn, EarthQuake, you always want to push the limits hahah!
Perhaps I should change this and query the real maximum texture size available instead of setting a hard-coded limit... yep!
[ QUOTE ]
I can send a test to you, no prob. I mean, it's up to you, I don't want to pressure you. It's just that you seem to be very competent and on the ball with xNormal. I just want to help make it awesome.
No, no need to support materials. I just want to load my high-poly, choose a Targa texture, load my low-poly, and hit bake.
[/ QUOTE ]
Ok! So I just need to add a "select your highpoly texture" option to the high polygon model menu, good! The texture can be really big if you want... like 32k x 32k... I am already implementing a virtual memory manager to stream things from disk.
[ QUOTE ]
I can send a test for you, no prob.
[/ QUOTE ]
Sure, thanks a lot, all help is welcome!
[ QUOTE ]
I mean it's up to you, I don't want to pressure you.
[/ QUOTE ]
Beeep beeep, I'r a bot! I can tell you this is going to be implemented in a few hours, np! Super easy if no multi-materials are needed in the highpoly model.
[ QUOTE ]
Hey, Jeff is working on a mesh importer but is still having problems. Right now he's looking for a way to read Unicode text to debug the log output. Unicode seems to be a pain. Any suggestions?
Jeff is getting an error when trying to generate a normal map with our custom mesh importer. He loads the custom mesh file, and once he tries to generate a normal map it loads the high poly, then the low, then spits out an error about the mesh set being void. The mesh set isn't void, though.
[/ QUOTE ]
Sounds like you forgot to call theGeometry->CalculateAABBAndRadio() at the end, so the mesh set is considered "void" because it doesn't occupy any space ( the radius will be 0.0f if you don't call it ).
Yes, I know I missed this in the documentation hehe, I will fix it asap.
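To see why skipping that call makes the mesh look "void", here is a sketch of what a CalculateAABBAndRadio()-style pass has to do (the real SDK method's internals aren't shown here, so this is only an illustration): scan the vertex positions, build the axis-aligned bounding box, and derive a bounding radius from it. If this never runs, the radius stays 0.0f and the mesh occupies no space.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct Bounds {
    Vec3  min, max;   // axis-aligned bounding box
    float radius;     // bounding-sphere radius; 0.0f means "void"
};

Bounds CalculateAABBAndRadius(const std::vector<Vec3>& positions) {
    Bounds b = { {0,0,0}, {0,0,0}, 0.0f };
    if (positions.empty()) return b;          // no geometry -> radius stays 0
    b.min = b.max = positions[0];
    for (const Vec3& p : positions) {
        b.min.x = std::min(b.min.x, p.x);  b.max.x = std::max(b.max.x, p.x);
        b.min.y = std::min(b.min.y, p.y);  b.max.y = std::max(b.max.y, p.y);
        b.min.z = std::min(b.min.z, p.z);  b.max.z = std::max(b.max.z, p.z);
    }
    float dx = b.max.x - b.min.x;
    float dy = b.max.y - b.min.y;
    float dz = b.max.z - b.min.z;
    // Half the AABB diagonal bounds every vertex from the box center.
    b.radius = 0.5f * std::sqrt(dx*dx + dy*dy + dz*dz);
    return b;
}
```
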
I sent Jeff some info by e-mail, hope it helps.
[ QUOTE ]
It seems to work now if we don't load our own tangents and bitangents, but it doesn't work if we try to load our own. So he's assuming it's some sort of bug now.
[/ QUOTE ]
Let me see if I have any error here... I'm importing tangents/binormals in the OVB format in the acid and wall examples and it appears to work... but anything is possible, so don't rule out a bug in xNormal. I need extra fingers to count the millions of lines now
[ QUOTE ]
Also, we might be having problems with scaling thanks to how Maya handles units for its own OBJ format (duh). Would it be hard to add a scale parameter for the high and low poly? This would also help solve the problems you have with objects that are less than 1 unit in size.
[/ QUOTE ]
I was thinking about this too... Perhaps I should autoscale small objects internally... Meanwhile you could scale your data manually in the custom importer, or implement the "Configure" plugin feature to show a dialog and ask for the scale ( I've seen some importers doing this, I don't remember where... perhaps in 3dsmax or in the Far Cry editor? )
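The manual workaround is a one-liner inside the custom importer: apply a uniform scale to the positions before handing the geometry to xNormal. A minimal sketch, where the names are illustrative (not the SDK's) and the factor would in practice come from a "Configure" dialog:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Uniformly scale all vertex positions, e.g. to lift a sub-unit Maya
// export into a range xNormal's raycasters handle comfortably.
void ScalePositions(std::vector<Vec3>& positions, float factor) {
    for (Vec3& p : positions) {
        p.x *= factor;
        p.y *= factor;
        p.z *= factor;
    }
}
```

Note the scale must be applied identically to both the highpoly and the lowpoly, or the cage/ray distances will no longer match.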
Btw, I sent Jeff the source code of the new SBM format that I'm implementing for 3.8.0 ( a binary mesh format that is super fast to load/save ), which includes tangents and binormals in the same way you need.
ps: btw, I detected a bug loading OBJ files with vertex positions + normals but without texture coordinates. It will be fixed in a few days when I release 3.8.0. I just discovered it playing with the new projective diffuse texture baking hehe
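The real SBM layout isn't documented here, so the following is only an illustration of why a binary mesh format is so much faster than OBJ: the vertex arrays are written as raw blocks and read back with a single memcpy each, instead of parsing text line by line.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

struct Vec3 { float x, y, z; };

// Serialize: a 32-bit count followed by the raw vertex block.
// (A real format would add a magic number, version, and the other
// arrays: UVs, normals, tangents, binormals, indices...)
std::vector<uint8_t> SaveBinary(const std::vector<Vec3>& verts) {
    uint32_t n = (uint32_t)verts.size();
    std::vector<uint8_t> out(sizeof(n) + n * sizeof(Vec3));
    std::memcpy(out.data(), &n, sizeof(n));
    std::memcpy(out.data() + sizeof(n), verts.data(), n * sizeof(Vec3));
    return out;
}

// Deserialize with two memcpy calls - no text parsing at all.
std::vector<Vec3> LoadBinary(const std::vector<uint8_t>& in) {
    uint32_t n = 0;
    std::memcpy(&n, in.data(), sizeof(n));
    std::vector<Vec3> verts(n);
    std::memcpy(verts.data(), in.data() + sizeof(n), n * sizeof(Vec3));
    return verts;
}
```
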
Thanks a lot for your help so far, we'll figure this out soon I'm sure.
We might have a less important asset somewhere that we could donate as an example. I'll see if I can dig one up. Meanwhile, maybe someone else could donate something.
We look forward to your release.
Finally I decided to make another really ugly programmer's-art example to show the new feature, mwaaaaa!
See ya soon.
Implemented the "bake highpoly base texture on the lowpoly model" feature, re-enabled the COLLADA importer, added two new amazing examples, fixed tons of bugs ( and I'm sure tons of new bugs were added ), made drawing the highpoly model in the 3D viewer 3x faster, implemented a new virtual memory system capable of managing really big mesh assets, reduced memory consumption 2x, improved dual-core support, added the new SBM binary mesh format for super-fast loading that also saves disk space, changed a few things to use UNICODE files and settings, revamped the acid example, and some other minor changes.
I really want to thank Kevin "Ironbearxl" George ( http://ironbearxl.deviantart.com/ ) for his wonderful Sylia example (the girl)! The xN thingy is just my ugly programmer's art, made with zero skills in ZBrush 2, to show the new bake feature
Here is the new preview window with the new "texture bake" feature:
As usual you can download it at
http://www.santyesprogramadorynografista.net/projects.aspx
Change list at
http://www.santyesprogramadorynografista.net/archives/xNormal_changes.txt
blog at
http://santyhammer.blogspot.com
Hope you like it; any comments and feedback will be appreciated, thx!
[ QUOTE ]
Great! I'll be trying this out with our importer when I come back in the morning to see how things go.
[/ QUOTE ]
Remember, if you made a plugin using the C++ SDK you will probably need to recompile it because some internal structures changed!