How long should it take to bake a <1.5mil tri model onto a 400tri one? I left it on for a few hours but I'm pretty sure it wasn't doing anything, it just stopped half way through.
[ QUOTE ]
How long should it take to bake a <1.5mil tri model onto a 400tri one? I left it on for a few hours but I'm pretty sure it wasn't doing anything, it just stopped half way through.
[/ QUOTE ]
Depends on the computer, rays/sample, AO map dimensions, AA, cage, adaptive values, etc...
On an E2140 OC'd to 3.1GHz (SuperPI 19s), 16-16 rays/samples, using cage limiters and linear attenuation, I'm baking the subdiv1 1.2M smiley example (512x512, no AA) in 10 seconds.
To test, try disabling antialiasing, set map dimensions to 512x512, use the cage limiter and set 16-16 rays/sample. If that still takes "hours", something is wrong.
All I can tell you is that there is a bug related to some multicore CPUs; I'm analyzing it currently.
Try running the hxGrid renderer with 1 or 2 machines and see if the render speed improves (or set an environment variable called OMP_NUM_THREADS with value 1 so multicore won't be used).
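If you want to script that last workaround, the environment variable can be set just for the xNormal process; a minimal sketch (the install path is only a placeholder, point it at your own copy):

```python
import os
import subprocess

# Copy the current environment and force OpenMP to a single thread,
# so the multicore code path is skipped entirely.
env = dict(os.environ)
env["OMP_NUM_THREADS"] = "1"

# Placeholder path -- change it to wherever xNormal is installed.
XNORMAL = r"C:\Program Files\xNormal\xNormal.exe"

if os.path.exists(XNORMAL):
    subprocess.run([XNORMAL], env=env)
```

Launching it this way leaves the rest of your system's environment untouched.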
Noticed a slight bug whereby edges which are close to each other in the UV layout, even with dilation turned off, tend to wipe each other out. The same thing doesn't happen in Max, so it's not something I have done (hopefully!).
I only noticed this when unwrapping an arm, where the fingers in the UV layout are quite close together.
I get green and other dark artefacts where the edges are;
see below.
[ QUOTE ]
The DX 9.0c 4.09 string just indicates you have DX9.0c installed, not the concrete revision.
To know if you have the correct DX9 April 2007 revision installed, check whether d3dx9_33.dll exists in the system directory (usually C:\Windows\System32 on WinXP).
If you are 100% sure you have the correct DX version installed (and the .NET 2.0 + VS2005 SP1 DLLs), you can try the 7zipped version (which takes less space)... if you see any "sorry, can't find d3dx9_XXXXX.dll" message while the program loads the plugins, then you don't have the April 2007 revision installed.
[/ QUOTE ]
Thanks. You appear to be right: I only have d3dx9_30.dll, not d3dx9_33.dll. However, now I'm in a quandary, because I know I've run the installer for the latest DirectX runtime from Microsoft, the one xNormal directs me to, yet apparently it isn't installed.
[ QUOTE ]
Thanks. You appear to be right: I only have d3dx9_30.dll, not d3dx9_33.dll. However, now I'm in a quandary, because I know I've run the installer for the latest DirectX runtime from Microsoft, the one xNormal directs me to, yet apparently it isn't installed.
[/ QUOTE ]
Try this one: http://www.microsoft.com/downloads/detai...;DisplayLang=en
It's the August 2007 one, but it should work ok (and it's the one the upcoming version uses). Make sure you have Administrator privileges to install it, and note that you need it even if you are using Vista.
If none of this works, try running Windows Update with the optional components checked. If you still have problems, tell me and I will send you the DLL directly. Meanwhile, just use the 7zipped version instead of the Windows-installer one and set the default graphics driver to OpenGL... that should work for now (as long as you don't use the .X importer, the d3dx tangent basis, DDS, etc.)... just ignore the "sorry, can't find d3dx_XXXXX.dll" dialogs or delete the plugins that use DX.
[ QUOTE ]
Noticed a slight bug whereby edges which are close to each other in the UV layout
[/ QUOTE ]
Try checking the "discard backfaces" option as Earth suggests. Also try the "smooth normals" option in the lowpoly tab (perhaps the exporter you used flipped normals incorrectly there, so this will correct it).
Make sure you set the cage well (I know, it's hard to set it near the fingers because there is not much space to operate...). If you use uniform ray distances you will have problems with the fingers, because the rays will pass the correct target point and use the furthest collision... always use cages!
You can also try the 3dsmax UVW Unwrap "pixel snap" function on the finger edges to discard fractional numeric problems in the UV coordinates.
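The "pixel snap" trick boils down to rounding each UV to an exact texel corner so two nearly identical edges can't disagree numerically; roughly (an illustration, not 3dsmax's actual code):

```python
def snap_uv(u, v, width, height):
    """Snap a UV coordinate to the nearest texel corner of a width x height
    map, removing tiny fractional differences that make nearby edges
    disagree during baking."""
    return round(u * width) / width, round(v * height) / height

# Two edges that should meet but differ by a tiny fraction...
a = snap_uv(0.50001, 0.25, 512, 512)
b = snap_uv(0.49999, 0.25, 512, 512)
# ...land on exactly the same texel corner after snapping.
```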
[ QUOTE ]
[ QUOTE ]
How long should it take to bake a <1.5mil tri model onto a 400tri one? I left it on for a few hours but I'm pretty sure it wasn't doing anything, it just stopped half way through.
[/ QUOTE ]
Depends on the computer, rays/sample, AO map dimensions, AA, cage, adaptive values, etc...
On an E2140 OC'd to 3.1GHz (SuperPI 19s), 16-16 rays/samples, using cage limiters and linear attenuation, I'm baking the subdiv1 1.2M smiley example (512x512, no AA) in 10 seconds.
To test, try disabling antialiasing, set map dimensions to 512x512, use the cage limiter and set 16-16 rays/sample. If that still takes "hours", something is wrong.
All I can tell you is that there is a bug related to some multicore CPUs; I'm analyzing it currently.
Try running the hxGrid renderer with 1 or 2 machines and see if the render speed improves (or set an environment variable called OMP_NUM_THREADS with value 1 so multicore won't be used).
[/ QUOTE ]
Yep, I'm using a Core 2 Duo 6600 processor with an 8800 GTS.
Could it be a bug?
Yeah jogshy, it's just that, from my point of view, 3dsmax on default settings renders the normal map fine, even bits that are close together.
Not to worry, I will mess around some more with your suggestions.
Hey man, it would be cool if there was an easy way to import multiple textures to bake, the way you can import multiple high-res meshes. I am assuming that if you name the meshes in some sort of alphabetic or numeric manner they come into xNormal in that order, so the converse should be true as well. If I name high-res chunks chunk_01.obj, chunk_02.obj and so on, then export high-res textures from auto-generated UVs using the exact same convention, chunk_01.PSD, chunk_02.PSD, it should be easy to bring in multiple image files in the correct order corresponding to the mesh files.
I generally do all my painting now in ZBrush 3 with the polypainting tools, and export the textures out with the mesh chunks... so having to click on each model and manually load textures for each is a bit cumbersome.
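For what it's worth, the pairing-by-name idea can be sketched outside the app like this (the file names are just examples of the convention described above):

```python
from pathlib import Path

def pair_meshes_with_textures(mesh_names, texture_names):
    """Match each mesh to the texture sharing its stem
    (chunk_01.obj -> chunk_01.PSD). Comparison is case-insensitive;
    meshes without a matching texture map to None."""
    textures = {Path(t).stem.lower(): t for t in texture_names}
    return {m: textures.get(Path(m).stem.lower()) for m in mesh_names}

pairs = pair_meshes_with_textures(
    ["chunk_01.obj", "chunk_02.obj", "chunk_03.obj"],
    ["chunk_02.PSD", "chunk_01.PSD"],
)
```

Any batch loader built into the app would presumably do something similar: derive the key from the mesh file name and look it up among the exported textures.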
[ QUOTE ]
so having to click on each model and manually load textures for each is a bit cumbersome.
[/ QUOTE ]
Can't you save an XML settings file and load it later? The program saves the current settings each time you exit, and you can also load/save your settings manually.
You can set up all your meshes the first time and enable/disable the ones you don't need (the "visible" checkbox column).
Well... that is assuming ZBrush doesn't name the chunks differently each time, does it?
I think I don't understand what you mean well, sorry.
[ QUOTE ]
Is there any way to export a cage from Maya? I can't find .ovb or .sbm exporters for it, and .obj doesn't work
[/ QUOTE ]
No exporters for Maya/XSI/LW atm, sorry. That's because I couldn't find a "projection modifier" like 3dsmax's to export a cage from... and it was nonsense to write a simple mesh exporter when .OBJ is already supported.
Do this if you need a cage from Maya: export your lowpoly mesh as, for example, lowpoly.obj. Then, inside Maya, clone it. Hide the original lowpoly mesh. Extrude/move the cloned object's vertices until you're happy with the cage. Export it as lowpoly_cage.obj. Then assign lowpoly_cage.obj in the lowpoly slot "External cage file".
Notice that if you change the lowpoly mesh topology the cage won't match anymore... so you need to repeat the process if you add/remove any face on the original lowpoly mesh you cloned. That's because the external cage file and the lowpoly mesh topology must match 100%.
See the parallax wall; that example uses an .OBJ external cage (done from Maya, btw).
Perhaps there is something like the 3dsmax "projection modifier" in Maya... but, AFAIK, nope!
Yeah, I know xNormal saves settings and that's great, but in the current project I am working on the meshes are really complex, sometimes having 20 to 30 individual mesh pieces from ZBrush. We have a great workflow where we are doing almost all texturing in ZBrush from auto-generated UVs and baking afterward. The problem is that setting up the initial textures to bake is a process of me clicking, finding the texture, repeat, repeat, repeat... There are some models coming up where I might be dealing with 40-plus meshes. This might just be me and the project I am working on, but I can see some type of batch texture loading being useful.
I do have one question hopefully someone can help me out with: how do you use the Photoshop dilation filter? I always get errors about selecting 3 channels and a mask in the alpha. I have tried numerous scenarios of masks in the alpha, plus selections, and I can't get it to do anything. I also cannot find any documentation on this. Any help would be cool; I don't even know what it does, but I am assuming it's similar to the dilation xNormal does. It just seems to take a while, and sometimes I run out of memory at the dilation part of the render.
[ QUOTE ]
This might just be me and the project I am working on, but I can see some type of batch texture loading being useful.
[/ QUOTE ]
Ok, I understand now. Would it be enough to add a "load mesh folder" option? All the models located in a folder would be added to the list with the corresponding base bake texture.
Is that ok?
[ QUOTE ]
I do have one question hopefully someone can help me out with: how do you use the Photoshop dilation filter?
[/ QUOTE ]
You need to select RGB + alpha (4 channels in total). The idea is to use the alpha channel as a 1-bit mask where black == unwritten pixel and white == written pixel... so the colour (RGB) values will be expanded nPixels around wherever the corresponding alpha is unwritten (black).
But I think there is currently a bug in that filter. Somebody sent me an email telling me the dilation PS filter does not work like the xNormal one... It's in my TOFIX list for the final 3.11.1. I also think I'm going to change it to work without having to select an alpha mask and without having to flatten the image either.
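For reference, the dilation described there (grow written RGB into unwritten pixels, using the alpha as the written-mask) can be sketched naively like this; this is an illustration of the idea, not xNormal's implementation:

```python
def dilate(rgb, mask, passes=1):
    """Expand written pixels (mask == 1) into neighbouring unwritten ones
    (mask == 0). rgb is an H x W grid of (r, g, b) tuples; each pass grows
    the written border outward by one pixel."""
    h, w = len(rgb), len(rgb[0])
    for _ in range(passes):
        out = [row[:] for row in rgb]
        new_mask = [row[:] for row in mask]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    continue  # already written, keep as-is
                # Borrow the colour of any written 4-neighbour.
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                        out[y][x] = rgb[ny][nx]
                        new_mask[y][x] = 1
                        break
        rgb, mask = out, new_mask
    return rgb, mask

# Tiny demo: a single written red pixel in the centre of a 3x3 map.
rgb = [[(0, 0, 0)] * 3 for _ in range(3)]
rgb[1][1] = (255, 0, 0)
mask = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
rgb2, mask2 = dilate(rgb, mask)
```

After one pass the four direct neighbours of the written pixel pick up its colour, which is exactly the edge padding that hides background bleed at UV seams.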
[ QUOTE ]
Ok, I understand now. Would it be enough to add a "load mesh folder" option? All the models located in a folder would be added to the list with the corresponding base bake texture.
Is that ok?
[/ QUOTE ]
that would be perfect!
Thanks for the info. When you say "selected", do you mean physically select like with the select tool, or do you mean highlight the channels in the Channels tab?
It gives me the message saying the cage has a different number of verts than the source, even when it's exactly the same, with the same number of UVs.
I used Transfer Maps in Maya and exported the envelope from that to use in xNormal to bake AO. As it's the same topology, vert count etc., I thought it would work fine, but it doesn't. I've also tried just inflating my model a bit as a test and exporting that to use as a cage, but got the same problem.
[ QUOTE ]
When you say "selected", do you mean physically select like with the select tool, or do you mean highlight the channels in the Channels tab?
[/ QUOTE ]
Go to the Layers window, change to Channels, keep Shift pressed and select the RGB + alpha channels.
But the problem is that, to do this, you need to flatten the image, so it doesn't work very well. I'm going to change that for the final 3.11.1 so you won't need to select the alpha, nor flatten the image.
[ QUOTE ]
It gives me the message saying the cage has a different number of verts than the source, even when it's exactly the same, with the same number of UVs.
[/ QUOTE ]
And the normals? Are you sure you exported both meshes with the same options selected? Open both .OBJs with a text editor and check that the vertex elements really match (same number of vertices, 100% match in the UVs, 100% match in the face indices and face count... vertex position and normal values can vary, that's ok).
If you can't get it to work any way... can you send me your lowpoly mesh + the cage so I can debug it, please?
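That text-editor comparison is easy to automate; a simplified checker (real .OBJ files can contain more record types, and a strict check would also compare the UV values themselves, not just their count):

```python
def obj_topology(lines):
    """Summarise an .OBJ: vertex/UV counts plus the exact face records.
    Vertex positions and normal values may legitimately differ between a
    lowpoly mesh and its cage, so only counts and face wiring are kept."""
    verts = uvs = 0
    faces = []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts += 1
        elif parts[0] == "vt":
            uvs += 1
        elif parts[0] == "f":
            faces.append(parts[1:])
    return verts, uvs, faces

# Same topology, different vertex positions (the cage is pushed out).
low = """v 0 0 0
v 1 0 0
v 0 1 0
vt 0 0
vt 1 0
vt 0 1
f 1/1 2/2 3/3""".splitlines()

cage = """v 0 0 -0.2
v 1.1 0 0
v 0 1.1 0
vt 0 0
vt 1 0
vt 0 1
f 1/1 2/2 3/3""".splitlines()
```

If `obj_topology(low) == obj_topology(cage)` the two meshes agree on everything an external cage needs to agree on.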
Cheers, will give it a shot. Sorry, I couldn't send the models as they are from work; I was going to just use a personal model, but kinda forgot, hehe. Will let you know if I still get the problem <crosses fingers>.
Thanks man, great release!
One thing someone from work pointed out is that he doesn't understand why xNormal only supports tri- or quad-constructed models. It's obvious the OBJ format supports geometry with n-gons, and in the heat of production it's not like you always stop and pay attention to whether you're making an n-gon or not. So what ends up happening is that you have to convert every model to tris manually before export to get it to work in xNormal. I have just got in the habit of converting to tris before export, but he does have a point; I know ZBrush just adds edges to n-gons to make the polys either 3- or 4-sided. Is this something hard to implement in xNormal?
[ QUOTE ]
One thing someone from work pointed out is that he doesn't understand why xNormal only supports tri- or quad-constructed models...
[/ QUOTE ]
Direct3D can only work with triangles. OpenGL can manage triangles, quads and n-gons, but they must be convex and closed.
The problem is that some programs don't generate 100% convex polygons. Some even add "holes" in the middle of them. I could triangulate them manually (not easy in the case of holes), but it would consume a lot of time in the conversion (especially for big meshes), because I'd need to find coplanar and adjacent polygons... and then triangulate. And, like all automatic things, I bet some edges/normals wouldn't be calculated properly, so the shading wouldn't be very correct.
I think it's better to just use Maya's "Triangulate" or 3dsmax's "Edit Mesh", because those programs already have everything pre-calculated, so they can triangulate very fast. That way you can also better preview the shading or normal artifacts after the conversion.
Well, let me see what I can do, but I'm not very optimistic.
[ QUOTE ]
Ambient occlusion still doesn't work for me even with the latest version; now it just hangs with no progress
[/ QUOTE ]
Things are going better, haha!
Do this: load one of the examples (for example the wall). Can you complete the AO baking process with it? If it completes ok (on my computer it takes like 10s max), then perhaps it's not halted but working (just slowly)...
Btw, I saw you have an AMD X2... make sure you installed the dual-core patch for WinXP or you will have problems.
I think you must install the patch from Microsoft + the patch from AMD; I'm not sure (I'm an Intel fanboy).
To do the tests, please use 512x512, no AA, 16-16 samples per pixel. More will make it too slow to test. Btw, how big is your highpoly mesh? Are you using cages to limit ray distance, or did you check the "don't limit ray distance" option? Using cages will optimize better.
Well, if none of this works... could you send me a link where I can download your meshes and settings to debug it, please?
Btw, I know AO speed can be very slow, but it is very hard to optimize it further. Alternatively, I'm working on HW-accelerated AO, but it's still not ready (because I'm having problems with the poly limit... cards usually cannot manage meshes over 1M polys and run out of VRAM too easily).
I am finishing a 100% software realtime raytracer for the xNormal viewer... it's using the same ray engine as the normal map renderer.
Some results:
I think it will be good for people without a 3D graphics accelerator card.
Currently shadows are too "hard" and the reflection is too weak, but hey, it's a start! I'm planning global illumination with photon mapping and very accurate transparency in the future (but I don't wanna waste too much time on this pre-alpha experimental thing...). The FPS wasn't too bad running on a dual-core machine, but I had to crop the viewport to get decent speed.
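The core kernel of a raytracer like that is the ray/triangle intersection test. A standard Möller-Trumbore sketch, shown here only as an illustration of the technique, not xNormal's actual engine:

```python
def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test: returns the hit distance t along
    the ray, or None on a miss."""
    sub = lambda a, b: (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direc, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) / det     # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direc, qvec) / det    # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) / det       # distance along the ray
    return t if t > eps else None
```

Both the AO baker and a software viewer spend nearly all their time in a loop around a test like this, which is why the spatial structures around it matter so much for speed.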
One thing that has always really bothered me: sometimes if you cancel an ambocc render it will hang for a very long time. You can still move the window around and all that, but it takes a very, very long time for it to actually "abort". It's not frozen or crashed, because it will eventually stop thinking and come back, but it's very annoying to have that really long wait.
Man, fantastic app. I love how smooth the normal maps are in 3D view.
A few suggestions:
- More than one light in the scene.
- Screenshots should have alpha for easy model cut-out. Sure, I can always put psychedelic pink in the background for that, but it would be nice. Or maybe a background image slot.
- Adjustable wireframe colour.
- Reloading of single textures instead of all of them. Would help make previewing while texturing a little bit faster. Just several small buttons for each channel.
- Would be cool if xNormal didn't freeze the mouse when alt-tabbing. But I understand there were problems, and that is why Alt+Enter is required. It would just be nice to be able to have Photoshop on one screen and xNormal on another, move the mouse over the xN window and quickly click "reload diffuse texture". Would be a huge help while texturing.
- Resizing of the window in windowed mode.
Just a few small suggestions. The reason most of them are in the beauty department is that viewing or even rendering normal maps is not as perfect in other apps as I would hope for, while xNormal does it perfectly and hassle-free.
Once again, awesome software, dude.
PS. Bugs (x64):
- Light stops being locked to the camera when you hide the UI.
- Using a PSD file in the base texture slot disables shadows, but using a BMP does not.
- Using a PSD for the GI map ended up with an error saying that it has to be a power of 2 wide (although it was), and converting it to BMP ended up with no error.
[ QUOTE ]
- More than one light in the scene.
[/ QUOTE ]
Yep, that's one of the things I wanna add some day.
[ QUOTE ]
- Screenshots should have alpha for easy model cut-out. Sure, I can always put psychedelic pink in the background for that, but it would be nice. Or maybe a background image slot.
[/ QUOTE ]
Currently that's not possible. DX9 does not support framebuffers with alpha. OpenGL does, but the HW has the last word. I'm working on a software rasterizer too (in that case it will be possible).
[ QUOTE ]
- Adjustable wireframe colour.
[/ QUOTE ]
Wireframe colour is currently set to 1 - backgroundColor... if you change the background colour, the wireframe colour will change too. You can also edit the ui.lua file manually to set it to a certain colour (but that requires Lua scripting knowledge). Perhaps I should add a colour-picker button like the grid-colour one.
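That rule is just a per-channel complement; assuming channels in the 0..1 range, it amounts to:

```python
def wireframe_colour(background):
    """Per-channel complement (1 - backgroundColor), each channel in 0..1:
    the wireframe colour the viewer derives from the background colour."""
    return tuple(1.0 - c for c in background)
```

So a black background yields white wires, and tweaking the background automatically keeps the wireframe visible against it.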
[ QUOTE ]
- Reloading of single textures instead of all of them. Would help make previewing while texturing a little bit faster. Just several small buttons for each channel.
[/ QUOTE ]
Yep. For xNormal 4.0 I will change the UI completely. It will be a typical application instead of a strange-UI one. You will be able to do that.
[ QUOTE ]
- Would be cool if xNormal didn't freeze the mouse when alt-tabbing. But I understand there were problems, and that is why Alt+Enter is required.
[/ QUOTE ]
Currently DX9 limits a lot. It uses a very strange "device lost" behavior. Each time you Alt+Tab, Ctrl+Alt+Del, etc., the textures and meshes need to be re-uploaded to the graphics card, and so on... There is also a similar problem with the Windows input system, and problems with multi-monitor setups, etc. Vista and the DX10 system will be much better.
[ QUOTE ]
- Resizing of the window in windowed mode.
[/ QUOTE ]
You can currently, but only if you use the OpenGL graphics driver (set it in the plugin manager).
[ QUOTE ]
PS. Bugs (x64):
[/ QUOTE ]
Nice list. Gonna try to solve them asap.
[ QUOTE ]
Currently DX9 limits a lot. It uses a very strange "device lost" behavior. Each time you Alt+Tab, Ctrl+Alt+Del, etc., the textures and meshes need to be re-uploaded to the graphics card, etc...
[/ QUOTE ]
You can avoid the problem by creating textures and meshes with D3DPOOL_MANAGED. Only the default pool gets lost when the user alt-tabs. The managed pool behaves more like OpenGL!
[ QUOTE ]
[ QUOTE ]
Currently DX9 limits a lot. It uses a very strange "device lost" behavior. Each time you Alt+Tab, Ctrl+Alt+Del, etc., the textures and meshes need to be re-uploaded to the graphics card, etc...
[/ QUOTE ]
You can avoid the problem by creating textures and meshes with D3DPOOL_MANAGED. Only the default pool gets lost when the user alt-tabs. The managed pool behaves more like OpenGL!
[/ QUOTE ]
Yep, but some drivers also report D3DERR_DRIVERINTERNALERROR on Alt+Tab / Ctrl+Alt+Del sometimes!
[ QUOTE ]
- Using a PSD file in the base texture slot disables shadows, but using a BMP does not.
[/ QUOTE ]
Just thought I'd chime in and say this isn't true on my setup. Shadows work using PSD files as the base texture, saved in Photoshop CS3 using compatibility mode.
Not sure what specs are helpful, but I'm running xNormal v3.11.1.7434 (x64 Release) on Windows XP Pro x64, on an Intel E6850 with an ATI Radeon HD 2900 XT with the latest (7.9) drivers.
On a side note, I again have to applaud you and your software. I thought xNormal would have a problem baking maps out of a 12-million-triangle mesh, but it did so amazingly fast. I think I'm going to have to send you another donation soon.
[ QUOTE ]
I exported my low poly model from Maya. I then pushed verts out so the model would act as an envelope; I didn't add/remove/alter geometry, just inflated it to fit around the high poly. I then exported that (as .obj, like the first export). It didn't work in xNormal, so I'm looking at them now in Notepad and the numbers are different. Which makes sense, of course, if it's the coordinates. No idea what to do here; I can't send the file to you as it's work stuff.
[/ QUOTE ]
Btw, shadows + OpenGL + Radeon 2XXX are currently broken (it does not work with the 2400; not sure about other models). It's an ATI bug in the Catalyst drivers; they are working to solve it. It should work ok using Direct3D, though.
[ QUOTE ]
so I'm looking at them now in Notepad and the numbers are different
[/ QUOTE ]
See if the vertex position count or the face count/face indices are different. All the other data does not matter for external cages.
[ QUOTE ]
Just wondering: can xNormal render 32-bit FP displacement maps?
[/ QUOTE ]
Yep; save as TIFF (go to the plugin manager and select IEEE 754 FP32 output) or as the SuperRAW image format. The program works internally with 64/80-bit floating point precision... it just converts at the end, depending on the output image format.
And remember, it does not tessellate the lowpoly, so you need to pass it already subdivided at the level you want.
[ QUOTE ]
- Using a PSD file in the base texture slot disables shadows, but using a BMP does not.
[/ QUOTE ]
I'm analyzing that. I suspect it's something related to the alpha channel/alpha test... it needs more investigation.
Just a couple of minor (and major) additions to the wishlist:
- How about different speeds for the rotations of ReelC and ReelL? Not vital, but would be nice.
- A turntable recording function would be nice, capable of both exporting individual images for compositing in After Effects/Premiere/etc., and, for example, an uncompressed AVI.
- If there's a turntable function, give the option to either spin the camera around the model or rotate the actual model.
- As mentioned before, multiple lights would be nice.
Love the new release, been using it a lot.
A coworker brought up a good point:
why can't we load models with n-gons?
A lot of our highpoly stuff has n-gons; it's a waste of time to make sure everything is a quad or a tri, and it sometimes messes up the normals (in Maya) when you convert to all tris.
Is this something easy to fix? Support for n-gons.
[ QUOTE ]
- How about different speeds for the rotations of ReelC and ReelL? Not vital, but would be nice.
[/ QUOTE ]
Easy
[ QUOTE ]
- A turntable recording function would be nice, capable of both exporting individual images for compositing in After Effects/Premiere/etc., and, for example, an uncompressed AVI.
[/ QUOTE ]
Yep, but it would have to be uncompressed, because good codecs (QuickTime, DivX, etc.) are not free. The capture rate will suck (5 FPS approx) and the videos could take up a lot of space.
Well, I can try, but you could use this for free and a lot more easily:
[ QUOTE ]
- If there's a turntable function, give the option to either spin the camera around the model or rotate the actual model.
[/ QUOTE ]
I would have to change a lot of things in the graphics driver, loading system and spatial structures to manage moving objects... that's why you can currently only move the camera or the light.
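Orbiting the camera is the cheap path because nothing in the spatial structures moves. A sketch of the per-frame camera position for a turntable (a hypothetical helper, not the viewer's code):

```python
import math

def orbit_camera(center, radius, angle_deg, height=0.0):
    """Camera position on a circle of the given radius around center,
    at an optional height offset. Stepping angle_deg each frame while
    looking at center yields a turntable without moving the model."""
    a = math.radians(angle_deg)
    return (center[0] + radius * math.cos(a),
            center[1] + height,
            center[2] + radius * math.sin(a))
```

Rendering one frame per degree (0..359) and saving each frame would give exactly the image sequence asked for above.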
[ QUOTE ]
- As mentioned before, multiple lights would be nice.
[/ QUOTE ]
I'm at the limit of SM3.0. That needs a heavy shader change and optimization. I'm making 3 passes for SM1.0... with multiple lights that could be too slow. I'd probably need to move to a "deferred shading" model for that.
[ QUOTE ]
why can't we load models with n-gons?
[/ QUOTE ]
I had n-gon support in the past. It's "commented out" in the code. The problem is the poor exporting support: FBX takes ages to triangulate a 1M mesh, some OBJ exporters don't even make the polygons convex, or they create holes in the middle, etc. I cannot include it until some libraries I'm using support them correctly and fast.
On the other hand, I would need to triangulate them manually, and that requires finding adjacent polygons, grouping coplanar surfaces, triangulating non-convex polygons and polygons with holes, fixing tangent-space and edge-shading problems... Too slow. Triangulating a non-convex polygon with holes is neither an easy nor a fast task... and, most importantly, all automatisms are fools by nature, so it would probably generate a lot of shading artifacts and incorrect normal calculations.
Triangles and quads are always convex and hole-free by nature. That's why D3D only supports triangles. OpenGL can manage quads (and 100% convex, hole-free polygons), but it is very slow to process/paint them (in fact, drivers are usually only optimized to paint lines and triangles, unless you have a pro card like a FireGL or a Quadro). Subdivision programs can usually only manage triangles and quads too (due to the Catmull-Clark, Loop or Butterfly algorithms).
You can triangulate/quadrangulate your model before export. It's as simple as Edit Mesh in 3dsmax or Triangulate/Quadrangulate in Maya. Those programs have the data already calculated, so they can do it very fast (and you can see the shading artifacts better after you do it).
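For a convex, hole-free polygon the triangulation really is trivial, which is exactly why convexity matters so much: a fan from the first vertex always works. A sketch:

```python
def fan_triangulate(indices):
    """Split a CONVEX polygon (vertex indices in winding order) into
    triangles by fanning from the first vertex. This is the easy case;
    concave polygons or polygons with holes need a real triangulator
    (ear clipping, etc.), which is the hard part discussed above."""
    if len(indices) < 3:
        raise ValueError("need at least 3 vertices")
    return [(indices[0], indices[i], indices[i + 1])
            for i in range(1, len(indices) - 1)]
```

An n-gon always yields n - 2 triangles; a quad [0, 1, 2, 3] becomes the two triangles (0, 1, 2) and (0, 2, 3).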
[ QUOTE ]
newest version still can't render non-square textures correctly
[/ QUOTE ]
I just rendered the smiley example at 500x490 with 3.11.1... I also previewed it with a 512x256 texture without problems...
Note that non-power-of-two textures are not supported by some 3D graphics cards; it's a HW limitation (related to mipmapping and texture compression). Cards supporting NPOT textures usually render a lot slower when working with them... so it is always good practice to make POT textures.
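Checking (and rounding up to) a power of two before baking is cheap; the usual bit-trick helpers:

```python
def is_pot(n):
    """True if n is a power of two (powers of two have exactly one set bit,
    so n & (n - 1) clears it to zero)."""
    return n > 0 and (n & (n - 1)) == 0

def next_pot(n):
    """Smallest power of two >= n, e.g. for padding a 500x490 map up to
    512x512 before baking."""
    p = 1
    while p < n:
        p *= 2
    return p
```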
Can anybody confirm a thing for me, please?
Can xNormal load an .OBJ exported from a Macintosh (Intel or PPC) or Unix? I wanna test the "end of line" characters.
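For anyone testing that: the three end-of-line conventions are easy to detect from the raw bytes, and a loader can normalise all of them in one pass. A sketch:

```python
def detect_eol(data: bytes):
    """Classify line endings in raw file bytes: Windows CRLF, classic Mac CR,
    or Unix/OS X LF -- the cases an .OBJ loader must cope with.
    CRLF must be tested first, since it contains both CR and LF."""
    if b"\r\n" in data:
        return "CRLF (Windows)"
    if b"\r" in data:
        return "CR (classic Mac)"
    if b"\n" in data:
        return "LF (Unix / OS X)"
    return "no line breaks"

def split_lines(data: bytes):
    """Normalise any convention to a list of clean text lines."""
    normalised = data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
    return normalised.decode("ascii").split("\n")
```

With that normalisation, an .OBJ written on any of the three platforms parses identically.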
jogshy - awesome release, man. My stuff looks tons better in the xNormal viewer than in 3dsmax. Just one question: shadows are not working for some reason. My vid card is fairly old, but it supports Pixel Shader 2.0, so I thought it would be ok.
Also love the cavity map from normal map. Excellent.
btw: I love this program
There is a bug with Radeon HD 2XXX + the OpenGL driver in the textures. I'm working with ATI to solve it.
Feel free to test and tell me if you discover any bugs, please!
For the final version I wanna add a HW-accelerated renderer... it's almost finished, but still under development to improve it a bit.
thx!
not to worry i will mess around some more with your suggestions
cheers
Mike
I generally do all my painting now in zbrush 3 with the polypainting tools, and export the textures out with the mesh chunks.. so having to click on each model and manually load textures for each is a bit cumbersome.
[ QUOTE ]
so having to click on each model and manually load textures for each is a bit cumbersome.
[/ QUOTE ]
Can't you save an XML settings file and load it later? The program is supposed to save the current settings each time you exit, and you can load/save your settings manually too.
You can set all your meshes the first time and enable/disable the mesh you don't need ( the "visible" checkbox column )
Well.. that is assuming ZBrush doesn't name the chunks differently each time, does it?
I don't think I understand what you mean, sorry.
[ QUOTE ]
Is there any way to export a cage from Maya? I can't find .ovb or .sbm exporters for it, and .obj doesn't work
[/ QUOTE ]
No exporters for Maya/XSI/LW atm, sorry. That's because I couldn't find a "projection modifier" like 3dsmax's to export a cage... and it made no sense to write a simple mesh exporter when .OBJ is already supported.
Do this if ya need a cage from Maya... Export your lowpoly mesh as, for example, lowpoly.obj. Then, inside Maya, clone it. Hide the original lowpoly mesh. Extrude/move the cloned object vertices until you're happy with the cage. Export it as lowpoly_cage.obj. Then assign the lowpoly_cage.obj in the lowpoly slot "External cage file".
Notice that if you change the lowpoly mesh topology the cage won't match anymore... so you need to repeat the process if you add/remove any face on the original lowpoly mesh you cloned. That's because the external cage file and the lowpoly mesh topology must match 100%.
See the parallax wall, that example is using an .OBJ external cage ( did from Maya btw )
Perhaps Maya has something like the 3dsmax "projection modifier".. but, AFAIK, nope!
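The manual cage workflow above can also be sketched in code: push every lowpoly vertex outward along its vertex normal, leaving topology untouched so the cage still matches the lowpoly 100%. A minimal Python sketch (function name and data layout are hypothetical, not xNormal's API):

```python
def inflate_cage(vertices, normals, distance):
    """Build a cage by pushing each vertex out along its vertex normal.

    vertices, normals: lists of (x, y, z) tuples. The topology is
    untouched, so the cage stays a 100% match for the lowpoly mesh.
    """
    return [
        (vx + nx * distance, vy + ny * distance, vz + nz * distance)
        for (vx, vy, vz), (nx, ny, nz) in zip(vertices, normals)
    ]

# A vertex at the origin with a +Z normal, pushed out 0.1 units:
cage = inflate_cage([(0.0, 0.0, 0.0)], [(0.0, 0.0, 1.0)], 0.1)
```

In practice you'd still eyeball the result and nudge individual vertices, exactly as in the clone-and-extrude workflow described above.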
[ QUOTE ]
i do have one question hopefully someone can help me out with.. how do you use the photoshop dilation filter? i always get errors about selecting 3 channels and a mask in the alpha. i have tried numerous scenarios of a mask in the alpha, plus selections, and i can't get it to do anything. i also cannot find any documentation on this.. any help would be cool. i don't even know what it does, but i am assuming it's similar to the dilation xnormal does; it just seems to take a while and sometimes i run out of memory at the dilation part of the render.
this might just be me, and the project i am working on, but i can just see some type of batch texture loading being useful.
[/ QUOTE ]
Ok, I understand now. Would it be enough to add a "load mesh folder" option? All the models located in a folder would be added to the list with the corresponding base bake texture.
Is that ok?
[ QUOTE ]
i do have one question hopefully someone can help me out with.. how do you use the photoshop dilation filter?
[/ QUOTE ]
You need to select RGB + alpha ( 4 channels in total ). The idea is to use the alpha channel as a 1-bit mask where black==unwritten pixel and white==written pixel... so the color(RGB) values will be expanded nPixels around if the corresponding alpha is unwritten(black).
But I think there is currently a bug in that filter. Somebody sent me an email telling me the dilation PS filter does not work like the xNormal one... It's in my TOFIX list for the final 3.11.1. I also think I'm gonna change it to work without having to select an alpha mask and without having to flatten the image either.
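For reference, the dilation described above — alpha as a written/unwritten mask, colors expanded into neighboring unwritten pixels — can be sketched as a toy single-pass, 4-neighbor filter in pure Python. The real Photoshop filter and xNormal's dilation may differ in details (pass count, neighborhood shape):

```python
def dilate_once(rgb, alpha):
    """One dilation pass on a 2D image.

    rgb[y][x]   : color value (any type)
    alpha[y][x] : 1 == written pixel, 0 == unwritten pixel
    Unwritten pixels copy the color of any written 4-neighbor.
    """
    h, w = len(rgb), len(rgb[0])
    out_rgb = [row[:] for row in rgb]
    out_alpha = [row[:] for row in alpha]
    for y in range(h):
        for x in range(w):
            if alpha[y][x]:
                continue  # already written, keep as-is
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and alpha[ny][nx]:
                    out_rgb[y][x] = rgb[ny][nx]
                    out_alpha[y][x] = 1
                    break
    return out_rgb, out_alpha

# Toy 1x3 row: one written pixel "A", two unwritten ones.
row_rgb, row_alpha = dilate_once([["A", None, None]], [[1, 0, 0]])
```

Running it n times expands the written border by n pixels, which is what keeps bilinear filtering and mipmapping from bleeding background color into the UV islands.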
[ QUOTE ]
Would it be enough to add a "load mesh folder" option? All the models located in a folder would be added to the list with the corresponding base bake texture.
Is that ok?
[/ QUOTE ]
that would be perfect!
thanks for the info. when you say selected, do you mean physically select with the select tool, or highlight the channels in the channels tab?
I used transfer maps in maya and exported the envelope from that to use in xnormal to bake AO. As it's the same topology, vert count etc. I thought it would work fine, but it doesn't. I've also tried just inflating my model a bit as a test and exporting that to use as a cage, but got the same problem.
[ QUOTE ]
when you say selected, do you mean physically select like the select tool or do you mean highlight the channels in the channels tab.
[/ QUOTE ]
Go to the layers window. Change to channels. Keep shift pressed and select RGB + alpha channel.
But the problem is that, to do this, you need to flatten the image, so it doesn't work very well. I'm gonna change that for the final 3.11.1 so you won't need to select the alpha nor flatten the image.
[ QUOTE ]
It gives me the message saying the cage has a different number of verts than the source, even when it's exactly the same, and same number of uvs.
[/ QUOTE ]
And the normals? Are you sure you exported both meshes with the same options selected? Open both .OBJs with a text editor and check that the vertex elements really match ( same number of vertices, 100% match in the UVs, 100% match in the face indices and face count... vertex positions and normal values can vary, that's ok ).
If you can't get it to work in any way... can you send me your lowpoly mesh + the cage so I can debug it pls?
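Eyeballing the two .OBJs in a text editor can be automated. A minimal sketch that checks the elements an external cage must share with the lowpoly mesh — counts of `v`/`vt` records and identical `f` face indices — written against the standard Wavefront .OBJ keywords (this illustrates the matching rule above, not xNormal's actual loader):

```python
def obj_topology(lines):
    """Count v / vt / f records in Wavefront .OBJ text lines.

    For an external cage, element counts and face indices must match
    the lowpoly mesh exactly; only the position values may differ.
    """
    counts = {"v": 0, "vt": 0, "f": 0}
    faces = []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] in counts:
            counts[parts[0]] += 1
        if parts[0] == "f":
            faces.append(tuple(parts[1:]))
    return counts, faces

def cage_matches(low_lines, cage_lines):
    """True if the cage has identical topology to the lowpoly mesh."""
    low_counts, low_faces = obj_topology(low_lines)
    cage_counts, cage_faces = obj_topology(cage_lines)
    return low_counts == cage_counts and low_faces == cage_faces
```

If this returns False for your pair of files, the exporter wrote them with different options (extra vertices, re-ordered faces, etc.), which is exactly the "different number of verts" error.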
[ QUOTE ]
It gives me the message saying the cage has a different number of verts than the source, even when it's exactly the same, and same number of uvs.
[/ QUOTE ]
I think the problem is located. The final 3.11.1 will include, in a few days, a patch for that.
Basically it corrects some bugs + adds the load-mesh-folder option for arshlevon + the cage bug correction for lupus (I hope).
PS. Woo woo
One thing someone from work pointed out is that he doesn't understand why xnormal only supports tri or quad constructed models.. it's obvious the obj format supports geometry with ngons, and in the heat of production it's not like you always stop and pay attention to whether you're making an ngon or not. So what ends up happening is that you have to convert every model to tris manually before export to get it to work in xnormal. i have just got in the habit of converting to tris before export, but he does have a point. i know zbrush just adds edges to ngons to make the polies either 3 or 4 sided. is this something hard to implement in xnormal?
[ QUOTE ]
One thing someone from work pointed out is that he doesn't understand why xnormal only supports tri or quad constructed models...
[/ QUOTE ]
Direct3D can only work with triangles. OpenGL can manage triangles, quads and ngons, but they must be convex and closed.
The problem is that some programs don't generate 100% convex polygons. Some even add "holes" in the middle of them. I could triangulate them manually (not easy in the case of holes), but it would consume a lot of time in the conversion (especially for big meshes) because I need to find coplanar and adjacent polygons... and then triangulate. And, like all automatic things, I bet some edges/normals wouldn't be calculated in a proper way, so the shading wouldn't be very correct.
I think it's better just to use Maya's "triangulate" or the 3dsmax "Edit mesh" because those programs already have everything pre-calculated, so they can triangulate very fast. That way you can also better preview the shading or normal artifacts after the conversion process.
Well, let me see what I can do but i'm not very optimistic.
[ QUOTE ]
Ambient occlusion still doesn't work for me even with the latest version, now it just hangs with no progress
[/ QUOTE ]
Things go better haha!
Do this... load one of the examples ( for example the wall ). Can you complete the AO baking process with it? If it completes ok (on my computer it takes like 10s max) then perhaps it's not halted but working (just slowly)...
Btw, I saw you have an AMD X2... Make sure you installed the dual-core patch for WinXP or you will have problems.
See this:
http://support.microsoft.com/kb/896256
http://www.overclock3d.net/articles.php?...x2_hotfix_patch
http://www.xtremesystems.org/forums/showthread.php?t=81429
I think you must install the patch from Microsoft + the patch from AMD, i'm not sure ( I'm an Intel fanboy ).
To do the tests pls use 512x512, no AA, 16-16 samples per pixel. More will make it too slow to test. Btw.. how big is your highpoly mesh? Are you using cages to limit ray distance, or did you check the "don't limit ray distance" option? Using cages will optimize it better.
Well, if nothing of this works.. could you send me a link where to download your meshes and settings to debug it pls?
Btw, I know AO speed can be very slow, but it's very hard to optimize it more. Alternatively, I'm working on HW-accelerated AO, but it's still not ready ( because I'm having problems with the poly limit... usually cards cannot manage more than 1M poly meshes and run out of VRAM too easily ).
Some results:
I think it will be good for people without a 3D graphics accelerator card.
Currently shadows are too "hard" and the reflection is too weak, but hey, it's a start! I'm planning global illumination with photon mapping and very accurate transparency in the future ( but I don't wanna waste too much time on this pre-alpha experimental thing... ). The FPS wasn't too bad running on a dual-core machine, but I had to crop the viewport to get decent speed.
I'm making other things too, but are secret atm
Great stuff so far, jogshy. Can't wait to see the new stuff released.
[ QUOTE ]
It's not frozen or crashed, because it will eventually stop thinking and come back, but it's very annoying to have that really long wait.
[/ QUOTE ]
Ok, gonna try to solve that for the next version.
A few suggestions:
-More than one light in the scene.
-Screenshots should have alpha for easy model cut out. Sure, I can always put psychedelic pink in the background for that, but it would be nice. Or maybe a background image slot.
-Adjustable wireframe colour.
-Reloading of single textures instead of all of them. Would help with previewing while texturing a little bit faster. Just several small buttons for each channel.
-Would be cool if Xnormal didn't freeze the mouse when alt-tabbing. But I understand there were problems, so that is why alt+enter is required. It would be just nice to be able to have photoshop on one screen, xnormal on another, and just move the mouse over the Xn window and quickly click "reload diffuse texture". Would be a huge help while texturing.
-Resizing of the window in windowed mode.
Just a few small suggestions. The reason most of them are in the beauty department is that viewing or even rendering normal maps is not as perfect in other apps as I would hope for, while Xnormal does it perfectly and hassle free.
Once again, awesome software dude.
ps. Bugs (x64):
- Light stops being locked to the camera when you hide UI.
- Using PSD file in base texture slot disables shadows, but using bmp does not.
- Using PSD for GI map ended up with error saying that it has to be power of 2 wide (although it was) and converting it to bmp ended up with no error.
[ QUOTE ]
-More than one light in the scene.
[/ QUOTE ]
Yep, that's one of the things I wanna add some day.
[ QUOTE ]
-Screenshots should have alpha for easy model cut out. Sure I can always put psychadelic pink in the background for that but would be nice. Or maybe background image slot.
[/ QUOTE ]
Currently that's not possible. DX9 does not support framebuffers with alpha. OpenGL does, but the HW has the last word. I'm working on a software rasterizer too ( in that case it will be possible ).
[ QUOTE ]
-adjustable wireframe colour.
[/ QUOTE ]
Currently the wireframe color is set to 1-backgroundColor... if you change the background color, the wireframe color will change too. You can also edit the ui.lua file manually to set it to a certain color (but that requires Lua scripting knowledge). Perhaps I should add a color-picker button like the grid color one.
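The 1-backgroundColor rule is just a per-channel complement. A trivial sketch, assuming colors normalized to the 0..1 range (the function name is hypothetical, not xNormal's):

```python
def wireframe_color(background):
    """Complement each channel so the wireframe contrasts the background."""
    return tuple(1.0 - c for c in background)

# e.g. a dark grey background yields a light grey wireframe,
# and a pure black background yields pure white.
result = wireframe_color((0.2, 0.2, 0.2))
```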
[ QUOTE ]
-reloading of single textures instead of all of them. Would help with previewing while texturing a little bit faster. Just several small buttons for each channel.
[/ QUOTE ]
Yep. For xNormal 4.0 I will change the UI completely. It will be a typical application instead of a strange-UI one. You will be able to do that.
[ QUOTE ]
-would be cool if Xnormal didn't freeze mouse when alt tabbing. But I understand there were problems so that is why alt+enter is requied.
[/ QUOTE ]
Currently DX9 limits a lot. It uses a very strange "device lost" behavior. Each time you ALT+TAB, CTRL+ALT+DEL, etc... the textures and meshes need to be re-uploaded to the graphics card, etc... There is also a similar problem with the Windows input system, and problems with multimonitor setups, etc. The Vista and DX10 system will be much better.
[ QUOTE ]
-resizing of window in windowed mode.
[/ QUOTE ]
You can currently, but only if you use the OpenGL graphics driver ( set it in the plugin manager ).
[ QUOTE ]
ps. Bugs (x64):
[/ QUOTE ]
Nice list. Gonna try to solve them asap.
thx for the feedback!
[ QUOTE ]
Currently DX9 limits a lot. It uses a very strange "device lost" behavior. Each time you ALT+TAB, CTRL+ALT+DEL, etc... the textures and meshes need to be re-uploaded to the graphics card, etc...
[/ QUOTE ]
You can avoid the problem by creating textures and meshes with D3DPOOL_MANAGED. Only the default pool gets lost when the user alt-tabs. The managed pool behaves more like OpenGL!
[ QUOTE ]
[ QUOTE ]
Currently DX9 limits a lot. It uses a very strange "device lost" behavior. Each time you ALT+TAB, CTRL+ALT+DEL, etc... the textures and meshes need to be re-uploaded to the graphics card, etc...
[/ QUOTE ]
You can avoid the problem by creating textures and meshes with D3DPOOL_MANAGED. Only the default pool gets lost when the user alt-tabs. The managed pool behaves more like OpenGL!
[/ QUOTE ]
Yep, but some drivers also report D3DERR_DRIVERINTERNALERROR on alt+tab/ctrl+alt+del sometimes!
[ QUOTE ]
- Using PSD file in base texture slot disables shadows, but using bmp does not.
[/ QUOTE ]
Just thought I'd chime in and say this isn't true on my setup. Shadows work using PSD files as base texture. Saved on Photoshop CS3 using compatibility mode.
Not sure what specs are helpful, but I'm running xNormal v3.11.1.7434 (x64 Release) on Windows XP Pro x64, on an Intel E6850 with an ATI Radeon HD 2900 XT with latest (7.9) drivers.
On a side note, I again have to applaud you and your software. I thought xNormal would have a problem baking maps out of a 12 million triangle mesh, but it did so amazingly fast. I think I'm going to have to send you another donation soon
I exported my low poly model from maya. I then pushed verts out so the model would act as an envelope; I didn't add/remove/alter geometry, just inflated it to fit around the high poly. I then exported that (as .obj, like the first export). It didn't work in xnormal, so I'm looking at them now in notepad and the numbers are different. Which makes sense of course if it's the coordinates. No idea what to do here; I can't send the file to you as it's work stuff.
[ QUOTE ]
ATI Radeon HD 2900 XT with latest (7.9) drivers.
[/ QUOTE ]
Btw, shadows + OpenGL + Radeon 2XXX are currently broken (it does not work with the 2400; not sure about other models). It's an ATI bug in the Catalyst; they are working to solve it. It should work ok using Direct3D, tho.
[ QUOTE ]
so I'm looking at them mow in notepad and the numbers are different
[/ QUOTE ]
See if the vertex position count or face count/face indices are different. All the other data does not matter for external cages.
Just wondering; can Xnormal render 32-bit FP displacement maps?
[ QUOTE ]
Just thought I'd chime in and say this isn't true on my setup...
[/ QUOTE ]
I'm using CS1 and a GF7 card in DX9 mode. XP Pro x64.
[ QUOTE ]
Hey Jogshy,
Just wondering; can Xnormal render 32-bit FP displacement maps?
[/ QUOTE ]
Yep, save as TIFF ( go to the plugin manager and select IEEE754 FP32 output ) or the SuperRAW image format. The program works internally with 64/80-bit double-precision floating point... it just converts at the end, depending on the output image format.
And remember, it does not tessellate the lowpoly, so you need to pass it already subdivided at the level you want.
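The precision point above is easy to demonstrate: an FP32 channel round-trips a displacement value with roughly 7 significant digits, while an 8-bit channel snaps to 1/255 steps. A standard-library sketch (illustrative only, not xNormal code):

```python
import struct

def to_fp32(value):
    """Round-trip a Python float (double) through IEEE754 binary32."""
    return struct.unpack("<f", struct.pack("<f", value))[0]

def to_8bit(value):
    """Quantize a 0..1 displacement to an 8-bit channel and back."""
    return round(value * 255) / 255

height = 0.123456789
# FP32 keeps ~7 significant digits; 8-bit collapses to the
# nearest 1/255 step, losing fine displacement detail.
fp32_err = abs(to_fp32(height) - height)
byte_err = abs(to_8bit(height) - height)
```

This is why a subdivided mesh displaced by an 8-bit map shows visible stair-stepping while an FP32 TIFF does not.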
[ QUOTE ]
- Using PSD file in base texture slot disables shadows, but using bmp does not.
[/ QUOTE ]
I'm analyzing that. I suspect it's something related to the alpha channel/alpha test... it needs more investigation.
- How about different speeds for the rotations of ReelC and ReelL? Not vital, but would be nice.
- A turntable recording function would be nice, capable of both exporting it as individual images for composing in After Effects/Premiere/etc, and for example an uncompressed AVI.
- If a turntable function, give the option to either spin the camera around the model, or rotating the actual model.
- Mentioned before, multiple lights would be nice.
a coworker brought up a good point..
why can't we load models with ngons?
a lot of our highpoly stuff has ngons. it's a waste of time to make sure everything is a quad or a tri, and it sometimes messes up the normals (in maya) when you convert to all tris.
is this something easy to fix? support for ngons.
thanks again for all your hard work.
[ QUOTE ]
- How about different speeds for the rotations of ReelC and ReelL? Not vital, but would be nice.
[/ QUOTE ]
Easy
[ QUOTE ]
- A turntable recording function would be nice, capable of both exporting it as individual images for composing in After Effects/Premiere/etc, and for example an uncompressed AVI.
[/ QUOTE ]
Yep, but it would have to be uncompressed because good codecs (Quicktime, DivX, etc) are not free. The capture rate would suck (5FPS approx) and the videos could take a lot of space.
Well, I can try but you could use this for free and a lot easier:
http://www.microsoft.com/windows/windowsmedia/forpros/encoder/default.mspx
[ QUOTE ]
- If a turntable function, give the option to either spin the camera around the model, or rotating the actual model.
[/ QUOTE ]
I would have to change a lot of things in the graphics driver, loading system and spatial structures to manage moving objects... that's why you can only move the camera or the light currently.
[ QUOTE ]
- Mentioned before, multiple lights would be nice.
[/ QUOTE ]
I'm at the limit of SM3.0. It needs a heavy shader change and optimization for that. I'm making 3 passes for SM1.0... with multiple lights that could be too slow. I'd probably need to move to a "deferred shading" model for that.
[ QUOTE ]
why can't we load models with ngons?
[/ QUOTE ]
I had n-gon support in the past. It's "commented out" in the code. The problem is the poor exporter support. FBX takes ages to triangulate a 1M mesh, some OBJ exporters don't even make the polygons convex or they create holes in the middle, etc... I can't include it until the libraries I'm using support them correctly and fast.
On the other hand, I would need to triangulate them manually, and that requires finding adjacent polygons, grouping coplanar surfaces, triangulating non-convex and holed polygons, fixing tangent space and edge shading problems... Too slow... triangulating a non-convex polygon with holes is neither an easy nor a fast task... and, most importantly, all automatic methods are fallible by nature, so they would probably generate a lot of shading artifacts and incorrect normal calculations.
Triangles and quads are always convex and hole-free by nature. That's why D3D only supports triangles. OpenGL can manage quads (and 100% convex, hole-free polygons), but it's very slow to process/paint (in fact, drivers are usually only optimized to paint lines and triangles, unless you have a pro card like a FireGL or a Quadro). Subdivision programs usually can manage only triangles and quads too (due to the Catmull, Loop or Butterfly algorithms).
You can triangulate/quadrangulate your model before exporting. It's as simple as Edit mesh in 3dsmax or Triangulate/Quadrangulate in Maya. Those programs have the data already calculated so they can do it very fast ( and you can see the shading artifacts better after you do it ).
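For convex, hole-free polygons the triangulation discussed above reduces to a simple triangle fan, which is exactly why triangles and quads are the easy case. A minimal sketch — valid only under the convexity assumption that broken exporter output violates:

```python
def fan_triangulate(indices):
    """Split a convex polygon (list of vertex indices) into triangles.

    Only valid for convex, hole-free polygons: every output triangle
    shares the polygon's first vertex.
    """
    return [(indices[0], indices[i], indices[i + 1])
            for i in range(1, len(indices) - 1)]

# A quad becomes two triangles:
# fan_triangulate((0, 1, 2, 3)) -> [(0, 1, 2), (0, 2, 3)]
```

For a concave or holed n-gon this fan produces overlapping or inverted triangles, which is the "shading artifacts" failure mode described in the reply above.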
[ QUOTE ]
newest version still can't render non-square textures correctly
[/ QUOTE ]
I just rendered the smiley example at 500x490 with the 3.11.1 ... I also previewed it with a 512x256 texture without problems...
Notice that non-power-of-two textures are not supported by some 3D graphics cards; it's a HW limitation (related to mipmapping and texture compression). Cards supporting NPOT textures usually render a lot slower when working with them... so it's always good practice to make POT textures.
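The power-of-two advice is cheap to check or enforce before baking; a sketch of the usual helpers (hypothetical utility functions, not part of xNormal):

```python
def is_pot(n):
    """True if n is a power of two (a texture-friendly dimension)."""
    return n > 0 and (n & (n - 1)) == 0

def next_pot(n):
    """Smallest power of two >= n, e.g. for padding a 500x490 bake."""
    p = 1
    while p < n:
        p *= 2
    return p

# 512 is fine as-is; a 500x490 map would pad up to 512x512.
```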
Can xNormal load an .OBJ exported from a Macintosh (Intel or PPC) or Unix, pls? I wanna test the "end of line" characters.
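On the end-of-line question: Windows (\r\n), classic Mac (\r) and Unix (\n) files parse identically if the loader splits on any newline convention; in Python, str.splitlines() already handles all three. A sketch:

```python
def obj_lines(raw_text):
    """Split .OBJ text into lines regardless of platform line endings.

    str.splitlines() recognizes \r\n, \r and \n, so the same parser
    works for Windows, classic Mac and Unix exports.
    """
    return [line for line in raw_text.splitlines() if line.strip()]

windows = "v 0 0 0\r\nf 1 2 3\r\n"
old_mac = "v 0 0 0\rf 1 2 3\r"
unix    = "v 0 0 0\nf 1 2 3\n"
# All three yield the same two records.
```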
Also love the cavity map from normal map. excellent