So i was sitting around last night and thought of a good way to use the diffuse baking to replicate Max's material color baking ability from the highres, only without having to use any materials. This comes in real handy when texturing in photoshop, makes it much easier to know exactly where to paint and easier to make masks and whatnot. Theres a few steps to it.
1. Make a simple image with some colors.
2. UV the highres and lay it out on top of the simple colors.
3. Save out the highres, load up the texture in xNormal and hit render.
4. Repeat the phrase "Arise chicken, arise!" 3 times
5. ???
6. PROFIT!
Jogshy: It seems the color map that gets exported is much brighter than the source. It's not a big deal for what I'm doing here since it's easily tweaked, but I can imagine that for someone with a really detailed texture it would be bothersome. Here's an example:
Oh yeah, and it displays fine in the preview window, but when I open the file in Photoshop it's different.
Some further thoughts to consider would be to have multiple image slots for the highres, for things like bump, spec, etc. Also, the ability to load a bump/normal map on top of a highres mesh and have that rendered into the normal map for your low-res mesh would be great too. That way you could possibly get around having to work with insanely high-res meshes. The more functionality you can borrow from Max the better; this is really shaping up into being a bad-ass program. I didn't load Max once today. w00t
I noticed some brightness problems in the baked base texture too... I need to investigate a bit.
Are you using the OpenEXR or HDR format? The exposure control is a pain. Also be careful with Photoshop 7/CS and TGAs... there was a bug in the importer/exporter that premodulated the alpha.
[ QUOTE ]
to load a bump/normal map on top of a highres mesh and have that rendered into the normal map for your low-res mesh would be great too.
[/ QUOTE ]
So you could mix the highpoly UV-texture-mapped normal map and the generated lowpoly one? Sounds interesting!
We've talked about this before, but I'll say it for everyone. The washed-out colors effect is a gamma correction problem. OpenGL expects a gamma of 1.0, and Windows is usually 2.2. I'm not sure whose fault this actually is; it might be your setup or xNormal. Setting up a color correction workflow in Photoshop usually fixes these problems. I haven't had a chance to test 3.8 yet, so I'll see if I get those colors too.
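For anyone who wants the numbers behind that, here is a minimal Python sketch of the simple power-law gamma model (my own illustration, not xNormal's code): data stored in linear light (gamma 1.0) looks washed out when read as gamma 2.2, and the fix is just the inverse power.
[ CODE ]
# Toy power-law gamma model (illustration only, not xNormal's actual code).

def encode_gamma(value, gamma=2.2):
    """Linear-light channel value in [0, 1] -> gamma-encoded (display) space."""
    return value ** (1.0 / gamma)

def decode_gamma(value, gamma=2.2):
    """Gamma-encoded channel value in [0, 1] -> back to linear light."""
    return value ** gamma

# A linear mid-grey of 0.5 reads as roughly 0.73 once interpreted as gamma 2.2,
# which is the "too bright" look being described; converting the baked image to
# a gamma 1.0 profile in Photoshop effectively applies decode_gamma to undo it.
print(encode_gamma(0.5))    # ~0.729
print(decode_gamma(0.729))  # ~0.5
[/ CODE ]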
[ QUOTE ]
So you could mix the highpoly UV-texture-mapped normal map and the generated lowpoly one? Sounds interesting!
[/ QUOTE ]
He means applying a "fine detail" normal map onto the high poly model, which is taken into account when rendering the normal map for the low poly. I think you can already do this with a bump map, right? Could it work for a normal map, too?
[ QUOTE ]
He means applying a "fine detail" normal map onto the high poly model, which is taken into account when rendering the normal map for the low poly. I think you can already do this with a bump map, right? Could it work for a normal map, too?
[/ QUOTE ]
Oh, so if I understand well, what you want is to UV-texture-map a heightmap onto the highpoly model to add fine detail? So basically we have the base texture + that bump map applied to the highpoly mesh? Damn good idea, coming in the next version!
Exactly. And the only reason I say normal or bump for this is because some programs like Modo 2.01 can paint normals in 3D as well as bump, IIRC.
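To make the request concrete, here is a rough Python sketch of what "taking a fine-detail map into account" could look like at bake time. This is an assumption about the general approach, not how xNormal actually implements it: when the baker hits the highpoly surface, the tangent-space detail texel perturbs the geometric normal before it gets written into the lowpoly map.
[ CODE ]
# Sketch only: perturb the highpoly surface normal with a tangent-space detail
# normal before baking. The TBN frame and the texel lookup are assumed given.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def apply_detail_normal(n, t, b, detail_rgb):
    """n, t, b: unit normal/tangent/bitangent at the highpoly hit point.
    detail_rgb: detail-map texel in [0, 1] (a standard tangent-space normal map)."""
    dx, dy, dz = (2.0 * c - 1.0 for c in detail_rgb)   # unpack [0,1] -> [-1,1]
    # Rotate the tangent-space detail vector into the highpoly surface frame.
    world = tuple(dx * t[i] + dy * b[i] + dz * n[i] for i in range(3))
    return normalize(world)

# A "flat" texel (128, 128, 255)/255 leaves the surface normal untouched:
print(apply_detail_normal((0, 0, 1), (1, 0, 0), (0, 1, 0), (0.5, 0.5, 1.0)))
[/ CODE ]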
Indeed. So the "Fine Detail" button is for applying the detail to the low poly? Because you could just add in another input in that section and label one for the low poly and one for the high poly. It would be awesome if it read both bump and normal maps, too.
Oh, a brief tutorial on fixing the diffuse output (until it gets fixed internally):
1. In Photoshop, go to Edit > Assign Profile...
2. Choose an RGB profile with the gamma you have - sRGB 2.1 will be fine for Windows machines.
3. Go to Edit > Convert to Profile...
4. Your Source Profile should be sRGB or whatever. You want to set the Destination Profile to Custom RGB...
5. This will pop up a window where you can customize your color profile. I leave everything at default except for Gamma, which should be set to 1.00. I name this OpenGL since it uses 1.0 gamma. Click OK.
6. Under Conversion Options, Engine should be set to Adobe, and Intent to your preference - I usually use Relative Colorimetric. Black Point Compensation should be checked, Dithering is up to you.
7. If your colors haven't changed, go to View > Proof Setup > Windows RGB.
Keep up the good work, man. We're rooting for you here.
I've been getting a lot of virtual memory errors when trying to bake normals, occlusion and diffuse while setting the pixel dilation higher. The high poly is only about 300k, and my maps are 2048. I have 2GB of RAM (which is the max) and a 3GB page file. But I get errors when trying to use a higher dilation. First of all, I can only go up to 5, and I usually like 10-15 pixels. I know you can go into the plug-in manager and set the dilation filter higher, but should I be changing it there? If I set it higher, the baking usually gets to the 3rd pass of dilation and then fails with a virtual memory error. I have a couple questions.
Can the pixel dilation max be set higher in the normal map settings, please?
Is there anything that can be done to help the virtual memory problem? (On either end)
[ QUOTE ]
But I get errors when trying to use a higher dilation. First of all, I can only go up to 5, and I usually like 10-15 pixels. I know you can go into the plug-in manager and set the dilation filter higher, but should I be changing it there?
Can the pixel dilation max be set higher in the normal map settings, please?
Is there anything that can be done to help the virtual memory problem? (On either end)
Thanks. ^_^
[/ QUOTE ]
You shouldn't change the dilation pixels in the plugin manager. Just go to the Normal map tab, right-click over the dilation filter and "configure" it there. You can stack various filters (dilation, blur) by right-clicking over them and choosing "add new filter". "Configuring" the filters in the plugin manager only sets their default values (or even does nothing).
In theory the max dilation at the moment is 64 px. Unfortunately a bug makes it impossible to go past 5, as you mentioned.
About the virtual memory... Hahaha, I just discovered I'm not deallocating the memory in the dilation filter... so you're probably running out of RAM, hahah!
I'll put out a patch for this ASAP in v3.8.1. Sorry for the bugs!
Basically the patch fixes the virtual memory problem, bugs in the image filters (dilate, etc.) and some minor corrections.
Wow, thanks! You're really on the ball here. No more memory errors.
Have you ever thought about some sort of an upgrade system? It's just that you're so fast with new versions, it would seem appropriate to upgrade xNormal instead of installing a new version. Not a big deal, I just thought I would bring it up.
[ QUOTE ]
Have you ever thought about some sort of an upgrade system? It's just that you're so fast with new versions, it would seem appropriate to upgrade xNormal instead of installing a new version. Not a big deal, I just thought I would bring it up.
[/ QUOTE ]
Yep yep. I tried to set up ClickOnce, but it's very complicated to set up for Managed C++ (easy for C# or VB.NET).
I could develop my own upgrade system using an ASP.NET web service, but unfortunately my site doesn't have enough bandwidth or space for it. In fact I can't even host the xNormal files due to the excessive bandwidth and space, hehe.
And yes, I know downloading 31 MB for each new version (and I have to upload like 280 MB for each new version, which hurts too) is a pain, but I don't see any other alternative...
Also, some people sent me an email saying you need administrative privileges to install it, but I can't even do a copy/paste installation because it requires installing the Windows SxS VC++ DLLs and .NET 2.0, and touching the registry.
So that's why I opted for a Windows Installer deployment using public upload web sites (I also share xNormal on eDonkey/eMule/BitTorrent and other peer-to-peer networks).
I hope Visual Studio 2032 incorporates a decent ClickOnce for MC++ some day... and also that somebody donates me a T1 with 6 GB of bandwidth to experiment with web services, hahah. Also I'm not sure it's worth the effort, because when I touch one line in the SDK I have to recompile the .exe, the zillion plugins, etc... so you'd probably have to re-download almost everything but the examples anyway.
How much data transfer do you get a month? I've got 400 GB/month on my site and barely use any of it. I could probably set up a sub-domain for you, and if you're really using a crazy amount of bandwidth I'm sure someone on here could figure something out as far as hosting goes *cough*bearkub*cough*. For all you've done adding features and updating so frequently, I think it's only right for someone to help you out.
I noticed importing the mesh doesn't use both threads on my CPU. I don't have any idea if it's possible to use both cores, but it would be nice when loading very large meshes. I've exported some very dense meshes from Mudbox recently and it seems to take FOREVER to import (up to an hour) because it has to calculate vertex normals. Is this because Mudbox isn't exporting normals? Because I can take similarly sized meshes from Modo and they will load in a matter of minutes.
I also noticed that sometimes I get offset pixels on certain polygons and I can't really seem to figure out why; it usually seems to happen on mechanical-type objects that have details close together. Here's a screenshot:
[ QUOTE ]
I noticed importing the mesh doesn't use both threads on my CPU. I don't have any idea if it's possible to use both cores
[/ QUOTE ]
Using dual-core for importing wouldn't be a good idea, I think... you would gain CPU performance but lose it again because the hard disk head would have to jump from one portion of the file to another. Also I'd probably have to lock/sync the vertex lists too many times, so there would be too many thread deadlocks. I'd have to study it in depth to tell you for sure.
[ QUOTE ]
I've exported some very dense meshes from Mudbox recently and it seems to take FOREVER to import (up to an hour)
[/ QUOTE ]
Yes, I have to re-think the mesh model. I have a problem here... Almost all the 3D modeling programs use multiple vertex indices (like the OBJ format: one index for the position, another for the normals and another for the texture coords)... but D3D/OpenGL require only one vertex index... I could optimize the load time dramatically, but at the cost of very poor performance in the 3D viewer (I'd need to draw raw triangles instead of VBOs). Ideally, you could ask your programmer to make an xNormal importer for your engine format using an OpenGL-friendly layout like the SBM, or the one the 8monkeys team did. Then you'd notice a tremendous speed-up, I hope!
I am not sure whether it's better to export the normals or not with the highpoly model... On one hand you could skip the vertex normal calculation, but on the other hand it affects the hashing algorithms.
As advice, if you are not going to touch the highpoly model anymore, I recommend you load your original into the 3D viewer... and then save it as an SBM for future re-use. And try to use highpoly models with only vertex positions exported (no normals, no UVs) to speed up the process.
Can you tell me more specifically which stage is stalling so badly? The loading process? The vertex normal calculation? Tangent basis calculation? Removing illegal geometry? Does your highpoly model have normals and UVs? What's the size of the file and the # of polygons/vertices? And what kind of file are you using, please (obj, lwo, fbx, ...)?
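As a side note, the multiple-index problem Jogshy mentions is easy to see in a few lines of Python. The sketch below is mine, not xNormal's loader: it welds OBJ-style (position, UV, normal) index triples into the single index buffer that D3D/OpenGL want, using a hash map much like the "hashing algorithms" he refers to.
[ CODE ]
# Illustration only: OBJ faces index position/UV/normal separately, but a GPU
# vertex buffer needs one index per unique (position, UV, normal) combination.

def weld_obj_faces(positions, uvs, normals, faces):
    """faces: list of triangles, each corner a (pi, ti, ni) index triple.
    Returns one flat vertex list plus a single index per corner."""
    corner_to_index = {}          # (pi, ti, ni) -> unified vertex index
    vertices, indices = [], []
    for tri in faces:
        for corner in tri:
            if corner not in corner_to_index:
                pi, ti, ni = corner
                corner_to_index[corner] = len(vertices)
                vertices.append((positions[pi], uvs[ti], normals[ni]))
            indices.append(corner_to_index[corner])
    return vertices, indices

# Corners that share a position but differ in UV or normal end up duplicated
# in the unified buffer; that duplication is the price of GPU-friendly data.
[/ CODE ]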
[ QUOTE ]
I also noticed that sometimes I get offset pixels on certain polygons and I can't really seem to figure out why; it usually seems to happen on mechanical-type objects that have details close together.
[/ QUOTE ]
Can you post a wireframe over that render so we can tell whether they are on the triangle edges, please? Also enable "show lowpoly tangents"; if you see duplicated axes around the same spot then it's probably because the UVs are not well welded. You can also use the checked.tga texture in the examples directory to see the UV mapping more visually.
Are you sure your UVs are well welded and pixel-snapped?
Triangle edges can be problematic while rendering, though, because a point there is owned by more than one triangle and the math can be unstable.
[ QUOTE ]
Using dual-core for importing wouldn't be a good idea, I think... you would gain CPU performance but lose it again because the hard disk head would have to jump from one portion of the file to another. Also I'd probably have to lock/sync the vertex lists too many times, so there would be too many thread deadlocks. I'd have to study it in depth to tell you for sure.
[/ QUOTE ]
Yeah, I figured there was some reason you did it that way; I wanted to mention it just in case it was something you missed, but I didn't think it was likely.
[ QUOTE ]
Can you tell me more specifically which stage is stalling so badly? The loading process? The vertex normal calculation? Tangent basis calculation? Removing illegal geometry? Does your highpoly model have normals and UVs? What's the size of the file and the # of polygons/vertices? And what kind of file are you using, please (obj, lwo, fbx, ...)?
[/ QUOTE ]
The vertex normal calculation is what takes the longest with Mudbox files. I'm importing OBJ files, as always.
I'm positive that the UVs are welded, but I'm unsure about them being pixel-snapped; that's actually the first time I've heard of anything like that. But it would make sense, if a UV vert is in between pixels. Is that what you're suggesting? Would it be possible to snap the UVs to whatever resolution you're rendering out before it renders, or something?
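For reference, "pixel-snapped" here just means UV points sitting exactly on texel centres for the output resolution. A tiny Python sketch of that idea follows (an illustration of the concept, not an existing xNormal option):
[ CODE ]
# Snap a UV coordinate to the centre of the texel it falls in, for a given map size.
def snap_uv_to_texel_center(u, v, width, height):
    tx = min(int(u * width), width - 1)     # texel column the UV falls in
    ty = min(int(v * height), height - 1)   # texel row the UV falls in
    return (tx + 0.5) / width, (ty + 0.5) / height

print(snap_uv_to_texel_center(0.37219, 0.5, 2048, 2048))
[/ CODE ]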
Hmm, your object seems to have the UVs well welded... Can you post the same shot using the checked pattern too, please?
I notice those seams are on the triangle edges... Perhaps it's some kind of math problem then; I will look into it. Are you using the antialiasing option, btw? And another question... do Maya/XSI/3dsmax generate the normal map correctly for it?
Yes, pixel-snapping ensures the UVs are not in the middle of a pixel, but I think that's not the problem, though.
If you see any of the "Calculating vertex normals..." messages it's because Mudbox doesn't export vertex normals, so xNormal has to calculate them.
The vertex normal calculation has various stages, like:
Averaging vertex normals (crossA)...
Averaging vertex normals (faceN)...
Averaging vertex normals (finding positions)...
Averaging vertex normals (averaging)...
Averaging vertex normals (normalizing)...
Averaging vertex normals (re-deindexing mesh)... I bet this is where it stalls badly...
Do you see any of these stalling more than the others, please?
Try to export the vertex normals if possible, then? Or open the object exported from Mudbox in Maya/XSI/3dsmax, re-export it and test again? Also, don't export UVs for the highpoly mesh if you are not using the base-texture bake thingy... that slows the calculation down a lot.
I am trying to download Mudbox to test... but it requires me to apply for a beta or something?
PS: That screenshot reminds me I have to put in a slider bar to control the axis scale.
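For the curious, those "averaging" stages boil down to something like the classic area-weighted normal accumulation below. This is a generic Python sketch of the technique, not Jogshy's actual implementation or stage breakdown.
[ CODE ]
# Generic area-weighted vertex normal averaging (illustration only).
import math

def average_vertex_normals(positions, triangles):
    normals = [[0.0, 0.0, 0.0] for _ in positions]
    for i0, i1, i2 in triangles:
        a, b, c = positions[i0], positions[i1], positions[i2]
        e1 = [b[k] - a[k] for k in range(3)]
        e2 = [c[k] - a[k] for k in range(3)]
        # Unnormalized cross product: its length is proportional to the face
        # area, so larger faces automatically weigh more in the average.
        face_n = [e1[1] * e2[2] - e1[2] * e2[1],
                  e1[2] * e2[0] - e1[0] * e2[2],
                  e1[0] * e2[1] - e1[1] * e2[0]]
        for vi in (i0, i1, i2):
            for k in range(3):
                normals[vi][k] += face_n[k]
    for n in normals:                       # final "normalizing" pass
        length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2) or 1.0
        n[:] = [c / length for c in n]
    return normals
[/ CODE ]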
Yeah, I'm using anti-aliasing; at the max setting it still doesn't go away. The only real way I can get rid of it now, without editing it out by hand, is to render out a 4096x4096, and a texture that large has crashed xNormal every time I've tried with these models.
I'm pretty sure it was on (re-indexing mesh)
Oh yeah, also, if you didn't see my message before, I can give you free web hosting. Send me a PM and I'll set you up with a sub-domain and FTP account.
Thanks Earth for the tests. Your model seems to be correct, so I bet it really is a bug (not sure if it's mine or a Mudbox thingy, though). A last favour... could you send me the lowpoly model you are using for the test, please? To granthill76 [at] yahoo.com. Only the clock-spiral model thingy, please.
I will investigate all this in depth when I have a moment. The only thing that worries me is the seams at the edges... I'm changing the whole loading model and it loads much faster in the version I am developing at the moment... don't worry, load times will be reduced a lot.
Also, thanks for the web-host offer, but I think I need to develop the upgrade system first, before taking you up on it.
Well, anytime you want some space to play around with, just let me know.
I tried to load another high poly mesh to normal map and I still get a "Out of virtual RAM memory" error if I have a Dilation Filter enabled. I had three loaded with a setting of 5. I tried just enabling one of the filters, with the padding still set to 5, and I get the error. If I disable all 3 dilation filters, it works. However, that's not doable, because with no dilation, I get nasty gaps in the normal map everywhere.
[ QUOTE ]
I had three loaded with a setting of 5.
[/ QUOTE ]
Why not one with a value of 15?
What's your normal map size, btw? How much memory is the program trying to allocate? (See the xNormal_debugLog.txt.)
PS: I'm preparing 3.9.0, correcting tons of bugs and with super-improved mesh loading speed.
The normal map is 2048. The memory allocation is this:
Wed Aug 02 00:21:20 2006
starting import
Wed Aug 02 00:21:20 2006
Trying to allocate 35.193 Mb in virtual memory for highpoly vertices/normals/UVs...
Wed Aug 02 00:21:20 2006
Trying to allocate 245.761 Mb in physical memory for highpoly triangles...
Wed Aug 02 00:21:26 2006
High polygon vertices/normals/UVs memory: 35.193 Mb
Wed Aug 02 00:21:26 2006
High polygon triangles memory: 245.761 Mb
Wed Aug 02 00:21:26 2006
High polygon acceleration structures nodes memory: 32.744 Mb
I upped my page file to 3GB and I have 2GB of RAM.
Sweet, looking forward to 3.9.0! Keep up the good work.
torn, you are using 3.8.1, aren't you? I think I solved some normal map filter problems in that version.
In theory you can put in up to 64 dilation pixels. Is it not letting you enter a value greater than 5 in the numeric control, or?
With that memory usage and 2 GB installed it should not give you problems, so I bet it's some kind of bug?
[ QUOTE ]
Yeah, 3.8.1.32971. I'm not sure what's going on. I didn't uninstall 3.8.0 before installing 3.8.1. Could that have something to do with it?
[/ QUOTE ]
Hmm, could you try to uninstall the program completely, erase its directory, re-install 3.8.1 again from scratch and retry putting in 64 dilation pixels, please?
I downloaded from mirror #1, installed and that's my max:
I just loaded the acid example, went to the "Normal map" tab, right-clicked over the dilation image filter, hit "Configure", typed 900, and it auto-corrected to the max, 64.
Can anyone else try to set the dilation pixels in the normal map tab to 64, please? Does it reset to 5? It appears to work fine for me, hahah. Weird.
Oh my bad, I thought that's what the Pixel Padding was. Haha - so I guess I can leave that at 2 then and use the dilation. :P
I'm sorry, I definitely misunderstood. Some people use the terms interchangeably, so I thought that was the case here. Turns out, you were using them correctly.
[ QUOTE ]
Oh my bad, I thought that's what the Pixel Padding was. Haha - so I guess I can leave that at 2 then and use the dilation. :P
I'm sorry, I definitely misunderstood. Some people use the terms interchangeably, so I thought that was the case here. Turns out, you were using them correctly.
[/ QUOTE ]
Heheheh, it happens! (And that probably means I should touch up the docs to clarify it!)
Yep yep, I use "dilation" as an image filter to avoid normal map seams... Basically, pixels that were not rendered get filled with the average of the nearby pixels. That's what you wanted to touch and what you need.
The "pixel padding" just considers a few extra pixels near the triangle border UVs to avoid barycentric-coordinate floating-point problems during the triangle scanline process... That's only for math-crazy people and usually you won't touch it, nevah!
Both terms are indeed related, but "dilation" is an image filter, while "pixel padding" is just a hack to prevent math overflows/exceptions near triangle edges. In fact I think xNormal is floating-point-protected enough that you could probably use a "pixel padding" of ZERO without problems, but I decided to put that param in, in case you are getting strange pixels at triangle edges. Btw, Earth, I suspect the images you posted before suffer from this... Can you try to use a pixel padding of ZERO or 5 and see if the spiral thing is solved, please?
You can't set more than FIVE pixels in the "pixel padding". Five is a good maximum, I think... usually 1 is enough (edges usually have 1 pixel), 2 is the recommended value because 1 appeared too weak to me, and more than 2 is just to solve weird, weird problems I can barely imagine...
PS: 3.9.0 is almost finished, and I'm also considering a Linux port now... not sure it can be done because GTK+ and Eclipse/NetBeans/Anjuta with the C++ modules are driving me nuts.
And btw, the latest Catalyst 6.7 has a bug with FBOs and cubemaps that produces a nice crash... revert to 6.6 just in case.
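To make Jogshy's description of the dilation filter concrete, here is a plain-Python sketch of a single dilation pass (an illustration of the general technique, not the actual xNormal filter): unrendered pixels take the average of whichever of their eight neighbours were rendered, and repeating the pass grows the padding ring outward.
[ CODE ]
# One dilation pass over a sparse "baked" image (illustration only).
def dilate_once(pixels, rendered, width, height):
    """pixels: dict (x, y) -> (r, g, b); rendered: set of (x, y) actually baked."""
    new_pixels, new_rendered = dict(pixels), set(rendered)
    for y in range(height):
        for x in range(width):
            if (x, y) in rendered:
                continue
            neighbours = [pixels[(nx, ny)]
                          for nx in (x - 1, x, x + 1)
                          for ny in (y - 1, y, y + 1)
                          if (nx, ny) in rendered]
            if neighbours:
                count = len(neighbours)
                new_pixels[(x, y)] = tuple(sum(ch) / count for ch in zip(*neighbours))
                new_rendered.add((x, y))
    return new_pixels, new_rendered

# A "dilation pixels = 10" setting is conceptually ten applications of this pass.
[/ CODE ]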
This time I optimized the mesh loading speed a lot (it can reach 10x!), improved the cage system (now you can control the ray directions too) and solved some minor bugs.
As usual you can download it at
http://www.santyesprogramadorynografista.net/projects.aspx
The change list is at
http://www.santyesprogramadorynografista.net/archives/xNormal_changes.txt
Blog at
http://santyhammer.blogspot.com
Btw, there is a bug in the latest ATI Catalyst 6.7 drivers with FBOs and render-to-cubemap textures that causes texture corruption... If you are using these drivers I recommend you roll back to 6.6 or 6.5 to avoid problems with the xNormal 3D viewer. ATI has been informed and they're working to solve it...
For v3.10.0 I will add a Direct3D 9 graphics driver because I'm really tired of crappy OpenGL driver implementations. I also wanna experiment a bit with GPGPU raycasters.
PS: I enabled two download mirrors... tomorrow I will add more if I have time.
You know, I just noticed that the pixel dilation only works on the normal map; it should be applied to all the maps that get rendered.
Well, I tried with the xN example and these are the results:
Dilate 2 pixels:
Dilate 64 pixels:
What do you mean by "all maps"? Can you post an image showing the problem, please?
PS: Wooooooot! I found a super-nice program for making Flash tutorials, called Wink!
Hmmm, actually never mind. I just went to look at what I thought I had a problem with and it looks fine; I think I must have had an earlier texture where I rendered the ambient and normals with different settings. Sorry for the confusion.
Hey, btw, what do you think about the new mesh redesign for faster loading? Do your meshes load faster now? I had to touch a few internal structures... they occupy a bit more memory now... I hope that won't wipe out all your RAM, heheh.
I also included the possibility to move the cage points to control the ray direction, like I saw in Mr.Poopping's tutorials. Have you tried this yet? It was very useful for me to better tweak the xN example at hard edges (I know it still has a few seams, but it just needs some retouching).
Now I'm implementing the Direct3D driver because I'm tired of crappy OpenGL drivers... I'm also exploring "Wink" at http://www.debugmode.com/wink/ to make some tutorials... Hey, it's much better than CamStudio!
I haven't gotten around to trying the newest version yet; I just got done running a bunch of stuff through the last version and I'm on to texturing now. When I'm back to rendering some larger files again I will definitely let you know. One thing I did notice, and this is still with the old version, is that there did still seem to be some issues with very large meshes... around 6M triangles with large texture resolutions like 2048x2048 it just can't seem to handle (usually only a problem when trying to do ambient).
Sorry for my bad English; I just joined this forum that I read every day, such a shame.
I read this thread carefully, but I just have no clue and have been experimenting with ambient occlusion. Since Maya is lacking in AO baking, I tried xNormal for the job. I only have a low-res model which I want to bake. As far as I understand it, I also need a highres model, which wouldn't make much sense for my case; I want to bake AO for different low-poly houses.
At the moment I'm testing with about 450 rays; it's still calculating. I just imported the house as FBX with the parts I need mapped. Windows, for example, are still there but with no UV information. If it renders for half a day it might be OK, because here at work there are often PCs free for such tasks. And there is a batch render option, I read ^^
Thanks for this FREE tool. It seems to kick ass!
[ QUOTE ]
I want to bake AO for different low-poly houses.
At the moment I'm testing with about 450 rays; it's still calculating. I just imported the house as FBX with the parts I need mapped. Windows, for example, are still there but with no UV information.
[/ QUOTE ]
xNormal was initially designed to project the highpoly AO onto the lowpoly model. To calculate AO for lowpoly models without highpoly ones you will probably need to use a trick... set the lowpoly models as the highpoly models, then set the lowpoly model as usual. As an alternative, you can use other programs like AOGen for that task.
450 rays for a test is too much, I think. Use 24-50 rays with a 512x512 map for testing. 450 is high quality; 1000 is almost film quality.
Also remember the lowpoly models need to include texture coordinates (UVs), and the AO will be baked using those UVs (xNormal can't generate the UVs for you at the moment).
Hope this helps.
I get it, thanks jogshy. xNormal rocks!
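To see why the ray count trades quality for time the way Jogshy describes, here is a toy Monte-Carlo AO estimator in Python. It is only a sketch of the general idea; the ray_hits_geometry callback is a hypothetical stand-in for a real scene raycast, not an xNormal API.
[ CODE ]
# Toy ambient-occlusion estimate per texel (illustration of the idea only).
import math, random

def sample_hemisphere(normal):
    # Crude rejection sampling of a unit direction in the hemisphere around `normal`.
    while True:
        d = [random.uniform(-1.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in d))
        if 0.0 < length <= 1.0:
            d = [c / length for c in d]
            if sum(d[i] * normal[i] for i in range(3)) > 0.0:
                return d

def ambient_occlusion(point, normal, ray_hits_geometry, num_rays=64):
    """ray_hits_geometry(origin, direction) -> bool is a hypothetical stand-in
    for the real raycast. AO is the fraction of rays that escape; more rays means
    less noise, which is why 24-50 is fine for tests and 450+ for final bakes."""
    hits = sum(1 for _ in range(num_rays)
               if ray_hits_geometry(point, sample_hemisphere(normal)))
    return 1.0 - hits / float(num_rays)
[/ CODE ]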
Mesh loading is much improved, sooo much faster. Though I haven't tested it on really dense meshes (only 1M tris), I'm sure it will help a lot. Now I don't have to spend the time importing and exporting from Modo.
I'm getting a virtual memory error if I try to render 4096x4096 (no AA), telling me I'm out of disk space even though I have 1.6 GB free on C:\ and 7 GB free on \ where my page file is located. I'm trying to do 4096x4096 because I'm still getting those inaccuracy issues where certain triangles will be offset a pixel or two, and I think downsampling from 4096 to 2048 will fix it.
[ QUOTE ]
I'm getting a virtual memory error if I try to render 4096x4096 (no AA), telling me I'm out of disk space even though I have 1.6 GB free on C:\
[/ QUOTE ]
Each pixel in the "maps" occupies:
nx,ny,nz -> 3 floats ->12 bytes
ao -> 1 float -> 4 bytes
displacement -> 1 float -> 4 bytes
r, g, b, a for base-texture projection -> 4 floats -> 16 bytes
total 36 bytes per pixel
4096x4096x36 = 603Mb
Since some operations like dilation/blur require a "copy" of the map, you need to double that... so around 1.2 GB just for the "maps". If you have only 1.7 GB free on C:\, the system volume, that probably won't be enough, because you must add the highpoly memory, etc... I'm not sure how Windows manages the swap file, but I suspect it just uses the one located on the Windows drive. Perhaps you should try deleting all the swap files except the \ one and setting its size to a lot of GB...
I'm still trying to optimize the memory, but it's hard without hurting the rendering speed and precision a lot. 64 bits will probably solve this, if the RAM vendors can drop the DIMM prices a bit some day, hehe. Also remember to use the "memory-conservative" raycaster instead of the "fast" one when you are managing big meshes.
Yes, I know, xNormal devours your RAM! I wonder what the limit would be with 4 GB DIMMs in here.
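Redoing that arithmetic in a couple of lines, just restating Jogshy's own per-pixel figures (decimal MB/GB):
[ CODE ]
# 36 bytes per pixel: normal 12 + AO 4 + displacement 4 + RGBA base texture 16.
bytes_per_pixel = 3 * 4 + 1 * 4 + 1 * 4 + 4 * 4          # = 36
for size in (2048, 4096):
    base = size * size * bytes_per_pixel
    print(size, "->", base // 1_000_000, "MB for the maps,",
          round(2 * base / 1e9, 2), "GB once dilation/blur need a working copy")
# 4096 -> 603 MB for the maps, ~1.21 GB with the extra copy, before adding
# the highpoly mesh and acceleration structures on top.
[/ CODE ]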
I have the page file turned off completely on my C:\ because whoever set up the PC I'm using here at work decided a 10-gig partition would be enough for the OS... So my page file is on \ and set to like 4-5 gigs. Perhaps Windows is being dumb and still swapping to C:\?
EQ, if those "drives" are on the same physical hardware and are just two partitions, there is ZERO, NADA, ZILCH benefit to moving the page file. Not even for imaging, tech-support or engineering purposes can I think of that benefiting any environment.
Just FYI, tell your admins to stop wasting time and complicating crap. And I'm available as a consultant :P
Jogshy, I'm curious if you would be interested in paid programming work in the future. Nothing major, just applying new functions or altering some items in TSE when it's finalized for a prototype.
Down: I moved it there because I do not have enough physical space available on the tiny OS partition, not so I could get some sort of speed improvement. I'm not sure if that's what you were getting at, though? Are you trying to say that if it's on the same partition it doesn't matter where it is, it's still going to use C:\??
[ QUOTE ]
Are you trying to say that if it's on the same partition it doesn't matter where it is, it's still going to use C:\??
[/ QUOTE ]
I'm not sure how Windows manages virtual memory, but... I think there should be only ONE swap file. You can have multiple ones but only one will be used. If you don't have enough space on C:\ then set it on \ and erase all the other swap files, to ensure no other swap file is used. Set the \ swap file to 5-7 GB and make sure you have enough physical RAM installed to perform the swapping (like 1.5-2 GB).
And remember... only AVAILABLE physical RAM counts... you can install 190293209 GB, but if you have only 512 MB free when Windows starts, you will have problems.
If you really need to manage BIG BIG mesh assets (>1M polys) I suggest using a 64-bit OS with 2 GB+ of RAM. 32 bits is absolutely not enough for those meshes (that's why the latest Maya, 3dsmax, etc. started to release x64 versions too). 32-bit OSes will start to fall away soon because 2 GB is too little for highpoly mesh management (and it isn't really 2 GB, because Windows will own like 600 MB at start).
Another option is to use Windows Server 32-bit and set up a "cluster" using shared RAM (yes, I know Windows Server is expensive, but enterprises usually already use it to manage their "domain").
Oh, and one other note... usually graphics cards don't let you draw meshes with more than 1M polygons.
Hope it helps.
Hey Jogshy, the bake-highpoly-texture-to-lowpoly option is nice!
I was wondering if it would be possible to add an option to auto-assign random colors to groups/elements? Also an option to go into the 3D view, select highpoly elements and pick/change their colors.
Thanks for your hard work!