
xNormal - MASTER THREAD


Replies

  • jogshy
    jogshy polycounter lvl 17
    I just uploaded the 3.15.1 beta 1 in case anybody wants to test it a bit.

    With respect to the cavity options, I added numerical controls instead of the track bars, and also added an option to trace more samples. To measure a good cavity distance, please refer to the xNormal documentation... and perhaps the default contrast I set is too high (2.0... better to use 1.25... I'll change it for the final version, though).

    thx
  • vik
    vik polycounter lvl 13
    thank you kind sir :poly002:
  • Vylaroth
    I'm having some problems with the newest versions of xNormal: the normals are rendered as if the hi-poly model had been set to hard edges instead of smooth. I've imported the models as .objs as I did before, so it shouldn't be because of my modeling app.

    problemum7.jpg

    Also, the newest version caused a lot of artifacts in the normal map. This hard-edge baking problem exists in 3.14.2 too (I've only tried 3.12.0, 3.14.2 and 3.15.1). I've played around with settings (such as smooth normals etc.) to no avail. Do you have any idea what's causing this?
  • pior
    pior grand marshal polycounter
    Hi Jog!

    Thanks a ton for the new wireframe baking option! It's one of those little things that help a lot.
    Also, it would be nice to see the return of multiple bakes at once, even with something very simple when it comes to naming conventions. Like, the user gives a generic name to the bake and the maps get named nameofthebake_normal, nameofthebake_ambient and so on. But I am sure you have something similar planned for the near future.

    However, I couldn't get the cavity maps to work - the result definitely shows parts of the wireframe of the highres mesh, like this:
    http://img530.imageshack.us/img530/2984/handfuxjl2.jpg

    I understand that most of the time the meshes to be baked are so dense (sculpted) that it gives the impression of a decent cavity map, but for mechanical objects, for instance, highres sources might be much lower poly - hence giving weird results with the current calculation technique you use.

    Hope this helps!

    [edit]
    Also - I usually don't really use imported cages, as I'd rather spend the time finding the right distance values. However, I just discovered that there is a cage slider in the 3d viewer. I would assume that as soon as I 'expand' this cage in the 3d view, it gets used as the cage reference for the next bake; however, when I launched a new bake after setting this up I still got the same baking errors due to too long a fetch distance. It felt like the bake didn't take the user-defined interactive cage into account.
    Am I doing something wrong?
    (I tried 'save mesh' as Ogre when messing with the cage in the 3D viewer, but when I loaded back the saved object it was the same thickness as my original lowpoly... I must be missing the 'save cage' button somewhere!)
  • jogshy
    jogshy polycounter lvl 17
    Vylaroth wrote: »
    I'm having some problems with the few newest versions of xNormal: the normals are rendered as if the hi-poly model had been set to hard edges instead of smooth.

    problemum7.jpg

    Also, the newest version caused a lot of artifacts in the normal map. This hard-edge baking problem exists in 3.14.2 too (I've only tried 3.12.0, 3.14.2 and 3.15.1). I've played around with settings (such as smooth normals etc.) to no avail. Do you have any idea what's causing this?
    Well, assuming it's not a bug (perhaps it is), you can now specify three "smooth normals" modes:

    - "harden normals". It's an experimental mode to avoid the need for beveling objects with hard edges.
    - "average normals". Equivalent to the old "smooth normals" one.
    - "use exported normals". Use the normals the user set.

    If you use 3dsmax, remember the old 3dsmax2obj plugin which comes with 3dsmax is very buggy (that's why they changed to g::Obj in 3dsmax 2009)... so better to use another OBJ exporter or another format (or, even better, the SBM mesh exporter).

    On the other hand, 3.14.0 introduced adaptive sampling, like Mental Ray does. You need to set the min/max samples and the AA threshold correctly. To get more quality, for example, use minAA=1, maxAA=4, threshold=0.05. To disable it, just set the threshold to 0.0. Use the diagnostics option to see how the samples are taken.
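
    Roughly, those three settings interact per pixel like this (a very simplified Python sketch, not the actual renderer code; the exact contrast test here is just an assumption):

        # Hypothetical adaptive AA loop: cast a minimum number of rays per pixel,
        # then keep adding rays (up to the maximum) while the samples still
        # disagree by more than the threshold.
        def shade_pixel(sample_ray, min_aa, max_aa, threshold):
            values = [sample_ray() for _ in range(max(min_aa, 2))]
            while len(values) < max_aa and (max(values) - min(values)) > threshold:
                values.append(sample_ray())
            return sum(values) / len(values)  # pixel value = average of the samples

        # e.g. minAA=1, maxAA=4, threshold=0.05 -> extra rays only where samples disagree
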
    However I just discovered that there is a cage slider in the 3d viewer. I would assume that as soon as I 'expand' this cage in the 3d view, this gets used as cage reference for the next bake ; however when I launched a new bake after setting this up I still got the same baking errors due to too long a fetch distance. It felt like the bake didnt take the user-defined interactive cage into account.
    Am I doing something wrong?
    (I tried 'save mesh' as Ogre when messing with the cage in the 3D viewer, but when I loaded back the saved object it was the same thickness as my original lowpoly... I must be missing the 'save cage' button somewhere!)
    You need to check the "use cage" option in the corresponding lowpoly mesh slot. If you don't, the cage is ignored (the data won't be loaded)... that's probably why you see the cage reset. Btw, better to use the SBM mesh format when saving the cage (the "save meshes" button). Currently, the Ogre mesh exporter cannot save the cage data... only the OVB and the SBM can (but the OVB is a text format using XML DOM, so it will be very slow, especially when dealing with big meshes)... so better to use the SBM one (which, btw, you can export/import in 3dsmax).
  • Vylaroth
    Thanks for the quick reply Jogshy. I'll play around with the settings. ;)

    I got to tell you that I just love xNormal. Keep up the good work with it.
  • jogshy
    jogshy polycounter lvl 17
    Uploaded the final 3.15.1. Two critical bugs were solved.
  • jogshy
    jogshy polycounter lvl 17
    Uploaded the 3.15.2:

    - Added an option to paint the UV seams in the "render wireframe and ray fails" map type.

    - Solved some bugs (incorrect mesh clear, 1/1 diagnostics error, etc.)
  • Thewiruz
    Thewiruz polycounter lvl 16
    Hello, I have a problem. I can't render out a normal map or AO map from 3ds Max because the highpoly mesh is too dense.

    Now I found xNormal, but I get stretching in the final render. In the middle of the render it looks perfect, so why do I get this error? I think I have tried everything by now.

    This is in the middle of the render which looks perfect!
    xnormalproblemve3.th.jpg


    This is the final render, and you can see the stretched parts:
    xnormalproblem1lz5.th.jpg

    *Edit* I found out that it was the padding, which was set to 16. I now use 2 and it looks okay, but I get the same thing as this guy: strange waves or stretches inside the mesh.
    http://img86.imageshack.us/img86/6374/aoproblemcy6.jpg
  • jogshy
    jogshy polycounter lvl 17
    Thewiruz wrote: »
    Hello, I have a problem.
    *Edit* I found out that it was the padding, which was set to 16. I now use 2 and it looks okay, but I get the same thing as this guy: strange waves or stretches inside the mesh.
    http://img86.imageshack.us/img86/6374/aoproblemcy6.jpg
    Well, it's hard to suggest a solution when you hide the interesting options behind the preview window in the screenshots :poly120:

    Better to use padding 16... 2 is too low for a 1024x1024 image. With 16 you will avoid mipmapping artifacts even for a 2048x2048 map... Don't worry about the pixel "flooding" unless you need a very, very compact UV layout (more dilation = more wasted space... but fewer seam problems).
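
    A quick back-of-the-envelope check on why a small padding bites you once mipmapping kicks in (just my own arithmetic, not something xNormal computes):

        import math

        # Each mip level halves the resolution, so an edge padding of p pixels
        # shrinks to p/2^k after k levels; it survives while p/2^k >= 1 pixel.
        def mips_survived(padding_px):
            return int(math.log2(padding_px)) if padding_px >= 1 else 0

        print(mips_survived(16))  # 4 mip levels of safety
        print(mips_survived(2))   # only 1 level before neighbouring islands start bleeding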

    To avoid those waves, do this:

    1. Increase your AO # of rays to 32 or more. More = more quality, but slower.

    2. Your AA threshold is too high. 0.100 is for fast tests; usually you need 0.05. This works like Mental Ray's adaptive sampling.

    3. Increase the minAA to 1 or 4. The 1/4 setting is for fast tests and produces too much "interpolation" between pixels.

    4. Make sure you disable the adaptive AO. This feature stops firing occlusion rays if, after casting the "adaptive interval" number of rays, the occlusion does not vary by more than the "ao adaptive threshold". It's used to speed up the AO... but it can introduce artifacts (especially with fewer than 32 rays in the adaptive interval); see the sketch below.
    To disable adaptive AO, just set the "adaptive interval" to the same number as the AO "rays".
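
    Here's a tiny sketch of what that adaptive AO early-out does (very simplified Python, not the real renderer code), so it's clearer why setting the interval equal to the total rays disables it:

        import random

        # Hypothetical adaptive AO: fire rays in batches of 'interval'; if the
        # running occlusion changes by less than 'threshold' between batches,
        # stop early. With interval == total rays there is only one batch, so
        # the early-out can never trigger (i.e. adaptive AO is disabled).
        def ambient_occlusion(total_rays, interval, threshold):
            hits, fired, previous = 0, 0, None
            while fired < total_rays:
                for _ in range(min(interval, total_rays - fired)):
                    hits += random.random() < 0.5  # stand-in for a real ray hit test
                    fired += 1
                occlusion = hits / fired
                if previous is not None and abs(occlusion - previous) < threshold:
                    break  # converged early -> faster, but possible banding/artifacts
                previous = occlusion
            return occlusion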

    So... use the settings I put in this screenshot and the AO will be rendered well:

    xnormalaosettingsel7.png
  • EarthQuake
    johny has been having some problems, and he's too big of a baby to get real internet, so i'm posting this for him.
    xnormal_artifacts.jpg
  • jogshy
    jogshy polycounter lvl 17
    EarthQuake wrote: »
    johny has been having some problems, and he's too big of a baby to get real internet, so i'm posting this for him.

    Haha!
    EarthQuake wrote: »
    xnormal_artifacts.jpg

    I think that's a problem with the AO bias. Try increasing it.
    And make sure you aren't using adaptive AO (set the AO interval to the # of rays to disable it).
    Have you tried scaling up the mesh a bit, btw? And make sure the cage is not very close to the lowpoly model.

    Well, I would need the model and the settings to debug it, though.
  • Joao Sapiro
    Joao Sapiro sublime tool
    hey man, the thing is the settings that worked perfectly in 3.15.0 get all screwy in 3.15.2, and it also happens with other models :( so I'm sure there must be some part of the code saying "throw random pixelization" :)

    I love the cavity map tool now, it's perfect.
  • jogshy
    jogshy polycounter lvl 17
    Johny wrote: »
    hey man, the thing is the settings that worked perfect in 3.15.0 get all screwy on 3.15.2 , it also happens with other models :( so im sure must be some part of code saying "throw random pixelization"
    Careful with 3.15.0... it used a defective normals compression algorithm... If you save an SBM mesh using 3.15.0 and then use those meshes in another xNormal version, you will get nasty artifacts... Try re-exporting everything using 3.15.2... and if you can send me the model by email, I could debug it, thx.
  • bugo
    bugo polycounter lvl 17
    hey Jogshy, I'm starting to use xNormal a lot right now, and I'm loving it. But I may have found a bug... it's kind of annoying, as I don't know if it's a bug or not. It's showing these quads or noise points on my SBM AO map. My bias stands at 0.1. Let me know if that's right. My mesh is almost 4 million tris. I believe my system can run xNormal with all its features (AM2 X2 with an 8800GTS 640MB and 4GB 800MHz).

    wip14le6.jpg
  • jogshy
    jogshy polycounter lvl 17
    Let me know if that's right. My mesh is almost 4 million tris. I believe my system can run xNormal with all its features (AM2 X2 with an 8800GTS 640MB and 4GB 800MHz).
    4GB should be more than enough for the software renderer. The 8800GTS with 640MB should be more than enough too... so you can rule out a hardware problem (if you use the latest graphics drivers).
    bugo wrote: »
    It's showing these quads or noise points on my SBM AO map. My bias stands at 0.1.
    Did you render the AO map using the Simple GPU tool or using the software path? If you can post some screenshots showing your settings and another with the cage, I can tell you more :p
  • jogshy
    jogshy polycounter lvl 17
    Btw, I just released the 3.15.3 beta 1 if anybody wants to test it.
    It basically solves some bugs and adds an experimental renderer based on CUDA 2.0.
  • bugo
    bugo polycounter lvl 17
    I used Simple GPU tool. Settings:

    xnormalproblemuy3.jpg
  • jogshy
    jogshy polycounter lvl 17
    bugo wrote: »
    I used Simple GPU tool. Settings:
    Your settings are fine (although I wouldn't go past 0.01 for the bias... 0.1 is a lot!).

    I see you render the AO per-vertex... how do you then render the image above? By rendering it in xNormal and UNchecking the "ignoring per-vertex AO" option on the highpoly mesh? If so, make sure you set the minAA/maxAA/AA threshold well (for example, minAA=1, maxAA=1, threshold=0)... or pixels will be interpolated too much... and make sure you uncheck the "limit ray distance" option... if not, you need to set up a good cage or precise uniform ray distances.

    ps: How ugly is the Win2k/WinXP/Vista without themes omg! :D
  • bugo
    bugo polycounter lvl 17
    Thanks for your answer jogshy, I will give a try again and see if it works.
  • Racer445
    Racer445 polycounter lvl 12
    Johny wrote: »
    hey man, the thing is the settings that worked perfect in 3.15.0 get all screwy on 3.15.2 , it also happens with other models :( so im sure must be some part of code saying "throw random pixelization" :)

    I'm having the same problem, except it's happening on every version I have used. Everything comes out pixelated and blurry, as if it's been resized many times.

    I'm also having problems with some stuff not being baked. It might be user error, though.
  • jogshy
    jogshy polycounter lvl 17
    Racer445 wrote: »
    I'm having the same problem, except it's happening on every version I have used. Everything comes out pixelated and blurry, as if it's been resized many times.
    Please don't use 3.15.0... it had a very bad bug with the normal compression. If you saved any model using it, please re-export the model using 3.15.2... then the render should be fine.

    I could use some models to debug with, though, just in case the problem is not solved by re-exporting. Currently I cannot reproduce that pixelization with the xNormal examples... so if you have any model causing those artifacts, please send it to me by email (7zip compresses .OBJ files very well)... I'll treat it confidentially.
  • Joao Sapiro
    Joao Sapiro sublime tool
    hey jogshy, I'll see what I can do to send you the models I'm getting trouble with on the normals :)
  • ironbearxl
    ironbearxl polycounter lvl 18
    Hi Jogshy, is it still possible to bake random mat id colors?
  • jogshy
    jogshy polycounter lvl 17
    ironbearxl wrote: »
    Hi Jogshy, is it still possible to bake random mat id colors?
    It seems you're a very good tester :poly136: Oops! For some reason I am incorrectly hiding the base tex baking options, hehe! I'm gonna solve it ASAP. Meanwhile, try an older version... like 3.12 or 3.13.

    Hehehe! Sorry... I don't know when I changed that :poly004:
  • ironbearxl
    ironbearxl polycounter lvl 18
    Thaaaaaankssss jogshy :D
  • Unleashed
    Unleashed polycounter lvl 19
    Is the 8800GT not yet supported by CUDA? I tried to render using it, but got an error saying no CUDA device could be found (I installed the drivers).
  • bugo
    bugo polycounter lvl 17
    You need to install the beta drivers from NVIDIA.
  • jogshy
    jogshy polycounter lvl 17
    Unleashed wrote: »
    is the 8800gt not yet supported in CUDA? I tried to render using it, but got an error saying no cuda device could be found(i installed the drivers)
    You need the 174.55 beta drivers; the official ones don't support CUDA 2 yet.

    Btw, I'm trying to reproduce the map "pixelization" bug but I cannot find it. Do you know when it happens, please?
  • Joao Sapiro
    Joao Sapiro sublime tool
    hey jogshy, another thing: the height map doesn't work, it only gives me a 2-color map of white and black, full of holes, and there doesn't seem to be any way to edit settings etc :( And the normal map pixelization is happening with every model, random pixels get added and kinda fudge it. It's kinda frustrating since your cavity tool is really improved :(
  • bugo
    bugo polycounter lvl 17
    jogshy wrote: »
    Btw, i'm trying to reproduce the map "pixelization" bug but I cannot find it. Do you know when it happens, pls?

    Do you know if the high model needs a UV map so it doesn't produce that pixelization?

    Thanks
  • jogshy
    jogshy polycounter lvl 17
    Johny wrote: »
    hey jogshy, another thing: the height map doesn't work, it only gives me a 2-color map of white and black, full of holes, and there doesn't seem to be any way to edit settings
    Notice the height is normalized using the maximum cage or uniform ray distance. You need to place the cage as close to the highpoly model as you can (see the smiley example).
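
    To make that concrete, here's a toy illustration (my own simplified sketch, not xNormal internals) of why a loose cage flattens the height map: the hit distance is normalized against the maximum ray distance, so a big search range leaves most of the grayscale range unused.

        # Toy normalization: gray value in 0..1 for a given ray hit distance.
        def height_gray(hit_dist, max_ray_dist):
            return hit_dist / max_ray_dist

        print(height_gray(0.5, 1.0))   # tight cage: the detail uses most of the range
        print(height_gray(0.5, 10.0))  # cage 10x too far: the same detail is squashed to 0.05
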
    the normal map pixelization is happening with every model, random pixels get added and kinda fudge it. It's kinda frustrating since your cavity tool is really improved :(

    Have you assigned a heightmap to the fine detail? Are you using the "jitter" option in the AO/cavity? Do the xNormal examples render OK?
    Well, I tried with several models here and cannot reproduce it. I would need some screenshots/models to debug it :poly006:
    Do you know if the high model needs a UV map so it doesn't produce that pixelization?

    The highpoly model only needs UVs if you are baking its texture into the lowpoly model. In that case, the highpoly model should be tessellated a bit or you will get a blocky result... but for normal maps, AO maps, etc., you don't need UVs or extra tessellation (except beveling on the hard corners).

    It's important that you avoid the 3.15.0 version... it had a very bad and evil bug compressing the normals... so the tangent basis was completely messed up... the problem was solved in 3.15.2.
    Also, make sure you are using the "Default tangent basis calculator" in the plugin manager and not the D3DX/NVMeshMender ones (which are still very experimental).

    Btw, Johny... what kind of mesh are you working with? .mesh? .OBJ? Perhaps Jeff updated the Stooge mesh structures and forgot to notify me... so my Stooge mesh importer is not reading the correct data...
  • Unleashed
    Unleashed polycounter lvl 19
    I did install the beta drivers and the tools, but I'll give it another go and see what happens.
  • jogshy
    jogshy polycounter lvl 17
    Unleashed wrote: »
    I did install the beta drivers, and the tools, but Ill give it another go and see what happens
    The toolkit and the SDK aren't really needed... although the SDK contains some interesting CUDA examples, hehe (I especially love the nbody, particles and fluids ones). Only the 174.55 beta drivers are needed... and notice the good WHQL ones don't support CUDA 2 yet.

    Btw, to compare the speed fairly you need to disable the adaptive AA on the CPU renderer... because it's still not implemented in the CUDA renderer... so use threshold=0.0000 and minAA=1... and disable the tile notification (which is CPU-bound). I've only tested with my cheap 8500GT... and got very good results without optimizing it... I figure a more decent card like the 8800GT will be much faster... and expect serious speed improvements soon.
    Btw, it can currently render only normal maps in the beta 1 version...

    I need to solve the damn random pixelization bug before anything else :D
  • Unleashed
    Unleashed polycounter lvl 19
    Ok I did more fiddling. For reference my video card is an EVGA 8800GT 512MB.

    The reason it wasn't working was probably that the source file was much too big for the card to handle; it would error out on Generating Nodes. The biggest file I could manage was 51k polys, with 420MB (setting 450 would crash xNormal) set in the config.

    Also, in the config, unless I disable the first device (0) (two devices, 0 & 1, show in the config; not sure if this is an error or not), the render will go extremely slowly for some reason; the CPU won't be maxed but the whole computer will crawl.

    8800gt speeds(default)
    --
    2048x2048
    bucket size: 128
    speed: 26.2s

    cpu x3210 (3.1ghz)
    --
    2048x2048
    bucket size: 128
    speed: 21.7s

    Seems quite promising, except for the fact that I guess it's limited by memory.
  • Joao Sapiro
    Joao Sapiro sublime tool
    @ jogshy :

    the highpoly is always in .obj, no jitter etc. on the normal map. The settings that worked OK in the old version make pixels appear in the new one; nothing changed, and I don't think Stooge changed, I still use the same version xNormal was prepared for :) For the AO I use jitter, but I always have and it doesn't give me problems.
  • jogshy
    jogshy polycounter lvl 17
    Unleashed wrote: »
    The reason it wasn't working was probably that the source file was much too big for the card to handle, it would error out on Generating Nodes. The biggest file I could manage was 51k polys, with 420MB(setting 450 would crash xnormal) set in the config.
    The CUDA memory required needs to be split into two components:

    1. Node pool. It's used for the spatial acceleration structure. Usually 16/32MB should be enough for a 4M-poly mesh.

    2. Triangles. The mesh itself. Each triangle occupies 48 bytes. A 4M-poly mesh will occupy around 192MB.

    If you set 450MB for the nodes (that's brutal, btw...) the triangles won't fit in memory. Try 32MB as the max for the nodes (or 64MB as an absolute maximum if the program complains). There's a quick sanity check on those numbers below.

    And yep, I need to optimize the usage and speed a bit.
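
    Quick sanity check on those figures (just back-of-the-envelope arithmetic using the 48 bytes/triangle number above, not anything xNormal reports):

        BYTES_PER_TRIANGLE = 48

        def triangle_memory_mb(tri_count):
            return tri_count * BYTES_PER_TRIANGLE / 1e6  # decimal megabytes

        print(triangle_memory_mb(4_000_000))  # ~192 MB for a 4M-tri mesh
        # On a 512MB card the node pool and the triangles share that budget,
        # so a 450MB node pool leaves no room for the mesh itself.
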
    Also in the config, unless I disable the first device(0) (two devices, 0 & 1 show in the config not sure if this is an error or not), the render will go extremely slow for some reason, cpu wont be maxed but the whole computer will crawl.
    Do you have 2 cards plugged in (2x 8800GT?)? Notice that to use multi-GPU with CUDA you need to disable SLI mode. With multi-GPU you can accelerate the render a bit... but if the plugged-in cards are too different (for example, an 8500GT and a 9800GX2), it's possible that the "weakest link in the chain" could slow down the render.

    On the other hand, to use CUDA effectively you need to disable the cards connected to a monitor (unless you're on Vista... because there is a problem and the desktop needs to be extended to all the cards you want to use). If not, a 5-second watchdog will be fired (if xNormal takes more than 5 seconds to render a tile then CUDA will abort the process)... and the UI will be unresponsive because the card needs to paint the UI on top of the heavy GPGPU xNormal computations... If you have seen any Tesla card, that's why they don't have display outputs... GeForces work too, but you need to unplug them from the monitor and detach them from any desktop.


    prod_shot_initial_tesla_c1060.jpg

    8800gt speeds(default)
    Have you disabled the tile notification? The preview window is CPU-bound (I need to map/write/unmap/rescale a GDI+ bitmap in RAM)... so if you don't disable it the times will be similar. Disable the tile notification for both the CPU and GPU tests... it will be faster and the timing more accurate.
    the highpoly is always in .obj , no jitter etc on normal, the settings that work ok in the new version make pixels appear in the new, nothing changed, and i dont think stooge changed, i still use the same xnormal was prepared to :) the Ao i use jitter, but always have and doesnt give me problems.
    It might be a bug then... but I cannot reproduce it... and I went over all the code and cannot figure out what's happening... I need some help to find it, please. When does it happen? Does it also happen with the xNormal examples? Does it happen when you use object space or tangent space? Screenshots with settings, please? Only for normal maps or for all the maps? With a problematic model I could debug it very well.
  • bugo
    bugo polycounter lvl 17
    Jogshy, could you tell us more about what CUDA in xNormal will enable us to do? Faster or better? Thanks a lot.
  • jogshy
    jogshy polycounter lvl 17
    bugo wrote: »
    Jogshy, could you tell us more what CUDA in xnormal will be able us to do? Faster or better? Thanks alot
    Once I optimize the renderer you should be able to render maps much faster. The public beta is showing a very old version of the renderer I'm working on now.

    For people with more modest processors (an E2140, Athlon 64, P4, etc.)... a cheap $40 GF8500GT can be 10x faster than them... so you won't need to spend $300-500 on a new computer to render the maps.

    The only serious limitation is the polycount: a very dense mesh won't fit in video memory... but with 512MB you should be able to manage 10M-poly meshes well. With a more advanced solution like the GT200 Tesla (which has 4GB each...) the speed improvement can be really impressive compared with the best CPU (QX9775 Skulltrail). I'm also playing with MPI (a distributed rendering system) to make CUDA rendering clusters... Notice you can plug in 3 or 4 GF8600s with minimal power consumption vs. 20 dual cores, which consume much more power...

    I plan to port some of the Photoshop filters, which scale much better than the ray tracing (a 16x dilation will be performed almost instantly).

    If I'm able to optimize the CUDA renderer very well, I could improve the realtime ray tracing graphics driver a lot... so you could preview models with real radiosity, global illumination and über shadows... all in realtime.

    Btw, got a nice speed improvement today... and I feel I can make it even faster with a little help from NVIDIA.
  • jogshy
    jogshy polycounter lvl 17
    I've just uploaded Beta 2 with a CUDA 2 renderer. Now it can render normal maps, heights, baseTex and AO... added GTX280/260 support and optimized the speed a bit... although I'm far from maximizing its efficiency.
  • bugo
    bugo polycounter lvl 17
    cool, i will test it out buddy, thanks!
  • Unleashed
    Unleashed polycounter lvl 19
    OK, I tried with 32, 64 and 96MB for the nodes; with 96 I got the "no CUDA device found" error, and with 32 and 64 it complained about increasing the memory size. I just couldn't get it to work with the model I am working with at the moment (the mesh was 7 million polys).

    Also, I only have one graphics card (regarding the 0 & 1 devices).

    I'll try it more later, but the drivers go a bit nutty after a while (screen corruption), so until they release newer beta drivers or WHQL them, I guess there's no real CUDA use for me.

    BTW do you plan on implementing a renderer with ATI's version?
  • jogshy
    jogshy polycounter lvl 17
    Unleashed wrote: »
    Ok I tried with 32, 64 and 96MB for the nodes, with 96 I got the no cuda device found error, and with 32 and 64 it complained about increasing the memory size. Just couldnt get it to work with the model I am working with at the moment(the mesh was 7mil polys).
    Thanks for testing! I will try to increase the polycount soon.
    Also I only have one graphics card(regarding the 0 & 1 devices)
    Haha, curious! I suspect that's a CUDA beta problem... I think somebody reported that before in the CUDA forums.
    BTW do you plan on implementing a renderer with ATI's version?
    No idea. The StreamSDK/Brook+ is still a bit immature and has no Vista support, which is the OS where I program xNormal :\ It depends on ATI's support too... and I still need to improve my GPGPU technology a lot before porting it to other systems.
  • pior
    pior grand marshal polycounter
    Hi J!

    I have a new suggestion, I don't think it's in there yet:
    I see that you have an option to bake texture information from the highpoly to the lowpoly, as in your flag example. I was wondering if we could have something that just applies a random flat color to each of the highpoly elements?

    (An obj containing a head and ten teeth plus another obj with two eyeballs, would be 1+10+2=13 elements)

    I think you could store that as per-poly information, and then give an option to bake this color information to a lowpoly map. I think 8 colors would do the trick. It would make the creation of masks very easy!

    It would work just like the red color you have by default in the bake base texture options - except that it would give a color to every element.

    Thanks for everything
  • Greg_Brown
    Hey Jogshy,

    First, I want to say thanks for creating such an amazing piece of software. The quality of the height and displacement maps is spectacular. Definitely far more useful than the horrendously overpriced CrazyBump.

  • fritz
    fritz polycounter lvl 18
    jogshy, man... I haven't been here in a while. But I was wondering... I had some problems with the viewer due to me having two vertical monitors. I was just wondering if there may still be a problem. Got some new concepts ready for bakin'!!!!

    thanks for everything mang!!!!!!!
  • katzeimsack
    katzeimsack polycounter lvl 18
    pior wrote: »
    Hi J!

    I have a new suggestion, I don't think it's in there yet :
    I see that you have an option to bake texture information from the highpoly to the lowpoly as in your flag example. I was wondering if we could have something that just applies random flat color to all the highpoly elements?

    (An obj containing a head and ten teeth plus another obj with two eyeballs, would be 1+10+2=13 elements)

    I think you could store that as per-poly information, and then give an option to bake this color information to a lowpoly map. I think 8 colors would do the trick. It would make the creation of masks very easy!

    It would work just like the red color you have by default in the bake base texture options - except that it would give a color to every element.

    Thanks for everything
    xNormal already gives every obj a different color if it doesn't have a colormap...
    Just export the teeth and the eyeballs as separate objs and import them into xNormal.
    It saves quite some time that you would otherwise waste creating masks manually.
  • jogshy
    jogshy polycounter lvl 17
    pior wrote: »
    I have a new suggestion, I don't think it's in there yet :
    I see that you have an option to bake texture information from the highpoly to the lowpoly as in your flag example. I was wondering if we could have something that just applies random flat color to all the highpoly elements?

    (An obj containing a head and ten teeth plus another obj with two eyeballs, would be 1+10+2=13 elements)
    In theory you could enable the "write ObjectID if no texture" option in the highpoly baseTex baking options... you'll need the 3.15.3 beta 2 for that, because for some strange reason the baseTex options button was disabled... you can use up to 16 objectIDs.
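
    For what it's worth, an easy way to get a handful of clearly distinct flat mask colors is to space the hues evenly (my own sketch, not the palette xNormal actually uses for objectIDs):

        import colorsys

        # Evenly spaced hues at full saturation/value give n easily separable colors.
        def id_colors(n=16):
            return [tuple(int(c * 255) for c in colorsys.hsv_to_rgb(i / n, 1.0, 1.0))
                    for i in range(n)]

        print(id_colors(4))  # [(255, 0, 0), (127, 255, 0), (0, 255, 255), (127, 0, 255)]
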
    .i had some probs w/the viewer due to me having two vertical monitors. i was just wondering if there may still be a prob.
    Yep, sorry... vertical spans aren't supported yet because I cannot test them (I only have one monitor)... but you can edit the ui.lua to adjust the UI controls manually.
  • jogshy
    jogshy polycounter lvl 17
    I'm trying to reproduce the "pixelization" bug... I tried everything but I cannot see it. Does it happen with the new 3.15.3 betas? Could anybody post a clarifying screenshot, repro steps or anything else I could use to get rid of those artifacts, please? I don't even know where to start looking for the problem...
  • Joao Sapiro
    Joao Sapiro sublime tool
    The thing is, what worked in the previous version gives those pixel artifacts now. It's weird; everything is amazing, but the pixelization appears with the exact same settings :( If you remember that image I posted earlier, I used the same settings etc. in both versions, and it happens with every highpoly source :(

    Try rendering the maps REALLY big, and with max antialias etc :) I'll see about sending some examples :)