I wonder if it only compresses at the quad level. I assume that after the compression it's still a valid tool you can keep working on. It would be nice to see some wires and technical shots to see how it actually works.
hmm, it looks like it's doing something similar to Polygon Cruncher; if you look at the closeups it's a mass of triangles. Dunno how well it would sub-d for further work. I think it's mainly designed to replace Polygon Cruncher in the transition back to Max/Maya for baking. I could be wrong though, haven't had a chance to try it.
well yes, because at the quad level there is not much opportunity to compress without ending up with a topology mess.
If anyone tries this out, mind creating some technical test shots showing how it really compresses in detail?
From the really simple test I did, it triangulates the whole thing, which makes it somewhat terrible to work with afterwards, just as pixelmasher said, but I had no artifacts like I do with Polygon Cruncher.
Don't know what kind of technical test shots you're thinking of, renderhjs, but it seems like it's just a nice addition to ZBrush before exporting highpolys for baking. Takes Polygon Cruncher out of the flow.
Right, discovered this today myself. Useful for app transfers, but it's just a pre-export thing. Not something you'd want to do on a work in progress, I think.
if you haven't tried this out yet, get it now. It is amazing.
@ most of the comments; yes, it is mostly for when you are "finished" sculpting, and ready to export to bake your normals. I would also recommend saving a backup file in case you need to go back to do touch ups, since it does pretty much annihilate your geometry for the sake of optimization.
Cool! time to give Z3 the usual bimonthly try :P
I am wondering how this compares to Meshlab? But yeah the less apps the better. Also this should make Xnormal bakes *even* faster!!
Reading the documentation you can use masking to preserve detail in specific areas, where the mask opacity controls the severity of the optimization. Pretty cool.
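Reading that, the mask presumably acts as a per-region blend between the global optimization amount and full preservation. Purely as a toy sketch of that idea (this is my own illustration, not Pixologic's actual algorithm, and all the names are made up):

```python
def masked_keep_ratios(global_keep, mask_opacities):
    """Toy sketch: blend a global keep-ratio toward 1.0 (full detail)
    wherever the mask is painted. mask=0 -> the global setting applies,
    mask=1 -> the region is fully preserved."""
    return [global_keep + (1.0 - global_keep) * m for m in mask_opacities]

# e.g. keep 20% globally, with one unmasked region, one half-masked,
# and one fully masked region:
ratios = masked_keep_ratios(0.2, [0.0, 0.5, 1.0])
# -> approximately [0.2, 0.6, 1.0]
```

So a fully masked area would keep all of its polygons while the rest gets crunched to the global setting, which matches what the docs describe about opacity controlling severity.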
I'll take this over meshlab or polycruncher any day. That's one less app to filter my shit through on the way to normal maps. Optimization has always been a pita and I've often wished for it in ZB.
The thing with this plugin is that you can take your 8-million-vertex ZBrush tool and decimate it. You can't open an OBJ like that in Meshlab; it'll crash.
I was a beta tester on this one, and it's working pretty well. Not for everything, but still awesome enough to bring a model back into Max without crashing ^^
And again I have to say: how big are these meshes you want to get into Max? I had 30 million working and I had 43 million working, and those were totally wasted detail-wise, but the client wanted it :P
But I guess now it's way faster to bake this stuff, really nice addition anyway.
When you still have a crappy single core + a couple gigs of bad RAM like me, it's pretty useful. I crash in Max after a couple million tris. And when you have a bunch of subtools etc., it really can give you better results than just exporting a lower subdivision level.
Just don't let Max show them to you; the renderer can handle those for baking.
Turn your scene to bounding box view mode and import the mesh (use the guruware importer, or any Max above 2008, where it is integrated). Once it's imported, go to the object properties and set it to bounding box, and that's all; the renderer can handle it. Depending on how much RAM you have you might not be able to render it with the Light Tracer, or only with fewer samples / lower texture resolution / fewer bounces, but it definitely works, also on single-core machines. The only issue I really noticed is 32-bit machines, where Max can't handle more than 1.4 GB of RAM (no matter what, it crashes when it goes over 1.4).
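As a rough back-of-envelope (my own guesses, not anything Autodesk documents), you can estimate whether a dense mesh will even fit under that 32-bit ceiling by adding up the raw buffer sizes:

```python
# Back-of-envelope only: estimates the raw buffer size of a triangulated
# mesh, assuming ~tris/2 vertices for a closed mesh, position + normal
# per vertex (6 floats), 4-byte floats and 4-byte indices. Real apps use
# several times this for edit history, display data, etc.
def mesh_ram_estimate_mb(tris, floats_per_vert=6):
    verts = tris // 2
    vert_bytes = verts * floats_per_vert * 4
    index_bytes = tris * 3 * 4
    return (vert_bytes + index_bytes) / (1024 * 1024)

# A 30-million-tri import needs roughly 0.7 GB for raw buffers alone.
print(round(mesh_ram_estimate_mb(30_000_000)), "MB")
```

Which lines up with the experience above: 30 million tris is workable, but with Max's own overhead on top, the ~1.4 GB 32-bit limit isn't far away.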
SupRore: Would you mind doing a little test for me and bringing that into whatever package you use (Maya, Max, XSI, modo, etc.) and smoothing out the normals?
Do you know, the original meaning of decimate is only to reduce by 10%? As punishment for failure/mutiny, Roman soldiers were split into groups of 10, and through random selection one soldier from each group was executed by the remaining 9 </random factoid>
Very nice indeed. Just crushed my Dom War 4 model down from 80 million to just 6.5 million at 20%, and the loss of detail was minimal, with only 2 slight errors. I was then able to import into XSI, which is something I didn't imagine I would be able to do this morning.
Will need to test this further, reduce to 10%, and if that holds up then do a decent render.
Hmm, I wonder if I could, say, take a mesh I want to optimize out of Max, sub-d it a few times in ZBrush keeping hard edges on, then use this tool to get below the original mesh's count for ingame? Does it work on triangles, as asked earlier? Unsure that was answered.
Not sure it keeps the mesh as clean as you would want it for an ingame mesh, Oxy, but I haven't tried that out.
I think it does some pretty creative stuff to the geometry to make sure it maintains shape while keeping the quality of the original form, but it's not a workaround to make a super sick low poly model from the high.
I think moose is right: it does a great job at optimizing the insanely high poly into a very messy, slightly less insane high poly, aimed at preserving detail. In the process it makes using the mesh for anything other than baking a headache or flat-out impossible. You're better off using the old tried-and-true methods of building a low poly.
I don't see any real easy way to crunch any high down into a great low poly model without a little effort, not yet anyway...
I wouldn't dream of using this for a lowpoly; even at 90,000 tris it started to have some weird pinching. But I got my model running smoothly in Max to render and bake without any effort or loss of detail.
A good test for everyone: start with the rhino and divide it like 4 times, then draw a long squiggle down the side and run the plug-in. You'll see how it really keeps the new squiggle while optimizing the rest.
The only part I don't like is that it triangulates everything :-(
Tried this out today. One huge annoyance is that when I used it on a subtool it changed the scale/position of that subtool on export.
I'm working on a model with a mixture of mechanical and organic parts. I exported all the parts into ZBrush, but only needed to export the ZBrushed organic stuff back out into Max. But the organic parts came into Max about 2x smaller than what was originally exported, and their position was offset.
Without using decimation it came back into Max at the correct size. Not sure what causes the size discrepancy after running decimation. Will have to play around with it more.
Other than the scale issues, it worked great. A pretty cool optimization tool.
This is exporting from ZBrush after decimating: importing the mesh into Max results in the mesh being at a different size than when you export the undecimated mesh from ZBrush, so it doesn't fit the capture mesh/basemesh/lowpoly etc. Weird thing is, mine's not just bigger, but it's slightly offset from 0,0,0 too... weird.
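If anyone wants to quantify the discrepancy, here's a quick sketch (my own, just comparing the bounding boxes of vertex positions read from the two OBJ exports) that recovers a uniform scale factor and the center offset:

```python
def bbox(points):
    """Axis-aligned bounding box of a list of (x, y, z) tuples."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def scale_and_offset(reference_pts, decimated_pts):
    """Estimate a uniform scale factor and center offset between two
    exports of the same model by comparing their bounding boxes."""
    rlo, rhi = bbox(reference_pts)
    dlo, dhi = bbox(decimated_pts)
    rsize = [hi - lo for lo, hi in zip(rlo, rhi)]
    dsize = [hi - lo for lo, hi in zip(dlo, dhi)]
    # Average the per-axis ratios into one uniform scale estimate.
    scale = sum(d / r for d, r in zip(dsize, rsize)) / 3
    rcenter = [(lo + hi) / 2 for lo, hi in zip(rlo, rhi)]
    dcenter = [(lo + hi) / 2 for lo, hi in zip(dlo, dhi)]
    offset = [d - r for d, r in zip(dcenter, rcenter)]
    return scale, offset

# Synthetic check: a unit-ish box shrunk to half size and shifted.
s, off = scale_and_offset([(0, 0, 0), (2, 2, 2)], [(1, 1, 1), (2, 2, 2)])
# -> scale ~0.5, offset ~[0.5, 0.5, 0.5]
```

Running that on vertex positions parsed from the undecimated and decimated OBJs would at least tell you whether the bug is a consistent factor (like the "about 2x smaller" reported above) that you could correct with a scale/move on import.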
I just confirmed the triangulation myself on this larger screenshot:
http://www.pixologic.eu/preprint/documentation/compare_damien_high.jpg
just triangulation stuff, nothing revolutionary but probably fast and clean
It is really fast, and super crazy awesome
This should be great for taking things out just before baking (or 3d printing!) to keep file sizes and processing times down.
I love Pixologic!
If this works as well as i hope, this is a godsend!
750k on the left, 90k on the right. Would have the original (1,269,760) for comparison too, but I don't think I can get it into Max.
Presumably this plugin only reduces your mesh's polygons by 10%; the other 90% are the ones who do the executing.
My workstation can handle a great amount of polygons; let's see if I need this or not, hehe.
Sup: looks great. Thanks!!
Or stick with meshlab and such for that.
what happens if you just export without decimating?
Edit: ok, should have read before posting, lol. Does anyone have a solution? My mesh comes out like xx times bigger..