r_fletch_r, that is very similar to what I was thinking and would be easier to implement than my idea. It would be slower though because it would be doing tons more raycasts.
Indeed, but your method would not be able to record concave geometry and overhangs. Or am I misunderstanding? (I'm prone to this.)
You're casting a ray from the front and the back and doing a fill between them on the Z, right?
Yes, 2 rays, but those rays would intersect all surfaces they hit, and after ordering the contacts by distance from one of the projection sides you could use those 2 lists of contact points to fill in what should be filled and leave what should not. It could break on meshes with lots of open edges and intersecting geo though.
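A minimal sketch of that fill step, in Python for illustration (the actual tools in this thread would be MaxScript or Unity/C#): given the sorted distances at which the rays pierce the surface along one voxel column, an even-odd pairing of the contacts marks the interior cells. The function name and voxel-size parameter are made up for the example.

```python
def fill_column(hit_depths, voxel_size=1.0):
    """Mark the voxels of one column that lie inside the mesh, given the
    depths at which a ray pierces the surface (even-odd fill between
    entry/exit pairs). Odd hit counts (open edges) would break this,
    as noted above."""
    hits = sorted(hit_depths)
    filled = set()
    # Pair the sorted contacts: (entry, exit), (entry, exit), ...
    for entry, exit_ in zip(hits[0::2], hits[1::2]):
        first = int(entry // voxel_size)
        last = int(exit_ // voxel_size)
        filled.update(range(first, last + 1))
    return filled
```

For hits at depths 0.5 and 1.5 plus a second pair at 3.2 and 4.9, the column fills cells 0-1 and 3-4 and leaves the gap between the two surfaces empty.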
The process described in your link could probably be done on the GPU too, using many renders with progressing nearClip and farClip planes into an atlased 3D texture.
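The slab setup for that could look like this (hypothetical helper names, Python just to show the arithmetic): each render clips the mesh to one depth slab, and the result lands in one tile of a 2D atlas standing in for the 3D texture.

```python
def slice_planes(z_min, z_max, num_slices):
    """Near/far clip pairs that carve the mesh's depth range into equal
    slabs, one render per slab."""
    step = (z_max - z_min) / num_slices
    return [(z_min + i * step, z_min + (i + 1) * step) for i in range(num_slices)]

def atlas_tile(slice_index, tiles_per_row, tile_size):
    """Pixel offset of a slice's tile inside the 2D atlas texture."""
    return (slice_index % tiles_per_row * tile_size,
            slice_index // tiles_per_row * tile_size)
```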
Yeah, you can do the same in Max and even automate it using the Volume Select (or Vol. Select) modifier on a grid of vertex points in a mesh, then apply cubes on all of the selected verts of that mesh - done!
But what all of those examples fail to do is pick color values from the mesh texture - or its UV space. Using the gBuffer method that is super easy, but it's trickier to determine the voxels in 3D space because the gBuffer is just a 2D projection data set.
So I am thinking about using the FOV of the camera in conjunction with the zDepth range to calculate each pixel back into 3D space.
Should it work, all I'd need to do would be to create several snapshots from different angles and simply exclude duplicate voxels, so that at some point all needed voxels are sampled.
Another idea was to use an orthographic camera and scan from fixed angles, which would make it easier to map the gBuffer pixels back into 3D space. Either way, there must be a way without the need for expensive plugins like Krakatoa, which can map particle colors based on camera mapping and/or other projections.
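With an orthographic camera the back-projection is just a linear remap; a Python sketch of the idea (parameter names are assumptions, not any particular API), with the duplicate-voxel exclusion reduced to snapping points into a set of cell keys:

```python
def ortho_unproject(px, py, depth, img_w, img_h, view_w, view_h, z_near, z_far):
    """Map a gBuffer pixel plus its normalized zDepth (0..1) back to a 3D
    point in the orthographic camera's space."""
    x = (px / (img_w - 1) - 0.5) * view_w
    y = (0.5 - py / (img_h - 1)) * view_h  # image y runs downward
    z = z_near + depth * (z_far - z_near)
    return (x, y, z)

def voxel_key(point, voxel_size):
    """Snap a reconstructed point to its voxel cell; collecting these keys
    in a set is what excludes the duplicate voxels between snapshots."""
    return tuple(int(c // voxel_size) for c in point)
```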
It shouldn't be too hard to look up UV coords from a raycast hit in Max, right? I would probably implement this in Unity, which gives you UV coords for raycasts automatically.
IntersectRayEx gives you the barycentric coords and face ID straight off the bat; it'd be pretty easy to sample the relevant pixel from there.
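The sampling step is then just a barycentric blend of the hit face's three UVs; a Python sketch of the idea (in Max this would live in MaxScript, and `bary` / `face_uvs` stand in for what you'd pull from the ray hit and the mesh's map faces):

```python
def sample_uv(bary, face_uvs, bitmap_w, bitmap_h):
    """Blend the hit face's three UV coords with the barycentric weights,
    then convert to bitmap pixel coordinates."""
    u = sum(b * uv[0] for b, uv in zip(bary, face_uvs))
    v = sum(b * uv[1] for b, uv in zip(bary, face_uvs))
    px = int(u * (bitmap_w - 1))
    py = int((1.0 - v) * (bitmap_h - 1))  # bitmap y is top-down, V is bottom-up
    return (px, py)
```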
Pior: tnx, didn't know if anyone else would see any use in it.
I'm going to optimise it so it should go a little faster, and I still have to fix some of the bad collisions it sometimes creates on a mesh. I also still have to implement the 3D option of the script, which enables you to project the back points of a mesh onto another mesh and have the rest of the 3D mesh adjust to the projected surface points.
For example, a pocket that should follow the curve of a jacket could be projected onto the jacket this way.
renderhjs: yet another cool feature ^^
syncviews: looks pretty good so far
I just realized something today I thought I'd share. I'm probably not the first to think of this. As I was setting up some simple shading on my test mesh for my rigging project, I came up with a cheap way to simulate an eye drop shadow. Using hemispheric ambient lighting, I would use a dark color from the top and a brighter one from below. This allows you to rotate the eyes around without the shadow following.
This would of course require you to use a dedicated shader for the eyes. It would break if the character's head wasn't oriented in an upright rotation. If you could link the hemispheric directions to the head joint's local axes, this issue would be solved.
I have only made cartoon eyes, but feel like some variant on Ben Mathis' method here would still look okay. Possibly with a darker painted shadow, so long as the geo itself doesn't cast shadows or anything crazy like that.
Very very cool Kodde, I am a huge fan of a bit of floating geo to simulate the effect, but I love how this one is part of the eye itself. If you ever have the chance to tweak it further I would recommend having a slider for the sharpness of the shadow edge - I find sharper cast shadows to be more striking in general.
Pior> Thanks. I'm actually contemplating adding this feature to my eyeShader. It would be cool to also incorporate the possibility to have the hemi axes follow the head joint. I'll post if I figure it out. Adding control over what distance the gradient from black to white spans shouldn't be a problem either.
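The blend itself is tiny; a sketch of the idea in Python (the real thing would be shader code), including an edge-sharpness slider like the one Pior suggested. The names and the clamping scheme are my own, not the actual eyeShader:

```python
def hemi_ambient(normal_dot_up, top_color, bottom_color, sharpness=1.0):
    """Fake eye drop shadow: blend a dark 'from above' color with a
    brighter 'from below' color by how much the surface normal points
    along the head's local up axis. normal_dot_up = dot(normal, head_up),
    in [-1, 1]; higher sharpness tightens the shadow edge."""
    t = 0.5 * (normal_dot_up + 1.0)                       # remap to [0, 1]
    t = min(1.0, max(0.0, 0.5 + (t - 0.5) * sharpness))   # steepen around midpoint
    return tuple(bot + (top - bot) * t
                 for top, bot in zip(top_color, bottom_color))
```

Feeding in `dot(normal, head_up)` rather than a world-space up vector is exactly what makes the effect survive a tilted head.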
tnx, actually it's a school project. The idea behind the script is to make it easier to get topology following a certain shape without having to push vertices - like details on an armor, for example: you model the basic plate, model the detail's shape, and just project it onto the curved surface.
But it has been great so far for getting to know maxscript.
Ravenslayer: I decided to pick up Max again because of that script. (I mean there are tons of useful tools as well, but that's aside from the point.)
(Foaming at the mouth for that script)
Grimm_Wrecking: lol tnx but don't get too carried away
MightyPea:
It doesn't atm; when you select a spline it converts it to an editable poly object.
I could implement that in a cheap way, without having to add too much extra code, by taking the final projected mesh, selecting the edges, and simply using the Create Shape From Selection function - but that would add extra points on the shape.
Could be that I'll implement it later on, when all of the rest on the list is working properly.
Getting into robotics stuff with programmable microcontrollers. Here's a simple example of Unity transmitting rotation information to a microcontroller, which controls the servo.
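On the microcontroller side the rotation typically ends up as an RC-servo pulse width; a Python sketch of the mapping (the 1000-2000 µs range is the common hobby-servo convention, but check the servo's datasheet):

```python
def angle_to_pulse_us(angle_deg, min_us=1000, max_us=2000):
    """Map a rotation angle (0-180 degrees) to the servo pulse width in
    microseconds, clamping out-of-range input."""
    angle_deg = min(180.0, max(0.0, angle_deg))
    return min_us + (max_us - min_us) * angle_deg / 180.0
```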
Any idea why whenever I press certain buttons with TexTools (The Mirror and CheckerMap Texture buttons), it causes a Maxscript error and gives me an infinitely looping error sound, whilst I can still continue to work in Max? Very strange, and I've tried installing different versions of TexTools.
Maybe because you tried doing that on a mesh or some other non-Editable-Poly object. All scripts are aimed at Editable Poly objects.
The mirror one expects that you have at least 1 edge in the editUVW window selected.
As for the checkerMap one - don't know, maybe an unsupported Max version? You need at least Max 9; I am testing with Max 2010 and it works fine for me.
I was digging into many voxel generation scripts, but none of them could generate the voxel colors based on a texture or material.
So I started writing my own from scratch. The main trick in mine is to use the Slice modifier on 3 axes to make sure that for each voxel on the surface there is a close matching vertex to read out.
The nice thing about the Slice modifier is that it updates the UVs as well. So all I did was, per zLayer, read out the unique vertices that match within a voxel-snapped 2D layer, get their counterpart UV verts, and use those to read out the material bitmap's x/y color value.
As you can see, the V or y-axis of the texture/UV is flipped, which needs to be inverted. Also the capping needs some attention, but I think I know how to solve that.
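The per-layer vertex lookup could be bucketed roughly like this (a Python sketch of the idea only; the actual script is MaxScript, and the names are made up):

```python
def snap_to_voxel(verts, voxel_size):
    """Bucket slice vertices by voxel cell so each surface voxel has one
    representative vertex whose UV counterpart can be read out; keeping
    the first hit per cell also discards duplicates the slicing creates."""
    cells = {}
    for i, (x, y, z) in enumerate(verts):
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        cells.setdefault(key, i)  # first vertex per cell wins
    return cells
```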
I think speed-wise I can improve it a lot from what I have right now, but it is already slightly faster than others I tried.
Once this is all stable I need to write a binary file type that comes with a color table and various sprite states (guard, jump, stand, ...) so that I can trigger the states in an engine.
I did a little more work on my BSP/lightmapping in Unity:
I had to pretty much rewrite the BSP manipulation code due to some stupid thing I overlooked the first time, and there's still a nasty crash I need to track down. These lightmaps are rendered with area lights, radiosity, and AO.
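Not claiming this is the code in question, but the core BSP operation being rewritten is classifying geometry against a split plane, and the epsilon handling there is a classic source of exactly this kind of crash. A minimal sketch:

```python
def classify(point, plane_normal, plane_d, eps=1e-6):
    """Classify a point against a split plane n.p + d = 0 as 'front',
    'back', or 'on'. Points within eps of the plane count as 'on';
    too small an eps sprays sliver polygons, too large merges geometry."""
    nx, ny, nz = plane_normal
    px, py, pz = point
    dist = nx * px + ny * py + nz * pz + plane_d
    if dist > eps:
        return 'front'
    if dist < -eps:
        return 'back'
    return 'on'
```

Polygons whose vertices classify to both sides get split; ones entirely 'on' the plane are the usual edge case that needs special routing.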
Would be nice to see if someone could make a commercially successful product utilizing this.
keen: are you trying to achieve this?
http://forums.luxology.com/discussion/topic.aspx?id=28803
www.keenleveldesign.com/pimp/voxelrenderer/VoxelRenderer_Wip_06.zip
The shadows are 3d shadow maps aka "deep shadow maps" so they can cast through semi-transparent stuff like voxel volumes.
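The quantity a deep shadow map stores per depth sample is the remaining transmittance along the shadow ray; a sketch of the accumulation (Python for illustration):

```python
def transmittance(alphas):
    """Light remaining after a shadow ray crosses a stack of
    semi-transparent samples: each sample of opacity a multiplies the
    remaining light by (1 - a). An opaque sample (a = 1) drives it to 0."""
    t = 1.0
    for a in alphas:
        t *= (1.0 - a)
    return t
```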
Topoproject is a 3D topology projection MaxScript. It enables you to project a flat-surface topology mesh onto a 3D object.
[ame]http://www.youtube.com/watch?v=9G3vvyPzX-E[/ame]
(yup, it's a vector graphic, so it can be scaled to any crazy dimension)
@ MoP: I hope you don't mind me using your awesome Goblin to demonstrate the shader
The example model and textures by http://www.niklaselling.com/
SyncViews, looks cool to me, can't wait to see.
http://dl.dropbox.com/u/5055465/robotics/pof/stickbalancer/StickBalancer_ProofOfConcept4.html
It's a simple self-balancing robot. It will try to stay upright if you push it around.
Looking great!
Small update on my topology projection script:
http://www.youtube.com/watch?v=dumIMqAD4Zg
Reminded me of these scripts. They're fairly ancient, but might evoke another approach for projecting splines...
http://www.cnc-toolkit.com/maxscript.html
Going to take a look at them, tnx for the link.
[ame]http://www.youtube.com/watch?v=3n5Pj2tTQzs[/ame]
And it comes with a few new features:
[ame]http://www.youtube.com/watch?v=G2q1fF5TuYo[/ame]
[ame]http://www.youtube.com/watch?v=Tzj7ytMYAG8[/ame]
The most recent version can be found at:
http://renderhjs.net/textools/
I also found this nice MaxScript cheat sheet today:
http://news.3das.com/2010/05/free-max-script-cheat-sheet.html
Which is certainly nice for those who are somewhat new to MaxScript, I will print a copy tomorrow at the office.
http://www.minddesk.com/?page_id=60
Certainly nice stuff, check out the site for nice introduction videos and example animations created using the tool.
There is also a Maya conversion script to convert a poly/mesh with textures to a voxel object.
Awesome work, I need to mess with Unity.