I got some time to continue on the extension of the maxscript editor. Added a textfield with C# syntax highlighting where you can write C#/VB code and compile it to an assembly, an executable, or in memory. I also added a previously created script organiser to the left.
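For reference, here is a minimal sketch of how the in-memory compile path can work with the stock .NET CodeDOM API - one plausible approach, not necessarily what the editor extension actually uses (swap in VBCodeProvider for VB source):

```csharp
using System.CodeDom.Compiler;
using Microsoft.CSharp;

static class ScriptCompiler
{
    // Compile C# source text straight into an in-memory assembly.
    public static System.Reflection.Assembly CompileInMemory(string source)
    {
        var options = new CompilerParameters
        {
            GenerateInMemory = true,    // keep the result in memory, no file on disk
            GenerateExecutable = false  // false = assembly (dll), true = executable
        };
        options.ReferencedAssemblies.Add("System.dll");

        using (var provider = new CSharpCodeProvider())
        {
            CompilerResults result = provider.CompileAssemblyFromSource(options, source);
            if (result.Errors.HasErrors)
                throw new System.InvalidOperationException(result.Errors[0].ToString());
            return result.CompiledAssembly;
        }
    }
}
```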
working on a semi-isometric, tile-based engine in Unity as a personal project, along with a map editor for it written in Python/Qt and maxscripts to batch export the tiles. Not sure where to go with it, but I got the idea when I stumbled across xu4. It also gives me a concrete project to work on while learning Unity.
ooh, beautiful. Too low contrast - I do that a lot too
it is, but is it too low? I kind of like the idea of this tool having very low contrast because, unlike other tools or applications, it's meant to be a background supporting tool - not something that needs attention as a work or standalone tool. Also, because I am going to limit its UI features (no multi resize, maximize, minimize, ...), it is more predictable in the way it behaves, so it needs less UI decoration to function in a work environment. I like the 3 point shader UI of the material editor a lot - it has more contrast than this tool, but it also has more UI variety (checkboxes, spinners, labels, groups, buttons, ...), whereas this app is ideally supposed to have only drop buckets - hence why I want to experiment more toward the extreme with contrast and design.
Worked today both code- and design-wise on some of the JSON settings:
JSON is basically an interchangeable string representation of object types. That way I can export settings into a single string and share them with people - similar to how hex colors in Photoshop have the advantage of being copyable as a whole to the clipboard (instead of as separate R, G and B values).
Say I added a drop tool for uploading images to an FTP account: with the Export Settings button I could export the current settings to the clipboard as a string (all variables with their correct value types), message or email it to a friend, and he could import those settings through the clipboard (copy the text from the email) and click "Import". I think this is a very cool idea, and I wish modern apps would use something like that so people can share app settings more easily.
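As a rough sketch of that export/import round trip (the settings class is hypothetical, and the Json.NET dependency is just an assumption for illustration; any JSON serializer would do):

```csharp
using System.Windows.Forms;   // Clipboard
using Newtonsoft.Json;        // Json.NET (assumed; any JSON library works)

// Hypothetical settings object for the imagined FTP drop tool.
public class FtpDropSettings
{
    public string Host = "ftp.example.com";
    public int Port = 21;
    public bool Passive = true;
}

public static class SettingsClipboard
{
    // "Export Settings": all variables, with their value types, as one string.
    public static void Export(FtpDropSettings s)
    {
        Clipboard.SetText(JsonConvert.SerializeObject(s));
    }

    // "Import": paste the string a friend sent and get typed settings back.
    public static FtpDropSettings Import()
    {
        return JsonConvert.DeserializeObject<FtpDropSettings>(Clipboard.GetText());
    }
}
```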
The other thing I have been playing around with is automated color highlighting of the value fields and descriptions, making them faster to read (because the eye can jump faster to the sections of the UI that matter). Might add some spinners for floats and number fields if I have extra time. Also a preset dropdown menu with JSON settings presets per tool, for better understanding of the settings and quick access to common cases.
@r_fletch_r: that would be the right way to do it, but since I am only scanning for simple characters or strings, I avoided regular expressions as they are a pain to figure out. It's just a simple string find/mark function that loops through the string using something like string.indexOf(key, lastOffset)... fast enough for all I need.
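In code, that no-regex scan boils down to something like the following (a minimal sketch of the idea in C#, not the actual tool code):

```csharp
using System.Collections.Generic;

static class Highlighter
{
    // Return the start offset of every occurrence of a keyword by walking
    // the text with IndexOf instead of a regular expression.
    public static List<int> FindAll(string text, string key)
    {
        var hits = new List<int>();
        int offset = text.IndexOf(key);
        while (offset != -1)
        {
            hits.Add(offset);   // this range gets marked/colored later
            offset = text.IndexOf(key, offset + key.Length);
        }
        return hits;
    }
}
```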
Started working on one of the more serious plugins: a blob (pixel island) extractor for PNG, GIF or PSD files, based on absolute transparency.
This image shows a tricky case, as many of the sprites' bounding boxes intersect, which makes it a bit tricky to separate the individual masks - but I found a nice way around that. More tricky cases to be posted later, with anti-aliased feathered pixels and super tiny islands. Blob detection basically works using threshold and flood fill operations to filter out islands of connected pixels. As I collect slices, I mark processed pixels in special colours that I can subtract later from the individual masks per slice. Once done, it saves the image slices next to the file you dragged onto the tool.
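A bare-bones sketch of that threshold + flood fill pass, operating directly on an alpha channel array (the real plugin presumably also tracks bounding boxes and writes out the slices):

```csharp
using System.Collections.Generic;

static class BlobDetector
{
    // Label every connected island of pixels whose alpha passes the threshold.
    // Returns one label per pixel: 0 = background, 1..n = island id.
    public static int[,] Label(byte[,] alpha, byte threshold)
    {
        int w = alpha.GetLength(0), h = alpha.GetLength(1);
        var labels = new int[w, h];
        int next = 1;

        for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            if (alpha[x, y] < threshold || labels[x, y] != 0) continue;

            // Iterative flood fill; writing labels[] plays the role of the
            // "special colours" that mark pixels as already processed.
            var stack = new Stack<(int cx, int cy)>();
            stack.Push((x, y));
            while (stack.Count > 0)
            {
                var (cx, cy) = stack.Pop();
                if (cx < 0 || cy < 0 || cx >= w || cy >= h) continue;
                if (alpha[cx, cy] < threshold || labels[cx, cy] != 0) continue;
                labels[cx, cy] = next;
                stack.Push((cx + 1, cy)); stack.Push((cx - 1, cy));
                stack.Push((cx, cy + 1)); stack.Push((cx, cy - 1));
            }
            next++;
        }
        return labels;
    }
}
```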
I am also planning to write a sprite packer plugin, an animated GIF/SWF to image sequence converter, and several other game dev tools.
what happens if a small island belongs to a bigger island but is not within the bigger one (like floating details)? is it possible to hand link them?
anyway: awesome tool, seems really well done and thought out.
i'm sorry, would somebody explain to me why it would be so useful?
i mean, how often do you come across a pic with patterns all over it which are already masked, and want to separate them?
take no offense, please, i bow before you, but i don't get why you would put so much effort into such a program...
or am i missing something?
I can see a pixel island extractor and a packer being pretty useful to sprite based programs and games. Especially if it outputs an xml map file or something similar when it packs them.
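For illustration, such a map file could look something like this (a made-up format; real packers each define their own):

```xml
<!-- hypothetical atlas map emitted alongside the packed sheet -->
<atlas texture="sheet.png" width="512" height="512">
  <sprite name="island_00" x="0"  y="0" w="64" h="48"/>
  <sprite name="island_01" x="64" y="0" w="32" h="32"/>
</atlas>
```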
to me it could be very useful...
for splitting up a Photoshop UI design into its separate elements (buttons, labels, frames, ...) so that the UI dev can easily take them into a sprite sheet and start coding.
Another case might be that it's easier for an artist to draw sprites or assets onto one big canvas instead of separating them into several layers with hidden visibilities. For an artist that makes more sense - and it speeds up the workflow between artists and devs.
Or simply crop a PSD/PNG to its alpha bounds if it's just one blob.
ripping sprite sheets, UI sheets - anything tile/sprite related
Very cool, i wonder how that algorithm works?
I had some mutations that almost looked like they could take off and fly, but it seems the asymmetry, which is kind of always there, kept them from that. Might be nice to try it with forced symmetry somehow?
Keen or anyone else who uses Unity, could you do me a favour and whip up a quick benchmark? I would do it myself, but I don't have Unity or know how to use it.
Just some stacked boxes targeting 60fps would be perfect, let's say 100 boxes, 5 stacks of 20 or 10 stacks of 10, with stand by set to off if possible, so it's always working.
I'm working on a solid body physics engine and I want a reference in terms of performance.
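For anyone picking this up, a sketch of such a scene as a Unity C# behaviour (assuming the stock Unity API; "stand by off" is approximated here by zeroing the rigidbody sleep threshold):

```csharp
using UnityEngine;

// Drop this on an empty GameObject: spawns 10 stacks of 10 rigidbody cubes.
public class BoxStackBenchmark : MonoBehaviour
{
    void Start()
    {
        Application.targetFrameRate = 60;    // aim for the 60fps target

        for (int stack = 0; stack < 10; stack++)
        for (int i = 0; i < 10; i++)
        {
            var box = GameObject.CreatePrimitive(PrimitiveType.Cube);
            box.transform.position = new Vector3(stack * 2f, 0.5f + i, 0f);

            var body = box.AddComponent<Rigidbody>();
            body.sleepThreshold = 0f;        // never let the bodies go to sleep
        }
    }
}
```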
Even though Unity's C# environment is approximately 5 times faster than AS3, and it has GPU rendering on top of that, it would still be a useful reference.
From what I've seen, the Ageia PhysX engine is probably the best thing about Unity. It seems to perform very well on average machines and even iPhones.
Obviously matching the speed of that with AS3 is going to be tricky to say the least, but even half the speed would be excellent.
I know that's an option, but because I've never used it before, it'll take me much longer to get to grips with it than one of you guys, who could probably knock up a benchmark like that in 5-10 minutes.
I've seen some VERY impressive Unity physics demos on YouTube, but I don't know how realistic that type of performance is on the average PC. I'm guessing they're running on CUDA-enabled cards.
I'm playing around with Molehill at the moment, and one thing that's insane is how much slower the computation is in this FP11 beta. Orders of magnitude slower than FP10.
The most appealing thing about Molehill for me is freeing up all the CPU juice in order to use it for what it's intended - physics, object count, AI etc. - instead of spending 90% of it on rendering.
But as of now, you're basically as limited as you are with Flash 10, because of how slow it is.
In addition, there's an annoying vertexBuffer limit of 4096 in this build. That means if each object you have has a separate buffer for vertices/UVs/normals/vertex colour, that's only about 1000 objects (4096 buffers / 4 buffers per object).
Unity is not your competitor if your main framework is AS3; I don't think Unity3D will ever succeed in the browser space except in Molehill form. Unity is mainly used on the iPad and iPhone/Android these days. Maybe just wait till both Molehill and the next Unity release are out and then compare.
I've also come to realise that it is mainly used on phones these days, and that somehow browser-based Unity is less successful than it should be (even Director games are more mainstream). I honestly thought it would take off far more on the web, given that it's got the WHOLE package, including physics - how much easier can they make it to deliver a 3D game?
Truth be told, AS3 won't be able to compare to Unity's native, lower level physics engine. I'm more interested out of curiosity, to be honest - just to see how close I can come to that performance on the same machine, if at all.
Even if Molehill matches the rendering performance Unity can deliver on lower end GPUs (right now it seems the rendering performance is similar on high end GPUs, but on basic GPUs Unity is better), it won't match the performance CPU-wise.
It's going to be a lot more difficult to make proper 3D games in Molehill for most developers. AS3 is much slower, and their only option for physics at the moment is JigLibFlash, which is really slow.
Molehill is exciting, but I do think Adobe need to really ramp up the speed of AS3 in order to allow gameplay that matches the graphics.
Little question: I want to assign a hotkey to the 'show end result on/off toggle' so I can switch easily between low and high poly in Editable Poly mode. But I can't find it in the list of hotkeys?! Any suggestions? I use Max 2011.
Thx guys, I found out where to put it and stuff. But how do I bind it to a hotkey? Sorry for the newb questions, but this is my first experience with Max scripts, I'm afraid.
spent last night implementing a coverflow widget in python/opengl/qt.
not that we really need one but maybe some artists will like it because it looks "cool" and it's quite fun to use it.
not yet, I'm still polishing. The 3D is there, but I need to add some shadows and other little tweaks. Having some inertia like the Apple version would be nice too.
I'm also a bit worried that it requires PyOpenGL to be installed, which limits its use inside Maya, unless we deploy PyOpenGL to everyone. I might have to calculate the transformations in software (or in Maya???) so I'm not dependent on OpenGL.
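The per-cover transform itself is simple enough to compute without OpenGL; a rough sketch of the usual coverflow layout math (shown in C# for consistency with the other snippets in this thread, constants arbitrary):

```csharp
// Classic coverflow layout: the selected cover faces the camera, the rest
// slide sideways and tilt toward the center. All constants are made up.
static class Coverflow
{
    public static void CoverTransform(int index, int selected,
                                      out float x, out float rotY, out float z)
    {
        int d = index - selected;                    // signed distance from focus
        const float spacing = 0.6f, sidePush = 1.5f, tilt = 65f;

        if (d == 0) { x = 0f; rotY = 0f; z = 1f; return; }  // focused cover pops forward

        int side = d > 0 ? 1 : -1;
        x = side * sidePush + d * spacing;           // push side covers outward
        rotY = -side * tilt;                         // tilt them toward the center
        z = 0f;
    }
}
```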
you post frequent updates about interesting sounding things, but honestly I have no idea what it really is you do. Some screens sometimes might explain or show a bit more nicely what it is you do.
my problem is that I cannot post images from work - IT is pretty strict that absolutely nothing gets out. But side projects I work on at home should be ok...
I can talk about the underlying tech though, if anyone is curious.
Got that working nicely. Next step is to have an animated spline control the overall shape of the tornado, so it doesn't look so shitty in 3d when it rocks in 2d : D
Looks like Autodesk's ImageModeler got a big update; it's now named Project Photofly 2.0, and it matches pictures together and computes a 3d textured mesh from a set of uploaded pictures. It only does cloud processing (client + internet required), and it's free to test right now, but I wonder why they went that way.
http://www.youtube.com/watch?v=g9jU-VUBhSQ
I'm pretty sure Frostbite is using Enlighten, which is a static solution requiring precalculated data. Light Propagation Volumes is what CryEngine 3 uses. It's less accurate than the way Enlighten works, but it doesn't require any precalculated data (although I am using precalculated indirect light occluders, which can be moved but not rotated or scaled). LPV is also nice because it's volumetric, so the same lighting is applied to both static geometry and dynamic geometry such as characters.
Nysuatro - the basic steps for calculating the final LPV result are:
1. Render each light's point-of-view color and cameraPosition-worldPosition into one or more screen-locked 3d textures. Now you have a 3d texture whose voxels are filled with light corresponding to where the actual lights are cast in volume space.
2. Render the lights' point-of-view worldNormals into another 3d texture using the same technique.
3. Somehow create an occluder 3d texture that represents solid geometry in volume space.
4. Blur both the light and normal 3d textures (in 3d), taking the occluder texture into account. Pro tip: stagger the blur radiuses, for example 4 blurs with a radius of 5 pixels, 3 blurs with a radius of 2 pixels, then 3 blurs with a radius of 1 pixel (see the sketch below).
5. Now you have averaged light and normal 3d textures, which can be used to sample your final indirect lighting result in the shaders of the final scene (or composited as a post process).
All of these steps are done per frame, except actually rendering the light views and normal views (which can be stored in buffers per light). Their rendering can be staggered based on distance from the camera and on whether the lights are moving or something moves inside a light's range; for instance, you could say that only 20 lights will be updated each frame, and increase or decrease this based on average framerate. Tweaking this should be a big step in optimizing the technique. You also only need to take into account the lights and occluders within the current 3d volume you are calculating.
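To make step 4 concrete, here is a CPU-side sketch of the staggered blur schedule on a single-channel float volume (a real implementation would run per channel on the GPU and weight samples by the occluder texture, both omitted here):

```csharp
static class LpvBlur
{
    // Step 4's staggered schedule: 4 blurs at radius 5, 3 at radius 2, 3 at radius 1.
    public static void StaggeredBlur(float[,,] volume)
    {
        int[] schedule = { 5, 5, 5, 5, 2, 2, 2, 1, 1, 1 };
        foreach (int radius in schedule)
            BoxBlur3D(volume, radius);
    }

    // Naive 3d box blur; clamps at the volume borders.
    static void BoxBlur3D(float[,,] v, int r)
    {
        int nx = v.GetLength(0), ny = v.GetLength(1), nz = v.GetLength(2);
        var result = new float[nx, ny, nz];

        for (int x = 0; x < nx; x++)
        for (int y = 0; y < ny; y++)
        for (int z = 0; z < nz; z++)
        {
            float sum = 0f; int count = 0;
            for (int dx = -r; dx <= r; dx++)
            for (int dy = -r; dy <= r; dy++)
            for (int dz = -r; dz <= r; dz++)
            {
                int px = x + dx, py = y + dy, pz = z + dz;
                if (px < 0 || py < 0 || pz < 0 || px >= nx || py >= ny || pz >= nz)
                    continue;
                sum += v[px, py, pz]; count++;
            }
            result[x, y, z] = sum / count;
        }
        System.Array.Copy(result, v, result.Length);
    }
}
```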
Replies
As a precursor to a Molehill benchmark, I've created a Flash 10 demo to see how the Molehill version compares.
Here are 1100 cubes flocking at around 50fps on my humble dual core.
http://rumblesushi.com/cube_flock.html
In theory the Molehill version should be good for around 10,000 on an average GPU.
I want to see how easily you can match or beat the performance of my humble Flash flock code, here - http://rumblesushi.com/turbo_flock.html
Seeing as you have GPU access it should be fairly easy, but I'm curious as to what kind of flock size you can get in Unity running at or near 60fps.
How're you handling this? regular expressions?
another test, showing the merge sub-clusters feature and a minimal cluster size to catch even tiny pixel-sized blobs.
edit, one more cool test case:
(I like how the shapes are broken up into their separate island parts)
that's the most plausible, i'll take that one :P
great job then!
http://www.youtube.com/watch?v=zKRJbJMb1VM
http://keenleveldesign.com/pimp/atrificalevolution/builds/WebPlayer05/WebPlayer.html
Edit:
I let several simulations run on my 2nd PC, and they came out very differently; here are some good ones:
http://dl.dropbox.com/u/2650899/arm.jpg
developed an arm that dragged the root forward
http://dl.dropbox.com/u/2650899/rotateroot.jpg
tried to invent the wheel with its square root, but it only rotates a few degrees and moves forward with it
http://dl.dropbox.com/u/2650899/monster.jpg
this got a bit crazy
maybe they should compete in a race
here's a heightmap from the tool, rendered with 3ds Max
read more: here
http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=maxscript+switch+subdevision#sclient=psy&hl=en&safe=off&source=hp&q=maxscript+toggle+subdevision&aq=f&aqi=&aql=&oq=&pbx=1&bav=on.2,or.r_gc.r_pw.&fp=852aa315e9ee30d4&biw=1016&bih=819
Edit: Solved!
Scripting a tornado in particle flow in between reading for my exams.
I'm creating a 2d vector field for the twirl using this:
http://demonstrations.wolfram.com/UsingEigenvaluesToSolveAFirstOrderSystemOfTwoCoupledDifferen/
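The linked demonstration is about a 2x2 linear system v = A·p, where complex eigenvalues of A produce the spiral. A tiny sketch of that field (constants arbitrary):

```csharp
static class Twirl
{
    // Velocity at point (x, y) from the linear system v = A * p with
    // A = [ a  -b ;  b  a ]. Eigenvalues are a ± b·i: b spins the field,
    // a < 0 pulls particles inward, giving a spiraling twirl.
    public static void Field(float x, float y, float a, float b,
                             out float vx, out float vy)
    {
        vx = a * x - b * y;
        vy = b * x + a * y;
    }
}
```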
Made some sliders to interactively change the twirl in the viewport:
http://www.youtube.com/watch?v=9cb9Suu8w3Y
http://www.youtube.com/watch?v=Ql1gPHXDvyo
http://www.youtube.com/watch?v=R_sOYRbfypU
Here is what I got so far:
http://www.youtube.com/watch?v=iCxFKU3wjZg