Minecraft map data is stored as voxels, but rendered as standard polys.
Edit: Just tried out MoonForge; I like what you've got so far.
But I think instead of building on the moon it'd be neater if it were like a holodeck, Tron-inspired even. Maybe then players could fill the cubes with different textures of their choosing instead of having material blocks like Minecraft. A multiplayer Minecraft + holodeck hybrid game, that's what I'm talking about.
If there's going to be the possibility of competitive play, will there also be weapons? I'm just thinking about Worms 3D. It may not be the best game, but the weapons destroyed a world that was made more or less out of blocks/parts. So what about weapons that can destroy the environment in MoonForge?
Importing textures could then be a neat feature too.
Loving the stuff here. I just posted this to tech-artists.org but thought I'd post here as well.
I've been working on a GPU lightmap renderer for some time but recently tried my hand at lighting volumes. There's a video below and more info on my blog copypastepixel.blogspot.com if you're interested. The lighting volumes were generated in my lightmap renderer and the video was captured from a deferred render engine I've also been working on.
That looks fantastic! So is it purely lit by lighting volumes, or a combination of lightmaps for static geo and lighting volumes for dynamic objects? I'm pretty impressed with the ambient specular you're getting.
Thanks. Yep, all the ambient light comes from the light volumes. Rough lightmaps are generated initially and used when rendering the volumes, but they're discarded once that's done.
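For anyone curious how a volume like that gets used at shade time: the ambient lookup is typically just a trilinear blend of the eight probes surrounding the shaded point. A rough sketch in NumPy (my own guess at a grid layout, not copypastepixel's actual code):

```python
import numpy as np

def sample_volume(grid, pos):
    """Trilinearly sample an irradiance grid of shape (nx, ny, nz, 3)
    at a continuous position given in cell units."""
    nx, ny, nz, _ = grid.shape
    # Clamp so the 8 surrounding probes stay inside the grid.
    p = np.clip(pos, 0, [nx - 1.001, ny - 1.001, nz - 1.001])
    i0 = np.floor(p).astype(int)
    f = p - i0                      # fractional offset inside the cell
    x0, y0, z0 = i0
    x1, y1, z1 = i0 + 1
    # Blend the 8 neighbouring probes by their trilinear weights.
    c = 0.0
    for ix, wx in ((x0, 1 - f[0]), (x1, f[0])):
        for iy, wy in ((y0, 1 - f[1]), (y1, f[1])):
            for iz, wz in ((z0, 1 - f[2]), (z1, f[2])):
                c = c + wx * wy * wz * grid[ix, iy, iz]
    return c
```

In a deferred renderer the same lookup would happen per pixel on the GPU, usually via a hardware-filtered 3D texture.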
If you wait at the connect screen for a few seconds it should find the IP address of an "official" server I set up (not 127.0.0.1). Once it finds that, you can click connect to play online.
I'm working on inventory and crafting; jetpacks are something you can make or buy :P
MattLichy, that makes me realize how much I need to work on loading time. I wonder if there's a write-up somewhere about how the Minecraft dev got such fast mesh generation, and how much data is actually loaded at any moment of gameplay. I think I'd be better off generating triangles manually rather than using Unity's built-in Mesh.CombineMeshes() function. There's also some other unoptimized code in the loading process; I think it's doing lots of disk access while creating cells.
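For what it's worth, the usual explanation for fast Minecraft-style meshing is exactly this: build one mesh per chunk by hand, emitting quads only for block faces that border air, instead of combining per-block cubes. A toy sketch of the face-culling pass (Python/NumPy rather than Unity C#, purely illustrative):

```python
import numpy as np

# Offsets to the six face-adjacent neighbours of a cell.
NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def exposed_faces(cells):
    """cells: 3D bool array, True where a block is solid.
    Returns (block, direction) pairs for every face that borders air;
    only these faces need quads in the chunk mesh, so interior faces
    never cost any vertices."""
    faces = []
    for x, y, z in zip(*np.nonzero(cells)):
        for d in NEIGHBOURS:
            n = (x + d[0], y + d[1], z + d[2])
            inside = all(0 <= n[i] < cells.shape[i] for i in range(3))
            if not inside or not cells[n]:
                faces.append(((x, y, z), d))
    return faces
```

A fully buried block contributes zero quads, which is why dense terrain stays cheap; greedy meshing can then merge neighbouring coplanar quads for a further cut.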
I'm about to properly start work on my first game, but before I do I'm just posting this demo showcasing a few things. The main thing being 3D sphere to triangle collision detection. Also some basic dynamic lighting and fast culling.
Regarding culling, this pseudo game level has 54,000 polygons, which represents a fairly large level by DS or N64 polycount standards. Because it's broken up into 75 pieces and only a small number of polygons are visible at any one time, it runs at 60fps with ease, as models that are completely offscreen are quickly discarded from the transformation/render pipeline.
Also regarding collision detection, the per-poly collisions use my broadphase uniform grid, so the number of polygons that need checking is narrowed down to an average of about 60 per frame, with the majority of those being quickly discarded anyway, due to being too far away or backfacing.
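A uniform grid broadphase like the one described can be as simple as bucketing each triangle's bounding box into cells and querying the cell around the moving sphere. A minimal sketch (cell size and data layout are my assumptions, not RumbleSushi's actual code):

```python
from collections import defaultdict

CELL = 4.0  # cell size in world units; tune to roughly the query radius

def cell_of(p):
    """Integer grid cell containing point p."""
    return (int(p[0] // CELL), int(p[1] // CELL), int(p[2] // CELL))

def build_grid(triangles):
    """Bucket each triangle into every cell its AABB overlaps."""
    grid = defaultdict(list)
    for tri in triangles:
        xs, ys, zs = zip(*tri)
        lo = cell_of((min(xs), min(ys), min(zs)))
        hi = cell_of((max(xs), max(ys), max(zs)))
        for cx in range(lo[0], hi[0] + 1):
            for cy in range(lo[1], hi[1] + 1):
                for cz in range(lo[2], hi[2] + 1):
                    grid[(cx, cy, cz)].append(tri)
    return grid

def query(grid, pos):
    """Candidate triangles near pos: the contents of one cell (a radius
    query would also gather the 26 neighbouring cells)."""
    return grid.get(cell_of(pos), [])
```

The grid is built once for static geometry, so the per-frame cost is just a hash lookup, which is how the narrow phase ends up seeing only a few dozen triangles.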
The actual sphere-to-triangle collision routine is more complex than you might imagine though. A simple raycast to the polygon is not enough: you first find the intersection point, and if that intersection turns out NOT to be in the current triangle, that doesn't mean there is definitely no collision.
Why? Because the radius of your object could be touching one of the vertices or edges of the triangle, even if the ray doesn't intersect the actual face. You can probably visualise this: imagine a sphere approaching a triangle from every angle.
And checking collisions against the 3 edges and 3 vertices not only brings the checks per triangle to 7 instead of just 1 raycast, but the edge/vertex checks are more complex than the raycast, and slower.
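Those face/edge/vertex cases can be folded into a closest-point formulation in the style of Ericson's book: test the face via the projected centre's barycentric coordinates, then fall back to the three edges, which also covers the vertices since a clamped segment test returns an endpoint. A hedged Python sketch, not the demo's actual routine:

```python
import numpy as np

def closest_point_on_segment(p, a, b):
    """Clamp the projection of p onto segment ab."""
    ab = b - a
    t = np.dot(p - a, ab) / np.dot(ab, ab)
    return a + np.clip(t, 0.0, 1.0) * ab

def sphere_hits_triangle(center, radius, a, b, c):
    """True if the sphere touches the triangle's face, an edge, or a vertex."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    dist = np.dot(center - a, n)      # signed distance to the plane
    if abs(dist) > radius:
        return False                  # too far from the plane to touch anything
    p = center - dist * n             # centre projected onto the plane
    # Barycentric coordinates of p with respect to triangle abc.
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = np.dot(v0, v0), np.dot(v0, v1), np.dot(v1, v1)
    d20, d21 = np.dot(v2, v0), np.dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    if v >= 0 and w >= 0 and v + w <= 1:
        return True                   # projection lands inside the face
    # Projection is outside: the sphere may still touch an edge or vertex.
    for e0, e1 in ((a, b), (b, c), (c, a)):
        q = closest_point_on_segment(center, e0, e1)
        if np.linalg.norm(center - q) <= radius:
            return True
    return False
```

A swept version (moving sphere) adds a time-of-impact solve on top of this, which is what the Fauerby paper cited below walks through.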
So once you've found your colliding polygon, you can just adjust the velocity and exit the function, right?
No, this is the bit that'll surprise most people. You have to carry on iterating through the polygons in case you collide with multiple polygons simultaneously (which you are, a lot of the time). Not only that, but even after you've finished iterating over all the polygons in the current node, you have to do it AGAIN, using the new velocity that exists as a result of the first collision.
So you have to actually run the routine up to 5 times per object per frame, to properly implement "sliding" physics.
As you can imagine, getting the routine to run fast isn't easy.
I have got mine running very fast, as you'll see, and for those of you who are interested, maybe wanting to develop a 3D engine yourself (especially when Flash gets GPU support), the best references are this paper by Kasper Fauerby - http://www.peroxide.dk/papers/collision/collision.pdf - and Ericson's book "Real-Time Collision Detection".
I think these are the best, most concise references out there. Part of the trick to getting 3D collision detection running fast is a FAST arse broadphase, that narrows the possible collisions down with minimal overhead, and the other part is simplifying the actual collision routine as much as possible.
Anyway, here's the demo. Not only does it run at 60fps, but if you make sure nothing else is running in the browser, you should check the CPU usage too.
By default you control a Dreamcast in third person, using the WASD keys or the arrow keys.
Press Spacebar to shoot weapons, press 1 to swap the Dreamcast with a pixel art style Dreamcast, and press 2 to change to FPS style controls, where WASD is move/strafe, and the mouse rotates/looks. While in FPS mode, you can press V to enable vertical look.
Cool stuff RumbleSushi. Looks like you've nailed it. I had loads of browser windows open in Chrome and it ran perfectly. That was on a fairly bodgey old laptop as well. What's your demo coded in, out of interest?
I also wanted to post another light volume test. This one features an animated character so it's been a lot of fun to work on.
I implemented some hacky dynamic indirect lighting using a "many lights" technique. I'm currently using Unity's default deferred lights so it's still slower than it should be. I will probably make a special post-process for really cheap lights.
I like the indirect lighting, but I think it's actually distracting to do it at a granularity smaller than the blocks themselves (and the outlines on the blocks don't help). It sort of accentuates the blockiness to the point of distraction, instead of it being an interesting pixel-art-like style. Imagine if you took a pixel art game and drew tiny black outlines around the pixels - it would prevent any larger shapes from forming.
Also, if you were looking to differentiate from Minecraft, you might experiment with your core block size. Part of what makes Minecraft Minecraft is the size of the blocks - just small enough to jump on but not small enough to walk over. Big enough to build with but too big to really do physical simulation with. If you had smaller blocks you might experiment with world simulation more like this:
And then you could reasonably add vehicles: diggers, jeeps, cranes for building. . . You would probably need a smaller world than Minecraft, but who needs all that space anyway?
I think it's a really interesting type of game engine and might have potential for non-Minecraft gameplay if you play with the core tech some.
The indirect light granularity is much larger than a box; there are a max of 25 lights right now, each with a 15-block radius. Or do you mean having a block be fully lit as one solid color, rather than using smooth gradient lighting? Doing outlined pixel textures was one of the original ideas I had. I'm not convinced it's a bad idea, I just haven't tried it yet.
Having smaller-scale blocks and more physics would be cool, but very hard to do in a large world, which I do want. I can still have vehicles, I just need to give them really big wheels.
I meant how there is a core shadow, what might be AO, on each block in a stair-step. So, when you look at a slope, the stair-step effect gets accentuated. It makes your game world seem much more like it's literally built from blocks, and less like it's a blocky but coherent world. . . if that makes sense. There's blocky-as-a-style (pixel art) and there's blocky (legos). I like the pixel art aesthetic. That's just my take on it.
Maybe experiment with making the block outline a slightly lighter color than the rest of the block?
Finally got edge detect working. Tried with UV coordinates but failed every time. Got a Sobel filter working on the scene texture. Might play with that and see if I can incorporate a depth map into it. Learning more and more, and I think I can go back to my failed anaglyph post-process and redo it with what I've learned doing edge detection.
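For reference, the Sobel part is just two 3x3 gradient kernels correlated with the image and combined into a magnitude; running it over a depth buffer instead of the colour buffer gives much cleaner silhouettes. A standalone NumPy sketch of only the math (the real thing would live in an HLSL post-process):

```python
import numpy as np

# Horizontal-gradient Sobel kernel; its transpose is the vertical one.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def sobel_edges(img):
    """Edge magnitude of a 2D greyscale image via 3x3 Sobel kernels,
    with edge-replicate padding at the borders."""
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    pad = np.pad(img, 1, mode="edge").astype(float)
    for ky in range(3):
        for kx in range(3):
            patch = pad[ky:ky + h, kx:kx + w]
            gx += SOBEL_X[ky, kx] * patch      # horizontal gradient
            gy += SOBEL_X.T[ky, kx] * patch    # vertical gradient
    return np.hypot(gx, gy)
```

Flat regions come out as zero and intensity steps light up, so thresholding the magnitude gives the outline mask.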
Came up with the idea of having thumbs for objects in a Max file that you want to merge into your scene.
So using this tool, you could see each object, group of objects, or a whole layer as a thumbnail, then merge that into the scene via right-click, and maybe drag/drop the thumbnail.
Edit: Updated the image. Added some new features/options.
Got the filtering working so far. Just need to make it work better than the way I have it now... Right now you have to re-choose the Max file to merge each time you change the filter text for it to update.
Cool, thanks guys. Yeah, I was debating if this would be useful or not, but I think it will be.
The main issue with it, which isn't a big deal and is expected, is that it can take a bit to render all the thumbs if you choose Single Objects with scenes that have like 150+ objects. But it also depends on the thumb sizes you render, and other stuff. Gonna see if I can get the speed up.
I guess it would be useful if you could create a DB (database) or thumbnail zip archive file with an index of things. That way Max files could be batch-processed into small indexed files that contain the thumbnails and perhaps some INI- or XML-based info table with the stats.
There are several other scripts out there that do that in a similar way but often they are rather focused on Materials. Having a standalone tool for just that would be nice with support for FBX and OBJ next to max.
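The zip-archive variant of that suggestion is easy to sketch: pack the thumbnails and a small info table into one file per source scene. Illustrative Python only (JSON instead of the suggested INI/XML, and all names are made up):

```python
import io
import json
import zipfile

def write_catalog(objects):
    """Pack thumbnails plus a JSON info table into one archive: the
    'thumbnail zip with an index' idea. objects: {name: png_bytes}."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        # Index first, so a browser tool can list objects without
        # touching any image data.
        z.writestr("index.json", json.dumps({"objects": sorted(objects)}))
        for name, png in objects.items():
            z.writestr(f"thumbs/{name}.png", png)
    return buf.getvalue()

def read_index(blob):
    """Read back just the object list from a catalog archive."""
    with zipfile.ZipFile(io.BytesIO(blob)) as z:
        return json.loads(z.read("index.json"))["objects"]
```

One archive per .max/FBX/OBJ file would let a standalone browser show thumbnails without launching Max at all.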
Yeah... well, one issue I thought about with this is that if I DID create a database of thumbs, it wouldn't be up to date if the user deletes an object and saves their scene, unless a callback detected it and re-created the thumbs.
Right now I'm just rendering the thumbs for the file you want to merge, and storing them in RAM, not even to disk. This works pretty well with smaller files to merge, or layers and whatnot like I said, but huge scenes will take time when doing single objects...
@Matt: How are you rendering the thumbnails? Always wanted to build a database plugin that just accesses a folder and shows thumbnails of the objects. I know there are some scripts that do this for Maya... really need to get back to doing MEL or figuring out Python.
I'm just using the Render method Max has. I go through the objects, selecting them per certain parameters (visible/grouped/etc.), zoom up on them, isolate them, and then render and add the result to an array.
Then later I take that array and add it to the controls of a .NET FlowLayoutPanel.
I was saving the bitmaps to file at first, just for testing, but keeping them in memory seems to be a better choice right now, because it's faster and I don't run into I/O errors trying to delete/re-render/save bitmaps and so on.
I'm still somewhat of a noob at MAXScript, and not really a programmer in general, but I like this stuff and keep learning.
Nice! Also, isn't this the GDC demo level? Where did you get hold of it?
It should be in the UDK maps directory since the April installation. Got tonnes of work to do on this.
@BeatKitano: Thanks! I want to add a fade-off with camera distance, as far off there just tend to be dark lines. Also gonna try to work my way through some toon shaders and add that to this.
I think I will probably update my tools tomorrow, with this in the build.
I'm going to try and add another filter textbox to the menu, at the bottom, which you can hopefully use to filter the thumbs that were generated.
Actually... I was up till like 3 AM programming this, lol... But anyway, when I was going to bed I realized how I can catalog the object names/thumbs and get them back and edit them accurately, like RenderHJS mentioned...
So... I've got it all written out in English right now; it shouldn't be that hard to add, and not a ton of code.
So it will check whether a folder based on that Max file's name exists. If so, it checks the Max file's last-modified date; if that doesn't match the date stored in the INI file, or if the folder didn't exist, it tells you that it needs to catalog (or re-update the catalog of) the thumbs.
After that, if the folder exists and the file date is the same, it can just read the images in from that folder and rescale them if need be, based on the thumb size you choose. This way it should be a LOT faster after you initially catalog the Max file.
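The date check described above is a classic mtime-based cache invalidation. A sketch of the same logic in Python (file and folder names are placeholders; the real tool would do this in MAXScript against the INI file):

```python
import os

def catalog_is_stale(source_file, cache_dir, stamp_file="last_modified.txt"):
    """Re-render thumbnails only when needed: the cache is stale if the
    folder is missing, or the source file's last-modified time no longer
    matches the stamp recorded when the catalog was built."""
    stamp_path = os.path.join(cache_dir, stamp_file)
    if not os.path.isdir(cache_dir) or not os.path.isfile(stamp_path):
        return True
    with open(stamp_path) as f:
        recorded = f.read().strip()
    return recorded != str(os.path.getmtime(source_file))

def write_stamp(source_file, cache_dir, stamp_file="last_modified.txt"):
    """Record the source file's current mtime after cataloging."""
    os.makedirs(cache_dir, exist_ok=True)
    with open(os.path.join(cache_dir, stamp_file), "w") as f:
        f.write(str(os.path.getmtime(source_file)))
```

One caveat of mtime-only checks: a save that changes nothing still invalidates the cache, and some tools rewrite files without content changes, so a content hash is sometimes used instead when accuracy matters more than speed.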
Replies
Awesome.
Keen's stuff is obviously wicked as well, but that hardly needs to be mentioned.
http://www.keenleveldesign.com/pimp/moonforge/MoonForge_Wip_11.zip
JETPACKS.
http://rumblesushi.com/dreamcast.html
Cheers,
RumbleSushi
http://www.youtube.com/watch?v=TSYJWuCtW-0
blog: copypastepixel.blogspot.com
Very nice lighting by the way, the spheres especially look excellent. Is your shader written in C++?
http://www.onemorelevel.com/worldofsand.php
Looking forward to seeing what you do next.
That made me think: someone could potentially do a Minecraft-style game with the Cortex Command mechanic.
Commander Keen: awesome light rendering. I really like the CA and the soft blur at the start of the shadow.
Yeah basically, I still need to do some tweaks to make it look correct when viewed from orbit.
Can't wait!
It will make it run so much faster, making it much more competitive and a better alternative to using the Max Merge dialog for meshes.