What do you find essential in a level editor?
The reason I ask is I've never used one, but I need to build one for my 3D engine.
It can be basic to start off with and develop gradually; I just want to know the essentials for making a level.
The 3 things that seem essential to me are arranging models, adding 3D sprites (trees etc), and adding placeholders for enemies/items. Those 3 are surely the bare essentials, but is there anything else I need to add that is somewhat essential to make designing a level easier?
In what part of the process are enemies and items added to a level? I would imagine once the basic construction of the level is complete, then placeholders for enemies and items are added with an identifier, then the actual enemy class is loaded in at runtime?
Cheers,
RumbleSushi
Replies
placing objects
placing object generators (spawn points, sfx etc)
drawing splines for AI (enemy paths, racing lines, etc.)
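To the runtime question above: here is a minimal sketch of how placeholder ids from a level file might map to engine classes at load time, assuming the editor exports (typeId, position) records. All the names here are hypothetical, not from any real engine:

    package
    {
        import flash.display.Sprite;

        // Sketch: the level file stores (typeId, x, y, z) records; at load
        // time the engine maps each id to a real class and instantiates it.
        public class SpawnDemo extends Sprite
        {
            private var registry:Object = {
                "enemy_grunt": Grunt,
                "pickup_ammo": Pickup
            };

            public function spawn(typeId:String, x:Number, y:Number, z:Number):Entity
            {
                var cls:Class = registry[typeId];
                var e:Entity = new cls();
                e.x = x; e.y = y; e.z = z;
                return e;
            }
        }
    }

    // Hypothetical entity types, standing in for the real enemy/item classes.
    class Entity { public var x:Number, y:Number, z:Number; }
    class Grunt  extends Entity {}
    class Pickup extends Entity {}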
These are some screenshots from an upcoming game project I did some research on:
(Flash demo: http://renderhjs.net/bbs/polycount/iso_project_engine/)
In the past I always used to write my own editors and tools, but since I discovered MaxScript there is no reason anymore to write one in Flash or inside the engine. With all the building-block types in Max (helper objects, bones, primitives, mesh, poly, ...) you can just make up the types you need for your engine, i.e. door, enemy, start point, ..., and generate an XML file, a binary file, or whatever suits you.
I would have to learn MaxScript obviously, but perhaps that would be better/easier than writing my own custom level editor good enough to make a proper game level. Does Max itself come with sufficient documentation to get up to speed with MaxScript?
The documentation is pretty good, but the easiest start for you might be the MaxScript Listener, which can record most of the actions you perform in Max (e.g. add an object, change an attribute, add a modifier, ...) as MaxScript code. So all you have to do is perform some actions manually in Max and let the MaxScript Listener record them for you.
After that you simply select the generated code and modify it slightly. To get started with all this, press F11 in 3dsMax and the MaxScript Listener should pop up. The upper red part is where it lists the recorded commands; the lower white one is the output panel for trace (aka print) commands.
From the file menu (in the MaxScript Listener window) choose File > New Script... and you should have a new script document.
Now if you write a simple command in that document, for example a one-line print statement, and then choose Tools > Evaluate All (or on older Max versions: File > Evaluate All), or alternatively just hit CTRL + E, it should trace the result in the Listener's output panel.
Well, and that's how you get started.
It will output something like an AS3 array declaration, which lets you store all the selected objects in 3dsMax in an AS3 array. Of course you can read out more than just positions, but that's usually a start.
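For illustration, a minimal sketch of the kind of AS3 such an exporter might emit; the class name, the [x, y, z] layout, and the object names are all made up:

    package
    {
        // Hypothetical output of a MaxScript position exporter:
        // one [x, y, z] entry per object selected in 3dsMax.
        public class LevelData
        {
            public static const positions:Array = [
                [0.0,   0.0,  0.0],   // player_start
                [120.5, 0.0, 48.2],   // enemy_01
                [300.0, 0.0, 96.4]    // pickup_health
            ];
        }
    }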
It's good to know that major studios do use Max as a level editor, and just customise it to their needs.
It looks like getting up to speed with MaxScript should be much quicker and more efficient than developing my own level editor that would be complete and proficient enough to do the job.
When studios use 3ds Max as part of a custom workflow, as a level editor etc., do they still use standard file formats like Collada? I.e. some placeholders would exist for enemies, then at runtime they would be loaded in as Collada files with the appropriate enemy class etc.? Or is the actual geometry exported as a custom format, much like exporting coordinates as AS3?
I was thinking it would be better to still use standard formats, to make it easier if I need freelance artists to do some work etc.
That's also something I believe most Flash 3D engines don't get: most Flash developers have a background with XML, so something like DAE feels familiar to them, but it's certainly not the best choice when it comes to loading times and parsing or initialization speed.
The way I see it you have 2 options if you want a quick-loading file format (in terms of file size and parsing/reading into arrays): either generate AS3 classes straight from Max so the data is already compiled into the SWF, or export a compact binary format and read it into arrays at runtime.
Standard formats can be nice if you depend on others, but if you are working on an in-house production this stuff doesn't matter at all; what usually counts in those areas is production speed, and that's usually achieved using proprietary scripts and formats.
I'd like to cover both options actually. My engine has been developed for in house use, but I'd like to at least have OBJ parsing (which I currently use), plus DAE and probably MD2 for when I do hire freelancers.
But for my own workflow, a custom solution seems the best way for sure, for overall workflow speed, and would also be ideal for adding custom things like disabling backface culling only on selected polygons within a mesh, or applying other custom values per polygon. A custom export would be great for stuff like that, as well as ease of import/file size etc.
And instead of parsing any standard file formats at runtime, I could just convert them to AS3 classes in advance as per your option 1, so as to become streamlined into the custom workflow.
Now it's just on to learning about rigged animation, and I can get on with actually making a game.
I'll get on with learning MaxScript too; at least I don't have to build myself a proper level editor.
Or... create an AIR/FP10.x+ application that, when the OBJ or whatever file is parsed, writes a snapshot of a class as a binary file to the desktop. You can then inject those snapshots into any future AS3 code without ever needing to parse any ASCII formats again, because it's already a compiled class. It should be heaps smaller and faster.
If you go the SWF route (i.e. artists need only a browser), all you need is to create a SWF that lets you upload an OBJ file to a server, then load and parse it, and then use the write functionality in FP 10+ to write a binary file to the desktop.
In AIR you have more power and can access files directly from the desktop, including drag and drop operations. But the exporting part would be the same.
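A minimal sketch of the FP10 save step, assuming the OBJ has already been parsed into vertex and index lists; the function name, the binary layout, and "mesh.bin" are all assumptions:

    import flash.net.FileReference;
    import flash.utils.ByteArray;
    import flash.utils.Endian;

    // Sketch: serialize parsed mesh data and let the user save it as a
    // binary file (Flash Player 10+). The layout (count + floats, count
    // + shorts) is made up and just has to match the runtime loader.
    function exportMesh(vertices:Vector.<Number>, indices:Vector.<int>):void
    {
        var bytes:ByteArray = new ByteArray();
        bytes.endian = Endian.LITTLE_ENDIAN;

        bytes.writeInt(vertices.length);
        for each (var v:Number in vertices) bytes.writeFloat(v);

        bytes.writeInt(indices.length);
        for each (var i:int in indices) bytes.writeShort(i);

        // FileReference.save() must be triggered by a user event (a click)
        var file:FileReference = new FileReference();
        file.save(bytes, "mesh.bin");
    }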
The guy who did that Sophie 3D model viewer in Flash did the same: he wrote a ZOBJ compressor that compresses and optimizes an OBJ file into a tiny binary file, all in Flash 10.
But back on topic:
If you want to export meta information such as helper objects, pointers, or other stuff that's needed in most games (models alone won't do for a game), you still need an exporter from Max, Maya, or some other editor that supports comfortable 3D navigation with a set of different object types.
I have to admit, the Sophie engine is quite funny. The guy is obviously clever enough to build a 3D engine, but also slightly insane: he has scenes with around 10 thousand polygons running at about 2 frames a second. Same with another one, Yoghurt 3D. You need to be a bit more realistic about what your engine can handle. I know it's more of a model viewer than a proper 3D engine, but still, framerate is very important.
But back on topic: I'm excited to try out level design; it looks at least as much fun as character/vehicle modelling. Now I just need to get a handle on MaxScript and CAs. I'll study rigged animation simultaneously.
http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/flash/net/FileReference.html
or a tutorial on gotoandlearn
http://www.gotoandlearn.com/play?id=76
It goes something like this: a friend once told me that you can simply typecast any class as binary, so that way you could save a class as a binary file to the file system. At a later point you might then load or embed it back into Flash; for embedding you would use an Embed tag with an octet-stream mime type, or you can load it dynamically into Flash using the given loading mechanics. I use the URLLoader whenever I want to load binary files into Flash: just set the URLLoader's data format to binary and the resulting ByteArray's endian to little endian, and you should be able to load and parse any binary data generated from MaxScript, MEL, or other tools.
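A minimal sketch of the dynamic-loading side, assuming the little-endian layout from the export sketch earlier; "mesh.bin" is a placeholder name:

    import flash.events.Event;
    import flash.net.URLLoader;
    import flash.net.URLLoaderDataFormat;
    import flash.net.URLRequest;
    import flash.utils.ByteArray;
    import flash.utils.Endian;

    // Sketch: load a binary mesh file at runtime and read it back into a
    // typed vector. The field layout below is an assumption and must
    // match whatever the exporter actually wrote.
    var loader:URLLoader = new URLLoader();
    loader.dataFormat = URLLoaderDataFormat.BINARY;
    loader.addEventListener(Event.COMPLETE, onLoaded);
    loader.load(new URLRequest("mesh.bin"));

    function onLoaded(e:Event):void
    {
        var bytes:ByteArray = ByteArray(loader.data);
        bytes.endian = Endian.LITTLE_ENDIAN; // match the exporter

        var count:int = bytes.readInt();
        var vertices:Vector.<Number> = new Vector.<Number>(count, true);
        for (var i:int = 0; i < count; i++)
            vertices[i] = bytes.readFloat();
    }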
Other interesting links:
http://permadi.com/blog/2009/05/actionscript-3-using-file-reference-to-load-local-swf/
http://aidan.rfm.co.nz/blog/2010/02/using-binary-data-in-actionscript-3/
Well, that guy has less of a background in low-poly or realtime game models; instead it seems he has a Rhino and industrial design background with NURBS and high-poly stuff. I think his engine is quite speedy for static high-poly models, but it is certainly a different direction from games and interactive stuff. There was a thread about his engine here at Polycount.
Oh and about Sophie - yep, I'm not surprised he has more of a background in high poly stuff, those scenes are probably low poly to him
Maybe I'm underrating it, but I wasn't impressed for several reasons. First, I'm a true framerate whore. If I see something running at a ridiculously low framerate, I instantly switch off. Plus, I guess the other reason I wasn't impressed is that it is just a model viewer really, no physics or movement, or interaction, or even perspective correct texture mapping.
I didn't think it was fast because the framerates are so low, but maybe I didn't take into account just how many polys are being pushed.
The link on your thread is gone - but what framerate did you get on your 60,000 poly model? (assuming around 30,000 were being rendered at any one time).
Yeah, that demo link is gone, I can't find it anymore, but it was rather good for its time. I think he used some caching tricks, because all the models in his demos are static; something like building up a culling array for certain rotation angles, or some other fast lookup scheme.
Did you test it textured, and see the speed?
I make sure to cover enough ground with my engine. I only have a Core Duo and a Core 2 Duo, both laptops, but I also send the demos to my mother, as she has a 5-year-old AMD Sempron; if my demos run OK on her computer, I assume they'll be OK for almost everyone.
I use a Core 2 Duo as a general benchmark though, as I assume that's probably the average computer these days.
For curiosity's sake, I might try a 60,000 poly model in my engine, textured, with clipping, depth sorting etc., and see how it runs.
Like I said before, I guess it was hard to gauge, because when faced with some awful framerates I didn't really know how many polygons were being pushed; I knew it was quite a lot, but not exactly how much.
Hats off to the guy, his engine is almost as fast as mine. I loaded in the same 20,000 poly model, fully textured; Sophie ran it at 16-20 frames a second, and my F10 build ran at 18-25 frames a second. Although his polys are affine mapped, no perspective correction, so for certain scenes his engine would need to push significantly more polygons than mine.
But when you say model viewer, you mean it. It's very limited: you can literally just load in 1 object and that's it; no multiple objects, no movement, no positioning, etc.
I guess for people who just want a fast 3D model viewer to show off a product etc, it's not bad.
You know, it's odd though. On the one hand I'm sure he is not using the old-school F9 way of doing things, beginBitmapFill with a mapping matrix and lineTo, because I worked tirelessly getting my Flash 9 build running as quick as possible, even using a custom depth sorter faster than array.sortOn(), and getting things quite this fast simply isn't possible without using drawTriangles and rendering polys in 1 huge batch.
But I know he is not using drawTriangles, because if you go into his gallery demo and get close to a wall, you can see the hardcore affine distortion; and it also seems he routinely uses multiple materials per object in his demos, which again slows DT down a lot (more draw calls).
So by these powers of deduction I'm 99% sure he's built a software scanline renderer in C++ and compiled it with Alchemy, which is possible, seeing as Quake has been compiled to the AVM too.
If you wanted to try something like this yourself with C++/Alchemy, bear in mind you would take a performance hit for perspective-correct rendering: as in the Quake era of software rendering, you'd recalculate the UV coords every 4th pixel or so from the pixel's Z value. It's probably best to stick with drawTriangles.
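For reference, a minimal sketch of the batched drawTriangles approach discussed here (one draw call per texture, with the uvt data giving Flash the 1/z it needs for its own perspective correction); the function and parameter names are made up:

    import flash.display.BitmapData;
    import flash.display.Graphics;
    import flash.display.TriangleCulling;

    // Sketch: render a whole mesh in one batched drawTriangles call (FP10).
    // vertices holds projected 2D coords; uvtData holds u, v and t (1/z)
    // per vertex so Flash can do perspective-correct mapping itself.
    function renderMesh(g:Graphics, texture:BitmapData,
                        vertices:Vector.<Number>,     // x0,y0, x1,y1, ...
                        indices:Vector.<int>,         // 3 entries per triangle
                        uvtData:Vector.<Number>):void // u,v,t per vertex
    {
        g.clear();
        g.beginBitmapFill(texture, null, false, true);
        g.drawTriangles(vertices, indices, uvtData, TriangleCulling.NEGATIVE);
        g.endFill();
    }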
Keep in mind that his stuff is already older, and yes, he probably uses the matrix/old drawing API, but it is still very fast. I think he has some smart caching going on, something that runs efficiently through the required vertices, because even his wireframe stuff runs rather well.
And yes, I agree it's probably best to stick with drawTriangles and the APIs provided by Adobe, because by the looks of it they will most likely be transferred to GPU rendering sometime not too far away. I take whatever is fastest; I don't care about precision as long as it runs fast, so correct z rendering is not really that important to me.
I'm sure you've realised too, you can generally avoid z fighting to some degree with level design. I get the odd bit of z fighting between moving objects and the background, but honestly, I don't care, you only see it for a second and I don't think it hinders gameplay. I just want to make proper 3D games running at 60 frames a second.
I also experimented with rendering quads, not n-gons, and for orthographic stuff it's great, essentially batching, rather than the 1 draw call per polygon that is otherwise necessary with Flash 9 style rendering.
I didn't pursue it though, because to be honest, I don't plan on making any games with orthographic views. I'm a big fan of arcade games, and I'll be making arcade-style 3D games, but pretty much always with a perspective view.
I'm really happy with the performance I've got now with my latest F10 build, but as I think you've found out too, it's a bit annoying being more limited texture wise now with DT.
With the Flash 9 way of one draw call per polygon, you can have multiple textures per model; with DT, to reduce draw calls you should be looking at multiple models per texture instead, which makes tiling tricky.
It's a bit of a nause, but I can't complain; it runs faster with perspective correction, which equates to a LOT faster than the dynamic triangulation I use on big polygons in the F9 build.
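To make the multiple-models-per-texture point concrete, here is a sketch of remapping a model's UVs into its sub-rectangle of a shared atlas; the function and parameters are hypothetical. It also shows why tiling breaks: UVs outside 0..1 would bleed into neighbouring atlas regions.

    // Sketch: squeeze a model's 0..1 UVs into its slot of a shared texture
    // atlas so several models can go into one drawTriangles batch.
    // uvtData is u,v,t triplets, as used by drawTriangles.
    function remapToAtlas(uvtData:Vector.<Number>,
                          uOffset:Number, vOffset:Number,
                          uScale:Number, vScale:Number):void
    {
        for (var i:int = 0; i < uvtData.length; i += 3)
        {
            uvtData[i]     = uOffset + uvtData[i]     * uScale; // u
            uvtData[i + 1] = vOffset + uvtData[i + 1] * vScale; // v
        }
    }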
Oh, and by the way, I've been wanting GPU support in Flash for ages. Apparently stability is the reason they chose to keep it software.
Nonsense. Unity, Shockwave and Flash are all as likely to crash as each other.
I guess with a software renderer you might have more of an idea of performance across the board. With an average computer, say a Core 2 Duo, performance is going to be similar, whereas with GPUs there is a HUGE difference between the crappy integrated GPUs in many laptops and a proper dedicated graphics card.
But still, I think Flash seriously needs to think about adding GPU support in the next year or so. With Unity picking up steam, Flash is going to need to keep up, horsepower wise, in order to maintain poly position - what do you think?
Maybe interesting, because the engine is built into Maya...
http://area.autodesk.com/gdc/ondemand8
http://engine.readyatdawn.com/home/Welcome/Welcome.html