So I've done prefabs in Unity, and from what I understand, when you turn an object into a prefab and instance it in the scene, it retains the object's information, like connections to textures and shaders, so you save on draw calls.
However, in UDK you create a prefab based on a number of objects and it puts them in a group, which you can duplicate around the scene instead of having to place each object individually if they were modular pieces.
So my question is: do UDK prefabs retain information like Unity prefabs do?
I don't want to be duplicating individual objects only to find that each one is creating its own draw calls and thus slowing down performance in UDK.
Thank you for any input.
Replies
If that's truly what Unity's prefabs do, then that's pretty badass.
Prefabs in UDK are more like...groups of objects that you can save in a UPK to place elsewhere. They're super handy for, say, placing a house with all its set dressing in a ton of places in an open world... or prefabbing a campfire together with its light, a fire FX, and an ambient sound actor or something.
The cool thing with UDK prefabs is that isolated changes in a prefab can be retained uniquely...meaning, if you place a prefab, say that campfire example from above, and then change the color of the light on the placed one, then that unique prefab will remember how it differs from the parent, even if you update the parent. And if you DON'T change an aspect, then it still maintains its relationship to the parent. For example, if you swap the fire-log mesh on the parent, the uniquely changed prefab will change its mesh too, but it will remember that you made the light a different color and keep that.
The uncool thing with prefabs is that they can be brittle, you can't copy/paste them, and they have odd pivot issues (which become very tricky when updating prefabs until you're used to the quirks). Working with them takes a lot of getting used to.
But yeah...it doesn't batch drawcalls.
I tend to avoid prefabs in Unreal where possible due to general bugginess and their annoying gigantic P icon. I tend to just group things together that form a set, unless it requires Kismet functionality. Then I am forced to use prefabs.
Any more info or opinions would be much appreciated.
The mesh(es) will be instances of the same source mesh (called the sharedMesh in Unity), and the materials are instances of the same material (called the sharedMaterial).
Also, editing any mesh or material at run-time will create a temporary copy of the edited mesh or material, although you can edit the "shared" mesh or material directly and have the changes affect everything using it.
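To make the shared vs. per-instance distinction above concrete, here's a minimal Unity C# sketch (it only runs inside the Unity editor/engine, attached to a GameObject that has a Renderer; the class name is just for illustration):

```csharp
using UnityEngine;

public class SharedMaterialExample : MonoBehaviour
{
    void Start()
    {
        Renderer r = GetComponent<Renderer>();

        // Accessing .material silently clones the material for this renderer
        // only, so this object stops sharing (and batching with) the original
        // asset from then on.
        r.material.color = Color.red;

        // Accessing .sharedMaterial instead edits the shared asset itself,
        // so the change affects every renderer that uses that material.
        // r.sharedMaterial.color = Color.red;
    }
}
```

The same pattern applies to MeshFilter.mesh vs. MeshFilter.sharedMesh.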
We aim for about 100 to 200 drawcalls max in view on mobile. The latest mobile devices can take up to 400 or 500 quite well, depending on how much post-processing you want to apply and other factors (overdraw etc.).
My way of getting Unreal PC games to work on mobile has been to place everything as individual objects in the editor, export them as FBX, attach them all to a single mesh in Max, then export that back to Unreal and have it replace the original separate meshes.
There are memory considerations to keep in mind, so you can pretty much only do this with low-poly meshes, so as not to drive memory usage up too much.
You would also preferably need to handle collision as separate meshes (not included as simplified collision within your normal meshes).
I intend to create a large in-depth tutorial next month or so on how we made Unmechanical and, to a lesser extent, The Ball work on all iOS devices, and how we made them scale back and forth between PC and a wide range of mobile devices.
That would be insanely useful to a lot of people :thumbup:
Also---there are mobile devices that can push 400-500 draws now? That's awesome...that's about what we maxed out at on Titan Quest! Crazzzzy!
Much more to come
I never benchmarked the iPad4, but we run 250 drawcalls + a fully physics-driven game + bloom and DOF + various translucent layers visible in the scene + normal maps and specular/environment maps, at 2048x1536 resolution, at 30 to 40 FPS on iPad4. The thing can really take a lot, but it is held back by the need to also support low-end mobile devices. You can't really make proper use of the power of the latest devices that way.
The difference between an iPad4 and an iPhone 3GS is huge though, so it is a big gap to bridge.
On the 3GS we run 50 drawcalls with diffuse only, limited translucency, textures half the size, limited physics, at a resolution of 340x240 or something, at 20 to 25 FPS. So you need to scale all the way from that up to iPad4.
Eh, kind of a late response, but I think what you want is a Static Mesh Proxy.
It's something like a prefab, except that it generates one single mesh out of all the meshes you selected, and also generates a custom diffuse/normal map for it.
Here's how to get it:
1: Select the meshes you want to merge, then group them (ctrl+g or right click-->Group Actors).
2: Then right click the group you just created, go to Groups-->Merge/Remerge/Unmerge Meshes.
3: The next step is kind of self-explanatory, so I won't go into depth; just select the settings you want (texture size etc.).
This will generate a single mesh out of the meshes you selected. The downside is that it will generate a single texture sheet for the new mesh, which means every individual mesh from before the merge is treated as a unique object: even if you used the same mesh a couple of times, the new texture sheet won't "instance" them, and each element gets its own unique UV space in that texture.
This tool might be handy in some cases, but for maximum efficiency I'd do what Hourences does: export the meshes out of UDK, and then merge them into a single object in another program.