Remember Paperman? Disney's latest attempt to revive traditional animation in CG form. I was wondering, is it possible to recreate this technique but with 3D video games?
I was thinking the closest attempt would be a series of texture swaps, perhaps at a fixed camera angle; however, would that be too memory-intensive?
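The memory cost is easy to ballpark. Here's a minimal sketch; all the numbers are assumptions for illustration (1024×1024 uncompressed RGBA overlay textures, drawn at 12 frames per second, i.e. "on twos"):

```python
# Back-of-the-envelope VRAM cost of per-frame texture swaps.
# All figures are assumptions: 1024x1024 uncompressed RGBA8
# overlays, traditional animation drawn at 12 fps ("on twos").

BYTES_PER_TEXEL = 4            # uncompressed RGBA8
RES = 1024                     # assumed overlay resolution
FPS_DRAWN = 12                 # drawings per second

def vram_mb(seconds, characters=1):
    """Uncompressed VRAM for the unique drawn frames, in MB."""
    frames = seconds * FPS_DRAWN * characters
    return frames * RES * RES * BYTES_PER_TEXEL / (1024 ** 2)

print(vram_mb(5))      # one 5-second animation: 240.0 MB
print(vram_mb(5) / 4)  # with ~4:1 BC3/DXT5 compression: 60.0 MB
```

So even a single five-second hand-drawn sequence lands in the hundreds of megabytes uncompressed, which is why the fixed camera angle matters: it caps how many unique drawings you need.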
Replies
Why not simply do it like Ni No Kuni?
What you're talking about is intensive sprite-based gameplay. It's hella expensive to make the way you're describing.
I don't see motion capture / morph-target animation doing that, and that includes Ni No Kuni.
A) You need it to be realtime instead of prerendered.
B) You don't have a giant renderfarm like they do, I presume.
C) I think it's a proprietary method, at least for the moment.
The last part is definitely proprietary, but I'm interested in the "inbetween work", i.e. the actual drawing that shows up on the polygon.
Here's the video of the complete breakdown:
http://www.youtube.com/watch?feature=player_detailpage&v=TZJLtujW6FY#t=294s
I'm aware it's all pre-rendered but I'm asking how to "fake" parts of it in real time.
I bring up texture swaps because video games don't have any other way to alter a polygon's appearance (that I know of). Texture swaps also mimic this technique because they use pre-drawn animated assets (ironic).
Question is, what would be your platform then? I can see something like that working for RT rendering in a game engine (i.e. UDK dumping movie shots) without any stuttering, but for anything more interactive it wouldn't really be optimal.
As far as I know, engines that use deferred rendering store both normals and X-Y motion vectors in the G-buffer, along with other passes. So distorting drawings in screen space based on those channels would be quite doable, I think.
What will you draw on top of if the camera can move all over the place?
I guess you could store a drawing or two per character per animation and 'spawn' it on top of the character when the animation hits certain frames; the motion vectors would then distort that drawing accordingly.
Or maybe not; I'm speculating.
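That motion-vector idea can at least be sketched. Below is a toy NumPy stand-in (a real engine would do this in a fragment shader); the buffers, the nearest-neighbour fetch, and all values are invented for illustration:

```python
# Toy screen-space warp: advect a hand-drawn overlay using the
# per-pixel motion vectors a deferred renderer already keeps in
# its G-buffer. Pure NumPy stand-in for a fragment shader; the
# buffers below are invented test data.
import numpy as np

def warp_overlay(overlay, motion):
    """overlay: (H, W) drawing; motion: (H, W, 2) pixel offsets.
    Each output pixel fetches the drawing from where that surface
    point was last frame (nearest-neighbour reprojection)."""
    h, w = overlay.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys - motion[..., 1]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs - motion[..., 0]).round().astype(int), 0, w - 1)
    return overlay[src_y, src_x]

# A drawing with a single marked pixel, moved 2px right by motion.
drawing = np.zeros((8, 8)); drawing[4, 3] = 1.0
motion = np.zeros((8, 8, 2)); motion[..., 0] = 2.0  # +2px in x
warped = warp_overlay(drawing, motion)
print(np.argwhere(warped == 1.0))  # the mark now sits at [4, 5]
```

In a shader this is just one extra texture fetch per pixel, so the runtime cost is trivial; the open question is how well a drawing holds up once the motion field stretches it.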
3D games don't necessarily need camera freedom; in a case like Diablo 3, this could be used to create an even more hand-painterly look.
I was thinking something like the SF4 opening: http://www.youtube.com/watch?v=R8SD_M_ccac
That means that you couldn't have an animator draw on top of the entire frame. You'd have to have them draw on top of the environment in one pass, then draw on top of the characters (from every angle that's used in-game), then superimpose those two somehow. Then, if the environment has anything animated, that would have to be separated out into its own pass, and everything composited together at the end.
It's possible, but doesn't sound practical, and I think you could accomplish something very similar with real-time shaders.
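The pass separation described above boils down to the standard 'over' operator applied back-to-front. A minimal sketch with invented one-pixel premultiplied-RGBA "layers", just to show the operator:

```python
# Sketch of the multi-pass compositing described above: separate
# drawn overlays for environment, animated props, and characters,
# alpha-composited back-to-front. The arrays are invented 1-pixel
# premultiplied-RGBA "layers".
import numpy as np

def over(top, bottom):
    """Standard 'over' operator on premultiplied RGBA arrays."""
    a = top[..., 3:4]
    return top + bottom * (1.0 - a)

env   = np.array([[0.2, 0.2, 0.2, 1.0]])  # opaque drawn backdrop
props = np.array([[0.0, 0.5, 0.0, 0.5]])  # animated-environment pass
chars = np.array([[0.9, 0.0, 0.0, 0.9]])  # drawn character layer

frame = over(chars, over(props, env))
print(frame)  # final alpha is 1.0 over the opaque backdrop
```

The compositing itself is cheap; the expense the post is pointing at is authoring the character layer from every angle the game can show.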
Edit:
You may also want to take a look at Overcoat, which allows you to paint in 3D. We've essentially been doing that in games for a while with stuff like 3D-Coat. But it makes more sense than drawing on top of the entire frame when you can't tell the content of that frame ahead of time.
Might help if nothing has any textures or UVs.
Obviously being 'Real-Time' the results won't be as elegant but if the market is there I'm sure it's feasible.
But have you seen Dragon's Lair? It's an old game that was fully animated by some ex-Disney animators.
If it was possible to have the animations running while the camera is moving, that would far exceed my expectations (although someone suggested it's possible to have the animations scale based on where the camera is positioned).
Anyways, I'll put together a short cartoon to better illustrate the technique in question. Need to rest up first.
I'm not talking about the camera being static. I'm talking about the content of the shot (each frame) being an unknown. In a movie you always know what each frame has before you animate on it. You can't do that in a game because you don't know what the player will do.
Except anybody with eyes can see that Paperman is CG.
Basically, picture an RPG setting in a town. The character walks over to talk to the NPC. When the person says "Hey Dude" the camera stays there while the animation plays. When the NPC is done talking, the camera is free again.
How is the animation done? It's keyed to exactly where the camera freezes. So when the NPC says "Hey, Dude!", the position where the character is standing is where the drawings will overlap the polygons.
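That camera-lock flow could be sketched roughly as below; every name here (`DialogueOverlay`, `play_dialogue`, the pose tuple) is hypothetical, just to pin down the idea:

```python
# Minimal sketch of the camera-lock idea: the overlay drawings
# only play while the camera is frozen at the exact pose they
# were authored for. All names and structures are hypothetical.
from dataclasses import dataclass

@dataclass
class DialogueOverlay:
    camera_pose: tuple   # the one pose the drawings line up with
    frames: list         # hand-drawn overlay frames ("Hey, Dude!")

class Camera:
    def __init__(self):
        self.pose = (0.0, 0.0, 0.0)
        self.locked = False

def play_dialogue(camera, overlay):
    """Lock the camera to the authored pose, yield the drawn
    frames, then free the camera when the line is finished."""
    camera.pose, camera.locked = overlay.camera_pose, True
    for frame in overlay.frames:
        yield frame          # engine composites this over the 3D
    camera.locked = False    # NPC done talking: camera is free

cam = Camera()
hey_dude = DialogueOverlay((1.0, 2.0, 0.5), ["f0", "f1", "f2"])
frames = list(play_dialogue(cam, hey_dude))
print(frames, cam.locked)  # ['f0', 'f1', 'f2'] False
```

The key design point is that each overlay only ever has to match one camera pose, which is what keeps the number of drawings finite.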
I could very easily do what Paperman did in sprites for an RPG Maker game, but at that point, would it even be worth it? A small pixelated character walking around with several high-quality overlaid sprites? What's the point then? How much movement can I see? Are we going to get rid of physics and other particle systems to keep things simple? Make a linear gallery shooter?
Again, this is all possible through shaders in a limited way (Dürer-style patterns, etc.), but without someone drawing the 'down to detail' layers frame by frame, how exactly are we supposed to do that like they did in Paperman? They're projecting stuff in real time with vector patterns and rasterized layers, then finishing on an offline product at the end for the quality render.
Also, even if we did something like that (say the character moves an arm up from the hip), you could blend an overlay sprite texture for something like that, but at what cost? How many textures do you need? Are we going 8×8 across height and side angles? A total of 64 expensive high-quality textures, keyed to blend/morph targets, per moment for one character? And that would be the cheap option for a limited animation.
It's not feasible, even on a PC.
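The 8×8 angle-grid arithmetic above is easy to check; here's the same back-of-the-envelope calc with assumed 512×512 uncompressed drawings (both figures are illustrative, not from the post):

```python
# Checking the arithmetic above: an 8x8 grid of view angles means
# 64 drawn textures per pose per character. Resolution and format
# are assumptions for illustration.

ANGLES = 8 * 8                 # 8 height steps x 8 side steps
RES = 512                      # assumed per-drawing resolution
BYTES_PER_TEXEL = 4            # uncompressed RGBA8

def cost_mb(poses, characters):
    """Uncompressed texture memory in MB for the angle grid."""
    texels = ANGLES * poses * characters * RES * RES
    return texels * BYTES_PER_TEXEL / (1024 ** 2)

print(cost_mb(poses=1, characters=1))   # 64.0 MB for ONE pose
print(cost_mb(poses=24, characters=4))  # 6144.0 MB for a tiny cast
```

Even generous compression only buys back a factor of four or so, which backs up the "not feasible, even on a PC" call for free-camera gameplay.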
EDIT: You'll basically need someone to sit next to you and paint the canvas like in Okami and its attacks, which is just too much.
Since it's traditional animation, it could be anywhere from 10 frames to a thousand. Though mind you, I could cheat by only drawing one set of dialogue instead of animating a whole conversation.
For example, when the NPC says "Hey dude", only those words will have been animated. The rest of his speech could be a series of audible grunts or gibberish.
I also meant to answer this earlier. Since this is something that hasn't been done before, I would want to target the most powerful platform. So that would be PC followed by PS4 and Xbox 720.
Well, in that case, just render the character frames ahead of time into a giant spritesheet and fill up a couple hundred megabytes? Just prerender the frames, similar to how Disney does.
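Sampling a prerendered sheet like that at runtime is trivial. A sketch assuming a hypothetical 16×16 grid layout, played back at 12 drawn frames per second:

```python
# Indexing a packed spritesheet (hypothetical 16x16 grid layout)
# to find the UV rectangle for the drawn frame at time t. The
# grid size and frame rate are assumptions for illustration.

COLS, ROWS = 16, 16            # assumed 16x16 grid of frames
FPS = 12                       # traditional animation on twos

def frame_rect(t_seconds, frame_count):
    """Return (u, v, w, h) in 0..1 UV space for time t."""
    n = int(t_seconds * FPS) % frame_count
    col, row = n % COLS, n // COLS
    w, h = 1.0 / COLS, 1.0 / ROWS
    return (col * w, row * h, w, h)

print(frame_rect(0.0, 60))   # (0.0, 0.0, 0.0625, 0.0625)
print(frame_rect(1.5, 60))   # frame 18, second row of the sheet
```

The whole animation then costs one static texture plus a per-frame UV offset, which is exactly why prerendering trades memory for almost zero runtime cost.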