OK. So I am working on an animation and I'm doing morph targets. I showed the animation to this guy on a forum and he says "come on dude, do mocap, it's dead easy these days". Mind you, this guy is not even an animator from what I can tell. I posted the animation in the "general" section of a music forum, mainly because the animation is to be an animated music vid and I wanted to show off a preview. So it's not a forum full of animators, but the guy swears he knows his stuff and that mocap is easy to achieve with today's technology.
In the most basic mocap setup video I've seen (for a 3ds Max plugin called BonyFace 3.0), you still need at least 3 cameras so the markers' positions can be triangulated in 3D space.
Another guy came into the thread (he, also, is not an animator) and claims that all one needs is a single $30 webcam or laptop cam to do mocap.
Now I don't know if this is true or if these two guys were just a little arrogant and making speculations about technology.
But...
Is mocap really as easy as they say? Am I wasting my time with manual animation for a realistic 3D character when I could be doing facial mocap with my webcam?
Replies
http://www.crydev.net/viewtopic.php?f=311&t=106672&hilit=mocap
Looks easy enough...
I'm actually quite amazed mocap is getting this easy:
NuiCapture for Windows Kinect Markerless Skeletal and Facial Mocap, Depth recording software: http://www.youtube.com/watch?v=WCcMnfOnn8c
Also a word of caution...
The guy from "TrueBonez" goes ape over every new piece of software he gets his hands on. Everything is his new bestest friend, watch enough of his videos and download enough trials and you'll get a sense of how disconnected he is from reality.
Most of the Kinect software out there is cool, but typically deeply flawed in some way. Normally that means one camera (so no turning around), jitter (because it can't quite figure out the joints), and acting like you're playing a Kinect game (arms and legs spread as if you're being frisked by the cops).
With that said, you should try the stuff out and see how it works and if it suits your needs, just be careful to check his level of enthusiasm at the door when you do.
Here are two that I've checked out and gotten decent results from, not great, but decent, which is better than some of the other ones I've tried and won't bother mentioning...
iPiSoft http://ipisoft.com/ (Body mocap, can use multiple cameras, smoothing and jitter removal)
Brekel http://brekel.com/shop/ (Body and face, the guy keeps working on it, its decent...)
OptiTrack Expression (facial capture)
http://www.naturalpoint.com/optitrack/products/expression/
Faceware (amazing tech, great results, insanely priced)
http://www.facewaretech.com/
Maskarad (simple markerless facial mocap, cheaper-ish)
http://www.di-o-matic.com/products/Software/Maskarad/
FaceRobot (part of XSI)
http://www.autodesk.com/products/autodesk-softimage/features.character-rigging-and-animation
And there are a bunch of other programs ranging in quality and price and some are markerless and others have zillions of markers. Most of them pop up in a google search.
I'm curious, since you seem to have a lot of experience with budget mocap.
Let's say you wanted to get an inexpensive mocap studio going for your next game. You have to spend 1,000 USD or less. 3ds Max is your 3D app. Let's say CryEngine was what your game would be running on...
What would you personally go with? Hardware, software etc.
Other things too, like a cheap green screen setup, a homemade mocap suit?
I'm sorry if I'm asking too many questions, I've just always been curious about doing mocap but know absolutely nothing about it.
We have a dual Xbox 360 Kinect system here in the studio for simple storyboard animations. It works fine with both iPiSoft and Brekel, and some face-capture software I don't know the name of.
Again, we use it for simple viz animations and such, and it still needs a lot of cleaning afterwards.
Also, it's nothing compared to the usual system we use here at
www.Gameship.nl
have fun
I've used several off-the-shelf Kinects with iPiSoft and a bunch of other homebrew Kinect mocap solutions, and none of them required special hardware. Some did require parts of the Microsoft Kinect for Windows SDK to also be installed.
If you don't have a Kinect-to-USB adapter you'll have to get one, but a lot of the first Kinects shipped with an adapter and you can pick them up cheap, online or in most major big-box stores. I'd also suggest getting a cheap Kinect tripod; they run about $15 and really help.
There might be some legal jargon buried deep in an EULA somewhere that bars you from using anything but a Kinect for Windows, but for all the software I've played with, a standard Kinect works just the same.
Basically my question is: as far as quality is concerned, is amateur keyframe animation better than, the same as, or worse than amateur motion capture?
Thanks guys, I look forward to reading more about this stuff and how it applies to game development.
Edit: In addition, I hear talk of people using webcams. I don't imagine that's a good solution, but it made me think: is it possible to do mocap with a DSLR camera? A Nikon D3200 to be specific. Just curious ^^
Proportions:
If you have T-Rex arms and your model has the wingspan of a 747, then when you put your hands on your face, your model will be putting its forearms through its head. You have some wiggle room, but a big guy doesn't move or react like a small female, so you need to be realistic about your expectations. Yes, you can dump the animation data for a big guy onto a small female skeleton, but it will only be worth a laugh.
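A toy illustration of the proportions problem (made-up numbers, not any retargeting tool's actual method): shoulder-relative hand positions only transfer sensibly between skeletons if you scale them by the limb-length ratio.

```python
# Toy illustration (made-up numbers): why raw mocap positions don't
# transfer between differently proportioned skeletons.

def retarget_position(actor_pos, actor_arm_len, char_arm_len):
    """Scale a shoulder-relative hand position by the limb-length ratio."""
    scale = char_arm_len / actor_arm_len
    return tuple(c * scale for c in actor_pos)

# Actor with 0.7 m arms touches their own face; the hand position is
# recorded relative to the shoulder, in metres.
actor_hand = (0.10, 0.20, 0.12)

# Character with 0.45 m "T-Rex arms": copying the raw position would
# push the forearm through the head; scaling keeps the reach plausible.
scaled = retarget_position(actor_hand, actor_arm_len=0.7, char_arm_len=0.45)
print(scaled)
```

Real retargeting works on joint rotations plus IK fix-ups rather than positions, but the ratio problem above is why a big guy's data on a small female skeleton is only worth a laugh.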
The limitations of Kinect (it has a lot):
Kinect doesn't capture two people at the same time; if it can, no one has figured out how to solve two people in the same scene and create an app that outputs usable data. So something like a fight scene between two people will be tough to pull off.
Some motions are just really hard to do and are better left to people who can do them...
For example: someone throwing themselves on a mattress, worried about the impact and slightly guarding themselves, won't move the same way as someone who is trained to throw themselves around without care. Actors can commit in every sense of the word; other people can't, and that gets captured.
The Kinect doesn't lend itself to motion capture all that well, especially if you're only using one. With only one, it can only see from one perspective, so if you...
Turn your body and one leg or arm disappears behind the other, it doesn't know what to do.
It will probably freak out if you sit down, put your hands over your knees, crouch down, crumple up into a ball in any way, overlap your limbs, cross your arms, or put your hand on your face; even putting your hands in your pockets can throw it off.
That's because it's capturing depth, and when one lump sits on another lump it has a hard time figuring out what is what. So if you do something that might confuse it, it's good to get your arms out away from you so it can figure out where your limbs are again.
Motion capture, especially low-budget mocap, isn't a good replacement for a decent animator. The lower the quality of the capture, the better the animator you need.
I wouldn't use a Kinect for facial capture, ever. The resolution just isn't there; maybe Kinect 2.0 will change that. The only thing Kinect can really do is very extreme facial motion, so unless you're animating zombies trying to chew people's faces off, it's just not going to work. Pick another system.
Very few mocap systems capture hands and feet, and almost none except some very high-end systems capture finger detail, so you still need to do a fair amount of work on the hands.
Hmmm. Interesting. Could motion capture potentially be a quick base for a proper animator to then build on, adding facial animation and finger/hand/foot animation? At that point, is it worth $230 per sensor plus whatever the software costs, or would you sooner just animate it "by hand"?
ty
Mind you, the hardest part about mocap is not generating data (with today's technology like Kinect and camera rotos) but actually getting it usable. Try to figure out what output data you need to work with your package, how much you plan to touch up and clean later, and how you want to do that.
Does anybody know if there is a mocap software package that allows you to "blend" between keyframes and mocap data?
I think I heard that EA had something like that for their sports games.
I'm not sure if the software is publicly available though.
Like you guys are mentioning, it seems like mocap is a keyframing nightmare. I once opened a mocap scene and every bone had a keyframe on every frame. I would have no idea how to "clean up" something like that...
Mocap, no matter how you do it, isn't a silver bullet and you'll still need to work with it or 'post-animate' it afterwards to get the result you might be looking for.
Mocap is delivered with a keyframe on every frame, and the best way to work with this is using animation layers. Software such as Maya and MotionBuilder have excellent animation layers and enable you to work 'on top' of the mocap, which sits in the base layer.
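For anyone who hasn't used them, here's a bare-bones sketch of what an animation layer does (plain Python, not MotionBuilder's or Maya's actual API): the base layer holds the dense mocap keys, a sparse additive layer is interpolated on top, and the result is their sum.

```python
# Minimal additive animation-layer sketch: dense mocap keys in a base
# layer, sparse hand-set offset keys layered on top. Not any real DCC
# API, just the core idea behind working 'on top' of mocap.

def lerp(a, b, t):
    return a + (b - a) * t

def sample_sparse(keys, frame):
    """Linearly interpolate a sparse {frame: value} key set."""
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    if frame >= frames[-1]:
        return keys[frames[-1]]
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return lerp(keys[f0], keys[f1], t)

# Base layer: one key per frame, straight from the mocap solve
# (here just a ramp standing in for, say, an elbow angle).
base = {f: float(f) for f in range(11)}

# Additive layer: three hand-set keys pushing the pose at mid-range,
# without ever touching the dense base keys.
offset = {0: 0.0, 5: 10.0, 10: 0.0}

def evaluate(frame):
    return base[frame] + sample_sparse(offset, frame)

print(evaluate(5))   # base 5.0 + offset 10.0 → 15.0
```

The point is that your edits live in their own sparse curve, so you can tweak or mute them without destroying the capture underneath.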
When it comes to working with mocap (retargeting, blending of clips, etc.), many people, including myself, regard MotionBuilder as the package of choice. Most people I know who do a lot of mocap work use MotionBuilder, including EA, Ubisoft, Sony, Remedy, Capcom, Framestore and Weta.
As Graham pointed out, this is usually what you do with animation layers, and a lot of people use MotionBuilder for this. You could do the same in any other package, but MoBu has a great system for layering animations and also for blending two separate mocap sequences with the Story Tool (at my previous job, this saved us many times from having to re-take entire scenes).
You can do a lot of good clean-up in other packages before going into the "traditional" DCC pipeline. When you're working with Vicon systems, for example, you have Blade (their proprietary recording software), in which you need to label and clean all the mocap markers. Labelling means making sure each marker keeps its name throughout the entire sequence; markers often get mixed up when two of them come close to each other (e.g. during a handshake between two characters). Cleaning then deals with taking out measurement errors and reducing key density where it doesn't cost detail (simplest example: a static prop doesn't need 5000 translation keys).
Technically you can clean mocap data later (even after retargeting), but the sooner you do it in the pipeline, the better the result of every step afterwards.
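As a toy illustration of that key-density reduction step, here's a minimal sketch (plain Python, nothing like Blade's actual tooling): drop any key that linear interpolation between its surviving neighbours already reconstructs within a tolerance.

```python
# Simplified key-reduction pass: delete keys that linear interpolation
# between the surviving neighbours reproduces within a tolerance.
# Real mocap tools use proper curve fitting; this is just the core idea.

def reduce_keys(frames, values, tol=0.01):
    """Greedy pass: keep a key only if dropping it exceeds `tol` error."""
    keep = [0]                       # always keep the first key
    for i in range(1, len(frames) - 1):
        f0, v0 = frames[keep[-1]], values[keep[-1]]
        f1, v1 = frames[i + 1], values[i + 1]
        t = (frames[i] - f0) / (f1 - f0)
        predicted = v0 + (v1 - v0) * t
        if abs(predicted - values[i]) > tol:
            keep.append(i)
    keep.append(len(frames) - 1)     # always keep the last key
    return keep

# The static-prop example from above: 5000 identical translation keys
# collapse to just the first and last.
frames = list(range(5000))
values = [1.0] * 5000
kept = reduce_keys(frames, values)
print(len(kept))   # → 2
```

Anything that actually moves keeps its keys, because dropping them would push the interpolation error past the tolerance.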
As pretty much stated before, there are so many things involved with mocap that I'd suggest keyframing your animation if you're not planning to set up a pipeline. Otherwise it's not worth the time (unless you want to learn the process).
Thanks for the post Bellsey.
From what you and others have told us, there are several packages available for mocap today (thanks again for the links and information, guys), some of which come with special mocap hardware and some of which can just use a Kinect. Is there any software you know of that can produce mocap with just whatever existing webcam/camcorder you have? Either facial or full body?
I definitely want to learn the process of motion capture.
In many ways, it could actually be cheaper and easier to go to a mocap facility to get your moves, as they have all the expertise to help you. You can also maybe buy/use mocap libraries.
However, whatever approach you take, the first thing to look at is what you actually want to animate, what type of characters, etc. For anything that's not basically humanoid, you have to question whether it's worthwhile using mocap. But you can also use mocap for reference: if you're filming a person for reference of their actions anyway, then why not put them in a suit and get data as well? At least then you have the choice of whether to use it.
Anything that has very specific movement and technique, sports for example, is great territory, and you can get some great stuff, but only if you know what you want and tackle it right. Many people go wrong in picking the wrong person to be in the suit, and in actually directing the actor. You have to get good talent in the suit and then good direction. You cannot just get a guy in a suit, then sit back and watch it happen. Mocap, when it's done well, can be truly remarkable.
As I said previously, I wouldn't say mocap is actually easier, because in reality it's not. But it's certainly more accessible than it's ever been. Mocap is not a one-size-fits-all solution; it only works for some things.
However, if you want to generate a lot of moves in a short space of time, then there's probably nothing better. But mocap is not plug and play; there is still a lot of work to make the stuff usable, and many people get their terminology confused with things like 'clean-up', etc.
If you use a professional mocap vendor, then usually your mocap will be delivered in a format of your choice, and onto a rig you have supplied (very often your own production rig).
Animations are delivered on a clean rig (no IK/Controls) and with a keyframe on every frame.
The data at this point should already be 'clean'. Clean-up is done at the actual capturing and solving stage, using specific software, before the mocap is delivered to a client. Clean-up involves optimising the capture data: ensuring it's clean, that no markers are obscured, and that the data can be solved correctly. This is normally done by the technicians at the mocap studio, not by animation TDs or animators on a game project. Some studios do have animators do some of this work as well, but it should mostly be done by the guys at the mocap studio.
Any animation done after the mocap hits the production rig is 'post-animating', and for this you do need an animator. Guys who do this are not clean-up monkeys. They are working with and manipulating the mocap to make it work for their shots. Sometimes it works straight out of the box, other times it needs some work, but it takes someone with good animation skills to do this, make the right calls, and then produce animation to the expected quality.
Really? Perhaps I need to take a more in-depth look at the current budget mo-cap scene. I picked up a 360 Kinect a few months ago, and wouldn't mind using it for some quick-and-dirty mo-cap. I'll look into that USB adapter you mentioned. I keep my ambitions humble for advanced animations, but any help through mo-cap would drastically reduce the time it takes to cook up animations.
Max does have animation layers, and Motion Mixer is almost on par with some features inside MotionBuilder. I still prefer MotionBuilder, but for someone who only has access to Max, you're not totally screwed. Biped and CAT both have good layer systems along with decent key-reduction tools.
Still, if you have MotionBuilder, go with it. They've taken large chunks of MotionBuilder and stuffed them into Maya as HumanIK, which is freakin' awesome.
I'm not sure we should call Kinect capture motion capture.
One other drawback to using the Kinect is that it doesn't handle subtle motion all that well. Most of the software out there works fine if you're doing big gestures and always moving, but if you stop moving or do something small, it tends to take a trip to jitter-town. Some tools have "jitter removal", but that then kills most of the subtle animation, which is a big part of why you want to do mocap in the first place: to get all of the subtle motion that can be hard to animate.
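The jitter-removal tradeoff is easy to demonstrate. A simple moving-average filter, one common de-jitter approach (real tools vary), flattens a small deliberate movement almost as thoroughly as it flattens noise:

```python
# Why naive jitter removal eats subtle animation: a moving-average
# filter can't tell a small intentional motion from sensor noise.

def moving_average(signal, window):
    """Box filter; clamps the window at the ends of the signal."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        chunk = signal[lo:hi]
        out.append(sum(chunk) / len(chunk))
    return out

# A subtle one-frame 2-degree head turn amid otherwise still frames:
subtle = [0.0] * 10 + [2.0] + [0.0] * 10

smoothed = moving_average(subtle, window=7)
print(max(smoothed))   # the 2-degree move is crushed to ~0.29 degrees
```

The same filter that removes jitter spikes removes the small motion you were hoping to capture, which is why heavy smoothing defeats the purpose.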
In addition to having trouble capturing the head, feet and hands, the Kinect also has trouble with shoulders. With most of the software that uses the Kinect, it's very hit-and-miss with the shoulders, which is a pretty big deal. People who think they can do without shoulder animation quickly find out why they need it...
Kinect capture will handle 12-14 bones out of a 64-90 bone rig.
64 bones in a rig with 2 toe bones and no metacarpal bones but full fingers.
90 bones in a rig with fully articulated fingers and toes.
Things you will have to key by hand:
Shoulders (2 bones)
Head (1 bone)
neck (1-4 bones)
Hands (2-12 bones)
Fingers (28 bones)
Feet (2 bones)
Toes (2-30 bones)
Total: 38-79 out of 64-90.
If kinect capture was capable of giving you the middle finger it would, ha.
I'll keep my expectations low. But I already own the Kinect anyhow, so it wouldn't hurt to give it a quick spin. Especially if I'm sticking to some of the hobbyist projects and not dropping fat cash on commercial software packages.
It struck me that a fine process for this sort of low-budget mo-cap would be capturing individual gestures and using the software to splice them together after the fact, NLA-style. There are a lot more options these days for that kind of approach, even to the point that they're integrated into game engines themselves (such as Unity's new Mecanim animation system).
I could use the Kinect to block out some quick, rough animations for arm gestures, clean them up in the 3D software, and then incorporate them into the game engine, adjusting the blending for more or less exaggeration. And I'm not too worried about the acting itself: I have a background in theater, so I'm aware of the difference between acting for the stage and acting for the screen. You can't be subtle on the stage, and I'm guessing the same holds true for mo-cap (especially budget mo-cap).
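That splice-and-blend idea can be sketched in a few lines (toy values, not Mecanim's or any engine's actual API): overlap the tail of one captured gesture with the head of the next and crossfade across the overlap.

```python
# Sketch of NLA-style splicing: crossfade the tail of one captured
# gesture into the head of the next over a blend window. Toy numbers;
# engines like Unity's Mecanim do this (and much more) internally.

def splice(clip_a, clip_b, blend_frames):
    """Overlap the last `blend_frames` of clip_a with the first of clip_b."""
    out = list(clip_a[:-blend_frames])
    for i in range(blend_frames):
        t = (i + 1) / (blend_frames + 1)          # ramps 0 → 1 across window
        a = clip_a[len(clip_a) - blend_frames + i]
        b = clip_b[i]
        out.append(a * (1 - t) + b * t)           # linear crossfade
    out.extend(clip_b[blend_frames:])
    return out

wave = [0.0, 10.0, 20.0, 30.0, 40.0]   # end of a "raise arm" gesture
point = [60.0, 50.0, 40.0, 30.0]       # start of a "point" gesture

blended = splice(wave, point, blend_frames=2)
print(blended)
```

In practice you'd blend per-joint rotations with proper easing, but the overlap-and-weight structure is the same.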
@Mark, how much do you think the Kinect 2 will improve over the Kinect in this area? Resolution goes from 480p to 1080p, USB3 instead of 2 (and about 30ms less latency), and a bit better depth resolution. At least from the stats I've seen.
Will this really improve 'Kinect Capture', will it still be almost unusable, or is it more of a 'wait and see' type deal?
But I was wondering about this; I saw this posted on Reddit. It's a shot from the new Planet of the Apes movie. It looks like they are wearing mocap suits with wires? Is this a new form of mocap that can just track your joints without the need for cameras?
If those stats are true it will be pushing a lot more data, which is great, but working with that data could be daunting. They might make the necessary leap in hardware but will the homebrew software people be able to keep up? Maybe but I really hope MS helps out. If they do it right they can also dominate the secondary market that helps feed games into the first. Some people like making games just as much as playing them, get those people the tools they need to make the games they want to make and you might have a glut of low cost games.
To cut a long story short, and generalising a lot, there are two types of mocap systems: optical and non-optical.
Optical is the one most people are probably familiar with - someone wearing a suit of markers with lots of cameras around the room.
Non-optical includes what are called inertial systems, which have inertial sensors attached to the mocap suit. Inertial systems use gyroscopes to measure rotational rates, which are then transmitted wirelessly to a computer, where the motion is recorded.
Both systems can work very well, but what can be good about inertial systems is that you don't need cameras so they can be used outdoors, like what's in the pic.
Xsens is a good example of an inertial system.
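For the curious, the core of an inertial system is just dead reckoning from those rotational rates. Here's a stripped-down single-axis sketch (real suits like Xsens integrate full 3D orientation with quaternions and correct drift using accelerometers and magnetometers):

```python
# Bare-bones idea behind inertial mocap: integrate gyroscope rate
# samples over time to track a joint's orientation. Real systems do
# this in 3D with quaternions plus drift correction; this single-axis
# version just shows the principle.

def integrate_gyro(rates_deg_per_s, dt):
    """Accumulate angular-rate samples into an orientation track."""
    angle = 0.0
    track = []
    for rate in rates_deg_per_s:
        angle += rate * dt
        track.append(angle)
    return track

# 1 second of samples at 100 Hz: swing the forearm at 90 deg/s,
# then hold still for another second.
samples = [90.0] * 100 + [0.0] * 100
angles = integrate_gyro(samples, dt=0.01)
print(round(angles[-1], 2))   # → 90.0
```

Because the angle is accumulated, any constant bias in the gyro readings integrates into drift over time, which is exactly why the real systems need those extra sensors for correction.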
Hmm, that is interesting. I had seen some other shots that looked like they were using camera-recording suits for similar shots.
http://www.dailymail.co.uk/tvshowbiz/article-2307222/Andy-Serkis-sports-motion-caption-suit-set-latest-Planet-Of-The-Apes---Keri-Russell-makes-debut.html
Since they do a lot of on-set motion capture in different lighting conditions, they're probably using a mix of mocap suits and can't always rely on camera-based motion reading. If anyone is going to break new ground in the field of mocap, it's probably going to be on this movie, ha.
Some interesting links about the first movie, mostly fluff but still gives you some look at the different suits they've used over the course of the whole production.
http://filmdrunk.uproxx.com/2011/08/hyperbole-check-did-rotpotas-motion-capture-really-create-the-missing-link
http://www.thedailyzombies.com/2011/08/rise-of-planet-of-apes-final-preview.html
http://www.ign.com/videos/2011/12/06/rise-of-the-planet-of-the-apes-breaking-motion-capture-boundaries
Microsoft's 343 Industries used Kinect and iPiSoft. Personally I think they did a great job of papering over its flaws, and hopefully they will push MS to open it up more as a development tool.
http://www.fxguide.com/quicktakes/mo-cap-on-a-budget-halo-4-terminals/
http://thesequencegroup.com/case-studies/halo-4