How do those animated characters move so realistically? That might be the first question you’ve never bothered to ask about motion capture. The technology of motion capture is responsible for the actions of characters in most DreamWorks and Pixar movies, and is the basis for physics packages in many video games. But the second question you’ve never asked about motion capture is, “Can this technology be used for good?” And the answer to that is yes.
In this post, I’m going to briefly walk through the science and development of motion capture, mentioning some of the highest-profile examples of motion capture in culture and entertainment. Then I’ll tell you about a bunch of other applications beyond digital entertainment, including some really neat orthopedic research studies, a little sports-training stuff, and a quick shout-out to the swimmers.
The Concept of Motion Capture
The idea of motion capture (or MoCap to those in the know) is to take real-life 3D movements and apply them to a model. The model can then act exactly as the real person did. The series of images taken in motion-capture recordings gives real-time positions of points on a body, and smart physics-type people can take that information and apply kinematic equations and formulas to ‘describe’ human movement. In other words, not only can computer models mimic humans; by observing enough about how humans move and assigning weights, speeds, and moments to models, you can interpolate, or reasonably make up, a convincing series of motions from scratch. These are the ‘physics packages’ included in modern video games and even in some animation. Using multiple cameras, depth and precise dimensions can be computed for subjects undergoing motion-capture recordings.
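To make the interpolation idea concrete, here is a deliberately tiny sketch (the marker name and coordinates are invented, and real systems use splines and full skeletons rather than straight lines):

```python
# Toy example: linearly interpolate a 3D marker position between two
# captured keyframes -- a much-simplified version of how in-between
# poses can be "reasonably made up" from recorded motion data.

def lerp(p0, p1, t):
    """Linearly interpolate between two 3D points for 0 <= t <= 1."""
    return tuple(a + (b - a) * t for a, b in zip(p0, p1))

# Hypothetical wrist-marker positions at frame 0 and frame 10 (metres)
wrist_f0 = (0.10, 1.20, 0.50)
wrist_f10 = (0.40, 1.50, 0.50)

# Generate the in-between frames
for frame in range(1, 10):
    print(frame, lerp(wrist_f0, wrist_f10, frame / 10))
```

Chaining enough of these in-betweens together, weighted by body mass and joint limits, is roughly what a game physics package does at a far greater level of sophistication.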
The History of the Concept
The idea of using images of movement to create better, more realistic animation is older than you might have thought. The original devices were quite clever contraptions used for rotoscoping. The principle of rotoscoping involved filming someone or something moving, then tracing the subject in each frame as it was projected, thus creating a series of drawings that would also appear to move naturally.
Rotoscoping was heavily used in early animation, and it was employed in the lightsaber scenes of Star Wars: the sticks held by the actors were traced over frame by frame with a glowing effect. It can be used in conjunction with green screen to give very realistic movement to animation.
Other techniques have arisen from rotoscoping; of course there is motion capture, but the technique of “onion-skinning” also became popular in manual and digital animation. Onion-skin paper is very light and translucent. Drawing on onion-skin laid over the frames before and after allows an artist to create an in-between frame. Also, when used in final production, the onion-skin effect creates a motion blur of overlaid images in slightly different positions, which can give a melting or creeping appearance.
Motion Capture Today
Today motion capture is used to capture the movements of actors and performers in three dimensions. A subject is dressed with precisely placed external markers: usually round white balls for even visibility, sometimes patterned patches that can yield interpretable rotation data, or even infrared (IR) patches that can be worn under clothing. The subject moves freely within the view of the input cameras (usually several). The recorded points are then mapped onto a figure at positions corresponding to the external markers. This is done by creating a reference frame in the recording studio and constructing a set of corresponding axes in computational space, in real time. With multiple cameras the precise 3D location can be calculated from depth differences (like humans’ binocular vision). The cameras are held rigidly in place, and often the background is a regularly patterned flat screen which can also provide information to the computer doing the number-crunching. Essentially, the 3D space of reality is quantized into a grid of voxels (like pixels, but with volume), the markers occupy specific voxels, and all of that is recreated in the computer.
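The binocular-depth trick can be sketched in a few lines. Assuming an idealized pair of parallel, rectified cameras (the focal length, baseline, and pixel positions below are all made up for illustration), a marker’s depth falls out of the horizontal shift between the two images:

```python
# Toy stereo depth: two parallel cameras see the same marker at slightly
# different horizontal pixel positions. The size of that shift (the
# "disparity") reveals how far away the marker is -- the same binocular
# principle human vision uses.

def depth_from_disparity(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth in metres of a point seen by two rectified cameras."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("marker must appear shifted between the two views")
    return focal_px * baseline_m / disparity

# 800 px focal length, cameras 0.5 m apart, marker shifted by 40 px
print(depth_from_disparity(800, 0.5, 420, 380))  # 10.0 (metres)
```

Notice that nearby markers produce large disparities and distant ones produce small disparities, which is why marker precision degrades with distance from the camera rig.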
Some motion-capture systems are being designed to recognize relative body landmarks (elbows, arms, even facial creases) and, with an appropriate background, can give rough approximations of body movement using edge detection and background discrimination (like the camera-based tracking a Kinect uses when you play “Just Dance”).
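The background-discrimination step can be illustrated with a toy sketch (the frames below are invented 2D grayscale grids; real markerless systems layer far more sophisticated statistics and machine learning on top of this idea):

```python
# Toy background subtraction: compare each pixel of the current frame to
# a stored background frame; pixels that changed enough are flagged as
# "foreground" -- i.e. the moving body the system should track.

def foreground_mask(background, frame, threshold=30):
    """Return a 2D boolean mask: True where the frame differs from the background."""
    return [[abs(f - b) > threshold for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[100, 100, 100],
              [100, 100, 100]]
frame =      [[100, 180, 100],   # a bright object entered the middle column
              [100, 175, 100]]

print(foreground_mask(background, frame))
# middle column flagged True, the rest False
```

Edge detection is then typically run on the foreground region to find the silhouette from which limb positions are estimated.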
On the Big Screen
The applications of MoCap in movie-making have been far-reaching. Modern CGI is all informed by real movement, facial expressions, and subtle gestures. With each passing year the resolution of movement improves, and the texturing of surfaces makes the figures look incredibly realistic. For children’s movies, texturing and surface perfection aren’t as necessary, since the animated feel is desirable and production is relatively simple; for CGI characters in adult drama films, getting the fine motor movements and precise shading and layering right is key to driving the movie forward, and the effort required is much greater.
In fact, the ability of such technology to capture facial expression has produced a new, more comprehensive form of motion capture dubbed ‘performance capture’. The markers on the face of an actor like Andy Serkis in Planet of the Apes allow computers to map the relative movements and creases of facial expression. This allows a far more nuanced application of the computer-generated face. As computer vision continues to improve, markerless technologies are becoming better, higher definition, and much more popular.
Finally, in popular culture, many of us have enjoyed games that feature realistic movements of athletes. From the simple movements in Wii Sports to the spot-on motions of EA Sports games, realistic avatar activity has never been better. Addressing the way humans run, jump, swing, crouch, and even fall has given video games like Madden, FIFA, and PGA Tour an almost HD quality. For these games, capturing facial expression and surface layers isn’t as important as preserving the fluidity and mechanics of movement (though animators certainly have fun with facial expressions and celebrations). Without belaboring the point, we’ve come a long way and added literal new dimensions to gaming since the original NBA Jam.
A More Academic Usage
So all this technology has been developed, but could it do any good? Lots of neuroscience studies have been conducted using rapidly developing eye-tracking software, which determines how people observe, gather information, focus, or become distracted. Indiana University has a world-class eye-tracking lab that also does other body-movement tracking and ties the data to behavioral results.
More in line with the motion-capture technology used in movies and games, though, orthopedic groups have been tracking normal human gait with multi-camera, external-marker motion capture and then analyzing disease etiology, surgical repair efficacy, the mechanics of joint failure, and the precise physics of movement. The Hospital for Special Surgery in New York has a Motion Analysis Rehabilitation Lab which tracks patients on an ambulatory course, up and down steps, and under load. Using well-designed experiments, computational modeling, and cadaveric joint-space pressure and load measurements as validation, the lab is able to identify minute changes in load and stress in the knee tissue and note the difference between a healthy knee and a knee with degraded or damaged tissue. The lab also conducts countless other experiments assessing multiple joints and motions.
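A basic building block of this kind of gait analysis is turning marker positions into joint angles. Here is a minimal sketch (the hip, knee, and ankle coordinates are invented, and this is not the lab’s actual pipeline): the angle at a joint is recovered from the dot product of the two limb-segment vectors.

```python
# Toy gait-analysis step: estimate the knee angle from three marker
# positions (hip, knee, ankle). A fully extended leg would measure
# near 180 degrees; a deeply bent knee, much less.
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) between segments b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical marker coordinates (metres): a slightly flexed knee
hip, knee, ankle = (0, 1.0, 0), (0, 0.5, 0.05), (0, 0.0, 0)
print(round(joint_angle(hip, knee, ankle), 1))
```

Tracked frame by frame through a walking cycle, curves of angles like this one are what let researchers compare a healthy knee against a damaged one.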
All this data will hopefully help guide rehabilitation, aid in diagnostics, and even lead to preventative strategies to preserve joints and muscles longer into our old age.
Similarly, Nike created its own sports motion analysis lab. It claims to use the data gathered to engineer intelligent sportswear for consumers and athletes.
Just picking a random sport…Swimming
Hospitals can use motion capture to show differences in walking gaits and analyze movements. Can coaches use motion capture in sports with complex movements that can be hard to explain or perfect? That has been the goal with several technologies used in competitive swimming.
Qualisys has done intense work measuring the kinematics and movement of elite versus recreational swimmers, trying to find the key differences. With the line of sight obscured by a fluid interface, motion capture has always been a challenge in swimming. In the early 2000s the first commercial swim-analysis programs became available, and some elite coaches have given the technology a try. Using Oqus underwater cameras, technicians were able to measure the dives, underwater kicks, and strokes of several high-level Swiss swimmers.
Carnegie Mellon, in the US, has its own motion-capture lab, which has dedicated some time to swimming studies. The technology is very powerful, but some of the swimming participants in this study lacked the stroke refinement required to be considered ‘elite’.
Finally, NYU worked with Olympic swimmer Dana Vollmer in 2012 to go over her Olympic-champion butterfly stroke and analyze it bit by bit. Vollmer maintains a strong body position with high hips, fast arms, a strong underwater pull, and a ruthlessly efficient set of kicks to propel herself through the water with minimal resistance. The original study is very compelling, but some of the graphics provided by the New York Times really make the work come to life.
For most of us, the closest we’ll come to motion capture is watching the new Star Wars movie or playing on Xbox, but when you do that you can tell your friends you know a little more about how the technology developed, where it is used, and how it is even doing some good in hospitals, labs, and coaching facilities around the world.
alternative titles, eventually voted down:
-Catch a Wave – Some Landmarks of Motion Capture
-MoCap, Mo Problems
-I See What You Did There – the science of seeing what you did there