Tips on templating and how it can reduce post-processing time

You could save a lot of time by identifying your marker data in Live Mode instead of Post-Processing Mode. All it takes is a little feature we call “templating”.

Surprisingly, many customers don’t even know what templating is. And those who do often don’t realize the benefits it offers to their motion capture process. 

Cortex uses a template to identify markers. The template is a collection of links, and those links define the allowable distances between markers. Using the links, the list of markers, the marker order, and the relative locations of unnamed markers in the volume, Cortex assigns an identity to each unnamed marker the cameras see, so that the data is usable in post-processing.
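As a rough mental model (this is an illustrative sketch, not Cortex’s actual template format), a template can be thought of as a table of links, each with an allowable distance range between two named markers:

```python
# Conceptual sketch of a marker template: a set of links, each with an
# allowable distance range between two named markers. The names and
# numbers here are illustrative, not Cortex's actual format.
import math

template_links = {
    ("L_HIP", "L_KNEE"): (0.38, 0.46),   # allowable distance range, meters
    ("L_KNEE", "L_ANKLE"): (0.36, 0.44),
}

def link_fits(template, a_name, b_name, a_pos, b_pos):
    """True if the distance between two candidate markers falls inside
    the allowable range the template defines for that link."""
    lo, hi = template[(a_name, b_name)]
    return lo <= math.dist(a_pos, b_pos) <= hi

# An unnamed marker pair 0.42 m apart is a plausible L_HIP-L_KNEE link:
print(link_fits(template_links, "L_HIP", "L_KNEE", (0, 0, 1.0), (0, 0, 0.58)))  # True
```

In practice Cortex combines many such constraints at once, but the core idea is the same: a pair of unnamed markers can only take on a pair of names if their distance fits the link.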

If you take the time to build a great template, you can apply it during a live recording, which prevents you from having to identify markers in post processing mode.

When using a template in Live Mode, you should always use “New Subject”, a tool in Cortex available in both Live and Post-Process Mode. Creating a robust template and using the New Subject feature scales the previously created template to fit each new subject, which eliminates the need to recreate the template in post-processing for every subject.

Being able to use New Subject to fit the template in Live Mode also saves the user from spending a large amount of time identifying markers in Post-Process Mode.

Let’s say you have two people you’re recording data on - one of them is 5ft and one is 6ft. You would use New Subject to fit the "Robust Template" or "Golden Template" to the 5ft person. The beauty is, the same Robust/Golden Template will also work on the 6ft person.

Templating offers Cortex users a huge benefit in that you can get identified data as soon as you've recorded it in Live Mode, which will reduce the amount of work you need to do in Post-Processing Mode.

Here are our top three tips for building a template:

  1. Make sure you’re starting with a static capture and then extending the template with a range-of-motion capture that’s representative of the dataset you’ll be using the marker set on. For example, if you’re using it for gait analysis, make sure you extend the template using gait data.
  2. Ensure that “New Subject” is used between each subject that you extend the template on. Scaling the template first to fit a new subject will ensure any extensions made afterwards will encompass only the range of motion of that specific subject and not the difference in size between the current and previous subject.

  3. Extend the template for multiple subjects, because different subjects will move in different ways, for example in gait analysis, various people (subjects) will have different gaits. You need to make sure the template encompasses the range of motion that you would expect from your full sample size.

And remember: building a template takes time (but is well worth the effort), so if at first you don’t succeed, you can always contact one of our legendary customer support team members for help. 

To find out more about templating, read our thorough and insightful guide here.

Cortex 9 has arrived: key features and updates of our latest software

Picture having motion capture software that enables you to automate repetitive tasks and batch process a set of captures. Or using an inverse kinematic endoskeleton that is able to very closely mimic human movement with very little subject preparation.

These are just two of the features of our newly released Cortex 9 motion capture software, designed to make your motion capture work more efficient and effective. 

Here are five of our favorite features in the latest version of Cortex:

1. Activate Dark Mode

Who said mocap software has to be boring? This fun feature takes the traditional Cortex color scheme to a new, modern level with “Dark Mode”.

The process is simple: navigate to Tools>Colors and select the Dark Mode checkbox at the bottom of the window.

2. Take the “work” out of your Workflow 

The Workflows panel in our Cortex 9 motion capture software provides a swift means of automating repetitive tasks and can be set up to include any number of functions in Live Mode or Post Process mode: from setting up for a Live collection to batch processing captures. 

The workflow can be saved and applied during different capture sessions, by different users to maintain a consistent protocol. 

How to create a workflow? 

3. Two skeletons are better than one

We’ve already raved about the Ikendo skeleton in a previous post, so it should be no surprise that it would be one of the features of our Cortex 9 motion capture software. This skeletal modeling option consists of a scalable humanoid rig that can be driven by six Active Marker Rigs (AMRs) each with six degrees of freedom. Given the AMR prop information, intermediate segments, and joint constraints, Ikendo is able to very closely mimic human movement with very little work on your part. 

While we provide generic examples to get you started, the Calcium retargeting option provides a way to use Ikendo with any character you wish. 

4. Fill in the gaps with 6D Prop Join

This post-processing tool interpolates the position and orientation of a prop, which eliminates the need to fill marker data to calculate a prop skeleton. The function can be applied over a selected frame range for a selected prop or all props.

5. Turn up the heat (map)

A coverage “heat map” has been added to the Camera Coverage viewing option in the 3D Display Properties. Regions with greater overlap in camera coverage are shaded green, with less covered areas changing from yellow to orange to red.
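Conceptually, the shading is a threshold map from camera-overlap count to color. The thresholds below are invented for illustration; Cortex’s actual scale may differ:

```python
# Illustrative mapping from camera-overlap count to a heat-map color,
# mirroring the green -> yellow -> orange -> red scale described above.
# The thresholds are made up for this sketch, not Cortex's internals.
def coverage_color(num_cameras_covering):
    if num_cameras_covering >= 4:
        return "green"    # well covered: many cameras see this region
    if num_cameras_covering == 3:
        return "yellow"
    if num_cameras_covering == 2:
        return "orange"
    return "red"          # poorly covered or not covered at all

print(coverage_color(5))  # green
print(coverage_color(1))  # red
```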

And there’s more where those came from...

Okay, so we’ve impressed you with just a few of our favorite additions to our Cortex motion capture software, but we’ve also implemented some key updates to features like the “Live Mode Dashboard” and  “Devices Panel”, and made the following additions and updates: 

Want to experience all the great features and updates of Cortex 9?

Our latest and greatest Cortex 9 motion capture software is available for download or on a Cortex 9 DVD delivered via express mail (upon request). 

There is a Cortex 32-bit version and a Cortex 64-bit version available. They are both included in this release and available to all customers under warranty or with a current software maintenance contract.

If you’re interested in changing to Cortex, we’d love to give you a demo of our software. 

Our motion capture technology paves the way for safe, accurate gas leak detection

When gas leaks occur - for example in warehouses, environmental emergencies, or search and rescue operations - it is very important to be able to locate the source of the leak in order to take fast and effective countermeasures. This is usually done using animals, which is very costly and puts human and animal lives in danger.

Using robots to detect gas sources in 3D

Chiara Ercolani is a Ph.D. student at the Distributed Intelligent Systems and Algorithms Laboratory (DISAL) at École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. Chiara and her team believe that using robots to detect gas leaks is the best way to save lives.

One of the team’s most recent projects involved 3D gas source localization using a micro aerial vehicle (drone). 

In this experiment, they used a Crazyflie flying robot (drone) with a VOC sensor to sense gas particles in the air. Their goals were to:

  1. Determine the best system design
  2. Understand the impact of various environmental factors on gas source detection
  3. Identify the best 3D gas source localization method: Ultra-Wide Band localization vs. Motion Capture System localization

Wind tunnels, gas plumes and drones

The experiment was conducted in a wind tunnel facility (16 × 4 × 2 m³) with laminarized wind flow of adjustable speed. An electric pump was used to disperse a mixture of ethanol and air in the wind tunnel to create the gas plume.

A view of the wind tunnel with gas source, flying drone, and Motion Analysis Kestrel 1300 cameras.

Gas dispersion is a 3D phenomenon, so to be effective in 3D gas source localization, the team needed to use platforms capable of 3D motion tracking.

The team experimented with sensor location, performance under various environmental conditions, and two localization strategies. The full results can be viewed in this short video summary.


For the purposes of this article, we will focus on the third aspect of the experiment, namely determining what effect each of the following localization methods would have on the outcome of the gas source localization:

  1. Ultra-Wide Band (UWB) localization system with 8 beacons
  2. Motion Capture System (MCS) with 13 cameras

Motion Capture System localization outperforms Ultra-Wide Band localization


While UWB localization is easier to deploy and cheaper, this experiment makes it clear that using a motion capture system offers better performance under all tested environmental conditions. The drone’s movements are clearly much cleaner and smoother than those obtained when a UWB system is used for localization. As a result, the drone is faster and more efficient. 

These results show that MCS localization is more accurate than UWB localization. We also see that the drone’s movement is much more efficient when the MCS is used.


Motion Analysis is proud to be associated with this innovative project

Our team has been working with the DISAL lab for many years, so we were pleased when Chiara got in touch to purchase a second motion capture system from us. They now have 13 Kestrel 1300 cameras and use Cortex to process their data. 

Here’s what Chiara has to say about working with Motion Analysis:

“When we needed to purchase a second motion capture system, Motion Analysis was our first choice. Not only is the system great to use, but the sales team is always very helpful. They helped us to envision the system setup within the wind tunnel and we are in regular contact with them for tips and advice.”


To learn more about the work that Chiara’s team is doing, please visit

To learn more about the motion capture system and the powerful Cortex software, please visit our solution overview page.

To request a demo, please get in touch here:  

5 helpful Cortex features you may not know about

Did you know that our Cortex software enables you to merge multiple captures? Or that if your gait analysis subject strikes two force plates at the same time, Cortex can combine them, whether they’re the same make or not, to formulate the data you need? 

Whether you’re a long-time user of our powerful Cortex software or you’ve recently started using it, there are a number of features you may not know about that will make your 3D motion capture experience that much easier. 

We’re going to talk through just five of our favorite Cortex features, but you can always launch the full manual, available in Cortex via the “Help” button on the menu, to find out more. 


1. Easy camera coverage with just a few clicks

When you are positioning cameras to cover a capture volume, you need to know how many cameras you’ll need and the areas each must cover. To check whether your cameras can cover your desired volume without wasting pixels, you can use virtual camera aiming (seen in the image below), which allows you to reposition and orient your camera in the virtual world. 

Bonus feature: Cortex also comes with a free VolumeBuilder App, which will enable you to work out whether extra cameras are needed. You can find it in the same folder as Cortex.exe.

Cortex software easy camera coverage


2. Ready, steady, renumber your cameras

Let’s say you’ve just bought 42 of our new cameras and you’ve carefully written a number on each camera and placed them around the room, but then, to your frustration, the numbers on the camera displays are not showing in the order you’ve put them up in.

Don’t fear - you can renumber them afterwards in the software, choosing to label which is Camera 1, which is Camera 2 and so on, in any order you want. The cameras will change their display number and follow the order you set, saving those details into the setup file. 



3. Manual masking no longer has to be a task

When you’re doing a calibration and the camera sees something bright and shiny that appears to be one of our markers but isn’t, it will cause problems at the calibration stage. Usually the masking system allows you to manually draw a mask around the stray bit of light, to get it out of the way for the calibration process.

Our cameras, however, have an automask feature, which automatically draws the smallest possible masks around shapes, making the process quicker and easier. 

What you may not know is that if you load an old setup, you won’t need to go through the arduous process of deleting existing masks. Automasking deletes those for you before adding new ones.


4. You determine the (prop) origin story 

A prop is a special marker set for rigid objects such as drones, robots, swords, or guns. With Cortex, you can move the origin of that prop and set it to be anywhere you like. 

If you need the local origin of a prop to be at a specific place, you can choose for it to align with the global origin (very useful for animation skins that need to align with a subject); to sit at the geometric average of the marker positions; to sit at marker number one (you define which that is); or anywhere else if these shortcuts aren’t what you need. So all you have to do is create your 3D sword in your software package, leave the wooden toy version lying on the floor with its handle at the origin, set the template so the prop’s origin matches the capture volume origin, and all your alignment happens for free. 


5. Never miss out on a moment, with post-event trigger

During the recording or streaming phase, you might be looking to capture the specific moment that causes your subject to slip and fall, and the worst thing that can happen is that you miss the event. This often happens in motion capture as events can be so brief and fleeting. 

Fortunately, computers can buffer data. This means that instead of leaving your system recording all day and later poring over hours of data to find the event, Cortex can continuously buffer data before an event and write it to disk after you start or stop the rest of the capture. 

We call this post-event triggering. And what it means is that the system will keep in memory, for example, two seconds of data at a time. When you press “start” to begin recording, the system will save those two seconds and then continue recording until you press stop again. This will give you two plus however many seconds of data you had it running for. The best part is you can do it after the event has occurred. 
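The buffering idea can be sketched with a simple ring buffer. The frame rate, buffer length, and function names below are illustrative, not Cortex’s internals:

```python
# Minimal sketch of post-event triggering: a ring buffer keeps the last
# N seconds of frames in memory, so pressing "record" saves what happened
# *before* the trigger as well as after. Frame rate and buffer length are
# illustrative values, not Cortex's internals.
from collections import deque

FRAME_RATE = 120                 # frames per second (example value)
PRE_EVENT_SECONDS = 2

buffer = deque(maxlen=FRAME_RATE * PRE_EVENT_SECONDS)
recording = []
is_recording = False

def on_frame(frame):
    """Called for every incoming frame, whether or not we are recording."""
    if is_recording:
        recording.append(frame)
    else:
        buffer.append(frame)     # old frames silently fall off the front

def start_recording():
    """Trigger: keep the buffered pre-event frames, then record live."""
    global is_recording
    recording.extend(buffer)     # the two seconds before the trigger
    is_recording = True

# Stream 300 frames, then trigger, then capture 60 more: the result
# includes 240 buffered pre-event frames (2 s at 120 fps) plus the rest.
for f in range(300):
    on_frame(f)
start_recording()
for f in range(300, 360):
    on_frame(f)
print(len(recording))  # 300 frames: 240 pre-event + 60 post-trigger
```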

You can also leave it streaming in live mode, and then use post-event trigger based on something you’ve seen happening in real time, in order to get the recording. 


Considering switching to Motion Analysis and interested in experiencing our Cortex software? 

Contact us here for a demo

How to select the best mocap technology for your needs

Whether you’re an animator wanting to create superior 3D characters; an engineer interested in evaluating how workers interact with machinery; a producer looking to utilize virtual and augmented reality in a broadcast studio; or a clinician wanting to assess walking deficiencies, there are a few factors you need to take into consideration when choosing mocap technology.

The best way to do this is to be armed with knowledge, so you know exactly what to look for. The first step is to understand the different needs of the various industries mocap is used in: an animator might be able to get away with a lightweight system that has a quick and easy setup, but a large broadcast or film production studio might need something more powerful. 

Read through the list below and find your industry, then take note of the most important factors to look out for. Finally, work through our checklist of questions that you should answer before approaching any vendors. 


Animation and gaming

If you’re going to be developing stand-out 3D characters for film or for gaming, you’re going to want mocap technology that can capture complex movement, facial expressions, and realistic physical interactions with low-latency results, which can be rendered in a physically accurate manner. 

This means your software should allow for segmental modeling and real-time streaming and use real-time previsualization data to inform and accelerate animation decisions.

You’ll want a system that consists of two Skeletons: one constrained by markers to match the mocap subject, and one that matches the animator’s rig, and you’ll want to produce the cleanest, highest quality data. It would be beneficial to have a system that offers a quick and easy setup for 3D motion capture and a larger amount of animation data to be produced, in order to improve efficiency and reduce overall production costs.


Movement analysis and biomechanics

Within the field of 3D motion capture for movement analysis or biomechanics, there is a broad range of requirements. For example, a simple clinical balance study may need minimal tracking devices but need kinetic data from force plates, whereas high-level gait analysis, with enhanced foot modeling, may need a hundred or more markers and simultaneous force plates, EMG, and video vector overlays. 

You should be looking for fully integrated motion capture technology with the ability to capture and track the subtle movements of sports equipment, individuals, and entire teams, improving and extending the limits of athletic performance. 

Because of the range of needs motion capture may be used for in movement analysis, it would be beneficial if you could choose from a variety of out-of-the-box applications or have the ability to customize the system to meet those needs.

But most importantly, when it comes to movement analysis, fidelity of data is key. Therefore precise data collection and instant translation of that data is imperative and should be a top feature you look for in a system. 


Broadcast

If you work within a larger studio for broadcast purposes that requires the use of many cameras, efficiency and productivity of mocap technology is key. You need to equip your team with unparalleled camera tracking and mocap quality, speed, and usability, with a system that can integrate with all major graphics software solutions.

The system you should be looking for should be able to track an unlimited number of cameras, as well as the performers’ locations simultaneously, and should allow for precise real-time tracking of broadcast studio cameras to create live VR or AR sets quickly and efficiently.

Ask vendors whether their software is able to calculate the optical axis and nodal point of each studio camera, a feature which will dramatically shorten the camera calibration process by aligning the virtual camera with the studio camera. You’re going to want all your studio cameras to be able to work from the same global coordinate system, so that you can map multiple cameras. 


Engineering, robotics, and manufacturing

If you’re using motion capture to research expandable solutions for military, engineering, robotics, product design, and manufacturing, you’re going to want to shorten development cycles and reduce costs as much as possible. 

In order for this to happen, you need a system that’s integrated with a wide range of control systems, with a VRPN server for streaming real-time data to a range of engineering, simulation, and virtual reality applications. 

To effectively identify and correct ergonomic issues that lead to injury or design more effective ergonomic products, you’ll want software that can provide accurate and precise low-latency measurements of kinetic movement and body angles.

And lastly, it would be beneficial if you could find a system that can track anywhere from one subject to an entire squad, in order to cater to the range of potential applications you may need it for.


Now that you know the type of features you should be looking for in a mocap system, based on the industry you’re using it in, here are a few other considerations you need to think about before you talk to vendors:


Find out more about the systems Motion Analysis has to offer by booking a demo with one of our customer tech support staff today. 

A career motivated by customer support

For Emily Schaefer, our recently appointed Director of Product Development and Customer Support, there are a number of things she loves in this world: travelling and being outdoors; spending time with her toy poodle, Stanley; wine-tasting; golfing; and making sure that Motion Analysis customers are her daily priority. 

“Most often, I work with customers, doing tech support, training, installation and more. The most important aspect of my job is establishing a connection with the customer and providing them with a strong foundation in using their motion capture system so they can successfully and efficiently apply the system in the way they need.”

Her journey with Motion Analysis began in 2016, and although she was impressed with the  intuitive and transparent nature of the software (she had previously used Cortex during an internship), she was most excited about the opportunity to be in a customer support role, and to work with customers in all industries, get a chance to meet people using the system and see the environments in which they use it.

“While my background might be in biomechanics, and I’ve often been onsite with people in biomechanics, I’ve also been able to get a glimpse into the world of virtual reality, game development, animation, and broadcast through having a customer-facing role.”

Even in the midst of a pandemic, the opportunity to provide customer support is what helped her to transition to a new way of working. 

“It took me some time to get into a rhythm at home and claim a space where I could work undistracted. I have missed seeing coworkers each day, but calls with customers have helped make working at home feel less isolating than it would have felt had I not been in a customer-facing role.”

Emily recently took part in a biomechanics webinar hosted by Motion Analysis distributors in Japan, and throughout the process, her biggest takeaway has been the customer support opportunities - being able to connect with people, some of whom she initially installed the system for in her previous role as Support Engineer. 

“It has been a fun and interesting process to hear the success stories and learn of the ways in which they are using the system and completing projects of their own,” says Emily. “At the end of the day, the customer is what defines success for me in my job, and helping them accomplish their goals, making sure the product will address their needs, and anticipating future needs is one of my biggest motivators at work.” 

You can watch the full webinar below:

It’s no surprise that Emily’s biggest piece of advice for any future Motion Analysis colleagues is to “continue putting the customer first and continue pushing the envelope to give the customer the best product we possibly can.”

This is the fourth in our “Meet the Team” series, introducing you to all the incredible people who make up the Motion Analysis family. 


Two skeletons, half the work: the smart tech hidden in our software

You’ve heard about Cortex - our most powerful software yet, which offers a complete set of tools for motion tracking and editing - and you’ve heard about BaSix Go, our new, easy-to-use software with a setup time that takes under a minute. 

But do you know about the groundbreaking hidden tech inside our software? 

For decades, animators using mocap data have had to find methods to convert the 3D-tracked positions of markers into the motion of a humanoid skeleton rig. Previous methods included vector algebra - better suited to biomechanists who need rigorous, repeatable, and reproducible data, as well as a method that can calculate joint-centred angles - and global optimisation, the method most animators will want to use for motion capture, as it allows them to import a Skeleton with fixed-length bones that can be scaled up and down to fit the bone lengths needed for the character they want to animate. 

Ikendo skeleton


But Motion Analysis has developed a third way to capture human motion. We started asking the question: What if there were two Skeletons? 

And so Ikendo was born. 

Building on Calcium Solver’s global optimisation method, Ikendo makes use of Motion Analysis’s AMRs (Active Marker Rigs) and can be used with both our classic Cortex software or our more affordable BaSix system. It consists of two Skeletons: one constrained by markers to match the mocap subject, and one that matches the animator’s rig. 

Meet the Ikendo Skeleton 

The first Skeleton has well-defined feet, head, hands and a root segment (usually the pelvis), and the markers keep accurate track of those segments. Because of the immense precision of the tracking we can then calculate the intermediate segmental positions. For example, we would know the exact position of the pelvis, and the exact position of a foot, but we can use further information - such as the knee being a hinge, the hip a ball joint and the ankle a universal joint  - to calculate where the thigh and shank must be to fit the data. This means we can calculate all of those joint centres - not just create a missing marker, but actually calculate all of those segments -  for a very well-defined, specific Skeleton.
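The hinge-joint reasoning above can be sketched as classic two-bone inverse kinematics in 2D. Everything here (the positions, the segment lengths, the planar simplification) is illustrative; Ikendo’s actual solver works in 3D with additional joint constraints:

```python
# Sketch of the intermediate-segment idea in 2D: given tracked hip and
# ankle positions plus fixed thigh/shank lengths, treat the knee as a
# hinge and solve for its position (two-bone inverse kinematics).
# All numbers are illustrative; this is not Ikendo's real solver.
import math

def solve_knee(hip, ankle, thigh_len, shank_len):
    dx, dy = ankle[0] - hip[0], ankle[1] - hip[1]
    d = math.hypot(dx, dy)                        # hip-to-ankle distance
    # Law of cosines: distance along the hip->ankle line to the point
    # directly "under" the knee.
    a = (thigh_len**2 - shank_len**2 + d**2) / (2 * d)
    h = math.sqrt(max(thigh_len**2 - a**2, 0.0))  # knee's offset off that line
    mx, my = hip[0] + a * dx / d, hip[1] + a * dy / d
    # Pick the solution on one side of the line (a knee bends one way).
    return (mx + h * dy / d, my - h * dx / d)

hip, ankle = (0.0, 1.0), (0.0, 0.1)
knee = solve_knee(hip, ankle, thigh_len=0.5, shank_len=0.5)
# The solved knee sits exactly one thigh length from the hip and one
# shank length from the ankle, as the hinge constraint requires.
print(knee)
```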

We then import the user’s Skeleton and use virtual markers that are created with our internal Skeleton to act as the “springs” which drive their Skeleton.


The secret to BaSix Go’s quick setup time

Our BaSix Go software has always been powered by Ikendo; we’ve just finally put a name to the smart tech at the core of this system. 

By separating the retargeting from the inverse kinematics of the tracked skeleton, marker placement is simplified, and this enables us to produce our famous one-minute mocap setup. 

We have programmed the character in the Ikendo Skeleton into a T-pose, which basically allows the system to compute how long your arms and legs are, enabling us to then scale the skeleton. Of course, this endoskeleton has to be scaled to agree with the subject being tracked, but all that requires is a simple click of the mouse. 
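The T-pose scaling idea can be sketched in a few lines. The marker positions and the template’s reference length below are made-up values, not Cortex or BaSix internals:

```python
# Sketch of T-pose scaling: with the subject standing in a T-pose, limb
# lengths can be read straight off the tracked rig positions and turned
# into a scale factor for the template skeleton. The positions and the
# template reference length are illustrative values.
import math

# Tracked 3D positions (meters) of two Active Marker Rigs in the T-pose:
left_hand = (-0.85, 1.45, 0.0)
waist = (0.0, 1.0, 0.0)

TEMPLATE_ARM_SPAN = 0.80   # hand-to-waist distance of the unscaled skeleton

measured = math.dist(left_hand, waist)
scale = measured / TEMPLATE_ARM_SPAN
print(f"measured span {measured:.3f} m, scale factor {scale:.2f}")
```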

AMR markers are automatically identified out of the box as soon as they are switched on. The mocap performer simply needs to attach the six AMRs onto their hands, feet, waist and head, stand in a T-pose and we are able to sync. Nothing more is needed for the tracking.

BaSix set up is quick and easy


Ikendo not only makes the software simpler and quicker to use, but it lowers the overall cost of using the mocap tech and is compatible with all Motion Analysis streaming partners. 

Want to find out more about our systems, and how Ikendo could make your mocap experience much simpler? 

Book a demo with us here

What is motion capture for biomechanics?

From analysing the movement of dancers, to developing an improved basketball shoe, to rehabilitating wounded soldiers, biomechanics is a vital field of work in research centers, universities, and hospitals all over the world.

And those involved in the field of biomechanics - whether it be animal biomechanics, sport biomechanics, or industrial biomechanics - will agree that motion capture is an important tool to provide them with accurate and precise data when assessing the movement of their subjects.  

So what is motion capture for biomechanics? Let’s find out a bit more: 

Click here for a printable pdf of the above infographic.

Want to find out about the motion capture technology that could improve your biomechanics research?

Book a demo of our solutions today. 


How our mocap tech is benefitting biomechanics research at the University of Lincoln

The University of Lincoln has been using Motion Analysis software in their biomechanics department since 2011, benefiting from a 12-camera system. 

Dr Franky Mulloy joined the UoL in 2014 and, after being impressed with the performance of the mocap tech that was already being used, set out to acquire the funding to buy a second and then a third camera system. The University now has a total of 29 motion capture cameras operating in its two labs. We caught up with him to find out more about the work he does and how his experience with the Motion Analysis software and the buying process has been.

Q: What are your specialisations and/or areas of interests within the field of biomechanics? 

My passion revolves around three areas of interest when it comes to the world of biomechanics:  human interaction with equipment; coordination in complex skills; and dance biomechanics, both from a performance and injury prevention perspective.

Q: Can you describe some of the work you’re currently doing? 

Working with UK police forces, we found that the police have a high prevalence of neck problems (76% of nearly 400 officers surveyed) and upper back problems (84%) in the last year alone. This was attributed to the body armour and tactical vests which are required equipment for an operational officer. In response to these findings, I established a 3-year funded Knowledge Transfer Partnership (KTP) with Arktis, a global tactical vest manufacturer that equips the majority of the UK police forces. Working with Arktis, and my post-doctoral researcher Dr Matthew Ellison, we have identified how we can alter tactical vests to better coordinate with the torso and head to reduce neck and back strain in dynamic activities. 

I’m also currently working with my other post-doc, Dr Olivia Brown, on another 3-year KTP. This focuses on the science to underpin trampoline designs with a global trampoline manufacturer, Plum Products Ltd. Alongside this, prior to the pandemic, I completed a 7-month longitudinal study focusing on core ballet skills with dancers. I used biofeedback to reduce injury risk in leap landing and enhance skill development in jumps and single leg balances. Safe to say, it has been a busy couple of years!

Q: Why did you start using Motion Analysis tech? What problem/pain points did you need the tech to solve? 

Motion Analysis has a variety of versatile and customisable software options. These work for us as a research group because we deal with large data sets for a range of different uses. The quality of data, ability to change a lot of capture settings (e.g. tracking options), and software interface are all hugely beneficial to the work I do as it allows me to have complete control of the data input and output, and avoid any ‘black box’ use. We also love that Motion Analysis tech allows for the integration of multiple technologies with the system, giving us greater flexibility for industry engagement to support research and development processes.

Q: What features of the tech have you benefited from the most and what are they used for? 

I mostly use standard motion capture to track kinematics, with integrated force plates. I also synchronise Delsys systems to incorporate EMG and EMG decomposition. I regularly tweak the tracking settings as it allows me to undertake varied data collections for different applications, but with clear kinematic tracking. This is helpful, because being an effective biomechanist requires precise data, and by being able to customise and manipulate different settings we can ensure we get reliable and accurate data. 

We also make use of the Sky Script coding feature to provide live biofeedback, and for batch processing large sets of data across multiple trials (we’re talking close to 5000 trials). It’s very time-consuming to clean up large sets of data and export force files for each, but being able to script batch processing means it can run autonomously. 
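In spirit, that kind of scripted batch processing looks like the sketch below. The file layout, extension, and function names are hypothetical placeholders, not the actual Sky Script API:

```python
# Hedged sketch of batch processing over many trials: loop over capture
# files and run the same cleanup and export steps on each. The ".cap"
# extension and the function names are hypothetical, not Sky Script's API.
from pathlib import Path

def clean_trial(path):
    """Placeholder for per-trial cleanup (gap filling, filtering, etc.)."""
    return f"cleaned:{path.name}"

def export_force_file(result, out_dir):
    out = out_dir / (result.split(":")[1] + ".force.txt")
    out.write_text(result)
    return out

def batch_process(trial_dir, out_dir):
    out_dir.mkdir(exist_ok=True)
    exported = []
    for trial in sorted(trial_dir.glob("*.cap")):   # hypothetical extension
        exported.append(export_force_file(clean_trial(trial), out_dir))
    return exported

# With thousands of trial files on disk, one call processes them all
# unattended:
# batch_process(Path("trials"), Path("exports"))
```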

Q: How has the support been from Motion Analysis since using their tech? 

We have received excellent support from Motion Analysis. I remember once we had an issue which resulted in a kinetic data lag that we were unable to fault find. We would be collecting data and it would randomly be a few seconds behind, and then sometimes drop out completely. One of the Motion Analysis technicians got back to us within 24 hours and remote accessed our lab computer for two days straight until the problem was solved. On top of that, they have also sent their team to train and upskill some of the staff here, even our very experienced users, and we’re always kept up to date with information about the software we currently use and any new software being launched. I’ve had only good experiences with this team.

If you’re interested in experiencing how Motion Analysis software could benefit the work you’re doing in biomechanics, or learning more about our solutions, request a demo today!

How to build a career in motion capture for animation

USD 266 million by 2025.

That’s how much the 3D motion capture system market is projected to be worth, according to a recent report by MarketsandMarkets™. But becoming a mocap specialist in a booming industry is no easy feat, as it’s guaranteed to be highly competitive. However, if you’re willing to put in the work, you could land yourself a very lucrative career, and not just as a mocap animator.

What professions does a career in motion capture include?

There is a misconception that the industry only requires someone to move around in a lycra suit while someone else records the movement on some mocap software. But whether it’s being used to animate a performance or to conduct a gait analysis on a pair of basketball shoes, motion capture requires a number of professionals across a wide range of job roles.

A motion capture career has endless possibilities

Because motion capture is a growing and constantly evolving industry, it’s the perfect space to learn on the job and acquire new skills as you go. You may enter the industry working in pre-production, and within a few years find yourself in a post-production role.

From working with actors, operating digital 3D cameras, or developing new software, to creating visual texture for the scenes, editing footage or even being in front of the camera in one of those lycra suits, there are plenty of opportunities for vertical and horizontal growth in a motion capture career.

Building up your mocap skill set

In terms of specific skills, if you’re interested in the creative side of motion capture, it’s beneficial to enter the industry with some formal training and competence in animation, editing, or character rigging. Computer skills, interpersonal skills, design skills and experience with camera equipment would also work in your favour. Many motion capture companies will seek out candidates who have degrees in Computer Animation, Media Arts and Animation, Graphic Design, or Visual Communications.

But if you dream of becoming a motion capture actor, you’d be more likely to land the job if you could show formal training and experience with acting, dancing or movement.

And for those who love the more technical side of the mocap industry, a degree or experience in software engineering, computer science, biomechanics, or film production will go a long way in helping you to kickstart your motion capture career.

A motion capture career isn’t just for animation

You may have all the skills needed to take on the competitive motion capture industry, but perhaps you have no interest in working in broadcast or animation. Thankfully, motion capture is not limited to just working with actors on a film set, or animators in a video studio.

You also have the opportunity to apply your skills in the sport and medical fields. In medicine, motion capture is being used across a variety of specialties to help medical professionals more accurately assess and treat their patients. And in the sports world, motion capture tech is used to analyse a variety of movement factors in an athlete - from their physical condition to their athletic performance - and to identify how the athlete can improve their technique, posture, accuracy, speed and balance.

The best place to start building your mocap career

If you feel overwhelmed by all the possibilities and options available to you in the exciting world of motion capture, start small, and grow as you go. Seek out opportunities to work as an assistant - whether technical or creative - and learn as much as you can about the various job roles involved before pursuing the one of your choice. No matter which direction you choose to go in, getting a foot in the door will be a smart career move in a fast-growing industry that shows no intention of slowing down.