In July last year, we introduced you to Cortex 9. In that blog post, we highlighted some of the features and software updates that users can expect from our newly released motion capture software. The most powerful motion capture acquisition and editing software available, Cortex aims to make motion capture more efficient and effective, with improvements to both appearance and functionality.
In this blog post, we’re taking a closer look at what some of these software updates can do.
Multiple Captures in action
Previously, a Cortex user could view only one capture at a time: one set of data, one subject performing a specific movement. With Multiple Captures, you can load several captures and view them all at once, making it possible to see the actions side by side in 3D. The data can also be represented graphically so that users can compare the different movements. Why is this important? Well, someone doing gait analysis, for example, would need to analyze and compare multiple examples of a person walking across the room. Perhaps one capture would have the subject walking barefoot, the next in normal shoes and the third in shoes fitted with an orthotic. With Multiple Captures, it is possible to visually compare these captures and see if and when any changes occur.
The video below provides a step-by-step account of how it works. We’ve also included a brief breakdown of the different steps below.
As the video shows, using Multiple Captures is as simple as loading a capture into Cortex. This particular capture shows a jump, with the graphs alongside the 3D view displaying the joint angles of the subject's lower extremities.
When you add a second set of data to this capture, Cortex will simply overlay this new data on top of the original capture.
You can either play the capture back at the rate it was recorded, or break it down by marking different events across the capture.
In this case, the events would be the takeoff point, when the jumper leaves the ground; the point when they land back on the ground; and finally the point at which they have absorbed the landing and are standing normally again.
If you take the time to assign these key events for all the different captures you’re looking at, it is easier to compare data at specific points across the movement. So, how do the kinematics compare at the point when the person lands on the ground again in captures one, two and three? Are there any notable differences or similarities?
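To make the idea concrete, here is a minimal sketch in plain Python with NumPy. It is not the Cortex API; the joint-angle curves and event frame numbers below are invented purely for illustration. The point is simply that once each capture has the same named events, you can look up and compare a measurement at the same moment in every capture:

```python
import numpy as np

# Hypothetical data (not from Cortex): one knee-angle time series per
# capture, each a different length because each trial takes a
# slightly different amount of time.
captures = {
    "barefoot": np.sin(np.linspace(0, np.pi, 200)) * 60,
    "shoes":    np.sin(np.linspace(0, np.pi, 210)) * 55,
    "orthotic": np.sin(np.linspace(0, np.pi, 190)) * 58,
}

# Key events assigned per capture (frame indices), analogous to
# marking takeoff / landing / standing in the software.
events = {
    "barefoot": {"takeoff": 40, "landing": 120, "standing": 180},
    "shoes":    {"takeoff": 45, "landing": 126, "standing": 190},
    "orthotic": {"takeoff": 38, "landing": 114, "standing": 172},
}

# Compare the knee angle at the landing event across all captures,
# even though the raw frame numbers differ from trial to trial.
landing_angles = {}
for name, angle in captures.items():
    frame = events[name]["landing"]
    landing_angles[name] = angle[frame]
    print(f"{name:9s} knee angle at landing: {angle[frame]:.1f} deg")
```

Aligning on named events rather than raw frame numbers is what makes the comparison meaningful: the trials are different lengths, so frame 120 in one capture is not the same moment as frame 120 in another.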
At this point, as the video shows, all of the capture data is layered on top of itself, so it can be hard to tell which dataset belongs to which capture and the information becomes difficult to read.
But it’s possible to offset the position of each capture (one to the left, one in the center and one to the right) so that you can see the information more clearly.
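As a rough illustration of what that offset does to the underlying data, here is a hypothetical NumPy sketch (not Cortex's actual API; the marker arrays are randomly generated stand-ins). Each capture is an array of 3D marker positions, and shifting each one along the x-axis separates the overlaid figures without changing the motion itself:

```python
import numpy as np

# Hypothetical stand-in data: three captures of 3D marker positions,
# each shaped (frames, markers, xyz). In reality these would come
# from the tracked subject.
rng = np.random.default_rng(1)
raw = {name: rng.normal(size=(100, 20, 3))
       for name in ("barefoot", "shoes", "orthotic")}

# Place one capture to the left, one in the center, one to the right.
offsets = {"barefoot": -1.0, "shoes": 0.0, "orthotic": 1.0}

separated = {}
for name, markers in raw.items():
    shifted = markers.copy()
    shifted[:, :, 0] += offsets[name]  # translate along x only
    separated[name] = shifted
```

Because the shift is a constant translation of every marker in every frame, joint angles and relative distances within each capture are untouched; only where the figure is drawn in the scene changes.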
The same goes for the graphical data. By assigning a different color to each dataset, you can easily tell one from another and see how the different captures compare.
Pretty cool, right?
Want to find out more about Cortex 9 and mocap? Click here.