IMUs and quaternions #128
Replies: 3 comments
-
Thanks for your message.
I agree, but I think it also depends on the purpose of the system: depending on the application, we may not need that much accuracy. I am new to this field, but as far as I have seen, there are currently two big trends in IMU-based MoCap systems:
1. "Expensive" and quite accurate systems for labs, for instance the Xsens MTw Awinda. These systems already include PC software to process and "clean" the IMU data using biomechanical models (avatars), etc. In fact, when you buy this type of system, the expensive part is the software, not the sensors themselves. These systems aim to compete with marker-based optical systems (e.g. OptiTrack).
2. Cheaper systems using small wireless sensors with internal storage and Bluetooth (wearables), for instance the Xsens DOT. In this case, the manufacturer supplies only the sensors and SDKs to develop apps to control them. They provide the raw inertial data (accelerometer, gyroscope, magnetometer) but also orientation data (quaternions) for the sensor. There is no PC software to process this data, no biomechanical model, etc.; it is up to you how you process/correct the data.
The advantage of this second type of IMU-based system is that it allows collecting data during real-life activities, outside the lab, for several hours. It would be equivalent to attaching smartphones, smartwatches, etc. all over your body... but better :-) Of course, you don't get the PC software with the avatar, so the orientation data might not be as "accurate". Nevertheless, note that the orientation data supplied by the sensors has already gone through a sensor-fusion algorithm to correct drift, etc., and in the case of Xsens, this seems to be quite good. So although the data you get is not perfect (compared to an OptiTrack), it is completely usable for several types of applications, such as measuring ranges of motion, detecting activities, etc.
When I wrote my request, I was thinking of this second type of IMU-based motion capture system (which is the one I am using for my research, by the way). So instead of having a file with timestamps and marker coordinates, I have a file with timestamps and segment orientations (quaternions). Using a simple skeleton model (segments and joints), for instance 3 DoF at the shoulder, 1 DoF at the elbow and 3 DoF at the wrist, and assuming I know the segment lengths (distances between the joints), I want to calculate the joint positions and angles and visually see the skeleton moving. The steps you suggest for the tutorial seem correct; I would just modify the first assumption:
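To make the idea concrete, here is a minimal sketch (plain scipy, not ktk code) of computing joint positions from segment orientations and lengths for a three-segment arm chain. All names, lengths and the assumption that each segment's longitudinal axis is its local -y axis are made up for illustration:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical three-segment chain with assumed segment lengths (m).
segment_lengths = {"upper_arm": 0.30, "forearm": 0.25, "hand": 0.08}

# Global orientation of each segment at one time sample, as scalar-last
# (x, y, z, w) quaternions, e.g. read from the IMU file.
quats = {
    "upper_arm": [0.0, 0.0, 0.0, 1.0],  # identity: segment points along -y
    "forearm": Rotation.from_euler("z", 90, degrees=True).as_quat(),
    "hand": Rotation.from_euler("z", 90, degrees=True).as_quat(),
}

def chain_positions(quats, lengths, origin=np.zeros(3)):
    """Accumulate joint positions down the chain.

    Each segment's longitudinal axis is assumed to be its local -y axis
    (pointing distally when the subject stands in N-pose).
    """
    local_axis = np.array([0.0, -1.0, 0.0])
    positions = [np.asarray(origin, dtype=float)]
    for name in ["upper_arm", "forearm", "hand"]:
        direction = Rotation.from_quat(quats[name]).apply(local_axis)
        positions.append(positions[-1] + lengths[name] * direction)
    return np.array(positions)

# Rows: shoulder, elbow, wrist, hand tip.
joints = chain_positions(quats, segment_lengths)
```

Repeating this at every timestamp yields a trajectory of joint positions that can then be visualized like any marker data.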
Instead of that assumption, it should be: the relative orientation between the segment frame and the sensor frame is known and constant (it does not change over time). Typically, when working with IMUs, you first have to perform a sensor-to-segment (StS) calibration to find the relative orientation of the sensor frame with respect to the segment frame (assuming this does not change over time). There are many ways to do it, but the typical one is to ask the person to adopt a specific posture (T-pose, N-pose, etc.) for which the orientation of the segments is "known", and to capture the orientation of the sensors at that precise moment. This calibration is later used to obtain the orientation of each segment from the orientation of its sensor at each instant.
If you don't perform this calibration and instead assume that both frames (sensor and segment) are perfectly aligned, you just set this StS transformation (calibration) equal to the identity. In addition, even if you assume that sensor and segment are perfectly aligned, very often the axis names of the sensor and the segment do not match (e.g. you want the longitudinal axis of the body segment to be the y axis, but the sensor manufacturer has decided that the longitudinal axis of the sensor is the x axis), so including this constant sensor-to-segment transformation in the workflow to correct for this is useful.
I will try to prepare a simple tutorial showing all this and share it with you on Colab to get your opinion. Best regards
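In code, the calibration step could be sketched like this (a toy example with made-up orientations; the convention R_segment = R_sensor * R_sts and the assumption that the segment frame coincides with the global frame during the calibration posture are illustrative choices, not a prescribed method):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Known segment orientation during the calibration posture (here we
# assume the segment frame coincides with the global frame in N-pose).
R_seg_calib = Rotation.identity()

# Sensor orientation recorded at that same instant (made-up value that
# mimics a sensor mounted with its axes rotated relative to the segment).
R_sensor_calib = Rotation.from_euler("zyx", [5.0, 90.0, -2.0], degrees=True)

# Constant sensor-to-segment rotation, from R_seg = R_sensor * R_sts:
R_sts = R_sensor_calib.inv() * R_seg_calib

# Later, at any time t, the segment orientation is recovered from the
# current sensor orientation with the same constant correction:
R_sensor_t = Rotation.from_euler("zyx", [25.0, 90.0, -2.0], degrees=True)
R_seg_t = R_sensor_t * R_sts

# Skipping the calibration amounts to setting R_sts to the identity.
```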
-
Thanks for your explanations and clarifications. Your option 2 (simple skeleton model) seems cool and would make a really nice tutorial, probably in the "kinematics" section (https://kineticstoolkit.uqam.ca/doc/kinematics.html). In this section, we go from basic visualization of full marker sets to reconstructing missing markers and probing. I guess the logical next step in this tutorial sequence would be to construct complete sets of markers based on IMUs.
So let me summarize what we would have in this tutorial: Step 1. Create point clusters based on anthropometric measurements, expressed in local coordinate systems (e.g. for the thigh cluster: at minimum the knee centre and hip centre). At that point, we have a rough estimate of every joint centre, and we can process it like any motion capture data and visualize it in the Player.
I have never performed such a reconstruction and I'm very curious (will the avatar be strangely skewed due to error accumulation?), but this interests me a lot for sure, if only to see the end result. Thanks for starting to draft something on Colab, and kudos if you manage to install kineticstoolkit, because it requires python>=3.8... While you're starting it, let me still ask a few questions:
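As a rough sketch of Step 1, assuming each segment pose is available as a 4x4 homogeneous matrix: virtual points (here a hypothetical thigh cluster with a made-up length) are defined once in the segment's local frame, then mapped to the global frame at every sample. All names and values are illustrative:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical thigh cluster: virtual points in the thigh local frame
# (origin at the hip centre, -y pointing distally), homogeneous coords.
thigh_length = 0.40  # assumed anthropometric measurement (m)
cluster_local = {
    "HipCenter": np.array([0.0, 0.0, 0.0, 1.0]),
    "KneeCenter": np.array([0.0, -thigh_length, 0.0, 1.0]),
}

def local_to_global(T, points_local):
    """Map homogeneous local points to the global frame using the
    segment's 4x4 pose T."""
    return {name: T @ p for name, p in points_local.items()}

# Example pose: thigh rotated 30 degrees about the global x axis, hip
# centre at (0, 0.9, 0) -- values made up for illustration.
T_thigh = np.eye(4)
T_thigh[:3, :3] = Rotation.from_euler("x", 30, degrees=True).as_matrix()
T_thigh[:3, 3] = [0.0, 0.9, 0.0]

cluster_global = local_to_global(T_thigh, cluster_local)
```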
I'm loving this discussion! 😁
-
I'm just wondering if you're still up for starting a quick tutorial for your needs, or if your needs have changed in the meantime. Thanks for letting me know!
-
Original message from @rogergallart in #100:
Hi @rogergallart
Thank you for this very interesting idea! It's true that IMUs are more and more popular and Kinetics Toolkit should include methods to work with quaternions, which it really doesn't at this point. I guess there are two steps for this:
Add such functionality to ktk
The first step would be to add functions in the ktk.geometry module to convert quaternion+translation pairs to homogeneous matrices and vice versa. This won't be difficult using scipy.spatial.transform.Rotation.from_matrix(matrices).as_quat() and scipy.spatial.transform.Rotation.from_quat(quaternions).as_matrix().
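As a sketch, such conversion helpers might look like the following (the function names are hypothetical, not actual ktk.geometry API; note that scipy uses scalar-last (x, y, z, w) quaternions by default):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def quat_translation_to_matrix(quat, translation):
    """Build a 4x4 homogeneous matrix from a scalar-last (x, y, z, w)
    quaternion and a 3-element translation."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat(quat).as_matrix()
    T[:3, 3] = translation
    return T

def matrix_to_quat_translation(T):
    """Inverse operation: extract (quaternion, translation) from a 4x4
    homogeneous matrix."""
    return Rotation.from_matrix(T[:3, :3]).as_quat(), T[:3, 3].copy()

# Round trip: 90 degrees about z, translated to (1, 2, 3).
T = quat_translation_to_matrix(
    [0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)], [1.0, 2.0, 3.0]
)
q, t = matrix_to_quat_translation(T)
```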
Write a tutorial
This is where I will need your input, because at this moment I'm not playing much with IMUs.
My (limited) experience is that it may not be that easy; IMU axes are not necessarily aligned with anatomical axes, and their attitude usually drifts over time. Advanced systems like Xsens do correct misalignment and drift using an avatar, but they do it with complex modelling and continuous zeroing techniques. Therefore, I don't really know where to start.
An idea would be a tutorial with strong assumptions such as:
Would it be sufficient?
In that case, to be coherent with KTK's representation of poses as matrices instead of quaternions (mainly to reduce the learning curve for new users), the methodology would be to:
Would this tutorial be appropriate? In that case, we are not reconstructing points using anthropometry; we are only working on transforms from one IMU to another. It's OK if I'm completely wrong and I don't understand what you need, just tell me 😁!