Introduction to Sensor Fusion


Why do we need Sensor Fusion?

Motion sensors or IMUs (Inertial Measurement Units) are commonly used in a variety of devices these days. They are the sensors responsible for keeping track of the orientation of your mobile phone. They can also be used to detect steps, turns, jumps, and more general movement. There’s usually at least one of them in your FitBits, Jawbone UPs, Apple Watches, and smartphones. On the MetaWear platform, we currently offer 3-, 6-, and 9-axis models. More sensors (more axes) usually means we can sense more data, but sometimes it’s hard to turn that data into a useful application. That’s where sensor fusion comes in.

There are many ways of fusing sensors into one stream. Which sensors you fuse, and which algorithmic approach you choose generally depends on the use case. The goal of this article is to give you a general idea of what sensor fusion is and what it can accomplish. The detailed theories and mathematics of these algorithms are usually quite complex and will be left for later exploration in a separate article or by the reader.

Each sensor has its own strengths and weaknesses. Gyroscopes have no idea where they are in relation to the world and are prone to “drift”, while accelerometers are very noisy and can never provide a yaw estimate. The idea of sensor fusion is to take readings from each sensor and provide a more useful result which combines the strengths of each. The resulting fused stream is greater than the sum of its parts.

A 6-Axis Example

Given an accelerometer and gyroscope (6 axis IMU), how does one accurately determine the angular position of an object in 3D space? In other words, if I hold up a cube in the air and turn it in ANY direction, how do I use the sensors to tell me what the exact position of tilt is in each axis?


Problems with accelerometer

As an accelerometer measures all forces acting on the object, it will also see movement in every direction, including the force that gravity exerts on the object. Accelerometers are generally very sensitive, which can be very useful at times (for detecting a tap, for example), but can also be too noisy for other applications such as positioning of any kind. It is extremely easy for noisy data to be interpreted as real data, because over a short interval of time it is often difficult to distinguish “noise” from “real movement”.

In these cases, the accelerometer data is reliable only in the long term, after a low-pass filter has been used to treat the data. Of course, even running a low-pass filter has some disadvantages, but for the sake of this discussion, we won’t fully go into the details of such trade-offs. Finally, it should be noted that an accelerometer cannot provide “yaw” measurements (turning left or right), because gravity does not act on that axis.
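To make the low-pass idea concrete, here is a minimal sketch of smoothing noisy accelerometer samples with a simple exponential low-pass filter. The sample values and the smoothing factor alpha are invented for illustration, not taken from any particular device.

```python
def low_pass(samples, alpha=0.1):
    """Return a low-pass-filtered copy of `samples`.

    Each output blends the previous filtered value with the new raw
    sample, suppressing short spikes (noise) while still tracking the
    long-term trend.
    """
    filtered = []
    prev = samples[0]
    for s in samples:
        prev = prev + alpha * (s - prev)
        filtered.append(prev)
    return filtered

# A steady 1 g reading corrupted by one noisy spike:
raw = [1.0, 1.0, 5.0, 1.0, 1.0]
smooth = low_pass(raw)
```

Note how the spike of 5.0 barely moves the filtered output; that is exactly why the filtered signal is trustworthy only in the long term, since real fast movements are suppressed along with the noise.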

Problems with gyroscope

You might think that since angular position is basically a measurement of turns, a gyroscope might be better suited to the task, since its sole purpose is to measure angular velocity. In many cases, this is true. However, one must realize that gyroscopes only provide angular rotation speeds, not angular orientations. In order to get the actual angular orientation, the speed/velocity values need to be integrated over time.

Basically, (amount of time) * (speed of rotation) = (amount of rotation), which is exactly what we are looking for. Great!
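This integration step can be sketched in a few lines. The readings and sample period below are made-up values assuming a fixed sample rate, just to show the time * speed accumulation.

```python
def integrate_gyro(rates_dps, dt):
    """Sum (rate * dt) over all samples to estimate total rotation.

    rates_dps: gyroscope readings in degrees per second
    dt: time between samples in seconds
    """
    angle = 0.0
    for rate in rates_dps:
        angle += rate * dt
    return angle

# 10 samples of 90 deg/s at 100 Hz (dt = 0.01 s) -> 9 degrees of rotation
angle = integrate_gyro([90.0] * 10, dt=0.01)
```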

The less good news is that, because the integration over time is not exact, the measurement has a tendency to “drift”, which basically means it does not return to zero when the object moves back to its original position. Imagine a plane that rotated 45 degrees to the right, then 41 degrees left, then another 4 degrees left.

45 – 41 – 4 = 0

At the end of these 3 movements, you would expect the plane to be back at the original 0 degree position. But since integration in the real world is not exact, we might detect a 45.3 degree movement to the right, 40.9 left, then another 3.8 left.

45.3 – 40.9 – 3.8 = 0.6

At the end of these three calculations, you believe that your plane is still rotated 0.6 degrees to the right. At first, this might not seem like a big problem, but if we performed 10 rotation patterns similar to this, the plane could be off by 6 degrees! That’s starting to be very significant. You certainly wouldn’t want the plane you’re sitting in to be 6 degrees off in any direction. Therefore, the gyroscope data is reliable only in the short term, as it starts to drift over the long term.
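The drift arithmetic above is easy to reproduce. The snippet below just restates the article's numbers: each rotation pattern that should net to 0 degrees leaves a 0.6-degree residual, and the residuals accumulate over repeated patterns.

```python
# Rightward rotation is positive, leftward is negative.
measured_pattern = [45.3, -40.9, -3.8]

# Residual error left over after one pattern that should net to zero:
error_per_pattern = sum(measured_pattern)   # 0.6 degrees of drift

# After 10 similar patterns the errors have stacked up:
total_drift = 10 * error_per_pattern        # roughly 6 degrees
```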

The Complementary Filter

To avoid both gyroscope drift and accelerometer noise, we can apply sensor fusion techniques that use the gyroscope output for orientation changes over short time intervals, while using accelerometer data for correction and support over longer time intervals. Several formulas exist for this type of fusion algorithm. The most common and complex one is called the Kalman Filter. Kalman filters are quite difficult to explain and understand since they require a lot of mathematical background in Euler angles, quaternions, etc., so we will leave that for another time. We will instead focus on an algorithm called the Complementary Filter.

In pure mathematical terms, the Complementary Filter can be described by a single equation:

angle(t) = α * (angle(t−1) + ω * Δt) + (1 − α) * angle_acc

where α is the weighting constant, ω is the gyroscope’s angular velocity, Δt is the time between samples, and angle_acc is the angle estimated from the accelerometer.

In English, we can describe this equation as follows:

current_angle = (weighted_constant) * (previous_angle + integrated_gyroData) + (1 – weighted_constant) * (accData)

To further explain, let’s break down each component of the equation.

integrated_gyroData = the change in angle over a certain amount of time

Therefore, it follows that:

(previous_angle + integrated_gyroData) = a calculation of the current angle if ONLY the gyroscope was being used

In addition:

accData = a calculation of the current angle if ONLY the accelerometer was being used
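For reference, here is one common way to turn raw accelerometer axes into angle estimates, using the quadrant-aware arctangent of the gravity components. This is an illustrative sketch, not MetaWear API code; axis naming and sign conventions vary between devices, so treat the formulas below as assumptions to check against your hardware.

```python
import math

def accel_angles(ax, ay, az):
    """Return (pitch, roll) in degrees from a static accelerometer reading.

    Assumes z points out of the device face and the readings are in g.
    Only valid when the device is not accelerating (gravity dominates).
    """
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Device lying flat: gravity entirely on the z axis -> zero pitch and roll
pitch, roll = accel_angles(0.0, 0.0, 1.0)
```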

The weighted_constant is generally chosen through experimental testing as part of the tuning process. For now, we can use weighted_constant = 0.98 as an example.

This means that on every iteration of the algorithm, it calculates an angle estimate using the gyro data, then calculates an angle estimate using the accelerometer data. Finally, it takes 98% of the gyro-based angle and adds it to 2% of the accelerometer-based angle to obtain a final angle that maintains most of the gyroscope’s accuracy in the short term while keeping drift from impacting long-term performance.
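One iteration of this loop can be written directly from the equation. This is a minimal one-axis sketch with invented sensor values; a real application would read gyro_rate and accel_angle from the IMU on every tick.

```python
def complementary_filter(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One complementary-filter step for a single axis.

    prev_angle:  last fused angle estimate (degrees)
    gyro_rate:   gyroscope angular velocity (degrees/second)
    accel_angle: angle estimated from the accelerometer (degrees)
    dt:          time since the last sample (seconds)
    alpha:       weighting constant (0.98 -> trust the gyro 98%)
    """
    gyro_angle = prev_angle + gyro_rate * dt   # gyro-only estimate
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# One iteration: previous estimate 10 deg, gyro reads 5 deg/s over 0.01 s,
# accelerometer independently estimates 9.5 deg.
angle = complementary_filter(10.0, 5.0, 9.5, dt=0.01)
```

Because alpha is close to 1, the accelerometer only nudges the estimate each step, but over many iterations those small nudges pull the fused angle back toward the drift-free accelerometer reference.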

The following graph shows a sample of the performance of a complementary filter vs. pure gyro and pure accelerometer. As you can see, the complementary filter provides the most balanced result of the three techniques.

This has served as a very basic overview of what sensor fusion can do in a practical sense. This technique is great for many applications, but does run into certain limitations for certain tasks, such as gimbal lock (Apollo 11 had this problem!). In future posts, we will explore some much deeper mathematics that will help in understanding the much more complex Kalman Filter.

Resources used:

https://github.com/memsindustrygroup/Open-Source-Sensor-Fusion/

http://smus.com/sensor-fusion-prediction-webvr/

http://www.pieter-jan.com/node/11

http://www.codeproject.com/Articles/729759/Android-Sensor-Fusion-Tutorial