Wednesday, March 30, 2011

Beginners Kalman Filtering Theory, Part 4

Process Noise

If we know that the constant really is a constant, then what we did before is just fine, but we might as well just do an average. Suppose, however, that our constant isn't really constant. Suppose it has a step in it. If you think about the filter as an averager, an averager never forgets. The filter result will end up as a weighted average of the measurements before the step and the measurements after.



So, what we can do is give the filter a little uncertainty, a little nagging doubt, about its previous results. We tell it that its estimate of the estimate error covariance is a little bit wrong. We create a little bit of doubt that the system really is a rock solid constant. We add process noise.

Restating the Kalman filter in all its glory, let's reduce it again, but this time keep the process noise term; a quick sketch of these steps in code follows the list.

  1. <x^_i->=[A]<x^_{i-1}>
  2. [P_i-]=[A][P_{i-1}][A']+[Q]
  3. [K_i]=[P_i-][H']([H][P_i-][H']+[R])^-1
  4. <x^_i>=<x^_i->+[K_i](<z_i>-[H]<x^_i->)
  5. [P_i]=([1]-[K_i][H])[P_i-]
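
Here's a quick sketch of those five steps in code, assuming numpy; the names (A, H, Q, R and friends) just mirror the symbols above, and the whole thing is a sketch rather than anything from a real filtering library:

    import numpy as np

    def kalman_step(x_hat, P, z, A, H, Q, R):
        """One full cycle of the filter: x_hat and P are the previous estimate and covariance."""
        # 1. Propagate the state estimate through the model
        x_minus = A @ x_hat
        # 2. Propagate the covariance and add the process noise Q
        P_minus = A @ P @ A.T + Q
        # 3. Kalman gain
        K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)
        # 4. Correct the estimate with the measurement residual
        x_hat = x_minus + K @ (z - H @ x_minus)
        # 5. Shrink the covariance to reflect the new measurement
        P = (np.eye(P.shape[0]) - K @ H) @ P_minus
        return x_hat, P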

Once again, a cancelfest (a scalar sketch in code follows the list):

  1. x^_i-=x^_{i-1}
  2. P_i-=P_{i-1}+Q
  3. K_i=P_i-/(P_i-+R)
  4. x^_i=x^_i-+K_i(z_i-x^_i-)
  5. P_i=(1-K_i)P_i-
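
For our scalar constant, those five steps collapse to almost nothing. A sketch of the same thing, with everything a plain number:

    def scalar_kalman_step(x_hat, P, z, Q, R):
        # 1 & 2. The model says "it stays put", so only the covariance grows
        x_minus = x_hat
        P_minus = P + Q
        # 3. Kalman gain
        K = P_minus / (P_minus + R)
        # 4 & 5. Blend in the new measurement and shrink the covariance
        x_hat = x_minus + K * (z - x_minus)
        P = (1.0 - K) * P_minus
        return x_hat, P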

Now the estimate covariance never collapses to zero. In fact, the predicted covariance P_i- never decreases below the process noise covariance Q, which is what we would expect. If the process really is uncertain, our estimate can never be more certain than the process itself.

Let's sic this on the true constant first, to see what we give up:


The final estimate is 0.5055V±0.0520V. Naturally our uncertainty is a lot worse, since the filter now believes that the process really is uncertain about what its state is. It weights the previous estimate quite a bit less than it did when the process was rock solid. As it turns out, once the filter converges, it weights the previous estimate about three times as heavily as the current measurement, so in a sense its memory only extends back about three measurements. The filter in this case is a moving average of the recent past, in which older measurements' weights decay exponentially.
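
We can check that "about three times" claim directly. At steady state the covariance stops changing, so steps 2, 3 and 5 together force P_i- to satisfy P_i-^2 - Q*P_i- - Q*R = 0; solving that quadratic for our Q and R gives the converged gain and uncertainty. A quick back-of-the-envelope check:

    import math

    Q, R = 0.001, 0.01   # the process and measurement covariances used above
    # Steady state of the scalar filter: P_minus**2 - Q*P_minus - Q*R = 0
    P_minus = (Q + math.sqrt(Q*Q + 4*Q*R)) / 2
    K = P_minus / (P_minus + R)    # converged Kalman gain, about 0.27
    P = (1 - K) * P_minus          # converged estimate covariance

    print((1 - K) / K)     # ~2.7, the "about three times" weighting of the old estimate
    print(math.sqrt(P))    # ~0.052, matching the +/-0.0520V above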

Let's see how it does now against a step in the constant:


Since it only remembers back about three measurements, it only takes about three measurements to start tracking the new value.
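
Here's a rough way to reproduce that experiment. The actual numbers (true value, size of the step, noise level) are guesses, since they aren't listed here; the filter loop is just the scalar form above:

    import numpy as np

    rng = np.random.default_rng(0)
    truth = np.concatenate([np.full(50, 0.5), np.full(50, 1.0)])  # a "constant" with a step (made-up values)
    z = truth + rng.normal(0.0, 0.1, truth.size)                  # noisy measurements, sigma = 0.1 so R = 0.01

    Q, R = 0.001, 0.01
    x_hat, P = z[0], R        # seed the filter with the first measurement
    estimates = []
    for zi in z:
        P_minus = P + Q                     # covariance grows by the process noise
        K = P_minus / (P_minus + R)         # Kalman gain
        x_hat = x_hat + K * (zi - x_hat)    # blend in the new measurement
        P = (1 - K) * P_minus               # shrink the covariance
        estimates.append(x_hat)
    # estimates hugs 0.5, then climbs to about 1.0 within a few samples of the step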

Process noise represents our uncertainty about what the process really is doing, uncertainty about our model. But how can we assign a standard deviation to "maybe the constant has a step in it"? It certainly doesn't have a normal distribution. The right answer is to fix the process model, but maybe we can't. In that case, we treat the process noise level as just a tuning parameter. We can tune it to get the right balance between noise filtering and responsiveness to change. In the above examples, the process noise is rather large: its standard deviation is only a factor of sqrt(10) smaller than the measurement noise's (Q=0.001, R=0.01). Let's see what happens as we turn the process noise down. All of these have the same measurement covariance; only the process covariance changes.


Q=1e-4, memory about 10 steps

Q=1e-5, memory about 30 steps

Q=1e-6, memory about 100 steps

So we see that as we turn the process noise down, the estimate in the flat part becomes smoother, in exchange for the response to a change in the constant taking longer.
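
Those memory figures fall out of the steady-state gain. When Q is much smaller than R, the converged gain works out to roughly sqrt(Q/R), so the filter's memory is roughly sqrt(R/Q) measurements; here's a quick check against the captions above (a back-of-the-envelope sketch, not the original plots):

    import math

    R = 0.01
    for Q in (1e-3, 1e-4, 1e-5, 1e-6):
        P_minus = (Q + math.sqrt(Q*Q + 4*Q*R)) / 2   # steady-state predicted covariance
        K = P_minus / (P_minus + R)                  # steady-state gain
        print(Q, round(1/K, 1), round(math.sqrt(R/Q), 1))
    # 1/K comes out near 3.7, 10.5, 32 and 100 -- memories of roughly 3, 10, 30 and 100 steps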

This kind of tuning is a judgement call. It's actually an ugly hack, and should be avoided if possible, but it may not be possible.
