Thursday, March 31, 2011

Intermediate Kalman Filtering theory, Part 2

Time for an example. We'll start really really simple, and just redo the linear velocity model.

First off, the state vector. Basically the state vector is all the things that the system "remembers" about itself. Think about it as a simulation. Once we have a program, we run the simulation and freeze it at a certain point. What do we need to save so that when we restore, we have enough data to start the simulation back up?

We'll use the cart on rails model as an example. We are working with a massive cart on smooth, level, perfectly frictionless rails. Therefore the cart is free to move in one dimension only. The very meaning of one dimension is that it takes only one number to describe the position, and we will call this number p. If we think of it as a function of time, we might say p(t). We need p in our state vector, but this does not yet completely describe the state. We also need to know the velocity. A cart stopped at 4m down the track is very different from a cart at 4m moving at 50m/s.

As it turns out, this is all we need to know. This is all the cart "remembers" - everything else, such as our random gusts of wind, is instantaneous only. Any effect it has comes only through changing the position and velocity. Therefore, <x>=<p,v>. Our state vector has two elements, p (in units of meters from the origin, positive on one side and negative on the other) and v (in units of meters per second, positive if moving one way and negative if the other).
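The freeze-and-restore idea can be demonstrated directly. Here is a quick sketch (in Python rather than the Matlab used later; the `step` function is my own illustration, not part of the example program):

```python
def step(state, dt=0.1):
    """One simulation step of the frictionless cart: position advances by
    v*dt, velocity is unchanged (no forces act on the cart)."""
    p, v = state
    return (p + v * dt, v)

# Run the simulation for a while, then freeze it.
state = (0.0, 50.0)            # cart at the origin moving at 50 m/s
for _ in range(5):
    state = step(state)
saved = state                  # (p, v): everything the cart "remembers"

# Continue the original run...
for _ in range(5):
    state = step(state)

# ...and separately restore from the frozen state and run the same steps.
resumed = saved
for _ in range(5):
    resumed = step(resumed)

print(state == resumed)  # True: (p, v) alone was enough to restore the run
```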

Next, physics. We actually use Newton's first law here (normally problems start with the second law). The cart has no forces applied to it, so it continues in a straight line at constant speed. The derivative of the speed is zero (this is what constant speed means) and the derivative of the position is the speed, so we end up with this:

d<x>/dt = F(<x>) = <x.v, 0>

(An aside on vector notation. If the components of a vector <?> are named, I will specify the element named @ by ?.@ as if the vector were a structure in C. If the components are ordered, and they always are, I will call the first one ?[0], followed by ?[1] etc. Zero-based, again like C. So, for our state vector <x> above, we can talk about x.p and x.v, and note that x[0]=x.p. Vector components always have an order, but sometimes it is clearer to talk about the components by name if any, so I will freely use both notations. When I get to it, I will number matrix cells with two indexes, row first, then column, starting at 0. So, the upper-left cell of any matrix is ?[0,0].)

Believe it or not, we are done with the physics function.


Now for the observation function. The observation vector is a single number, the position along the track. So, we get

g(<x>) = x.p

Again, that's it.

Now for Jacobians. We need a function that can calculate [Φ] given any <x>, and to do this we need derivatives. We are going to take the derivatives once on paper, code them up in a function, and then that function will provide the numerical value of the derivative whenever we need it.

There are only four elements, so we can go nice and slow. In the upper left cell of the matrix, we take the derivative of the 0 component of the physics function F[0](<x>)=x.v with respect to the 0 component of the state vector x.p. Since this variable doesn't even occur in the formula, the derivative is zero and we have Φ[0,0]=0. Next, in the upper right [0,1] cell, we still have component 0 of the physics function F[0](<x>)=x.v, but this time we differentiate with respect to the 1 component of the state vector x.v. The derivative of any variable with respect to itself is 1, so we get Φ[0,1]=1. Bottom left cell [1,0] uses F[1](<x>)=0 and x[0]=x.p, so we have Φ[1,0]=0. To finish off, we have bottom right cell [1,1], which uses F[1](<x>)=0 and x[1]=x.v, so Φ[1,1]=0. As it happens this time, all the cells are constant, so our physics matrix function can just return a constant in response to any state vector: [Φ(<x>)]=[[0,1],[0,0]] for any <x>. As we shall see in our next example, things are not always so easy.

The Jacobian for g() is even easier, but it is still a matrix. Any Jacobian of a vector-parameter function with a vector return value has as many columns as the input vector has elements, and as many rows as the output vector has elements. So, in this case, 1 row by 2 columns. For H[0,0], we use g[0](<x>)=x.p and x[0]=x.p, so H[0,0]=1. For H[0,1], we have g[0](<x>)=x.p and x[1]=x.v, so H[0,1]=0.
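Since these derivatives were done by hand, a finite-difference check is cheap insurance. Here is one way to do it (a Python/NumPy sketch rather than the Matlab used below; the `jacobian` helper is my own):

```python
import numpy as np

# Physics function F(<x>) = <x.v, 0> and observation function g(<x>) = x.p,
# as derived above. The state vector is x = [p, v].
def F(x):
    return np.array([x[1], 0.0])

def g(x):
    return np.array([x[0]])

def jacobian(f, x, h=1e-6):
    """Finite-difference Jacobian: one row per output component,
    one column per input component."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for col in range(x.size):
        dx = np.zeros_like(x)
        dx[col] = h
        J[:, col] = (f(x + dx) - fx) / h
    return J

x = np.array([4.0, 50.0])   # cart at 4 m moving at 50 m/s
Phi = jacobian(F, x)        # should match [[0,1],[0,0]] from the derivation
H = jacobian(g, x)          # should match [[1,0]]
print(Phi)
print(H)
```

Because both functions are linear here, the finite difference agrees with the paper derivation to within rounding; for the genuinely nonlinear models coming later, this kind of check catches sign and index mistakes.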

To sum up, our problem is described as follows:

State vector <x>=<p,v>
Physics function F(<x>)=<x.v,0>
Observation function g(<x>)=x.p
Linearized physics matrix [Φ]=[[0,1],[0,0]]
Linearized observation matrix [H]=[[1,0]]

Also, a word about a priori initial guesses. You will note that we always need an initial guess of our state vector at step 0, <x^_0>, as well as an initial guess of our uncertainty in that guess, [P_0]. For some problems, it doesn't matter so much what the initial guess is, since the filter converges rapidly. Sometimes, though, it does matter, and when the guess is bad, the filter gets farther and farther from the correct answer instead of closer, and we say that the filter diverges. If this happens in a real vehicle, the thing may diverge from its intended course and converge with a wall, causing the parts of the vehicle to diverge from each other. Let's agree right now not to diverge. Your filter can handle the truth, as much truth as you can give it. Give it as good of an initial guess as possible.

Now just because the notation is a little neater, we are going to use the Matlab language to write this example. You can use Gnu Octave to run this if you don't have and can't afford Matlab.

The program is structured as follows. The driver function starts with a piece to simulate the cart. It will create one data point every 100ms for 10s, which together with the initial point at t=0 makes 101 points. Next we will add some noise to create the measurements. Then, we create the a priori initial guess.

Finally, we run the filter loop. Each iteration of the filter loop will process one observation and create one state estimate. The time stamp of this estimate is exactly the time of the measurement. The numerical integrators will be inline, instead of in separate functions, to start with. Just to see how it's done, we will use a numerical integration time step of 1ms, so we need 100 integration steps to get from one measurement to the next. All the other equations will be pretty much as written.
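That inner integration loop can be sketched on its own (a Python/NumPy sketch of the same Euler scheme used in the Matlab below; the `propagate` name is my own):

```python
import numpy as np

def F(x):
    # Physics function: d<x>/dt = <x.v, 0>
    return np.array([x[1], 0.0])

def Phi(x):
    # Jacobian of F, derived on paper above
    return np.array([[0.0, 1.0], [0.0, 0.0]])

def propagate(x, dt=0.001, steps=100):
    """Euler-integrate the state vector and the state transition matrix [A]
    across one 100 ms measurement interval, 1 ms at a time."""
    A = np.eye(2)
    for _ in range(steps):
        x = x + F(x) * dt        # 1a: advance the state
        A = A + Phi(x) @ A * dt  # 1b: advance [A] alongside it
    return x, A

x, A = propagate(np.array([4.0, 50.0]))
print(x)   # ~[9, 50]: the position advanced by v * 0.1 s
print(A)   # ~[[1, 0.1], [0, 1]]: maps a state change at t to one at t+0.1
```

For this constant-velocity model the Euler result matches the exact answer; for nonlinear physics the small 1ms step is what keeps the integration honest.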

After we have run the filter loop, we will compare the estimates with the true noise-free state we started with.

function nonlinear_velocity
  %Simulated actual trajectory. Stopped at the origin for first five seconds,
  %then constant acceleration such that position reaches 25m after 10
  t=0:0.1:10; %Timestamp of each measurement
  pt=[zeros(1,50),(0:0.1:5).^2]; %True position
  %We want random noise, but the *same* random noise each time we run
  randn('seed',0); %Seed the generator so each run gets identical noise
  R=0.5^2; %Measurement noise covariance, 50cm on position
  pn=randn(size(t))*sqrt(R); %Position noise, standard deviation of 50cm
  Q=[0.01,0;0,0.01];  %Process noise covariance, 10cm in position and 10cm/s in velocity
  z=pt+pn; %Observations
  xh_0=[0;0]; %Initial state vector guess
  xh=zeros(numel(xh_0),numel(t)); %History of all estimates
  xh(:,1)=xh_0; %History starts with the initial estimate
  P_0=[1,0;0,1]; %Initial state vector uncertainty, 1m in position and 1m/s in velocity
  P=zeros(numel(xh_0),numel(xh_0),numel(t)); %History of all estimate covariances
  P(:,:,1)=P_0;
  for i=1:100
    %Stupid matlab, not using zero-based indexing. Our i counts measurements,
    %but the history arrays also hold the initial estimate in slot 1, so we
    %make an index I and use it to index the history arrays.
    I=i+1;
    %Pull previous estimate from histories
    xh_im1=xh(:,I-1); %Previous estimate <x^_{i-1}>
    P_im1=P(:,:,I-1); %Previous covariance [P_{i-1}]
    %One numerical integrator for both 1A and 1B, so that 1B can use the
    %intermediate results of 1A.
    xh_im=xh_im1; %To become updated state estimate <x^_i->
    A=eye(size(P_im1)); %The thing we integrate in 1b, will become [A]
    dt=0.001; %Numerical integration time step, 1ms
    for j=1:100
      %1a. Use numerical integration to update state vector through time from
      %<x^_{i-1}> to <x^_i->
      xh_im=xh_im+F(xh_im)*dt;
      %1b. Use numerical integration to update [A]
      A=A+Phi(xh_im)*A*dt;
    end
    %1c. State deviation estimate. We rectify the reference to the estimate
    %after each measurement, so the a priori deviation is zero.
    dxh_im=[0;0];
    %2. Projected covariance estimate
    P_im=A*P_im1*A'+Q;
    %3a. Linearized observation matrix
    H_i=H(xh_im);
    %3b. Kalman gain
    K=P_im*H_i'/(H_i*P_im*H_i'+R);
    %4a. Measurement deviation
    dz=z(I)-g(xh_im);
    %4b. Measurement update of state deviation
    dxh_i=dxh_im+K*(dz-H_i*dxh_im);
    %4c. Update of reference state vector
    xh_i=xh_im+dxh_i;
    %5. Measurement update of state covariance
    P_i=(eye(size(P_im))-K*H_i)*P_im;
    %Record the current estimates
    xh(:,I)=xh_i;
    P(:,:,I)=P_i;
  end
  %Plot some interesting results
  plot(t,pt,'k-',t,z,'r+',t,xh(1,:),'b-');
  legend('True position','Measurements','Estimated position');

%Physics function F(<x>). Takes a state vector, returns the derivative of
%the state vector. Primary physics model definition
function F=F(x)
  F=[x(2);0];

%Physics matrix [Phi(<x>)]. Takes the state vector and returns the jacobian
%matrix. Secondary physics model definition, as it is derived from F() (on
%paper).
function Phi=Phi(x)
  Phi=[0,1;0,0];

%Observation function g(<x>). Takes a state vector, returns an observation
%vector. Primary observation model definition
function g=g(x)
  g=x(1);

%Observation matrix [H(<x>)]. Takes the state vector and returns the
%jacobian matrix. Secondary observation model definition, as it is derived
%from g() (on paper).
function H=H(x)
  H=[1,0];

And now for some results:

[Plot: estimated position vs. true position - the estimate lags behind the truth once the cart starts moving]

Hmm, not so good. The estimate starts lagging behind the truth, and never catches up. This is mostly because the model doesn't fit the actual cart (which in this case has a rocket on it that lights at t=5sec). If the model cannot be improved, then we have to sacrifice smoothness and increase the process noise. Try different (larger) values on the diagonal elements of [Q].
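Why does raising [Q] help the filter track faster? A scalar toy filter shows the mechanism: larger process noise keeps the predicted covariance, and therefore the Kalman gain, higher, so each measurement pulls harder on the estimate. (This is a Python illustration of the principle, not part of the Matlab program above.)

```python
def steady_state_gain(Q, R, iters=1000):
    """Steady-state Kalman gain of a one-dimensional random-walk filter."""
    P = 1.0
    for _ in range(iters):
        P = P + Q              # time update: process noise inflates covariance
        K = P / (P + R)        # Kalman gain
        P = (1.0 - K) * P      # measurement update shrinks covariance
    return K

R = 0.25                       # measurement variance from the example (0.5^2)
small = steady_state_gain(0.01, R)
large = steady_state_gain(1.0, R)
print(small, large)            # the larger Q yields the larger gain
```

The price of the larger gain is that more measurement noise leaks into the estimate, which is exactly the smoothness we said we would have to sacrifice.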

It doesn't really matter; this just supplies us with a good scaffold to hang a real nonlinear problem on.
