
Estimation Introduction: Bias Update, Kalman Filter, and MHE

Simple biasing, Kalman filters, and Moving Horizon Estimation (MHE) are all approaches to align dynamic data with model predictions. Each method has particular advantages and disadvantages, with the main trade-off being algorithmic complexity versus the quality of the solution.

Biasing is a method to adjust output values with either an additive or multiplicative term. The additive or multiplicative bias is increased or decreased depending on the current difference between the model prediction and the measured value. This is the most basic form of model update, and it is prevalent in industrial base control and advanced control applications.
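As a minimal sketch of the additive form (not taken from any exercise files), the bias can be moved by a fraction of the current model-measurement error at each sample; the function name, the filter factor alpha, and the numeric values below are illustrative.

    # Minimal sketch of an additive output bias update (illustrative values).
    # alpha in (0, 1] controls how aggressively the bias responds to new data.
    # A multiplicative bias would instead scale y_model by a factor that is
    # updated in a similar way.

    def update_bias(bias, y_meas, y_model, alpha=0.3):
        # error between the measurement and the bias-corrected prediction
        error = y_meas - (y_model + bias)
        # move the bias a fraction of the remaining error
        return bias + alpha * error

    bias = 0.0
    y_model = 4.0                          # model prediction at the current sample
    for y_meas in [5.0, 5.2, 4.9, 5.1]:    # incoming measurements
        bias = update_bias(bias, y_meas, y_model)
        print(f'bias = {bias:.3f}, corrected prediction = {y_model + bias:.3f}')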

Kalman filtering produces estimates of variables and parameters from data and Bayesian model predictions. The data may include noise (random fluctuations), outliers, or other inaccuracies. The Kalman filter is a recursive algorithm where each additional measurement is used to update the states (x) and an uncertainty description in the form of a covariance matrix (P).
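A minimal sketch of one recursive predict/update cycle for a linear model is shown below; the system matrices, noise covariances, and measurement values are illustrative assumptions, not part of the exercise.

    import numpy as np

    # Minimal sketch of one Kalman filter cycle for a linear system
    #   x[k+1] = A x[k] + B u[k] + w,   y[k] = C x[k] + v
    # A, B, C, Q, R and the initial x, P below are illustrative values.

    def kalman_step(x, P, u, y_meas, A, B, C, Q, R):
        # predict: propagate the state and covariance through the model
        x_pred = A @ x + B @ u
        P_pred = A @ P @ A.T + Q
        # update: blend the prediction with the new measurement
        S = C @ P_pred @ C.T + R                  # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
        x_new = x_pred + K @ (y_meas - C @ x_pred)
        P_new = (np.eye(len(x)) - K @ C) @ P_pred
        return x_new, P_new

    A = np.array([[0.9]]); B = np.array([[0.5]]); C = np.array([[1.0]])
    Q = np.array([[1e-3]]); R = np.array([[1e-2]])
    x = np.array([0.0]); P = np.array([[1.0]])
    for y_meas in [0.6, 1.1, 1.4]:                # noisy measurements
        x, P = kalman_step(x, P, np.array([1.0]), np.array([y_meas]), A, B, C, Q, R)
        print(x, P)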

The extended Kalman filter (EKF) is the same as the Kalman filter but applied to systems with a nonlinear model. The model is relinearized at every new prediction step to extend the linear method to nonlinear cases. The unscented Kalman filter (UKF) shows improved performance for systems that are highly nonlinear or have non-uniform probability distributions for the estimates. The UKF uses sigma points (sample points selected to represent the uncertainty of the states) that are propagated forward in time through a simulation of the nonlinear model. The propagated sigma points are then used to reconstruct the mean and covariance of the state estimate, and a new set of sigma points is generated at each sampling instance so that the points continue to represent the current distribution of potential state values.
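The sketch below illustrates the unscented transform that underlies the UKF for a one-state example: sigma points are drawn from the current mean and covariance, propagated through a nonlinear function, and then used to reconstruct the estimate. The function f, the tuning parameter kappa, and the numeric values are illustrative assumptions.

    import numpy as np

    # Minimal sketch of the unscented transform for a 1-D nonlinear model
    # (the model f and the tuning parameter kappa are illustrative).

    def sigma_points(x, P, kappa=1.0):
        n = len(x)
        S = np.linalg.cholesky((n + kappa) * P)    # square root of scaled covariance
        pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
        w = np.full(2 * n + 1, 0.5 / (n + kappa))  # weights for reconstruction
        w[0] = kappa / (n + kappa)
        return np.array(pts), w

    def f(x):                                      # illustrative nonlinear propagation
        return np.array([x[0] + 0.1 * np.sin(x[0])])

    x = np.array([0.5]); P = np.array([[0.2]])
    pts, w = sigma_points(x, P)
    prop = np.array([f(p) for p in pts])           # propagate each sigma point
    x_new = w @ prop                               # reconstruct the mean
    P_new = sum(wi * np.outer(p - x_new, p - x_new) for wi, p in zip(w, prop))
    print(x_new, P_new)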

Moving horizon estimation (MHE) is an optimization approach for estimating states and parameters. Model predictions are matched to a horizon of prior measurements by adjusting the parameters and initial conditions of the model.
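A minimal sketch of this idea for the first-order model used later in the exercise is shown below: the gain K, time constant tau, and initial condition are adjusted so that a simulated response matches a horizon of measurements. The synthetic horizon data, the bounds, and the solver choice (scipy's L-BFGS-B) are illustrative assumptions rather than the exercise's Simulink/MHE implementation.

    import numpy as np
    from scipy.optimize import minimize

    # Minimal sketch of moving horizon estimation for the first-order model
    #   tau * dy/dt = -y + K * u
    # K, tau, and the initial condition y0 are adjusted so that the simulated
    # response matches a horizon of past measurements (synthetic data below).

    def simulate(K, tau, y0, u, dt):
        y = np.empty(len(u)); y[0] = y0
        for k in range(len(u) - 1):            # explicit Euler integration
            y[k + 1] = y[k] + dt * (-y[k] + K * u[k]) / tau
        return y

    def mhe_objective(p, u, y_meas, dt):
        K, tau, y0 = p
        return np.sum((simulate(K, tau, y0, u, dt) - y_meas) ** 2)

    dt = 1.0
    u = np.ones(20)                            # step input over the horizon
    y_meas = 5.0 * (1 - np.exp(-np.arange(20) * dt / 10.0))  # noise-free data (K=5, tau=10)
    res = minimize(mhe_objective, x0=[1.0, 5.0, 0.0], args=(u, y_meas, dt),
                   bounds=[(0.1, 20), (1.0, 50), (-10, 10)], method='L-BFGS-B')
    print('estimated K, tau, y0:', res.x)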

Exercise

Objective: Implement MHE with and without Model Predictive Control (MPC). Describe the difference between tracking performance (agreement between model and measured values) and predictive performance (model parameters that predict into the future). Tune the estimator to improve control performance. Show how overly aggressive tracking performance may degrade the control performance even though the estimator performance (difference between measured and modeled output) is acceptable. An overly aggressive estimator may give different parameter values (K and tau) for each cycle.

A linear, first-order process is described by a gain K and a time constant tau. In transfer function form, this model is expressed as K/(tau s + 1), or in differential form as tau dy/dt = -y + K u. For this exercise, change the Simulink model gain and time constant to values between 1 and 10. For example, the gain is 5.0 and the time constant is 10.0 in the model below.
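As a quick check on what these values mean, the sketch below simulates the differential form with K = 5.0 and tau = 10.0 for a unit step in u: the response rises to about 63% of its final value after one time constant and settles near K. The integration routine and time grid are illustrative assumptions.

    import numpy as np
    from scipy.integrate import odeint

    # First-order process tau*dy/dt = -y + K*u with the example values from the
    # exercise (K = 5.0, tau = 10.0) and a unit step input u = 1.
    K, tau = 5.0, 10.0

    def process(y, t, u):
        return (-y + K * u) / tau

    t = np.linspace(0, 60, 61)
    y = odeint(process, 0.0, t, args=(1.0,)).flatten()
    print('y(tau) ~ 0.632*K:', y[10])   # about 3.16 at t = 10
    print('final value ~ K :', y[-1])   # approaches 5.0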

Observe the estimator performance as new measurements arrive. Is the estimator able to reconstruct the unmeasured model parameters from the input and output data? Repeat the above exercise of changing K and tau, and run the MHE with the MPC controller.

Observe the controller performance as the estimator provides updated parameters (K and tau). Follow the flow of signals around the control loop to understand the specific inputs and outputs from each block. How does the controller perform if there is a mismatch between the estimated values of K and tau used in the controller and the process K and tau?

Solution