## Main.EstimatorTypes History

Changed line 23 from:

For this exercise, change the process model gain and time constant to unique values between 1 and 10. For example, the gain is 5.0 and the time constant is 10.0 in the Simulink model below as \frac{5}{10s+1}. There are three models in this exercise including the process, the estimator, and the controller. The first part of the exercise is to tune just the estimator. While an estimator can be well tuned to track measured values, it may be undesirable to have large parameter variations that are passed with each cycle to a model predictive controller.

to:

For this exercise, change the process model gain and time constant to unique values between 1 and 10. For example, the gain is 5.0 and the time constant is 10.0 in the Simulink model below in Laplace form as \frac{5}{10s+1}. There are three models in this exercise including the process, the estimator, and the controller. The first part of the exercise is to tune just the estimator. While an estimator can be well tuned to track measured values, it may be undesirable to have large parameter variations that are passed with each cycle to a model predictive controller.

Changed lines 19-20 from:

A linear, first order process is described by a gain K and time constant \tau. In transfer function form, this model is expressed as \frac{K}{\tau \, s+1} or in differential form as:

to:

A linear, first order process is described by a gain K and time constant \tau. This model is expressed in differential form as:

Changed line 23 from:

For this exercise, change the model gain and time constant to unique values between 1 and 10. For example, the gain is 5.0 and the time constant is 10.0 in the model below. There are three models in this exercise including the process, the estimator, and the controller. The first part of the exercise is to tune the estimator.

to:

For this exercise, change the process model gain and time constant to unique values between 1 and 10. For example, the gain is 5.0 and the time constant is 10.0 in the Simulink model below as \frac{5}{10s+1}. There are three models in this exercise including the process, the estimator, and the controller. The first part of the exercise is to tune just the estimator. While an estimator can be well tuned to track measured values, it may be undesirable to have large parameter variations that are passed with each cycle to a model predictive controller.

Changed line 19 from:

A linear, first order process is described by a gain K and time constant \tau. In transfer function form, this model is expressed as {\frac{K}{\tau \, s+1}'' or in differential form as:

to:

A linear, first order process is described by a gain K and time constant \tau. In transfer function form, this model is expressed as \frac{K}{\tau \, s+1} or in differential form as:

Changed lines 19-20 from:

A linear, first order process is described by a gain K and time constant tau. In transfer function form, this model is expressed as K/(tau s+1) or in differential form as tau dy/dt = -y + K u. For this exercise, change the Simulink model gain and time constant to values between 1 and 10. For example, the gain is 5.0 and the time constant is 10.0 in the model below.

to:

A linear, first order process is described by a gain K and time constant \tau. In transfer function form, this model is expressed as {\frac{K}{\tau \, s+1}'' or in differential form as:

$$\tau \frac{dy}{dt} = -y + K \, u$$

For this exercise, change the model gain and time constant to unique values between 1 and 10. For example, the gain is 5.0 and the time constant is 10.0 in the model below. There are three models in this exercise including the process, the estimator, and the controller. The first part of the exercise is to tune the estimator.
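The differential form above is easy to check numerically. The sketch below (plain Python with explicit Euler integration, using the example values K=5 and \tau=10; independent of the Simulink and GEKKO files) confirms that a unit step in u settles at y = K u after several time constants:

```python
# example values from the exercise: gain K=5, time constant tau=10
K, tau = 5.0, 10.0
dt = 0.01
u = 1.0   # unit step input
y = 0.0   # initial condition

# explicit Euler integration of tau*dy/dt = -y + K*u out to t = 10*tau
for _ in range(int(100.0 / dt)):
    y += dt * (-y + K * u) / tau

# after many time constants the output settles at the gain times the input
print(round(y, 3))
```

The same steady-state behavior appears in the Simulink and GEKKO models; only the integration method differs.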

Changed lines 177-178 from:

Observe the estimator performance as new measurements arrive. Is the estimator able to reconstruct the unmeasured model parameters from the input and output data? Repeat the above exercise of changing the K and tau and run the MHE with MPC controller.

to:

Observe the estimator performance as new measurements arrive. Is the estimator able to reconstruct the unmeasured model parameters from the input and output data?

The second part is to tune the estimator for improved controller performance. When tuning an estimator for model predictive control performance, it is important to have more consistent parameter values that do not rapidly change cycle to cycle. Repeat the above exercise of changing the K and \tau and run the MHE with the MPC controller. Tune the estimator to give more consistent values of the unknown parameters and achieve better setpoint tracking.

Changed line 397 from:

Observe the controller performance as the estimator provides updated parameters (K and tau). Follow the flow of signals around the control loop to understand the specific inputs and outputs from each block. How does the controller perform if there is a mismatch between the estimated values of K and tau used in the controller and the process K and tau?

to:

Observe the controller performance as the estimator provides updated parameters (K and \tau). Follow the flow of signals around the control loop to understand the specific inputs and outputs from each block. How does the controller perform if there is a mismatch between the estimated values of K and \tau used in the controller and the process K and \tau?
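To build intuition for the mismatch question, here is a minimal closed-loop sketch. It is not the MPC from the exercise: it uses an IMC-tuned PI controller, and the tuning constant `lam` and all numeric values are made-up assumptions, chosen only so the effect of a wrong model gain can be seen with plain NumPy:

```python
import numpy as np

def simulate(K_true, tau_true, K_model, tau_model,
             lam=10.0, dt=0.05, t_final=200.0, sp=3.0):
    """Closed-loop response of a first-order plant under a PI
    controller tuned (IMC rules) from the model (K_model, tau_model)."""
    Kc = tau_model / (K_model * lam)   # proportional gain
    Ti = tau_model                     # integral (reset) time
    y, integ, ys = 0.0, 0.0, []
    for _ in range(int(t_final / dt)):
        e = sp - y
        integ += e * dt
        u = Kc * (e + integ / Ti)                 # PI control move
        y += dt * (-y + K_true * u) / tau_true    # explicit Euler plant step
        ys.append(y)
    return np.array(ys)

# hypothetical numbers: true process K=5, tau=10
y_good = simulate(5.0, 10.0, K_model=5.0, tau_model=10.0)  # perfect model
y_bad  = simulate(5.0, 10.0, K_model=2.5, tau_model=10.0)  # 2x gain mismatch

# integral action removes steady-state offset in both cases, but the
# low-gain model makes the controller twice as aggressive in transient
print(round(y_good[-1], 2), round(y_bad[-1], 2))
```

Both runs reach the setpoint because of integral action; the transient behavior is what changes with mismatched parameters.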

Changed line 27 from:

(:toggle hide gekko button show="Show GEKKO (Python) Code":)

to:

(:toggle hide gekko button show="Show MHE GEKKO (Python) Code":)

Changed line 181 from:

(:toggle hide gekko2 button show="Show GEKKO (Python) Code":)

to:

(:toggle hide gekko2 button show="Show MHE+MPC GEKKO (Python) Code":)

Added lines 26-171:

(:toggle hide gekko button show="Show GEKKO (Python) Code":)
(:div id=gekko:)
(:source lang=python:)
# Import packages
import numpy as np
from random import random
from gekko import GEKKO
import matplotlib.pyplot as plt

# Process
p = GEKKO()

p.time = [0,.5]

# Parameters
p.u = p.MV()
p.K = p.Param(value=1) #gain
p.tau = p.Param(value=5) #time constant

# variable
p.y = p.SV() #measurement

# Equations
p.Equation(p.tau * p.y.dt() == -p.y + p.K * p.u)

# options
p.options.IMODE = 4

# MHE Model
m = GEKKO()

m.time = np.linspace(0,20,41) #0-20 by 0.5 -- discretization must match simulation

# Parameters
m.u = m.MV() #input
m.K = m.FV(value=3, lb=1, ub=3) #gain
m.tau = m.FV(value=4, lb=1, ub=10) #time constant

# Variables
m.y = m.CV() #measurement

# Equations
m.Equation(m.tau * m.y.dt() == -m.y + m.K*m.u)

# Options
m.options.IMODE = 5 #MHE
m.options.EV_TYPE = 1
m.options.DIAGLEVEL = 0

# STATUS = 0, optimizer doesn't adjust value
# STATUS = 1, optimizer can adjust
m.u.STATUS = 0
m.K.STATUS = 1
m.tau.STATUS = 1
m.y.STATUS = 1

# FSTATUS = 0, no measurement
# FSTATUS = 1, measurement used to update model
m.u.FSTATUS = 1
m.K.FSTATUS = 0
m.tau.FSTATUS = 0
m.y.FSTATUS = 1

# DMAX = maximum movement each cycle
m.K.DMAX = 1
m.tau.DMAX = .1

# MEAS_GAP = dead-band for measurement / model mismatch
m.y.MEAS_GAP = 0.25

m.y.TR_INIT = 1

# problem configuration
# number of cycles
cycles = 50
# noise level
noise = 0.25

# run process, estimator and control for cycles
y_meas = np.empty(cycles)
y_est = np.empty(cycles)
k_est = np.empty(cycles)
tau_est = np.empty(cycles)
u_cont = np.empty(cycles)
u = 2.0

# Create plot
plt.figure(figsize=(10,7))
plt.ion()
plt.show()

for i in range(cycles):
    # change input (u)
    if i==10:
        u = 3.0
    elif i==20:
        u = 4.0
    elif i==30:
        u = 1.0
    elif i==40:
        u = 3.0
    u_cont[i] = u

    ## process simulator
    #load u value
    p.u.MEAS = u_cont[i]
    #simulate
    p.solve()
    #load output with white noise
    y_meas[i] = p.y.MODEL + (random()-0.5)*noise

    ## estimator
    #load input and measured output
    m.u.MEAS = u_cont[i]
    m.y.MEAS = y_meas[i]
    #optimize parameters
    m.solve()
    #store results
    y_est[i] = m.y.MODEL
    k_est[i] = m.K.NEWVAL
    tau_est[i] = m.tau.NEWVAL

    plt.clf()
    plt.subplot(4,1,1)
    plt.plot(y_meas[0:i])
    plt.plot(y_est[0:i])
    plt.legend(('meas','pred'))
    plt.ylabel('y')
    plt.subplot(4,1,2)
    plt.plot(np.ones(i)*p.K.value[0])
    plt.plot(k_est[0:i])
    plt.legend(('actual','pred'))
    plt.ylabel('k')
    plt.subplot(4,1,3)
    plt.plot(np.ones(i)*p.tau.value[0])
    plt.plot(tau_est[0:i])
    plt.legend(('actual','pred'))
    plt.ylabel('tau')
    plt.subplot(4,1,4)
    plt.plot(u_cont[0:i])
    plt.legend('u')
    plt.draw()
    plt.pause(0.05)
(:sourceend:)
(:divend:)

Changed line 108 from:

# MHE Model

to:

# MPC Model

Added lines 34-243:

(:toggle hide gekko2 button show="Show GEKKO (Python) Code":)
(:div id=gekko2:)
(:source lang=python:)
# Import packages
import numpy as np
from random import random
from gekko import GEKKO
import matplotlib.pyplot as plt

# Process
p = GEKKO()

p.time = [0,.5]

# Parameters
p.u = p.MV()
p.K = p.Param(value=1) #gain
p.tau = p.Param(value=5) #time constant

# variable
p.y = p.SV() #measurement

# Equations
p.Equation(p.tau * p.y.dt() == -p.y + p.K * p.u)

# options
p.options.IMODE = 4

# MHE Model
m = GEKKO()

m.time = np.linspace(0,20,41) #0-20 by 0.5 -- discretization must match simulation

# Parameters
m.u = m.MV() #input
m.K = m.FV(value=3, lb=1, ub=3) #gain
m.tau = m.FV(value=4, lb=1, ub=10) #time constant

# Variables
m.y = m.CV() #measurement

# Equations
m.Equation(m.tau * m.y.dt() == -m.y + m.K*m.u)

# Options
m.options.IMODE = 5 #MHE
m.options.EV_TYPE = 1
m.options.DIAGLEVEL = 0

# STATUS = 0, optimizer doesn't adjust value
# STATUS = 1, optimizer can adjust
m.u.STATUS = 0
m.K.STATUS = 1
m.tau.STATUS = 1
m.y.STATUS = 1

# FSTATUS = 0, no measurement
# FSTATUS = 1, measurement used to update model
m.u.FSTATUS = 1
m.K.FSTATUS = 0
m.tau.FSTATUS = 0
m.y.FSTATUS = 1

# DMAX = maximum movement each cycle
m.K.DMAX = 1
m.tau.DMAX = .1

# MEAS_GAP = dead-band for measurement / model mismatch
m.y.MEAS_GAP = 0.25

m.y.TR_INIT = 1

# MHE Model
c = GEKKO()

c.time = np.linspace(0,5,11) #0-5 by 0.5 -- discretization must match simulation

# Parameters
c.u = c.MV(lb=-10,ub=10) #input
c.K = c.FV(value=10, lb=1, ub=3) #gain
c.tau = c.FV(value=1, lb=1, ub=10) #time constant

# Variables
c.y = c.CV() #measurement

# Equations
c.Equation(c.tau * c.y.dt() == -c.y + c.u * c.K)

# Options
c.options.IMODE = 6 #MPC
c.options.CV_TYPE = 1

# STATUS = 0, optimizer doesn't adjust value
# STATUS = 1, optimizer can adjust
c.u.STATUS = 1
c.K.STATUS = 0
c.tau.STATUS = 0
c.y.STATUS = 1

# FSTATUS = 0, no measurement
# FSTATUS = 1, measurement used to update model
c.u.FSTATUS = 0
c.K.FSTATUS = 1
c.tau.FSTATUS = 1
c.y.FSTATUS = 1

# DMAX = maximum movement each cycle
c.u.DCOST = .1

# y setpoint
# if CV_TYPE = 1, use SPHI and SPLO
sp = 3.0
c.y.SPHI = sp + 0.1
c.y.SPLO = sp - 0.1

# if CV_TYPE = 2, use SP
#c.y.SP = 3

c.y.TR_INIT = 0

# problem configuration
# number of cycles
cycles = 100
# noise level
noise = 0.25

# run process, estimator and control for cycles
y_meas = np.empty(cycles)
y_est = np.empty(cycles)
k_est = np.empty(cycles)
tau_est = np.empty(cycles)
u_cont = np.empty(cycles)
sp_store = np.empty(cycles)

# Create plot
plt.figure(figsize=(10,7))
plt.ion()
plt.show()

for i in range(cycles):
    # set point changes
    if i==20:
        sp = 5.0
    elif i==40:
        sp = 2.0
    elif i==60:
        sp = 4.0
    elif i==80:
        sp = 3.0
    c.y.SPHI = sp + 0.1
    c.y.SPLO = sp - 0.1
    sp_store[i] = sp

    ## controller
    #load
    c.tau.MEAS = m.tau.NEWVAL
    c.K.MEAS = m.K.NEWVAL
    if p.options.SOLVESTATUS == 1:
        c.y.MEAS = p.y.MODEL
    #change setpoint at time 25
    if i == 25:
        c.y.SPHI = 6.1
        c.y.SPLO = 5.9
    c.solve()
    u_cont[i] = c.u.NEWVAL

    ## process simulator
    #load control move
    p.u.MEAS = u_cont[i]
    #simulate
    p.solve()
    #load output with white noise
    y_meas[i] = p.y.MODEL + (random()-0.5)*noise

    ## estimator
    #load input and measured output
    m.u.MEAS = u_cont[i]
    m.y.MEAS = y_meas[i]
    #optimize parameters
    m.solve()
    #store results
    y_est[i] = m.y.MODEL
    k_est[i] = m.K.NEWVAL
    tau_est[i] = m.tau.NEWVAL

    plt.clf()
    plt.subplot(4,1,1)
    plt.plot(y_meas[0:i])
    plt.plot(y_est[0:i])
    plt.plot(sp_store[0:i])
    plt.legend(('meas','pred','setpoint'))
    plt.ylabel('y')
    plt.subplot(4,1,2)
    plt.plot(np.ones(i)*p.K.value[0])
    plt.plot(k_est[0:i])
    plt.legend(('actual','pred'))
    plt.ylabel('k')
    plt.subplot(4,1,3)
    plt.plot(np.ones(i)*p.tau.value[0])
    plt.plot(tau_est[0:i])
    plt.legend(('actual','pred'))
    plt.ylabel('tau')
    plt.subplot(4,1,4)
    plt.plot(u_cont[0:i])
    plt.legend('u')
    plt.draw()
    plt.pause(0.05)
(:sourceend:)
(:divend:)

Changed line 1 from:

(:title Estimation Introduction: Bias Update, Kalman Filter, and MHE:)

to:

(:title Estimation with Model Predictive Control:)

February 09, 2016, at 11:50 PM by 174.148.97.76 -
Changed line 17 from:

Objective: Implement MHE with and without Model Predictive Control (MPC). Understand the difference between tracking performance (agreement between model and measured values) and predictive performance (model parameters that predict into the future).

to:

Objective: Implement MHE with and without Model Predictive Control (MPC). Describe the difference between tracking performance (agreement between model and measured values) and predictive performance (model parameters that predict into the future). Tune the estimator to improve control performance. Show how overly aggressive tracking performance may degrade the control performance even though the estimator performance (difference between measured and modeled output) is acceptable. An overly aggressive estimator may give different parameter values (K and tau) for each cycle.

February 09, 2016, at 10:56 PM by 174.148.97.76 -
Added lines 21-22:
Added lines 29-30:
Changed line 9 from:

Kalman filtering produces estimates of variables and parameters from data and Bayesian model predictions. The data may include inaccuracies such as noise (random fluctuations), outliers, or other inaccuracies. The Kalman filter is a recursive algorithm where additional measurements are used to update states (x) and an uncertainty description as a covariance matrix (P).

to:

Kalman filtering produces estimates of variables and parameters from data and Bayesian model predictions. The data may include inaccuracies such as noise (random fluctuations), outliers, or other inaccuracies. The Kalman filter is a recursive algorithm where additional measurements are used to update states (x) and an uncertainty description as a covariance matrix (P).

May 15, 2015, at 03:43 AM by 45.56.3.184 -
Added lines 35-37:

(:html:)
<iframe width="560" height="315" src="https://www.youtube.com/embed/KdGAfGRZSbs?rel=0" frameborder="0" allowfullscreen></iframe>
(:htmlend:)

May 13, 2015, at 11:41 PM by 10.10.147.4 -
Changed line 1 from:

(:title Estimation: Bias and Kalman Filters:)

to:

(:title Estimation Introduction: Bias Update, Kalman Filter, and MHE:)

Added lines 12-13:

Moving horizon estimation (MHE) is an optimization approach for predicting states and parameters. Model predictions are matched to a horizon of prior measurements by adjusting parameters and the initial conditions of the model.
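In the simplest setting, this matching is a least-squares fit over the horizon. The toy sketch below (NumPy only, with made-up noise levels; a brute-force grid search stands in for the nonlinear programming solver a real MHE implementation would use) recovers K and \tau from noisy step-response data:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_response(K, tau, t):
    # first-order unit-step response: y(t) = K*(1 - exp(-t/tau))
    return K * (1.0 - np.exp(-t / tau))

# horizon of noisy measurements from a hypothetical "true" process
t = np.linspace(0, 20, 41)   # 0-20 by 0.5, as in the exercise
y_meas = step_response(5.0, 10.0, t) + rng.normal(0, 0.05, t.size)

# match model predictions to the horizon: brute-force search over the
# parameter bounds (a real MHE hands this objective to an NLP solver)
Ks = np.linspace(1, 10, 91)
taus = np.linspace(1, 10, 91)
sse, K_hat, tau_hat = min(
    (float(np.sum((step_response(K, tau, t) - y_meas) ** 2)), K, tau)
    for K in Ks for tau in taus)
print(K_hat, tau_hat)  # estimates near the true K=5, tau=10
```

Adjusting initial conditions, weighting recent measurements more heavily, and limiting parameter movement per cycle are the refinements that distinguish a practical MHE from this toy fit.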

May 13, 2015, at 11:38 PM by 10.10.147.4 -
Changed line 17 from:

A linear, first order process is described by a gain K and time constant tau. In transfer function form, this model is expressed as K/(tau s+1) or in differential form as tau dy/dt = -y + K u. For this exercise, change the Simulink model gain and time constant to values between 1 and 10. For example, the gain is 5.0 and the time constant is to 10.0 in the model below.

to:

A linear, first order process is described by a gain K and time constant tau. In transfer function form, this model is expressed as K/(tau s+1) or in differential form as tau dy/dt = -y + K u. For this exercise, change the Simulink model gain and time constant to values between 1 and 10. For example, the gain is 5.0 and the time constant is 10.0 in the model below.

May 13, 2015, at 11:31 PM by 10.10.147.4 -
Changed line 17 from:

A linear, first order process is described by a gain K and time constant tau. In transfer function form, this model is expressed as K/(tau s+1) or in differential form as tau dy/dt = -y + K u. For this exercise, change the Simulink model gain and time constant to values between 1 and 10. For example, change the gain to 6.5 and the time constant to 2.0.

to:

A linear, first order process is described by a gain K and time constant tau. In transfer function form, this model is expressed as K/(tau s+1) or in differential form as tau dy/dt = -y + K u. For this exercise, change the Simulink model gain and time constant to values between 1 and 10. For example, the gain is 5.0 and the time constant is to 10.0 in the model below.

May 13, 2015, at 11:29 PM by 10.10.147.4 -
Changed lines 19-20 from:
to:
Added lines 26-27:
May 13, 2015, at 11:14 PM by 10.10.147.4 -
Changed line 17 from:

A linear, first order process is described by a gain K and time constant tau. In transfer function form, this model is expressed as K/(tau s+1) or in differential form as tau dy/dt = -y + K u. For this exercise, change the Simulink model gain and time constant to values between 1 and 10.

to:

A linear, first order process is described by a gain K and time constant tau. In transfer function form, this model is expressed as K/(tau s+1) or in differential form as tau dy/dt = -y + K u. For this exercise, change the Simulink model gain and time constant to values between 1 and 10. For example, change the gain to 6.5 and the time constant to 2.0.

May 13, 2015, at 11:13 PM by 10.10.147.4 -
Changed lines 11-29 from:

The extended Kalman filter (EKF) is the same as the Kalman filter but in applications that have a nonlinear model. The model is relinearized at every new prediction step to extend the linear methods for nonlinear cases. The unscented Kalman filter (UKF) shows improved performance for systems that are highly nonlinear or have non-uniform probability distributions for the estimates. UKF uses sigma points (sample points that are selected to represent the uncertainty of the states) that are propagated forward in time with the use of simulation. Points that are closest to the measured values are retained while points that are beyond a tolerance value are removed. Additional sigma points are added at each sampling instance to maintain a population of points that represent a collection of potential state values.

to:

The extended Kalman filter (EKF) is the same as the Kalman filter but in applications that have a nonlinear model. The model is relinearized at every new prediction step to extend the linear methods for nonlinear cases. The unscented Kalman filter (UKF) shows improved performance for systems that are highly nonlinear or have non-uniform probability distributions for the estimates. UKF uses sigma points (sample points that are selected to represent the uncertainty of the states) that are propagated forward in time with the use of simulation. Points that are closest to the measured values are retained while points that are beyond a tolerance value are removed. Additional sigma points are added at each sampling instance to maintain a population of points that represent a collection of potential state values.

#### Exercise

Objective: Implement MHE with and without Model Predictive Control (MPC). Understand the difference between tracking performance (agreement between model and measured values) and predictive performance (model parameters that predict into the future).

A linear, first order process is described by a gain K and time constant tau. In transfer function form, this model is expressed as K/(tau s+1) or in differential form as tau dy/dt = -y + K u. For this exercise, change the Simulink model gain and time constant to values between 1 and 10.

Observe the estimator performance as new measurements arrive. Is the estimator able to reconstruct the unmeasured model parameters from the input and output data? Repeat the above exercise of changing the K and tau and run the MHE with MPC controller.

Observe the controller performance as the estimator provides updated parameters (K and tau). Follow the flow of signals around the control loop to understand the specific inputs and outputs from each block. How does the controller perform if there is a mismatch between the estimated values of K and tau used in the controller and the process K and tau?

#### Solution

Changed line 1 from:

(:title Estimation with Biasing and Kalman Filtering:)

to:

(:title Estimation: Bias and Kalman Filters:)

Changed line 1 from:

(:title Dynamic Estimation: Biasing, EKF, UKF:)

to:

(:title Estimation with Biasing and Kalman Filtering:)

Changed lines 9-11 from:

Kalman filtering produces estimates of variables and parameters from data and Bayesian model predictions. The data may include inaccuracies such as noise (random fluctuations), outliers, or other inaccuracies. The Kalman filter is a recursive algorithm where additional measurements are used to update states (x) and an uncertainty description as a covariance matrix (P).

to:

Kalman filtering produces estimates of variables and parameters from data and Bayesian model predictions. The data may include inaccuracies such as noise (random fluctuations), outliers, or other inaccuracies. The Kalman filter is a recursive algorithm where additional measurements are used to update states (x) and an uncertainty description as a covariance matrix (P).

The extended Kalman filter (EKF) is the same as the Kalman filter but in applications that have a nonlinear model. The model is relinearized at every new prediction step to extend the linear methods for nonlinear cases. The unscented Kalman filter (UKF) shows improved performance for systems that are highly nonlinear or have non-uniform probability distributions for the estimates. UKF uses sigma points (sample points that are selected to represent the uncertainty of the states) that are propagated forward in time with the use of simulation. Points that are closest to the measured values are retained while points that are beyond a tolerance value are removed. Additional sigma points are added at each sampling instance to maintain a population of points that represent a collection of potential state values.

Added lines 1-9:

(:title Dynamic Estimation: Biasing, EKF, UKF:)
(:keywords Kalman filter, unscented Kalman filter, extended Kalman filter, dynamic data, validation, estimation, simulation, modeling language, differential, algebraic, tutorial:)
(:description Dynamic estimation with simple biasing, extended Kalman filter, or unscented Kalman filter:)

Simple biasing, Kalman filters, and Moving Horizon Estimation (MHE) are all approaches to align dynamic data with model predictions. Each method has particular advantages and disadvantages with a main trade-off being algorithmic complexity versus quality of the solution.

Biasing is a method to adjust output values with either an additive or multiplicative term. The additive or multiplicative bias is increased or decreased depending on the current difference between model prediction and measured value. This is the most basic form of model updates and it is prevalent in industrial base control and advanced control applications.
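As a minimal illustration (with made-up numbers), both bias forms and a gradual filtered update look like this in plain Python:

```python
# hypothetical numbers: the model predicts 4.5, the measurement is 5.0
model_pred = 4.5
meas = 5.0

# additive bias: shift the model output by the observed difference
bias = meas - model_pred
corrected = model_pred + bias        # matches the measurement exactly

# multiplicative bias: scale the model output instead
mbias = meas / model_pred
corrected_mult = model_pred * mbias  # also matches the measurement

# in practice the bias is increased or decreased gradually (filter
# factor alpha) rather than chasing every noisy measurement
alpha, bias_f = 0.3, 0.0
for new_meas in [5.0, 5.1, 4.9, 5.0]:
    bias_f += alpha * ((new_meas - model_pred) - bias_f)
print(corrected, round(bias_f, 3))
```

The filter factor trades responsiveness against noise sensitivity, which is the same trade-off the Kalman gain makes automatically.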

Kalman filtering produces estimates of variables and parameters from data and Bayesian model predictions. The data may include inaccuracies such as noise (random fluctuations), outliers, or other inaccuracies. The Kalman filter is a recursive algorithm where additional measurements are used to update states (x) and an uncertainty description as a covariance matrix (P).
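For a scalar linear system the recursion is only a few lines. The sketch below (assumed noise variances Q and R, estimating a constant state) shows the predict and update steps for the state estimate x and covariance P:

```python
import numpy as np

rng = np.random.default_rng(1)

# estimate a constant state from noisy measurements (assumed variances)
x_true = 2.0    # true (unknown) state
R = 0.25        # measurement noise variance
Q = 1e-5        # process noise variance

x_hat, P = 0.0, 1.0   # initial state estimate and covariance
for _ in range(200):
    z = x_true + rng.normal(0.0, np.sqrt(R))  # noisy measurement
    # predict: the state model is x_k = x_{k-1}, so only P grows
    P = P + Q
    # update: the gain balances model confidence against measurement noise
    Kg = P / (P + R)
    x_hat = x_hat + Kg * (z - x_hat)
    P = (1.0 - Kg) * P
print(x_hat, P)  # x_hat approaches x_true as P shrinks
```

In the vector case x becomes a state vector and P the covariance matrix described above, with the same predict-update structure.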