Dynamic Estimation Tuning

Dynamic estimation tuning is the process of adjusting certain objective function terms to give more desirable solutions. As an example, a dynamic estimation application such as moving horizon estimation (MHE) may either track noisy data too closely or the updates may be too slow to catch unmeasured disturbances of interest. Tuning is the process of achieving acceptable estimator performance based on unique aspects of the application.

Common Tuning Parameters for MHE

Tuning typically involves adjusting objective function terms or constraints that limit the rate of change (DMAX), penalize the rate of change (DCOST), or set absolute bounds (LOWER and UPPER). Measurement availability is indicated by the parameter FSTATUS. The optimizer can also include (1=on) or exclude (0=off) a particular adjustable parameter (FV) or manipulated variable (MV) with STATUS. Another important tuning consideration is the time horizon length: including more points in the time horizon allows the estimator to reconcile the model with more data but also increases computation time. Below are common application, FV, MV, and CV tuning constants that are adjusted to achieve the desired estimator performance.

  • Application tuning
    • DIAGLEVEL = diagnostic level (0-10) for solution information
    • EV_TYPE = 1 for l1-norm and 2 for squared error objective
    • IMODE = 5 or 8 for moving horizon estimation
    • MAX_ITER = maximum iterations
    • MAX_TIME = maximum time before stopping
    • MV_TYPE = Set default MV type with 0=zero-order hold, 1=linear interpolation
  • Manipulated Variable (MV) tuning
    • COST = (+) minimize MV, (-) maximize MV
    • DCOST = penalty for MV movement
    • DMAX = maximum that MV can move each cycle
    • FSTATUS = feedback status with 1=measured, 0=off
    • LOWER = lower MV bound
    • MV_TYPE = MV type with 0=zero-order hold, 1=linear interpolation
    • STATUS = turn on (1) or off (0) MV
    • UPPER = upper MV bound
  • Controlled Variable (CV) tuning
    • COST = (+) minimize CV, (-) maximize CV
    • FSTATUS = feedback status with 1=measured, 0=off
    • MEAS_GAP = measurement gap for estimator dead-band

There are several ways to change the tuning values. They can be specified either before an application is initialized or while it is running. To change a tuning value before the application is initialized, use the apm_option() function in MATLAB or Python, as in the following example that changes the lower bound of the FV named p.


The upper and lower measurement dead-band for a CV named y is set to values around the measurement. In this case, with a measurement of 10.0 and a MEAS_GAP of 1.0 (+/-0.5), the model prediction is acceptable if it intersects the range 9.5 to 10.5.


Application constants are modified by indicating that the constant belongs to the group nlc. IMODE selects whether the MHE problem is solved with a simultaneous (5) or sequential (8) method. In the case below, the application IMODE is changed to sequential mode.



Objective: Design an estimator to predict unknown parameters so that a simple model is able to predict the response of a more complex process. Tune the estimator to achieve either tracking or predictive performance. Estimated time: 2 hours.

Design an estimator to predict K and tau of a 1st order model that approximates the dynamic response of a 1st order, 2nd order, and 10th order process. For the 2nd and 10th order systems, there is process/model mismatch: the structure of the model can never exactly match the actual process because the equations are inherently incorrect. The parameter values are adjusted to best approximate the process even though the model form is deficient. The process order is adjusted in the Constants section of the process.apm file.

   ! process model order
   n = 1  ! change to 1, 2, and 10

In each case, tune the estimator to favor either acceptable tracking or predictive performance. Tracking performance is the ability of the estimator to synchronize with measurements and is demonstrated with overall agreement between the model predictions and the measurements. Predictive performance sacrifices tracking performance to achieve more consistent values that are valid over a longer predictive horizon for model predictive control.