Quiz on Parameter Regression


1. Parameter regression finds the best parameters `p` to minimize an objective function of the predicted values `x` and the measured values `z`. The predicted values are a function of the parameters, `x(p)`.

$$\min_p \sum_i \left(x_i(p) - z_i\right)^2$$

You may want the optimizer to search for optimal values only within a certain range.

$$0 \leq p \leq 5$$

What is the most efficient way to include these inequalities in the optimization problem?

A. An objective
Incorrect. The objective is the quantity that is minimized. You could include an extra objective term that penalizes how far the parameter value is outside of the range, but this isn't as efficient as including bounds on the parameter values.
B. Bounds
Correct. Bounds are simple constraints on the range of a single variable. This is the most efficient way to include the parameter constraints.
C. An inequality constraint
Incorrect. It is typically easier for a solver to find a solution when a bound is placed on a variable because the bound limits where the solver searches. If you add the inequality as a general constraint equation, the solver must check solution feasibility at each iteration. Because this is a simple inequality on a single parameter, it is better to impose upper and lower bounds on that parameter.
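For illustration, here is a minimal sketch of passing bounds to `scipy.optimize.minimize` as one `(lower, upper)` pair per parameter; the measurements and one-parameter model below are hypothetical placeholders, not part of the quiz.

```python
import numpy as np
from scipy.optimize import minimize

z = np.array([1.2, 2.1, 2.9])  # hypothetical measured values

def objective(p):
    x = p[0] * np.array([1.0, 2.0, 3.0])  # hypothetical model x(p)
    return np.sum((x - z)**2)             # sum of squared errors

# bound 0 <= p <= 5 instead of adding a general inequality constraint
solution = minimize(objective, x0=[1.0], bounds=[(0.0, 5.0)])
print(solution.x)
```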

2. What role might fitting an FOPDT model using a graphical method play in the process of fitting a model using an optimization method? Choose the best answer.

A. It can provide a curve to compare with the final fitted result
Incorrect. While true, this comparison can only be made after the optimization is complete and is not part of fitting with the optimization method.
B. There is no point in using both, as the optimization method is more accurate
Incorrect. The optimization method is more accurate, but the graphical method provides a good set of initial guess values.
C. The graphical method could prove to be more accurate
Incorrect. The chance that a graphical fit is more precise and accurate than an optimizer's solution is negligible.
D. The graphical method can provide an initial guess for the optimizer
Correct. The values from the graphical method will likely ensure that the initial guess values are feasible and will improve solver performance.
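As a rough sketch of this idea, the numbers below are hypothetical values read off a step-test plot; they seed the optimizer's initial guess rather than serve as the final answer.

```python
# hypothetical values read graphically from a step-test plot
dy = 22.0       # total change in the output at the new steady state
du = 30.0       # size of the input step
Km0 = dy / du   # gain estimate: output change / input change
taum0 = 150.0   # time to reach 63.2% of the output change
thetam0 = 10.0  # delay before the output begins to respond

x0 = [Km0, taum0, thetam0]  # initial guess for the optimizer
```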

3. Run the FOPDT example with the three initial guess values for gain, time constant, and dead time set to one, and select the resulting initial SSE objective from the options below.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy.optimize import minimize
from scipy.interpolate import interp1d

# Import CSV data file
url = 'http://apmonitor.com/pdc/uploads/Main/tclab_data3.txt'
data = pd.read_csv(url)
t = data['Time'].values
u = data['Q1'].values
yp = data['T1'].values
u0=u[0]; yp0=yp[0]

# specify number of steps
ns = len(t)
delta_t = t[1]-t[0]
# create linear interpolation of the u data versus time
uf = interp1d(t,u)

# define first-order plus dead-time approximation    
def fopdt(y,t,uf,Km,taum,thetam):
    # arguments
    #  y      = output
    #  t      = time
    #  uf     = input linear function (for time shift)
    #  Km     = model gain
    #  taum   = model time constant
    #  thetam = model dead time
    # time-shift u
    try:
        if (t-thetam) <= 0:
            um = uf(0.0)
        else:
            um = uf(t-thetam)
    except Exception:
        #print('Error with time extrapolation: ' + str(t))
        um = u0
    # calculate derivative
    dydt = (-(y-yp0) + Km * (um-u0))/taum
    return dydt

# simulate FOPDT model with x=[Km,taum,thetam]
def sim_model(x):
    # input arguments
    Km = x[0]
    taum = x[1]
    thetam = x[2]
    # storage for model values
    ym = np.zeros(ns)  # model
    # initial condition
    ym[0] = yp0
    # loop through time steps    
    for i in range(0,ns-1):
        ts = [t[i],t[i+1]]
        y1 = odeint(fopdt,ym[i],ts,args=(uf,Km,taum,thetam))
        ym[i+1] = y1[-1]
    return ym

# define objective
def objective(x):
    # simulate model
    ym = sim_model(x)
    # calculate objective
    obj = 0.0
    for i in range(len(ym)):
        obj = obj + (ym[i]-yp[i])**2    
    # return result
    return obj

# initial guesses
x0 = np.zeros(3)
x0[0] = 1.0 # Km
x0[1] = 120.0 # taum
x0[2] = 5.0 # thetam
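# (for the quiz, change all three initial guesses above to 1.0 and re-run)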

# show initial objective
print('Initial SSE Objective: ' + str(objective(x0)))

# optimize Km, taum, thetam
#solution = minimize(objective,x0)

# Solve with bounds on variables
bnds = ((0.4, 1.0), (50.0, 250.0), (0.0, 30.0))
solution = minimize(objective,x0,bounds=bnds,method='SLSQP')
x = solution.x

# show final objective
print('Final SSE Objective: ' + str(objective(x)))
```
A. 727922.66
Correct. Note that this initial guess is worse than the original code values (Answer B), so the solver requires more time and iterations to find the solution.
B. 67972.327
Incorrect. This is the initial SSE objective with the original example guesses; change the values in the `x0` array to 1.0 and run the code.
C. 25.153234
Incorrect. Change the values in the `x0` array from the example and run the code.
D. 15.23
Incorrect. Change the values in the `x0` array from the example and run the code.

As a follow-up, minimize the sum of squared error and determine the parameter values that provide the best fit. An Excel sheet shows how to optimize gain and time constant.
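As a minimal sketch that reuses the variables already defined in the example above (`x`, `sim_model`, `t`, `yp`, and `plt`), the optimized parameters and the resulting fit can be reported and plotted:

```python
# report the optimized FOPDT parameters (continues the example above)
print('Kp: ' + str(x[0]))
print('taup: ' + str(x[1]))
print('thetap: ' + str(x[2]))

# compare the measured data with the fitted FOPDT response
ym = sim_model(x)
plt.figure()
plt.plot(t, yp, 'b.', label='Measured T1')
plt.plot(t, ym, 'r-', label='FOPDT fit')
plt.xlabel('Time')
plt.ylabel('T1')
plt.legend()
plt.show()
```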
