Subspace Methods for Estimating State-Space Models

The state-space matrices A, B, C, D, and K in (3-23) can be estimated directly by efficient subspace methods, without first specifying any particular parametrization. The idea behind this can be explained as follows: if the sequence of state vectors x(t) were known, together with y(t) and u(t), Eq. (3-23) would be a linear regression, and C and D could be estimated by the least squares method. Then e(t) could be determined and treated as a known signal in (3-23), which would then be another linear regression model for A, B, and K. (One could also treat (3-21) as a linear regression for A, B, C, and D with y(t) and x(t+1) as simultaneous outputs, and find the joint process and measurement noises as the residuals of this regression. The Kalman gain K could then be computed from the Riccati equation.) Thus, once the states are known, the estimation of the state-space matrices is easy.
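To make the two regression steps concrete, the following MATLAB sketch carries them out on simulated data under the (artificial) assumption that the state sequence x(t) is known. The system matrices, signal lengths, and variable names are chosen only for illustration; this is a conceptual sketch, not how the toolbox's subspace commands are implemented.

    % Simulate assumed example data from a known innovations-form model (3-23)
    nx = 2; nu = 1; ny = 1; N = 200;
    A0 = [0.7 0.2; 0 0.5]; B0 = [1; 0.5]; C0 = [1 0]; D0 = 0; K0 = [0.1; 0.1];
    u = randn(nu,N); e = 0.1*randn(ny,N);
    x = zeros(nx,N+1); y = zeros(ny,N);
    for t = 1:N
        y(:,t)   = C0*x(:,t) + D0*u(:,t) + e(:,t);
        x(:,t+1) = A0*x(:,t) + B0*u(:,t) + K0*e(:,t);
    end
    X = x(:,1:N);                    % the "known" state sequence x(t), t = 1..N
    % Step 1: y(t) = C*x(t) + D*u(t) + e(t) is a linear regression for C and D
    Phi = [X; u];                    % regressors [x(t); u(t)]
    CD  = y/Phi;                     % least squares solution of CD*Phi ~= y
    C   = CD(:,1:nx);  D = CD(:,nx+1:end);
    E   = y - CD*Phi;                % residuals, used as estimates of e(t)
    % Step 2: x(t+1) = A*x(t) + B*u(t) + K*e(t) is a regression for A, B, and K
    Psi = [X; u; E];
    ABK = x(:,2:N+1)/Psi;
    A   = ABK(:,1:nx);  B = ABK(:,nx+1:nx+nu);  K = ABK(:,nx+nu+1:end);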

How are the states x(t) found? All states in representations like (3-23) can be formed as linear combinations of the k-step-ahead predicted outputs (k = 1, 2, ..., n). It is thus a matter of finding these predictors and then selecting a basis among them. The subspace methods form an efficient and numerically reliable way of determining the predictors by projections directly on the observed data sequences. See Sections 7.3 and 10.6 in Ljung (1999). For more details, see the references under n4sid in the "Command Reference" chapter.
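As a short usage sketch, a state-space model of a chosen order (here 3) can be estimated directly with the subspace command n4sid. The variables y, u, and Ts are assumed to hold the measured output, the measured input, and the sample time.

    data  = iddata(y, u, Ts);   % package the measured input-output data
    model = n4sid(data, 3);     % subspace estimate of a 3rd-order state-space model

The estimated matrices are then available in the returned model object (for example, model.A, model.B, model.C, model.D, and model.K).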

