Reliable Computations

Transfer Function

Transfer function models, when expressed in terms of expanded polynomials, tend to be inherently ill-conditioned representations of LTI systems. For systems of order greater than 10, or with very large/small polynomial coefficients, difficulties can be encountered with functions like roots, conv, bode, step, or conversion functions like ss or zpk.

A major difficulty is the extreme sensitivity of the roots of a polynomial to its coefficients. This example, adapted from Wilkinson [6], serves as an illustration. Consider the transfer function

    H(s) = 1 / ((s+1)(s+2)...(s+20))
         = 1 / (s^20 + 210 s^19 + ... + 20!)

The A matrix of the companion realization of H(s) carries the negated denominator coefficients in its first row, with ones on the subdiagonal and zeros elsewhere.

Despite the benign-looking poles of the system (at -1, -2, ..., -20), you are faced with a rather large range in the elements of A, from 1 to 20! (about 2.4e18). But the difficulties don't stop here. Suppose the coefficient of s^19 in the transfer function (or A(1,1)) is perturbed from 210 to 210 + 2^-23, a change of about 1.2e-7 that is exactly representable in IEEE double precision. Then the poles of the perturbed transfer function (equivalently, the eigenvalues of A), computed in double precision, are scattered: several of the larger poles move by distances of order 1, and some become complex-conjugate pairs.

The problem here is not roundoff. Rather, high-order polynomials are simply intrinsically very sensitive, even when the zeros are well separated. In this case, a relative perturbation on the order of 10^-10 in a single coefficient induced relative perturbations on the order of 10^-1 in some roots. But some of the roots changed very little. This is true in general: different roots have different sensitivities to different perturbations. Computed roots may therefore be quite meaningless for a polynomial with imprecisely known coefficients, particularly a high-order one.
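The perturbation experiment is easy to reproduce. A minimal sketch (the variable names p and r are illustrative, not from the original text):

```matlab
% Expanded denominator of H(s): roots at -1, -2, ..., -20
p = poly(-(1:20));

% Perturb the s^19 coefficient (p(2) = 210) by 2^-23
p(2) = p(2) + 2^-23;

% Several of the computed roots acquire imaginary parts of order 1
r = roots(p);
max(abs(imag(r)))
```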

Finding all the roots of a polynomial (equivalently, the poles of a transfer function or the eigenvalues of a matrix in controllable or observable canonical form) is often an intrinsically sensitive problem. For a clear and detailed treatment of the subject, including the tricky numerical problem of deflation, consult [6].

It is therefore preferable to work with the factored form of polynomials when available. To compute a state-space model of the transfer function H(s) defined above, for example, you could expand the denominator of H, convert the transfer function model to state space, and extract the state-space data:
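A sketch of this route (the names h1, a1, b1, c1, d1 are assumptions; ssdata is used with its standard output arguments):

```matlab
% Expand the denominator explicitly, then convert tf -> ss
h1 = tf(1, poly(-(1:20)));          % poles at -1, ..., -20
[a1, b1, c1, d1] = ssdata(h1);      % companion-like state matrix a1
```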

However, you should instead keep the denominator in factored form and work with the zero-pole-gain representation of H(s).
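The factored route keeps the poles explicit; a sketch (h2 is a hypothetical name, while a2 is the matrix referred to below):

```matlab
% Zero-pole-gain model: no zeros, poles at -1, ..., -20, gain 1
h2 = zpk([], -(1:20), 1);
[a2, b2, c2, d2] = ssdata(h2);
```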

Indeed, the resulting state matrix a2 is better conditioned.

and the conversion from zero-pole-gain to state space incurs no loss of accuracy in the poles.
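This can be checked directly, assuming a2 from the zero-pole-gain conversion above:

```matlab
% The poles of the zpk-derived realization are recovered essentially
% exactly by the eigenvalue computation
sort(eig(a2))      % -20, -19, ..., -1 to machine precision
```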

There is another difficulty with transfer function models when realized in state-space form with ss. They may give rise to badly conditioned eigenvector matrices, even if the eigenvalues are well separated. For example, consider the normal matrix

Its eigenvectors v and eigenvalues can be computed with eig(A).

The condition number (with respect to inversion) of the eigenvector matrix is 1: a normal matrix has a complete set of orthonormal eigenvectors.

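Since the original matrix is not reproduced here, the effect can be sketched with a stand-in normal matrix (an assumption, not the document's example):

```matlab
% Any normal matrix with distinct eigenvalues has orthonormal
% eigenvectors, so its eigenvector matrix has condition number 1
A = [1 1; -1 1];          % normal: A*A' equals A'*A
[v, d] = eig(A);
cond(v)                   % = 1 up to roundoff
```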
Now convert a state-space model with the above A matrix to transfer function form, and back again to state-space form.
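A sketch of the round trip (b and c are hypothetical input and output vectors; the original example's values are not shown here):

```matlab
% ss -> tf -> ss round trip through the polynomial representation
b = [1; 0];  c = [1 1];
sys = ss(A, b, c, 0);        % A is the normal matrix above
Ac  = ssdata(ss(tf(sys)))    % companion-like, balanced during conversion
```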

Call the resulting state matrix Ac.

Note that Ac is not a standard companion matrix and has already been balanced as part of the ss conversion (see ssbal for details).

Note also that the eigenvectors have changed.

The condition number of the new eigenvector matrix is about thirty times larger.
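Assuming v and Ac from the computations above, the degradation can be measured as:

```matlab
[vc, dc] = eig(Ac);          % eigenvectors of the round-trip matrix
cond(vc) / cond(v)           % the text reports roughly a 30x increase
```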

The phenomenon illustrated above is not unusual. Matrices in companion form or controllable/observable canonical form (like Ac) typically have worse-conditioned eigensystems than matrices in general state-space form (like A). This means that their eigenvalues and eigenvectors are more sensitive to perturbation. The problem generally gets far worse for higher-order systems. Working with high-order transfer function models and converting them back and forth to state space is numerically risky.

In summary, the main numerical problems to be aware of in dealing with transfer function models (and hence, calculations involving polynomials) are:

- The roots of a polynomial, and therefore the poles and zeros of the model, can be extremely sensitive to small perturbations of the polynomial coefficients.
- Companion and controllable/observable canonical realizations derived from transfer functions tend to have badly scaled entries and ill-conditioned eigensystems.

The above statements hold even for systems with distinct poles, but are particularly relevant when poles are multiple.

