An algorithm is described which iteratively solves for the coefficients of successively higher-order, least-squares polynomial fits in terms of the results for the previous, lower-order polynomial fit. The technique takes advantage of the special properties of the least-squares, or Hankel, matrix, for which A_{i,j} = A_{i+1,j-1}. Only the first and last column vectors of the inverse matrix are needed at each stage to continue the iteration to the next higher stage.
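The Hankel structure arises because each entry of the normal matrix for a polynomial fit is a power sum of the abscissas that depends only on the index sum i + j. A minimal sketch (in Python, not the paper's Fortran; `normal_matrix` is an illustrative name, not from the source):

```python
# Illustrative sketch of the Hankel structure of the least-squares
# normal matrix for a polynomial fit of degree n.

def normal_matrix(xs, n):
    """Build the (n+1) x (n+1) normal matrix A with
    A[i][j] = sum_k xs[k]**(i+j).

    Each entry depends only on i + j, so the matrix is Hankel:
    A[i][j] == A[i+1][j-1].  Raising the fit from degree n to n+1
    therefore requires only the two new moments sum x**(2n+1) and
    sum x**(2n+2), which is what makes the iteration cheap.
    """
    moments = [sum(x ** p for x in xs) for p in range(2 * n + 1)]
    return [[moments[i + j] for j in range(n + 1)] for i in range(n + 1)]
```

For example, for xs = [1.0, 2.0, 3.0] and n = 2 the matrix is built from the five moments 3, 6, 14, 36, 98, and A[0][2] equals A[1][1] as the Hankel property requires.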
An analogous procedure may be used to invert such least-squares-type matrices: the inverse of each square submatrix is determined from the inverse of the previous, lower-order submatrix.
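One standard way to realize such a submatrix-to-submatrix update is the bordering (block-inverse) identity based on the Schur complement; the sketch below uses that identity for illustration and is not necessarily the paper's exact recursion (`bordered_inverse` is a hypothetical name):

```python
# Extending the inverse of an n x n leading submatrix A to the inverse
# of the bordered (n+1) x (n+1) matrix [[A, b], [b^T, d]], via the
# Schur complement s = d - b^T A^{-1} b.  Illustrative, not the
# paper's Fortran.

def bordered_inverse(Ainv, b, d):
    n = len(Ainv)
    # u = A^{-1} b
    u = [sum(Ainv[i][j] * b[j] for j in range(n)) for i in range(n)]
    # Schur complement of A in the bordered matrix
    s = d - sum(b[i] * u[i] for i in range(n))
    # Block-inverse formula: top-left block is A^{-1} + u u^T / s
    top = [[Ainv[i][j] + u[i] * u[j] / s for j in range(n)]
           for i in range(n)]
    new = [row + [-u[i] / s] for i, row in enumerate(top)]
    new.append([-u[j] / s for j in range(n)] + [1.0 / s])
    return new
```

Starting from the 1 x 1 case and applying the update once per order, the inverse of each leading submatrix is obtained from its predecessor without refactoring the full matrix.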
The results using this algorithm are compared with the method of fitting orthogonal polynomials to data points. While the latter method gives higher accuracy when high-order polynomials are fitted to the data, it requires many more computations. The increased accuracy of the orthogonal-polynomial fit is valuable when high precision of fitting is required; however, for experimental data with inherent inaccuracies, the added computations outweigh the possible benefit derived from the more accurate fitting.
A Fortran listing of the algorithm is given.