As part of my ongoing exercise of Pioneer orbit calculations, I am using a polynomial extrapolation technique (more precisely, a convergence acceleration technique) called Richardson extrapolation.

The basic idea is this: we compute something (for instance, a numerical integral) using an iteration step of size $h_1$. We then repeat the computation several times using step sizes $h_2$, $h_3$, ..., $h_n$, labeling the successive results $p_1$, $p_2$, ..., $p_n$. We now seek the polynomial $P$ of degree $n-1$ such that $P(h_i)=p_i$. We then evaluate $P(0)$, which gives an estimate of the result that we would get, had we been able to use a step size of 0.
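To illustrate the idea, here is a minimal numerical sketch in Python (not part of the orbit calculation itself; the integrand, interval, and step sizes below are arbitrary choices for demonstration):

```python
import numpy as np

# Integrate sin(x) over [0, pi] (exact value: 2) with the trapezoid
# rule at several step sizes, each half the previous one.
def trapezoid(n):
    x = np.linspace(0.0, np.pi, n + 1)
    y = np.sin(x)
    h = np.pi / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

ns = (4, 8, 16, 32)
hs = np.array([np.pi / n for n in ns])   # the step sizes h_i
ps = np.array([trapezoid(n) for n in ns])  # the results p_i

# Fit the unique polynomial P of degree n-1 through the points (h_i, p_i)...
coeffs = np.polyfit(hs, ps, len(hs) - 1)

# ...and evaluate it at h = 0: the Richardson-extrapolated estimate.
P0 = np.polyval(coeffs, 0.0)
print(P0)
```

The extrapolated value lands much closer to the exact result than even the finest-step trapezoid sum on its own.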

In other words, we seek $a_1$, $a_2$, ..., $a_n$ such that

\begin{pmatrix}
1&h_1&h_1^2&\cdots&h_1^{n-1}\\
1&h_2&h_2^2&\cdots&h_2^{n-1}\\
\vdots&\vdots&\vdots&\ddots&\vdots\\
1&h_n&h_n^2&\cdots&h_n^{n-1}
\end{pmatrix}
\begin{pmatrix}
a_1\\
a_2\\
\vdots\\
a_n
\end{pmatrix}
=
\begin{pmatrix}
p_1\\
p_2\\
\vdots\\
p_n
\end{pmatrix}.

Naturally, the solution of this equation can be obtained by straight matrix inversion. However, matrix inversion is computationally costly. Under some circumstances, it may be possible to avoid matrix inversion altogether. One such case is when successive intervals are halved (or multiplied by any other factor $t$), such that $h_1=th_0$, $h_2=th_1$, etc. In this case, we have (writing $h=h_0$):

\begin{pmatrix}
1&th&(th)^2&\cdots&(th)^{n-1}\\
1&t^2h&(t^2h)^2&\cdots&(t^2h)^{n-1}\\
\vdots&\vdots&\vdots&\ddots&\vdots\\
1&t^nh&(t^nh)^2&\cdots&(t^nh)^{n-1}
\end{pmatrix}
\begin{pmatrix}
a_1\\
a_2\\
\vdots\\
a_n
\end{pmatrix}
=
\begin{pmatrix}
p_1\\
p_2\\
\vdots\\
p_n
\end{pmatrix}.

The matrix $M$ on the left-hand side of this equation can be triangularized. Its element in the $i$th row and $j$th column is

M_{ij}=(t^ih)^{j-1}.
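Before triangularizing, note that such a system can of course be solved by the general-purpose route mentioned above. A NumPy sketch, with arbitrary illustrative values for $t$, $h$, and the $p_i$:

```python
import numpy as np

n, t, h = 4, 0.5, 1.0
p = np.array([1.90, 1.97, 1.99, 1.998])   # illustrative p_i values

# M_ij = (t^i h)^(j-1), with i and j running from 1 to n
M = np.array([[(t**i * h)**(j - 1) for j in range(1, n + 1)]
              for i in range(1, n + 1)])

# General-purpose elimination, O(n^3) per solve; this is the cost
# that exploiting the structure of M lets us reduce.
a = np.linalg.solve(M, p)
print(a[0])   # a_1 = P(0)
```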

We can now form the rows of a triangularized matrix $M^*$ as follows:

\begin{align}
M'_{kj}&=M_{kj}-M_{1j}~~~&(k=2,...,n),\\
M''_{kj}&=M'_{kj}-\frac{M'_{k2}}{M'_{22}}M'_{2j}~~~&(k=3,...,n),\\
M'''_{kj}&=M''_{kj}-\frac{M''_{k3}}{M''_{33}}M''_{3j}~~~&(k=4,...,n),\\
&...,\\
M^{(l)}_{kj}&=M^{(l-1)}_{kj}-\frac{M^{(l-1)}_{kl}}{M^{(l-1)}_{ll}}M^{(l-1)}_{lj}~~~&(k=l+1,...,n).
\end{align}

where we used the notation $M^{(0)}=M$, $M^{(1)}=M'$, $M^{(2)}=M''$, etc. Then:

M^*_{kj}=M^{(k-1)}_{kj}.

Correspondingly, we must also adjust the right-hand side. Using similar notation, with $p^{(0)}=p$, $p^{(1)}=p'$, etc.:

\begin{align}
p^{(l)}_k&=p^{(l-1)}_k-\frac{M^{(l-1)}_{kl}}{M^{(l-1)}_{ll}}p^{(l-1)}_l~~~&(k=l+1,...,n),\\
p^*_k&=p^{(k-1)}_k.
\end{align}

The solution for $a_i$ can then be written in the form:

\begin{align}
a_n&=\frac{p^*_n}{M^*_{nn}},\\
a_{n-1}&=\frac{p^*_{n-1}-M^*_{(n-1)n}a_n}{M^*_{(n-1)(n-1)}},\\
a_k&=\frac{p^*_k-\sum\limits_{l=k+1}^nM^*_{kl}a_l}{M^*_{kk}}.
\end{align}
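The elimination and back-substitution steps above can be sketched numerically in Python (again with arbitrary illustrative values for $t$, $h$, and the $p_i$):

```python
import numpy as np

n, t, h = 4, 0.5, 1.0
p = np.array([1.90, 1.97, 1.99, 1.998])   # illustrative right-hand side
M = np.array([[(t**i * h)**(j - 1) for j in range(1, n + 1)]
              for i in range(1, n + 1)])

# Forward elimination: for each pivot row l, subtract multiples of it
# from the rows below, adjusting the right-hand side in step.
Ms, ps = M.copy(), p.copy()
for l in range(n - 1):
    for k in range(l + 1, n):
        f = Ms[k, l] / Ms[l, l]
        ps[k] -= f * ps[l]
        Ms[k] -= f * Ms[l]

# Back substitution on the triangularized system, from a_n down to a_1.
a = np.zeros(n)
for k in range(n - 1, -1, -1):
    a[k] = (ps[k] - Ms[k, k + 1:] @ a[k + 1:]) / Ms[k, k]

print(a[0])   # P(0), the extrapolated result
```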

Finally, we have

P(0)=a_1,

which is the result we sought. Simultaneously, we could also evaluate $P(h)$, which could then be compared against the next numerical iteration to see how well our polynomial approximation predicts its value.

These calculations can be trivially implemented using computer algebra code, such as the following lines in Maxima:

N:5$ t:2$                        /* number of points and step factor */
M:zeromatrix(N,N)$               /* the matrix M[i,j]=(t^i h)^(j-1) */
for i thru N do for j thru N do M[i,j]:(t^i*h)^(j-1)$
P:zeromatrix(N,1)$               /* the right-hand side p_1,...,p_N */
for i thru N do P[i,1]:concat(p,i)$
/* forward elimination of the rows and the right-hand side */
for k from 2 thru N do for i from k thru N do (
  P[i]:P[i]-P[k-1]*M[i,k-1]/M[k-1,k-1],
  M[i]:M[i]-M[k-1]*M[i,k-1]/M[k-1,k-1]
)$
M;                               /* the triangularized matrix */
factor(P);                       /* the adjusted right-hand side */
factor((M^^-1).P);               /* the solution vector a */
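For comparison, a similar symbolic computation can be sketched in Python with SymPy (using $N=4$ to keep the expressions small; note that the extrapolated value $a_1=P(0)$ comes out independent of $h$, since the factors of $h$ cancel in the corresponding Lagrange weights):

```python
import sympy as sp

N, t = 4, 2
h = sp.symbols('h', positive=True)
p = sp.symbols('p1:5')            # the symbols p1, p2, p3, p4

# M_ij = (t^i h)^(j-1) for i, j = 1..N (SymPy indices are 0-based)
M = sp.Matrix(N, N, lambda i, j: (t**(i + 1) * h)**j)

a = M.LUsolve(sp.Matrix(p))       # solve M a = p symbolically
P0 = sp.simplify(a[0])            # a_1 = P(0), a combination of the p_i
print(P0)
```

As a sanity check, feeding in $p_i$ sampled from a known cubic reproduces that cubic's value at 0 exactly.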