In the biconjugate gradient method, the residual vector $r^{(i)}$ can be regarded as the product of $r^{(0)}$ and an $i$th degree polynomial in $A$, i.e.,

$$r^{(i)} = P_i(A)\, r^{(0)}. \tag{1}$$

This same polynomial satisfies

$$\tilde r^{(i)} = P_i(A^T)\, \tilde r^{(0)}, \tag{2}$$

so that

$$\begin{align}
\rho_i &= \bigl(\tilde r^{(i)},\, r^{(i)}\bigr) \tag{3}\\
&= \bigl(P_i(A^T)\, \tilde r^{(0)},\, P_i(A)\, r^{(0)}\bigr) \tag{4}\\
&= \bigl(\tilde r^{(0)},\, P_i^2(A)\, r^{(0)}\bigr). \tag{5}
\end{align}$$
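As a quick numerical illustration (a sketch added here, not part of the original derivation), one can pick a small random matrix $A$, vectors $r^{(0)}$ and $\tilde r^{(0)}$, and an arbitrary polynomial $P_i$, and check that the inner products in (4) and (5) agree:

```python
import numpy as np

# Sketch: verify (P_i(A^T) r~0, P_i(A) r0) == (r~0, P_i^2(A) r0)
# for an arbitrary cubic polynomial P_i with random coefficients.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
r0 = rng.standard_normal(n)      # initial residual r^(0)
rt0 = rng.standard_normal(n)     # shadow residual r~^(0)
coeffs = rng.standard_normal(4)  # coefficients c0 + c1*t + c2*t^2 + c3*t^3

def apply_poly(M, v, c):
    """Return P(M) v for the polynomial with coefficients c."""
    out = np.zeros_like(v)
    Mv = v.copy()
    for ck in c:
        out += ck * Mv
        Mv = M @ Mv
    return out

lhs = apply_poly(A.T, rt0, coeffs) @ apply_poly(A, r0, coeffs)  # eq. (4)
rhs = rt0 @ apply_poly(A, apply_poly(A, r0, coeffs), coeffs)    # eq. (5)
print(abs(lhs - rhs))  # agrees up to rounding error
```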
This suggests that if $P_i(A)$ reduces $r^{(0)}$ to a smaller vector $r^{(i)}$, then it might be advantageous to apply this "contraction" operator twice, and compute $P_i^2(A)\, r^{(0)}$. The iteration coefficients can still be recovered from these vectors (as shown above), and it turns out to be easy to find the corresponding approximations for $x$. This approach is the conjugate gradient squared (CGS) method (Sonneveld 1989).
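A minimal, unpreconditioned sketch of the resulting iteration is shown below; it follows the standard CGS recurrences, but the function name, stopping rule, and parameter defaults are illustrative rather than taken from the original text.

```python
import numpy as np

def cgs(A, b, x0=None, tol=1e-8, maxiter=200):
    """Unpreconditioned CGS sketch (illustrative helper, not a library routine)."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    rtilde = r.copy()          # shadow residual r~^(0); only A (never A^T) is applied
    rho_old = 1.0
    p = np.zeros(n)
    q = np.zeros(n)
    for i in range(maxiter):
        rho = rtilde @ r
        if rho == 0.0:
            raise RuntimeError("CGS breakdown: rho = 0")
        if i == 0:
            u = r.copy()
            p = u.copy()
        else:
            beta = rho / rho_old
            u = r + beta * q
            p = u + beta * (q + beta * p)
        v = A @ p
        alpha = rho / (rtilde @ v)
        q = u - alpha * v
        w = u + q
        x = x + alpha * w          # solution update
        r = r - alpha * (A @ w)    # recursively updated residual
        rho_old = rho
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x, r
```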
Often one observes a speed of convergence for CGS that is about twice as fast as for the biconjugate gradient method,
which is in agreement with the observation that the same "contraction"
operator is applied twice. However, there is no reason that the contraction operator,
even if it really reduces the initial residual $r^{(0)}$, should also reduce the once reduced vector
$r^{(i)} = P_i(A)\, r^{(0)}$. This is evidenced by the often highly
irregular convergence behavior of CGS. One should be aware of the fact that local
corrections to the current solution may be so large that cancellation effects occur.
This may lead to a less accurate solution than suggested by the updated residual
(van der Vorst 1992). The method tends to diverge if the starting guess is close
to the solution.
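One way to see this effect in practice (assuming the cgs sketch above and NumPy already imported) is to compare the recursively updated residual returned by the iteration with the true residual $b - Ax$; the two can differ when cancellation has occurred:

```python
# Illustrative check: updated residual vs. true residual after the iteration.
rng = np.random.default_rng(1)
n = 100
A = np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
x, r_updated = cgs(A, b, tol=1e-12)
print(np.linalg.norm(r_updated))   # norm of the recursively updated residual
print(np.linalg.norm(b - A @ x))   # norm of the true residual; may be larger
```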
CGS requires about the same number of operations per iteration as the biconjugate gradient method, but does not involve computations with $A^T$. Hence, in circumstances where computation with $A^T$ is impractical, CGS may be attractive.
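For example, when $A$ is available only as a routine that computes matrix-vector products, a transpose-free method such as CGS can still be applied. The sketch below uses SciPy's LinearOperator and cgs solver; the operator itself is a made-up example chosen only to illustrate that no $A^T$ products are needed:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cgs as scipy_cgs

n = 50
d = np.linspace(1.0, 2.0, n)

def matvec(v):
    # hypothetical matrix-free operator: diagonal part plus a cyclic-shift coupling
    return d * v + 0.3 * np.roll(v, 1)

A_op = LinearOperator((n, n), matvec=matvec)   # no rmatvec (A^T v) supplied
b = np.ones(n)
x, info = scipy_cgs(A_op, b)                   # CGS only ever applies A to vectors
print(info, np.linalg.norm(b - A_op.matvec(x)))
```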