In computational mathematics, an iterative method attempts to solve a problem (for example an equation or system of equations) by finding successive approximations to the solution starting from an initial guess. This approach is in contrast to direct methods, which attempt to solve the problem by a finite sequence of operations, and, in the absence of rounding errors, would deliver an exact solution (like solving a linear system of equations Ax = b by Gaussian elimination). Iterative methods are usually the only choice for nonlinear equations. However, iterative methods are often useful even for linear problems involving a large number of variables (sometimes of the order of millions), where direct methods would be prohibitively expensive and in some cases impossible even with the best available computing power.

## Newton's method

One of the most familiar iterative methods is Newton's method, also known as the Newton–Raphson method after Isaac Newton and Joseph Raphson. Given a differentiable function f and a current approximation x_n to one of its roots, it forms the next approximation x_{n+1} = x_n - f(x_n)/f'(x_n); started sufficiently close to a simple root, the sequence converges quadratically.
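A minimal sketch of the iteration in Python (the function name `newton` and the tolerance choices are illustrative, not from the original):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: refine x by x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Approximate sqrt(2) as the positive root of f(x) = x**2 - 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # ~1.4142135623730951
```

Each iteration roughly doubles the number of correct digits, which is why only a handful of steps are needed here.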

## Attractive fixed points

If an equation can be put into the form f(x) = x, and a solution x is an attractive fixed point of the function f, then one may begin with a point x1 in the basin of attraction of x and let xn+1 = f(xn) for n ≥ 1; the sequence {xn} will converge to the solution x.
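As a small illustration of the iteration xn+1 = f(xn) (the helper name and tolerance are illustrative assumptions):

```python
import math

def fixed_point_iteration(f, x1, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) from a starting point x1 in the basin of attraction."""
    x = x1
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# cos has an attractive fixed point (cos(x) = x), so the iteration converges
# from any starting point; the limit is sometimes called the Dottie number.
x = fixed_point_iteration(math.cos, x1=1.0)
print(x)  # ~0.7390851332151607
```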

## Linear systems

In the case of a system of linear equations, the two main classes of iterative methods are the stationary iterative methods, and the more general Krylov subspace methods.

### Stationary iterative methods

Stationary iterative methods solve a linear system with an operator approximating the original one; based on a measurement of the error (the residual), they form a correction equation, and this process is repeated. While these methods are simple to derive, implement, and analyse, convergence is only guaranteed for a limited class of matrices. Examples of stationary iterative methods are the Jacobi method and the Gauss–Seidel method.
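A sketch of the Jacobi method in plain Python (the dense list-of-rows representation and stopping rule are illustrative; in practice such methods are applied to large sparse matrices):

```python
def jacobi(A, b, tol=1e-10, max_iter=1000):
    """Jacobi iteration for Ax = b, with A as a list of rows.

    Convergence is guaranteed, e.g., when A is strictly diagonally dominant.
    """
    n = len(A)
    x = [0.0] * n
    for _ in range(max_iter):
        # Each component is updated using only the previous iterate.
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# A strictly diagonally dominant system; the exact solution is (1, 2).
A = [[4.0, 1.0], [2.0, 5.0]]
b = [6.0, 12.0]
x = jacobi(A, b)
```

Gauss–Seidel differs only in that each updated component is used immediately within the same sweep, which typically speeds convergence.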

### Krylov subspace methods

Krylov subspace methods form an orthogonal basis of the sequence of successive matrix powers times the initial residual (the Krylov sequence). The approximations to the solution are then formed by minimizing the residual over the subspace formed. The prototypical method in this class is the conjugate gradient method (CG); other methods include the generalized minimal residual method (GMRES) and the biconjugate gradient method (BiCG).
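A compact sketch of the conjugate gradient method in plain Python (the dense representation and tolerance are illustrative assumptions; CG requires A to be symmetric positive-definite):

```python
def conjugate_gradient(A, b, tol=1e-10):
    """Conjugate gradient for Ax = b with symmetric positive-definite A."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x (x starts at zero)
    p = r[:]                      # search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(n):            # in exact arithmetic, at most n steps
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# A symmetric positive-definite system; the exact solution is (1, 2).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = conjugate_gradient(A, b)
```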

### Convergence

Since these methods form a basis, the method converges in at most N iterations, where N is the system size. However, in the presence of rounding errors this statement does not hold; moreover, in practice N can be very large, and the iterative process often reaches sufficient accuracy much earlier. The analysis of these methods is hard, depending on a complicated function of the spectrum of the operator.

### Preconditioners

The approximating operator that appears in stationary iterative methods can also be incorporated in Krylov subspace methods such as GMRES (alternatively, preconditioned Krylov methods can be considered as accelerations of stationary iterative methods), where it becomes a transformation of the original operator into a presumably better-conditioned one. The construction of preconditioners is a large research area.

### History

Probably the first iterative method for solving a linear system appeared in a letter of Gauss to a student of his. He proposed solving a 4-by-4 system of equations by repeatedly solving the component in which the residual was the largest.

The theory of stationary iterative methods was solidly established with the work of D. M. Young starting in the 1950s. The conjugate gradient method was also invented in the 1950s, with independent developments by Cornelius Lanczos, Magnus Hestenes and Eduard Stiefel, but its nature and applicability were misunderstood at the time. Only in the 1970s was it realized that conjugacy-based methods work very well for partial differential equations, especially the elliptic type.