In the mathematical subfield of numerical analysis, numerical stability is a desirable property of numerical algorithms. The precise definition of stability depends on the context, but it is related to the accuracy of the algorithm.
Sometimes a single calculation can be achieved in several ways, all of which are algebraically equivalent in terms of ideal real or complex numbers, but in practice when performed on digital computers yield different results. Some calculations might damp out approximation errors that occur; others might magnify such errors. Calculations that do not magnify approximation errors are called numerically stable. One of the common tasks of numerical analysis is to try to select algorithms which are robust — that is to say, have good numerical stability.
As an example of an unstable algorithm, consider the task of adding an array of 100 numbers. To simplify things, assume our computer has only two significant digits of precision (so, for example, numbers in the hundreds can only be represented as 100, 110, 120, etc.).
The obvious way to do this would be the following pseudo-code:
sum = 0
for i = 1 to 100 do
    sum = sum + a[i]
end
That looks reasonable, but assume the first element in the array is 1.0 and the other 99 elements are 0.01. In pure math, the answer would be 1.99. However, on our two-digit computer, once the 1.0 was added into the sum variable, adding in 0.01 would have no effect on the sum, and so the final answer would be 1.0 – not a very good approximation of the real answer.
A stable algorithm would first sort the array by the absolute values of the elements in ascending order. This ensures that the numbers closest to zero will be taken into consideration first. Once that change is made, all of the 0.01 elements will be added, giving 0.99, and then the 1.0 element will be added, yielding a rounded result of 2.0 – a much better approximation of the real result.
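The effect can be reproduced in ordinary Python by simulating a machine that keeps only two significant digits (the rounding helper below is an illustrative stand-in for the hypothetical two-digit computer, not part of the pseudocode above):

```python
def fl(x):
    """Round x to two significant digits, simulating the two-digit machine."""
    return float(f"{x:.1e}")

data = [1.0] + [0.01] * 99  # true sum is 1.99

# Naive order: the 1.0 swamps every subsequent 0.01.
naive = 0.0
for v in data:
    naive = fl(naive + v)

# Stable order: sort by absolute value so the small terms accumulate first.
stable = 0.0
for v in sorted(data, key=abs):
    stable = fl(stable + v)

print(naive)   # 1.0
print(stable)  # 2.0
```

Summing in ascending order lets the ninety-nine 0.01 terms build up to 0.99 before the 1.0 is added, exactly as described above.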
There are different ways to formalize the concept of stability. The following definitions of forward, backward, and mixed stability are often used in numerical linear algebra.
Consider the problem to be solved by the numerical algorithm as a function f mapping the data x to the solution y. The result of the algorithm, say y*, will usually deviate from the "true" solution y. The main causes of error are round-off error, truncation error and data error. The forward error of the algorithm is the difference between the result and the solution; in this case, Δy = y* − y. The backward error is the smallest Δx such that f(x + Δx) = y*; in other words, the backward error tells us what problem the algorithm actually solved. The forward and backward error are related by the condition number: the forward error is at most as big in magnitude as the condition number multiplied by the magnitude of the backward error.
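A quick illustration of how a large condition number amplifies a tiny backward error is the subtraction of nearly equal numbers (the specific value x = 1e-12 below is chosen for illustration). Rounding the inputs perturbs them by at most one unit roundoff, about 1.1e-16, yet the relative forward error of the result is enormous because the problem itself is ill-conditioned:

```python
x = 1e-12
# Computing f(x) = (1 + x) - 1 in double precision. Exactly, f(x) = x.
computed = (1.0 + x) - 1.0

# The backward error is on the order of the unit roundoff (~1.1e-16),
# but the condition number of this subtraction is about 1e12, so the
# relative forward error ends up around 1e-4.
rel_forward_error = abs(computed - x) / x
print(computed)           # about 1.0000889e-12, not 1e-12
print(rel_forward_error)  # about 8.9e-5
```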
In many cases, it is more natural to consider the relative error |Δx| / |x| instead of the absolute error Δx.
The algorithm is said to be backward stable if the backward error is small for all inputs x. Of course, "small" is a relative term and its definition will depend on the context. Often, we want the error to be of the same order as, or perhaps only a few orders of magnitude bigger than, the unit round-off.
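For instance, floating-point square root is backward stable: the computed value is the exact square root of a slightly perturbed input. The check below (a sketch; the choice of x = 2.0 is arbitrary) measures that perturbation:

```python
import math

x = 2.0
y_star = math.sqrt(x)      # the computed result

# Backward error: find the input the algorithm solved exactly.
# y_star is exactly sqrt(x + dx) for dx = y_star**2 - x.
x_tilde = y_star * y_star
rel_backward_error = abs(x_tilde - x) / x

print(rel_backward_error)  # on the order of the unit roundoff, ~1e-16
```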
The usual definition of numerical stability uses a more general concept, called mixed stability, which combines the forward error and the backward error. An algorithm is stable in this sense if it solves a nearby problem approximately, i.e., if there exists a Δx such that both Δx is small and f(x + Δx) − y* is small. Hence, a backward stable algorithm is always stable.
An algorithm is forward stable if its forward error divided by the condition number of the problem is small. This means that an algorithm is forward stable if it has a forward error of magnitude similar to some backward stable algorithm.
The above definitions are particularly relevant in situations where truncation errors are not important. In other contexts, for instance when solving differential equations, a different definition of numerical stability is used.
In numerical ordinary differential equations, various concepts of numerical stability exist, for instance A-stability. They are related to some concept of stability in the dynamical systems sense, often Lyapunov stability. It is important to use a stable method when solving a stiff equation.
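The classic illustration (a sketch with an arbitrarily chosen decay rate and step size) is the stiff test equation y' = −50y, whose exact solution decays to zero. Explicit Euler with too large a step explodes, while implicit Euler decays as it should:

```python
# Stiff test problem: y' = -50*y, y(0) = 1; the exact solution decays to 0.
lam = -50.0
h = 0.1    # step size too large for the explicit method: |1 + h*lam| = 4
steps = 20

y_explicit = 1.0  # explicit (forward) Euler: y_{n+1} = (1 + h*lam) * y_n
y_implicit = 1.0  # implicit (backward) Euler: y_{n+1} = y_n / (1 - h*lam)
for _ in range(steps):
    y_explicit = (1.0 + h * lam) * y_explicit
    y_implicit = y_implicit / (1.0 - h * lam)

print(y_explicit)  # (-4)**20, about 1.1e12 -- the iteration explodes
print(y_implicit)  # 6**-20, essentially 0 -- correctly decays
```

Shrinking h below 0.04 would make explicit Euler stable here too, but for genuinely stiff systems that restriction is what makes implicit methods attractive.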
Yet another definition is used in numerical partial differential equations. An algorithm for solving an evolutionary partial differential equation is stable if the numerical solution at a fixed time remains bounded as the step size goes to zero. The Lax equivalence theorem states that an algorithm converges if it is consistent and stable (in this sense). Stability is sometimes achieved by including numerical diffusion, which ensures that roundoff and other errors in the calculation get spread out and do not add up to cause the calculation to "blow up".
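As a concrete sketch (the scheme choice, grid size, and step count are illustrative), consider the advection equation u_t + u_x = 0 on a periodic grid. The forward-in-time, centered-in-space (FTCS) scheme is unstable for this equation at any step size, while the first-order upwind scheme, which introduces numerical diffusion, keeps the solution bounded:

```python
N = 50
c = 0.5    # Courant number a*dt/dx; upwind is stable for c <= 1
steps = 300

# Initial condition: a unit spike, which contains every Fourier mode.
u_ftcs = [0.0] * N
u_ftcs[0] = 1.0
u_upwind = list(u_ftcs)

for _ in range(steps):
    # FTCS: u_i <- u_i - (c/2)*(u_{i+1} - u_{i-1}); unstable for any c > 0.
    # (u[i - 1] with i = 0 wraps to u[-1], giving periodic boundaries.)
    u_ftcs = [u_ftcs[i] - 0.5 * c * (u_ftcs[(i + 1) % N] - u_ftcs[i - 1])
              for i in range(N)]
    # Upwind: u_i <- (1 - c)*u_i + c*u_{i-1}, a convex combination, so the
    # maximum can never grow -- this is the numerical diffusion at work.
    u_upwind = [(1.0 - c) * u_upwind[i] + c * u_upwind[i - 1]
                for i in range(N)]

print(max(abs(v) for v in u_ftcs))    # huge: the FTCS solution has blown up
print(max(abs(v) for v in u_upwind))  # at most 1: bounded by the initial data
```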