# Notes

# 1. Absolute Error and Relative Error

• Absolute error = approximation - true value
• Relative error = absolute error / true value
• Precision refers to the number of digits carried in a number.
• Accuracy refers to the number of correct significant digits.
• If \pi is approximated by 3.252603764690804, the approximation is highly precise (16 digits) but not accurate (only the leading digit is correct).
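The two definitions above can be checked directly; a minimal sketch in Python, using the same (deliberately inaccurate) approximation of \pi from the example:

```python
import math

true_value = math.pi
approximation = 3.252603764690804   # many digits (precise), few correct (inaccurate)

absolute_error = approximation - true_value       # approximation - true value
relative_error = absolute_error / true_value      # absolute error / true value

print(absolute_error)   # about 0.111
print(relative_error)   # about 0.0353, i.e. ~3.5% relative error
```

Despite its 16 digits of precision, the relative error of roughly 3.5% confirms that only the leading significant digit is accurate.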

# 2. Data Error and Computational Error

• For a function f: \mathbb{R} \rightarrow \mathbb{R}, let an input x be the true value and f(x) the true result. In practice we only know the approximate value \hat{x}, not the true value x, and we can only evaluate \hat{f}, an approximation of f. Then the total error e is
\begin{align} e &= \hat{f}(\hat{x})- f(x) = (\hat{f}(\hat{x}) - f(\hat{x})) + (f(\hat{x}) - f(x)) \\ &=
\text{computational error} + \text{propagated data error}.
\end{align}
• The computational error is the difference between the true and approximate functions evaluated at the same input. The propagated data error is the difference between the true and approximate inputs passed through the same function.
• The computational error can be divided into the truncation error and the rounding error.
• Suppose that \sin(\pi/8) is approximated to 0.3750 via \sin(\pi/8) \approx \sin(3/8) \approx 3/8 = 0.3750. In the first step, the input changes (\pi/8 \rightarrow 3/8) while the function stays the same, so it represents the propagated data error. In the second step, the function changes from f(x) = \sin(x) to f(x) = x while the input stays the same, so it represents the computational error.
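The decomposition e = (\hat{f}(\hat{x}) - f(\hat{x})) + (f(\hat{x}) - f(x)) can be verified numerically for this example; a small sketch, with f = \sin, \hat{f}(t) = t, x = \pi/8, and \hat{x} = 3/8:

```python
import math

f = math.sin                 # true function
f_hat = lambda t: t          # approximate function: sin(t) ~ t (truncation)
x = math.pi / 8              # true input
x_hat = 3 / 8                # approximate input

computational_error = f_hat(x_hat) - f(x_hat)    # f_hat(x_hat) - f(x_hat)
propagated_data_error = f(x_hat) - f(x)          # f(x_hat) - f(x)
total_error = f_hat(x_hat) - f(x)                # e = f_hat(x_hat) - f(x)

# the two components sum exactly to the total error
print(computational_error, propagated_data_error, total_error)
```

Here the computational error is positive (since t > \sin t for t > 0) while the propagated data error is negative (since 3/8 < \pi/8), so the two partially cancel in the total.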

# 3. Forward Error and Backward Error

• Assume that y = f(x) is the true output for an input x, and \hat{y} is the computed approximation. Then \Delta y = \hat{y} - y is the forward error, and \Delta x = \hat{x} - x is the backward error, where \hat{x} is the input for which the true f produces the computed result, that is, f(\hat{x}) = \hat{y}.
• When |\Delta x| is small, the original problem x is well approximated by the nearby problem \hat{x}: the computed solution \hat{y} exactly solves that nearby problem, so it is considered good enough.
• As an approximation to y = f(x) = \cos(x) for x = 1, let \hat{y} = \hat{f}(x) = 1 - x^2/2! since \cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + \cdots. Then the forward and backward errors are as follows:
\begin{align} \Delta y &= \hat{y} - y = 1-\frac{x^2}{2!}-\cos (x) = 1 - \frac{1}{2} - \cos(1) \Rightarrow \text{forward error} \\
\Delta x &= \hat{x} - x = f^{-1}(\hat{y}) - x = \arccos \left(1 - \frac{1}{2}\right) - 1 \Rightarrow \text{backward error}
\end{align}
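The forward and backward errors above can be evaluated numerically; a minimal sketch following the same derivation, with \hat{x} recovered as \arccos(\hat{y}):

```python
import math

x = 1.0
y = math.cos(x)                 # true result
y_hat = 1 - x**2 / 2            # truncated series: cos(x) ~ 1 - x^2/2!

forward_error = y_hat - y       # dy = y_hat - y
x_hat = math.acos(y_hat)        # input for which the true cos gives y_hat
backward_error = x_hat - x      # dx = x_hat - x

print(forward_error)    # 0.5 - cos(1), about -0.0403
print(backward_error)   # pi/3 - 1, about 0.0472
```

Since \hat{y} = 1/2 and \arccos(1/2) = \pi/3, the backward error is exactly \pi/3 - 1: the truncated series gives the exact cosine of \pi/3 rather than of 1.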

# 4. Condition Number

• When f(x) changes only modestly as x changes, the problem is said to be insensitive, or well-conditioned.
• When f(x) changes much more than the corresponding change in x, the problem is said to be sensitive, or ill-conditioned.
• The condition number of a problem denotes the ratio of the relative change in the solution to the relative change in the input.
\begin{align} \text{condition number} &= \frac{| (f(\hat{x}) - f(x)) / f(x) |}{| (\hat{x} - x)/x |}=\frac{| (\hat{y} - y)/y |}{| (\hat{x} - x)/x |} = \frac{| \Delta y/y |}{| \Delta x/x |} \\ \\ &=
\frac{\text{relative forward error}}{\text{relative backward error}} = \text{amplification factor}
\end{align}
• The condition number, which we usually do not know exactly, varies with the input x.
• If \hat{x} is close enough to x, then \Delta x = \hat{x} - x is small and we can write \hat{x} = x + \Delta x.
\begin{align}
\text{absolute forward error} &= f(x+\Delta x)-f(x) \approx f^{\prime}(x) \Delta x \\ \\
\text{relative forward error} &= \frac{f(x+\Delta x)-f(x)}{f(x)} \approx \frac{f^{\prime}(x) \Delta x}{f(x)} \\ \\
\text{condition number} &\approx \left| \frac{ f^{\prime}(x) \Delta x /f(x)}{\Delta x/x} \right| = \left| \frac{x f^{\prime}(x)}{f(x)} \right|
\end{align}
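The closed form |x f'(x)/f(x)| makes the condition number easy to compute when the derivative is known; a short sketch contrasting a well-conditioned function with an ill-conditioned one (the example functions \sqrt{x} and \tan(x) are my choices, not from the text):

```python
import math

def condition_number(f, fprime, x):
    # cond = | x * f'(x) / f(x) |
    return abs(x * fprime(x) / f(x))

# sqrt is well-conditioned: cond = 1/2 for every x > 0
cond_sqrt = condition_number(math.sqrt, lambda t: 0.5 / math.sqrt(t), 2.0)

# tan is ill-conditioned near pi/2, where f(x) blows up
cond_tan = condition_number(math.tan, lambda t: 1 / math.cos(t)**2, 1.57)

print(cond_sqrt)   # 0.5
print(cond_tan)    # very large: small relative input changes are greatly amplified
```

This also illustrates the earlier remark that the condition number varies with the input: \tan is perfectly well-behaved away from \pi/2 but severely ill-conditioned near it.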

# Reference

• Michael T. Heath, *Scientific Computing: An Introductory Survey*, 2nd Edition, McGraw-Hill Higher Education.
