**Errors**

**1. Absolute Error and Relative Error**

- Absolute error `=` approximation `-` true value
- Relative error `=` absolute error `/` true value
- *Precision* is about the number of digits. *Accuracy* is about the number of correct significant digits.
- If `pi` is approximated by `3.252603764690804`, it is a highly precise number, but not accurate.
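The two definitions above can be sketched directly in code. This is a minimal example (the function names `abs_error` and `rel_error` are just illustrative):

```python
import math

def abs_error(approx, true):
    """Absolute error: approximation minus true value."""
    return approx - true

def rel_error(approx, true):
    """Relative error: absolute error divided by the true value."""
    return (approx - true) / true

# 3.252603764690804 carries many digits (high precision),
# but few of them are correct (low accuracy).
approx_pi = 3.252603764690804
print(abs_error(approx_pi, math.pi))  # ≈ 0.111
print(rel_error(approx_pi, math.pi))  # ≈ 0.0353
```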

**2. Data Error and Computational Error**

- For a function `f: \mathbb{R} \rightarrow \mathbb{R}`, let `x` be the true input and `f(x)` the true result. In practice we only know the approximate value `\hat{x}`, not the true value `x`. In addition, we can only calculate `\hat{f}`, the approximation of `f`. Then the total error `e` is

\begin{align}
e &= \hat{f}(\hat{x}) - f(x) \\
&= \left[ \hat{f}(\hat{x}) - f(\hat{x}) \right] + \left[ f(\hat{x}) - f(x) \right] \\
&= \text{computational error} + \text{propagated data error}.
\end{align}

- The computational error comes from the difference between the true and approximate functions at the same value. The propagated data error comes from the difference between the true and approximate values under the same function.
- The computational error can be divided into *truncation error* and *rounding error*.
- Suppose that `\sin(\pi/8)` is approximated by `0.3750` via `\sin(\pi/8) \approx \sin(3/8) \approx 3/8 = 0.3750`. In the first step, the values differ under the same function, so it represents the propagated data error. In the second step, the function changes from `f(x)=\sin(x)` to `f(x)=x` at the same value, so it represents the computational error.
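The decomposition for the `\sin(\pi/8)` example can be checked numerically. A minimal sketch, with the variable names chosen here for illustration:

```python
import math

x, x_hat = math.pi / 8, 3 / 8        # true and approximate inputs
f = math.sin                         # true function
f_hat = lambda t: t                  # approximate function: sin(t) ≈ t

total_error      = f_hat(x_hat) - f(x)      # 0.3750 - sin(pi/8)
propagated_error = f(x_hat) - f(x)          # sin(3/8) - sin(pi/8)
computational_err = f_hat(x_hat) - f(x_hat) # 3/8 - sin(3/8)

# The two pieces sum (up to rounding) to the total error.
assert math.isclose(total_error, propagated_error + computational_err)
```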

**3. Forward Error and Backward Error**

- Assume that `y=f(x)` is the solution of a problem `f` for an input `x`. Then `\Delta x=\hat{x}-x` is the *backward error*, where `\hat{x}` is the approximation of `x`, and `\Delta y=\hat{y}-y` is the *forward error*, where `\hat{y}=f(\hat{x})`.
- If `|\Delta x|` is very small, the original problem `x` is well estimated by the nearby problem `\hat{x}`; if `|\Delta y|` is very small, the approximate solution `\hat{y}` is good enough.
- As an approximation to `y=f(x)=\cos(x)` for `x=1`, let `\hat{y}=\hat{f}(x)=1-x^2/2!` since `\cos(x)=1-x^2/2!+x^4/4!-x^6/6!+\cdots`. Then the forward and backward errors are as follows:

\begin{align}
\Delta y &= \hat{y} - y = \left(1 - \frac{1}{2}\right) - \cos(1) \approx 0.5 - 0.5403 = -0.0403 \Rightarrow \text{forward error} \\
\Delta x &= \hat{x} - x = f^{-1}(\hat{y}) - x = \arccos \left(1 - \frac{1}{2}\right) - 1 \approx 1.0472 - 1 = 0.0472 \Rightarrow \text{backward error}
\end{align}
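The same computation in code, as a small sketch of the `\cos(1)` example (variable names are illustrative):

```python
import math

f = math.cos
x = 1.0
y = f(x)              # true solution: cos(1) ≈ 0.5403
y_hat = 1 - x**2 / 2  # truncated series: 0.5

forward_error = y_hat - y        # Δy ≈ -0.0403
x_hat = math.acos(y_hat)         # input that f maps exactly onto ŷ
backward_error = x_hat - x       # Δx ≈ 0.0472
```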

**4. Condition Number**

- When `f(x)` changes reasonably as `x` changes, it is said that the problem is *insensitive*, or *well-conditioned*.
- When `f(x)` changes much more than `x` changes, it is said that the problem is *sensitive*, or *ill-conditioned*.
- The *condition number* of a problem denotes the ratio of the relative change in the solution to the relative change in the input.

\begin{align}
\text{condition number} = \frac{\text{relative forward error}}{\text{relative backward error}} = \text{amplification factor}
\end{align}

- The condition number, which is usually unknown in practice, varies with the input.
- If `\hat{x}` is close enough to `x`, then `\Delta x` is small and `\hat{x}=x+\Delta x`, so

\begin{align}
\text{absolute forward error} &= f(x+\Delta x)-f(x) \approx f^{\prime}(x) \Delta x \\
\text{relative forward error} &= \frac{f(x+\Delta x)-f(x)}{f(x)} \approx \frac{f^{\prime}(x) \Delta x}{f(x)} \\
\text{condition number} &\approx \left| \frac{ f^{\prime}(x) \Delta x /f(x)}{\Delta x/x} \right| = \left| \frac{x f^{\prime}(x)}{f(x)} \right|
\end{align}
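The formula `|x f'(x)/f(x)|` can be estimated numerically when `f'` is not known in closed form. A minimal sketch using a central finite difference for the derivative (the helper name and step size `h` are assumptions):

```python
import math

def condition_number(f, x, h=1e-6):
    """Estimate |x f'(x) / f(x)| using a central finite difference."""
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return abs(x * fprime / f(x))

# tan is well-conditioned away from pi/2 but ill-conditioned near it:
print(condition_number(math.tan, 0.1))   # ≈ 1.007
print(condition_number(math.tan, 1.57))  # large: ill-conditioned near pi/2
```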


**Reference**

[1] Michael T. Heath, *Scientific Computing: An Introductory Survey*, 2nd Edition, McGraw-Hill Higher Education.