Solutions of Equations in One Variable

The most basic problem of numerical approximation is the root-finding problem: given a function $f$, find $x$ such that $f(x) = 0$.

The Bisection Method

The Bisection technique, also called the binary-search method, is based on the Intermediate Value Theorem.

First, suppose $f$ is a continuous function defined on the interval $[a, b]$ with $f(a)\cdot f(b) < 0$. The Intermediate Value Theorem tells us that there exists $p \in (a, b)$ with $f(p) = 0$.

To begin:

  • Set $a_1 = a$, $b_1 = b$, and $p_1 = \frac{a_1+b_1}{2}$.
  • If $f(p_1) = 0$, we are done.
  • If $f(p_1) \ne 0$, then $f(p_1)$ has the same sign as either $f(a_1)$ or $f(b_1)$:
    • If $f(p_1)$ and $f(a_1)$ have the same sign, the root lies in $(p_1, b_1)$; set $a_2 = p_1$ and $b_2 = b_1$.
    • If $f(p_1)$ and $f(b_1)$ have the same sign, the root lies in $(a_1, p_1)$; set $a_2 = a_1$ and $b_2 = p_1$.
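Repeating this halving until the bracket is shorter than a tolerance gives the full algorithm. A minimal Python sketch (illustrative only, not the SEq1 implementation; the name `bisection` is ours):

```python
import math

def bisection(f, a, b, tol=1e-7, max_iter=40):
    """Bisection sketch: assumes f is continuous and f(a)*f(b) < 0."""
    fa = f(a)
    p = a
    for _ in range(max_iter):
        p = a + (b - a) / 2            # midpoint of the current bracket
        fp = f(p)
        if fp == 0 or (b - a) / 2 < tol:
            return p                   # exact root, or bracket small enough
        if fa * fp > 0:                # f(p) has the sign of f(a): root in (p, b)
            a, fa = p, fp
        else:                          # root in (a, p)
            b = p
    return p

print(bisection(lambda x: math.cos(x) - x, 0.5, math.pi / 4))
```

Each iteration halves the bracket, so roughly $\log_2((b-a)/\text{TOL})$ iterations are needed.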
using NumericalAnalysis: SEq1
SEq1.Bisection(x->cos(x)-x, 0.5, π/4)
0.7390851547668461

You can also get the sequence of steps:

SEq1.Bisection(x->cos(x)-x, 0.5, π/4, output_seq=true)
22-element Array{Any,1}:
 [0.6426990816987241, 0.1577818772074514]
 [0.7140486225480862, 0.04166801647208607]
 [0.7497233929727672, -0.01784600602158104]
 [0.7318860077604268, 0.01202934872156003]
 [0.740804700366597, -0.0028789804030706634]
 [0.7363453540635119, 0.0045825511077876024]
 [0.7385750272150544, 0.000853623365055789]
 [0.7396898637908257, -0.0010122194823882769]
 [0.73913244550294, -7.918324114730702e-5]
 [0.7388537363589972, 0.0003872487736255037]
 ⋮
 [0.7390801875384507, 8.2771349952937e-6]
 [0.7390888971991989, -6.299454199609755e-6]
 [0.7390845423688248, 9.888474060693042e-7]
 [0.7390867197840119, -2.6553016447827815e-6]
 [0.7390856310764183, -8.332266813182443e-7]
 [0.7390850867226215, 7.781047195454249e-8]
 [0.7390853588995199, -3.777080772593422e-7]
 [0.7390852228110707, -1.4994879582452825e-7]
 [0.7390851547668461, -3.606916021414719e-8]


Fixed-Point Iteration

Given a root-finding problem $f(p) = 0$, we can define a function $g$ with a fixed point at $p$ in a number of ways, for example:

\[g(x) = x - f(x)\]

Conversely, if the function $g$ has a fixed point at $p$, then the function defined by:

\[f(x) = x - g(x)\]

has a zero at $p$.
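The iteration itself just applies $g$ repeatedly until successive iterates agree. A minimal Python sketch (illustrative; `fixed_point_iter` is our own name):

```python
import math

def fixed_point_iter(g, p0, tol=1e-7, max_iter=100):
    """Fixed-point iteration: p_n = g(p_{n-1}) until the step falls below tol."""
    p = p0
    for _ in range(max_iter):
        p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# cos has a fixed point where cos(p) = p, i.e. a root of f(x) = cos(x) - x
print(fixed_point_iter(math.cos, 0.74))
```

Convergence is only linear here, which is why this takes many more iterations than Newton's method on the same problem.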

SEq1.fixed_point(x -> cos(x), 0.74)
0.739085164848387
SEq1.fixed_point(x -> cos(x), 0.74, output_seq=true)
26-element Array{Any,1}:
 0.74
 0.7384685587295879
 0.7395003246924028
 0.7388053915465002
 0.7392735416470748
 0.7389582059118497
 0.7391706270197427
 0.7390275408589867
 0.7391239268933188
 0.7390590007707607
 ⋮
 0.739084025431016
 0.7390858794314327
 0.7390846305546976
 0.7390854718132017
 0.7390849051314047
 0.7390852868551031
 0.7390850297214385
 0.7390852029297729
 0.7390850862545575


Newton's Method

Suppose that $f ∈ C^2[a, b]$. Let $p₀ ∈ [a, b]$ be an approximation to p such that $f'(p₀) ≠ 0$ and $|p − p₀|$ is “small.” Consider the first Taylor polynomial for $f(x)$ expanded about $p₀$ and evaluated at $x = p$.

\[f(p) = f(p₀)+(p - p₀)f'(p₀)+\frac{(p-p₀)^2}{2}f''(ξ(p₀))\]

where $ξ(p₀)$ lies between $p$ and $p₀$, and $f(p) = 0$. Newton's method is derived by assuming that, since $|p − p₀|$ is small, the term involving $(p-p₀)^2$ is much smaller still. So:

\[0 \approx f(p₀)+(p-p₀)f'(p₀)\]

So:

\[p \approx p₀ - \frac{f(p₀)}{f'(p₀)}\]

This sets the stage for Newton's method, which starts with an initial approximation $p₀$ and generates the sequence $\{ p_n \}_{n=0}^{∞}$ by

\[p_n = p_{n-1} - \frac{f(p_{n-1})}{f'(p_{n-1})}\]
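The iteration above can be sketched in Python as follows (illustrative only; note that we pass $f'$ explicitly, whereas SEq1.Newton takes only $f$):

```python
import math

def newton(f, fprime, p0, tol=1e-7, max_iter=20):
    """Newton's iteration: p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1})."""
    p = p0
    for _ in range(max_iter):
        p_next = p - f(p) / fprime(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# f(x) = cos(x) - x, so f'(x) = -sin(x) - 1
root = newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1, 0.74)
print(root)
```

Quadratic convergence is visible in the package output above: three iterates suffice from $p₀ = 0.74$.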
SEq1.Newton(x->cos(x) - x, 0.74)
0.7390851332151607
SEq1.Newton(x->cos(x) - x, 0.74, output_seq=true)
3-element Array{Any,1}:
 0.74
 0.7390853178477991
 0.7390851332151682


The Secant Method

To circumvent the problem of the derivative evaluation in Newton’s method, we introduce a slight variation. By definition,

\[f'(p_{n-1}) = \lim_{x \to p_{n-1}} \frac{f(x) - f(p_{n-1})}{x-p_{n-1}}\]

If $p_{n-2}$ is close to $p_{n-1}$, then

\[f'(p_{n-1}) \approx \frac{f(p_{n-2}) - f(p_{n-1})}{p_{n-2} - p_{n-1}}\]

So

\[p_{n} = p_{n-1} - \frac{f(p_{n-1})(p_{n-1} - p_{n-2})}{f(p_{n-1})- f(p_{n-2})}\]
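A minimal Python sketch of this recurrence (illustrative; `secant` is our own name):

```python
import math

def secant(f, p0, p1, tol=1e-7, max_iter=40):
    """Secant iteration: Newton's step with f' replaced by a difference quotient."""
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, f0 = p1, f1            # shift the pair forward
        p1, f1 = p2, f(p2)
    return p1

print(secant(lambda x: math.cos(x) - x, 0.5, math.pi / 4))
```

Note that only one new function evaluation is needed per iteration, since $f(p_{n-1})$ is reused.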
SEq1.Secant(x -> cos(x) - x, 0.5, π/4)
0.7390851332150645
SEq1.Secant(x -> cos(x) - x, 0.5, π/4, output_seq=true)
6-element Array{Any,1}:
 0.5
 0.7853981633974483
 0.7363841388365822
 0.7390581392138897
 0.7390851493372764
 0.7390851332150645


False position

The method of False Position (also called Regula Falsi) generates approximations in the same manner as the Secant method, but it includes a test to ensure that the root is always bracketed between successive iterations.

First choose initial approximations $p₀$ and $p₁$ with $f(p₀)·f(p₁) < 0$. The approximation $p₂$ is chosen in the same manner as in the Secant method, as the x-intercept of the line joining $(p₀, f(p₀))$ and $(p₁, f(p₁))$. To decide which secant line to use to compute $p₃$, consider $f(p₂) · f(p₁)$, or $sgn[f(p₂)]·sgn[f(p₁)]$.

  • If $sgn[f(p₂)]·sgn[f(p₁)] < 0$, then $p₁$ and $p₂$ bracket a root. Choose $p₃$ as the x-intercept of the line joining $(p₁,f(p₁))$ and $(p₂,f(p₂))$.

  • If not, choose $p₃$ as the x-intercept of the line joining $(p₀, f(p₀))$ and $(p₂, f(p₂))$, and then interchange the indices on $p₀$ and $p₁$.
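The two cases above can be sketched in Python as (illustrative; `false_position` is our own name):

```python
import math

def false_position(f, p0, p1, tol=1e-7, max_iter=40):
    """Regula Falsi: a secant step, but the root stays bracketed by (p0, p1)."""
    f0, f1 = f(p0), f(p1)      # assumes f0 * f1 < 0
    for _ in range(max_iter):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)   # secant step
        f2 = f(p2)
        if abs(p2 - p1) < tol:
            return p2
        if f2 * f1 < 0:        # p1 and p2 bracket the root: drop p0
            p0, f0 = p1, f1
        p1, f1 = p2, f2        # otherwise keep p0 (the "interchange" case)
    return p1

print(false_position(lambda x: math.cos(x) - x, 0.5, math.pi / 4))
```

The extra sign test is the only difference from the secant sketch; it trades a little speed for the guarantee that the root never escapes the bracket.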

SEq1.FalsePos(x->cos(x)-x, 0.5, π/4)
0.7390851331883289
SEq1.FalsePos(x->cos(x)-x, 0.5, π/4, output_seq=true)
7-element Array{Any,1}:
 0.5
 0.7853981633974483
 0.7363841388365822
 0.7390581392138897
 0.7390848638147098
 0.7390851305265789
 0.7390851331883289


Modified Newton's Method

Modified Newton's method applies Newton's method to $\mu(x) = f(x)/f'(x)$, which has a simple zero at $p$ even when $f$ has a root of higher multiplicity there. Both Newton and ModifiedNewton are rapidly convergent to the actual zero. The iteration is

\[p_n = p_{n-1} - \frac{f(p_{n-1})f'(p_{n-1})}{f'(p_{n-1})^2 - f(p_{n-1})f''(p_{n-1})}\]
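A minimal Python sketch of this iteration (illustrative; it takes $f'$ and $f''$ explicitly, which the SEq1 version may not):

```python
import math

def modified_newton(f, fp, fpp, p0, tol=1e-7, max_iter=20):
    """Modified Newton: p_n = p - f*f' / (f'^2 - f*f'')."""
    p = p0
    for _ in range(max_iter):
        fx, fpx, fppx = f(p), fp(p), fpp(p)
        p_next = p - fx * fpx / (fpx * fpx - fx * fppx)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# f(x) = cos(x) - x, f'(x) = -sin(x) - 1, f''(x) = -cos(x)
root = modified_newton(lambda x: math.cos(x) - x,
                       lambda x: -math.sin(x) - 1,
                       lambda x: -math.cos(x),
                       0.5)
print(root)
```

From $p₀ = 0.5$ this reproduces the iterate sequence shown below (0.72166…, 0.73902…, and so on).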
SEq1.ModifiedNewton(x -> cos(x) - x, 0.5)
0.7390851332151606
SEq1.ModifiedNewton(x -> cos(x) - x, 0.5, output_seq=true)
5-element Array{Any,1}:
 0.5
 0.7216635041043579
 0.7390161853007075
 0.7390851321653727
 0.7390851332151606


Müller's Method

Müller's method uses three initial approximations and takes the next approximation to be a zero of the parabola through the three points $(p_i, f(p_i))$. Because the formula involves a square root that may be of a negative number, the method can find complex zeros.

See a numerical analysis text for more about Müller's Method.
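A minimal Python sketch (illustrative; `muller` is our own name). The key point is the complex square root in the discriminant, which is what lets the iterates leave the real line; the larger-magnitude denominator is chosen for numerical stability:

```python
import cmath

def muller(f, p0, p1, p2, tol=1e-7, max_iter=30):
    """Muller's method: fit a parabola through three points, step to its nearer zero."""
    for _ in range(max_iter):
        h1, h2 = p1 - p0, p2 - p1
        d1 = (f(p1) - f(p0)) / h1
        d2 = (f(p2) - f(p1)) / h2
        d = (d2 - d1) / (h2 + h1)
        b = d2 + h2 * d
        disc = cmath.sqrt(b * b - 4 * f(p2) * d)   # complex sqrt: iterates may go complex
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        h = -2 * f(p2) / denom                     # step toward the nearer parabola zero
        p3 = p2 + h
        if abs(h) < tol:
            return p3
        p0, p1, p2 = p1, p2, p3
    return p2

root = muller(lambda x: x**4 - 3*x**3 + x**2 + x + 1, 0.5, -0.5, 0.0)
print(root)
```

Which complex zero the method finds depends on the three starting values; the run below converges to the same zero as the SEq1.Muller example.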

SEq1.Muller(x->x^4-3x^3+x^2+x+1, 0.5, -0.5, 0.0)
-0.33909283776171 - 0.44663009999751735im
SEq1.Muller(x->x^4-3x^3+x^2+x+1, 0.5, -0.5, 0.0, output_seq=true)
7-element Array{Any,1}:
 -0.09999999999999999 - 0.8888194417315588im
  -0.4921457099193255 - 0.44703069998642414im
 -0.35222571260053315 - 0.4841324441587318im
 -0.34022857046791327 - 0.44303562738012897im
  -0.3390946788151066 - 0.44665648899530574im
  -0.3390928333840288 - 0.44663010055724217im
    -0.33909283776171 - 0.44663009999751735im

Methods

NumericalAnalysis.SEq1 — Module
SEq1

Solutions of Equations in One Variable. Here are the functions to find roots:

  1. Bisection
  2. Fixed-Point Iteration
  3. Newton's Method
  4. The Secant Method
  5. The False Position Method
  6. Müller’s Method
NumericalAnalysis.SEq1.Bisection — Method
Bisection(f::Function, a::Real, b::Real; TOL::Float64=0.0000001, N::Int=40, output_seq::Bool=false)

Use bisection to find a solution to

$f(x) = 0$

Provide the function, an interval $[a, b]$, a tolerance TOL, and a maximum number of iterations N.

Returns an approximate solution $p$, or -1 if no solution is found within N iterations.

Set output_seq=true to output the sequence of iterates.

NumericalAnalysis.SEq1.FalsePos — Method
FalsePos(f::Function, p₀::Real, p₁::Real; TOL::Float64=0.00000001, N::Int=40, output_seq::Bool=false)

Use the false position method to find a root. Provide a function and an interval $[a, b]$; the remaining arguments are optional.

NumericalAnalysis.SEq1.ModifiedNewton — Method
ModifiedNewton(f::Function, p₀::Real; TOL::Float64=0.00000001, N::Int=20, output_seq::Bool=false)

Both Newton's and Modified Newton's methods are rapidly convergent to the actual zero.

NumericalAnalysis.SEq1.Newton — Method
Newton(f::Function, p₀::Real; TOL::Float64=0.0000001, N::Int=20, output_seq=false)

Use Newton's method to find a root.

Provide a function and an initial approximation p₀; the remaining arguments are optional.

NumericalAnalysis.SEq1.Secant — Method
Secant(f::Function, p₀::Real, p₁::Real; …)

Use the Secant method to find a root. Provide a function and initial approximations p₀, p₁ (p₀ < p₁); the remaining arguments are optional.

NumericalAnalysis.SEq1.fixed_point — Method
fixed_point(f::Function, p₀::Real; TOL::Float64=0.0000001, N::Int=30, output_seq::Bool=false)

Use fixed-point iteration to find a root.

The number p is a fixed point of a given function g if $g(p) = p$.

Input a function, an initial p₀, and optional arguments: tolerance TOL, maximum number of iterations N, and output_seq.

Outputs a root p or the sequence of iterates.
