MATH256 2020–21 Numerical Methods (5 Parts)
Contents

3 Root finding algorithms
  3.1 Errors in root finding
      3.1.1 Example
  3.2 Testing for roots
  3.3 The bisector method
      3.3.1 Example
      3.3.2 General method
      3.3.3 Convergence analysis
      3.3.4 Examples
  3.4 The method of false position (regula falsi)
      3.4.1 Examples
  3.5 The secant method
  3.6 The Newton–Raphson method
      3.6.1 Examples
      3.6.2 Taylor series derivation
  3.7 Fixed point iteration
      3.7.1 Example
  3.8 Constructing fixed point schemes
      3.8.1 Example
      3.8.2 Example: Derivation of Newton–Raphson as a fixed point scheme
  3.9 The Newton–Raphson method in two dimensions
      3.9.1 Example
  3.10 The Newton–Raphson method in n dimensions
  3.11 Example
  3.12 Final remarks
* Download and save these files before reading on
§3.3 bisector.mw bisector_V1.mw
§3.4 false_position.mw false_position_V1.mw
§3.5 secant.mw secant_V1.mw
§3.6 newton_1d.mw newton_1d_V1.mw
§3.7 fixed_point.mw fixed_point_V1.mw
§3.9–3.10 newton_2d.mw, newton_3d.mw
[V1 files contain more graphics]
3 Root finding algorithms
In this chapter, we consider the problem of solving the equation
f(x) = 0,
for arbitrary functions f(x). We can solve the equation H(x) = k by defining G(x) = H(x) − k and solving
G(x) = 0, so the above notation is general. Many problems in applied mathematics can be reduced to this
form, but there are relatively few cases in which analytic solutions are available.
3.1 Errors in root finding
Solutions to nonlinear equations are often irrational, so we cannot compute them exactly, no matter how
many significant digits we use. There are two types of error that we may try to bound when calculating an
approximate root.
(i) Error in f(x): choose ε > 0 and search for c̃ such that |f(c̃)| < ε.
(ii) Error in x: choose δ > 0 and search for c̃ such that |c̃ − c| < δ.
Here x = c is the exact location of the root. Note that (i) does not guarantee that there is a root close to
c̃, or even that a root exists (e.g. if f(x) = x² + ε/2 then |f(x)| < ε for |x| < ε/2 but there is no (real)
root).
3.1.1 Example
Suppose we wish to find the real root of
f(x) = 4e^(−x) − x.
Since
f(0) = 4   and   f(2) = 4e^(−2) − 2 ≈ −1.46,
we know that this lies in the interval (0, 2). In addition,
f′(x) = −4e^(−x) − 1 < 0,
so there are no turning points; hence |f(x)| < 4 for x ∈ (0, 2). An absolute error in x or f(x) is suitable
for this problem.
Remarks
• Absolute errors are acceptable for root-finding in cases where the interval to be searched and/or the
range of function values within that interval are ‘reasonable’.
• For wide search intervals we can use a relative error in x. If c ∈ [a, b], we could choose
δ = (b − a)/M,
for a suitably large value M, and find c̃ such that
|c − c̃| < δ.
3.2 Testing for roots
Let f(x) be continuous for a ≤ x ≤ b.
• If f(a) and f(b) are of different sign (one positive and one negative) then there is a point c between
a and b such that f(c) = 0; see figure 3.1(a).
• It may be that there is more than one root, as in figure 3.1(b).
• We can’t say anything about the existence of roots if f(a) and f(b) are both positive, or both
negative; there may not be a root, but there may be one, or more than one, as shown in figure 3.1(c).
Figure 3.1: (a) If f(a) > 0 and f(b) < 0, or f(a) < 0 and f(b) > 0, then f(x) has a root between a and b. (b) There may be
more than one root. (c) There may be one or more roots between a and b if f(a)f(b) > 0.
In general, if f is a continuous function, a sufficient but not necessary condition for the existence of a
root between a and b is that
f(a)f(b) < 0.
3.3 The bisector method
3.3.1 Example
Consider again the function f(x) = 4e^(−x) − x, which has a root between 0 and 2, because f(0) = 4 and
f(2) ≈ −1.46.
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.1>
Now check the middle of the search interval: we have
f(1) = 4e^(−1) − 1 ≈ 0.47 > 0,
so the root must lie between 1 and 2. Now we repeat the process.
Step             1       2       3       4       5
a                0       1       1       1       1.125
h = (a + b)/2    1       1.5     1.25    1.125   1.1875
b                2       2       1.5     1.25    1.25
f(a)             4       0.47    0.47    0.47    0.17
f(h)             0.47    −0.61   −0.10   0.17    0.032
f(b)             −1.46   −1.46   −0.61   −0.10   −0.10
After five steps, we see that the root lies between 1.1875 and 1.25. We choose our estimate to be
c̃ = (1.1875 + 1.25)/2 = 1.21875,
so a bound on the maximum possible error in x is
1.25 − 1.21875 = 1.21875 − 1.1875 = 0.03125,
i.e. half the size of the remaining search interval. The actual solution is c = 1.2022 to 5 s.f. so the absolute
error is approximately 0.0166.
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.1>
Figure 3.2: The bisector method. Since f(h)f(b) < 0, the interval (h, b) must contain a root. The interval (a, h) is discarded.
3.3.2 General method
Suppose that f(x) is continuous for x ∈ [a, b], and f(a)f(b) < 0 so that there is a root between a and b.
The bisector method proceeds as follows. Let h be the midpoint of [a, b], i.e.
h = (a + b)/2.
• If f(a)f(h) < 0, the interval (a, h) contains a root. Form a new search interval by replacing b with h
and repeat if necessary.
• If f(a)f(h) > 0, then f(b)f(h) < 0 so (h, b) contains a root. Form a new search interval by replacing
a with h and repeat if necessary.
• If f(h) = 0 (unlikely, but it can happen) then h is the best possible approximation to the root, so the
search is complete.
• At the end, the maximum possible error in x is minimised by using the midpoint of the remaining
search interval as the approximate root.
By bisecting enough times, we can achieve any required degree of accuracy.
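The loop below is a minimal Maple sketch of this procedure (the bisector.mw worksheet is more elaborate; the procedure name, tolerance handling and test call here are illustrative only). It stops when the error bound, half the interval width, drops below a target δ, and for brevity it ignores the unlikely case f(h) = 0.

    bisect := proc(f, a0, b0, delta)
      local a, b, h;
      a := evalf(a0);  b := evalf(b0);        # assumes f(a0)*f(b0) < 0
      while (b - a)/2 > delta do
        h := (a + b)/2;                       # midpoint of current interval
        if f(a)*f(h) < 0 then
          b := h;                             # root in (a, h); discard (h, b)
        else
          a := h;                             # root in (h, b); discard (a, h)
        end if;
      end do;
      return (a + b)/2;                       # midpoint minimises the error bound
    end proc:

    bisect(x -> 4*exp(-x) - x, 0, 2, 0.03125);   # returns 1.21875, as in §3.3.1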
3.3.3 Convergence analysis
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.2>
Let Lⱼ be the length of the search interval remaining after j bisections. Then L₀ = b − a, and since each
bisection halves the remaining search range, it follows that
L₁ = L₀/2,   L₂ = L₁/2 = L₀/4,
and in general,
Lⱼ = L₀/2^j = (b − a)/2^j.
If we bisect n times and take the final estimate cₙ to be the midpoint of the remaining interval, then
|cₙ − c| < Lₙ/2 = (b − a)/2^(n+1).
Given δ as a target for the error bound, choose n to be the smallest integer such that
(b − a)/2^(n+1) ≤ δ,   i.e.   2^(n+1) ≥ (b − a)/δ.
Taking logarithms we obtain
(n + 1) ln 2 ≥ ln(b − a) − ln δ,
meaning the number of steps required is
n = ⌈(ln(b − a) − ln δ)/ln 2⌉ − 1,
where ⌈·⌉ means ‘round up’ (ceil [short for ‘ceiling’] in Maple).
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.2>
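As a check, consider the example of §3.3.1: there b − a = 2, and five bisections left an error bound of 0.03125. Taking δ = 0.03125 = 2^(−5) as the target, the formula gives
n = ⌈(ln 2 − ln 2^(−5))/ln 2⌉ − 1 = ⌈ln 2⁶/ln 2⌉ − 1 = ⌈6⌉ − 1 = 5,
in agreement with the number of steps used there.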
Notes
• Since the error bound on each iterate is proportional to the previous error bound, we say that the
bisector method has linear convergence.
• The rate of convergence is the same for all functions.
• The bisector method is robust; nothing can go wrong if the initial conditions (continuity of f(x) and
f(a)f(b) < 0) are satisfied.
• The bisector method can easily be adapted to work with errors in |f(x)|, but in that case we cannot
predict the number of steps required.
3.3.4 Examples
See bisector.mw.
3.4 The method of false position (regula falsi)
This strangely named technique is closely related to the bisector method. Rather than dividing the interval
in half at each iteration, we use the point where the straight line from (a, f(a)) to (b, f(b)) intersects the x
axis (see figure 3.3) as the next estimate.
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.3>
The gradient of this line is
Δy/Δx = (f(b) − f(a))/(b − a),
so its equation can be written in the form
y = [(f(b) − f(a))/(b − a)](x − a) + f(a).
Figure 3.3: The method of false position.
Setting x = a or x = b yields y = f(a) or y = f(b), respectively, so we can see that this is correct. Finally,
setting y = 0 and x = h yields
[(h − a)/(b − a)][f(b) − f(a)] = −f(a),
i.e.
(h − a)[f(b) − f(a)] = (a − b)f(a),
and so
h = a + f(a)(a − b)/(f(b) − f(a))
  = a(f(b) − f(a))/(f(b) − f(a)) + f(a)(a − b)/(f(b) − f(a))
  = (a f(b) − b f(a))/(f(b) − f(a)).
Once h is obtained, we check the sign of f(a)f(h), discard either (a, h) or (h, b) as appropriate and repeat
the iteration.
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.3>
Unlike bisection, the behaviour of this method depends on the function f(x). It may converge more quickly,
but there is no way to predict how many steps will be needed. Furthermore, the convergence is slow in
cases where one of the interval ends remains fixed for several steps, and the method may fail to converge
when this happens.
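A minimal Maple sketch of one possible implementation (false_position.mw is fuller; the stopping rule here is illustrative). Because the bracketing interval need not shrink to zero width, the loop is stopped by a tolerance ε on |f(h)| together with a cap on the number of steps.

    false_pos := proc(f, a0, b0, eps, nmax)
      local a, b, h, j;
      a := evalf(a0);  b := evalf(b0);         # assumes f(a0)*f(b0) < 0
      for j from 1 to nmax do
        h := (a*f(b) - b*f(a))/(f(b) - f(a));  # x-intercept of the chord
        if abs(f(h)) < eps then return h end if;
        if f(a)*f(h) < 0 then b := h else a := h end if;
      end do;
      return h;                                # may not have converged
    end proc:

    false_pos(x -> 4*exp(-x) - x, 0, 2, 1.0e-8, 50);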
3.4.1 Examples
See false_position.mw.
Figure 3.4: Risk with the secant method; the root lies between cⱼ₋₁ and cⱼ (a), but not between cⱼ and cⱼ₊₁ (b).
3.5 The secant method
Suppose that f(c) = 0, and we have two initial approximations for c, c₀ and c₁, say. Construct a sequence
of iterates by defining
cⱼ₊₁ = [cⱼ₋₁f(cⱼ) − cⱼf(cⱼ₋₁)]/[f(cⱼ) − f(cⱼ₋₁)],   j = 1, 2, . . .
This is the same as the equation for false position, with cⱼ₋₁ and cⱼ replacing a and b. However, there is a
crucial difference: at each step (after the first), the oldest estimate is discarded, regardless of the sign of
f(cⱼ)f(cⱼ₊₁). This means situations in which the iterates become fixed are unlikely, but there is a trade-off.
There is no guarantee that the root lies between cⱼ and cⱼ₊₁ (figure 3.4). Consequently, the sequence of
iterates may diverge if the initial estimates are poor.
For good initial estimates it can be shown that
|cⱼ₊₁ − c| = K|cⱼ − c|^β,
where β = (1 + √5)/2 (the ‘golden ratio’) and K is a constant that depends on the function f. This is
better than linear convergence (which corresponds to β = 1 with |K| < 1), but not as rapid as quadratic
convergence (β = 2). See secant.mw for examples.
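A minimal sketch of the iteration (secant.mw contains fuller experiments; the tolerance and iteration cap are illustrative). Unlike false position, no sign check is made: the oldest estimate is simply dropped.

    secant := proc(f, c0, c1, eps, nmax)
      local cold, cnew, cnext, j;
      cold := evalf(c0);  cnew := evalf(c1);
      for j from 1 to nmax do
        cnext := (cold*f(cnew) - cnew*f(cold))/(f(cnew) - f(cold));
        cold := cnew;  cnew := cnext;          # discard the oldest estimate
        if abs(f(cnew)) < eps then return cnew end if;
      end do;
      return cnew;                             # may have diverged
    end proc:

    secant(x -> 4*exp(-x) - x, 0.0, 2.0, 1.0e-10, 30);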
3.6 The Newton–Raphson method
Consider the secant method with cⱼ₋₁ close to cⱼ (which will happen if the method converges).
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.4>
Writing cⱼ₋₁ = cⱼ + h, the iteration formula becomes
cⱼ₊₁ = [(cⱼ + h)f(cⱼ) − cⱼf(cⱼ + h)]/[f(cⱼ) − f(cⱼ + h)]
     = [cⱼf(cⱼ) − cⱼf(cⱼ + h)]/[f(cⱼ) − f(cⱼ + h)] + hf(cⱼ)/[f(cⱼ) − f(cⱼ + h)]
     = cⱼ − f(cⱼ) · h/[f(cⱼ + h) − f(cⱼ)].
Figure 3.5: The secant method in the limit cⱼ₋₁ → cⱼ (i.e. h → 0). The line through (cⱼ, f(cⱼ)) and (cⱼ₊₁, f(cⱼ₊₁))
approaches the tangent to f(x) at x = cⱼ.
In the limit as h → 0, we obtain
cⱼ₊₁ = cⱼ − f(cⱼ)/f′(cⱼ).
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.4>
This is the iteration scheme for the Newton–Raphson method. There is a geometrical interpretation for this:
as h → 0, the line used to construct cⱼ₊₁ approaches the tangent to f(x) at x = cⱼ (figure 3.5).
3.6.1 Examples
See newton_1d.mw. If c₀ is a good estimate for a root of f then the Newton–Raphson method converges
very rapidly.
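A minimal sketch of the scheme (newton_1d.mw does something similar; the tolerance, iteration cap and test function are illustrative). Maple’s D operator supplies f′ automatically.

    newton := proc(f, c0, eps, nmax)
      local fp, c, j;
      fp := D(f);                              # Maple differentiates f for us
      c := evalf(c0);
      for j from 1 to nmax do
        c := c - f(c)/fp(c);                   # Newton-Raphson update
        if abs(f(c)) < eps then return c end if;
      end do;
      return c;                                # may not have converged
    end proc:

    newton(x -> 4*exp(-x) - x, 1.0, 1.0e-10, 20);   # ≈ 1.2022, cf. §3.3.1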
Notes
• Depending on the accuracy of the initial estimate, the sequence of iterates generated by the Newton–
Raphson method may converge, diverge toward infinity (rare) or enter a never ending cycle (very
rare).
• The Newton–Raphson method requires only one initial estimate, whereas the secant method requires
two, and the bisector and false position methods require an interval that is known to contain a root.
• Finding f′(x) is not practical in all cases (e.g. f(x) = det M(x) where M(x) is a 50 × 50 matrix, in
which every element is a function of x).
• Approximating the derivative using f′(x) ≈ [f(x + h) − f(x)]/h simply retrieves the secant method.
More generally, using a finite difference approximation for the derivative will reduce accuracy if h is
large, but can cause cancellation if h is small.
3.6.2 Taylor series derivation
The experiments in newton_1d.mw suggest that, when the iteration converges, each error in the Newton–
Raphson method is (roughly) proportional to the square of the previous error, i.e.
|cⱼ₊₁ − c| ≈ k|cⱼ − c|²,
for some constant k, which depends on f. In other words, the scheme converges quadratically for good
initial estimates. However, as yet we have no theoretical basis for this claim. We can rectify this using a
different derivation.
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.5>
Consider the Taylor series
f(c) = f(x) + (c − x)f′(x) + [(c − x)²/2!]f″(x) + [(c − x)³/3!]f‴(x) + · · ·
If f(c) = 0 we can rearrange this to obtain
−f(x)/f′(x) = c − x + [(c − x)²/2!][f″(x)/f′(x)] + [(c − x)³/3!][f‴(x)/f′(x)] + · · ·
and hence if x ≈ c, then
c = x − f(x)/f′(x) + O((c − x)²).
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.5>
The first two terms on the right-hand side produce a new estimate for c with an error proportional to (c − x)².
This shows that the Newton–Raphson method has quadratic convergence. The constant multiplying (c − x)²
is now identified as f″(x)/(2f′(x)) (it’s not usually worth trying to calculate this).
Remark Note the slight ‘trick’ here: we write f(c) as an expansion about x (not the other way around)
so that we don’t get f′(c), f″(c), . . . in the expansion; we can’t compute these without knowing c.
3.7 Fixed point iteration
The problem of root finding can be recast as a fixed point iteration. We try to find the fixed points of
the function φ(x), at which
φ(x) = x.
Suppose we take an initial estimate c₀, and define a sequence of iterates via
c₁ = φ(c₀),   c₂ = φ(c₁) = φ(φ(c₀)),   . . .
If this sequence converges, it must converge towards a fixed point, but it may not converge at all; predicting
its behaviour for a general starting point c₀ is very difficult. However, if the initial estimate is good (i.e.
close to a fixed point), we can analyse its rate of convergence.
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.6>
We start with the Taylor series for φ(x) about the fixed point x = c; thus
φ(x) = φ(c) + (x − c)φ′(c) + [(x − c)²/2!]φ″(c) + [(x − c)³/3!]φ‴(c) + · · ·
Setting x = cⱼ, and using the fact that φ(c) = c, this becomes
φ(cⱼ) = c + (cⱼ − c)φ′(c) + [(cⱼ − c)²/2!]φ″(c) + [(cⱼ − c)³/3!]φ‴(c) + · · ·
By definition φ(cⱼ) = cⱼ₊₁, so we can rearrange and take the modulus to obtain
|cⱼ₊₁ − c| = |cⱼ − c| × |φ′(c) + [(cⱼ − c)/2!]φ″(c) + [(cⱼ − c)²/3!]φ‴(c) + · · ·|.
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.6>
This is the key equation for all fixed point methods; it relates the error in cⱼ₊₁ to the error in cⱼ.
Assuming the initial estimate is good, we need only be concerned with the lowest power of |cⱼ − c| on the
right-hand side (the ‘leading-order’ term). If |φ′(c)| < 1 the iteration converges at a linear rate, because
|cⱼ₊₁ − c| ≈ |cⱼ − c| |φ′(c)|.
If φ′(c) = 0 then
|cⱼ₊₁ − c| ≈ |cⱼ − c|² |φ″(c)|/2,
so that we have quadratic convergence. Similarly, if φ′(c) = φ″(c) = 0 then the convergence will be (at
least) cubic, etc.
3.7.1 Example
Fixed point iterations can be used to approximate radicals (square roots, cube roots, etc.) using only
elementary arithmetic operations. Consider the function
φ(x) = (x/8)(10 − x⁴).
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.7>
Since
φ(2^(1/4)) = (2^(1/4)/8)(10 − 2) = 2^(1/4),
there is a fixed point at x = 2^(1/4). Furthermore,
φ′(x) = (1/8)(10 − 5x⁴),
so φ′(2^(1/4)) = 0. Finally,
φ″(x) = −5x³/2,
so φ″(2^(1/4)) = −5 × 2^(−1/4) ≠ 0. Therefore, if c₀ is a good estimate, then
|cⱼ₊₁ − c| ≈ |cⱼ − c|² |φ″(2^(1/4))|/2 ≈ |cⱼ − c|² × 5 × 2^(−5/4),
and the iteration will converge quadratically toward 2^(1/4). Experimenting in Maple shows that this is indeed
the case (fixed_point.mw).
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.7>
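That experiment is easy to reproduce; a minimal sketch follows (the starting value c₀ = 1.2 and the output format are illustrative). The error column should roughly square at each step.

    Digits := 20:
    phi := x -> x*(10 - x^4)/8:
    c := 1.2:                       # good initial estimate of 2^(1/4) ≈ 1.1892
    for j from 1 to 5 do
      c := phi(c);
      printf("c[%d] = %.15f   error = %.2e\n", j, c, abs(c - evalf(2^(1/4))));
    end do: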
3.8 Constructing fixed point schemes
Many fixed point schemes can be constructed by starting with φ(x) = x and adding extra terms to the
right-hand side. These terms must vanish at x = c so that φ(c) = c.
3.8.1 Example
Suppose we wish to find a fixed point scheme to approximate c = √2.
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.8>
We write
φ(x) = x + u(x)(x² − 2),
where u(x) is a function that we can choose. Clearly φ(c) = c, so we do have a fixed point. Now
φ′(x) = 1 + u′(x)(x² − 2) + 2xu(x),
so φ′(c) = 1 + 2cu(c). Consequently, if we choose u(x) = −1/(2x) then we will have φ′(c) = 0 and the
scheme converges quadratically. One possible iteration function is therefore
φ(x) = x − (x² − 2)/(2x) = (x² + 2)/(2x).
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.8>
Remarks
• We would obtain the same iteration scheme by applying the Newton–Raphson method to f(x) = x² − 2.
• We could also choose u(x) = −1/x³ because then
1 + 2xu(x) = 1 − 2/x²,
which vanishes at x = c. A second possible iteration function is therefore
φ₂(x) = x − (x² − 2)/x³.
• However, we cannot choose u(x) = −1/(2c) (a constant) because this involves √2, which is the value
we are trying to find.
Experimenting in Maple shows that both φ(x) and φ₂(x) do indeed produce iteration schemes with quadratic
convergence (fixed_point.mw).
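That experiment might look like the following sketch (starting values and printing are illustrative):

    Digits := 20:
    phi1 := x -> (x^2 + 2)/(2*x):    # u(x) = -1/(2x): the Newton-Raphson scheme
    phi2 := x -> x - (x^2 - 2)/x^3:  # u(x) = -1/x^3
    p := 1.5:  q := 1.5:
    for j from 1 to 4 do
      p := phi1(p);  q := phi2(q);
      printf("%d:  %.15f   %.15f\n", j, p, q);
    end do:

Both columns converge to √2 ≈ 1.41421, with the number of correct digits roughly doubling at each step.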
3.8.2 Example: Derivation of Newton–Raphson as a fixed point scheme
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.9>
The Newton–Raphson scheme can alternatively be derived as a fixed point method by writing
φ(x) = x + u(x)f(x).
For any choice of u(x), this φ(x) has the required fixed point: if f(c) = 0 then φ(c) = c. The idea is to
make a ‘good’ choice of u(x) to achieve fast convergence:
• if |φ′(c)| < 1, we have linear convergence (see §3.7);
• if φ′(c) = 0, we have quadratic convergence. We therefore aim to choose u(x) such that φ′(c) = 0.
Differentiating, and using f(c) = 0, we find
φ′(c) = 1 + u′(c)f(c) + u(c)f′(c) = 1 + u(c)f′(c).
Setting φ′(c) = 0 leads to u(c) = −1/f′(c), which is of course satisfied by choosing u(x) = −1/f′(x).
Hence this ‘good’ choice of u(x) defines the iteration function
φ(x) = x − f(x)/f′(x),
so this fixed point method is the Newton–Raphson method seen earlier,
cⱼ₊₁ = φ(cⱼ) = cⱼ − f(cⱼ)/f′(cⱼ),
and the leading-order error approximation (see §3.7) is
|cⱼ₊₁ − c| ≈ |cⱼ − c|² |φ″(c)/2!|.
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.9>
Notes
• This derivation isn’t valid if f′(c) = 0. The behaviour of the Newton–Raphson method is different in
this case; see Problem sheet 5 Question 5.6.
• If f′(c) ≠ 0 then we again see that the convergence is quadratic because the error in cⱼ₊₁ is
proportional to the square of the error in cⱼ.
3.9 The Newton–Raphson method in two dimensions
The Newton–Raphson method can be applied in more than one dimension. Suppose we have two functions
f(x, y) and g(x, y) and that we wish to find a point (c, d) such that
f(c, d) = g(c, d) = 0.
We can use the ‘trick’ from section 3.6.2, and expand f(c, d) about the point (x, y).
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.10>
In this way we obtain
f(c, d) = f(x, y) + (c − x)f_x(x, y) + (d − y)f_y(x, y)
          + [(c − x)²/2!]f_xx(x, y) + [(d − y)²/2!]f_yy(x, y) + (c − x)(d − y)f_xy(x, y) + · · ·
where subscripts denote partial derivatives. Assuming x is close to c and y is close to d, we now discard
quadratic and higher order terms. Since f(c, d) = 0, we then have
−f(x, y) ≈ (c − x)f_x(x, y) + (d − y)f_y(x, y),
and similarly for g:
−g(x, y) ≈ (c − x)g_x(x, y) + (d − y)g_y(x, y).
We can write this system in matrix form; thus

    − [ f(x, y) ]   [ f_x(x, y)   f_y(x, y) ] [ c − x ]
      [ g(x, y) ] ≈ [ g_x(x, y)   g_y(x, y) ] [ d − y ].

The matrix of derivatives that appears here is called the Jacobian, which we will denote by J(x, y). Taking
the inverse, we obtain

    [ c ]   [ x ]               [ f(x, y) ]
    [ d ] ≈ [ y ] − J⁻¹(x, y) [ g(x, y) ].
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.10>
To obtain this, we discarded quadratic terms (i.e. (x − c)², (y − d)² and (x − c)(y − d)). In general, it
won’t give us c and d exactly, but if x ≈ c and y ≈ d it should produce improved approximations. In fact,
this is the Newton–Raphson scheme in 2D:

    cⱼ₊₁ = φ(cⱼ, dⱼ)   and   dⱼ₊₁ = ψ(cⱼ, dⱼ),

where

    [ φ(x, y) ]   [ x ]               [ f(x, y) ]
    [ ψ(x, y) ] = [ y ] − J⁻¹(x, y) [ g(x, y) ].
Remarks
• Since f(c, d) = g(c, d) = 0, we have φ(c, d) = c and ψ(c, d) = d, so this is a two-dimensional fixed
point scheme.
• Provided that det J(c, d) ≠ 0, the partial derivatives φ_x, ψ_x, φ_y and ψ_y all vanish at (x, y) = (c, d)
(see Problem sheet 5 Question 5.8). This is the condition for quadratic convergence in a 2D
fixed point scheme.
3.9.1 Example
Obtain the two-dimensional Newton–Raphson iteration for the nonlinear system

    f(x, y) = (x − 1)² + 4y² − 1 = 0,
    g(x, y) = (x − 1/2)² + (y − 1/2)² − 1/9 = 0.      (∗)

(The solutions are the two points at which the circle and ellipse shown in figure 3.6 intersect.)
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.11>
Figure 3.6: Geometric representation of the nonlinear system (∗): the ellipse (x − 1)² + 4y² = 1 and the circle
(x − 1/2)² + (y − 1/2)² = 1/9. Approximate solutions can be read off as x ≈ y ≈ 0.25 and x ≈ 0.9, y ≈ 0.3; these can be
used as initial estimates for the Newton–Raphson iteration.
For the Jacobian, we have

    J = [ ∂f/∂x   ∂f/∂y ]   [ 2(x − 1)   8y     ]
        [ ∂g/∂x   ∂g/∂y ] = [ 2x − 1     2y − 1 ],
and the inverse of a general 2 × 2 matrix is given by

    [ a   b ]⁻¹                [  d   −b ]
    [ c   d ]   = 1/(ad − bc)  [ −c    a ].
In this case the determinant is

    det J = 2(x − 1)(2y − 1) − 8y(2x − 1) = −2(6xy + x − 2y − 1),

so the inverse is

    J⁻¹(x, y) = −1/[2(6xy + x − 2y − 1)] [ 2y − 1     −8y      ]
                                          [ −2x + 1    2(x − 1) ].
Finally, the iteration is

    [ cⱼ₊₁ ]   [ cⱼ ]                 [ f(cⱼ, dⱼ) ]
    [ dⱼ₊₁ ] = [ dⱼ ] − J⁻¹(cⱼ, dⱼ) [ g(cⱼ, dⱼ) ].
· · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · <3.11>
This is easily implemented in Maple (see newton_2d.mw).
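A sketch of such an implementation, using the inverse Jacobian worked out above (the starting values are read from figure 3.6, and the fixed number of iterations is illustrative):

    f := (x, y) -> (x - 1)^2 + 4*y^2 - 1:
    g := (x, y) -> (x - 1/2)^2 + (y - 1/2)^2 - 1/9:
    c := 0.9:  d := 0.3:                      # initial estimate from figure 3.6
    for j from 1 to 8 do
      dt := -2*(6*c*d + c - 2*d - 1);         # det J(c, d), from above
      h1 := ((2*d - 1)*f(c, d) - 8*d*g(c, d))/dt;
      h2 := ((1 - 2*c)*f(c, d) + 2*(c - 1)*g(c, d))/dt;
      c := c - h1;  d := d - h2;              # Newton-Raphson update
    end do:
    c, d;                                     # one intersection point of (*)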
3.10 The Newton–Raphson method in n dimensions
The derivation from section 3.9 works in n dimensions. We can deduce the result without writing out the
intermediate steps. Thus, if x is an n-dimensional vector with elements x₁, . . . , xₙ and f is a vector-valued
function, i.e.

    f(x) = [ f₁(x) ]
           [ f₂(x) ]
           [   ⋮   ]
           [ fₙ(x) ],
then the Newton–Raphson iteration is

    φ(x) = x − J⁻¹ f(x),

where

    J = [ ∂f₁/∂x₁   ∂f₁/∂x₂   · · ·   ∂f₁/∂xₙ ]
        [ ∂f₂/∂x₁   ∂f₂/∂x₂   · · ·   ∂f₂/∂xₙ ]
        [    ⋮         ⋮       ⋱        ⋮     ]
        [ ∂fₙ/∂x₁   ∂fₙ/∂x₂   · · ·   ∂fₙ/∂xₙ ].
Remark For systems with more than two equations, it is not practical to invert J analytically. Instead,
we evaluate the elements of J and f at each step, and write
J⁻¹f = h,
so that φ(x) = x − h, and
Jh = f.
We can solve this system for h using Gaussian elimination. (Not by hand! LinearAlgebra[LinearSolve]
in Maple.)
3.11 Example
Consider the 3 × 3 system of equations

    f₁(x₁, x₂, x₃) = 3x₁x₂x₃ + e^(−(x₁x₂)²) + sin(x₃) = 0,
    f₂(x₁, x₂, x₃) = x₃³ + x₃ + cos(x₁x₂) + sin(1 + x₁² + x₂²) = 0,
    f₃(x₁, x₂, x₃) = x₁³ + x₂⁴ − 8x₃² + x₁x₂² = 0.
The entries in the Jacobian are

    J₁₁ = 3x₂x₃ − 2x₁x₂² e^(−(x₁x₂)²),
    J₁₂ = 3x₁x₃ − 2x₁²x₂ e^(−(x₁x₂)²),
    J₁₃ = 3x₁x₂ + cos(x₃),
    J₂₁ = 2x₁ cos(x₁² + x₂² + 1) − x₂ sin(x₁x₂),
    J₂₂ = 2x₂ cos(x₁² + x₂² + 1) − x₁ sin(x₁x₂),
    J₂₃ = 1 + 3x₃²,
    J₃₁ = 3x₁² + x₂²,
    J₃₂ = 2x₂(x₁ + 2x₂²),
    J₃₃ = −16x₃.
Trying to find J⁻¹ for general values of x₁, x₂ and x₃ will produce a very large expression. However, we
can still apply the Newton–Raphson method as follows.
• Let

    x = [ x₁ ]              [ f₁(x₁, x₂, x₃) ]
        [ x₂ ]   and f(x) = [ f₂(x₁, x₂, x₃) ]
        [ x₃ ]              [ f₃(x₁, x₂, x₃) ],

and take x = x^(0) as the initial estimate.
• For each x^(j):
  – Evaluate J and f(x) at x = x^(j).
  – Solve the linear system Jh = f to find the vector h.
  – Update using x^(j+1) = x^(j) − h (since J⁻¹f = h).
It’s fairly easy to implement this in Maple (see newton_3d.mw), but care must be taken to apply the steps
in the correct order.
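A sketch of those steps for the system above (the initial estimate x^(0) = (1, 1, 1) and the tolerance are illustrative; whether the iteration converges depends on the starting point):

    with(LinearAlgebra):
    F := X -> Vector([3*X[1]*X[2]*X[3] + exp(-(X[1]*X[2])^2) + sin(X[3]),
                      X[3]^3 + X[3] + cos(X[1]*X[2]) + sin(1 + X[1]^2 + X[2]^2),
                      X[1]^3 + X[2]^4 - 8*X[3]^2 + X[1]*X[2]^2]):
    Jmat := X -> Matrix([[3*X[2]*X[3] - 2*X[1]*X[2]^2*exp(-(X[1]*X[2])^2),
                          3*X[1]*X[3] - 2*X[1]^2*X[2]*exp(-(X[1]*X[2])^2),
                          3*X[1]*X[2] + cos(X[3])],
                         [2*X[1]*cos(X[1]^2 + X[2]^2 + 1) - X[2]*sin(X[1]*X[2]),
                          2*X[2]*cos(X[1]^2 + X[2]^2 + 1) - X[1]*sin(X[1]*X[2]),
                          1 + 3*X[3]^2],
                         [3*X[1]^2 + X[2]^2, 2*X[2]*(X[1] + 2*X[2]^2), -16*X[3]]]):
    X := Vector([1.0, 1.0, 1.0]):             # initial estimate x^(0)
    for j from 1 to 20 do
      h := LinearSolve(Jmat(X), F(X));        # Gaussian elimination: solve J h = f
      X := X - h;                             # update x^(j+1) = x^(j) - h
      if Norm(F(X), 2) < 1.0e-10 then break end if;
    end do:
    X;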
3.12 Final remarks
There are many ways to solve nonlinear equations, but the more complex the scheme the more potential
problems we may encounter. The best programs typically use a rapidly convergent method and fall back on
bisection when things go wrong.
We have not considered the problems of finding multiple roots or finding complex roots, both of which are
generally much more difficult.