
Course assignment
ENG2079/ENG5322: Introduction to computer
programming (Additional Hints Provided)
February 21, 2021
Instructions
You should submit the following through the ENG2079/ENG5322 Moodle submission system by
4pm, 26th March 2021:
1. Python notebook (.ipynb format) used to solve each of the assignment problems.
This can be a separate notebook for each problem or a single file.
You will be marked according to the following criteria:
• Program correctness (60%): Does your program solve the problem given to you?
• Program readability (20%): Have you written your program in such a way that it is
easy to follow? Have you included appropriate comments?
• Testing (20%): Have you done appropriate testing to show your program works as
intended?
You are given two problems to solve for your project assignment which will be outlined in more
detail shortly. The two problems are focussed on:
1. Newton-Raphson method
2. Conjugate gradient method
Notebook format
I expect to see you explain your code throughout your notebook using both text and images,
much like I have done in the lectures during this course. As a quick guide, the default type of a
cell is code, but if you change it to 'markdown' (look in the toolbar) and then hit Shift and Enter
you will see it become text in the notebook.
Testing: You should write appropriate code that performs tests on your program.
1 Problem 1: Newton Raphson method
A common problem in engineering analysis is this: given a function f(x) determine the values
of x for which f(x) = 0. The solutions (values of x) are known as the roots of the equation
f(x) = 0, or the zeroes of the function f(x). The equation
y = f(x) (1)
contains three elements: an input value x, an output value y, and the rule f for computing y.
The Newton-Raphson algorithm is the best-known method of finding roots for a good reason: it
is simple and fast. The only drawback of the method is that it uses the derivative f ′(x) of the
function as well as the function f(x) itself. Therefore, the Newton-Raphson method is usable
only in problems where f ′(x) can be readily computed.
The Newton-Raphson formula can be derived from the Taylor series expansion of f(x) about x_i:

f(x_{i+1}) = f(x_i) + f'(x_i)(x_{i+1} − x_i) + O((x_{i+1} − x_i)^2)    (2)
where O(z) is to be read as "of the order of z", i.e., the higher-order terms. If x_{i+1} is a root of
f(x) = 0, the above equation becomes:

0 = f(x_i) + f'(x_i)(x_{i+1} − x_i) + O((x_{i+1} − x_i)^2)    (3)
Assuming that x_i is close to x_{i+1}, we can drop the last term and solve for x_{i+1}. The result is
the Newton-Raphson formula:

x_{i+1} = x_i − f(x_i) / f'(x_i)    (4)
If x denotes the true value of the root, the error in x_i is E_i = x − x_i. It can be shown that if
x_{i+1} is computed from Eq. 4, the corresponding error is:

E_{i+1} = −( f''(x_i) / (2 f'(x_i)) ) E_i^2    (5)
indicating that the Newton-Raphson method converges quadratically (the error is proportional to
the square of the error in the previous step). As a consequence, the number of significant figures is roughly doubled
in every iteration, provided that xi is close to the root. The graphical representation is shown in
Fig. 1.
The algorithm for the Newton-Raphson method is simple and can be easily implemented in Python. It
repeatedly applies Eq. 4, starting with an initial value x_0, until the convergence criterion is reached:

|x_{i+1} − x_i| < ε    (6)

where ε is the error tolerance. The algorithm for Newton-Raphson is then simple:
Figure 1: Graphical interpretation of the Newton-Raphson formula, which approximates f(x)
by the straight line that is tangent to the curve at x_i; x_{i+1} is then at the intersection
of the x-axis and the tangent line.
Figure 2: The pseudo-algorithm for Newton-Raphson method.
You are going to compute the root of f(x) = x^3 − 10x^2 + 5 = 0 that lies close to x = 0.7. The
derivative of the function is f'(x) = 3x^2 − 20x, so that the Newton-Raphson formula in Eq. 4
is

x ← x − f(x) / f'(x) = x − (x^3 − 10x^2 + 5) / (3x^2 − 20x) = (2x^3 − 10x^2 − 5) / (x(3x − 20))    (7)
It takes only two iterations to reach five decimal place accuracy:
x ← (2(0.7)^3 − 10(0.7)^2 − 5) / (0.7[3(0.7) − 20]) = 0.73536
x ← (2(0.73536)^3 − 10(0.73536)^2 − 5) / (0.73536[3(0.73536) − 20]) = 0.73460    (8)
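As a quick sanity check (not required by the assignment, just a minimal sketch using the function and starting value given above), these two iterations can be reproduced in a few lines of Python:

def f(x):
    return x**3 - 10*x**2 + 5

def df(x):
    return 3*x**2 - 20*x

x = 0.7
for i in range(2):
    x = x - f(x)/df(x)          # apply Eq. 4
    print(f"{x:.5f}")           # prints 0.73536, then 0.73460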
In addition, you are required to find the roots of the following equations,

f(x) = x^3 − 2x − 5 = 0    (9)

that has a root near x = 2. And,

f(x) = x − 2 sin x = 0    (10)

that has a root near x = 1.5. In the second case, show a scenario where the Newton-Raphson
method will not work by choosing a different starting point. You can improve solving Eq. 10 by
rewriting it as 2/x − 1/sin x = 0; if x_0 is any number in (0, π), Newton quickly takes us to the root.
The reformulation (sin x)/x − 1/2 = 0 also works nicely.
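As one illustrative sketch of such a failure (the functions below are those of Eq. 10, but the particular starting point is an assumption you should check for yourself), note that f'(x) = 1 − 2 cos x vanishes at x = π/3 ≈ 1.047, so starting at or very near π/3 makes the tangent line almost horizontal and the first Newton step lands far away from the root near 1.5:

import math

def f(x):
    return x - 2*math.sin(x)

def df(x):
    return 1 - 2*math.cos(x)

x = math.pi/3                 # f'(x) is (essentially) zero here
print(x - f(x)/df(x))         # the first step of Eq. 4 shoots off to a huge value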
1.1 Bonus question for extra credit
A loan of A pounds is repaid by making n equal monthly payments of M pounds, starting a
month after the loan is made. It can be shown that if the monthly interest rate is r, then
Ar = M (1 − 1/(1 + r)^n)    (11)

A car loan of £10,000 was repaid in 60 monthly payments of £250. Use the Newton method
to find the monthly interest rate correct to 4 significant figures. You can seek help from
https://www.mathway.com/popular-problems/Calculus/526998 to calculate the derivative.
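One possible way to set this up (a sketch only; the numbers A = 10000, M = 250 and n = 60 come from the question, while the rearrangement and the derivative below are things you should verify for yourself) is to move everything in Eq. 11 to one side so that the required interest rate is a root:

A, M, n = 10000.0, 250.0, 60     # loan amount, monthly payment, number of payments

def f(r):
    # Eq. 11 rearranged: f(r) = A*r - M*(1 - (1 + r)^(-n)) = 0 at the required rate
    return A*r - M*(1 - (1 + r)**(-n))

def df(r):
    # derivative of f with respect to r
    return A - M*n*(1 + r)**(-(n + 1))

Note that r = 0 is also a (trivial) root of f, so start the iteration away from zero, e.g. at a guess of a few per cent per month, and pass it to your Newton-Raphson routine.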
1.2 Hint
To program this in Python, you can go about defining two functions:
def f(x):
    return          # return the value of f(x) here

def df(x):
    return          # return the value of f'(x) here
And you can then make a function,

def newtonRaphson(x, tol=1.0e-9):
    ...                          # your implementation of the iteration goes here
Additionally, you can plot f(x) and show that the iteration is converging.
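For reference, a minimal sketch of what such a function might look like is given below (this is only one possible structure, assuming f and df are defined as above and adding an assumed maxIter cap; your own implementation, comments and tests are what will be marked):

def newtonRaphson(x, tol=1.0e-9, maxIter=100):
    # repeatedly apply Eq. 4 until two successive estimates agree to within tol (Eq. 6)
    for i in range(maxIter):
        dx = -f(x)/df(x)         # the Newton step, x_{i+1} - x_i
        x = x + dx
        if abs(dx) < tol:
            return x, i + 1      # root estimate and number of iterations used
    raise RuntimeError("Newton-Raphson did not converge")

Calling newtonRaphson(0.7) with the f and df for Eq. 7 should then reproduce the root 0.73460 quoted above.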
2 Problem 2: Conjugate gradient method
Overview of task: Write a program that solves equations of the form Ax = b using the conjugate
gradient method. Use this to solve the equation:

[  4  −1   1 ] [ x1 ]   [ 12 ]
[ −1   4  −2 ] [ x2 ] = [ −1 ]    (12)
[  1  −2   4 ] [ x3 ]   [  5 ]
2.1 Derivation
The problem involves finding the vector x that minimizes the scalar function
f(x) = (1/2) x^T A x − b^T x    (13)
where the matrix A is symmetric and positive definite. Because f(x) is minimized when its
gradient ∇f = Ax − b is zero, we see that the minimization is equivalent to solving
Ax = b (14)
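For completeness (this step is implicit in the statement above), the gradient of Eq. 13 for a symmetric matrix A works out as

\nabla f = \tfrac{1}{2}\,(A + A^{T})\,x - b = Ax - b \quad \text{(since } A^{T} = A\text{).}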
Gradient methods solve this minimization problem iteratively, starting with an initial estimate
x_0 of the vector x and then computing successive iterates using

x_{k+1} = x_k + α_k s_k    (15)

where the step length α_k is chosen so that x_{k+1} minimizes f(x_{k+1}) along the search direction
s_k, i.e. so that x_{k+1} satisfies

A(x_k + α_k s_k) = b    (16)
If we now introduce the residual

r_k = b − A x_k    (17)

then pre-multiplying both sides of Eq. 16 by s_k^T and solving for α_k gives

α_k = (s_k^T r_k) / (s_k^T A s_k)    (18)
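Written out for completeness, the intermediate step is

s_k^T A (x_k + \alpha_k s_k) = s_k^T b \;\Longrightarrow\; \alpha_k \, s_k^T A s_k = s_k^T (b - A x_k) = s_k^T r_k.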
While intuitively we could choose s_k = −∇f = r_k, as this is the direction of the largest negative
change in f(x) (the method of steepest descent), this converges very slowly. A better choice is
the conjugate gradient method, which uses the search direction

s_{k+1} = r_{k+1} + β_k s_k    (19)
The constant βk is then chosen so that the two successive search directions are conjugate to
each other, i.e.,
s_{k+1}^T A s_k = 0    (20)
Substituting Eq. 19 into Eq. 20 gives:

β_k = −(r_{k+1}^T A s_k) / (s_k^T A s_k)    (21)
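The intermediate step, written out for completeness, is

(r_{k+1} + \beta_k s_k)^T A s_k = 0 \;\Longrightarrow\; \beta_k \, s_k^T A s_k = -\, r_{k+1}^T A s_k.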
The algorithm is then as follows:
It can be shown that the residual vectors r_1, r_2, r_3, ... are mutually orthogonal, i.e. r_i · r_j = 0 for i ≠ j.
Please note that while writing the algorithm, if you want to multiply the transpose of a vector
with a regular vector, say x^T y, you can use np.dot(x,y) in numpy. While implementing the
algorithm, set the tolerance to a very low value, tol=1.0e-9.
Here is the outline of the conjugate gradient algorithm:

Choose x_0 (any vector will do).
Let r_0 ← b − A x_0.
Let s_0 ← r_0 (lacking a previous search direction, choose the direction of steepest descent).
do with k = 0, 1, 2, ...:
    α_k ← (s_k^T r_k) / (s_k^T A s_k)
    x_{k+1} ← x_k + α_k s_k
    r_{k+1} ← b − A x_{k+1}
    if |r_{k+1}| ≤ ε, exit the loop (ε is the error tolerance)
    β_k ← −(r_{k+1}^T A s_k) / (s_k^T A s_k)
    s_{k+1} ← r_{k+1} + β_k s_k

Because the residual vectors produced by the algorithm are mutually orthogonal, the residual after n iterations (n being the number of unknowns) must be a null vector (r_{n+1} = 0), indicating that the solution has been obtained. In exact arithmetic the conjugate gradient algorithm therefore reaches the exact solution after at most n computational cycles; in practice, convergence is usually achieved in fewer than n iterations. The method is not competitive with direct methods for small systems of equations; its strength lies in handling large, sparse systems (where most elements of A are zero), because A enters the algorithm only through its multiplication by a vector, i.e. in the form Av.

Figure 3: Pseudo-algorithm for the conjugate gradient method.
2.2 Hint
The algorithm requires calculating r, α and β (you can work back from the worked example),
along with the modulus of r. You can then break the whole algorithm down into very simple
matrix/vector multiplications, additions and subtractions that you carry out in a loop, whether
a for loop or a while loop:
a) Multiplying a matrix A with x and subtracting from a vector b to get r. This can all be done
using numpy arrays,
A = np.array([[ 4, -1 ,1], [ -1, 4 ,-2], [ 1, -2 ,4]])
b = np.array([12, -1, 5])
x = np.ones(3)
then r = b − Ax (Eq. 17) is nothing but

r = b - np.dot(A,x)
b) You also have to solve for α and β. If I take α, then the numerator bit is nothing but
multiplying the transpose of s with r (can be done using np.dot). The denominator is the transpose
of s with As, which you can again solve using np.dot, provided you first precalculate A times s
using np.dot.
numerator = np.dot(s,r)
As = np.dot(A,s)
denominator = np.dot(s,As)
alpha = numerator / denominator
You can similarly take hints from the above to solve for β.
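For example (a sketch only; this assumes the updated residual r_{k+1} is stored in a variable called r_new and As is the product precalculated above):

beta = -np.dot(r_new, As) / np.dot(s, As)    # Eq. 21
s = r_new + beta*s                           # Eq. 19: the next search direction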
c) x_{k+1} = x_k + α_k s_k, i.e., calculating the next value of x, is nothing but
x = x + alpha * s
or in short form
x += alpha * s
We have also covered the += operator in the lectures. You can similarly solve for s.
d) To calculate |r|, you can simply use

rnorm = np.sqrt(np.dot(r,r))     # np.dot(r,r) on its own gives |r| squared

and then use an if statement to see whether rnorm is less than the tolerance (tol=1.0e-9, as mentioned
above) to break from the loop.
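Put together, the exit test inside the loop might look like this (a sketch; the variable names are just suggestions):

if np.sqrt(np.dot(r, r)) < tol:    # |r| is small enough, so the residual is (nearly) zero
    break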
If I have said not to use numpy library functions, then it means not to use an equivalent function
such as np.linalg.solve() to cheat, which we already covered in the lectures and acts as a
one-line solution:

def conjugate_gradient_cheating(A, b):
    x = np.linalg.solve(A, b)
    return x
instead I want you to implement your own algorithm:

def conjugate_gradient_snazzy(A, b):
    ...                          # your own conjugate gradient implementation goes here
    return x
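For reference, here is a minimal sketch of how the pieces above might fit together (this is only one possible structure with suggested names; your submission should be your own implementation, with comments and tests):

import numpy as np

def conjugate_gradient(A, b, x=None, tol=1.0e-9):
    # Conjugate gradient method for Ax = b, with A symmetric and positive definite
    n = len(b)
    if x is None:
        x = np.zeros(n)                          # starting guess (any vector will do)
    r = b - np.dot(A, x)                         # initial residual, Eq. 17
    s = r.copy()                                 # first search direction: steepest descent
    for k in range(n):                           # at most n iterations should be needed
        As = np.dot(A, s)
        alpha = np.dot(s, r) / np.dot(s, As)     # Eq. 18
        x = x + alpha*s                          # Eq. 15
        r = b - np.dot(A, x)                     # updated residual
        if np.sqrt(np.dot(r, r)) < tol:          # |r| small enough: converged
            break
        beta = -np.dot(r, As) / np.dot(s, As)    # Eq. 21
        s = r + beta*s                           # Eq. 19
    return x, k + 1

A = np.array([[4, -1, 1], [-1, 4, -2], [1, -2, 4]], dtype=float)
b = np.array([12, -1, 5], dtype=float)
x, numIter = conjugate_gradient(A, b)
print(x, numIter)                                # should print a vector close to [3. 1. 1.]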