Lecture 12 – Numerical Differentiation
Julián Norato
Associate Professor
Department of Mechanical Engineering
University of Connecticut
julian.norato@uconn.edu
ME3255-001 – Computational Mechanics
Spring 2023
J. Norato (UConn) Numerical Differentiation ME3255-001 – Spring 2023 1 / 15
Outline
1 Finite Differences
2 Richardson Extrapolation
3 Unequally Spaced Data
4 Partial Derivatives
5 NumPy functions
Finite Differences
Recall finite differences:
Approximations to derivatives f^(n) are derived from the Taylor series of f at x∗
Forward, backward and central formulas
Error:
Truncation error grows as h increases (from the Taylor-series remainder)
Roundoff error (subtractive cancellation) grows as h decreases
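As a minimal sketch (test function and step size chosen only for illustration), the three formulas and their error behavior can be compared against an exact derivative:

```python
import numpy as np

def forward_diff(f, x, h):
    # O(h) forward difference
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    # O(h) backward difference
    return (f(x) - f(x - h)) / h

def central_diff(f, x, h):
    # O(h^2) centered difference
    return (f(x + h) - f(x - h)) / (2 * h)

# illustrative test: f = sin, exact derivative cos
x, h = 1.0, 0.1
exact = np.cos(x)
err_fwd = abs(forward_diff(np.sin, x, h) - exact)   # O(h) error
err_cen = abs(central_diff(np.sin, x, h) - exact)   # O(h^2) error, much smaller
```

Halving h roughly halves the forward/backward error but quarters the centered error, until roundoff takes over for very small h.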
Finite Difference Formulas
Finite Differences for Data with Errors
Caution must be taken when the available data contain errors, as
finite-difference derivatives are very sensitive to noise in the data.
Richardson Extrapolation
We can make an improved estimate of the derivative by using Richardson
extrapolation (which we will revisit when studying numerical integration):
Richardson Extrapolation
Specifically, if two finite-difference approximations D(h1) and D(h2) with
steps h1 and h2 = h1/2, respectively, are available, then an improved
estimate of the derivative is given by

D ≈ (4/3) D(h2) − (1/3) D(h1)
Richardson Extrapolation
Example
Using Richardson extrapolation, obtain an improved estimate of f ′(x) at
x = 0.5 for the function
f(x) = −0.1x^4 − 0.15x^3 − 0.5x^2 − 0.25x + 1.2
using centered finite differences with steps h1 = 0.5 and h2 = 0.25. Given
that the exact value is f ′(0.5) = −0.9125, compute the true error.
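A minimal sketch of the computation (polynomial written in descending powers; all values follow from the given data):

```python
def f(x):
    # polynomial from the example
    return -0.1*x**4 - 0.15*x**3 - 0.5*x**2 - 0.25*x + 1.2

def central_diff(f, x, h):
    # O(h^2) centered difference
    return (f(x + h) - f(x - h)) / (2*h)

x = 0.5
D1 = central_diff(f, x, 0.5)    # h1 = 0.50 -> D1 = -1.0
D2 = central_diff(f, x, 0.25)   # h2 = 0.25 -> D2 = -0.934375
D  = 4/3 * D2 - 1/3 * D1        # Richardson -> -0.9125
err = abs(D - (-0.9125))        # true error ~ 0: the remaining O(h^4)
                                # error term vanishes for this quartic
```

Note that the extrapolated value matches the exact derivative here: Richardson extrapolation cancels the O(h^2) error term, and for a fourth-degree polynomial the centered-difference error has no higher-order contribution.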
Unequally Spaced Data
If data are unequally spaced, a common strategy is to use Lagrange
interpolation.
Since we know the form of the Lagrange interpolant, we can directly
differentiate.
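For three unequally spaced points, differentiating the quadratic Lagrange interpolant gives a closed-form estimate. A minimal sketch (the helper name is ours):

```python
def lagrange3_deriv(x, x0, x1, x2, f0, f1, f2):
    """Derivative at x of the quadratic Lagrange interpolant through
    the (possibly unequally spaced) points (x0,f0), (x1,f1), (x2,f2)."""
    return (f0 * (2*x - x1 - x2) / ((x0 - x1) * (x0 - x2))
          + f1 * (2*x - x0 - x2) / ((x1 - x0) * (x1 - x2))
          + f2 * (2*x - x0 - x1) / ((x2 - x0) * (x2 - x1)))

# check on f(x) = x^2 at unequal nodes 0, 0.4, 1.3: the interpolant is
# exactly x^2, so the derivative at x = 1 should be 2x = 2
d = lagrange3_deriv(1.0, 0.0, 0.4, 1.3, 0.0, 0.16, 1.69)
```

When the three points happen to be equally spaced and x is the middle point, this formula reduces to the centered finite difference.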
Unequally Spaced Data – Example
Example
A temperature gradient can be measured down into
the soil. The heat flux at the soil-air interface can be
computed with Fourier’s law:
q(z = 0) = −k (dT/dz)|_(z=0)

where q(z) is the heat flux (W/m^2), k is the
coefficient of thermal conductivity of the soil [= 0.5
W/ (m · K)], T is the temperature (K), and z is the
distance measured down from the surface into the
soil (m). Note that a positive value for flux means
that heat is transferred from the air to the soil. Use
numerical differentiation to evaluate the gradient at
the soil-air interface and employ this estimate to
determine the heat flux into the ground.
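The measured (z, T) data accompany the slide's figure and are not reproduced here; the sketch below uses assumed illustrative values (z in m, T in °C) purely to show the procedure, combining the Lagrange-interpolant derivative with Fourier's law:

```python
# Assumed illustrative measurements (NOT the data from the slide):
z0, z1, z2 = 0.0, 0.0125, 0.0375     # depth below the surface (m)
T0, T1, T2 = 13.5, 12.0, 10.0        # temperature (deg C)
k = 0.5                              # W/(m*K), from the problem statement

# derivative at z = 0 of the quadratic Lagrange interpolant
dTdz = (T0 * (2*0 - z1 - z2) / ((z0 - z1) * (z0 - z2))
      + T1 * (2*0 - z0 - z2) / ((z1 - z0) * (z1 - z2))
      + T2 * (2*0 - z0 - z1) / ((z2 - z0) * (z2 - z1)))

# Fourier's law; positive q means heat flows from the air into the soil
q = -k * dTdz   # with these assumed values: dTdz ~ -133.33 K/m, q ~ 66.7 W/m^2
```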
Partial Derivatives
Consider the definition of partial derivative:
∂f/∂xi (x1, . . . , xn) := lim_(h→0) [f(x1, . . . , xi + h, . . . , xn) − f(x1, . . . , xi, . . . , xn)] / h
Following this definition, we can use finite differences by perturbing
one variable at a time; for example, using centered finite differences:
∂f/∂x ≈ [f(x + ∆x, y) − f(x − ∆x, y)] / (2∆x)

∂f/∂y ≈ [f(x, y + ∆y) − f(x, y − ∆y)] / (2∆y)
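A minimal sketch of these two formulas (test function and step sizes chosen for illustration):

```python
def partial_x(f, x, y, dx=1e-5):
    # centered difference in x, with y held fixed
    return (f(x + dx, y) - f(x - dx, y)) / (2*dx)

def partial_y(f, x, y, dy=1e-5):
    # centered difference in y, with x held fixed
    return (f(x, y + dy) - f(x, y - dy)) / (2*dy)

# illustrative check: f(x, y) = x^2 * y, so df/dx = 2xy and df/dy = x^2
f = lambda x, y: x**2 * y
dfdx = partial_x(f, 2.0, 3.0)   # ~ 12
dfdy = partial_y(f, 2.0, 3.0)   # ~ 4
```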
Mixed Partial Derivatives
∂^2f/∂x∂y = ∂/∂x (∂f/∂y)

          ≈ [∂f/∂y(x + ∆x, y) − ∂f/∂y(x − ∆x, y)] / (2∆x)

          = (1/(2∆x)) [ (f(x + ∆x, y + ∆y) − f(x + ∆x, y − ∆y)) / (2∆y)
                      − (f(x − ∆x, y + ∆y) − f(x − ∆x, y − ∆y)) / (2∆y) ]

          = (1/(4∆x∆y)) [ f(x + ∆x, y + ∆y) − f(x + ∆x, y − ∆y)
                        − f(x − ∆x, y + ∆y) + f(x − ∆x, y − ∆y) ]
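The final four-point formula can be checked directly; a minimal sketch with an illustrative test function:

```python
def mixed_partial(f, x, y, dx=1e-4, dy=1e-4):
    # centered four-point approximation to d^2 f / dx dy
    return (f(x + dx, y + dy) - f(x + dx, y - dy)
          - f(x - dx, y + dy) + f(x - dx, y - dy)) / (4*dx*dy)

# illustrative check: f(x, y) = x^2 * y^2, so d^2 f / dx dy = 4xy
f = lambda x, y: x**2 * y**2
val = mixed_partial(f, 1.0, 2.0)   # ~ 8
```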
NumPy functions
numpy.diff(a, n=1, axis=-1) computes the n-th discrete difference
along the given axis of the array a.
numpy.diff example
Example
Use numpy.diff to differentiate the function
f(x) = 0.2 + 25x − 200x^2 + 675x^3 − 900x^4 + 400x^5
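A sketch of one approach (the grid and step size are our choice): numpy.diff returns first differences, so dividing by the spacing gives forward-difference derivative estimates, which are second-order accurate when interpreted at the interval midpoints.

```python
import numpy as np

def f(x):
    return 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5

x = np.linspace(0.0, 0.8, 9)        # uniform grid, h = 0.1 (illustrative)
y = f(x)
dydx = np.diff(y) / np.diff(x)      # forward-difference estimates
xm = (x[:-1] + x[1:]) / 2           # midpoints, where the estimates are
                                    # centered and O(h^2) accurate
```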
NumPy functions
numpy.gradient(f,h) where f is a vector, returns the one-dimensional
numerical gradient of f with spacing between points h.
The gradient is computed using second-order central
differences for interior points, and first- or second-order
one-sided differences at the boundaries.
If f is an N-dimensional array, then the gradient is
computed along each of the dimensions (e.g., rows and
columns for a 2d-array). If h is a single scalar, uniform
spacing is used; h can also be a vector or N-dimensional
array for non-uniform spacing.
numpy.gradient example
Example
import numpy as np

# 1-D array
f = np.array([1, 2, 4, 7, 11, 16], dtype=float)
np.gradient(f)     # [1. , 1.5, 2.5, 3.5, 4.5, 5. ]
np.gradient(f, 2)  # spacing h = 2: [0.5 , 0.75, 1.25, 1.75, 2.25, 2.5 ]

# 2-D array: one gradient array per axis (down the rows, then across the columns)
A = np.array([[1, 2, 6], [3, 4, 5]], dtype=float)
np.gradient(A)     # [array([[ 2.,  2., -1.], [ 2.,  2., -1.]]),
                   #  array([[ 1.,  2.5, 4.], [ 1.,  1.,  1.]])]
