APMA 2070: Deep Learning for Scientists and Engineers
Instructors: George Em Karniadakis and Khemraj Shukla
Division of Applied Mathematics and School of Engineering
Brown University
GitHub Page
1 Introduction
Figure 1: Learning curves for five primary objectives of the course demonstrating different teaching rates (y-axis: course learning rate; x-axis: Module I through Module IV + Advanced Topics; curves: Algorithm; PyTorch + TensorFlow + JAX + MATLAB; Python + MATLAB; MPI + GPU; DeepXDE/MODULUS).
The main objective of this course is to teach the concepts and implementation of deep learning techniques for scientific and engineering problems to first-year graduate students. The course covers both the theory and the implementation of deep learning methods for a broad range of computational problems frequently encountered in solid mechanics, fluid mechanics, non-destructive evaluation of materials, systems biology, chemistry, and non-linear dynamics. At the end of the course, participants will be able to:
1. Understand the underlying theory and mathematics of deep learning.
2. Analyze and synthesize data in order to model physical, chemical, biological, and engineering systems.
3. Apply physics-informed neural networks (PINNs) to model and simulate multiphysics systems.
2 Workload and Grading
1. 6 Homeworks: 40% of grade
2. 1 Final Project: 50% of grade
3. In-class interaction: 10% of grade
Late homeworks will not be graded.
Total work hours
Over the 13 weeks of this course (including reading period), students will spend three hours in class per week (∼39 hours total). A reasonable estimate of the total effort needed to meet this course's learning outcomes is 180 hours. Project-based homework assignments may take ∼60 hours, and students are expected to allocate ∼80 hours to the final project.
3 Course Content
Module 1: Basics
I. Introduction
i. History of deep learning
ii. Scientific machine learning
iii. Rosenblatt’s perceptron
iv. Artificial and biological neurons
v. Building neural networks
vi. Course objectives
vii. PINNs: data + physical laws
viii. Example: non-destructive evaluation of materials
ix. Example: Heat transfer
x. Example: Hidden fluid mechanics
xi. Example: Rheology of shampoo
xii. Example: Reinforcement learning in fluids
xiii. Different types of deep learning
xiv. Course roadmap
xv. The four pillars of scientific methods
II. A primer on Python, NumPy, SciPy and Jupyter notebooks
i. Getting familiar with the programming environment of the course
ii. Introduction to Jupyter notebooks and setting them up on your machine
iii. Basics of data structures and operations in NumPy and SciPy
iv. Installation of the deep learning frameworks TensorFlow and PyTorch
v. Introduction to NVIDIA's deep learning containers and their installation
III. Deep Learning Networks
i. Workflow in training a deep neural network
ii. Basic concepts and terminology
iii. Regression versus classification
iv. Universal approximation theorem for functions and functionals
v. Example of a regression of a discontinuous/oscillatory function (see the sketch after this list)
vi. Fundamental approximation theory for shallow and deep neural networks
vii. Activation functions and adaptivity
viii. Loss functions (simple and advanced)
ix. Forward/backpropagation and automatic differentiation
x. Connecting neural networks with finite elements
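To give a feel for the regression example in item v, here is a minimal sketch in PyTorch (an illustration only, not the course's reference code; the target function and network sizes are assumptions):

    import torch
    import torch.nn as nn

    # Illustrative target: a discontinuous, oscillatory function on [-1, 1]
    def f(x):
        return torch.sign(x) + 0.3 * torch.sin(8 * torch.pi * x)

    torch.manual_seed(0)
    x = torch.linspace(-1, 1, 256).unsqueeze(1)   # training inputs, shape (256, 1)
    y = f(x)                                      # training targets

    # Small fully-connected network with tanh activations
    net = nn.Sequential(
        nn.Linear(1, 64), nn.Tanh(),
        nn.Linear(64, 64), nn.Tanh(),
        nn.Linear(64, 1),
    )

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(5000):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x), y)  # plain MSE regression loss
        loss.backward()
        opt.step()
    print(f"final MSE: {loss.item():.3e}")

The jump at x = 0 is where a network with smooth activations struggles, which is what motivates the approximation-theory discussion in items iv and vi.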
IV. A primer on TensorFlow, PyTorch and JAX
i. A brief introduction to tensors and algebraic operations on them in TensorFlow, PyTorch and JAX
ii. A brief introduction to preparing data for the training and testing processes
iii. An example implementation of a regression problem in Python, with and without TensorFlow, PyTorch and JAX
iv. Demonstration of implementing a feed-forward, fully-connected network in TensorFlow, PyTorch and JAX
v. Demonstration of automatic differentiation (AD) in TensorFlow, PyTorch and JAX (a minimal PyTorch sketch follows this list)
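As referenced in item v, a minimal sketch of reverse-mode AD using PyTorch's autograd (the TensorFlow and JAX versions differ only in API; the test function here is an arbitrary assumption):

    import torch

    # Differentiate f(x) = sin(x^2) at a batch of points via reverse-mode AD
    x = torch.linspace(0.0, 1.0, 5, requires_grad=True)
    y = torch.sin(x ** 2)

    # grad_outputs=ones gives the vector-Jacobian product, i.e. df/dx per point
    (dy_dx,) = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y))

    # Check against the hand-derived derivative 2x * cos(x^2)
    print(torch.allclose(dy_dx, 2 * x * torch.cos(x ** 2)))  # True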
V. Training and Optimization
i. Definition of optimization problem; types of stationary points
ii. Bad minima and degenerate saddle points
iii. Gradient Descent (GD) versus stochastic GD (SGD)
iv. Effect of learning rate
v. Practical tips for training a DNN
vi. Overfitting versus underfitting
vii. Vanishing and exploding gradients
viii. Xavier and He initializations
ix. Data normalization
x. Batch normalization
xi. What optimizer to use?
xii. First-order optimizers
xiii. Second-order optimizers
xiv. Learning rate scheduling (see the sketch after this list)
xv. Hybrid Least Squares – GD (LSGD)
xvi. L2, L1 Regularization and Dropout
xvii. Information bottleneck theory
xviii. Dying ReLU DNNs
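A minimal sketch tying together items viii, xii and xiv (Xavier initialization, a first-order optimizer, and step-decay learning-rate scheduling) in PyTorch; the toy data and hyperparameters are assumptions for illustration:

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

    # Xavier (Glorot) initialization for every linear layer
    for m in net.modules():
        if isinstance(m, nn.Linear):
            nn.init.xavier_normal_(m.weight)
            nn.init.zeros_(m.bias)

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    # Halve the learning rate every 1000 steps
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=1000, gamma=0.5)

    x = torch.randn(128, 2)
    y = x.sum(dim=1, keepdim=True)              # toy regression target
    for step in range(3000):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x), y)
        loss.backward()
        opt.step()
        sched.step()                            # advance the LR schedule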
VI. Neural Network Architectures
i. Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Residual Networks (ResNets), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs)
ii. Demonstration of implementing CNNs, GANs, ResNets and LSTMs in PyTorch and TensorFlow (a minimal CNN sketch follows this list)
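As referenced in item ii, a minimal CNN sketch in PyTorch; the input size, channel counts and class count are illustrative assumptions:

    import torch
    import torch.nn as nn

    # A small CNN for, e.g., 28x28 single-channel images (10 classes)
    class SmallCNN(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                   # 28x28 -> 14x14
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                   # 14x14 -> 7x7
            )
            self.head = nn.Linear(32 * 7 * 7, num_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    logits = SmallCNN()(torch.randn(8, 1, 28, 28)) # batch of 8 dummy images
    print(logits.shape)                            # torch.Size([8, 10])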
Module 2: Neural Differential Equations
I. Discovering Differential Equations
i. Problem Setup
ii. Neural ODEs (a minimal Euler-unroll sketch follows this list)
iii. Multistep neural networks
iv. Recurrent neural networks – comparisons
v. Seq2Seq
vi. Structure-preserving neural networks
vii. Symplectic and Poisson nets
viii. The GENERIC framework
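A minimal sketch of the neural-ODE idea from item ii, learning dx/dt = f_theta(x) from trajectory data. For simplicity it unrolls an explicit Euler integrator in plain PyTorch instead of using a dedicated ODE-solver library, and the linear-oscillator data are an assumption for illustration:

    import torch
    import torch.nn as nn

    # Reference data from a 2D linear oscillator (assumed for illustration)
    A = torch.tensor([[0.0, 1.0], [-1.0, 0.0]])
    dt, steps = 0.05, 100
    x_true = [torch.tensor([1.0, 0.0])]
    for _ in range(steps):                      # generate reference trajectory
        x_true.append(x_true[-1] + dt * (A @ x_true[-1]))
    x_true = torch.stack(x_true)

    f_theta = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
    opt = torch.optim.Adam(f_theta.parameters(), lr=1e-3)

    for epoch in range(500):
        x = x_true[0]
        loss = torch.zeros(())
        for n in range(steps):                  # unrolled explicit-Euler solve
            x = x + dt * f_theta(x)             # gradients flow through the solver
            loss = loss + (x - x_true[n + 1]).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()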
II. Physics-Informed Neural Networks (PINNs) – Part I
i. Data + Physical Laws
ii. Data + Physical Laws + Neural Networks
iii. What is a PINN and Why PINNs
iv. PINN for the Burgers Equation (see the sketch after this list)
v. PINN for Boundary Value Problems
vi. Soft Constraints and Weights
vii. Hard Constraints: Boundary Conditions
viii. Linearly Constrained Neural Networks
ix. Hard Constraints: Design and Optimization
x. Weighted Residual Methods
xi. hp-VPINNs: Domain Decomposition
xii. Variational Neural Networks
xiii. Convergence Theory of PINNs
xiv. Convergence Theory of hp-VPINNs
xv. Error Decomposition
xvi. Error estimates of PINNs based on Quadrature
xvii. PINNs vs DRM (Deep Ritz Method)
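To ground items iii and iv, a minimal sketch of a PINN residual for the viscous Burgers equation u_t + u u_x = ν u_xx in PyTorch. Network size, viscosity and sampling are illustrative assumptions, and the data/boundary/initial-condition losses are elided:

    import torch
    import torch.nn as nn

    # u(x, t) is represented by a small fully-connected network (sizes assumed)
    net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                        nn.Linear(32, 32), nn.Tanh(),
                        nn.Linear(32, 1))
    nu = 0.01 / torch.pi                        # viscosity (illustrative value)

    def pde_residual(x, t):
        """Residual of u_t + u*u_x - nu*u_xx at collocation points (x, t)."""
        x = x.requires_grad_(True)
        t = t.requires_grad_(True)
        u = net(torch.cat([x, t], dim=1))
        grad = lambda out, inp: torch.autograd.grad(
            out, inp, grad_outputs=torch.ones_like(out), create_graph=True)[0]
        u_t = grad(u, t)
        u_x = grad(u, x)
        u_xx = grad(u_x, x)
        return u_t + u * u_x - nu * u_xx

    # One optimization step on the physics loss at random collocation points
    x_f = torch.rand(1024, 1) * 2 - 1           # x in [-1, 1]
    t_f = torch.rand(1024, 1)                   # t in [0, 1]
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    opt.zero_grad()
    loss = pde_residual(x_f, t_f).pow(2).mean() # + data/BC/IC terms in practice
    loss.backward()
    opt.step()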
III. Physics-Informed Neural Networks (PINNs) – Part II
i. An alphabet of PINNs – an overview
ii. Gradient-enhanced PINNs: gPINNs
iii. Conservative PINNs via domain decomposition: cPINNs
iv. Extended PINNs via domain decomposition: xPINNs
v. PINNs for fractional PDEs: fPINNs
vi. PINNs for stochastic PDEs: sPINNs
Module 3: Neural Operators
I. Deep Operator Networks (DeepONet)
i. Universal approximation theorem for functionals
ii. From functions to operators
iii. Universal approximation theorem for operators
iv. DeepONet: branch and trunk nets (see the sketch after this list)
v. Theory of DeepONet
vi. Learning integral and fractional operators
vii. Exponential convergence of DeepONet
viii. Stochastic ODEs & PDEs
ix. DeepONet as LSTM
x. Multiscale DeepONet
xi. Physics-informed DeepONet
xii. Variational physics-informed DeepONet
xiii. DeepONet for high-speed flows
xiv. Extensions of DeepONet
xv. The DeepM&Mnet concept
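As referenced in item iv, a minimal branch-trunk DeepONet forward pass in PyTorch; the sensor count and latent dimension p are illustrative assumptions:

    import torch
    import torch.nn as nn

    class DeepONet(nn.Module):
        """G(u)(y) ~ sum_k branch_k(u) * trunk_k(y), plus a bias."""
        def __init__(self, n_sensors: int = 100, p: int = 64):
            super().__init__()
            self.branch = nn.Sequential(nn.Linear(n_sensors, 64), nn.Tanh(),
                                        nn.Linear(64, p))
            self.trunk = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                                       nn.Linear(64, p))
            self.b0 = nn.Parameter(torch.zeros(1))

        def forward(self, u, y):
            # u: (batch, n_sensors) input function sampled at fixed sensors
            # y: (batch, 1) evaluation point of the output function
            return (self.branch(u) * self.trunk(y)).sum(dim=1, keepdim=True) + self.b0

    G = DeepONet()
    u = torch.randn(32, 100)   # 32 sampled input functions
    y = torch.rand(32, 1)      # one query point per sample
    print(G(u, y).shape)       # torch.Size([32, 1])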
II. Fourier Neural Operator (FNO)
i. Extensions of FNO: dFNO+ and gFNO+
ii. DeepONet vs. FNO: theory
iii. DeepONet vs. FNO: applications (a minimal spectral-layer sketch follows this list)
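As referenced in item iii, a minimal sketch of the FNO's core building block, a 1D spectral convolution, in PyTorch; the channel width and mode truncation are illustrative assumptions:

    import torch
    import torch.nn as nn

    class SpectralConv1d(nn.Module):
        """Core FNO layer: FFT -> learned mode-wise linear map -> inverse FFT."""
        def __init__(self, channels: int = 16, n_modes: int = 8):
            super().__init__()
            self.n_modes = n_modes
            scale = 1.0 / channels
            self.weight = nn.Parameter(
                scale * torch.randn(channels, channels, n_modes, dtype=torch.cfloat))

        def forward(self, x):
            # x: (batch, channels, grid)
            x_hat = torch.fft.rfft(x)                      # to Fourier space
            out_hat = torch.zeros_like(x_hat)
            out_hat[:, :, :self.n_modes] = torch.einsum(   # mix low modes only
                "bim,iom->bom", x_hat[:, :, :self.n_modes], self.weight)
            return torch.fft.irfft(out_hat, n=x.size(-1))  # back to the grid

    layer = SpectralConv1d()
    print(layer(torch.randn(4, 16, 128)).shape)            # torch.Size([4, 16, 128])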
Module 4: SciML Uncertainty Quantification (SciML-UQ)
I. Machine Learning using Multi-Fidelity Data
i. What is multi-fidelity data: Ocean acidification example
ii. Gaussian process (GP) regression (see the sketch after this list)
iii. Data assimilation from noisy measurements: bathymetry and eel grass modeling
iv. Physics-informed kernels for GP
v. Multi-fidelity GP/co-Kriging modeling
vi. Example: Sea Surface Temperature in Massachusetts Bay
vii. Nonlinear multi-fidelity GP and examples
viii. Acquisition functions – active learning
ix. NN-induced GP kernels
x. Deep multi-fidelity GP
xi. Diffusion-manifold driven GP
xii. Composite multi-fidelity neural networks
xiii. Example: Nano-indentation of 3D printed materials
xiv. Multi-fidelity in modal space
xv. Example: vortex-induced vibrations of marine risers
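As referenced in item ii, a minimal NumPy sketch of vanilla GP regression with a squared-exponential kernel (hyperparameters are fixed by hand as an assumption; in practice they are learned from the marginal likelihood):

    import numpy as np

    def rbf(xa, xb, ell=0.2, sig=1.0):
        """Squared-exponential kernel k(x, x') = sig^2 exp(-|x-x'|^2 / (2 ell^2))."""
        d = xa[:, None] - xb[None, :]
        return sig**2 * np.exp(-0.5 * (d / ell) ** 2)

    rng = np.random.default_rng(0)
    x_train = rng.uniform(0, 1, 20)
    y_train = np.sin(2 * np.pi * x_train) + 0.05 * rng.standard_normal(20)
    x_test = np.linspace(0, 1, 100)

    noise = 0.05**2
    K = rbf(x_train, x_train) + noise * np.eye(20)
    L = np.linalg.cholesky(K)                   # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

    K_s = rbf(x_test, x_train)
    mean = K_s @ alpha                          # posterior predictive mean
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(rbf(x_test, x_test)) - np.sum(v**2, axis=0)  # predictive variance
    print(mean.shape, var.shape)                # (100,) (100,)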
II. Uncertainty Quantification (UQ) in Scientific Machine Learning
i. Total uncertainty
ii. Bayes’ theorem
iii. Bayesian model average
iv. Methods for UQ
v. BNNs and BPINNs
vi. Multi-fidelity B-PINNs
vii. Functional priors
viii. UQ in Scientific Machine Learning: A unified view
ix. Deep Ensembles (DEns) (see the sketch after this list)
x. Snapshot Ensembles (SEns)
xi. Stochastic Weight Averaging Gaussian (SWAG)
xii. Methods for UQ: Functional Prior
xiii. A Unified view of UQ for neural networks
xiv. Metrics of comparison
xv. Calibration
xvi. Function approximation with unknown and varying noise
xvii. UQ in PINNs
xviii. UQ in DeepONet
xix. Detection of Out-Of-Distribution (OOD)
xx. NeuralUQ in TensorFlow and JAX
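As referenced in item ix, a minimal sketch of Deep Ensembles in PyTorch: train a few independently initialized networks and read the spread of their predictions as a (mostly epistemic) uncertainty estimate. Data and sizes are illustrative assumptions:

    import torch
    import torch.nn as nn

    def make_net():
        return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

    torch.manual_seed(0)
    x = torch.linspace(-1, 1, 64).unsqueeze(1)
    y = torch.sin(3 * x) + 0.05 * torch.randn_like(x)

    ensemble = []
    for k in range(5):                          # 5 independent members
        net = make_net()                        # fresh random initialization
        opt = torch.optim.Adam(net.parameters(), lr=1e-2)
        for _ in range(500):
            opt.zero_grad()
            nn.functional.mse_loss(net(x), y).backward()
            opt.step()
        ensemble.append(net)

    with torch.no_grad():
        preds = torch.stack([net(x) for net in ensemble])  # (5, 64, 1)
    mean, std = preds.mean(0), preds.std(0)     # predictive mean and spread
    print(mean.shape, std.shape)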
Advanced Topics
I. MODULUS Library
i. Introduction to NVIDIA's MODULUS package for PINNs and its design patterns
ii. Building MODULUS using NVIDIA GPU Cloud (NGC) containers
iii. Demonstration of implementing PINNs using MODULUS
iv. Computational-graph compilation, XLA-based (Accelerated Linear Algebra) compilation, and the choice of floating-point precision (TF32, BF16 (bfloat16), FP16, FP32) vis-à-vis training cost on NVIDIA's latest Ampere-architecture GPUs (A100 and RTX 3090), using MODULUS (a generic mixed-precision sketch, not specific to MODULUS, follows this list)
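To illustrate the precision/cost trade-off in item iv, a generic PyTorch automatic-mixed-precision sketch; this is not MODULUS's API, and the model and data are illustrative assumptions:

    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    net = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1)).to(device)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

    x = torch.randn(512, 64, device=device)     # dummy inputs (assumed)
    y = torch.randn(512, 1, device=device)      # dummy targets (assumed)

    for step in range(100):
        opt.zero_grad()
        # Matmuls run in reduced precision (FP16/BF16/TF32) where safe
        with torch.autocast(device_type=device, enabled=(device == "cuda")):
            loss = nn.functional.mse_loss(net(x), y)
        scaler.scale(loss).backward()           # loss scaling avoids FP16 underflow
        scaler.step(opt)
        scaler.update()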
II. Multi-GPU Scientific Machine Learning
i. Introduction to the data-parallel approach for PINNs on distributed (multi-GPU) computing platforms
ii. Introduction to domain-decomposition methods: Conservative PINNs (cPINNs) and Extended PINNs (XPINNs)
iii. Sampling of training and testing data for PINNs on simple, complex and irregular geometries
iv. Implementation of multi-GPU PINNs using the data-parallel approach in SimNet (TensorFlow), PyTorch and MODULUS (a minimal PyTorch DDP skeleton follows this list)
v. Demonstration of implementing multi-GPU cPINNs and XPINNs using PyTorch and TensorFlow
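As referenced in item iv, a minimal data-parallel skeleton using PyTorch DistributedDataParallel (DDP); the stand-in loss and launch command are illustrative assumptions, not the course's reference code:

    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # Launched with: torchrun --nproc_per_node=<n_gpus> this_script.py
        dist.init_process_group("nccl")
        rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(rank)

        net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1)).cuda(rank)
        net = DDP(net, device_ids=[rank])       # gradients averaged across GPUs
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)

        # Each rank samples its own shard of collocation points
        x = torch.rand(1024, 2, device=rank)
        for step in range(1000):
            opt.zero_grad()
            loss = net(x).pow(2).mean()         # stand-in for a PINN loss
            loss.backward()                     # DDP all-reduces gradients here
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Each process owns one GPU and its own shard of points; the gradient all-reduce inside backward() is exactly the data-parallel pattern described above.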
4 Term Projects
I. Biomedicine
i. Solving forward and inverse problems in mathematical modeling of blood coagulation
Refer to Module 2: PINNs – Part I and Part II
ii. Predicting drug absorption using a physics-informed neural network
Refer to Module 2: PINNs – Part I and Part II
iii. Parameter identification in glucose-insulin interaction
Refer to Module 2: PINNs – Part I and Part II
iv. Parameter estimation in thrombus formation
Refer to Module 2: PINNs – Part I and Part II
II. Dynamical Systems
i. Charged particle in an electromagnetic field
Refer to Module 2: Lecture I
ii. Learning dynamical systems from data
Refer to Module 2: Lecture I
iii. Stiff ODE systems
Refer to Module 3: Lectures I & II
III. Engines
i. Learning engine parameters
Refer to Module 2: PINNs – Part I and Part II
IV. Fluid Mechanics
i. Modeling Bubble Growth Dynamics
Refer to Module 3: Lectures I & II
ii. Reconstruction of flow past a cylinder
Refer to Module 2: PINNs – Part I and Part II
iii. Reconstruction of the flow field for a lid-driven cavity flow
Refer to Module 2: PINNs – Part I and Part II
iv. Solving forward and inverse problems in mathematical modeling of wave propagation
Refer to Module 2: PINNs – Part I and Part II
v. Solving the 1D time-dependent Boussinesq equation
Refer to Module 2: PINNs – Part I and Part II
V. Geophysics
i. Microseismic hypocenter localization using PINNs
Refer to Module 2: PINNs – Part I and Part II
ii. Diffusion-reaction in porous media
Refer to Module 2: PINNs – Part I and Part II
iii. Estimating sea-surface temperature using multi-fidelity data
Refer to Module 4: Lectures I and II
VI. Heat Transfer
i. Inverse heat transfer problem
Refer to Module 2: PINNs – Part I and Part II
ii. Steady-state non-linear inverse heat conduction problem
Refer to Module 2: PINNs – Part I and Part II
iii. Heat conduction in double-layered structures exposed to an ultra-short pulsed laser
Refer to Module 2: PINNs – Part I and Part II
VII. Materials
i. Inverse problem on modulus identification of a hyperelastic material
Refer to Module 2: PINNs – Part I and Part II
ii. Characterizing a surface-breaking crack using ultrasound data and PINNs
Refer to Module 2: PINNs – Part I and Part II