DEPARTMENT OF SECURITY & CRIME SCIENCES
An Introduction to Neural
Networks
SECU0057 9 March 2023
Learning objectives
• Understand main concepts of neural networks for machine learning
• Training neural networks
• Types of neural networks
• Neural networks in crime sciences
• Neural networks in R
Your lecturer for today...
Sarah Ying Zheng:
• 3rd year PhD student
• Tools to help people detect online scams, e.g. phishing
• Ex-data scientist & AI application consultant (IBM, fintech)
• Credit card fraud detection
• AI in cyberwarfare
• MSc Neuroscience, BSc Psychology

How can we create intelligent
machines?
Geoffrey
Hinton
Neural network > Support Vector Machine (SVM) for image classification
Since the 2000s: more computing power, Graphics
Processing Units (GPUs), storage space, more
data, and open-source programming libraries
How do we learn?
Input
Data: puzzle pieces
Processing
What is this?
What does it mean?
What to do?
Output
Shape of puzzle,
where to place the
puzzle piece
Detect statistical
patterns, using
model algorithm
of choice
Model prediction
Minimise difference
between predicted
vs. actual output
How do neural networks learn?
Input
Data: puzzle pieces
Processing
What is this?
What does it mean?
What to do?
Output
Shape of puzzle,
where to place the
puzzle piece
Digits
Detect statistical
patterns, using
model algorithm
of choice
Minimise difference
between predicted
vs. actual output
Network of
artificial “neurons”
Update connection
weights through
”backpropagation”
Model prediction
What is a neuron?
Input
dendrites
Processing
nucleus
Output
axon &
synapse
Processing: w1*x1 + w2*x2 + ... + wm*xm
Output: defined by activation function
Step function
0 if x < 0
1 if x ≥ 0
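As a minimal illustration, a single artificial neuron with a step activation can be computed in R as follows (the input values, weights, and bias below are made-up numbers for illustration only):

# Hypothetical inputs, weights, and bias for one artificial neuron
x <- c(0.5, -1.2, 0.3)                    # inputs x1..xm
w <- c(0.8, 0.1, -0.4)                    # connection weights w1..wm
b <- 0.2                                  # bias term
z <- sum(w * x) + b                       # processing: weighted sum of inputs
step <- function(z) ifelse(z < 0, 0, 1)   # step activation: 0 if z < 0, 1 if z >= 0
step(z)                                   # neuron output (here: 1)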
How do artificial neurons learn?
Processing: w1*x1 + w2*x2 + ... + wm*xm
Output: defined by activation function
Objective: minimise difference between output and ground
truth by changing weights and biases by “backpropagating”
how much each weight needs to change → gradient descent
Error: defined by a loss function, e.g. mean squared error
Learning rate: how much the weight updates are
incorporated by each neuron
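To make this concrete, here is a rough sketch of one gradient-descent update for a single neuron in R. A logistic activation is used instead of the step function so that a gradient exists; the training example, initial weights, and learning rate are made-up values, and this is not the exact update rule of any particular package:

# One gradient-descent step for a single logistic neuron with squared-error loss
x <- c(0.5, -1.2, 0.3); y <- 1            # one training example (made up)
w <- c(0.1, 0.1, 0.1);  b <- 0            # initial weights and bias
lr <- 0.5                                 # learning rate
sigmoid <- function(z) 1 / (1 + exp(-z))
yhat <- sigmoid(sum(w * x) + b)           # predicted output
loss <- (yhat - y)^2                      # error, measured by squared-error loss
grad_z <- 2 * (yhat - y) * yhat * (1 - yhat)  # backpropagated gradient (chain rule)
w <- w - lr * grad_z * x                  # move each weight against its gradient
b <- b - lr * grad_z                      # same for the bias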
What neural networks look like
Convolutional Neural Networks (CNNs): commonly used for image recognition
Recurrent Neural Networks (RNNs): often used in natural language
processing, e.g. speech recognition
Generative Adversarial Networks (GANs): commonly used to generate ‘deep fakes’
Deep learning: many hidden layers
in neural network (vs. shallow)
More layers != always better
Single-layer perceptron
What neural networks look like
Word2Vec for word embeddings (Week 5) – predicts word similarity from each
word’s context (i.e., surrounding words), using one-hidden-layer architectures
london is the capital of great britain
c1 c2 c3 t c4 c5 c6
t (target word): capital
c1- c6 (context words): london, is, the, of, great, britain
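A small R sketch of how such (target, context) pairs could be generated, assuming a symmetric context window around each word (the window size here is an arbitrary choice for illustration):

# Build (target, context) training pairs from the example sentence
sentence <- c("london", "is", "the", "capital", "of", "great", "britain")
window   <- 3                             # context words taken on each side (arbitrary)
pairs <- do.call(rbind, lapply(seq_along(sentence), function(i) {
  ctx <- setdiff(max(1, i - window):min(length(sentence), i + window), i)
  data.frame(target = sentence[i], context = sentence[ctx])
}))
subset(pairs, target == "capital")        # context: london, is, the, of, great, britain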
What neural networks look like
Word2Vec for word embeddings (Week 5) – pre-trained (unsupervised) on a
large text corpus
From Mikolov et al. (2013)
1. CBOW (Continuous Bag-of-Words) – predicts target
word in the center using surrounding context words:
p(wt | wt-2, wt-1, wt+1, wt+2)
2. Skip-gram – predicts the surrounding words (i.e.,
context) from the target word in the center:
p(wt-2, wt-1, wt+1, wt+2 | wt)
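Using the example sentence from the previous slide, with target word “capital”:
CBOW predicts p(capital | london, is, the, of, great, britain)
Skip-gram predicts p(london, is, the, of, great, britain | capital)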
Neural networks in crime
sciences
Creation and detection of political
fake news
Aid police intelligence: predicting
crime events with a deep neural
network (Furkan et al., 2021)
Intrusion detection
Adverse discrimination due to
biased AI (e.g. Challen et al.,
2019)
Neural networks in R
60 R packages available; see the performance benchmark study
Demo with neuralnet on online credit card fraud data
neuralnet(formula, data, hidden = 1, threshold = 0.01, stepmax = 1e+05, rep = 1,
          startweights = NULL, learningrate.limit = NULL,
          learningrate.factor = list(minus = 0.5, plus = 1.2), learningrate = NULL,
          lifesign = "none", lifesign.step = 1000, algorithm = "rprop+",
          err.fct = "sse", act.fct = "logistic", linear.output = TRUE,
          exclude = NULL, constant.weights = NULL, likelihood = FALSE)
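A minimal sketch of what such a demo could look like, assuming a data frame named fraud_data with a binary fraud outcome and two numeric predictors called amount and n_transactions (all names and the simulated data are hypothetical placeholders, not the actual demo data):

# install.packages("neuralnet")           # if not yet installed
library(neuralnet)

# Simulate a small, hypothetical fraud data set
set.seed(1)
amount         <- rnorm(200, mean = 50, sd = 20)
n_transactions <- rpois(200, lambda = 5)
fraud          <- as.integer(amount / 100 + n_transactions / 10 + rnorm(200, sd = 0.2) > 1)
fraud_data     <- data.frame(fraud, amount, n_transactions)

# One hidden layer with 3 neurons; logistic activation suits a 0/1 outcome
nn <- neuralnet(fraud ~ amount + n_transactions,
                data = fraud_data, hidden = 3,
                act.fct = "logistic", linear.output = FALSE)

plot(nn)                                  # visualise the fitted network
head(predict(nn, fraud_data))             # predicted fraud probabilities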
Recap
• Conceptual understanding of neural networks for machine learning
• Training neural networks
• Neuron as a unit of learning: input, processing, output
• Activation function, loss function, backpropagation, gradient descent
• Weighted connections, learning rate
• Types of neural networks
• Network architectures: perceptron, RNN, CNN, GAN
• Word2Vec: two shallow one-hidden-layer models
• Neural networks in crime sciences
• Neural networks in R