Trịnh Tấn Đạt
Faculty of Information Technology – Saigon University
Contents
Introduction
Perceptron
Neural Network
Backpropagation Algorithm
Introduction
❖ What are artificial neural networks?
A neuron receives a signal, processes it, and
propagates the signal (or not)
The brain comprises around 100 billion
neurons, each connected to ~10k other neurons:
about 10^15 synaptic connections
ANNs are a simplistic imitation of the brain,
composed of a dense net of simple structures
Origins: algorithms that try to mimic the brain
Very widely used in the 80s and early 90s; popularity
diminished in the late 90s
Recent resurgence: state-of-the-art technique for
many applications
Comparison of computing power
Neural networks are designed to be massively parallel
Thanks to this parallelism, the brain is effectively a billion times faster than a serial computer
Applications of neural networks
Medical Imaging
Fake Videos
Conceptual mathematical model
Receives input from sources
Computes weighted sum
Passes through an activation function
Sends the signal to m succeeding neurons
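The four steps above can be sketched as a minimal Python function (the variable names and example weights are illustrative, not from the slides):

```python
import math

def neuron(x, w, activation):
    """Single artificial neuron: weighted sum of inputs, then an activation."""
    z = sum(wi * xi for wi, xi in zip(w, x))  # weighted sum
    return activation(z)                       # pass through activation function

# Example with a sigmoid activation
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
out = neuron([1.0, 2.0], [0.5, -0.25], sigmoid)  # z = 0.5 - 0.5 = 0
```

Here the weighted sum is exactly 0, so the sigmoid returns 0.5.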
Artificial Neural Network
Organized into layers of neurons
Typically 3 or more: input, hidden and output
Neural networks are made up of nodes or units, connected by links
Each link has an associated weight, and each unit applies an activation function
Perceptron
Simplified (binary) artificial neuron
Perceptron
Simplified (binary) artificial neuron with weights
Perceptron
Simplified (binary) artificial neuron; no weights
Perceptron
Simplified (binary) artificial neuron; add weights
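The perceptron fires when the weighted sum of its inputs exceeds a threshold; a minimal sketch (the weights and threshold below are hand-picked for illustration):

```python
def perceptron(x, w, threshold=0.0):
    """Binary perceptron: output 1 if the weighted sum exceeds the threshold."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if z > threshold else 0

# An AND-like unit: both inputs must be on for the weighted sum to pass 1.0
print(perceptron([1, 1], [0.6, 0.6], threshold=1.0))  # 1.2 > 1.0 -> fires (1)
print(perceptron([1, 0], [0.6, 0.6], threshold=1.0))  # 0.6 <= 1.0 -> 0
```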
Introducing Bias
The perceptron also needs to take a bias into account
o Bias is just like the intercept added in a linear equation.
o It is an additional parameter in the neural network, used to
adjust the output along with the weighted sum of the inputs to the
neuron.
o Bias acts like a constant that helps the model fit the given data
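With a bias term, the threshold can be folded into the weighted sum itself, since w·x > t is the same condition as w·x − t > 0 (the weights and bias below are illustrative):

```python
def perceptron_with_bias(x, w, b):
    """Fires when w.x + b > 0; the bias b plays the role of a negated threshold."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# An AND-like unit: threshold 1.0 expressed as bias -1.0
print(perceptron_with_bias([1, 1], [0.6, 0.6], -1.0))  # 1.2 - 1.0 > 0 -> 1
print(perceptron_with_bias([0, 1], [0.6, 0.6], -1.0))  # 0.6 - 1.0 <= 0 -> 0
```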
Sigmoid Neuron
The more common artificial neuron
Sigmoid Neuron
In effect, a bias value allows you to
shift the activation function to the left
or right, which may be critical for
successful learning.
Consider this 1-input, 1-output
network that has no bias:
Here is the function that this network
computes, for various values of w0:
Sigmoid Neuron
If we add a bias to that network, like
so:
Setting the bias weight w1 to -5 shifts the curve to the right, which gives a
network that outputs values close to 0 when x is 2.
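The shift can be checked numerically, using w0 = 1 and the bias weight w1 = -5 from the discussion above (the bias input is fixed at 1):

```python
import math

def sigmoid_neuron(x, w0, w1):
    """Sigmoid neuron with one input x and a bias input fixed at 1."""
    return 1.0 / (1.0 + math.exp(-(w0 * x + w1)))

print(sigmoid_neuron(2.0, 1.0, -5.0))  # sigmoid(-3) ~ 0.047: close to 0
print(sigmoid_neuron(2.0, 1.0, 0.0))   # sigmoid(2)  ~ 0.881: without the shift
```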
Simplified Two-Layer ANN
One hidden layer
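A forward pass through a network with one hidden layer can be sketched with NumPy (the layer sizes and random weights are illustrative, not from the slides):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Two-layer ANN: input -> hidden layer (sigmoid) -> output layer (sigmoid)."""
    h = sigmoid(W1 @ x + b1)   # hidden-layer activations
    y = sigmoid(W2 @ h + b2)   # output-layer activations
    return y

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])                        # 2 inputs
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)    # 3 hidden units
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)    # 1 output unit
y = forward(x, W1, b1, W2, b2)                   # a value in (0, 1)
```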
Optimization Primer
Cost function
Calculate its derivative
Gradient Descent
Gradient Descent Optimization
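Gradient descent repeatedly steps against the derivative of the cost: θ ← θ − η · J′(θ). A sketch on a simple quadratic cost (the cost, learning rate, and starting point are illustrative):

```python
def gradient_descent(grad, theta, lr=0.1, steps=100):
    """Repeatedly step in the direction opposite the gradient of the cost."""
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

# Cost J(theta) = (theta - 3)^2 has gradient 2*(theta - 3); minimum at theta = 3
theta_min = gradient_descent(lambda t: 2.0 * (t - 3.0), theta=0.0)
print(theta_min)  # converges toward 3.0
```

Each step multiplies the distance to the minimum by (1 − 2·lr), so with lr = 0.1 the iterate contracts toward 3 geometrically.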