
(1)

DeepLearning on FPGAs

Introduction to Artificial Neural Networks

Sebastian Buschjäger

Technische Universität Dortmund - Fakultät Informatik - Lehrstuhl 8

November 2, 2016

(2)

Recap: Computer Science Approach

Technical Problem

Mathematical Method

Algorithm

Implementation

Classification ✓

K-NN ✓

Brute force, trees, hashing ✓

System and language ✓

(3)

Recap: Data Mining (1)

Important concepts:

Classification is one data mining task

Training data is used to define and solve the task
A method is a general approach / idea to solve a task
An algorithm is a way to realise a method

A model forms the extracted knowledge from the data
Accuracy measures the model quality given the data

K-NN: Look at the k nearest neighbours of ~x and use the most common label as the prediction

Homework: How good was your prediction?
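The K-NN rule recapped above can be sketched in a few lines; the helper name and toy data below are my own, not part of the lecture:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Predict the label of x as the majority label among its
    k nearest training points (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]                  # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                 # most common label wins

# Toy data: two small clusters around (0, 0) and (5, 5)
X = np.array([[0.0, 0.0], [0.5, 0.2], [5.0, 5.0], [4.8, 5.1]])
y = np.array([0, 0, 1, 1])
```

A query near one cluster inherits that cluster's label, e.g. `knn_predict(X, y, np.array([0.1, 0.1]))` returns 0.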


(5)

The MNIST dataset

Common error rates¹ without pre-processing:

K-NN: 2.83 % - SVM: 1.4 % - CNN: ~0.4 %
Big Note: The dataset is already centered and scaled

¹ See: http://yann.lecun.com/exdb/mnist/
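These numbers can be roughly sanity-checked with a few lines of scikit-learn. Since MNIST itself requires a download, the sketch below uses sklearn's small built-in digits set (8x8 images) as a stand-in, so the resulting accuracy is only comparable in spirit, not identical to the table above:

```python
# K-NN on scikit-learn's built-in digits dataset (a small MNIST stand-in).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
acc = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr).score(X_te, y_te)
```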

(6)

K-NN: Example (1)

[Scatter plots; legend: negative / positive / unknown]

Left: k = 1, all points available. Right: k = 1, 2 points missing.

(7)

K-NN: Example (2)

[Scatter plots; legend: negative / positive / unknown]

Left: k = 1, 8 points missing. Right: k = 1, 12 points missing.

(8)

Feature Engineering and Feature Dimensions

Note: K-NN fails to recognize patterns in incomplete data

Fact 1: The state space grows exponentially with the dimension. Example: X = {1, 2, ..., 10}

For X^1, there are 10 different observations
For X^2, there are 10^2 = 100 different observations
For X^3, there are 10^3 = 1000 different observations ...

Fact 2: Training data is generated by a noisy real-world process

We usually have no influence on the type of training data
We usually cannot interfere with the real-world process

Thus: Training data should be considered incomplete and noisy


(12)

Feature Engineering and Feature Dimensions

Fact: There is no free lunch (Wolpert, 1996)

Every method has its advantages and disadvantages

Most methods are able to perfectly learn a given toy data set
Problems occur with noise, outliers, and generalisation

Conclusion: All methods are equally good or bad
But: Some methods prefer certain representations

Feature Engineering: Finding the right representation for the data

Reduce the dimension? Increase the dimension?
Add additional information? Exploit regularities?
Transform the data completely?


(15)

Feature Engineering: Example

[Left plot: raw data over the axes x1, x2]

Raw data without transformation. A linear model is a bad choice; a parabolic model would be better.

[Right plot: the same data after applying φ]

Data transformed with φ(x1, x2) = (x1, x2 − 0.3·x1²).

Now a linear model fits the problem.
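The effect of this transformation can be checked directly: a point lying on the parabola x2 = 0.3·x1² is mapped onto the horizontal line x2 = 0, which a linear model separates easily. The helper name below is my own:

```python
import numpy as np

def phi(x):
    """The slide's feature map: phi(x1, x2) = (x1, x2 - 0.3 * x1^2).
    Points on the parabola x2 = 0.3 * x1^2 land on the line x2 = 0."""
    x1, x2 = x
    return np.array([x1, x2 - 0.3 * x1**2])

# A point lying exactly on the parabola ...
p = np.array([2.0, 0.3 * 2.0**2])
q = phi(p)   # ... has second coordinate 0 after the transformation
```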

(16)

Feature Engineering: Conclusion

Conclusion: Good features are crucial for good results!

Question: How to get good features?

1 By hand: Domain experts and data miner examine the data and try different features based on common knowledge.

2 Semi-supervised: Data miner examines the data and tries different similarity functions and classes of methods.

3 Unsupervised: Data miner only encodes some assumptions about regularities into the method.

Note 1: Hand-crafted features give us insight about the process
Note 2: Semi-/unsupervised features give us insight about the data

Our focus: Unsupervised feature extraction.


(19)

Data Mining Basics

What is Deep Learning?

(20)

Deep Learning Basics

So... What is Deep Learning?

Well... it's currently one of the big things in AI!

Since 2010: DeepMind learns and plays old Atari games
Since 2012: Google is able to find cats in YouTube videos
December 2014: Near real-time translation in Skype
October 2015: AlphaGo beats the European Go champion
October 2015: Tesla deploys Autopilot in their cars
March 2016: AlphaGo beats the Go world champion
June 2016: Facebook introduces DeepText

. . .

(21)

Deep Learning: Example

(22)

Deep Learning Basics

Deep Learning: is a branch of Machine Learning dealing with (Deep) Artificial Neural Networks (ANN)

High-level feature processing
Fast implementations

ANNs are well known! So what's new about it?

We have more data and more computation power
We have a better understanding of optimization
We use a more engineering-style approach

Our focus now: Artificial Neural Networks


(24)

Artificial Neural Networks: Single Neuron

Simple case: Let ~x ∈ B^d

Biology's view:

[Diagram: inputs feeding into a neuron, which produces an output]

"Fire" if the input signals reach the threshold:

$$f(\vec{x}) = \begin{cases} +1 & \text{if } \sum_{i=1}^{d} x_i \ge b \\ 0 & \text{else} \end{cases}$$

Geometrical view:

[Plot: a separating line in the (x1, x2) plane]

Predict the class depending on which side of the line a point falls (count):

$$f(\vec{x}) = \begin{cases} +1 & \text{if } \sum_{i=1}^{d} x_i \ge b \\ 0 & \text{else} \end{cases}$$


(26)

Artificial Neural Networks: Single Neuron

Note: We basically count the number of positive inputs

1943: McCulloch-Pitts neuron:

Simple linear model with binary input and output
Can model boolean OR with b = 1
Can model boolean AND with b = d
A simple extension also allows boolean NOT

Thus: A network of McCulloch-Pitts neurons can simulate every boolean function (functionally complete)

Remark: That does not help with classification, thus

Rosenblatt 1958: Use weights w_i ∈ R for every input x_i ∈ B
Minsky-Papert 1959: Allow real-valued inputs x_i ∈ R
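The boolean gates above follow directly from the threshold rule; a minimal sketch (helper names are mine) realising OR with b = 1, AND with b = d, and NOT via the "simple extension" of negating the input:

```python
def threshold(x, b):
    """McCulloch-Pitts unit: fire (return 1) iff the binary inputs
    sum to at least the threshold b."""
    return 1 if sum(x) >= b else 0

def OR(x):
    return threshold(x, 1)        # any single 1 reaches the threshold

def AND(x):
    return threshold(x, len(x))   # only all-ones reaches b = d

def NOT(x):
    return threshold([1 - x], 1)  # negate the input, then threshold
```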


(29)

Artificial Neural Networks: Perceptron

A perceptron is a linear classifier f: R^d → {0, 1} with

$$f_b(\vec{x}) = \begin{cases} +1 & \text{if } \sum_{i=1}^{d} w_i \cdot x_i \ge b \\ 0 & \text{else} \end{cases}$$

Linear function in d = 2: y = mx + b̃
Perceptron: $w_1 x_1 + w_2 x_2 \ge b \Leftrightarrow x_2 = \frac{b}{w_2} - \frac{w_1}{w_2} x_1$

Obviously: A perceptron is a hyperplane in d dimensions
Note: $\vec{w} = (w_1, \ldots, w_d, b)^T$ are the parameters of the perceptron
Notation: Given ~x, we append a 1 to it: $\vec{x} = (x_1, \ldots, x_d, 1)^T$

Then: $$f_b(\vec{x}) = \begin{cases} +1 & \text{if } \vec{x} \cdot \vec{w}^T \ge 0 \\ 0 & \text{else} \end{cases}$$


(32)

ANN: Perceptron Learning

Note: A perceptron assumes that the data is linearly separable

Big Note: This is an assumption and not necessarily true!
But: In the case of linear separability, there are many "good" ~w

Note: We are happy with one separating vector ~w


(36)

ANN: Perceptron Learning

Question: How do we get the weights ~w?

Observation: We look at $\vec{x} \cdot \vec{w}^T \ge 0$

if the output was 0 but should have been 1, increment the weights
if the output was 1 but should have been 0, decrement the weights
if the output was correct, don't change the weights

1: $\vec{w} = \mathrm{rand}(1, \ldots, d+1)$
2: while ERROR do
3:   for $(\vec{x}_i, y_i) \in \mathcal{D}$ do
4:     $\vec{w} = \vec{w} + \alpha \cdot \vec{x}_i \cdot (y_i - f_b(\vec{x}_i))$
5:   end for
6: end while

Note: $\alpha \in \mathbb{R}_{>0}$ is a stepsize / learning rate
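A minimal, runnable sketch of this learning loop (function name, seed, and toy data are my own). Inputs are already augmented with a trailing 1 and labels are in {0, 1}; the example learns boolean OR, which is linearly separable:

```python
import numpy as np

def train_perceptron(X, y, alpha=0.1, max_epochs=100):
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])             # line 1: random initial weights
    for _ in range(max_epochs):                 # line 2: repeat while errors remain
        errors = 0
        for xi, yi in zip(X, y):                # line 3: sweep the training set
            pred = 1 if xi @ w >= 0 else 0
            if pred != yi:
                w += alpha * xi * (yi - pred)   # line 4: the update rule
                errors += 1
        if errors == 0:                         # converged: D is separated
            break
    return w

# Boolean OR with augmented inputs (x1, x2, 1)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0, 1, 1, 1])
w = train_perceptron(X, y)
```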


(40)

ANN: Perceptron Learning

Update rule: $\vec{w}_{new} = \vec{w}_{old} + \alpha \cdot \vec{x}_i \cdot (y_i - f_b^{old}(\vec{x}_i))$

Wrong classification:

Case 1: $y_i - f_b^{old}(\vec{x}_i) = 1 \Rightarrow y_i = 1, f_b^{old}(\vec{x}_i) = 0$

$$\vec{x}_i \cdot (\vec{w}_{new})^T = \vec{x}_i \cdot (\vec{w}_{old} + \alpha \cdot 1 \cdot \vec{x}_i)^T = \vec{x}_i \cdot \vec{w}_{old}^T + \alpha \cdot \vec{x}_i \cdot \vec{x}_i^T = \vec{x}_i \cdot \vec{w}_{old}^T + \alpha \cdot ||\vec{x}_i||^2$$

→ ~w is incremented and the classification is moved towards 1 ✓

Case 2: $y_i - f_b^{old}(\vec{x}_i) = -1 \Rightarrow y_i = 0, f_b^{old}(\vec{x}_i) = 1$

$$\vec{x}_i \cdot (\vec{w}_{new})^T = \vec{x}_i \cdot (\vec{w}_{old} - \alpha \cdot 1 \cdot \vec{x}_i)^T = \vec{x}_i \cdot \vec{w}_{old}^T - \alpha \cdot \vec{x}_i \cdot \vec{x}_i^T = \vec{x}_i \cdot \vec{w}_{old}^T - \alpha \cdot ||\vec{x}_i||^2$$

→ ~w is decremented and the classification is moved towards 0 ✓
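The two cases can be checked numerically. The sketch below (values are my own) reproduces Case 1: after a mistake where the output was 0 but should have been 1, the score ~x · ~w grows by exactly α·||~x||²:

```python
import numpy as np

alpha = 0.5
x = np.array([1.0, 2.0, 1.0])        # augmented example, true label y = 1
w_old = np.array([-1.0, 0.0, -1.0])  # currently predicts 0 (score < 0)

# Update rule with y = 1 and f_old(x) = 0:
w_new = w_old + alpha * x * (1 - 0)

score_old = x @ w_old
score_new = x @ w_new                # grows by alpha * ||x||^2
```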


(49)

ANN: Perceptron Learning

Update rule: $\vec{w}_{new} = \vec{w}_{old} + \alpha \cdot \vec{x}_i \cdot (y_i - f_b^{old}(\vec{x}_i))$

Correct classification: $y_i - f_b(\vec{x}_i) = 0$

$\vec{w}_{new} = \vec{w}_{old}$, thus ~w is unchanged ✓

Rosenblatt 1958 showed:

The algorithm converges if $\mathcal{D}$ is linearly separable
The algorithm may have exponential runtime

Variation: Batch processing - update ~w after testing all examples

$$\vec{w}_{new} = \vec{w}_{old} + \alpha \sum_{(\vec{x}_i, y_i) \in \mathcal{D}_{wrong}} \vec{x}_i \cdot (y_i - f_b^{old}(\vec{x}_i))$$

Usually: Faster convergence, but more memory needed
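The batch variant can be sketched as one vectorized step: collect the update terms of all misclassified examples and apply their sum at once (the function name and toy values below are mine):

```python
import numpy as np

def batch_update(w, X, y, alpha=0.1):
    """One batch step: sum x_i * (y_i - f_old(x_i)) over D_wrong."""
    preds = (X @ w >= 0).astype(int)
    wrong = preds != y                          # D_wrong
    return w + alpha * ((y - preds)[wrong] @ X[wrong])

# Two examples, both currently misclassified as 0 although y = 1
w_new = batch_update(np.array([-1.0, -1.0]),
                     np.array([[1.0, 1.0], [2.0, 1.0]]),
                     np.array([1, 1]))
```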


(53)

ANN: The XOR Problem

Question: What happens if the data is not linearly separable?

[Left plot: data linearly separable, but noisy. Right plot: data not linearly separable]

Answer: The algorithm will never converge, thus:

Use a fixed number of iterations
Introduce some acceptable error margin


(56)

ANN: Multilayer perceptrons

Recap: A (hand-crafted) feature transformation is always possible
But: What about an automatic way?

Idea: If all you have is a perceptron, use more perceptrons!

Biology's view:

[Network diagram: inputs x1, x2, ..., xd feeding an input layer, a hidden layer, and an output layer]

Geometric view:

Now the output depends on the layers: $f(\vec{x}) = f_K(\ldots f_2(f_1(\vec{x})))$
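This layered composition can be sketched with threshold units. The weights below are my own choice; they make a two-layer network compute XOR, which, as the next slides discuss, a single perceptron cannot represent:

```python
import numpy as np

def layer(W, x):
    """One layer of threshold units on the augmented input (x, 1)."""
    return (W @ np.append(x, 1.0) >= 0).astype(float)

def mlp(x, weights):
    """The composition f(x) = f_K(... f_2(f_1(x)))."""
    for W in weights:                 # apply f_1 first, f_K last
        x = layer(W, x)
    return x

W1 = np.array([[ 1.0,  1.0, -0.5],   # hidden unit 1: x1 OR x2
               [-1.0, -1.0,  1.5]])  # hidden unit 2: NOT (x1 AND x2)
W2 = np.array([[ 1.0,  1.0, -1.5]])  # output unit: AND of the hidden units
```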


(59)

ANN: Multilayer perceptrons

Recap: (Hand crafted) Feature transformation always possible But: What about an automatic way?

Idea: If all you have is a perceptron, use more perceptrons!

Biology’s view:

x1

x2

...

xd

input layer hidden layer output layer

Geometric view:

Now outputs depends on layers: fb(~x) =fK(. . . f2(f1(~x)))

(60)

ANN: Multilayer perceptrons

Observation:

1 perceptron: separates the space into two sets
Many perceptrons in 1 layer: identify convex sets
Many perceptrons in 2 layers: identify arbitrary sets

Hornik et al. 1989: The MLP is a universal approximator
→ Given enough hidden units, an MLP is able to represent any "well-conditioned" function perfectly

Barron 1993: The worst case needs an exponential number of hidden units

But: That does not necessarily mean that we will find it!

Usually we cannot afford exponentially large networks
Learning ~w might fail due to data or numerical reasons


(64)

MLP: Learning

Question: So how do we learn the weights of our MLP?

Unfortunately: We need some more background

So far: We formulated an optimization algorithm to find perceptron weights that minimize the classification error

This is a common approach in Data Mining:

Specify a model family
Specify an optimization procedure
Specify a cost / loss function

Note: Loss function ≠ Accuracy

→ The loss function is minimized during learning
→ Accuracy is used to measure the model's quality after learning


(68)

Data Mining: Loss function (1)

Question: Given a model $\hat{f}$ and some data $\mathcal{D}$, how good is $\hat{f}$?

Fact: There are many different ways to measure the quality of $\hat{f}$

0-1 loss:

$$\ell(\mathcal{D}, \hat{\theta}) = \sum_{i=1}^{N} |y_i - f_{\hat{\theta}}(\vec{x}_i)|$$

Note: We implicitly used the 0-1 loss for perceptron learning

Root-Mean-Squared Error (RMSE):

$$\ell(\mathcal{D}, \hat{\theta}) = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( y_i - f_{\hat{\theta}}(\vec{x}_i) \right)^2}$$

Note: Well known, has been around for ∼200 years
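Both losses are one-liners; a sketch with my own function names, where `pred` stands for the model outputs $f_{\hat\theta}(\vec{x}_i)$ over the whole dataset:

```python
import numpy as np

def zero_one_loss(y, pred):
    """For binary labels/predictions: the number of misclassifications."""
    return np.sum(np.abs(y - pred))

def rmse(y, pred):
    """Root-mean-squared error between labels and predictions."""
    return np.sqrt(np.mean((y - pred) ** 2))
```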


(71)

Data Mining: Loss function (2)

Let: $Y = \{0, +1\}$ and $f_{\hat{\theta}}(\vec{x}_i) \in [0, 1]$

Cross-entropy / log-likelihood:

$$\ell(\mathcal{D}, \hat{\theta}) = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \ln f_{\hat{\theta}}(\vec{x}_i) + (1 - y_i) \ln\left(1 - f_{\hat{\theta}}(\vec{x}_i)\right) \right]$$

Observation 1: The values inside the logarithms lie in [0, 1], so the logarithms are negative
Therefore: The minus sign makes the loss suitable for minimization

Statistical interpretation: Given two distributions p and q:
how much entropy (≈ chaos) is present in p?
how similar are p and q to each other?

Usually: Faster learning convergence than RMSE


(74)

Data Mining: Optimization

Question: Given a loss $\ell$ and some data $\mathcal{D}$, how do we find the optimal $\theta$?

Mathematically:

$$\hat{\theta} = \arg\min_{\theta} \ell(\mathcal{D}, \theta)$$

Gradient descent: Follow the steepest descent of $\ell$ with stepsize $\alpha$

→ use the 1st derivative $\nabla_{\theta} \ell(\mathcal{D}, \theta) = \left( \frac{\partial \ell(\mathcal{D}, \theta)}{\partial \theta_1}, \ldots, \frac{\partial \ell(\mathcal{D}, \theta)}{\partial \theta_d} \right)^T$
→ make a step in the direction of $-\nabla_{\theta} \ell(\mathcal{D}, \theta)$ with stepsize $\alpha \in \mathbb{R}_{>0}$

1: $\hat{\theta} = \mathrm{rand}(1, \ldots, d)$
2: while NOT STOP do          (e.g. 100 iterations, or a minimum change in $\theta$)
3:   $\hat{\theta} = \hat{\theta} - \alpha \cdot \nabla_{\theta} \ell(\mathcal{D}, \hat{\theta})$
4: end while

Note: We implicitly used $\nabla_{\theta} \ell(\mathcal{D}, \hat{\theta}) = -\vec{x}_i \cdot (y_i - \hat{f}(\vec{x}_i))$
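The loop above can be sketched in a few lines. The example below (names and the toy loss are mine) minimizes $\ell(\theta) = ||\theta||^2$, whose gradient is $2\theta$ and whose minimum is at the origin, using a fixed iteration count as the stop rule:

```python
import numpy as np

def gradient_descent(grad, theta0, alpha=0.1, steps=100):
    """Repeatedly step against the gradient with stepsize alpha."""
    theta = theta0.copy()
    for _ in range(steps):           # stop rule: fixed number of iterations
        theta = theta - alpha * grad(theta)
    return theta

# Toy loss l(theta) = ||theta||^2 with gradient 2 * theta
theta = gradient_descent(lambda t: 2 * t, np.array([3.0, -2.0]))
```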


(79)

Data Mining: Optimization

Question: Given loss `, some dataD, how to find optimal θ?

Mathematically:

θb= arg min

θ

`(D, θ)

Gradient descent: Follow steepest descent of` with stepsizeα

→use 1st derivative∇θ`(D, θ) = (∂`(D,b∂θ θ)

1 , . . . ,∂`(D,b∂θ θ)

d )T

→make a step in direction of ∇θ`(D, θ) with stepsize α∈R>0

1: θb=rand(1, . . . , d)

2: while NOT STOPdo

3: θb=θb−α· ∇θ`(D,θ)b

4: end while

e.g. 100iterations

e.g. minimum change inθ

Note: We implicitly used ∇θ`(D,θ) =b −~xi·(yi−fb(~xi))

(80)

Data Mining: Optimization

Question: Given loss `, some dataD, how to find optimal θ?

Mathematically:

θb= arg min

θ

`(D, θ)

Gradient descent: Follow steepest descent of` with stepsizeα

→use 1st derivative∇θ`(D, θ) = (∂`(D,b∂θ θ)

1 , . . . ,∂`(D,b∂θ θ)

d )T

→make a step in direction of ∇θ`(D, θ) with stepsize α∈R>0

1: θb=rand(1, . . . , d)

2: while NOT STOPdo

3: θb=θb−α· ∇θ`(D,θ)b

4: end while

e.g. 100iterations

e.g. minimum change inθ

Note: We implicitly used ∇θ`(D,θ) =b −~xi·(yi−fb(~xi))

(81)

Summary

Important concepts:

Feature Engineering is key to solving Data Mining tasks
Deep Learning combines learning and Feature Engineering
A perceptron is a simple linear model for classification
A multilayer perceptron combines multiple perceptrons
For parameter optimization we define a loss function
For parameter optimization we use gradient descent
The learning rule performs the actual optimization

(82)

Homework

Homework until next meeting: Implement perceptron learning

Test your implementation on the MNIST dataset
MNIST has 10 classes, so you'll need 10 perceptrons
Train one perceptron per class: the corresponding perceptron gets label 1 and the remaining perceptrons label 0
Check the predictions of all perceptrons: predict the number of the perceptron with a positive prediction
If multiple perceptrons predict 1, use the one with the highest prediction value

Note 1: We will later use C, so please use C or a C-like language
Note 2: Use the smaller split for development and the complete data set for testing → What's your accuracy?
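The one-vs-rest prediction step described above can be sketched as follows. This assumes d = 785 features per image (784 pixels plus one bias entry), which is a common MNIST setup but not fixed by the slides; the function name and the ten pre-trained weight vectors are hypothetical:

```c
/* One-vs-rest prediction: each of the 10 perceptrons scores the
 * input with its linear activation theta_c . x; predict the class
 * of the perceptron with the largest score. Comparing raw scores
 * also resolves ties when several perceptrons output 1. */
int predict_digit(double theta[10][785], const double *x, int d) {
    int best = 0;
    double best_score = -1e300;
    for (int c = 0; c < 10; c++) {
        double s = 0.0;
        for (int j = 0; j < d; j++) s += theta[c][j] * x[j];
        if (s > best_score) { best_score = s; best = c; }
    }
    return best;
}
```

Loading MNIST and training the ten weight vectors (one perceptron per class, as described above) is left out here; only the argmax over the per-class activations is shown.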
