
Exercise 18: Approximation with Neural Networks

Summer Term 2019

Approximation is another typical application area in which neural networks can yield good results. Here, a function f : R^n → R^m maps an n-dimensional input domain onto an m-dimensional output domain. First, the function is defined by a set of sample points; a neural network (or any other tool) should then learn to approximate the function values in between reasonably well.

Review: Discuss the following questions:

1. What is an appropriate stopping criterion for this task?

2. How many connections w_ij does the network need?
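As a worked example for question 2 (the fully connected 2-h-1 architecture is our assumption, not prescribed by the sheet): a feed-forward network with n inputs, h hidden units, and m outputs has n·h + h·m connections w_ij, plus h + m bias weights if biases are used. For n = 2 and m = 1 this gives 3h connections and 4h + 1 trainable parameters in total.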

To Do: In this exercise, you should implement a neural network that has to learn the simple two-dimensional function f(x, y) = cos(x) + cos(y), i.e., f : R^2 → R, in the range x, y ∈ [−π, π].
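For reference, a minimal sketch of the target function (in Python with NumPy; the name target is ours, purely illustrative):

    import numpy as np

    def target(x, y):
        # Target function f(x, y) = cos(x) + cos(y) on [-pi, pi] x [-pi, pi]
        return np.cos(x) + np.cos(y)

    print(target(0.0, 0.0))      # 2.0: maximum, at the center of the domain
    print(target(np.pi, np.pi))  # -2.0: minimum, at the corners

Note that the function values span [−2, 2]; this matters for question 2 below.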

Questions:

1. How many input and output units do you need?

2. What is the output range of a neuron if you use the regular logistic transfer function? (See the sketch after these questions.)

3. How/where can you modify the number of network parameters?
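To make question 2 concrete: the regular logistic function σ(a) = 1 / (1 + e^(−a)) only produces outputs in the open interval (0, 1), while f(x, y) = cos(x) + cos(y) takes values in [−2, 2]. One common remedy, sketched below (NumPy assumed; the linear rescaling is our choice, not prescribed by the sheet), is to rescale the targets into [0, 1] for training and map the network output back afterwards:

    import numpy as np

    def logistic(a):
        # Regular logistic transfer function: outputs always lie in (0, 1)
        return 1.0 / (1.0 + np.exp(-a))

    def scale_target(f):
        # Map function values from [-2, 2] linearly into [0, 1] for training
        return (f + 2.0) / 4.0

    def unscale_output(o):
        # Inverse mapping: network outputs back to function values in [-2, 2]
        return 4.0 * o - 2.0

    print(logistic(np.array([-50.0, 0.0, 50.0])))  # approx. [0, 0.5, 1]
    print(scale_target(-2.0), scale_target(2.0))   # 0.0 1.0

An alternative is a linear (identity) transfer function in the output unit, which avoids the rescaling entirely.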

Tasks: Reuse the simple backpropagation network that you implemented in the previous exercise. Strip off the encoder stuff and loosely follow these steps:

1. Introduce two parameters l_pats and t_pats that specify the number of learning and test patterns per input dimension. How many patterns do you need in total? (A pattern-generation sketch follows this list.)

2. Introduce another parameter that specifies whether the training and test patterns should be generated randomly or systematically.

3. Complete the program and print both the learning and the test error.

Hint: The documentation shows how you can call gnuplot directly from your own program.

4. Try to learn and generalize the given function reasonably well. In so doing, vary the learning rate η, the momentum α, the number of network parameters, the number of training patterns, the initialization mode, and the number of learning cycles. What can you observe?
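A sketch of the pattern generation for steps 1 and 2 (Python with NumPy assumed; l_pats, t_pats, and the flag systematic mirror the parameters named above but are otherwise our own illustrative names). With two input dimensions and l_pats points per dimension, the systematic grid yields l_pats · l_pats learning patterns, and likewise t_pats · t_pats test patterns:

    import numpy as np

    def make_patterns(n_per_dim, systematic=True, lo=-np.pi, hi=np.pi):
        # Generate (input, target) pairs for f(x, y) = cos(x) + cos(y).
        # systematic=True : regular n_per_dim x n_per_dim grid over the domain
        # systematic=False: the same number of uniformly random points
        if systematic:
            axis = np.linspace(lo, hi, n_per_dim)
            xx, yy = np.meshgrid(axis, axis)
            inputs = np.column_stack([xx.ravel(), yy.ravel()])
        else:
            inputs = np.random.uniform(lo, hi, size=(n_per_dim**2, 2))
        targets = np.cos(inputs[:, 0]) + np.cos(inputs[:, 1])
        return inputs, targets

    l_pats, t_pats = 10, 20
    train_in, train_tgt = make_patterns(l_pats, systematic=True)
    test_in, test_tgt = make_patterns(t_pats, systematic=False)
    print(train_in.shape, test_in.shape)  # (100, 2) (400, 2)

    # For the gnuplot hint: write "x y f" triples that gnuplot can render
    # directly, e.g. with:  splot 'train.dat' with points
    np.savetxt("train.dat", np.column_stack([train_in, train_tgt]))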

Have fun, Theo and Ralf.
