Exercise 3: Some “Classical” Optimization Strategies

Date: Summer Term 2006

The following functions are defined on the interval x ∈ [−36..35]:

g(x) = (x − 10)² / ((x − 10)² + 0.01) − 1

h(x) = x²

f(x) = g(x) + 150 · h(x)

Unit 1: What do these functions look like? Please draw simple sketches.

Unit 2: For displaying graphs of various sorts, UNIX/Linux offers the program gnuplot. gnuplot allows for command-line editing (using the cursor keys) and provides online help (type help). Plot the following functions (one after the other):

sin(x), sin(2*x), and both at the same time. Readjust the range of the x-axis to the interval [0..π] (set xrange ...). Add a coordinate grid and replot the graphs. Finally, browse further commands (help) and apply them.
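One possible session for this unit, as a sketch (all commands shown are standard gnuplot; pi is a built-in constant):

    gnuplot> plot sin(x)
    gnuplot> plot sin(2*x)
    gnuplot> plot sin(x), sin(2*x)
    gnuplot> set xrange [0:pi]
    gnuplot> set grid
    gnuplot> replot
    gnuplot> help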

Unit 3: Render the functions g(x), h(x), and f(x) as defined above.
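A sketch of one way to do this with gnuplot's user-defined functions (** denotes exponentiation), following the definitions above:

    gnuplot> g(x) = (x-10)**2 / ((x-10)**2 + 0.01) - 1
    gnuplot> h(x) = x**2
    gnuplot> f(x) = g(x) + 150*h(x)
    gnuplot> set xrange [-36:35]
    gnuplot> plot g(x)
    gnuplot> plot h(x)
    gnuplot> plot f(x)

Plotting them one after the other is advisable here, since the three functions live on very different vertical scales.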

Unit 4: Assume a 2-dimensional search space with 0 ≤ x_i ≤ 2. The goal is to find the optimum with each x_i being as precise as (x̂_i − x_i^o)² ≤ ε, with x̂_i denoting the exact position of the i-th coordinate and x_i^o denoting the approximated position.

Some rather “classical” optimization procedures are listed below (a small code sketch of all three follows the list):

Monte Carlo (Global Random Search): This method visits “every” point in the search space at random, with equal probability.

Systematic Search: This method systematically visits “all” points in the search space.

Local Random Search: Starting from a randomly selected search point, this procedure visits the neighborhood at random. If the randomly selected neighbor is better, the procedure advances to it; otherwise it discards the random choice.
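A minimal sketch of the three procedures on the 2-dimensional box [0, 2]², in Python. The objective function, the trial budgets, and the step size sigma are assumptions chosen for illustration; the exercise leaves them open.

    import random

    LOW, HIGH, DIM = 0.0, 2.0, 2

    def objective(x):
        # Hypothetical test function (assumption): minimum at (1.5, 1.5).
        return sum((xi - 1.5) ** 2 for xi in x)

    def monte_carlo(trials=1000):
        # Global random search: sample uniformly, remember the best point seen.
        best, best_f = None, float("inf")
        for _ in range(trials):
            x = [random.uniform(LOW, HIGH) for _ in range(DIM)]
            fx = objective(x)
            if fx < best_f:
                best, best_f = x, fx
        return best

    def systematic(step=0.2):
        # Systematic search: visit all points of a regular grid with spacing `step`.
        best, best_f = None, float("inf")
        n = int(round((HIGH - LOW) / step)) + 1
        for i in range(n):
            for j in range(n):
                x = [LOW + i * step, LOW + j * step]
                fx = objective(x)
                if fx < best_f:
                    best, best_f = x, fx
        return best

    def local_random(trials=1000, sigma=0.1):
        # Local random search: try a random neighbor, move only on improvement.
        x = [random.uniform(LOW, HIGH) for _ in range(DIM)]
        fx = objective(x)
        for _ in range(trials):
            y = [min(HIGH, max(LOW, xi + random.gauss(0.0, sigma))) for xi in x]
            fy = objective(y)
            if fy < fx:
                x, fx = y, fy
        return x

Note how the accuracy bound drives the “configuration”: with grid spacing d, the nearest grid point is at most d/2 away in each coordinate, so (d/2)² ≤ ε = 0.1 already holds for any d ≤ 2·√0.1 ≈ 0.63; the step = 0.2 above is deliberately conservative.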

Assume an accuracy of ε = 0.1. How would you “configure” the three search procedures mentioned above? How efficiently do they work, that is, how many trials do they need and how high is the probability of finding the global optimum? How do these procedures behave if the objective function contains one local optimum, which has distance ζ = 1.0 to the global optimum in every coordinate? How do the procedures derive future test points from past ones?

Have fun, Hagen and Ralf.

