Exercise 3: Some “Classical” Optimization Strategies
Date: Summer Term 2006
The following functions are defined on the interval x ∈ [−36..35]:
g(x) = (x − 10)^2 / ((x − 10)^2 + 0.01)
h(x) = x^2
f(x) = g(x) + 150h(x)
Unit 1: What do these functions look like? Please draw simple sketches.
Unit 2: For displaying graphs of various sorts, UNIX/Linux offers the program gnuplot. gnuplot allows for command-line editing (using the cursor keys) and provides online help (type help). Plot the following functions (one after the other): sin(x), sin(2*x), and both at the same time. Readjust the range of the x axis to the interval [0..π] (set xrange ...). Add a coordinate grid and replot the graphs. Finally, browse further commands (help) and apply them.
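One possible session covering the steps above (commands typed at gnuplot's interactive prompt) might look like this:

```gnuplot
plot sin(x)              # first function alone
plot sin(2*x)            # second function alone
plot sin(x), sin(2*x)    # both at the same time
set xrange [0:pi]        # pi is a built-in gnuplot constant
set grid                 # add a coordinate grid
replot                   # redraw with the new range and grid
```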
Unit 3: Render the functions g(x), h(x), and f(x) as described above.
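Assuming the reading g(x) = (x − 10)^2 / ((x − 10)^2 + 0.01) and h(x) = x^2 of the definitions above, one way to render them in gnuplot is:

```gnuplot
g(x) = (x-10)**2 / ((x-10)**2 + 0.01)
h(x) = x**2
f(x) = g(x) + 150*h(x)
set xrange [-36:35]
set grid
plot g(x), h(x), f(x)    # note: f dominates; plot g(x) alone to see its shape
```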
Unit 4: Assume a 2-dimensional search space with 0 ≤ x_i ≤ 2. The goal is to find the optimum with each x_i being as precise as (x̂_i − x°_i)^2 ≤ ε, with x̂_i denoting the exact position of the i-th coordinate and x°_i denoting the approximated position.
Some rather “classical” optimization procedures are:
Monte Carlo (Global Random Search): This method visits “every” point in search space at random with equal probability.
Systematic Search: This method systematically visits “all” points in search space.
Local Random Search: Starting from a randomly selected search point, this procedure visits the neighborhood at random. If the randomly selected point is better, the procedure advances to it; otherwise it discards the random choice.
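As a starting point for your experiments, the three procedures can be sketched as follows. This is a rough sketch, not a solution: the bounds follow the sheet (0 ≤ x_i ≤ 2), but the objective function, the trial counts, and the neighborhood width sigma are assumptions of ours (the sheet's functions are one-dimensional, so a simple 2-D sphere stands in).

```python
import itertools
import random

LOW, HIGH, DIM = 0.0, 2.0, 2  # search space from the sheet: 0 <= x_i <= 2

def objective(x):
    # Assumed stand-in objective with its optimum at (1, 1).
    return sum((xi - 1.0) ** 2 for xi in x)

def monte_carlo(trials, rng=random):
    """Global random search: sample uniformly, remember the best point."""
    best, best_val = None, float("inf")
    for _ in range(trials):
        x = [rng.uniform(LOW, HIGH) for _ in range(DIM)]
        val = objective(x)
        if val < best_val:
            best, best_val = x, val
    return best

def systematic(step):
    """Systematic search: visit every point of a regular grid."""
    n = round((HIGH - LOW) / step)
    axis = [LOW + i * step for i in range(n + 1)]
    return min(itertools.product(axis, repeat=DIM), key=objective)

def local_random_search(trials, sigma=0.1, rng=random):
    """Start at a random point; move to a random neighbor only if better."""
    x = [rng.uniform(LOW, HIGH) for _ in range(DIM)]
    for _ in range(trials):
        # Random neighbor, clipped to the search-space bounds.
        y = [min(HIGH, max(LOW, xi + rng.gauss(0.0, sigma))) for xi in x]
        if objective(y) < objective(x):
            x = y
    return x
```

Note how the three already differ in the unit's last question: Monte Carlo and systematic search ignore all past samples, while local random search derives each new test point from the current one.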
Assume an accuracy of ε = 0.1. How would you “configure” the three search procedures mentioned above? How efficiently do they work, that is, how many trials do they need and how high is the probability of finding the global optimum? How do these procedures behave if the objective function contains one local optimum, which has distance ζ = 1.0 to the global optimum in every coordinate? How do the procedures derive future test points from past ones?
Have fun, Hagen and Ralf.