(1)

44. Monte-Carlo Tree Search: Advanced Topics

Malte Helmert

University of Basel

May 19, 2021

(2)

Board Games: Overview

chapter overview:

40. Introduction and State of the Art
41. Minimax Search and Evaluation Functions
42. Alpha-Beta Search
43. Monte-Carlo Tree Search: Introduction
44. Monte-Carlo Tree Search: Advanced Topics
45. AlphaGo and Outlook

(3)

Optimality of MCTS

(4)

Reminder: Monte-Carlo Tree Search

as long as time allows, perform iterations:
selection: traverse tree
expansion: grow tree
simulation: play game to final position
backpropagation: update utility estimates
finally, execute the move with the highest utility estimate
(a minimal code sketch of this loop is given below)
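To make the four phases concrete, here is a minimal Python sketch of the iteration loop (an illustration, not the lecture's pseudocode). It assumes a hypothetical game interface with legal_moves(pos), result(pos, move), is_terminal(pos) and utility(pos), uses a uniformly random default policy, and leaves the tree policy as a parameter that the following slides instantiate.

```python
import random
import time

class Node:
    def __init__(self, position, game, parent=None, move=None):
        self.position, self.parent, self.move = position, parent, move
        self.children = []
        self.untried_moves = ([] if game.is_terminal(position)
                              else list(game.legal_moves(position)))
        self.visits = 0
        self.utility_estimate = 0.0

def mcts(root_position, game, tree_policy, time_limit=1.0):
    root = Node(root_position, game)
    deadline = time.time() + time_limit
    while time.time() < deadline:                      # as long as time allows
        node = root
        # selection: traverse the tree along the tree policy
        while not node.untried_moves and node.children:
            node = tree_policy(node)
        # expansion: grow the tree by one new node
        if node.untried_moves:
            move = node.untried_moves.pop()
            child = Node(game.result(node.position, move), game, node, move)
            node.children.append(child)
            node = child
        # simulation: play the game to a final position with the default policy
        position = node.position
        while not game.is_terminal(position):
            position = game.result(position,
                                   random.choice(game.legal_moves(position)))
        utility = game.utility(position)
        # backpropagation: update utility estimates on the path back to the root
        while node is not None:
            node.visits += 1
            node.utility_estimate += (utility - node.utility_estimate) / node.visits
            node = node.parent
    # execute the move with the highest utility estimate at the root
    return max(root.children, key=lambda c: c.utility_estimate).move
```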

(5)

Optimality

a complete “minimax tree” computes optimal utility values u

(figure: example game tree with leaf utilities 2, 3.5, 10 and 1; the MIN nodes have values 2 and 1, and the MAX root has the optimal utility 2)

(6)

Asymptotic Optimality

Asymptotic Optimality

An MCTS algorithm is asymptotically optimal if û_k(n) converges to the optimal utility u(n) for all n ∈ succ(n0) as k → ∞ (where û_k is the utility estimate after k iterations).

(7)

Asymptotic Optimality

A tree policy is asymptotically optimal if

it explores forever:
every position is expanded eventually and visited infinitely often (given that the game tree is finite), so after a finite number of iterations only true utility values are used in backups

and it is greedy in the limit:
the probability that an optimal move is selected converges to 1 in the limit, so backups based on iterations where only an optimal policy is followed dominate suboptimal backups

(8)

Tree Policy

(9)

Objective

tree policies have two contradictory objectives:

explore parts of the game tree that have not been investigated thoroughly

exploit knowledge about good moves to focus the search on promising areas

central challenge: balance exploration and exploitation

(10)

ε-greedy: Idea

tree policy with constant parameter ε

with probability 1 − ε, pick a greedy move (i.e., one that leads to a successor node with the best utility estimate); otherwise, pick a non-greedy successor uniformly at random (a small selection-function sketch is given below)
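A minimal sketch of this selection rule, assuming the Node class from the MCTS sketch above; the maximizing flag says whether the current node is a MAX node:

```python
import random

def epsilon_greedy(node, epsilon=0.2, maximizing=True):
    # with probability 1 - epsilon, pick a successor with the best utility
    # estimate (a greedy move); otherwise pick one of the non-greedy
    # successors uniformly at random
    pick = max if maximizing else min
    best = pick(node.children, key=lambda c: c.utility_estimate)
    others = [c for c in node.children if c is not best]
    if not others or random.random() > epsilon:
        return best
    return random.choice(others)
```

A policy like this can be passed as the tree_policy argument of the mcts sketch above.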

(11)

ε-greedy: Example

example with ε = 0.2 and successor utility estimates 3, 5 and 0:

P(n1) = 0.1, P(n2) = 0.8, P(n3) = 0.1

(12)

ε-greedy: Asymptotic Optimality

Asymptotic Optimality of ε-greedy:
explores forever
not greedy in the limit
⇒ not asymptotically optimal

(example with ε = 0.2 in the game tree from above with leaf utilities 2, 3.5, 10 and 1: the utility estimates converge to 2.3 and 2.8 at the MIN nodes and to 2.7 at the root instead of the optimal value 2)

(13)

ε-greedy: Asymptotic Optimality

Asymptotic Optimality of ε-greedy:
explores forever
not greedy in the limit
⇒ not asymptotically optimal

asymptotically optimal variants:
use a decaying ε, e.g. ε = 1/k (see the sketch below)
use minimax backups
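One way to realize the decaying-ε variant, as a thin wrapper around the epsilon_greedy sketch above (k is the current iteration number; a hypothetical helper, not part of the slides):

```python
def decaying_epsilon_greedy(node, k, maximizing=True):
    # epsilon decays with the iteration number, e.g. epsilon_k = 1/k:
    # still explores forever, but becomes greedy in the limit
    return epsilon_greedy(node, epsilon=1.0 / k, maximizing=maximizing)
```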

(14)

ε-greedy: Weakness

Problem:

when ε-greedy explores, all non-greedy moves are treated equally

(example: successor utility estimates 50, 49, 0, …, 0 with ℓ zero-valued successors; for ε = 0.2 and ℓ = 9, P(n1) = 0.8 and P(n2) = 0.02, so the almost-greedy move n2 is selected as rarely as the clearly inferior ones)

(15)

Softmax: Idea

tree policy with constant parameter τ > 0

select moves with a frequency that directly relates to their utility estimate

Boltzmann exploration selects moves proportionally to P(n) ∝ e^(û(n)/τ) for MAX nodes (P(n) ∝ e^(−û(n)/τ) for MIN nodes); a small selection-function sketch is given below
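A minimal sketch of Boltzmann exploration over the successors of a node, again assuming the Node class from the MCTS sketch above; subtracting the maximum exponent is only for numerical stability and does not change the distribution:

```python
import math
import random

def boltzmann(node, tau=10.0, maximizing=True):
    # select a successor with probability proportional to exp(u_hat / tau)
    # at MAX nodes and exp(-u_hat / tau) at MIN nodes
    sign = 1.0 if maximizing else -1.0
    exponents = [sign * c.utility_estimate / tau for c in node.children]
    shift = max(exponents)
    weights = [math.exp(e - shift) for e in exponents]
    return random.choices(node.children, weights=weights, k=1)[0]
```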

(16)

Softmax: Example

example: successor utility estimates 50, 49, 0, …, 0 with ℓ zero-valued successors; for τ = 10 and ℓ = 9, P(n1) ≈ 0.51 and P(n2) ≈ 0.46

(17)


Boltzmann Exploration: Asymptotic Optimality

Asymptotic Optimality of Boltzmann Exploration:
explores forever
not greedy in the limit (the selection probabilities converge to positive constants)
⇒ not asymptotically optimal

asymptotically optimal variants:
use a decaying τ
use minimax backups

careful: τ must not decay faster than logarithmically (i.e., τ ≥ const / log k) to keep exploring infinitely

(18)

Boltzmann Exploration: Weakness

(figure: two distributions over the sampled utilities of moves m1, m2 and m3; scenario 1 has high variance for m3, scenario 2 has low variance for m3)

Boltzmann exploration only considers the mean of the sampled utilities for the given moves

as we sample the same node many times, we can also gather information about the variance (how reliable the information is)

Boltzmann exploration ignores the variance, treating the two scenarios equally

(19)

Upper Confidence Bounds: Idea

balance exploration and exploitation by preferring moves that
have been successful in earlier iterations (exploit) or
have been selected rarely (explore)

(20)

Upper Confidence Bounds: Idea

Upper Confidence Bounds for MAX nodes:

select the successor n′ of n that maximizes û(n′) + B(n′), based on the utility estimate û(n′) and a bonus term B(n′)

select B(n′) such that u(n′) ≤ û(n′) + B(n′) with high probability

idea: û(n′) + B(n′) is an upper confidence bound on u(n′) under the collected information

(analogous for MIN nodes)

(21)

Upper Confidence Bounds: UCB1

use B(n′) = √(2 · ln N(n) / N(n′)) as bonus term

the bonus term is derived from the Chernoff-Hoeffding bound, which bounds the probability that a sampled value (here: û(n′)) is far from its true expected value (here: u(n′)) as a function of the number of samples (here: N(n′))

UCB1 picks the optimal move exponentially more often than suboptimal ones (a selection-function sketch is given below)
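A sketch of UCB1 as a tree policy, assuming the Node class from the MCTS sketch above; giving unvisited successors an infinite bonus so they are tried first is a common convention, not something prescribed by the slide:

```python
import math

def ucb1(node, maximizing=True):
    # pick the successor maximizing u_hat(n') + B(n') at MAX nodes
    # (minimizing u_hat(n') - B(n') at MIN nodes), with the bonus term
    # B(n') = sqrt(2 * ln N(n) / N(n'))
    def score(child):
        if child.visits == 0:
            return math.inf
        bonus = math.sqrt(2.0 * math.log(node.visits) / child.visits)
        return (child.utility_estimate + bonus if maximizing
                else -(child.utility_estimate - bonus))
    return max(node.children, key=score)
```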

(22)


Upper Confidence Bounds: Asymptotic Optimality

Asymptotic Optimality of UCB1:
explores forever
greedy in the limit
⇒ asymptotically optimal

However:
there is no theoretical justification for using UCB1 in trees or planning scenarios
the development of tree policies is an active research topic


(24)

Tree Policy: Asymmetric Game Tree

(figure: full tree up to depth 4)

(25)

Tree Policy: Asymmetric Game Tree

(figure: UCT tree with an equal number of search nodes)

(26)

Other Techniques

(27)

Default Policy: Instantiations

default: Monte-Carlo random walk
in each state, select a legal move uniformly at random
very cheap to compute
uninformed
usually not sufficient for good results

alternative: domain-dependent default policy
hand-crafted, or
a function learned offline

(28)

Default Policy: Alternative

the default policy simulates a game to obtain a utility estimate
the default policy must be evaluated in many positions
if the default policy is expensive to compute, simulations are expensive

solution: replace the default policy with a heuristic that computes a utility estimate directly (see the sketch below)
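A sketch contrasting the two ways of obtaining a utility estimate for a new leaf; default_policy, heuristic and the game interface are assumed placeholders rather than part of the lecture material:

```python
import random

def rollout_utility(position, game, default_policy=None):
    # simulation with a default policy: play the game to a final position
    # (uniformly random moves if no domain-dependent policy is given)
    policy = default_policy or (lambda pos: random.choice(game.legal_moves(pos)))
    while not game.is_terminal(position):
        position = game.result(position, policy(position))
    return game.utility(position)

def heuristic_utility(position, heuristic):
    # alternative: skip the simulation and let a heuristic estimate the
    # utility directly; keeps iterations cheap when a good default policy
    # is expensive to evaluate
    return heuristic(position)
```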

(29)

Expansion

to proceed deeper into the tree, each node must be visited at least once for each legal move
⇒ deep lookaheads are not possible when the branching factor is high and resources are limited

alternative: rather than adding a single node, expand the encountered leaf node and add all of its successors
allows deep lookaheads
needs more memory
needs an initial utility estimate for all children
(both variants are sketched below)
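A sketch of the two expansion variants, reusing the hypothetical Node class and game interface from the MCTS sketch above; initial_estimate stands for any source of initial utility estimates (e.g., a heuristic):

```python
def expand_single(node, game):
    # default behaviour (as in the MCTS sketch above): add one successor per
    # iteration; memory-friendly, but the tree deepens slowly when the
    # branching factor is high
    move = node.untried_moves.pop()
    child = Node(game.result(node.position, move), game, node, move)
    node.children.append(child)
    return child

def expand_all(node, game, initial_estimate):
    # alternative from the slide: add all successors at once; needs more
    # memory and an initial utility estimate for every child
    while node.untried_moves:
        move = node.untried_moves.pop()
        child = Node(game.result(node.position, move), game, node, move)
        child.utility_estimate = initial_estimate(child.position)
        node.children.append(child)
    return node.children
```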

(30)

Summary

(31)

Summary

the tree policy is crucial for MCTS

ε-greedy favors greedy moves and treats all others equally
Boltzmann exploration selects moves proportionally to an exponential function of their utility estimates
UCB1 favors moves that were successful in the past or have been explored rarely
for each of them, there are applications where they perform best

good default policies are domain-dependent and hand-crafted or learned offline
using heuristics instead of a default policy often pays off
