if CUTOFF-TEST(state, depth) then return EVAL(state)
This pairs well with iterative deepening search
When time runs out, return the move selected by the deepest completed search; iterative deepening also helps with move ordering (states that evaluated best in earlier iterations are tried first, increasing pruning)
Apply the evaluation function only to quiescent positions
i.e. positions unlikely to exhibit wild swings in value in the near future
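The cutoff idea above can be sketched as a depth-limited minimax. This is a minimal illustration, not the full algorithm with alpha-beta pruning or a quiescence check; the `TinyGame` class and its hooks (`terminal`, `evaluate`, `actions`, `result`) are hypothetical names standing in for a concrete game implementation.

```python
def h_minimax(state, depth, game, max_depth=4):
    """Heuristic minimax value of `state`, cutting off at `max_depth`."""
    if game.terminal(state) or depth >= max_depth:   # CUTOFF-TEST
        return game.evaluate(state)                  # EVAL
    values = [h_minimax(game.result(state, a), depth + 1, game, max_depth)
              for a in game.actions(state)]
    # Convention assumed here: MAX moves at even depths, MIN at odd depths.
    return max(values) if depth % 2 == 0 else min(values)

# A toy depth-2 game tree for demonstration (hypothetical example data).
class TinyGame:
    tree = {'root': ['A', 'B'], 'A': ['a1', 'a2'], 'B': ['b1', 'b2']}
    leaves = {'a1': 3, 'a2': 12, 'b1': 8, 'b2': 2}
    def terminal(self, s): return s in self.leaves
    def evaluate(self, s): return self.leaves[s]
    def actions(self, s): return self.tree[s]
    def result(self, s, a): return a
```

Here MAX picks the branch whose MIN-value is largest: max(min(3, 12), min(8, 2)) = 3.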
Stochastic Games
Stochastic games introduce unpredictability by including a random element during play
e.g. rolling dice
Backgammon combines luck and skill
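In backgammon the random element is a roll of two dice, so the probabilities at a chance node are easy to tabulate: a sketch, assuming the order of the two dice is irrelevant (the function name `roll_probabilities` is mine).

```python
from itertools import product
from fractions import Fraction

def roll_probabilities():
    """Probability of each distinct two-dice roll, ignoring order.

    Each ordered pair has probability 1/36, so doubles get 1/36 and
    the 15 unordered mixed rolls get 2/36 = 1/18 each.
    """
    probs = {}
    for d1, d2 in product(range(1, 7), repeat=2):
        roll = (min(d1, d2), max(d1, d2))
        probs[roll] = probs.get(roll, Fraction(0)) + Fraction(1, 36)
    return probs
```

This yields 21 distinct rolls, which is why chance nodes fan out the game tree so quickly.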
We know what the legal moves are
We don’t know what the opponent will roll
Hence, we cannot construct a standard game tree (as in chess or tic-tac-toe)
We need chance nodes in addition to our MAX and MIN nodes
Positions no longer have definite minimax values
Instead, chance nodes take the expected value
the average over all possible outcomes of the chance node
We generalize the minimax value for deterministic games to an expectiminimax value for games with chance nodes
Terminal, MAX and MIN nodes work as before
For chance nodes we compute the expected value: the sum of the values of all outcomes, weighted by the probability of each chance action: EXPECTIMINIMAX(s) = Σ_r P(r) · EXPECTIMINIMAX(RESULT(s,r))
r represents a possible dice roll
RESULT(s,r) is the state that results from s when the dice roll is r
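The expectiminimax recursion can be sketched as follows. The hooks (`terminal`, `utility`, `to_move`, `actions`, `result`, and `outcomes`, which returns (probability, roll) pairs for a chance node) are assumed names for a concrete game implementation, and `DiceGame` is a made-up two-level example.

```python
def expectiminimax(state, game):
    """Value of `state`: MAX/MIN as in minimax, expectation at chance nodes."""
    if game.terminal(state):
        return game.utility(state)
    player = game.to_move(state)
    if player == 'CHANCE':
        # Expected value: sum over outcomes r, weighted by P(r).
        return sum(p * expectiminimax(game.result(state, r), game)
                   for p, r in game.outcomes(state))
    values = [expectiminimax(game.result(state, a), game)
              for a in game.actions(state)]
    return max(values) if player == 'MAX' else min(values)

# Toy game: MAX picks a branch, then a chance node resolves to a leaf.
class DiceGame:
    leaves = {'a1': 2, 'a2': 6, 'b1': 1, 'b2': 11}
    chance = {'A': [(0.5, 'a1'), (0.5, 'a2')],
              'B': [(0.9, 'b1'), (0.1, 'b2')]}
    def terminal(self, s): return s in self.leaves
    def utility(self, s): return self.leaves[s]
    def to_move(self, s): return 'MAX' if s == 'root' else 'CHANCE'
    def actions(self, s): return ['A', 'B']
    def result(self, s, a): return a
    def outcomes(self, s): return self.chance[s]
```

Branch A has expected value 0.5·2 + 0.5·6 = 4 and branch B has 0.9·1 + 0.1·11 = 2, so MAX prefers A.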
State-of-the-Art Game Programs
Chess: IBM’s Deep Blue
Checkers: CHINOOK, Jonathan Schaeffer
Othello: LOGISTELLO, Buro, 2002
Backgammon: TD-GAMMON, Gerry Tesauro, 1992
Go: MoGo
Bridge: Bridge Baron, Smith et al., 1998; GIB, Ginsberg, 1999