(1)

Yapay Zeka 802600715151

Doç. Dr. Mehmet Serdar GÜZEL

Slides are mainly adapted from the course page at http://ai.berkeley.edu, created by Dan Klein and Pieter Abbeel for CS188.

(2)

Lecturer

Instructor: Assoc. Prof. Dr. Mehmet S. Güzel

Office hours: Tuesday, 1:30-2:30pm

Open door policy – don’t hesitate to stop by!

Watch the course website

Assignments, lab tutorials, lecture notes


(3)

Game Playing State-of-the-Art

Checkers: 1950: First computer player. 1994: First computer champion: Chinook ended the 40-year reign of human champion Marion Tinsley using a complete 8-piece endgame database. 2007: Checkers solved!

Chess: 1997: Deep Blue defeats human champion Garry Kasparov in a six-game match. Deep Blue examined 200M positions per second, used very sophisticated evaluation and undisclosed methods for extending some lines of search up to 40 ply.

Current programs are even better, if less historic.

Go: Human champions are now starting to be challenged by machines, though the best humans still beat the best machines. In Go, b > 300! Classic programs use pattern knowledge bases, but big recent advances use Monte Carlo (randomized) expansion methods.

Pacman

(4)

Zero-Sum Games

 Zero-Sum Games

Agents have opposite utilities (values on outcomes)

Lets us think of a single value that one maximizes and the other minimizes

Adversarial, pure competition

General Games

Agents have independent utilities (values on outcomes)

Cooperation, indifference, competition, and more are all possible

More later on non-zero-sum games

(5)

Adversarial Search

(6)

Single-Agent Trees

[Single-agent search tree diagram: root value 8; leaf values 2, 0, …, 2, 6, …, 4, 6]
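As a rough illustration (a sketch added here, not part of the original slides), the value of a node in a single-agent search tree is simply the best value reachable through any of its successors; the tiny tree below uses arbitrary leaf scores:

def single_agent_value(node):
    # Leaves carry their own score; internal nodes are lists of children.
    if not isinstance(node, list):
        return node
    # A single agent just picks the best successor.
    return max(single_agent_value(child) for child in node)

# Made-up example tree with arbitrary leaf scores
tree = [2, [0, 6], 4]
print(single_agent_value(tree))  # -> 6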

(7)

Tic-Tac-Toe Game Tree

(8)

Adversarial Search (Minimax)

 Deterministic, zero-sum games:

 Tic-tac-toe, chess, checkers

 One player maximizes result

 The other minimizes result

 Minimax search:

 A state-space search tree

 Players alternate turns

 Compute each node’s minimax value:

the best achievable utility against a rational (optimal) adversary

[Example minimax tree: terminal values 8, 2, 5, 6; the two MIN nodes take values 2 and 5; the MAX root takes value 5]

Terminal values: part of the game

Minimax values: computed recursively
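As a small worked sketch (illustrative only, not the course's reference code), the recursion on a tree like the one above, with leaf utilities 8 and 2 under one MIN node and 5 and 6 under the other, yields 2 and 5 at the MIN level and 5 at the MAX root:

def minimax_value(node, maximizing):
    # Leaves are terminal utilities; internal nodes alternate MAX and MIN.
    if not isinstance(node, list):
        return node
    values = [minimax_value(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX root over two MIN nodes with terminal values (8, 2) and (5, 6)
tree = [[8, 2], [5, 6]]
print(minimax_value(tree, True))  # MIN nodes give 2 and 5, MAX picks 5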

(9)

Minimax Implementation

def min-value(state):
    initialize v = +∞
    for each successor of state:
        v = min(v, max-value(successor))
    return v

def max-value(state):
    initialize v = -∞
    for each successor of state:
        v = max(v, min-value(successor))
    return v

(10)

Minimax Implementation (Dispatch)

def value(state):
    if the state is a terminal state: return the state’s utility
    if the next agent is MAX: return max-value(state)
    if the next agent is MIN: return min-value(state)

def min-value(state):
    initialize v = +∞
    for each successor of state:
        v = min(v, value(successor))
    return v

def max-value(state):
    initialize v = -∞
    for each successor of state:
        v = max(v, value(successor))
    return v
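A runnable sketch of the dispatch version above, under conventions assumed for this example only (states as plain dicts with 'utility', 'to_move', and 'successors' keys — these names are illustrative, not from the slides):

import math

def value(state):
    if "utility" in state:                  # terminal state: return the state's utility
        return state["utility"]
    if state["to_move"] == "MAX":
        return max_value(state)
    return min_value(state)                 # otherwise the next agent is MIN

def min_value(state):
    v = math.inf
    for successor in state["successors"]:
        v = min(v, value(successor))
    return v

def max_value(state):
    v = -math.inf
    for successor in state["successors"]:
        v = max(v, value(successor))
    return v

# Arbitrary example: a MAX root over three MIN nodes
root = {"to_move": "MAX", "successors": [
    {"to_move": "MIN", "successors": [{"utility": 3}, {"utility": 12}, {"utility": 8}]},
    {"to_move": "MIN", "successors": [{"utility": 2}, {"utility": 4}, {"utility": 6}]},
    {"to_move": "MIN", "successors": [{"utility": 14}, {"utility": 5}, {"utility": 2}]},
]}
print(value(root))  # MIN nodes give 3, 2, 2; MAX picks 3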

(11)

Minimax Example

[Example minimax tree diagram with terminal values 120, 80, 50, 20, 30, 20, 40, 60, 140]

(12)

Minimax Efficiency

 How efficient is minimax?

 Time: O(b^m)

 Space: O(bm)

 Example: For chess, b ≈ 35, m ≈ 100

 Exact solution is completely infeasible

 But, do we need to explore the complete tree?
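To make those numbers concrete (a back-of-the-envelope check added here, not from the slides): with b ≈ 35 and m ≈ 100 the full game tree has on the order of 35^100 ≈ 10^154 leaves, which no amount of hardware can enumerate:

import math

b, m = 35, 100                  # rough branching factor and depth for chess
print(m * math.log10(b))        # ≈ 154.4, i.e. about 10^154 leaves
print(len(str(b ** m)))         # the exact integer 35^100 has 155 digits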
