
(1)

Yapay Zeka (Artificial Intelligence) 802600715151

Assoc. Prof. Dr. Mehmet Serdar GÜZEL

Slides are mainly adapted from the following course page: http://ai.berkeley.edu, created by Dan Klein and Pieter Abbeel for CS188

(2)

Lecturer

Instructor: Assoc. Prof. Dr. Mehmet S. Güzel

Office hours: Tuesday, 1:30-2:30pm

Open door policy – don’t hesitate to stop by!

Watch the course website

Assignments, lab tutorials, lecture notes


(3)

Markov Decision Processes

[These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]

(4)

Non-Deterministic Search

(5)

Example: Grid World

• A maze-like problem
  • The agent lives in a grid
  • Walls block the agent's path
• Noisy movement: actions do not always go as planned (see the transition sketch below)
  • 80% of the time, the action North takes the agent North (if there is no wall there)
  • 10% of the time, North takes the agent West; 10% East
  • If there is a wall in the direction the agent would have been taken, the agent stays put
• The agent receives rewards each time step
  • Small "living" reward each step (can be negative)
  • Big rewards come at the end (good or bad)
• Goal: maximize the sum of rewards

(6)

Grid World Actions

[Figures: Deterministic Grid World and Stochastic Grid World]
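As a concrete illustration of the stochastic movement rule (80% intended direction, 10% to each perpendicular side, blocked moves stay put), here is a minimal Python sketch. It is not part of the course materials; the grid size, wall positions, and names such as noisy_transitions are made-up assumptions.

```python
# Hypothetical sketch of the noisy Grid World movement model:
# 80% intended direction, 10% to each perpendicular side; blocked moves stay put.

DIRS = {"North": (0, 1), "South": (0, -1), "East": (1, 0), "West": (-1, 0)}
PERPENDICULAR = {"North": ("West", "East"), "South": ("East", "West"),
                 "East": ("North", "South"), "West": ("South", "North")}
WIDTH, HEIGHT = 4, 3        # example grid size (assumption)
WALLS = {(1, 1)}            # example wall cell (assumption)

def move(state, direction):
    """Apply one deterministic step; stay put if blocked by a wall or the grid edge."""
    x, y = state
    dx, dy = DIRS[direction]
    nxt = (x + dx, y + dy)
    if nxt in WALLS or not (0 <= nxt[0] < WIDTH and 0 <= nxt[1] < HEIGHT):
        return state
    return nxt

def noisy_transitions(state, action):
    """Return (next_state, probability) pairs for a noisy action."""
    left, right = PERPENDICULAR[action]
    outcomes = {}
    for direction, prob in ((action, 0.8), (left, 0.1), (right, 0.1)):
        nxt = move(state, direction)
        outcomes[nxt] = outcomes.get(nxt, 0.0) + prob
    return sorted(outcomes.items())

print(noisy_transitions((0, 0), "North"))   # [((0, 0), 0.1), ((0, 1), 0.8), ((1, 0), 0.1)]
```

Note that slipping into a wall folds probability back onto the current cell, which is why the probabilities are merged per resulting state.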

(7)

Markov Decision Processes

An MDP is defined by:

• A set of states s ∈ S
• A set of actions a ∈ A
• A transition function T(s, a, s')
  • Probability that a from s leads to s', i.e., P(s' | s, a)
  • Also called the model or the dynamics
• A reward function R(s, a, s')
  • Sometimes just R(s) or R(s')
• A start state
• Maybe a terminal state

MDPs are non-deterministic search problems:

• One way to solve them is with expectimax search
• We'll have a new tool soon

[Demo – gridworld manual intro (L8D1)]
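To make the definition concrete, here is a minimal sketch of how the components (S, A, T, R, start and terminal states) could be bundled in code. The class name SimpleMDP and its fields are illustrative assumptions, not the CS188 codebase.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List, Set, Tuple

State = Any
Action = Any

@dataclass
class SimpleMDP:
    """Hypothetical container mirroring the slide's definition of an MDP."""
    states: List[State]
    actions: Callable[[State], List[Action]]                           # available actions A(s)
    transitions: Callable[[State, Action], List[Tuple[State, float]]]  # (s', P(s' | s, a)) pairs
    reward: Callable[[State, Action, State], float]                    # R(s, a, s')
    start: State
    terminals: Set[State] = field(default_factory=set)

    def is_terminal(self, s: State) -> bool:
        return s in self.terminals
```

A solver only needs to query these components; it never has to know how the dynamics are implemented internally.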

(8)

What is Markov about MDPs?

A Markov decision process (MDP) is a discrete-time stochastic control process. The name comes from the Russian mathematician Andrey Markov, as MDPs are an extension of Markov chains. At each time step the process is in some state s, and the decision maker may choose any action a that is available in state s. "Markov" here means that the outcome of an action depends only on the current state, not on the history of how the process reached it.

(9)

Policies

In deterministic single-agent search problems, we wanted an optimal plan, or sequence of actions, from start to a goal

For MDPs, we want an optimal policy π*: S → A

A policy π gives an action for each state

An optimal policy is one that maximizes expected utility if followed

An explicit policy defines a reflex agent

Expectimax didn’t compute entire policies

It computed the action for a single state only

Optimal policy when R(s, a, s') = -0.03 for all non-terminal states s
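Because a policy prescribes an action for every state, it can be stored as a simple lookup table and followed reflexively, no matter where the noisy dynamics drop the agent. Below is a small self-contained sketch with a made-up two-state world; the names policy, step, and run_episode are illustrative assumptions, not course code.

```python
import random

# Toy policy pi: S -> A (made-up states and action names).
policy = {"A": "go_right", "B": "go_right"}

def step(state, action):
    """Toy noisy dynamics: from A the action succeeds 80% of the time; B leads to the goal."""
    if state == "A":
        return ("B", 0.0) if random.random() < 0.8 else ("A", 0.0)
    return ("DONE", 1.0)   # reaching the goal from B pays +1

def run_episode(policy, start="A", horizon=10):
    """Follow the policy reflexively until a terminal state or the horizon."""
    state, total = start, 0.0
    for _ in range(horizon):
        if state == "DONE":
            break
        state, reward = step(state, policy[state])
        total += reward
    return total

# Monte Carlo estimate of the expected utility of following this policy.
estimate = sum(run_episode(policy) for _ in range(10_000)) / 10_000
print(round(estimate, 2))   # close to 1.0 for this toy world
```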

(10)

Optimal Policies

[Figure: optimal policies for living rewards R(s) = -2.0, R(s) = -0.4, R(s) = -0.03, and R(s) = -0.01]

(11)

Example: Racing

• A robot car wants to travel far, quickly
• Three states: Cool, Warm, Overheated
• Two actions: Slow, Fast
• Going faster gets double reward

Transition model:

State   Action   Next state    Probability   Reward
Cool    Slow     Cool          1.0           +1
Cool    Fast     Cool          0.5           +2
Cool    Fast     Warm          0.5           +2
Warm    Slow     Cool          0.5           +1
Warm    Slow     Warm          0.5           +1
Warm    Fast     Overheated    1.0           -10
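The transition model above is small enough to write down directly as data. The sketch below (an illustration with assumed variable names, not course code) stores it as a dictionary keyed by (state, action) and computes the expected immediate reward of each choice.

```python
# Racing MDP: (state, action) -> list of (next_state, probability, reward).
RACING = {
    ("Cool", "Slow"): [("Cool", 1.0, +1)],
    ("Cool", "Fast"): [("Cool", 0.5, +2), ("Warm", 0.5, +2)],
    ("Warm", "Slow"): [("Cool", 0.5, +1), ("Warm", 0.5, +1)],
    ("Warm", "Fast"): [("Overheated", 1.0, -10)],
}

def expected_immediate_reward(state, action):
    """E[R(s, a, s')] under the transition probabilities T(s, a, s')."""
    return sum(p * r for _, p, r in RACING[(state, action)])

for (s, a) in RACING:
    print(s, a, expected_immediate_reward(s, a))
# Cool Slow 1.0 / Cool Fast 2.0 / Warm Slow 1.0 / Warm Fast -10.0
```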

(12)

MDP Search Trees

Each MDP state projects an expectimax-like search tree

• s is a state
• (s, a) is a q-state
• (s, a, s') is called a transition, with probability T(s, a, s') = P(s' | s, a) and reward R(s, a, s')
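The tree picture corresponds directly to a computation: a q-state (s, a) averages over its outgoing transitions, and a state maximizes over its q-states, exactly as in expectimax. Here is a minimal depth-limited sketch over the racing example; the helper names (TRANSITIONS, ACTIONS, state_value, q_value) are assumptions, leaf values are simply taken to be 0, and no discounting is applied.

```python
# Backups implied by the search tree (depth-limited, no discounting):
#   V(s)    = max_a Q(s, a)
#   Q(s, a) = sum_{s'} T(s, a, s') * [ R(s, a, s') + V(s') ]
TRANSITIONS = {
    ("Cool", "Slow"): [("Cool", 1.0, +1)],
    ("Cool", "Fast"): [("Cool", 0.5, +2), ("Warm", 0.5, +2)],
    ("Warm", "Slow"): [("Cool", 0.5, +1), ("Warm", 0.5, +1)],
    ("Warm", "Fast"): [("Overheated", 1.0, -10)],
}
ACTIONS = {"Cool": ["Slow", "Fast"], "Warm": ["Slow", "Fast"], "Overheated": []}

def state_value(s, depth):
    """Expectimax value of a state node (max over its q-states)."""
    if depth == 0 or not ACTIONS[s]:
        return 0.0
    return max(q_value(s, a, depth) for a in ACTIONS[s])

def q_value(s, a, depth):
    """Value of the q-state (s, a): expectation over its outgoing transitions."""
    return sum(p * (r + state_value(s2, depth - 1)) for s2, p, r in TRANSITIONS[(s, a)])

print(state_value("Cool", depth=3))   # 5.0: best expected 3-step sum of rewards from Cool
```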

(13)

Utilities of Sequences
