(1)

Probabilistic Robotics

Mobile Robot Localization

Wolfram Burgard Cyrill Stachniss Giorgio Grisetti Maren Bennewitz Christian Plagemann

(2)

Probabilistic Robotics

Key idea: Explicit representation of uncertainty

(using the calculus of probability theory)

• Perception = state estimation

• Action = utility optimization

(3)


Bayes Filters: Framework

Given:

• Stream of observations z and action data u:

• Sensor model P(z|x).

• Action model P(x|u,x’).

• Prior probability of the system state P(x).

Wanted:

• Estimate of the state X of a dynamical system.

• The posterior of the state is also called Belief:

Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t),   with the data d_t = {u_1, z_1, …, u_t, z_t}

(4)

Markov Assumption

Underlying Assumptions

Static world

Independent noise

Perfect model, no approximation errors

p(x_t | x_{1:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)

p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t)

(5)

Bayes Filters

z = observation, u = action, x = state

Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)

Bayes        = η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t)

Markov       = η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t)

Total prob.  = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, u_t) dx_{t-1}

Markov       = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, u_t) dx_{t-1}

Markov       = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, z_{t-1}) dx_{t-1}

             = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

(6)

Bayes Filter Algorithm

1. Algorithm Bayes_filter( Bel(x), d ):
2.   η = 0
3.   If d is a perceptual data item z then
4.     For all x do
5.       Bel'(x) = P(z | x) Bel(x)
6.       η = η + Bel'(x)
7.     For all x do
8.       Bel'(x) = η⁻¹ Bel'(x)
9.   Else if d is an action data item u then
10.    For all x do
11.      Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
12.  Return Bel'(x)

Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
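To make the listing concrete, here is a minimal Python sketch of one such update over a finite (grid) state space; the array names bel, p_z_given_x and p_x_given_ux are illustrative assumptions, not notation from the slides.

    import numpy as np

    def bayes_filter_update(bel, d, kind, p_z_given_x=None, p_x_given_ux=None):
        """One update of the discrete Bayes filter sketched above.
        bel          : 1-D array, Bel(x) over a finite set of states
        d            : observation index z or action index u
        kind         : 'perception' or 'action'
        p_z_given_x  : array [num_z, num_x] with entries P(z | x)
        p_x_given_ux : array [num_u, num_x, num_x] with entries P(x | u, x')
        """
        if kind == 'perception':                 # lines 3-8: measurement update
            bel_new = p_z_given_x[d] * bel       # Bel'(x) = P(z | x) Bel(x)
            return bel_new / bel_new.sum()       # normalize by eta
        else:                                    # lines 9-11: prediction
            return p_x_given_ux[d] @ bel         # Bel'(x) = sum_x' P(x | u, x') Bel(x')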

(7)

Bayes Filters are Frequently Used in Robotics

Kalman filters

Particle filters

Hidden Markov models

Dynamic Bayesian networks

Partially Observable Markov Decision Processes (POMDPs)

Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

(8)

Example: Robot Localization Using a Bayes Filter

• Action: motion information of the robot

• Perception: compare the robot's sensor observations to the model of the world

• Particle filters are a way to efficiently represent non-Gaussian distributions

• Basic principle: a set of state hypotheses (“particles”)

(9)

Mathematical Description

• Set of weighted samples, each pairing a state hypothesis x^[i] with an importance weight w^[i]

• The samples represent the posterior (as sketched below)
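A short sketch of what such a weighted sample set looks like in code and how the posterior it represents is queried; the sample count and the (x, y, θ) pose parameterization are assumptions made only for illustration.

    import numpy as np

    n = 1000
    particles = np.random.randn(n, 3)        # state hypotheses x^[i], here (x, y, theta)
    weights = np.full(n, 1.0 / n)            # importance weights w^[i], normalized

    # The weighted samples stand in for the posterior, so expectations
    # become weighted sums, e.g. the mean state estimate:
    mean_state = np.average(particles, axis=0, weights=weights)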

(10)

Particle Filter Algorithm

• Sample the next generation of particles using the proposal distribution

• Compute the importance weights: weight = target distribution / proposal distribution (illustrated below)

• Resampling: “Replace unlikely samples by more likely ones”
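The weight = target / proposal rule can be seen in isolation with a tiny importance-sampling example; the one-dimensional Gaussian target and proposal below are purely illustrative and not taken from the deck.

    import numpy as np

    def gauss_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    samples = np.random.normal(0.0, 2.0, size=10_000)                        # drawn from the proposal N(0, 2)
    weights = gauss_pdf(samples, 1.0, 1.0) / gauss_pdf(samples, 0.0, 2.0)    # weight = target / proposal
    weights /= weights.sum()

    est_mean = np.sum(weights * samples)    # ≈ 1.0, the mean of the target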

(11)


Particle Filters

(12)

Sensor Information: Importance Sampling

Bel(x) ← α p(z | x) Bel⁻(x)

w ← α p(z | x) Bel⁻(x) / Bel⁻(x) = α p(z | x)

(13)

Robot Motion

Bel⁻(x) ← ∫ p(x | u, x') Bel(x') dx'
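In a particle filter this integral is realized by sampling: each particle is pushed through the motion model, i.e. a successor state is drawn from p(x | u, x'). A minimal sketch, assuming a simple odometry increment u = (translation, rotation) with additive Gaussian noise, which is an assumption of this example rather than the deck's motion model:

    import numpy as np

    def sample_motion_model(particle, u, noise_std=(0.05, 0.02)):
        # Draw x ~ p(x | u, x') for one particle (x, y, theta).
        x, y, theta = particle
        trans = u[0] + np.random.normal(0.0, noise_std[0])
        rot = u[1] + np.random.normal(0.0, noise_std[1])
        theta_new = theta + rot
        return np.array([x + trans * np.cos(theta_new),
                         y + trans * np.sin(theta_new),
                         theta_new])

    # Prediction step: propagating every particle approximates
    # Bel⁻(x) ← ∫ p(x | u, x') Bel(x') dx' by sampling, e.g.:
    # particles = np.array([sample_motion_model(p, u) for p in particles])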

(14)

Sensor Information: Importance Sampling

Bel(x) ← α p(z | x) Bel⁻(x)

w ← α p(z | x) Bel⁻(x) / Bel⁻(x) = α p(z | x)

(15)

Robot Motion

Bel⁻(x) ← ∫ p(x | u, x') Bel(x') dx'

(16)

Particle Filter Algorithm

Bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) dx_{t-1}

• Draw x^i_{t-1} from Bel(x_{t-1})

• Draw x^i_t from p(x_t | x^i_{t-1}, u_{t-1})

• Importance factor for x^i_t:

  w^i_t = target distribution / proposal distribution
        = η p(z_t | x_t) p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) / [ p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) ]
        ∝ p(z_t | x_t)

(17)

Particle Filter Algorithm

1. Algorithm particle_filter( S_{t-1}, u_{t-1}, z_t ):
2.   S_t = ∅, η = 0
3.   For i = 1 … n                                   Generate new samples
4.     Sample index j(i) from the discrete distribution given by w_{t-1}
5.     Sample x^i_t from p(x_t | x_{t-1}, u_{t-1}) using x^{j(i)}_{t-1} and u_{t-1}
6.     w^i_t = p(z_t | x^i_t)                        Compute importance weight
7.     η = η + w^i_t                                 Update normalization factor
8.     S_t = S_t ∪ {⟨x^i_t, w^i_t⟩}                  Insert
9.   For i = 1 … n
10.    w^i_t = w^i_t / η                             Normalize weights
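The listing translates almost line for line into Python. In this sketch, sample_motion_model and measurement_likelihood stand in for the application-specific models p(x_t | x_{t-1}, u_{t-1}) and p(z_t | x_t); both names are placeholders assumed for illustration, not functions defined in the slides.

    import numpy as np

    def particle_filter_step(particles, weights, u, z,
                             sample_motion_model, measurement_likelihood):
        n = len(particles)
        # Line 4: sample ancestor indices j(i) from the discrete
        # distribution given by the previous weights.
        idx = np.random.choice(n, size=n, p=weights)
        new_particles = np.empty_like(particles)
        new_weights = np.empty(n)
        for i, j in enumerate(idx):
            new_particles[i] = sample_motion_model(particles[j], u)       # line 5
            new_weights[i] = measurement_likelihood(z, new_particles[i])  # line 6
        new_weights /= new_weights.sum()                                  # lines 7 and 10
        return new_particles, new_weights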

(18)

Resampling

• Given: a set S of weighted samples.

• Wanted: a random sample, where the probability of drawing x_i is given by w_i.

• Typically done n times with replacement to generate the new sample set S' (see the sketch below).
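Drawing n times with replacement, with probabilities given by the weights, is a one-liner with numpy; resetting the weights to 1/n afterwards is the usual convention and an assumption of this sketch.

    import numpy as np

    def resample_with_replacement(particles, weights):
        n = len(particles)
        idx = np.random.choice(n, size=n, replace=True, p=weights)   # draw x_i with probability w_i
        return particles[idx], np.full(n, 1.0 / n)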

(19)

Resampling

[Figure: two “roulette wheel” diagrams over the weights w_1, w_2, w_3, …, w_n]

• Roulette wheel selection: binary search, O(n log n)

• Stochastic universal sampling (systematic resampling): linear time complexity, easy to implement, low variance

(20)

Resampling Algorithm

1. Algorithm systematic_resampling( S, n ):
2.   S' = ∅, c_1 = w^1
3.   For i = 2 … n                                   Generate cdf
4.     c_i = c_{i-1} + w^i
5.   u_1 ~ U(0, n^{-1}], i = 1                       Initialize threshold
6.   For j = 1 … n                                   Draw samples …
7.     While ( u_j > c_i )                           Skip until next threshold reached
8.       i = i + 1
9.     S' = S' ∪ {⟨x^i, n^{-1}⟩}                     Insert
10.    u_{j+1} = u_j + n^{-1}                        Increment threshold
11.  Return S'
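A Python sketch of the same procedure; it assumes the weights are already normalized, that particles is a numpy array, and that the output is returned with uniform weights.

    import numpy as np

    def systematic_resampling(particles, weights):
        n = len(particles)
        cdf = np.cumsum(weights)                  # lines 2-4: generate cdf
        u = np.random.uniform(0.0, 1.0 / n)       # line 5: initial threshold
        idx = np.empty(n, dtype=int)
        i = 0
        for j in range(n):                        # line 6: draw n samples
            while u > cdf[i] and i < n - 1:       # lines 7-8: skip until threshold reached
                i += 1
            idx[j] = i                            # line 9: keep particle i with weight 1/n
            u += 1.0 / n                          # line 10: increment threshold
        return particles[idx], np.full(n, 1.0 / n)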

(21)

Motion Model

[Figure: particle set propagated from the “Start” pose using the motion model]

(22)

Proximity Sensor Model Reminder

[Figures: measurement distributions for a laser sensor and a sonar sensor]

(23)

23

(24)
(25)

25

(26)
(27)

27

(28)
(29)

29

(30)
(31)

31

(32)
(33)

33

(34)
(35)

35

(36)
(37)

37

(38)
(39)

39

(40)
(41)

41

Sample-based Localization (sonar)

(42)

Initial Distribution

(43)

After Incorporating Ten Ultrasound Scans

(44)

After Incorporating 65 Ultrasound Scans

(45)

Estimated Path

(46)

Using Ceiling Maps for Localization

(47)

Vision-based Localization

[Figure: sensor model P(z|x), expected measurement h(x), actual measurement z]

(48)

Under a Light

Measurement z: P(z|x):

(49)


Next to a Light

Measurement z: P(z|x):

(50)

Elsewhere

Measurement z: P(z|x):

(51)


Global Localization Using Vision

(52)

Summary – Particle Filters

• Particle filters are an implementation of recursive Bayesian filtering

• They represent the posterior by a set of weighted samples

• They can model non-Gaussian distributions

• Proposal to draw new samples

• Weight to account for the differences between the proposal and the target

• Also known as: Monte Carlo filter, survival of the fittest, condensation, bootstrap filter

(53)

Summary – PF Localization

• In the context of localization, the particles are propagated according to the motion model.

• They are then weighted according to the likelihood of the observations.

• In a re-sampling step, new particles are drawn with a probability proportional to the likelihood of the observation.
