Probabilistic Robotics
Mobile Robot Localization
Wolfram Burgard Cyrill Stachniss Giorgio Grisetti Maren Bennewitz Christian Plagemann
Probabilistic Robotics
Key idea: Explicit representation of uncertainty
(using the calculus of probability theory)
• Perception = state estimation
• Action = utility optimization
Bayes Filters: Framework
• Given:
• Stream of observations z and action data u:
• Sensor model P(z|x).
• Action model P(x|u,x’).
• Prior probability of the system state P(x).
• Wanted:
• Estimate of the state X of a dynamical system.
• The posterior of the state is also called Belief:
Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)

with the data up to time t denoted d_t = {u_1, z_1, …, u_t, z_t}
Markov Assumption

p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t)
p(x_t | x_{1:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)

Underlying Assumptions
• Static world
• Independent noise
• Perfect model, no approximation errors
Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
Bayes Filters
z = observation, u = action, x = state

Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)

Bayes        = η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t)

Markov       = η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t)

Total prob.  = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, u_t) dx_{t-1}

Markov       = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, u_t) dx_{t-1}

Markov       = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, z_{t-1}) dx_{t-1}

             = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
Bayes Filter Algorithm
1.  Algorithm Bayes_filter(Bel(x), d):
2.    η = 0
3.    If d is a perceptual data item z then
4.      For all x do
5.        Bel'(x) = P(z | x) Bel(x)
6.        η = η + Bel'(x)
7.      For all x do
8.        Bel'(x) = η⁻¹ Bel'(x)
9.    Else if d is an action data item u then
10.     For all x do
11.       Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
12.   Return Bel'(x)

Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
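The algorithm can be sketched for a discrete state space. The following is a minimal illustration, not code from the lecture: the cyclic 3-cell corridor, its action model, and its door sensor model are invented for the example.

```python
# Minimal sketch of Bayes_filter on a discrete 1-D grid.
# The 3-cell cyclic corridor and both models are illustrative assumptions.

def bayes_filter_perception(bel, z, sensor_model):
    """Correction: Bel'(x) = eta^-1 * P(z|x) * Bel(x), normalized over x."""
    bel_new = [sensor_model(z, x) * bel[x] for x in range(len(bel))]
    eta = sum(bel_new)
    return [b / eta for b in bel_new]

def bayes_filter_action(bel, u, action_model):
    """Prediction: Bel'(x) = sum over x' of P(x|u,x') * Bel(x')."""
    n = len(bel)
    return [sum(action_model(x, u, xp) * bel[xp] for xp in range(n))
            for x in range(n)]

def action_model(x, u, xp, n=3):
    """Moving 'right' succeeds with prob. 0.8 (cyclic world), else stay."""
    if u == "right":
        if x == (xp + 1) % n:
            return 0.8
        return 0.2 if x == xp else 0.0
    return 1.0 if x == xp else 0.0

def sensor_model(z, x):
    """A door is detected with prob. 0.9 at cell 2, with prob. 0.1 elsewhere."""
    p_door = 0.9 if x == 2 else 0.1
    return p_door if z == "door" else 1.0 - p_door

bel = [1 / 3, 1 / 3, 1 / 3]                      # uniform prior P(x)
bel = bayes_filter_action(bel, "right", action_model)
bel = bayes_filter_perception(bel, "door", sensor_model)
# Belief now concentrates on cell 2, where the door is.
```

The prediction step spreads probability mass according to the motion model; the correction step reweights and renormalizes with η, exactly as in lines 5-8 and 11 of the pseudocode.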
Bayes Filters are Frequently Used in Robotics
• Kalman filters
• Particle filters
• Hidden Markov models
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes (POMDPs)
Example: Robot Localization using a Bayes Filter
• Action: motion information of the robot
• Perception: compare the robot's sensor observations to the model of the world

Particle Filters
• Particle filters are a way to efficiently represent non-Gaussian distributions
• Basic principle: set of state hypotheses ("particles")

Mathematical Description
• Set of weighted samples
• The samples represent the posterior
• Each sample consists of a state hypothesis and an importance weight
Particle Filter Algorithm
1. Sample the next generation of particles using the proposal distribution
2. Compute the importance weights: weight = target distribution / proposal distribution
3. Resampling: "replace unlikely samples by more likely ones"
Particle Filters

Sensor Information: Importance Sampling

Bel(x) ← α p(z | x) Bel⁻(x)
w ← α p(z | x) Bel⁻(x) / Bel⁻(x) = α p(z | x)

Robot Motion

Bel⁻(x) ← ∫ p(x | u, x') Bel(x') dx'
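These two updates can be sketched with a set of samples. The 1-D Gaussian motion and sensor models below are assumptions made for illustration, not part of the slides.

```python
import math
import random

def gaussian(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def motion_update(particles, u, sigma=0.1):
    """Bel^-(x): push each particle through p(x | u, x'), here x' + u + noise."""
    return [x + u + random.gauss(0.0, sigma) for x in particles]

def sensor_update(particles, z, sigma=0.5):
    """Bel(x) <- alpha p(z|x) Bel^-(x): each particle's weight is its sensor
    likelihood p(z|x); alpha normalizes the weights to sum to one."""
    weights = [gaussian(z, x, sigma) for x in particles]
    alpha = sum(weights)
    return [w / alpha for w in weights]

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(1000)]
particles = motion_update(particles, u=1.0)     # robot moved by +1
weights = sensor_update(particles, z=5.0)       # reading says "near x = 5"
```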
draw x_{t-1}^i from Bel(x_{t-1})
draw x_t^i from p(x_t | x_{t-1}^i, u_{t-1})

Importance factor for x_t^i:

w_t^i = target distribution / proposal distribution
      = η p(z_t | x_t) p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) / [p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1})]
      ∝ p(z_t | x_t)

Bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) dx_{t-1}
Particle Filter Algorithm

1.  Algorithm particle_filter(S_{t-1}, u_{t-1}, z_t):
2.    S_t = ∅, η = 0
3.    For i = 1 … n                                 Generate new samples
4.      Sample index j(i) from the discrete distribution given by w_{t-1}
5.      Sample x_t^i from p(x_t | x_{t-1}, u_{t-1}) using x_{t-1}^{j(i)} and u_{t-1}
6.      Compute importance weight w_t^i = p(z_t | x_t^i)
7.      Update normalization factor η = η + w_t^i
8.      Insert S_t = S_t ∪ {⟨x_t^i, w_t^i⟩}
9.    For i = 1 … n
10.     Normalize weights w_t^i = w_t^i / η
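The ten steps transcribe directly to Python. The 1-D Gaussian motion and sensor models are again invented for illustration; only the control flow follows the pseudocode above.

```python
import math
import random

def particle_filter_step(S_prev, u, z, motion_noise=0.1, sensor_noise=0.5):
    """One iteration of particle_filter(S_{t-1}, u_{t-1}, z_t) for a 1-D robot."""
    xs_prev, ws_prev = zip(*S_prev)
    S, eta = [], 0.0                                      # 2.
    for _ in range(len(S_prev)):                          # 3.
        # 4. sample index j(i) from the discrete distribution given by w_{t-1}
        x_j = random.choices(xs_prev, weights=ws_prev, k=1)[0]
        # 5. sample x_t^i from p(x_t | x_{t-1}, u_{t-1})
        x = x_j + u + random.gauss(0.0, motion_noise)
        # 6. importance weight w_t^i = p(z_t | x_t^i), an unnormalized Gaussian
        w = math.exp(-0.5 * ((z - x) / sensor_noise) ** 2)
        eta += w                                          # 7.
        S.append((x, w))                                  # 8.
    return [(x, w / eta) for x, w in S]                   # 9.-10.

random.seed(1)
S = [(random.uniform(0.0, 10.0), 1.0 / 200) for _ in range(200)]
for _ in range(5):
    S = particle_filter_step(S, u=0.0, z=5.0)
# The particle cloud contracts around the measurement at x = 5.
```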
Resampling
• Given: set S of weighted samples.
• Wanted: random sample, where the probability of drawing x_i is given by w_i.
• Typically done n times with replacement to generate the new sample set S'.
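Drawing n times with replacement, proportional to the weights, is one line with Python's standard library; this sketch only makes the definition concrete.

```python
import random

def resample(samples, weights, n):
    """Draw n samples with replacement; P(drawing samples[i]) = weights[i]."""
    return random.choices(samples, weights=weights, k=n)

random.seed(2)
S = ["a", "b", "c"]
w = [0.1, 0.1, 0.8]
S_new = resample(S, w, 1000)        # "c" dominates the new sample set
```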
Resampling
• Roulette wheel
  • Binary search, O(n log n)
• Stochastic universal sampling (systematic resampling)
  • Linear time complexity
  • Easy to implement, low variance
Resampling Algorithm

1.  Algorithm systematic_resampling(S, n):
2.    S' = ∅, c_1 = w^1
3.    For i = 2 … n                                 Generate cdf
4.      c_i = c_{i-1} + w^i
5.    u_1 ~ U(0, n⁻¹], i = 1                        Initialize threshold
6.    For j = 1 … n                                 Draw samples …
7.      While (u_j > c_i)                           Skip until next threshold reached
8.        i = i + 1
9.      Insert S' = S' ∪ {⟨x^i, n⁻¹⟩}
10.     Increment threshold u_{j+1} = u_j + n⁻¹
11.   Return S'
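The algorithm transcribes to Python as follows; S is assumed to be a list of (state, weight) pairs whose weights sum to one.

```python
import random

def systematic_resampling(S, n):
    """Single-sweep (O(n)) resampling with one random offset u_1 ~ U(0, 1/n]."""
    # 2.-4. build the cumulative distribution c_i = c_{i-1} + w_i
    c, acc = [], 0.0
    for _, w in S:
        acc += w
        c.append(acc)
    u = random.uniform(0.0, 1.0 / n)          # 5. initialize threshold
    S_new, i = [], 0
    for _ in range(n):                        # 6. draw samples
        while u > c[i]:                       # 7. skip until threshold reached
            i += 1                            # 8.
        S_new.append((S[i][0], 1.0 / n))      # 9. insert with weight 1/n
        u += 1.0 / n                          # 10. increment threshold
    return S_new

random.seed(3)
S = [("a", 0.1), ("b", 0.2), ("c", 0.7)]
S_new = systematic_resampling(S, 10)
```

Because the n thresholds are equally spaced, a sample with weight w_i appears either ⌊n·w_i⌋ or ⌈n·w_i⌉ times, which is the low-variance property noted above.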
Start
Motion Model
Proximity Sensor Model Reminder
Laser sensor Sonar sensor
Sample-based Localization (sonar)

Initial Distribution

After Incorporating Ten Ultrasound Scans

After Incorporating 65 Ultrasound Scans

Estimated Path
Using Ceiling Maps for Localization

Vision-based Localization
• Sensor model P(z | x): compare the measurement z with the expected value h(x) from the ceiling map

Under a Light
• Measurement z and resulting P(z | x)

Next to a Light
• Measurement z and resulting P(z | x)

Elsewhere
• Measurement z and resulting P(z | x)
Global Localization Using Vision
Summary – Particle Filters
Particle filters are an implementation of recursive Bayesian filtering
They represent the posterior by a set of weighted samples
They can model non-Gaussian distributions
Proposal to draw new samples
Weight to account for the differences between the proposal and the target
Monte Carlo filter, Survival of the fittest, Condensation, Bootstrap filter53