Recognition in the wild

(1)

Intelligent Systems:

Recognition in the wild

Carsten Rother

(2)

Roadmap this lecture

• Example: Exam Questions
• Finishing off last lecture
• Recognition:
  • Define the problem
  • Decision Forests
• Person tracking … what runs on Microsoft Xbox

(3)

Roadmap this lecture

• Example: Exam Questions
• Finishing off last lecture
• Recognition:
  • Define the problem
  • Decision Forests
• Person tracking … what runs on Microsoft Xbox

(4)

Exam Questions

• Only from my part and Dimitri Schlesinger's part.
• Three types of tasks:
  1) Algorithms
  2) Definitions and knowledge questions
  3) Theoretical derivations
• Answers can be given in English or German.

(5)

1) Algorithms

What would a parallelized ICM algorithm do in the next two steps? Please fill them in.

Given the energy over binary variables x_i ∈ {0,1} on the four nodes x_1, x_2, x_3, x_4, with unary terms

θ_1(0) = 0, θ_1(1) = 1
θ_2(0) = 1, θ_2(1) = 1
θ_3(0) = 2, θ_3(1) = 1
θ_4(0) = 1, θ_4(1) = 2

and pairwise terms θ_ij(x_i, x_j) = |x_i − x_j| for all i, j.

Initial state: x_1 = 0, x_2 = 1, x_3 = 1, x_4 = 0

Step 1: x_1 = ?, x_2 = ?, x_3 = ?, x_4 = ?

Step 2: x_1 = ?, x_2 = ?, x_3 = ?, x_4 = ?

Hint: the dark nodes are not changed in the first step.
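Not part of the slides: a minimal sketch of a parallel ICM sweep for this kind of exercise. The edge set and the update schedule come from the figure, which is not reproduced here, so the chain x_1–x_2–x_3–x_4 and the choice of which nodes count as "dark" below are illustrative assumptions only.

```python
# Hedged sketch: parallel ICM on a small binary model.
# Assumptions (not from the slide figure): chain edges (1,2), (2,3), (3,4),
# nodes {1, 3} updated in step 1 and nodes {2, 4} in step 2.
unary = {1: [0, 1], 2: [1, 1], 3: [2, 1], 4: [1, 2]}   # theta_i(0), theta_i(1)
edges = [(1, 2), (2, 3), (3, 4)]                        # assumed neighbourhood
pair = lambda a, b: abs(a - b)                          # theta_ij(x_i, x_j) = |x_i - x_j|

x = {1: 0, 2: 1, 3: 1, 4: 0}                            # initial state

def local_energy(i, v, x):
    """Energy terms that depend on node i when it takes value v."""
    e = unary[i][v]
    for a, b in edges:
        if a == i:
            e += pair(v, x[b])
        elif b == i:
            e += pair(x[a], v)
    return e

def parallel_icm_step(x, active):
    """Update all 'active' nodes simultaneously, each using the old state."""
    return {i: (min((0, 1), key=lambda v: local_energy(i, v, x)) if i in active else xi)
            for i, xi in x.items()}

x = parallel_icm_step(x, active={1, 3})   # step 1
print("after step 1:", x)
x = parallel_icm_step(x, active={2, 4})   # step 2
print("after step 2:", x)
```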

(6)

2) Definitions and knowledge questions

• What is an "Ising prior"?

Answer from the slides:

θ_{i,j}(x_i, x_j) = exp{−|x_i − x_j|} is called the "Ising prior".

(7)

3) Theoretical derivations

Compute the probability that the sum of the numbers shown by two independently rolled dice is divisible by 5.

(Something similar was considered in the "Probability Theory" lecture, slide 14.)
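Not part of the slides: a brute-force check of this kind of dice question by enumerating all 36 equally likely outcomes.

```python
from fractions import Fraction
from itertools import product

# Count outcomes of two fair dice whose sum is divisible by 5 (sums 5 and 10).
favourable = sum(1 for a, b in product(range(1, 7), repeat=2) if (a + b) % 5 == 0)
print(Fraction(favourable, 36))   # 7/36: four ways to roll a 5, three ways to roll a 10
```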

(8)

Comment

• Statements on the slides that serve as background information will not be examined "in detail".

(9)

Example: Image Retargeting

E(𝒙) = Σ_i θ_i(x_i) + Σ_{i,j} θ_ij(x_i, x_j),   binary labels x_i ∈ {0,1}

[Figure (sketched): unary terms θ_i(x_i) force label 0 in one image region and label 1 in another; pairwise terms θ_ij(x_i, x_j) connect neighbouring pixels. The resulting labeling corresponds to a path from top to bottom separating label 0 from label 1.]

In this case the problem can be represented in 2 ways: labeling or path finding.

You can do that as an exercise, please see details in:

http://www.merl.com/reports/docs/TR2008-064.pdf

(10)

Roadmap this lecture

• Example: Exam Questions
• Finishing off last lecture
• Recognition:
  • Define the problem
  • Decision Forests
• Person tracking … what runs on Microsoft Xbox

(11)

Comments to last lecture

I added some slides to last lecture to make it

clearer. These slides are also added below. The

slides from last lecture are now online.

(12)

Reminder: Definition: Factor Graph models

• Given an undirected graph G = (V, F, E), where V and F are the sets of variable and factor nodes and E is the set of edges

• A factor graph defines a family of distributions:

P(𝒙) = (1/f) ∏_{F ∈ 𝔽} ψ_F(𝒙_{N(F)})   where   f = Σ_𝒙 ∏_{F ∈ 𝔽} ψ_F(𝒙_{N(F)})

f: partition function
F: a factor
𝔽: set of all factors
N(F): neighbourhood of a factor
ψ_F: function (not a distribution) depending on 𝒙_{N(F)}   (ψ_F: K^{|N(F)|} → ℝ, where x_i ∈ K)

(13)

Reminder: Definition: Undirected Graphical models

• Given an undirected graph G = (V, E), where V is the set of nodes and E the set of edges

• An undirected graphical model defines a family of distributions:

P(𝒙) = (1/f) ∏_{C ∈ C(G)} ψ_C(𝒙_C)   where   f = Σ_𝒙 ∏_{C ∈ C(G)} ψ_C(𝒙_C)

f: partition function
C(G): set of all cliques
C: a clique, i.e. a subset of variable indices
ψ_C: factor (not a distribution) depending on 𝒙_C   (ψ_C: K^{|C|} → ℝ, where x_i ∈ K)

Definition: a clique is a set of nodes where all nodes are linked with an edge.

(14)

Comment on definition

In some books the set C(G) is defined as the set of all maximal cliques only.

[Figure: undirected graphical model over x_1, …, x_5.]

The set of families of distributions is equivalent; for instance, a factor defined over a maximal clique can equally be written as a product of factors over its sub-cliques.

Using all cliques:

P(x_1, x_2, x_3, x_4, x_5) = (1/f) ψ(x_1, x_2, x_4) ψ(x_2, x_3, x_4) ψ(x_1, x_2) ψ(x_1, x_4) ψ(x_4, x_2) ψ(x_3, x_2) ψ(x_3, x_4) ψ(x_4, x_5) ψ(x_1) ψ(x_2) ψ(x_3) ψ(x_4) ψ(x_5)

Using maximal cliques only:

P(x_1, x_2, x_3, x_4, x_5) = (1/f) ψ(x_1, x_2, x_4) ψ(x_2, x_3, x_4) ψ(x_4, x_5)

(15)

Easy to convert between the two representations

π‘₯

1

π‘₯

2

π‘₯

3

π‘₯

4

π‘₯

5

π‘₯

1

π‘₯

2

π‘₯

3

π‘₯

4

π‘₯

5

Make sure that every factor is represented by a clique

Family of distributions with this

factor graph

Family of distributions with this undirected graphical model

Convert a factor graph in such a way that the family of distributions of the

undirected graphical model covers all possible distributions of this factor graph:

(16)

Easy to convert between the two representations

π‘₯

1

π‘₯

2

π‘₯

3

π‘₯

4

π‘₯

5

π‘₯

1

π‘₯

2

π‘₯

3

π‘₯

4

π‘₯

5

Family of distributions of this undirected graphical model and factor graph is the same

Convert an undirected graphical model in such a way that the family of

distributions of the factor graph covers all possible distributions of this

undirected graphical model:

(17)

Easy to convert between the two representations

π‘₯

1

π‘₯

2

π‘₯

3

π‘₯

4

π‘₯

5

π‘₯

1

π‘₯

2

π‘₯

3

π‘₯

4

π‘₯

5

Make sure that every clique has an associated factor

Family of distributions of this undirected graphical model and factor graph is the same

Convert an undirected graphical model in such a way that the family of distributions of the factor graph covers all possible distributions of this undirected graphical model:

Comment: this is also correct, but not minimal

representation

(18)

Easy to convert between the two representations

π‘₯

1

π‘₯

2

π‘₯

3

π‘₯

4

π‘₯

5

Family of distributions of this undirected graphical model and factor graph is the same

Convert an undirected graphical model in such a way that the family of distributions of the factor graph covers all possible distributions of this undirected graphical model:

Comment: this is not correct

π‘₯

1

π‘₯

2

π‘₯

3

π‘₯

4

π‘₯

5

(19)

Reminder: Last lecture Road Map

• Define: Structured Models
• Formulate applications as discrete labeling problems
• Discrete Inference:
  • Pixel-based: Iterated Conditional Modes (ICM)
  • Line-based: Dynamic Programming (DP)
  • Field-based: Graph Cut and Alpha-Expansion
• Interactive Image Segmentation
• From Generative models to
  • Discriminative models to
  • Discriminative functions

(20)

Reminder: Generative Model

Models explicitly (or implicitly) the distribution of the input 𝒛 and the output 𝒙:

Joint probability P(𝒙, 𝒛) = P(𝒛|𝒙) P(𝒙)   (likelihood × prior)

Comments:
1. The joint distribution does not necessarily have to be decomposed into likelihood and prior, but in practice it (nearly) always is.
2. Generative models are used successfully when input 𝒛 and output 𝒙 are very related, e.g. image denoising.

Pros:
1. Possible to sample both 𝒙 and 𝒛.
2. Can quite easily be used for many applications (since prior and likelihood are modeled separately).
3. In some applications, e.g. biology, people want to model likelihood and prior explicitly, since they want to understand the model as much as possible.
4. The probability can be used in bigger systems.

Cons:
1. It might not always be possible to write down the full distribution (it involves a distribution over images 𝒛).

(21)

Reminder: Discriminative model

Models that model the posterior directly are discriminative models.

In computer vision we mostly use the Gibbs distribution with an energy E:

P(𝒙|𝒛) = (1/f) exp{−E(𝒙, 𝒛)}   where   f = Σ_𝒙 exp{−E(𝒙, 𝒛)}

These are also called "conditional random fields".

Pros:
1. Simpler to write down than a generative model (no need to model 𝒛) and goes directly for the desired output 𝒙.
2. More flexible, since the energy is arbitrary.
3. The probability can be used in bigger systems.

Cons: we can no longer sample images 𝒛.

(22)

Reminder: Discriminative model

• Relation between posterior and joint: P(𝒙|𝒛) = (1/P(𝒛)) P(𝒙, 𝒛)

• P(𝒙, 𝒛), P(𝒙|𝒛) and E(𝒙, 𝒛) all have the same optimal solution 𝒙* given 𝒛:

• 𝒙* = argmax_𝒙 P(𝒙, 𝒛) given 𝒛

• 𝒙* = argmax_𝒙 P(𝒙|𝒛) given 𝒛   (since P(𝒙|𝒛) = (1/P(𝒛)) P(𝒙, 𝒛))

• 𝒙* = argmin_𝒙 E(𝒙, 𝒛)   (since −log P(𝒙|𝒛) = log f + E(𝒙, 𝒛))
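Not part of the slides: a tiny numerical check of this equivalence, using an arbitrary made-up energy table over four candidate labelings 𝒙 for one fixed observation 𝒛.

```python
import math

# Hypothetical energies E(x, z) for four candidate labelings x (z is fixed).
E = {"x_a": 2.0, "x_b": 0.5, "x_c": 1.3, "x_d": 3.1}

f = sum(math.exp(-e) for e in E.values())            # partition function
P = {x: math.exp(-e) / f for x, e in E.items()}      # posterior P(x | z)

# The minimizer of the energy is also the maximizer of the posterior.
assert min(E, key=E.get) == max(P, key=P.get)        # both are "x_b"
print(min(E, key=E.get), max(P, key=P.get))
```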

(23)

Comment on Generative Models

One may also write the joint distribution P(𝒙, 𝒛) as a Gibbs distribution:

P(𝒙, 𝒛) = (1/f) exp{−E(𝒙, 𝒛)}   where   f = Σ_{𝒙,𝒛} exp{−E(𝒙, 𝒛)}

If likelihood and prior are no longer modelled separately:

• Sampling (𝒙, 𝒛) gets more difficult.
• We can no longer learn prior and likelihood separately (as in de-noising).
• We train P(𝒙, 𝒛) = (1/f) exp{−E(𝒙, 𝒛)} and P(𝒙|𝒛) = (1/f) exp{−E(𝒙, 𝒛)} in quite a similar way.

[Figure: samples of 𝒙 and 𝒛.]

The advantages of a generative model over a discriminative model are then gone. But it also loses the meaning of a "generative" model, since we no longer have a likelihood which says how the data was "generated".

(24)

Discriminative functions

Models that model the classification problem via a function:

E(𝒙, 𝒛): K^n → ℝ,   𝒙* = argmin_𝒙 E(𝒙, 𝒛)

Examples:
- Energy
- Support vector machines
- Nearest neighbour classifier

Pros: most direct approach to model the problem.
Cons: no probabilities.

This is the most used approach in computer vision!

(25)

Recap

Modelling a problem:

• The input data is 𝒛 and the desired output is 𝒙.

We can identify three different approaches [see details in Bishop, page 42ff]:

• Generative (probabilistic) models: P(𝒙, 𝒛)
• Discriminative (probabilistic) models: P(𝒙|𝒛)
• Discriminative functions: f(𝒙, 𝒛)

The key differences are:

• Probabilistic or non-probabilistic model
• Generative models also model the data 𝒛
• Differences in training (see previous lectures)

(26)

Roadmap this lecture

• Example: Exam Questions
• Finishing off last lecture
• Recognition:
  • Define the problem
  • Decision Forests
• Person tracking … what runs on Microsoft Xbox

(27)

Slide credits

• Bernt Schiele
• Li Fei-Fei
• Rob Fergus
• Kristen Grauman
• Derek Hoiem
• Stefan Roth
• Jamie Shotton
• Antonio Criminisi

(28)

Recognition / Classification is a fundamental part of many "Intelligent Systems"

• Robots / intelligent cars
• Stereo image recognition
• Classification of sensor data
• Search in documents
• Biology / medicine: classify cells, DNA, etc.
• Language / hand drawings

Google input: "Show me frogs"

(29)

Image Recognition – What is the Goal?

• Object instance recognition (more precisely: known object instance recognition)
  • We know exactly the instance
• Object class recognition (also called: generic object recognition)
  • Different instances of the same class

[Figure: prototype images and a test image (instance recognition); train set, test set and result (class recognition).]

(30)

Class versus Instance – a gray zone

Same instance or not?

(31)

Class-based recognition: Level of Detail

• Image Categorization
  • One or more categories per image (e.g. "frog, branch")
• Object Class Detection
  • Also find the bounding box (a 2D bounding box for each frog)
• Part-based Object Detection
  • Find parts of the object (and in this way the full object)
• Semantic Segmentation (see last lecture; segmentation implies pixel-wise accuracy)
  • Object-class segmentation

(32)

Roadmap this lecture

• Example: Exam Questions
• Finishing off last lecture
• Recognition:
  • Define the problem
  • Decision Forests
• Person tracking … what runs on Microsoft Xbox

(33)

Random Forest

Slide credits: Jamie Shotton and Antonio Criminisi

• Proven very capable, especially for real-time applications
• e.g. keypoint recognition, Kinect body part classification
• High accuracy with very low computational cost
• Can exploit low-level features (e.g. raw pixel values)
• Feature vector computed sparsely, on demand
• Generalization through randomization
• Gives out confidence values
• Flexible, non-parametric model
• Easy to implement and parallelize

(34)

What can random forests do? Tasks

• Classification / decision forests
• Regression forests
• Manifold forests
• Density forests
• Semi-supervised forests

(35)

What can random forests do? Applications

• Classification / decision forests: e.g. semantic segmentation, DNA classification
• Regression forests: e.g. object localization
• Density forests: e.g. novelty detection
• Manifold forests: e.g. dimensionality reduction
• Semi-supervised forests: e.g. semi-supervised semantic segmentation

(36)

Reminder: Last Lecture

Global optimum:   𝒙* = argmax_𝒙 P(𝒛, 𝒙)

The user defined with brush strokes what is object and what is background. Now we want to do that automatically.

(37)

Semantic Segmentation

The desired output

Label each pixel with one out of 21 classes

[TextonBoost; Shotton et al., '06]

(38)

Failure Cases

(39)

Optimizes an Energy (details not important)

Define the energy (color model, location prior, class term, edge-aware smoothness prior):

E(𝒙, Θ) = Σ_i [ θ_i(x_i, z_i, Θ) + θ_i(x_i) + θ_i(x_i, 𝒛) ] + Σ_{i,j} θ_{i,j}(x_i, x_j),   with x_i ∈ {1, …, K}

Location prior: illustrated for the classes grass and sky.

Class information: each pixel gets a distribution over the 21 classes,

θ_i(x_i = c, 𝒛) = P(x_i = c | 𝒛).

Many options exist; here we use a Random Forest.

(40)

Semantic Segmentation

Class and location terms only vs. adding the edges and the color model.

We concentrate on getting this (the per-pixel class term) out.

(41)

Let us talk about features and simple (weak) classifiers

1) Get a d-dimensional feature, depending on some parameters θ:

1-dimensional: we want to classify the white pixel.
Feature: color (e.g. red channel) of the green pixel.
Parameters: 2D offset vector.

1-dimensional: we want to classify the white pixel.
Feature: average color (e.g. red channel) in the rectangle.
Parameters: 4D (offset vector + size of rectangle).

Object Class Segmentation using Random Forests

(42)

Let us talk about features and simple (weak) classifiers

1) Get a d-dimensional feature, depending on some parameters θ:

2-dimensional: we want to classify the white pixel.
Feature: color (e.g. red channel) of the green pixel and color (e.g. red channel) of the red pixel.
Parameters: 4D (2 offset vectors).

We will visualize it in this way:

(43)

Let us talk about features and simple (weak) classifiers

Examples of weak learners (feature responses shown for a 2D example):

• Axis-aligned linear classifier: the classifier has 2 parameters (which axis, and a continuous threshold). In general it may select only a very small subset of features.

• Linear classifier, with a generic line in homogeneous coordinates: a·x_1 + b·x_2 + c ≷ 0. The classifier has 2/3 parameters.

• Conic classifier, with a matrix representing a conic section: a·x_1² + b·x_2² + c·x_1·x_2 + d·x_1 + e·x_2 + f ≷ 0. The classifier has 5/6 parameters.
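Not part of the slides: a minimal sketch of the three weak-learner types acting on a 2-dimensional feature response (x_1, x_2); the parameter layouts in θ are my own illustrative choice.

```python
import numpy as np

def weak_axis_aligned(x, theta):
    """theta = (axis index, threshold): compare one coordinate against a threshold."""
    axis, tau = int(theta[0]), theta[1]
    return x[axis] > tau

def weak_linear(x, theta):
    """theta = (a, b, c): generic line in homogeneous coordinates, a*x1 + b*x2 + c > 0."""
    a, b, c = theta
    return a * x[0] + b * x[1] + c > 0

def weak_conic(x, theta):
    """theta = (a, b, c, d, e, f): conic, a*x1^2 + b*x2^2 + c*x1*x2 + d*x1 + e*x2 + f > 0."""
    a, b, c, d, e, f = theta
    x1, x2 = x
    return a * x1**2 + b * x2**2 + c * x1 * x2 + d * x1 + e * x2 + f > 0

x = np.array([0.3, -1.2])                      # a 2D feature response
print(weak_axis_aligned(x, (0, 0.0)))          # True:  x1 > 0
print(weak_linear(x, (1.0, 1.0, 0.0)))         # False: x1 + x2 < 0
print(weak_conic(x, (1, 1, 0, 0, 0, -1)))      # True:  x1^2 + x2^2 > 1
```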

(44)

Let us talk about features and simple (weak) classifiers

• We put all the parameters of the classifier and of the features into one vector 𝜽:
  axis-aligned linear classifier; linear classifier a·x_1 + b·x_2 + c ≷ 0; conic classifier a·x_1² + b·x_2² + c·x_1·x_2 + d·x_1 + e·x_2 + f ≷ 0.

• They are called weak classifiers since they will be used to build a stronger classifier (here a random forest, later boosting).

• We denote the classifier as h(θ, v) ∈ {true (> 0), false (< 0)}.

(45)

Decision Tree (Classification)

[Y. Amit and D. Geman. Shape quantization and recognition with randomized trees. Neural Computation, 9:1545–1588, 1997]

A general tree structure: a root node (0), internal (split) nodes and terminal (leaf) nodes, numbered 0–14 level by level.

A decision tree: e.g. "Is the top part blue?", "Is the bottom part green?", "Is the bottom part blue?"

(46)

Decision Tree – Test Time

Input test point: go left or right at each split node according to the node test; the reached leaf stores a class distribution p(c).

[Figure: input data in feature space and the route of the test point through the tree.]

(47)

Decision Tree – Train Time

Input: all training points (input data in feature space); each point has a class label.

The set of all labelled (training data) points, here 35 red and 23 blue.

Split the training set at each node.

Measure p(c) at each leaf; it could be 3 red and 1 blue, i.e. p(red) = 0.75, p(blue) = 0.25. (Remember, the feature space is also optimized via θ.)

(48)

Random Forests – Training of features (illustration)

What does it mean to optimize over θ?

Image labeling (2 classes, red and blue). Goal during training: separate the red pixels (class 1) from the blue pixels (class 2).

Feature:
• Value x_1: the value of the green color channel (could also be red or blue) if you look θ_1 pixels right and θ_2 pixels up, i.e. at position pos + (θ_1, θ_2).
• Value x_2: the value of the green color channel (could also be red or blue) if you look θ_3 pixels right and θ_4 pixels down, i.e. at position pos + (θ_3, θ_4).

Goal: find a θ such that it best separates the data.

• For each pixel the same feature test (at one split node) will be done.
• One has to define what happens with feature tests that reach outside the image.

(49)

Decision Tree – Split Criteria

Node training: before the split, measure Shannon's entropy H(S) of the class distribution; candidate splits (split 1, split 2) are then compared by their information gain

I(S) = H(S) − Σ_{i ∈ {L,R}} (|S_i| / |S|) · H(S_i).

Think of it as minimizing the entropy of the children.

(50)

Example Calculation

• We have |S| = 12 with |S_L| = 6 and |S_R| = 6.

• In S we have 6 red and 6 blue points (2 classes).

• We look at two possible splits:

1) 50%-50% class split (each side, S_L and S_R, gets 3 red and 3 blue):

H(S_L) = −(0.5 log 0.5 + 0.5 log 0.5) = 1
H(S_R) = −(0.5 log 0.5 + 0.5 log 0.5) = 1
I(S) = H(S) − (0.5·1 + 0.5·1) = H(S) − 1 = 0   (lower information gain)

2) 16%-84% class split (the left side has 5 red and 1 blue, the right side has 5 blue and 1 red):

H(S_L) = −(1/6 · log 1/6 + 5/6 · log 5/6) ≈ 0.65
H(S_R) = −(1/6 · log 1/6 + 5/6 · log 5/6) ≈ 0.65
I(S) = H(S) − (0.5 · 0.65 + 0.5 · 0.65) = H(S) − 0.65 ≈ 0.35   (higher information gain)
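Not part of the slides: a quick numerical check of this information-gain calculation with base-2 entropy.

```python
import math

def entropy(counts):
    """Shannon entropy (base 2) of a class histogram."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

H_S = entropy([6, 6])                # parent set: 6 red, 6 blue -> 1.0

def info_gain(left, right):
    n = sum(left) + sum(right)
    return H_S - (sum(left) / n * entropy(left) + sum(right) / n * entropy(right))

print(info_gain([3, 3], [3, 3]))     # split 1: 0.0
print(info_gain([5, 1], [1, 5]))     # split 2: ~0.35
```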

(51)

Decision Forest

The ensemble model: trees t = 1, 2, …, T. The forest output probability is the average of the tree posteriors,

p(c) = (1/T) Σ_{t=1}^{T} p_t(c),

where T is the number of trees.
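Not part of the slides: a minimal sketch of this ensemble average, assuming each tree already returns a per-class posterior p_t(c).

```python
import numpy as np

# Hypothetical per-tree posteriors p_t(c) for T = 3 trees and 4 classes.
tree_posteriors = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.5, 0.3, 0.1, 0.1],
    [0.6, 0.2, 0.1, 0.1],
])

forest_posterior = tree_posteriors.mean(axis=0)      # p(c) = (1/T) * sum_t p_t(c)
print(forest_posterior, forest_posterior.argmax())   # class 0 is the most likely
```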

(52)

Randomness in the training set

Bagging (randomizing the training set): from the full training set, a randomly sampled subset of the training data is made available to each tree t during forest training.

(53)

Example: Two classes; axis aligned linear classifier

Training different trees in the forest

Testing different trees in the forest

Training points

(54)

Example: Two classes; linear classifier

Training different trees in the forest

Testing different trees in the forest

Training points

(55)

Example: Two classes; conic classifier

Training different trees in the forest

Testing different trees in the forest

Training points

(56)

Decision Tree - Randomization

Randomized node optimization: from the full set T of all possible node test parameters, each node sees only a randomly sampled subset of size ρ (the randomness control parameter).

• ρ = |T|: no randomness and maximum tree correlation.
• ρ = 1: maximum randomness and minimum tree correlation.

The effect of ρ: a small value of ρ gives little tree correlation, a large value of ρ gives large tree correlation.

(57)

Decision Forest: the choices

• What is the depth of the trees?
• How many trees?
• Choice of ρ?
• What are the features?
• What type of classifier (linear, conic, etc.)?
• What split criterion (other than information gain)?

(58)

A crucial factor is tree depth

(59)

Definition

• Over-fitting: the effect that the model perfectly memorizes the training data but does not perform well on test data.

• Generalization: one of the most important aspects of a model is its ability to generalize, i.e. that new (unseen) test data is correctly classified. A model which overfits does not generalize well.

(60)

Example: four classes; conic classifier

Training different trees in the forest

Testing different trees in the forest

Training points

(61)

Examples

Training points: a 4-class spiral; a 4-class spiral with large gaps; a 4-class spiral with larger gaps. Below each: the testing posteriors.

(62)

Examples - overfitting

T = 200, D = 3, weak learner = conic; T = 200, D = 6, weak learner = conic; T = 200, D = 15, weak learner = conic.

Training points: 4-class mixed.

(63)

Roadmap this lecture

• Example: Exam Questions
• Finishing off last lecture
• Recognition:
  • Define the problem
  • Decision Forests
• Person tracking … what runs on Microsoft Xbox

(64)

Body Tracking with Kinect Camera

… what runs on Microsoft Xbox

(65)

Reminder: Kinect Camera

(66)

Example Depth: Person

top view

side view

(67)

RGB vs depth for pose estimation

• RGB
  • only works when well lit
  • background clutter
  • scale unknown
  • clothing & skin colour
• Depth
  • works in low light
  • person 'pops' out from the background
  • scale known
  • uniform texture

(68)

Body Tracking: Pipeline overview

Pipeline: input depth image → body part labelling → body parts → clustering → body joint hypotheses (shown in front, side and top view).

The body is divided into 31 body parts; the clustering is a simple centroid computation.
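Not part of the slides: a minimal sketch of the clustering step, assuming a per-pixel body-part labelling and the corresponding 3D points are already available, and using the simple per-part centroid mentioned above as the joint hypothesis.

```python
import numpy as np

def joint_hypotheses(points_3d, part_labels, num_parts=31):
    """Propose one joint hypothesis per body part as the centroid of its 3D points.

    points_3d:   (N, 3) array of back-projected 3D points for the person's pixels
    part_labels: (N,) array with the inferred body-part label of each pixel
    """
    hypotheses = {}
    for part in range(num_parts):
        mask = part_labels == part
        if mask.any():                            # skip parts with no supporting pixels
            hypotheses[part] = points_3d[mask].mean(axis=0)
    return hypotheses

# Tiny made-up example: 5 points, two body parts.
pts = np.array([[0, 0, 2.0], [0.1, 0, 2.0], [1, 1, 2.1], [1, 1.1, 2.1], [1, 0.9, 2.2]])
labels = np.array([0, 0, 1, 1, 1])
print(joint_hypotheses(pts, labels))
```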

(69)

Create lots and lots of training data

Model all sorts of variations:

Record mocap

100,000s of poses

Retarget to varied body shapes

Render (depth, body parts) pairs + add noise

[Vicon]

(70)

Train on synthetic data – test on real data

Synthetic (graphics) Real (hand-labelled)

(71)

Decision Forest

Each leaf stores a distribution over the 31 body parts:

(72)

Very Fast Features

A super simple feature that can be computed very fast on the input depth image:

x_i(𝐩) = J(𝐩) − J(𝐩 + Δ)

where J(𝐩) is the depth at image coordinate 𝐩, Δ is the offset and x_i(𝐩) is the feature response. The offset scales with depth: Δ = 𝐫_i / J(𝐩).

• 1D feature
• 2 parameters (𝐫_i)
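Not part of the slides: a minimal sketch of this depth-difference feature, assuming the depth image J is indexed as J[y, x] and that offsets falling outside the image return a large constant depth (one possible convention; how this case is handled has to be defined, as noted earlier for the RGB features).

```python
import numpy as np

BIG_DEPTH = 1e6   # assumed value for offsets that fall outside the image

def depth_feature(J, p, r):
    """x_i(p) = J(p) - J(p + delta) with a depth-normalized offset delta = r / J(p)."""
    def depth_at(q):
        y, x = int(round(q[0])), int(round(q[1]))
        if 0 <= y < J.shape[0] and 0 <= x < J.shape[1]:
            return J[y, x]
        return BIG_DEPTH                       # outside the image
    d = depth_at(p)
    delta = np.asarray(r, dtype=float) / d     # offset scales with depth
    return d - depth_at(np.asarray(p) + delta)

J = np.full((240, 320), 3.0)                   # toy depth image, 3 m everywhere
J[100:140, 150:200] = 1.5                      # a "person" closer to the camera
print(depth_feature(J, p=(120, 170), r=(0.0, 90.0)))   # compares person vs. background
```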

(73)

Number of Trees

[Figure: ground truth vs. inferred body parts (most likely) for 1 tree, 3 trees and 6 trees. Plot: average per-class accuracy on the test set (y-axis, 40%–55%) versus the number of trees (x-axis, 1–6).]

(74)

Depth of Trees

[Figure: input depth, ground truth parts and inferred parts (soft) for tree depths 1 through 18.]

(75)

Avoid Over-fitting

The more (diverse) training images, the better.

(76)

Results – Posterior Distributions

(77)

Results - Tracking

(78)

Roadmap this lecture

• Example: Exam Questions
• Finishing off last lecture
• Recognition:
  • Define the problem
  • Decision Forests
• Person tracking … what runs on Microsoft Xbox
