Intelligent Systems


(1)

Intelligent Systems

Exercise 3: Energy Minimization techniques

(2)

Energy Minimization techniques

Outline:

1. Energy Minimization (example – segmentation)
2. Iterated Conditional Modes
3. Dynamic Programming
4. Block-ICM
5. MinCut
6. Equivalent transformations + 𝛼-expansion

(3)

Energy Minimization (Segmentation)

(4)

Energy Minimization
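
As a reference for the notation on the following slides, a minimal sketch of the standard pairwise energy over a graph (V, ℰ), assuming unary costs q and pairwise costs g (the slide's exact variant may differ):

    E(y) = Σ_{r∈V} q_r(y_r) + Σ_{(r,r')∈ℰ} g_{rr'}(y_r, y_{r'}),   y* = argmin_y E(y)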

(5)

Iterated Conditional Modes

Idea: choose (locally) the best label while the rest is kept fixed [Besag, 1986]

Repeat for every node r until no label changes; written in the pairwise-energy notation above, the update is

    y_r ← argmin_k [ q_r(k) + Σ_{r'∈N(r)} g_{rr'}(k, y_{r'}) ]

+ extremely simple, parallelizable
− coordinate-wise optimization; it does not converge to the global optimum even for very simple energies
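
A minimal Python sketch of this update loop, assuming an (n × K) array of unary costs and one shared, symmetric (K × K) pairwise matrix for all edges (the names and data layout are illustrative, not from the slides):

    import numpy as np

    def icm(q, g, neighbors, y, max_iters=100):
        """Iterated Conditional Modes: coordinate-wise minimization of
        E(y) = sum_r q[r, y_r] + sum_edges g[y_r, y_r']."""
        n, K = q.shape
        for _ in range(max_iters):
            changed = False
            for r in range(n):
                local = q[r].copy()          # cost of each label k at node r
                for rp in neighbors[r]:
                    local += g[:, y[rp]]     # pairwise cost against the fixed neighbor
                best = int(np.argmin(local))
                if best != y[r]:
                    y[r], changed = best, True
            if not changed:                  # local optimum: no single move helps
                break
        return y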

(6)

Dynamic Programming

Suppose that the image is one pixel high → the graph is a chain.

The goal is to compute

    y* = argmin_y [ Σ_{i=1..n} q_i(y_i) + Σ_{i=1..n−1} g_i(y_i, y_{i+1}) ]

(7)

Dynamic Programming – example

(8)

Dynamic Programming

General idea – propagate Bellman functions by

    F_1(k) = q_1(k),   F_{i+1}(k) = q_{i+1}(k) + min_{k'} [ F_i(k') + g_i(k', k) ]

The Bellman function F_i represents the quality of the best extension of each label onto the already processed part of the chain.

(9)

Dynamic Programming (algorithm)

p_i(k) = argmin_{k'} [ F_{i−1}(k') + g_{i−1}(k', k) ] is the best predecessor for the k-th label in the i-th node. Time complexity – O(n·K²) for a chain of n nodes with K labels.
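
A runnable sketch in the same illustrative layout as the ICM snippet above (q as an (n × K) array, one shared (K × K) pairwise matrix g for all chain edges):

    import numpy as np

    def dp_chain(q, g):
        """Exact minimization on a chain via Bellman functions F and
        best-predecessor pointers p; O(n * K^2) time."""
        n, K = q.shape
        F = np.zeros((n, K))                 # Bellman functions
        p = np.zeros((n, K), dtype=int)      # best predecessors
        F[0] = q[0]
        for i in range(1, n):
            # cand[k', k] = F[i-1, k'] + g[k', k]
            cand = F[i - 1][:, None] + g
            p[i] = np.argmin(cand, axis=0)
            F[i] = q[i] + cand[p[i], np.arange(K)]
        y = np.empty(n, dtype=int)           # backtrack from the best last label
        y[-1] = int(np.argmin(F[-1]))
        for i in range(n - 1, 0, -1):
            y[i - 1] = p[i, y[i]]
        return y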

(10)

Iterated Conditional Modes (again, but now 2D)

Fix the labels in all nodes except those of one chain (e.g. an image row). Before, in simple ICM, only a single node was updated at a time; now a whole row is re-optimized at once.

The “auxiliary” task on the chain is solvable exactly and efficiently by DP.
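
One possible sweep over image rows, reusing dp_chain from the sketch above (a 4-connected grid and a shared symmetric pairwise matrix G are simplifying assumptions):

    def block_icm_rows(Q, G, y, sweeps=10):
        """Block-ICM: re-optimize one row at a time, exactly, by DP.
        Q: (H, W, K) unary costs; y: (H, W) current labelling."""
        H, W, K = Q.shape
        for _ in range(sweeps):
            for i in range(H):
                q_eff = Q[i].copy()              # unaries of row i, shape (W, K)
                if i > 0:
                    q_eff += G[:, y[i - 1]].T    # fold in edges to the fixed row above
                if i < H - 1:
                    q_eff += G[:, y[i + 1]].T    # ... and to the fixed row below
                y[i] = dp_chain(q_eff, G)        # exact update of the whole row
        return y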

(11)

MinCut

(12)

MinCut for Binary Energy Minimization
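
A sketch of one standard reduction, for the canonical form used on the later slides, E(y) = Σ_r θ_r·y_r + Σ_{rr'} θ_{rr'}·y_r·y_{r'} with submodular pairwise terms (θ_{rr'} ≤ 0); networkx supplies the max-flow/min-cut computation, and the construction here is my own variant, not necessarily the slide's:

    import networkx as nx

    def mincut_binary(theta_u, theta_p):
        """Minimize E(y) = sum_r theta_u[r]*y_r + sum_(r,s) theta_p[r,s]*y_r*y_s
        over binary y via s-t MinCut. Convention: y_r = 1 iff node r ends up
        on the source side. theta_u must contain every node."""
        # Rewrite y_r*y_s = y_s - (1 - y_r)*y_s: the second term gets the
        # non-negative coefficient -theta_p and becomes an edge capacity.
        a = dict(theta_u)                        # effective unary coefficients
        G = nx.DiGraph()
        G.add_nodes_from(['S', 'T'])             # terminals ('S'/'T' assumed unused)
        for (r, s), t_rs in theta_p.items():
            assert t_rs <= 0, "MinCut needs submodular pairwise terms"
            a[s] += t_rs
            G.add_edge(s, r, capacity=-t_rs)     # cut iff y_s = 1 and y_r = 0
        const = 0.0
        for r, a_r in a.items():
            if a_r >= 0:
                G.add_edge(r, 'T', capacity=a_r)   # pay a_r if y_r = 1
            else:
                const += a_r                       # a_r*y_r = a_r + (-a_r)*(1 - y_r)
                G.add_edge('S', r, capacity=-a_r)  # pay -a_r if y_r = 0
        cut, (src_side, _) = nx.minimum_cut(G, 'S', 'T')
        y = {r: int(r in src_side) for r in theta_u}
        return y, cut + const                      # minimizer and its energy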

(13)

Search techniques

(14)

𝛼-expansion
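
In one 𝛼-expansion move every node either keeps its current label or switches to 𝛼, so each move is a binary Energy Minimization problem. A sketch of the outer loop; energy and expansion_move are hypothetical stand-ins (the latter would build and solve the binary problem, e.g. by the MinCut reduction above):

    def alpha_expansion(K, energy, expansion_move, y, max_sweeps=5):
        """Cycle over all labels and accept every improving expansion move.
        expansion_move(y, alpha) is assumed to solve the binary
        keep-vs-switch-to-alpha problem exactly (e.g. via MinCut)."""
        best = energy(y)
        for _ in range(max_sweeps):
            improved = False
            for alpha in range(K):           # one binary subproblem per label
                y_new = expansion_move(y, alpha)
                e_new = energy(y_new)
                if e_new < best:
                    y, best, improved = y_new, e_new, True
            if not improved:                 # no single expansion helps anymore
                break
        return y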

(15)

𝛼-expansion

After posing the 𝛼-expansion move as a binary problem, the pairwise function is given in a general form, but we need it in the “canonical form”

in order to transform it further to MinCut.

What to do?

(16)

Equivalent Transformation (re-parameterization)

Two tasks θ and θ′ are equivalent if

    E_θ(y) = E_θ′(y)

holds for all labelings y.

[θ] − the equivalence class (all tasks equivalent to θ).

Equivalent transformation: shift a quantity 𝜑 between a pairwise term and an adjacent unary one, e.g.

    θ′_r(k) = θ_r(k) + 𝜑_{rr'}(k),   θ′_{rr'}(k, k') = θ_{rr'}(k, k') − 𝜑_{rr'}(k)

The two changes cancel in every labeling, so the energy is unchanged.

(17)

Equivalent Transformation

Equivalent transformations can be seen as “vectors” 𝜑 that satisfy certain conditions: the changes they induce must cancel for every labeling, so that the energy stays the same.

(18)

Back to 𝛼-expansion

Remember our goal: bring the binary pairwise function of the expansion move into a form suitable for MinCut.

It can be done by equivalent transformations – below is a sketch of how to transform it to the “canonical form”

    g(y_r, y_{r'}) = θ_r y_r + θ_{rr'} y_r y_{r'} + θ_{r'} y_{r'}

For MinCut it should be transformed (similarly) once more, so that the resulting edge capacities are non-negative.
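
A worked identity that can serve for such a sketch (my derivation, not copied from the slide): denote the four values A = g(0,0), B = g(0,1), C = g(1,0), D = g(1,1); then for binary y_r, y_{r'}

    g(y_r, y_{r'}) = A + (C − A)·y_r + (B − A)·y_{r'} + (A + D − B − C)·y_r·y_{r'}

so θ_r = C − A, θ_{r'} = B − A, θ_{rr'} = A + D − B − C, and the remaining constant A can be shifted away by a further equivalent transformation. (Checking all four labelings confirms the identity.)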

(19)

Some example tasks

A (simple) Energy Minimization problem on a chain is given. Solve it using Dynamic Programming – write down the Bellman functions, find the best labelling (see solution draft on the next slide).

A (simple) multi-label Energy Minimization problem is given together with a labelling which should be improved using 𝛼-expansion. Consider a particular label 𝛼. Draft the corresponding binary Energy Minimization problem as well as the corresponding MinCut graph.

A pairwise function g(y_r, y_{r'}) over binary variables y_r ∈ {0,1} and y_{r'} ∈ {0,1} is given. Use re-parameterization in order to write it in the canonical form g(y_r, y_{r'}) = θ_r y_r + θ_{rr'} y_r y_{r'} + θ_{r'} y_{r'} (see the previous slide).

(20)

Solution draft for the Dynamic Programming task

3 nodes, 2 edges (a chain), 2 labels.

White – the original values for 𝑞 and 𝑔;
Yellow – the Bellman functions and the best labelling.

At the labels, “𝑎 + 𝑏 = 𝑐” means:

• 𝑎 – the original 𝑞-value
• 𝑏 – the quality of the best labelling so far
• 𝑐 – their sum, the value of the Bellman function
