A Rank Based Language for Qualitative Preferences
Gerhard Brewka
brewka@informatik.uni-leipzig.de
Universität Leipzig
Outline
1. Motivation
2. Ranked KBs for specifying preferences
3. Complex preferences: the language LPD
4. Example: selecting a movie
5. Conclusions
Problems and solutions
problem solutions are often value assignments: for each variable, pick a value from its domain
constraints describe legal assignments
(here: boolean variables and a set of formulas; the solutions are the models of this set)
legal assignments are not necessarily of the same quality
=> preferences
given that the number of assignments is exponential:
how to describe the preferences?
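A minimal sketch of this setting (my own illustration, not from the talk): boolean variables, constraints as formulas over them, and solutions as the models of the constraint set. The variables and constraints below are hypothetical.

from itertools import product

VARIABLES = ["x", "y", "z"]

# hypothetical constraints, written as predicates over an assignment
CONSTRAINTS = [
    lambda m: m["x"] or m["y"],         # x v y
    lambda m: not (m["y"] and m["z"]),  # not (y & z)
]

def models(variables, constraints):
    """Enumerate the legal assignments: those satisfying every constraint."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            yield assignment

for m in models(VARIABLES, CONSTRAINTS):
    print(m)   # 2^n candidate assignments are checked, only the legal ones printed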
Goal based preferences
simplest case: a single formula g representing the goal
m1 is better than m2 iff m1 satisfies g and m2 does not
agents typically have more than 1 goal
goals are of different importance => use a ranked knowledge base
a ranked knowledge base assigns an integer rank to each goal; one goal is more important than another iff its rank is higher
focus on qualitative approaches: the numbers are only used to represent a total preorder on the goals
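A small sketch of a ranked goal base and of the single-goal comparison above (the goals are hypothetical placeholders):

# ranked goal base: list of (rank, goal) pairs, higher rank = more important
GOALS = [
    (2, lambda m: m["x"]),              # more important goal: x
    (1, lambda m: m["y"] or m["z"]),    # less important goal: y v z
]

def better_single_goal(m1, m2, goal):
    """Single-goal case: m1 is better than m2 iff m1 satisfies the goal and m2 does not."""
    return goal(m1) and not goal(m2)

m1 = {"x": True,  "y": False, "z": False}
m2 = {"x": False, "y": True,  "z": True}
print(better_single_goal(m1, m2, GOALS[0][1]))   # True: m1 satisfies x, m2 does not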
Preferences on models
a ranked goal base with goals at ranks 3, 2 and 1; two models m1 and m2
Is m1 better than m2?
No if the importance of the best satisfied goal counts
No if the importance of the best falsified goal counts
Yes if the number of satisfied goals at the highest level where they differ counts
No if the subset relation at the highest level where they differ counts
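One hypothetical instance that yields exactly these answers: goals a, b at rank 3, c, d at rank 2, e at rank 1; m1 satisfies a, c, d, e and m2 satisfies b, c, e. Both models satisfy one rank-3 goal and falsify one, so neither the best satisfied nor the best falsified goal decides; at rank 3 the satisfied sets {a} and {b} are incomparable, so the subset criterion does not prefer m1 either; counting first differs at rank 2, where m1 satisfies two goals and m2 only one, so m1 wins under the counting criterion.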
Basic strategies
m1 better than m2 iff:
(best satisfied) the rank of the highest goal satisfied in m1 is higher than the rank of the highest goal satisfied in m2 (Benferhat et al. 02)
(best out) the rank of the highest goal unsatisfied in m1 is lower than the rank of the highest goal unsatisfied in m2 (Pearl 90)
(cardinality) at the highest rank where m1 and m2 differ wrt. the number of satisfied goals, m1 satisfies more goals (Benferhat et al. 93)
(inclusion) at the highest rank where m1 and m2 differ wrt. the satisfied goals, m1 satisfies a superset of the goals satisfied by m2 (Brewka 89)
Examples
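A Python sketch of the four strategies (my own illustration, not code from the paper); the goal base and the two models are the hypothetical ones introduced above:

# ranked goal base: rank -> set of goals (hypothetical instance from above)
BASE = {3: {"a", "b"}, 2: {"c", "d"}, 1: {"e"}}

def sat(model, base, rank):
    """Goals of the given rank satisfied by the model (a set of goal names)."""
    return base[rank] & model

def best_satisfied(m1, m2, base):
    """m1 better iff the rank of its highest satisfied goal is higher (Benferhat et al. 02)."""
    def top(m):
        return max((r for r in base if sat(m, base, r)), default=0)
    return top(m1) > top(m2)

def best_out(m1, m2, base):
    """m1 better iff the rank of its highest unsatisfied goal is lower (Pearl 90)."""
    def worst(m):
        return max((r for r in base if base[r] - m), default=0)
    return worst(m1) < worst(m2)

def cardinality(m1, m2, base):
    """At the highest rank where the numbers of satisfied goals differ, m1 satisfies more (Benferhat et al. 93)."""
    for r in sorted(base, reverse=True):
        n1, n2 = len(sat(m1, base, r)), len(sat(m2, base, r))
        if n1 != n2:
            return n1 > n2
    return False

def inclusion(m1, m2, base):
    """At the highest rank where the satisfied goals differ, m1 satisfies a strict superset (Brewka 89)."""
    for r in sorted(base, reverse=True):
        s1, s2 = sat(m1, base, r), sat(m2, base, r)
        if s1 != s2:
            return s1 > s2   # strict superset test on sets
    return False

m1, m2 = {"a", "c", "d", "e"}, {"b", "c", "e"}
for strategy in (best_satisfied, best_out, cardinality, inclusion):
    print(strategy.__name__, strategy(m1, m2, BASE))
# best_satisfied False, best_out False, cardinality True, inclusion False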
Relationships
better wrt. best satisfied implies better wrt. inclusion,
better wrt. best out implies better wrt. inclusion,
better wrt. inclusion implies better wrt. cardinality,
hence best satisfied and best out each also imply better wrt. cardinality,
best satisfied and best out do not imply each other,
none of the converse implications holds in general.
The preference language
Basic preference expressions:
pairs (s, K),
where s is one of the basic strategies and K is a ranked goal base
single agent: different strategies for different aspects
multiple agents: generate a global preference ordering
=> LPD:
basic preference expressions are in LPD
if e1 and e2 are in LPD, so are
(e1 ∧ e2) (Pareto combination),
(e1 ∨ e2),
(e1 > e2) (lexicographic combination),
and ¬e1 (inverse).
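A sketch of this syntax as a small Python datatype (class and field names are my own, not the paper's notation):

from dataclasses import dataclass
from typing import Callable, Dict, Set, Union

@dataclass
class Basic:
    """Basic preference expression: a strategy together with a ranked goal base."""
    strategy: Callable              # e.g. one of the four basic strategies
    base: Dict[int, Set[str]]       # rank -> goals of that rank

@dataclass
class And:                          # Pareto combination of two expressions
    left: "Expr"
    right: "Expr"

@dataclass
class Or:                           # union of the two orderings, then transitive closure
    left: "Expr"
    right: "Expr"

@dataclass
class Lex:                          # lexicographic combination, left expression more important
    left: "Expr"
    right: "Expr"

@dataclass
class Inv:                          # inverse ordering
    arg: "Expr"

Expr = Union[Basic, And, Or, Lex, Inv]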
Semantics of LPD
the meaning of an expression e is a preorder ≥e on the models
m1 >e m2 iff m1 ≥e m2 but not m2 ≥e m1
basic expressions induce a preorder on models as discussed earlier
let ≥1 and ≥2 be the preorders induced by e1 and e2, and let the inverse of a preorder be obtained by exchanging its arguments
the preorders induced by complex expressions are:
The combination methods
Pareto ordering (e1 ∧ e2): m1 at least as good as m2 iff m1 is at least as good with respect to both e1 and e2
(e1 ∨ e2): m1 at least as good as m2 iff m1 is at least as good with respect to e1 or e2; needs transitive closure
lexicographic ordering (e1 > e2): m1 at least as good as m2 iff m1 is at least as good with respect to e1 and e2, or strictly better wrt. e1
inverse ordering (¬e1): m1 at least as good as m2 iff m2 is at least as good as m1 wrt. e1
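A sketch of these combination methods over a finite set of models (my own reading of the slide, not the paper's code): a preorder is represented as a set of pairs (a, b) meaning a is at least as good as b.

def strictly(geq, a, b):
    """Strict part: a is strictly better than b iff a >= b but not b >= a."""
    return (a, b) in geq and (b, a) not in geq

def pareto(geq1, geq2):
    """e1 AND e2: at least as good with respect to both expressions."""
    return geq1 & geq2

def transitive_closure(rel):
    closure = set(rel)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def union(geq1, geq2):
    """e1 OR e2: at least as good with respect to one of them, then transitive closure."""
    return transitive_closure(geq1 | geq2)

def lex(geq1, geq2, models):
    """e1 > e2: at least as good wrt. both, or strictly better wrt. e1."""
    return {(a, b) for a in models for b in models
            if ((a, b) in geq1 and (a, b) in geq2) or strictly(geq1, a, b)}

def inverse(geq):
    """Inverse ordering: a at least as good as b iff b was at least as good as a."""
    return {(b, a) for (a, b) in geq}

# toy usage with three abstract models and two hand-built preorders
M = {"A", "B", "C"}
refl = {(m, m) for m in M}
e1 = refl | {("A", "B"), ("A", "C")}              # A on top, B and C incomparable
e2 = refl | {("B", "A"), ("B", "C"), ("A", "C")}  # B >= A >= C
print(strictly(lex(e1, e2, M), "A", "B"))          # True: e1 decides in favour of A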
Example: selecting a movie
three ranked goal bases: the wife's actor preferences, your own actor preferences,
and the type preferences
the wife's preferences are more important than yours (wife's birthday), type is as important as actors
Strategy: an LPD expression combining the bases accordingly, e.g. (wife > yours) ∧ types, each base evaluated with one of the basic strategies
Generating the models
complete information about the available movies (their actors and types), represented as formulas in the background knowledge
Preferences among movies
types are mutually exclusive: each movie has exactly one type
select exactly 1 movie
this information is part of the background knowledge: each of its models makes exactly 1 movie true
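A hypothetical encoding of this setup (movie names, actors and types below are made up; the talk's concrete goal bases are in the paper). Each candidate movie is one model, i.e. the set of facts it makes true, so the background constraint that exactly one movie is selected is built in. The wife's preferences are more important than mine (lexicographic), types as important as actors (Pareto); the full combination is evaluated with the combinators of the previous sketch, here only the pairwise verdicts of each base are listed.

MOVIES = {
    "m1": frozenset({"stars_x", "comedy"}),
    "m2": frozenset({"stars_y", "thriller"}),
    "m3": frozenset({"stars_x", "thriller"}),
}

# three ranked goal bases (rank -> goals, higher rank = more important)
WIFE  = {2: {"stars_y"}, 1: {"stars_x"}}   # wife's actor preferences
MINE  = {2: {"stars_x"}, 1: {"stars_y"}}   # my actor preferences
TYPES = {2: {"comedy"},  1: {"thriller"}}  # type preferences

def cardinality_better(m1, m2, base):
    """The cardinality strategy from the earlier sketch."""
    for r in sorted(base, reverse=True):
        n1, n2 = len(base[r] & m1), len(base[r] & m2)
        if n1 != n2:
            return n1 > n2
    return False

for label, base in (("wife", WIFE), ("mine", MINE), ("types", TYPES)):
    for a in MOVIES:
        for b in MOVIES:
            if a != b and cardinality_better(MOVIES[a], MOVIES[b], base):
                print(label + ":", a, "better than", b)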
Why is this nonmon?
the call for papers explicitly mentions preferences
preferred solutions change when background knowledge or goals are added
NMR via a preference on models: the standard approach
for a set of premises T and a preference ordering on its models, define:
T nonmonotonically entails a formula iff the formula holds in all maximally preferred models of T
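A sketch of this definition (the preference relation and the query below are hypothetical placeholders; any strict ordering induced by an LPD expression could be plugged in):

def maximally_preferred(models, better):
    """Models that no other model is strictly better than."""
    return [m for m in models if not any(better(n, m) for n in models)]

def nm_entails(models, better, query):
    """The query follows iff it holds in every maximally preferred model."""
    return all(query(m) for m in maximally_preferred(models, better))

# toy usage: preference = satisfying more of two hypothetical goals p, q
MODELS = [{"p": True, "q": False}, {"p": True, "q": True}, {"p": False, "q": True}]
score = lambda m: int(m["p"]) + int(m["q"])
better = lambda a, b: score(a) > score(b)
print(nm_entails(MODELS, better, lambda m: m["p"]))   # True: the single best model makes p true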
Discussion
related paper at KR: based on rules rather than goals, handles default negation, focuses on answer set optimization
CP-networks: ceteris paribus interpretation of preferences; here: a multi-criteria view
Benferhat et al.: bipolar representations: goals and rejections; rejections can be modeled using negation and an adequate strategy
a lot more related work -> see paper
Future work: properties of the language, partially ordered goals, computational issues