This chapter provides an introduction to concepts, modeling techniques, and tools for multi-agent systems (MAS) and multi-agent-based simulation (MABS). The structure and content of the presentation is largely based on Klügl (2000, Chs. 2-4). Several sections were adopted from Page and Kreutzer (2005, Ch. 11), which was co-written by the author and is partly based on his diploma thesis (Knaak, 2002).

Sociality: In order to reach its goals an agent communicates and interacts with other agents in a cooperative or competitive manner.

Adaptivity: An agent can adapt its future behaviour based on past experiences; i.e. it can learn.

Mobility: An agent is able to change its location within a physical or virtual environment (e.g. a computer network). (Page and Kreutzer, 2005, pp. 340-341)
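Taken together, these properties suggest the shape of a generic agent interface. The following Python sketch is purely illustrative; the class and method names are assumptions of this summary and do not stem from the cited sources:

from abc import ABC, abstractmethod

class Agent(ABC):
    """Hypothetical interface collecting the agent properties listed above."""

    @abstractmethod
    def perceive(self, environment):
        """Situatedness: observe the local environment."""

    @abstractmethod
    def act(self, environment):
        """Situatedness: change the environment directly."""

    @abstractmethod
    def send(self, receiver, message):
        """Sociality: communicate and interact with other agents."""

    @abstractmethod
    def learn(self, experience):
        """Adaptivity: adapt future behaviour based on past experiences."""

    @abstractmethod
    def move(self, location):
        """Mobility: change location in a physical or virtual environment."""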

While these properties are listed in many textbooks, their appropriateness is a subject of continuing discussion. One common objection is that the conceptual framework of agents might not provide significant advantages, because computer science has dealt with systems exhibiting similar properties before, e.g. in active objects or expert systems, which can be regarded as predecessors of agents (Wooldridge, 2003, pp. 26). A second objection is that talking about computer systems hardly justifies the use of philosophically or sociologically biased terms like autonomy.

In the following, we will discuss the benefits and limitations of the agent metaphor and compare it with related concepts. The presentation is based on Klügl (2000), Wooldridge (2003), Ferber (1995), and Padgham and Winikoff (2004).

3.1.1.1. Benefits and Limitations of the Agent Metaphor

As criticized in the first objection, MAS are indeed nothing 'new', but a mixture of concepts from object-orientation, distributed systems, artificial intelligence, and sociology. Their main purpose is to provide a 'natural abstraction and decomposition of complex [...] systems' (Padgham and Winikoff, 2004, p. 5). In this context, sociological and economic terms are used as a metaphor. Though MAS research has gained relevant results at the technological level, the provision of a new¹ conceptual framework might be regarded as the main contribution.

The uncritical adoption of sociological and economic terms, however, leads to the second objection. It is therefore important to narrow down the scope of biased notions like autonomy in the context of MAS. In this thesis (as often in agent-based simulation), these terms are on the one hand used to conceptually describe actors from a real system. On the other hand, several notions can be given a technical interpretation that helps to distinguish agents from related concepts.

Situatedness, for instance, is a characteristic property because it delimits agents from earlier AI artifacts like expert systems (Wooldridge, 2003, p. 27). According to Ferber (1995, p. 53), classical AI programs are abstract 'thinkers' that can at most advise users how to act on the basis of presented data. In contrast, agents perceive and change their environment directly.

They can only perceive, act, and move within a certain local radius (Klügl, 2000, p. 59), which fits the modeling of real-world actors in simulation well (see also Klügl, 2000, p. 6).

Autonomy, even in a restricted sense, distinguishes agents from the object-oriented world view (Wooldridge, 2003, p. 25). This is summarized in the often-cited sentence that 'objects do it for free [while] agents do it because they want to' (Wooldridge, 2003, p. 26). Some authors concretize the term by identifying different degrees of autonomy. According to Klügl (2000, p. 11), autonomy of control means that an agent can perform its tasks without extensive interventions

¹ but nevertheless historically grown, as indicated above

of users. This is a rather unspecific property in the simulation context, since entities in many simulation models exhibit autonomy of control without being regarded as agents. Autonomy of behavior denotes learning agents that autonomously modify their behavior based on past experiences.

Though autonomous control and behavior can be implemented in an object-oriented language, autonomy is not an inherent concept of this world view, which is dominated by the principle of design by contract (see e.g. Meyer, 1997).

Figure 3.1.: Conceptual distinction between objects and agents (adopted [with modifications] from Ferber, 1995, p. 58). (Caption and figure cited from Page and Kreutzer, 2005, p. 353.) [Figure not reproduced; it contrasts an object, accessed through its methods via queries and replies, with an agent, whose goals and services mediate incoming messages.]

In (Page and Kreutzer, 2005, p. 352), we reviewed the discussion by Ferber (1995) on this subject:

Objects are defined through their interfaces; i.e. the services they can perform on demand.

Their implementation must therefore ensure that all methods are correctly implemented and that expected results are returned (Ferber, 1995, p. 57). This viewpoint clashes with the requirement for agent autonomy, which leaves agents free to pursue their own goals.

Agents can, for example, refuse a request if it would cause conict or if some information is currently unavailable (Ferber, 1995, p. 58).

The important point of distinction is that such decisions are based on the perceived state of an environment, as well as the state of the agent's internal knowledge base. The same request can therefore lead to different reactions at different times. In a typical implementation this results in an additional filtering level, which mediates between service requests and internal agent processes (see Figure 3.1). In this way agents themselves retain tight control over their own behaviour.

An agent's actions can fail in certain situations (Wooldridge, 2003, p. 24), or it might select between different possibilities to satisfy its clients' needs based on their respective preferences (Garion and van der Torre, 2006, p. 175; see also Knaak, 2002, p. 7). This leads to higher demands on the agent's 'intelligence', where the term denotes behavioral flexibility. Agents with flexible behavior provide increased robustness in situations in which the environment is challenging (Padgham and Winikoff, 2004, pp. 45).
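The filtering level sketched in Figure 3.1 can be made concrete in a few lines of Python. The example below is an illustrative sketch with invented names: an object performs any requested service unconditionally, while an agent checks the request against its perceived environment and internal commitments and may refuse.

class PrinterObject:
    """Object view: a service is performed unconditionally on demand."""
    def print_document(self, doc):
        return f"printing {doc}"

class PrinterAgent:
    """Agent view: a filtering level mediates between service requests
    and internal processes; the same request may therefore be answered
    differently at different times."""
    def __init__(self):
        self.toner_low = False      # perceived state of the environment
        self.committed_jobs = []    # internal knowledge base / commitments

    def handle_request(self, request):
        # Refuse if the request conflicts with the agent's own goals
        # or if required resources are currently unavailable.
        if self.toner_low:
            return "refuse: toner low"
        if len(self.committed_jobs) > 10:
            return "refuse: committed to other jobs"
        self.committed_jobs.append(request)
        return f"accept: printing {request}"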

The presented benefits of the agent metaphor must be weighed against a number of problems:

1. The slightly 'esoteric' terminology of MAS might raise exaggerated expectations. As discussed above, this can be avoided by clearly distinguishing between the conceptual and technical implications of the metaphor. According to Padgham and Winikoff (2004, p. 4), agents are 'not magic [but ...] simply an approach to structuring and developing software'.

2. The very general agent metaphor might be overused in situations where other concepts appear more appropriate. An example is the modeling of a spatial environment as a specific 'agent' in simulation (Klügl, 2000, p. 104). Moss (2000, p. 2) notes that MA(B)S research often seems to exhibit an overstated focus on its abstract concepts instead of practical applications.

3. Complex agent systems tend to be hard to analyze and validate (Klügl, 2000, p. 190).

While this problem can be partly reduced by finding an appropriate level of modeling detail (Klügl, 2000, p. 74) and applying proven software engineering methods, it is also inherent to the modeling style.

3.1.2. Agent Architectures²

According to Klügl (2000, p. 14) an agent's architecture determines its internal information processing, i.e. how perceptions are mapped to actions. Many agent architectures have been proposed, ranging from intentionally simple designs to complex reasoning systems (Klügl, 2000, p. 15).

Figure 3.2.: Classification of agent architectures based on Ferber (1995); Klügl (2000); Müller (1996); Russell and Norvig (2003). [Figure not reproduced; it arranges the architecture types reactive, subsymbolic, subcognitive, tropistic, model-based, deliberative, cognitive, hybrid, utility-based, and learning.]

In view of this variety, the literature distinguishes several classes of agent architectures. Different classification schemes are reviewed and integrated by Klügl (2000, Sec. 2.2.1), who regards the complexity of the internal representation as the main classification criterion (Klügl, 2000, p. 14). Figure 3.2 displays a structured overview of the architectural types mentioned in this summary. Most authors distinguish between reactive and deliberative agents as the two main classes.

The behaviour of reactive agents is constituted by more or less direct reactions to stimuli. Their design is often inspired by the idea of a collective 'intelligence without reason' (Brooks, 1999) emerging from basic interactions (Klügl, 2000, p. 20). Klügl (2000, p. 20) criticizes that the

² This section is based on (Page and Kreutzer, 2005, Ch. 11.2.3), which contains a more detailed presentation of exemplary agent architectures based on the diploma thesis of the author (Knaak, 2002, Sec. 2.4).

term 'reactive' is misleading since deliberative agents can also react to external stimuli.³ Instead, she identifies two classes of 'non-deliberative' architectures: Subsymbolic architectures use non-symbolic internal representations such as neural networks (Klügl, 2000, p. 18). Subcognitive architectures apply symbolic information processing, often based on rule-based production systems or finite automata (Klügl, 2000, pp. 19).

Russell and Norvig (2003, Ch. 2.4) take the presence of an internal memory as a further criterion to classify reactive agents:⁴ A simple reflex agent⁵ is 'memory-less', without an internal model of its environment. A model-based reflex agent, in contrast, has an internal state that additionally influences its action selection.
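This distinction is easy to state in code. The following Python sketch uses invented percepts and rules purely for illustration; only the structure (a stateless condition-action mapping versus an internal state accumulated from past percepts) reflects the classification by Russell and Norvig:

def simple_reflex_agent(percept):
    """Memory-less: the action depends only on the current percept."""
    if percept == "obstacle":
        return "turn"
    return "move_forward"

class ModelBasedReflexAgent:
    """Maintains an internal state (a model of the environment) that
    additionally influences action selection."""
    def __init__(self):
        self.visited = set()  # internal state built from past percepts

    def step(self, percept, next_position):
        if percept == "obstacle" or next_position in self.visited:
            return "turn"  # the internal model, not only the percept, decides
        self.visited.add(next_position)  # update the internal model
        return "move_forward"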

Deliberative agents hold internal representations of goals and are able to generate and execute plans for their achievement (Klügl, 2000, pp. 20). Again, several sub-classes can be identied.

Russell and Norvig (2003, Sec. 2.4) distinguish between plan-based agents capable of dynamic planning, and utility-based agents that can additionally evaluate the utility of alternative plans with respect to their current goals.⁶ Müller (1996, cited in Klügl, 2000, p. 15) adds the class of hybrid architectures that consist of at least one deliberative and one reactive layer.
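The additional evaluation step of a utility-based agent might be sketched as follows; the plan interface (achieves) and the utility function are assumptions made for illustration:

def choose_plan(candidate_plans, goals, utility):
    """Plan-based selection picks any plan that achieves the goals;
    a utility-based agent additionally ranks the feasible alternatives."""
    feasible = [plan for plan in candidate_plans if plan.achieves(goals)]
    if not feasible:
        return None
    # Utility-based step: evaluate alternatives w.r.t. the current goals.
    return max(feasible, key=lambda plan: utility(plan, goals))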

Klügl (2000, pp. 22) introduces the class of cognitive architectures, i.e. deliberative agents whose design is explicitly based on theories from cognitive science. As examples, she names the BDI (Belief, Desire, Intention) architecture (e.g. Rao and Georgeff, 1995), based on a theory of rational action by Bratman (1987), and the PECS (Physics, Emotion, Cognition, Status) architecture by Urban (1997), which strives to include non-rational aspects related to physics and emotions into agent design (Klügl, 2000, pp. 22-23).
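A BDI agent's deliberation is commonly summarized as a control cycle of belief revision, option generation, intention filtering, and planning. The sketch below follows this textbook scheme; the functional parameters are placeholders for components that a concrete architecture would supply:

def bdi_cycle(beliefs, intentions, percept, revise, options, filter_, plan):
    """One pass of a schematic BDI deliberation cycle.

    revise, options, filter_, and plan stand in for the agent's belief
    revision, option (desire) generation, intention filtering, and
    planning components."""
    beliefs = revise(beliefs, percept)                   # update beliefs
    desires = options(beliefs, intentions)               # generate options
    intentions = filter_(beliefs, desires, intentions)   # commit to intentions
    return beliefs, intentions, plan(beliefs, intentions)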

Learning agents (also called adaptive agents by some authors) can autonomously acquire new abilities, or adapt existing ones, based on observations of their environment (Russell and Norvig, 2003, Sec. 2.4).

3.1.3. Multi-Agent Systems

As reviewed by Page and Kreutzer (2005, p. 341):

A straightforward definition of multi-agent systems (MAS) views them as systems in Section [... 2.1]'s sense. MAS' defining property is that its components are sets of agents, located and cooperating in a shared environment (Wooldridge, 2003, pp. 105).

A formal definition mirroring this explanation is stated e.g. by Ferber (1995). In such a definition, a MAS might also contain further passive components (objects or resources) that are not understood as agents.
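Schematically, such a definition might take the following form (a paraphrase of the idea for illustration, not Ferber's exact formulation):

    MAS = ⟨E, O, A⟩ with A ⊆ O,

where E denotes the shared environment, O the set of entities situated in E, and A the subset of those entities that are agents; the remaining entities in O \ A are the passive objects or resources mentioned above.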

The analysis of MAS is often focused on how structures and processes at the macroscopic level emerge from interactions of agents at the microscopic level, with little or no influence from a central control instance (Jennings et al., 1998, cited in Klügl, 2000, p. 13). The MAS metaphor is thus closely connected to questions of distributed problem solving based on local

³ cited in (Page and Kreutzer, 2005, p. 343)

⁴ also cited by Klügl (2000, pp. 16)

⁵ called a tropistic agent by Ferber (1995, p. 192; see also Klügl, 2000, p. 15)

⁶ see also Klügl (2000, pp. 16-17)

information (Jennings et al., 1998, cited in Klügl, 2000, p. 13), computational emergence (see Section 2.1.1), and (self-)organisation (e.g. Holland, 1998); see also the brief discussion in (Page and Kreutzer, 2005, p. 342).
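This micro-macro relation is mirrored in the typical main loop of an agent-based simulation. The following Python sketch is illustrative only and implies no specific framework; its point is that each agent decides on purely local information, while macroscopic measures are observed over the whole population:

import random

def run_mabs(agents, environment, steps, neighbourhood, observe_macro):
    """Illustrative MABS main loop: microscopic local behaviour below,
    macroscopic observation above, and no central control of the agents."""
    history = []
    for _ in range(steps):
        random.shuffle(agents)  # avoid a fixed, centrally imposed order
        for agent in agents:
            local_view = neighbourhood(environment, agent)  # local information only
            action = agent.decide(local_view)               # microscopic behaviour
            environment.apply(agent, action)
        history.append(observe_macro(environment))          # emergent macro measures
    return history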