1.2 Foundations on Agent-Based Systems
1.2.2 Basic Agent Architectures
A frequently asked question concerns the difference between the concepts of agents and objects, as well as between agents and actors.
Within computer science, an object is described by the concepts of the class-instance relationship, inheritance and message transmission. The first concept regards a class as a model of structure and behaviour, whereas an instance is seen as a concrete representation of the class. By inheritance, a class is derivable from another one and is thereby able to use its properties. Message transmission allows the definition of polymorphic procedures whose code can be interpreted differently by different clients. Given these common concepts, objects cannot be interpreted as agents, because they are not designed to fulfil certain goals or to satisfy a need. Furthermore, message transmission is only a procedure invocation [Ferber, 1999]. Agents, in contrast, are able to decide about message acceptance and about an appropriate reaction.
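The contrast can be sketched in code. In the following minimal Python sketch (all class and message names are illustrative, not taken from the literature), a plain object's method always executes when invoked, whereas an agent checks an incoming message against its own goals and may refuse it:

```python
class PlainObject:
    """A plain object: receiving a message is just a procedure invocation."""
    def set_temperature(self, value):
        self.temperature = value  # the invoked code always executes


class Agent:
    """An agent decides whether and how to react to a received message."""
    def __init__(self, goal_temperature):
        self.goal_temperature = goal_temperature

    def receive(self, message):
        # The agent checks the request against its own goal before acting.
        if message["type"] == "set_temperature":
            if abs(message["value"] - self.goal_temperature) <= 5:
                return "accepted"
        return "rejected"  # autonomy: the message may be refused


agent = Agent(goal_temperature=20)
print(agent.receive({"type": "set_temperature", "value": 22}))  # accepted
print(agent.receive({"type": "set_temperature", "value": 40}))  # rejected
```

The point of the sketch is only the control flow: the object has no say in what happens on invocation, while the agent's `receive` embodies a decision.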
Actors are parallel systems communicating by asynchronous, buffered messages. They do not wait for an answer but instruct the receiver to send it to another actor. Actors are not agents for the same reasons as explained above.
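This communication pattern can be illustrated with a small, single-threaded Python sketch (the `Actor` class and the doubling behaviour are illustrative assumptions, not from the source): the sender deposits a message in a buffered mailbox without waiting, and the reply is forwarded to a third actor named in the message rather than back to the sender.

```python
import queue


class Actor:
    """A minimal actor: a buffered mailbox plus a behaviour, run cooperatively."""
    def __init__(self, name):
        self.name = name
        self.mailbox = queue.Queue()  # asynchronous, buffered communication

    def send(self, message):
        self.mailbox.put(message)     # the sender does not wait for an answer

    def step(self):
        message = self.mailbox.get()
        result = message["value"] * 2
        # The answer is not returned to the sender; it is forwarded to the
        # continuation actor named in the message itself.
        message["reply_to"].send({"value": result, "reply_to": None})


doubler = Actor("doubler")
sink = Actor("sink")
doubler.send({"value": 21, "reply_to": sink})  # fire-and-forget
doubler.step()
print(sink.mailbox.get()["value"])  # 42
```

A real actor system would run each actor concurrently; the single `step()` call here only makes the message flow visible.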
Figure 1.6: Comparison of agent and object (cp. [Ferber, 1999] and [Bauer and Müller, 2004])
Agent architectures represent the transition from agent theory towards its practical application [Kernchen and Vornholt, 2004]. Three main research and application directions exist.
1.2.2.1 Deliberative Agents
Deliberative agents are based on classic Artificial Intelligence, explicitly requiring a symbolic model of the environment as well as the capability for logical reasoning. Fundamental aspects are described by Newell and Simon in their "Physical Symbol System Hypothesis" [Newell and Simon, 1976]. This theory describes a system whose defining property is its capability to run processes for symbol processing. The symbols themselves can be used to create a symbolically encoded set of instructions. The final statement is that such a system is capable of performing intelligent actions.
Figure 1.7: Deliberative agent architecture (cp. [Brenner et al., 1998])
Deliberative agents are the next step in this development. They contain an explicit symbolic model of the environment and make decisions following certain logical rules. The targeted types of problems to be solved are:
◦ Transduction problems: describing the translation of the real world into an adequate symbolic description,
◦ Representation problems: describing the symbolic representation of information about real-world objects and processes, and how agents reason with these data.
The vision, especially among representatives of classic AI, was to create automatically planning, automatically reasoning, knowledge-based agents.
The most important deliberative architecture is the BDI architecture of Rao and Georgeff [Rao and Georgeff, 1991]. It is described below as an example.
The basic elements of this architecture are the Beliefs, Desires and Intentions. They form the basis for the agent's capability for logical reasoning. Beliefs contain data about environmental information, action possibilities, capabilities and resources. An agent must be able to manage the heterogeneous, changeable knowledge about the domain of its interest. The agent's desires derive from its beliefs and contain "individual" judgements of future environmental situations from the agent's point of view. The desires can be unrealistic and may even come into conflict with each other. The intentions are a subset of the agent's actual goals and point to the goal that is currently intended to be achieved.
Additional components completing the mental state of a BDI agent are its goals and plans [Brenner et al., 1998]. Goals are a subset of the agent's desires and describe its potential, realistic, non-conflicting latitude. Plans subsume intentions and describe actions to solve a problem.

Figure 1.8: BDI architecture (cp. [Rao and Georgeff, 1991])
The agent needs sensors to perceive data about its environment in order to create its world model (cp. figure 1.8). These data need to be interpreted and may cause adaptations or extensions of the agent's current beliefs. Actuators are used to realise plans with certain actions. Thereby the agent changes its environment in a goal-directed, methodical way.
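This perceive-deliberate-act cycle can be sketched as a minimal Python skeleton. All attribute names, percepts and plans below are illustrative assumptions; the sketch only mirrors the flow described above, not the full BDI logic of Rao and Georgeff:

```python
class BDIAgent:
    """Minimal BDI skeleton: beliefs are updated from percepts, desires are
    derived from beliefs, one desire becomes the intention, a plan acts."""

    def __init__(self):
        self.beliefs = {}        # world model built from perceived data
        self.desires = set()     # judged future situations, possibly conflicting
        self.intention = None    # the goal currently pursued

    def perceive(self, percepts):
        # Interpreted sensor data adapt or extend the current beliefs.
        self.beliefs.update(percepts)

    def deliberate(self):
        # Desires derive from beliefs; they may conflict with each other.
        if self.beliefs.get("temperature", 20) < 18:
            self.desires.add("heat_room")
        if self.beliefs.get("window_open"):
            self.desires.add("close_window")
        # Commit to a single desire as the current intention (here: the
        # alphabetically first one, purely for determinism of the sketch).
        self.intention = min(self.desires) if self.desires else None

    def act(self):
        # Plans map the intention to concrete actuator actions.
        plans = {"heat_room": "turn_heater_on",
                 "close_window": "close_window_motor"}
        return plans.get(self.intention, "idle")


agent = BDIAgent()
agent.perceive({"temperature": 16, "window_open": True})
agent.deliberate()
print(agent.act())  # close_window_motor
```

The essential structure is the three-stage loop: `perceive` changes beliefs, `deliberate` filters desires into one intention, and `act` executes the matching plan.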
Because of the high complexity of appropriate environmental representations, deliberative agents are rarely sufficiently applicable within dynamic environments.
1.2.2.2 Reactive Agents
Reactive agents are an alternative approach for solving problems that are not, or only insufficiently, solvable with symbolic AI. Accordingly, a reactive agent architecture includes neither an explicit description of the environment nor mechanisms for logical reasoning.
Figure 1.9: Reactive agent architecture (cp. [Rao and Georgeff, 1991])
The interaction with the environment is the basis for their intelligence, in contrast to the internal representations of deliberative agents [Brenner et al., 1998]. The basic architecture of a reactive agent is shown in figure 1.9. Even in complex situations the agent only needs to identify basic axioms or dependencies. This information is processed by task-specific competence modules to create reactions. Again, actuators influence the environment based on the determined actions.
A representative of reactive agent architectures is the Subsumption Architecture of [Brooks, 1991]. In it, every behaviour is an almost independent process that subsumes the lower-level behaviours (cp. figure 1.10).
Figure 1.10: Subsumption agent architecture (cp. [Kernchen and Vornholt, 2004])
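A greatly simplified sketch of this layering follows. It reduces subsumption to a priority list of independent behaviours, where a higher layer, when active, overrides the output of the layers below it; the concrete behaviours and percept names are illustrative assumptions, not Brooks' original robot competences:

```python
def seek_goal(percepts):
    """Highest layer: heads for a goal when one is visible."""
    if percepts.get("goal_visible"):
        return "move_to_goal"
    return None  # not active -> lower layers may fire


def avoid_obstacle(percepts):
    """Middle layer: steers away from obstacles."""
    if percepts.get("obstacle_ahead"):
        return "turn_left"
    return None


def wander(percepts):
    """Lowest layer: always competent, keeps the agent moving."""
    return "move_forward"


# Higher behaviours come first: when active, they subsume (override) the
# behaviours below; otherwise control falls through to the lower layers.
LAYERS = [seek_goal, avoid_obstacle, wander]


def subsumption_step(percepts):
    for behaviour in LAYERS:
        action = behaviour(percepts)
        if action is not None:
            return action


print(subsumption_step({"goal_visible": True}))    # move_to_goal
print(subsumption_step({"obstacle_ahead": True}))  # turn_left
print(subsumption_step({}))                        # move_forward
```

The real architecture runs all layers concurrently and lets higher layers suppress the inputs and outputs of lower ones; the sequential priority scan above only illustrates the subsumption relation itself.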
1.2.2.3 Hybrid Approaches
Hybrid architectures try to combine different architectural approaches into one complex system. The underlying idea is to obtain all the advantages, but not the trade-offs, of the particular approaches. Following Ferber, hybrid approaches can be classified according to the capacity of agents to accomplish their tasks individually as well as to plan their actions.
Figure 1.11: Hybrid agent architecture classification (cp. [Ferber, 1999])
Literature like [Brooks, 1991] proposes horizontal as well as vertical levels, each with its own functionality, within such complex systems. An example of a hybrid architecture is shown in figure 1.12; it was developed by Müller in 1996.
Figure 1.12: Hybrid agent architecture with a cooperative planning level, a local planning level and a behaviour-based level above a world interface of sensors, communication and actuators (cp. [Müller, 1996])
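The vertical layering of figure 1.12 can be sketched as follows. In this simplified, illustrative Python sketch (the situations and actions are assumptions, not Müller's original competence conditions), a situation rises from the reactive layer upwards until some layer is competent to handle it:

```python
def behaviour_based_level(situation):
    """Reactive layer: handles routine situations immediately."""
    if situation == "obstacle":
        return "swerve"
    return None  # not competent -> escalate to the next layer


def local_planning_level(situation):
    """Deliberative layer: plans for the agent's own goals."""
    if situation == "blocked_route":
        return "replan_route"
    return None


def cooperative_planning_level(situation):
    """Social layer: handles situations that involve other agents."""
    return "negotiate_with_peers"


# Bottom-up activation: the lowest competent level produces the action.
LEVELS = [behaviour_based_level, local_planning_level,
          cooperative_planning_level]


def hybrid_control(situation):
    for level in LEVELS:
        action = level(situation)
        if action is not None:
            return action


print(hybrid_control("obstacle"))         # swerve
print(hybrid_control("blocked_route"))    # replan_route
print(hybrid_control("shared_resource"))  # negotiate_with_peers
```

The sketch shows the hybrid idea in miniature: reactive speed for routine cases, deliberation where planning is needed, and cooperation only when a situation exceeds the individual agent's competence.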
One important advantage of agent technology is the possibility of finding better problem solutions through the cooperation of many individuals. This leads directly to the concept of multi-agent systems.