The knowledge of the autonomous agents that form the proposed multi-agent vision system described in Sect. 6.3 has been modelled using the expert system tool Clips 6.10 [CLIPS1998]. This tool is provided as a comprehensive C library that implements a LISP-style programming language as well as an efficient inference engine, both of which can be completely controlled from C/C++ programs. Furthermore, Clips can be extended by embedding user-defined C functions within its environment.

Although this appendix cannot cover the knowledge representation in great detail, some aspects are mentioned to give an impression of the provided high-level knowledge which enables the agents to react to requested tasks.

In general, the knowledge of an agent is stored in facts and in rules. While facts are utilised to store information about different entities such as real-world objects, rules mainly determine the behaviour of an agent, i.e. how an agent reacts in a specific situation.

Examples of facts stored in the knowledge base of the recognition fusion agent are listed below:

(is-a rim object)
(has-name rim rim)
(has-color rim red)

(is-a blue-screw object)
(has-name blue-screw screw)
(has-name blue-screw blue-screw)
(has-color blue-screw blue)

This short excerpt contains the information about two different objects of the Baufix domain, namely the rim and the blue screw. As indicated, the facts specify the particular types of the entities, their names, as well as their properties and attributes. There are two important points to note here: (i) entities can be identified not only by their unambiguous names but also by their features and attributes (like the colour attribute) and (ii) the provided knowledge about an entity can be easily augmented through further facts.
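Point (ii) can be illustrated by adding the knowledge about a further Baufix object in the same style. The following facts are hypothetical, but they use only attributes and names that also appear in the communication grammar of Tab. E.1:

```clips
; Hypothetical facts augmenting the knowledge base with a
; three-hole ledge (names and values chosen for illustration)
(is-a ledge-3 object)
(has-name ledge-3 ledge)
(has-name ledge-3 ledge-3)
(has-color ledge-3 wooden)
(has-no-of-holes ledge-3 3)
```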

An example of a simple rule, which is also stored in the recognition fusion agent, is given by:

;------------------------------------------------------------------
; makeProgramGetImageFromMessage
;------------------------------------------------------------------
; If the source of an image is the input-message, this rule adds
; a get-image operation to the program.
(defrule makeProgramGetImageFromMessage
   ?f0 <- (object (is-a GProgramOperationRequest) (program ?prg)
                  (operation get ?dest ?src)
                  (operation-code get ?var-dest ?))
   (is-a ?src image)
   (has-source ?src message)
   (has-index ?src ?index)
   =>
   ; Generate operation get-data-from-message
   (bind ?var-src (sym-cat var- (gensym*)))
   (bind ?operation
      (send ?prg insert-operation 1
         "(bind ?" ?var-dest " (send [input-message] get-data "
         ?index "))"
      )
   )
   ; Retract old operation request
   (send ?f0 delete)
)

This rule may be used during the generation of program scripts that are suitable to perform requested tasks. In particular, it adds a program line which fetches an image from the input message.

Such rules are employed to determine the behaviour of an agent. Several types of rules are provided: e.g. rules are used to analyse a message text, to generate program scripts (such as the one above), and to perform memory management. Similar to the facts, rules can easily be added in order to provide new functionality.
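A memory-management rule could, for instance, look like the following sketch. Note that the slot name state and its value processed are assumptions made for illustration; they are not taken from the actual system:

```clips
; Hypothetical memory-management rule: delete a program operation
; request once it has been marked as processed (the slot "state"
; and the value "processed" are assumed names)
(defrule cleanupProcessedRequests
   ?req <- (object (is-a GProgramOperationRequest) (state processed))
   =>
   (send ?req delete)
)
```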

E Interpretation of Abstract Task Descriptions

This appendix explains the syntax and the semantics of abstract task descriptions, which are specified using the proposed communication language (see Sect. 6.2.2), in more detail. This is done by describing how master agents analyse such abstract descriptions in order to react to the requested tasks adequately.

In particular, the interpretation process will be described on the basis of a recognition task that can be accomplished by the recognition fusion agent. This agent provides the communication language determined by the formal grammar shown in Tab. E.1, which is stored as facts and rules in the knowledge base of the agent (Appendix D). Since the recognition fusion agent has the capability to perform recognition tasks only, the provided communication language is relatively simple.

A typical recognition task which has been already mentioned in Sect. 6.4.1 is given by:

(recognise ?dest ?src)
(is-a ?dest object)
(or (has-name ?dest cube) (has-color ?dest wooden))
(is-a ?src image)
(has-source ?src message)
(has-index ?src 0)

This task requests the agent society to recognise all cubes as well as all wooden-coloured objects in the image attached to the message.

Table E.1: Communication language provided by the recognition fusion agent

<abstract-description> ::= (<goal> <variable>*) <goal-condition>*

<goal> ::= recognise | recognize

<goal-condition> ::= <attr-condition> | <not-condition> |

<and-condition> | <or-condition>

<attr-condition> ::= <is-a-attr> | <has-index-attr> |

<has-name-attr> | <has-color-attr> |

<has-no-of-holes-attr> | <has-source-attr>

<not-condition> ::= (not <goal-condition>)

<and-condition> ::= (and <goal-condition>+)

<or-condition> ::= (or <goal-condition>+)

<is-a-attr> ::= (is-a <variable> <class-spec>)

<has-color-attr> ::= (has-color <variable> <color-spec>)

<has-index-attr> ::= (has-index <variable> <number>+)

<has-name-attr> ::= (has-name <variable> <name-spec>)

<has-no-of-holes-attr> ::= (has-no-of-holes <variable> <no-of-holes-spec>)

<has-source-attr> ::= (has-source <variable> <source-spec>)

<class-spec> ::= image | object

<color-spec> ::= blue | green | orange | red | white | wood | wooden | yellow

<name-spec> ::= cube | blue-cube | green-cube | red-cube |
                yellow-cube | ledge | ledge-3 | ledge-5 | ledge-7 |
                nut | rim | screw | blue-screw | green-screw |
                red-screw | yellow-screw | slat | slat-3 | slat-5 |
                slat-7 | tyre

<no-of-holes-spec> ::= 3 | 5 | 7

<source-spec> ::= message

<variable> ::= ? <alpha> {<alpha> | <number>}

<alpha> ::= a | b | c | d | e | f | g | h | i | j | k | l | m | n | o | p | q | r | s | t | u | v | w | x | y | z

<number> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
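The grammar also admits composite goal conditions built with and, or, and not. A hypothetical task description that is valid with respect to Tab. E.1 (the object names are taken from <name-spec>) would be:

```clips
; Hypothetical task: recognise all ledges except those with
; seven holes in the image attached to the message
(recognise ?dest ?src)
(is-a ?dest object)
(and (has-name ?dest ledge)
     (not (has-no-of-holes ?dest 7)))
(is-a ?src image)
(has-source ?src message)
(has-index ?src 0)
```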

In general, the interpretation process of a message proceeds in the following way: First, the master agents which have received the message analyse the global goal of the requested task. In this case, the goal is to recognise a destination denoted by the variable ?dest from a source denoted by ?src. If an agent, like the recognition fusion agent, knows how to perform a recognition task, it examines the abstract description in further detail: The agent extracts all task specifications that are related to the destination variable ?dest and determines all entities stored in its knowledge base (see Appendix D) that match the given specifications. The inference engine of the agent performs the required search as a pattern matching process. For example, the above destination specifications can be unified with the following facts stored in the knowledge base of the recognition fusion agent:

(is-a cube-red object)
(has-name cube-red cube)

That means the agent is capable of recognising the entity cube-red. Similarly, the recognition fusion agent filters out all other types of cubes as well as the (wooden-coloured) slats.
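The pattern matching performed by the inference engine can be sketched as a CLIPS rule. This is a simplified illustration of the matching step, not the agent's actual implementation:

```clips
; Simplified sketch: report every entity in the knowledge base
; that matches the destination specification of the example task,
; i.e. every object that is a cube or wooden-coloured
(defrule matchDestinationEntities
   (is-a ?entity object)
   (or (has-name ?entity cube)
       (has-color ?entity wooden))
   =>
   (printout t "matching entity: " ?entity crlf)
)
```

Given the facts above, this rule would fire for cube-red, the other cubes, and the wooden-coloured slats.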

In the developed multi-agent vision architecture, it is sufficient that an agent understands only the global goal and the destination specifications. For example, the master FII-recognition agent does not understand the source specifications, which simply specify that the objects should be recognised in the image (is-a ?src image) stored in the first data slot (has-index ?src 0) of the message (has-source ?src message). Nevertheless, it is able to accomplish the recognition task correctly.

Generally, the agents generate program scripts in order to solve requested tasks. If the agents do not understand the source specifications, i.e. if no corresponding entities have been found, these scripts include further message passing to other agents. In such cases the agents request the society to generate a required sub-result from the unknown source.

However, if an agent understands the source specifications, it will use the provided information to trigger the program script.

It must be noted that the grammar of the proposed communication language is not static but can easily be expanded in order to adapt the communication language to different requirements. For example, it might be possible and useful to permit the specification of the quality of the input image:

(has-quality ?src noisy)

so that the master image processing agent knows that smoothing filters should be applied to the input image in order to enhance the recognition results.
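A rule exploiting such a quality specification could be structured like the one shown in Appendix D. The following is only a sketch: the operation string with smooth-image and the instance name [input-image] are assumed for illustration:

```clips
; Hypothetical rule: if the input image is marked as noisy, insert
; a smoothing operation at the beginning of the program script
; (the inserted operation string is an assumed example)
(defrule makeProgramSmoothNoisyImage
   (object (is-a GProgramOperationRequest) (program ?prg))
   (is-a ?src image)
   (has-quality ?src noisy)
   =>
   (send ?prg insert-operation 1 "(smooth-image [input-image])")
)
```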