

In most cases, many memories are stored within one TOP. That is why indices are needed to find a particular memory.

More than one TOP can be active at a given time, which is reasonable because people want to be able to apply memories that are about the kind of goal they are pursuing as well as others that have nothing to do with the particular goal. Cross-contextual learning is enabled by breaking experiences apart and later recombining them with the help of TOPs during remembering. For this process, memory structures have to be linked in the brain. According to Schank (1999), information about how memory structures are ordinarily linked in frequently occurring combinations is held in MOPs (Memory Organization Packets). MOPs are both memory structures and processing structures.

“The primary job of a MOP in processing new inputs is to provide relevant memory structures that will in turn provide expectations necessary to understanding what is being received. Thus MOPs are responsible for filling in implicit information about what events must have happened but were not explicitly remembered.” (Schank, 1999, p.113)
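Since MOPs are described only conceptually, a toy sketch may make this processing role concrete. The following Python fragment is an illustration under the assumption that a MOP can be approximated as an ordered list of expected scenes; it is not Schank's implementation, and all identifiers are invented:

```python
# Toy illustration of a MOP as a processing structure. The assumption that a
# MOP reduces to an ordered list of scenes is a simplification for this sketch.
from dataclasses import dataclass

@dataclass
class MOP:
    name: str
    scenes: list[str]  # the ordered scenes the MOP expects

    def fill_in(self, observed: list[str]) -> list[tuple[str, bool]]:
        """Return all scenes of the MOP, marking which were observed and
        which are filled in as implicit events that must have happened."""
        observed_set = set(observed)
        return [(scene, scene in observed_set) for scene in self.scenes]

restaurant = MOP("visit restaurant", ["enter", "order", "eat", "pay", "leave"])
for scene, seen in restaurant.fill_in(["enter", "eat", "leave"]):
    print(scene, "(observed)" if seen else "(inferred as implicit)")
# "order" and "pay" are supplied by the MOP, as in the quote above.
```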

TOPs and MOPs are the structural basis of memories and information in memory (Schank, 1999). All these structures need to have the following basic content (a minimal data-structure sketch follows the list):

• a prototype

• a set of expectations organized in terms of the prototype

• a set of memories organized in terms of the previously failed expectations of the prototype

• a characteristic goal
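Read as a data structure, the list above can be sketched as follows. This is a minimal rendering in Python; the field names are hypothetical, and "organized in terms of failed expectations" is interpreted as indexing stored memories by the expectation that failed, which also illustrates why indices make individual memories findable:

```python
# Hypothetical rendering of the four components listed above; the field names
# and the indexing scheme are assumptions, not Schank's (1999) implementation.
from dataclasses import dataclass, field

@dataclass
class MemoryStructure:
    prototype: str                                  # prototypical course of events
    goal: str                                       # characteristic goal
    expectations: list[str] = field(default_factory=list)
    # memories indexed by the expectation that failed when they were stored
    memories_by_failed_expectation: dict[str, list[str]] = field(default_factory=dict)

greeting = MemoryStructure(
    prototype="the robot answers when greeted",
    goal="open the interaction",
    expectations=["robot replies to 'hello'"],
)
greeting.memories_by_failed_expectation["robot replies to 'hello'"] = [
    "episode: the robot stayed silent",
]
```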

If a situation deviates from the prototype, expectations fail. As was mentioned above, this enables learning because, with the help of TOPs, failed expectations can be used to improve outcome predictions.

This idea is central to the model presented below because the aim of the user studies that it will be applied to is to find differences between the prototype of the designed interaction and the mental representations of the users, and to identify behaviors that are connected to the (dis)confirmation of their expectations. What happens in the case of expectation disconfirmation will be discussed in the following section.
2.2.4   Violation of expectations

Figure 2-5. Expectancy violations theory (Burgoon, 1993, p.34)

Here the approach presented in Burgoon (1993) seems to be helpful. As Figure 2-5 shows, Burgoon (1993) has identified three main factors influencing expectancy violation: communicator characteristics, relational characteristics, and context characteristics. The communicator characteristics capture salient features of the interlocutors, such as appearance, demographics, etc. The “10-arrow model for the study of interpersonal expectancy effects” (Harris & Rosenthal, 1985) relates to the communicator characteristics. It distinguishes variables that moderate or mediate expectancy effects between communicators.

Moderator variables are preexisting variables like sex, age, and personality. They influence the size of the interpersonal expectancy effects. Mediating variables describe behaviors by which expectations are communicated. Mediating variables and expectancies are modified continually in the course of the interaction. In HRI, communicator characteristics have been researched in terms of robot appearance; for example, in Lohse, Hegel, and Wrede (2008) it was found that appearance determines which tasks subjects ascribe to a robot. The influence of robot appearance is further discussed in Section 2.2.6.

The relational characteristics explain how the interlocutors are related to each other (degree of familiarity, liking, attraction, status, etc.). One study on the relationship between humans and robots has been reported by Dautenhahn et al. (2005), who raised the question “What is a robot companion – Friend, assistant or butler?”. Their results indicated that the subjects preferred a household robot to be a kind of butler or assistant, but not a friend. This relation highlights that the user has a higher social status than the robot and is in control of it.

Finally, context characteristics include information about the situation (environmental constraints, aspects of the situation such as privacy, formality, and task orientation). The relationship between context, situation, and expectations will be further discussed in Section 2.3.

According to Burgoon (1993), these three factors determine the expectancies of the interlocutors and the communicator reward valence. The communicator reward valence is a measure of how rewarding it is to communicate with a certain person. It is composed of physical attractiveness, task expertise and knowledge, status, giving positive and negative feedback, and other factors.

Communicator reward valence affects the interaction because it determines the valence of expectancy violations. Violations can be connected to conversational distance, touch, gaze, nonverbal involvement, and verbal utterances. They arouse and distract the perceiver, who then pays more attention to the violator. Moreover, the perceiver tries to interpret the violation and evaluates it based on the interpretation, taking into account who has committed the violation.

For example, the communicator reward valence is very high for a boss who is knowledgeable and gives feedback to his or her subordinates. Hence, if he or she violated an expectation, this would be interpreted differently than a violation committed by another person or by a robot with a low communicator reward valence.
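Burgoon (1993) does not quantify the reward valence, but the way it modulates the interpretation of a violation can be illustrated with an invented scoring. The factor names, the equal weighting, and the threshold below are all assumptions:

```python
# Illustrative only: Burgoon (1993) gives no formula, so the equal weighting
# and the threshold are assumptions made for this sketch.

def reward_valence(attractiveness: float, expertise: float,
                   status: float, feedback_quality: float) -> float:
    """Crude communicator reward valence: mean of factors scored in [0, 1]."""
    return (attractiveness + expertise + status + feedback_quality) / 4

def interpret_violation(valence: float, threshold: float = 0.5) -> str:
    """A high-reward communicator's violation tends to be evaluated more positively."""
    if valence >= threshold:
        return "violation likely interpreted charitably (e.g., knowledgeable boss)"
    return "violation likely interpreted negatively (e.g., low-reward robot)"

print(interpret_violation(reward_valence(0.6, 0.9, 0.8, 0.7)))  # charitable
print(interpret_violation(reward_valence(0.3, 0.2, 0.2, 0.4)))  # negative
```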

Since the enacted behavior might be more positive or negative than the expected behavior, the valence of a violation can be either positive or negative. At first sight, one might suppose that the most promising strategy in interaction is to always confirm the expectations. However, some deviations might lead to better results. Two experiments by Kiesler (1973) have demonstrated that high-status individuals who disconfirm expectations may achieve better results and seem more attractive. Low-status individuals, on the other hand, are expected to act according to normative expectations and are then rated as more attractive. As was mentioned before, Dautenhahn et al. (2005) found that the users preferred robots that had a lower status. Hence, the robots should behave in accordance with the expectations of the users and not violate them. The direction that research has taken indicates that this is what robot designers try to achieve (for example, Dautenhahn et al., 2006; Hüttenrauch, Severinson-Eklundh, Green, & Topp, 2006; Koay, Walters, & Dautenhahn, 2005; Pacchierotti, Christensen, & Jensfelt, 2005; Walters, Koay, Woods, Syrdal, & Dautenhahn, 2007). However, the robots often violate expectations involuntarily. This connects to Schank’s (1999) book, where he states that people can violate our expectations either because they intend to or because of some error (motivational vs. error explanation). Often the reason is hard to tell. Schank (1999) has suggested reasons why people do not do what we expect:

• misperception of the situation/different perception of optimum strategy

• different goal

• lack of resources

• disbelief (not believing they should do what you thought they should do)

• lack of common sense (not knowing what to do, lack of ability)

In most situations, error explanations are considered first. If no error can be found, the assumptions about the person’s goals could be erroneous. However, determining these goals is highly speculative. If it can be assumed that people knew exactly what they were doing and no error was made, a motivational explanation has to be sought. For robots, it can be assumed that the violations are mostly based on errors (unless experiments are designed to explicitly violate user expectations). Such errors occur if the robots fail to perceive the situation correctly and lack common sense. Even worse, robots do not notice that they have violated expectations. Here the question arises of how humans recognize that this has happened.
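The explanation-seeking order just described (error first, then goals, then motivation) can be summarized as a small decision procedure. The inputs are hypothetical placeholders; only the ordering of the checks follows the text:

```python
# Sketch of the explanation-seeking order described above; the boolean inputs
# are placeholders, only the ordering of the checks follows the text.

def explain_violation(error_found: bool, goals_plausibly_differ: bool) -> str:
    if error_found:
        return "error explanation (misperception, lack of resources/ability, ...)"
    if goals_plausibly_differ:
        return "goal-based explanation (speculative: the actor pursued another goal)"
    # the actor knew exactly what they were doing and made no error
    return "motivational explanation (the violation was intended)"

# For robots, violations are assumed to reduce mostly to the first branch:
print(explain_violation(error_found=True, goals_plausibly_differ=False))
```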

In her model, Burgoon (1993) has not specified a mechanism that detects expectancy violations. However, Roese and Sherman (2007) have done so; they term this mechanism regulatory feedback loops. In the loops, the current state of a situation is compared to an ideal or expected state:

“Behavior control therefore requires continuous online processing in the form of continuous comparison, or pattern matching, between the current state and the expected state. Very likely in parallel to this comparative process is the conceptually similar online comparison between the current state and recent past states.” (Roese & Sherman, 2007, p.93)

The regulatory feedback loop is an online mechanism. It is triggered when people try to validate their hypotheses about the world (at least for subjective expectancies) and search for more information. Validation processes occur especially in cases of expectation disconfirmation, for example, in the case of a robot not answering when greeted.
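As a rough illustration, the loop can be sketched as an online comparison of each incoming state with the expected state and with recent past states. The state representation and the equality-based matching are assumptions; Roese and Sherman (2007) describe the mechanism only conceptually:

```python
# Illustration only: Roese and Sherman (2007) describe the loop conceptually;
# representing states as strings and matching by equality are assumptions.

def regulatory_feedback_loop(observed_states, expected_state, recent_past):
    """Compare each incoming state online with the expected state and with
    recent past states; a mismatch with the expectation triggers validation."""
    for state in observed_states:
        if state != expected_state:          # pattern matching, current vs. expected
            print(f"expectancy violation: got {state!r}, expected {expected_state!r}")
            print("-> validation triggered: search for more information")
        if recent_past and state != recent_past[-1]:
            print(f"change relative to recent past: {state!r}")  # parallel comparison
        recent_past.append(state)

# Example: a user greets the robot and expects an answer.
regulatory_feedback_loop(["robot stays silent"], "robot answers the greeting", [])
```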

Disconfirmation has certain consequences. If an expectation is disconfirmed, it usually becomes less certain. Moreover, disconfirmed expectations become more explicit, salient, and accessible and are, therefore, more likely to be tested in the future (see also Section 2.2.1).

In Schank’s (1999) words, the main question that has to be asked after an expectation failure is: what changes have to be made to the memory structure responsible for this expectation? He describes the following procedure: (a) the conditions that were present when the failure occurred have to be identified, and the question has to be answered whether the current context differs from situations in which the expectation was confirmed; (b) one has to determine whether the failure indicates a failure of the knowledge structure itself or a failure of specific expectations in that structure, in order to decide what aspects of the structure should be altered.
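The two steps can be sketched as a function. The inputs and return values are hypothetical; only the order of the two questions follows Schank (1999):

```python
# Sketch of the two-step procedure; inputs and return values are hypothetical,
# only the ordering of the two questions follows Schank (1999).

def analyze_expectation_failure(failure_context: set,
                                confirming_contexts: list,
                                structure_wide_failure: bool) -> str:
    # (a) identify conditions present at failure and check whether the current
    #     context differs from contexts in which the expectation was confirmed
    if confirming_contexts:
        novel_conditions = failure_context - set.union(*confirming_contexts)
    else:
        novel_conditions = failure_context
    if novel_conditions:
        return f"restrict expectation to contexts without {sorted(novel_conditions)}"
    # (b) decide whether the structure itself or a specific expectation failed
    if structure_wide_failure:
        return "revise the knowledge structure itself"
    return "revise the specific expectation within the structure"

print(analyze_expectation_failure(
    failure_context={"noisy room", "greeting"},
    confirming_contexts=[{"quiet room", "greeting"}],
    structure_wide_failure=False,
))  # -> restrict expectation to contexts without ['noisy room']
```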

In other words, disconfirmation of expectations causes more careful processing because humans try to make sense of the actions. Sense making includes three classes of activity: causal attribution to identify the cause of a particular outcome, counterfactual thinking to determine what would have happened under different key causal conditions, and hindsight bias, which focuses on the question of whether the outcome was sensible and predictable (Roese & Sherman, 2007). Attribution is the activity that has been discussed most deeply in the literature. It is the process by which people arrive at causal explanations for events in the social world, particularly for actions they and other people perform (Sears, Peplau, Freeman, & Taylor, 1988, p.117). In this context, expectations about the future are influenced by past experiences; for example, future success is expected if past success is attributed to ability. Thus, if the users in HRI assume that the robot has done something because they have the ability to get it to do so and the action did not occur accidentally, they probably expect that the robot will react in the same way again in a future situation.

However, if the expectation is disconfirmed, the sense-making activities may result in ignoring, tagging, bridging (the discrepancy is explained away by connecting expectancy and event), or revising (which, in contrast to bridging, involves changes at a foundational level) (Roese & Sherman, 2007). Which consequence results depends on “the magnitude of the discrepancy between expectancy and outcome and the degree of complexity or sophistication of the underlying schemata basis of the expectancy” (Roese & Sherman, 2007, p.103). Small discrepancies will most likely be ignored, moderate ones will lead to slow and steady expectancy revision, and large discrepancies will lead to separate subcategories. Especially if similar deviations are encountered repeatedly, at some point they form a new structure. When a new structure is used repeatedly, the links to the episodes it is based on become harder to find (Schank, 1999).
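The mapping from discrepancy magnitude to consequence can be summarized in a few lines. The numeric thresholds are invented; only the three-way distinction of ignoring, revising, and forming subcategories comes from Roese and Sherman (2007):

```python
# Thresholds are invented for illustration; only the three-way distinction
# (ignore / revise / subcategorize) follows Roese and Sherman (2007).

def consequence_of_discrepancy(magnitude: float) -> str:
    """Map the expectancy-outcome discrepancy (scored 0..1) to its likely consequence."""
    if magnitude < 0.2:
        return "ignore (small discrepancy)"
    if magnitude < 0.7:
        return "slow and steady expectancy revision (moderate discrepancy)"
    return "form a separate subcategory; repeated cases become a new structure"

for m in (0.1, 0.5, 0.9):
    print(m, "->", consequence_of_discrepancy(m))
```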

Whatever change is made to memory structures, only the explanation of an expectation failure protects us from encountering the same failure again (Schank, 1999). Explanations are the essence of real thinking. However, people are usually content with script-based explanations if these are available (for more information about scripts see Section 2.2.5.3).

While discrepancies may lead to changes in memory structure, perceivers may also confirm their inaccurate expectancies. According to Neuberg (1996), perceivers may confirm their inaccurate expectancies in two primary ways: by creating self-fulfilling prophecies (target behavior becomes consistent with the expectancies) and by exhibiting a cognitive bias (target behavior is inappropriately viewed as being expectancy-consistent). Thus, the perceivers impose their definition of the situation on the target, or even affect the target’s behavior, because they want to confirm their own expectations.

Olson, Roese, and Zanna (1996) have also investigated affective and physiological consequences of expectancies (for example, the placebo effect) and of expectation disconfirmation. Affect is usually negative when expectancies are disconfirmed, an exception being positive disconfirmations like a surprise party. In general, an expected negative outcome is dissatisfying, and an unexpected negative outcome even more so. Conversely, a positive outcome is even more satisfying when it is unexpected.
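The pattern described here amounts to unexpectedness amplifying affect in the direction of the outcome. A toy rendering follows; the amplification factor is an assumption, and only the ordinal relations come from the text:

```python
# Toy rendering of the ordinal pattern above; the amplification factor is an
# assumption, only the direction of the effect follows the text.

def affect(outcome_valence: float, expected: bool) -> float:
    """Unexpected outcomes amplify affect in the direction of the outcome."""
    amplification = 1.0 if expected else 1.5   # assumed factor
    return outcome_valence * amplification

print(affect(-1.0, expected=True))    # -1.0  expected negative: dissatisfying
print(affect(-1.0, expected=False))   # -1.5  unexpected negative: even worse
print(affect(+1.0, expected=True))    # +1.0  expected positive: satisfying
print(affect(+1.0, expected=False))   # +1.5  unexpected positive (surprise party)
```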

To conclude, the goal of HRI should be that the robot does not disconfirm the users’ expectations, especially not in ways leading to negative outcomes. Unfortunately, this goal is difficult to achieve. Therefore, it is important to take into account the consequences of disconfirmation, which depend on the characteristics of the human and the robot, the relationship between the interactants, and the context. One part of the model presented below is to determine how the users make sense of the disconfirmation of their expectations by the robot and what the results of the sense-making process are. Knowledge about this will help to design robots in a way that better contributes to a correct explanation of why a disconfirmation occurred, or that avoids the disconfirmation in the first place.