1 INTRODUCTION AND THEORETICAL FRAMEWORK

1.2. Measuring Presentation Competence

1.2.1. Conceptualizing presentation competence assessment

The definition of presentation competence is the first step for its assessment. As discussed above, appropriateness and effectiveness are integral components of presentation competence. A construct-specific discussion of whether both components are measurable and accessible for empirical research can contribute to conceptualizing presentation competence assessment. This discussion also addresses different assessment perspectives, such as self-reports or observations, because they represent different sources of information. They each have strengths and limitations regarding the assessment of appropriateness and effectiveness.

Measuring appropriateness

The term and the notion of appropriateness are rooted in rhetorical theory (see 1.1.2). Appropriateness relates to the uniqueness of each situation. It takes into account the circumstances of the situation, the speaker’s individuality, the audience, characteristics of the speech location as well as the subject of the speech. A behavior is appropriate when it matches social rules or norms. Appropriateness remains relevant in modern communication theories focused on interpersonal communication (Westmyer et al., 1998). According to Asmuth (1992), appropriateness is a rather flexible criterion. When the situation changes, the requirements of appropriateness change too (Hoffmann, 2009). Acting according to simple, fixed rhetorical rules – such as always keep your hands above your waist – does not meet the complex requirements of changing presentation situations (Kramer, 2012). In addition to this flexibility, violations of or deviations from appropriateness are also included in rhetorical theory. A deviation is accepted when it is useful and generally supports the goal of the speech (Stroh, 2014). Consequently, the rhetorical aptum is a relational principle that goes beyond any detailed schema (Asmuth, 1992). Quintilian (Institutio oratoria, XI, 1, 91) asserted that there is no fixed measure for appropriateness. It is hard to define appropriateness because of the diverse aspects it is related to (Hannken-Illjes, 2013). Hence, from this point of view, appropriateness is very challenging to measure.

Nevertheless, some approaches to assessing appropriateness have been developed based on the rhetorical conceptualization of appropriateness. While appropriateness implies a kind of flexibility, it is not an arbitrary construct (Hannken-Illjes, 2013). According to Hannken-Illjes, knowledge and adaption of behavior to align with social conventions and social rules that are not codified contribute to appropriate performance. To assess whether a behavior is appropriate, rhetoricians considered iudicium and consilium to be relevant (Ueding & Steinbrink, 2011).

Iudicium refers to practical wisdom and relates to the competence to judge which part of rhetorical theory is useful in the specific situation (Wagner, 1998). Consilium relates to strategic considerations (Ueding & Steinbrink, 2011). Both can be trained through practice. Hence, measuring appropriate presentation behavior involves two aspects: i) appropriateness includes (subjective) experience and/or knowledge and ii) appropriateness is directed towards others.

First, experience and/or explicit and tacit knowledge are required to assess whether social norms are met. This is also relevant for other appropriateness-related aspects, such as text composition, presentation location, and audience. By identifying recurrent situations, i.e., standard situations, rhetorical theory provided a key for assessing appropriate presentation behavior. For example, genres of speeches, such as political speech, forensic speech or epideictic speech, are defined as recurrent situations (see Gottschling & Kramer, 2012). Each genre of speech is characterized by specific recurrent features, such as its specific purpose, setting, length, and subject matter (Rossette-Crake, 2019). Based on these characteristics, expectations of the speech can be deduced. This approach can be transferred to contemporary speeches such as students’ presentations. When specific features of a presentation are kept constant, e.g., the same presentation length, the same scientific topic, or the same situational setting, the speaker as well as the person who assesses the presentation can deduce appropriate behaviors.

Thus, knowledge about the standardized situation of the presentation task makes it easier to derive appropriate presentation behavior. People without experience and/or knowledge of the situational and social circumstances of the delivered presentation cannot assess its appropriateness. They would base their judgments on an incorrect derivation of appropriateness. Second, appropriateness is critically linked to the fact that a presentation is directed at others. This indicates that assessment by others is crucial.

In educational settings, when students start learning the basics of presentation competence, including the basics of appropriateness, the teacher’s perspective is of interest. It can be assumed that teachers are familiar with presentation situations in the school context. They create presentation situations for their students and possess experience and knowledge of the audience. On the one hand, teachers themselves form the target audience. On the other hand, they have knowledge about the rest of the class, which also represents part of the target audience. Thus, teachers can take into account community rules in the students’ peer group, even though uncertainty remains. They are able to take into account the class’s social expectations from an external point of view without disregarding their own subjective expectations. Based on this dual orientation, teachers as experts can take a meta-perspective. This goes beyond a merely subjective point of view, which is problematic because it takes the teacher’s subjective perspective as representative of the universal audience perspective. In contrast, a meta-perspective evaluates appropriateness more broadly than based only on the teacher’s own perspective (Hugenberg & Yoder, 1996).

In summary, the construct of appropriateness cannot be uniformly defined across all situations, as it requires the explicit or implicit knowledge and/or experience of social norms, the addressed community, and the situation. This suggests that experts must be involved in measuring appropriateness.

Measuring effectiveness

Effectiveness is also an integral part of presentation competence. As described above, the term effectiveness refers to achieving the goals of a presentation (see 1.1.2). In order to assess effective presentation behavior, one must know the goal as well as examine whether it has been achieved or not. Both highlight the difficulties of assessing effectiveness. For example, Hugenberg and Yoder (1996) argue that there might be not only one but multiple goals within a given presentation. Informing, catching the audience’s attention, and/or entertaining might all be goals pursued at the same time. In addition, the goals can change over the course of a presentation. Therefore, the goal cannot be identified from an external perspective unless it is determined a priori. For example, teachers might specifically ask their students to inform the audience in a presentation task in school. After specifying the goal of informing, knowledge tests for the audience could be developed in order to assess effectiveness. However, a new test would be necessary for each presentation, which is neither efficient nor feasible, as each test would be limited to a fixed presentation condition. Consequently, measurement approaches for effectiveness depend on the definition of the presentation goals. With respect to self-reports, Parks (1994) argued that actors focus on achieving their goals. One reason for this can be that speakers themselves know their multiple goals within a presentation and thus can monitor their goal achievement better than the audience. However, the main focus in effectiveness measurement must be on the audience, because only the audience has access to their own knowledge, reactions and emotions, which the speaker addresses with his/her presentation goals.

In summary, measuring effectiveness first requires the identification of one or multiple presentation goals set by the speaker or external individuals. These goals are not accessible from an external point of view unless they are determined a priori, because the speaker’s goals can change during a presentation. Focusing on the goal of informing the audience would lead to developing knowledge tests for each presentation. Including this component of presentation competence shifts the focus of assessing presentation competence to examining the audience’s knowledge and feelings, which would help to advance presentation competence research.

Multi-perspective assessment

In addition to proceeding from a construct-specific discussion in the measurement context, one can also approach both constructs from an empirical perspective. Specifically, both constructs can be captured using different measurement perspectives. The goal of this section is to show the advantages and disadvantages of each data collection method. Parks (1994) differentiates between the actor’s own perspective (in this case: self-reports) and the observer’s perspective.

Self-reports are considered a quick way to obtain data from many people (Abernethy, 2015). The self-report perspective provides information about the actor himself/herself and his/her feelings, experiences or thoughts that are only available through direct questioning (Abernethy, 2015). With respect to communication competence, self-reports refer not to competence itself but to the individual’s perception of how competent he/she is (McCroskey & McCroskey, 1988). Their validity for actual communication competence performance is low (McCroskey & McCroskey, 1988). Nevertheless, the self-report perspective is of relevance. According to McCroskey and McCroskey (1988), self-perceived communication competence determines future communication decisions more than actual communication competence.

However, when assessing self-report data, biasing factors must be taken into account and minimized to the greatest extent possible (Döring & Bortz, 2016). For example, a central biasing factor for self-reports is social desirability response bias, in which people answer in line with social expectations in order to appear in a favorable light instead of providing true personal information (Abernethy, 2015). One way to reduce this bias is to ensure anonymization during data collection in order to reduce social pressure (e.g., Gottfredson et al., 2015; Hager, 2000).

Applied to the assessment of presentation competence, the self-report perspective can reveal information about self-perceived presentation competence, which is powerful for future presentation behavior. However, when interpreting self-report data, self-serving bias must be taken into account. In addition, self-report data are not necessarily consistent with actual presentation competence as assessed via external observation, due to the self-focused view. As already discussed with respect to appropriateness, it is assumed that individuals have a hard time going beyond their own self-perspective and assessing appropriateness from an external perspective. Carrell and Willmington (1996) argue that students in learning contexts have difficulties perceiving both the environment and their own behavior. They focus either on the environment or on their own presentation behavior.

In contrast, external observation requires a further individual apart from the actor himself/herself. This perspective reveals information about behavior as it appears to an observer (Abernethy, 2015). In the communication context, this perspective is required to assess an individual’s actual communication competence (McCroskey & McCroskey, 1988). It is considered a valid method for assessing performance (Abernethy, 2015). Thus, this perspective provides accurate measurements and is utilized, for example, in educational settings.

Nevertheless, the accuracy of observer assessments is not always ensured. Central biasing factors are low rater agreement and observer expectancy effects (Abernethy, 2015). Statistical methods and study design procedures exist to prevent, minimize or control assessment bias to the greatest extent possible (see Abernethy, 2015; Podsakoff et al., 2003). For example, the intraclass correlation coefficient (ICC) can be used to measure interrater agreement and intrarater reliability (Hintze, 2005). Applying this to presentation competence, external observer ratings provide information regarding actual presentation competence. To obtain accurate measurements from external observers, the implementation of both procedural and statistical control methods is required. Furthermore, the observer perspective can be divided into direct observation (i.e., live ratings) and indirect, controlled observation (i.e., video ratings; Ryan et al., 1995). The next subsection provides a detailed overview of both external observation formats to shed light on the opportunities and risks of this data collection method.
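To illustrate how the ICC quantifies interrater agreement, the following minimal sketch computes ICC(2,1), i.e., a two-way random-effects model with absolute agreement for a single rater, directly from a ratee-by-rater score matrix. The function, variable names, and the rating data are hypothetical illustrations, not drawn from the instruments or studies cited above; which ICC model is appropriate depends on the rating design.

```python
def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: list of rows, one row per ratee (e.g., presentation),
    one column per rater. Hypothetical example data only.
    """
    n = len(scores)        # number of ratees
    k = len(scores[0])     # number of raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    # Partition the total sum of squares into ratee, rater, and residual parts.
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((scores[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)               # between-ratee mean square
    msc = ss_cols / (k - 1)               # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)


# Two raters scoring five presentations (invented data):
ratings = [[4, 4], [3, 2], [5, 4], [2, 2], [4, 5]]
print(icc2_1(ratings))  # → 0.8
```

Values near 1 indicate strong absolute agreement between raters; perfectly identical ratings yield exactly 1.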

(Dis)advantages of live ratings and video ratings

Both observation perspectives, live ratings as well as video ratings, are commonly used in presentation training for secondary school students, as Böhme (2015) reveals in an analysis of lesson plans published by teachers in established journals in Germany. Examining both types of observer ratings from an empirical perspective highlights their advantages and disadvantages for presentation competence assessment.

The live rating perspective refers to the assessment of behavior at the time of performance or immediately after performance by an external observer. The rater is present in the field, i.e., part of the situation, and is thus physically present when the behavior is exhibited. Hence, the live rating perspective gives the observer a direct impression of the performance and reveals information about the actual situation. It is considered a form of real-time data collection. However, live ratings also have some pitfalls threatening their accuracy. First, the rater’s mere presence, behavior and/or reactions may affect the behavior of the ratee, increasing the noise in the data (Ryan et al., 1995). In addition, the live situation can attract the rater’s attention and shift their focus to aspects apart from evaluation. This risk of distraction is higher in live situations than in video-recorded situations (Ryan et al., 1995). Furthermore, the live rater must process a continuous stream of behavioral information. This requires continuous attentional focus on relevant aspects (Ryan et al., 1995). Transferring this to presentation competence assessment, the live rating perspective provides a direct impression of the observed behavior. It reflects the authentic presentation situation and is close to the audience perspective. However, interactions with the ratee, e.g., nodding or shaking one’s head, can strengthen the speaker’s anxiety and undermine his/her performance of presentation competence. The rater’s interactions must be standardized to control for this factor.

In contrast to live ratings, in video ratings, the rater is not physically present in the actual performance situation. This perspective is a form of indirect observation, as the assessment is based on video-recorded material. The video rating approach is characterized by the repeatability of the material, as raters can view, pause and replay the videos multiple times. However, this indirect observation in the form of watching the video decreases the amount of available information about the real situation. For example, the video rater can only perceive information via the visual and auditory channels. It is not possible to feel the temperature of the room or smell the environment (Nagel, 2012). In addition, the information available via visual and audio signals is predetermined. For example, the camera angle is prearranged and the video raters can only observe what this angle reveals, while live raters can turn their heads to change their point of view. Moreover, the video is two-dimensional, while the real situation is perceived as three-dimensional, which could affect notions of space, e.g., gestures might become more or less observable (Ryan et al., 1995). In terms of audio signals, playing the videos can make the speaker’s voice more salient even though the speaker’s volume was low in the actual situation, and vice versa (Nagel, 2012). Therefore, video-recording a performance may affect the salience and distinctiveness of behavior (Hintze, 2005).

Moreover, the repeatability of the video-recorded presentation can both positively and negatively affect ratings. On the one hand, raters might pay less attention when they know that they can replay the video an unlimited number of times (K. R. Murphy & Cleveland, 1991). On the other hand, repeatability can reduce encoding bias, i.e., the pause and play buttons help the rater better focus on the relevant behavior without distractions, which is seldom possible for a continuous performance in a live situation (Hauenstein, 1992). In addition, repeatability makes it possible for different raters to observe the performance with different observation goals (see Curby et al., 2016). Moreover, the video-recorded material prevents rater interactions or rater reactions from influencing the ratee’s behavior because the rater is not physically present in the performance situation. This can also reduce the need for accountability, which is higher in direct observation settings such as live ratings (Gordon et al., 1988; Longenecker et al., 1987). However, obtaining data via video-recording devices might also influence the ratee’s behavior; for example, the presence of a camcorder might increase the ratee’s anxiety and undermine their performance (see Bush et al., 1972; Nielsen & Harder, 2013). Transferring this to the presentation competence context, the video rating perspective provides information via a strongly standardized assessment procedure. It enables a detailed view of presentation competence due to the repeatability of the video recordings. However, contextual information is needed for the assessment process, and watching the presentation several times can also change the rater’s first impression of the presentation.

When comparing the two external observation perspectives, it is assumed that both live ratings and video ratings impact the observer in a similar way. When the speaker addresses the audience directly, the human brain does not differentiate between live and medium-based communication, meaning that the effect of being addressed is the same in both scenarios (Nagel, 2012). In addition, Ryan, Daum, Bauerman, and Grisez (1995) found no difference in rating accuracy between live and video ratings. They only found some differences in observation accuracy; namely, observer accuracy was higher for video ratings than for live ratings when raters used the pause and replay possibilities of video recordings (Ryan et al., 1995).

In summary, the central components of presentation competence, appropriateness and effectiveness, are measurable in the presentation context. The different perspectives on assessing presentation competence all provide different but beneficial information. The self-report perspective reveals self-perceived presentation competence, which drives future presentation decisions; the live rating perspective provides information about the audience’s direct impression after the presentation; and the video rating perspective allows for a detailed view of presentation competence because more factors can be assessed than in live ratings. Hence, the selection of the assessment perspective depends on the research goals. These content-specific and fundamental considerations must be taken into account when collecting and interpreting presentation competence data from secondary school students via presentation competence instruments. Additionally, a thorough investigation of the psychometric properties of presentation competence instruments is required to determine the utility of the data collected from them (Hintze, 2005). Therefore, the quality of existing presentation competence instruments is an important question for this dissertation. Only if the existing instruments are of sufficiently high quality in terms of development and psychometric quality can they be used to address the further research topics in this dissertation.