1 INTRODUCTION AND THEORETICAL FRAMEWORK

1.2. Measuring Presentation Competence

1.2.2. Existing instruments: Strengths and limitations

The goal of this subsection is to determine whether existing presentation competence instruments are appropriate for continued use in presentation research. To meet this goal, their strengths and limitations are analyzed, and the need for a new instrument is derived from this analysis.

Instruments for assessing presentation competence operate with rubrics identifying behaviors considered relevant for a successful presentation (Schreiber et al., 2012). An individual’s characteristic, in this case the person’s level of presentation competence, is rated on a Likert-type scale (Hintze, 2005). Four central instruments were selected to illustrate the current state of existing instruments with regard to psychometric properties. These four presentation competence instruments were chosen because their psychometric properties have already been examined, because they are widely cited, and because they were published recently.

Namely, they are the Public Speaking Competence Rubric (PSCR; Schreiber et al., 2012), the Competent Speaker Speech Evaluation Form (CSSEF; Morreale et al., 2007), the Public Speaking Instrument (De Grez, 2009), and the Public Speaking Competency Instrument (PSCS; Thomson & Rucker, 2002). In discussing these instruments, the focus lies on external ratings, especially video ratings, because they are considered more objective (Carrell & Willmington, 1996). In addition, the discussion on assessing appropriateness concluded that appropriateness can be assessed via external raters. Examining the effectiveness of presentation behavior is not the focus of this section because, as indicated above, it requires approaches other than evaluation forms, which goes beyond the scope of this dissertation.

Background of the instruments

Existing presentation competence instruments focus on assessing appropriate presentation behavior. Effective presentation behavior is not included in published instruments (see Herbein, 2017). The instruments and their psychometric examinations rely on the use of observers. In most studies, presentation competence is inferred from a single presentation situation taking place in a defined setting; presentation behavior in other presentation situations is not taken into account (Hugenberg & Yoder, 1996). However, the definition of presentation competence involves the demonstration of skills in various situations. Basing the assessment on just one situation therefore means that the assessment of presentation competence rests on a very limited range of situations (Hugenberg & Yoder, 1996). Nevertheless, there are specific presentation situations in secondary school that occur only once: for example, presentations within the German academic-track secondary school leaving exams (Abitur). In such cases, presentation behavior in this single situation is what matters for one’s grade, which is why even a single situation can be meaningful for assessing presentation competence.

Moreover, the rubrics were developed on the basis of different references. That is, the authors of the four instruments drew upon theoretical frameworks such as communication theories (e.g., Morreale et al., 2007), didactic principles (e.g., Schreiber et al., 2012), and previous instruments (e.g., De Grez, 2009). Notably, explicit references to rhetorical theory were not part of the instruments’ development. In the rest of this subsection, the psychometric properties of the four central instruments are summarized, and the strengths and limitations of this psychometric examination process are addressed.

Psychometric properties of the instruments

The psychometric examinations of the four instruments were conducted with samples of higher education students (e.g., Thomson & Rucker, 2002). The PSCR (Schreiber et al., 2012), for example, was used in introductory speech classes at university. The sample sizes varied between N = 1 (Thomson & Rucker, 2002) and N = 219 (De Grez, 2009).

With respect to objectivity, the assessment procedure as well as the rater procedures are described in order to facilitate standardized rater assessments (e.g., Morreale et al., 2007). Schreiber and colleagues (2012) reported intraclass correlation coefficients (ICCs), an indicator of interrater reliability, between .54 and .93 for their instrument’s items. This indicates satisfactory to excellent interrater reliability and interrater agreement (Cicchetti, 1994).
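Which ICC variant underlies these values is not specified here. As a minimal sketch, assuming a two-way random-effects, single-measure, absolute-agreement design, the coefficient is

\[
\mathrm{ICC} = \frac{MS_R - MS_E}{MS_R + (k - 1)\,MS_E + \frac{k}{n}\left(MS_C - MS_E\right)},
\]

where MS_R, MS_C, and MS_E denote the mean squares for presentations (targets), raters, and error from a two-way ANOVA, k is the number of raters, and n is the number of presentations rated.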

Several measures have been established to assess instruments’ reliability (Hintze, 2005). In the presentation context, test-retest reliability, also called stability, indicates whether the same rater will rate the same presentation behavior by the same speaker highly similarly in two separate sessions (Hintze, 2005). Test-retest measures for the four instruments were not reported; however, the selection of adequate reliability measures depends on the study design.
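In its simplest form, and under the assumption of two complete rating occasions, stability can be expressed as the Pearson correlation between the scores from the two sessions:

\[
r_{tt} = \frac{\mathrm{Cov}(X_{t_1}, X_{t_2})}{\sigma_{X_{t_1}}\,\sigma_{X_{t_2}}},
\]

where X_{t_1} and X_{t_2} are the ratings assigned to the same presentations at the first and second session, respectively. Values close to 1 indicate that the rater’s judgments are stable over time.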

Cronbach’s alpha can be considered the final step of reliability testing and reveals the internal consistency of the scales (Hintze, 2005). Cronbach’s alpha was reported for the Public Speaking Instrument (De Grez, 2009), with values ranging from α = .83 to α = .89, indicating good internal consistency. Further reliability measures for the four presentation competence instruments were not reported.
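For reference, for a scale consisting of k items, Cronbach’s alpha is defined as

\[
\alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right),
\]

where σ_i² is the variance of item i and σ_X² is the variance of the total scale score. Alpha increases when the items covary strongly relative to their individual variances, which is why values of .83 to .89 are interpreted as good internal consistency.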

With respect to validity, content and face validity are given when the observed presentation behavior, and therefore the observational system as a whole, is representative of what the authors intended to measure, i.e., presentation competence (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999). Construct validity is indicated when constructs that are interrelated in theory, for example, presentation competence and speech anxiety, are also correlated empirically in the expected way. A common method of examining the content and face validity of the presentation competence instruments was to have the items reviewed by an expert panel (e.g., De Grez, 2009). Construct validity was examined in terms of convergent validity. For example, the correlation between a presentation competence instrument and a communication apprehension instrument was examined, with the results indicating acceptable to good validity (e.g., Morreale et al., 2007). In addition, construct validity was investigated through exploratory factor analysis (e.g., Thomson & Rucker, 2002). This analysis revealed different factor dimensions depending on the item pool used in the exploratory factor analysis. Because the item pools considered do not cover the six facets of presentation competence identified in this dissertation, the findings are limited to those specific configurations.
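The dependence on the item pool can be made explicit through the common factor model that exploratory factor analysis estimates; the notation below is a generic sketch, not taken from the instruments’ original analyses:

\[
\mathbf{x} = \boldsymbol{\Lambda}\,\mathbf{f} + \boldsymbol{\varepsilon},
\]

where x is the vector of observed item scores, Λ is the matrix of factor loadings, f contains the latent factors (the candidate dimensions of presentation competence), and ε collects item-specific error terms. Since Λ and the number of factors are estimated from the administered items only, facets that are not represented in the item pool cannot emerge as factors.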

Requirements for future instruments

This dissertation’s critical examination of four central presentation competence instruments revealed common approaches in terms of instrument development. The strengths of these instruments lie in their psychometric examinations, which provide a starting point for the development and psychometric evaluation of further instruments. In addition, the four instruments based their conceptualizations of presentation competence on didactic principles and theories. However, their theoretical foundations are not linked to rhetoric. Moreover, secondary school student samples are underrepresented in the psychometric evaluations, and German versions of the presentation competence instruments were not part of the discussion. Furthermore, the development of the items making up these instruments could be more transparent, for example by elucidating their relationship to concrete theories. This could also address the fact that rhetorical theory was not explicitly considered. In addition, existing psychometric examination approaches can be supplemented by a broader validation procedure, for example, one that takes into account experts’ live ratings. On a similar note, the psychometric examinations could also include test-retest measures (reliability) indicating the stability of the instrument.

In conclusion, this critical examination of existing instruments indicates that there is a need for a new instrument. There appears to be no instrument covering all of the facets of presentation competence deduced from rhetorical theory. In addition, test-retest measures could indicate the instruments’ stability, and the use of experts’ live ratings to validate the instruments could contribute to a more robust psychometric examination procedure.

1.3. Fostering Presentation Competence of Secondary School Students