3 RESEARCH ON FACTORS INFLUENCING STUDENT PERFORMANCE

3.3 General Effectiveness Factors

dimension of educational effectiveness (see also section 3.1.2). Interestingly, findings suggest that some factors “travel” across countries, depending on the cultural context, while others don’t. Reynolds (2006), for example, summarizing major findings from the ISERP study, found that many general effectiveness factors regarding classroom management, instruction, and climate did explain variation in student achievement in diverse countries. In particular, Reynolds found that specific teacher behaviors – such as clarity, questioning, high expectations, a commitment to academic achievement, and lesson structuring – could partially explain differences between more and less effective schools across the world. On the other hand, it seemed that certain school factors, such as the quality of the principal, while being an important factor in all countries under investigation, travelled conceptually – meaning that the leadership style that mattered varied by context. For example, Reynolds reported that leadership is more directive in Asian cultures, while it is more lateral/vertical in Western societies.

The subsequent sections describe the major factors that have been identified as being associated with student achievement and summarize empirical evidence from previous studies, reviews, and meta-analyses. While some of these factors operate from outside (extrinsic) and are thus susceptible to policy interventions, others are inherent in nature (intrinsic) and cannot easily be altered. While EER is often more interested in malleable factors at the school and classroom levels, both groups of factors are interlinked and both are important in predicting achievement. All will be discussed in the following sections. Two factors (time on task and opportunity to learn) that several effectiveness frameworks consider essential elements at each educational level (e.g., Creemers, 1994; Creemers & Kyriakides, 2008; Scheerens, 1992) are discussed across all levels at the beginning of sections 3.3.2 and 3.3.3. Subsequently, intrinsic student-level factors will be discussed in section 3.3.4, followed by class-level factors in section 3.3.5 and school-level factors in section 3.3.6. The chapter concludes with a short overview of context-level conditions for effective schooling in section 3.3.7. All of the factors reviewed in the sections below constitute the basis for the conceptual framework of this research project, which will be developed in chapter 6.

3.3.2 Time on task

Time on task depends on student motivation and expectation, but also on the amount of time for learning offered to students by the school and especially by the teachers. The general concept of time on task has received criticism, for example from Gage (1978, p. 75), “because of its psychologically empty and quantitative nature.” However, the author agrees here with Creemers and Kyriakides (2008) that these criticisms don’t affect the concept of time on task itself; rather, they imply that in addition to the time factor, the question of which activities are offered and what learning processes are taking place needs to be considered. Consequently, this factor is closely related to the factors described in the subsequent sections, opportunity to learn and quality of teaching – or, as Creemers and Kyriakides (2008, p. 100) state: “It is also important to note that time on task refers to the time during which students are really involved in learning, provided that this time is filled with opportunities to learn.”

In his definition of academic learning time, Creemers (1994, p. 29) identifies four aspects that show the different levels at which the variable operates and its relation to the concept of opportunity to learn: allocated time (the learning time allocated by teachers), time on task (the time students are really involved), student error rate (the level of difficulty of the tasks), and task relevance (the relevance to a certain part of the curriculum). This emphasizes again that the concepts of time on task and opportunity to learn operate closely together.

At the student level, the conceptualization of time on task is somewhat challenging, as direct observation is usually not possible, or at least difficult. Therefore, proxies are often used, such as the time spent on homework (Cho, 2010; Kyriakides, 2005; Kyriakides, Campbell, & Gagatsis, 2000; Neuschmidt & Aghakasiri, 2015), the time spent on private tutoring (Cho, 2010; Kyriakides et al., 2000; Kyriakides, 2005), or the time spent on learning-related out-of-school activities (Cho, 2010). Additionally, indicators related to student absence are used (de Jong, Westerhof, & Kruiter, 2004).

The amount of time spent on homework is a proxy that is used for time on task in many educational effectiveness studies, while in other studies it is used for opportunity to learn. This concept merits further discussion. Moreover, the empirical evidence on the relation between the amount of time spent on homework and educational outcomes is rather mixed. Cooper, Robinson, and Patall (2006), in their meta-analysis, found some evidence of a homework-achievement correlation for secondary schools in the United States, and similar conclusions had been reached in an earlier, separate study (Cooper, Lindsay, Nye, & Greathouse, 1998). Neuschmidt and Aghakasiri (2015) indicated a significant relation between the amount of homework and achievement in Oman. Conversely, other authors found no correlation (Kyriakides, 2005) or reported contradictory results, or even negative correlations at the student level, in certain models (Cool & Keith, 1991).

Looking at international large-scale assessment data, mixed results can also be found. Based on the PISA 2012 results, the OECD (2014a) reported that for most countries, spending more time on homework tends to be associated with higher PISA scores. They also indicated, based on analyses of PISA 2009 data, that the effect decreases with the amount of time spent, reporting that after around four hours, additional time spent on homework had only a “negligible impact on performance.” However, Dettmers, Trautwein, and Lüdtke (2009), analyzing PISA 2003 data from 40 countries, could not establish a clear-cut relationship between homework time and achievement in their multilevel analyses. In TIMSS 2011, the relation between the amount of homework and achievement is reported to be more “mixed”; this can be explained by the different objectives homework can have: While in some cases it is given to students in order to keep up with their classmates, in other situations it is given for practice or as an enrichment exercise. However, it was found for most countries that in the 8th grade, students who reported doing homework for over 45 minutes but below three hours achieved the highest mathematics and science achievement on average (Martin et al., 2012, p. 418; Mullis, Martin, Foy, & Arora, 2012, p. 402).

It becomes apparent that the objective of homework assigned by teachers differs between student groups, grades, and possibly subjects – leading to varying results in relation to student achievement.

Other aspects of a more methodological nature should also be considered. Cool and Keith (1991), for example, raise questions about the validity of the homework variables in use. They conclude that a homework indicator “based on a single general question about normal homework practice, is probably an unreliable measure of true homework practice” (Cool & Keith, 1991, p. 40). Trautwein (2007) also argues that it is important to clearly distinguish between effects on an individual level, as discussed here, and those on a classroom level – a distinction which, as he notes, many studies have unfortunately failed to take into account.
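To make this distinction concrete, the individual-level and classroom-level effects of homework time can be separated in a two-level model by centering students’ homework time around their class mean. The following formulation is only an illustrative sketch; the notation is chosen here for illustration and is not the specification used in the studies cited above:

\[ \text{Level 1 (student } i \text{ in class } j\text{):} \quad y_{ij} = \beta_{0j} + \beta_{\text{within}}\,\bigl(HW_{ij} - \overline{HW}_{j}\bigr) + \varepsilon_{ij} \]

\[ \text{Level 2 (class } j\text{):} \quad \beta_{0j} = \gamma_{00} + \beta_{\text{between}}\,\overline{HW}_{j} + u_{0j} \]

Here, \(y_{ij}\) is the achievement score of student \(i\) in class \(j\), \(HW_{ij}\) the reported homework time, and \(\overline{HW}_{j}\) the class mean of homework time. The within-class coefficient \(\beta_{\text{within}}\) and the between-class coefficient \(\beta_{\text{between}}\) can differ in size and even in sign, which is precisely why analyses that pool both levels may yield misleading conclusions.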

Moreover, some researchers argue that the effect of homework on achievement might be attributable to a “common cause,” thus possibly decreasing once the models control for variables such as motivation, prior ability, quality of instruction, tracking, or home background (Cool & Keith, 1991; Dettmers et al., 2009; Trautwein, 2007). Other researchers indicated that at least on a class level, other factors – such as the frequency of homework or the number of tasks – might be more important than the amount of time spent on homework alone (de Jong et al., 2004; Trautwein, Köller, Schmitz, & Baumert, 2002).

In summary, it can be concluded here that homework, as an indicator of time on task, might not be a reliable measure and thus should be avoided if possible.

Concerning other out-of-school activities, it is especially those related to school learning that have been found to be associated with student achievement. For example, Anderson, Wilson, and Fielding (1988) had 155 5th grade students record their outside-school activities for a period of between eight and 26 weeks. They found that “reading books” was the best predictor of students’ reading ability. Similarly, Mullis, Martin, Kennedy, and Foy (2007), analyzing the PIRLS 2006 data, found that “On average internationally, and in most countries, students who reported reading novels and short stories most frequently had higher average achievement than those who read less frequently”; Won and Han (2010) reported associations between reading behavior and mathematics achievement using TIMSS 2003 data. In contrast, non-academic out-of-school activities, such as “listening to music”, “watching television”, or “playing computer games”, were repeatedly found to be negatively associated with academic performance if an extensive amount of daily leisure time was spent on such activities (Anderson et al., 1988; Martin et al., 1997; Mullis et al., 1997).

At the class level, students’ time on task – defined as the time students spend actively learning – is determined not only by student compositional factors and factors related to the classroom environment, but also by the actual time teachers spend on teaching (the instructional time); it is in general closely related to classroom management (see also section 3.3.5.2). In this regard, effective teachers are characterized by their ability to manage their classrooms and the learning environment therein; effective teaching environments are those in which “academic activities run smoothly, transitions are brief, and little time is spent getting organised or dealing with inattention or resistance” (Brophy & Good, 1986, p. 109). On the other hand, the extent to which students are engaged in the activities led by their teachers, or rather distracted by off-task activities such as social interaction, is also an important question. With the exception of studies that make use of classroom observations, the measurement of student attentiveness is usually not feasible; therefore, analyses have focused more on the relationship between instructional time and achievement.

Findings in this regard are not unambiguous: as previously mentioned, the question of how instructional time is used is likewise important; this, in turn, depends on additional factors such as the opportunity to learn (for example, the quality of the curriculum and instructional materials) and the quality of teaching (that is, the instructional approaches used). Lee and Barro (2001), analyzing cross-country achievement data after controlling for a variety of school resources, found inconsistent results for the relation between the length of the school term and achievement, while Wößmann (2003), in a similar study using TIMSS 1995 data, found significant (albeit small) effects. Lavy (2010), however, analyzing the PISA 2006 database and additional Israeli data, reported modest to large effects associated with one more hour of weekly instruction on average. Using data from TIMSS 1995, Martin, Mullis, Gonzalez, Smith, and Kelly (1999) reported that in high-performing countries, students tend to spend more time in school and have more instructional time than in lower-performing countries. Relevant analyses were carried out by Sandoval-Hernández, Aghakasiri, Wild, and Rutkowski (2013) on PIRLS 2006 data from 45 countries. While the authors didn’t find a consistent relation between the yearly overall schooling time and reading achievement, they found a far stronger relation in many countries when correlating solely the effective teaching time (the time the teacher spends on instruction, as opposed to time spent on administration and other tasks) with student achievement. This finding again gives a clear indication that the amount of time is not necessarily a factor on its own, but rather should be regarded in conjunction with other important, interrelated factors, such as the opportunities to learn and the quality of teaching. It should be noted that for the current analyses, due to the absence of suitable data, only indicators for the overall available time can be created, but not for the amount of time the teacher actually spends on instruction.

At the school level (often based on policies implemented at the regional or national level), the time for learning is mainly determined by the time scheduled for instruction, which depends on the duration and number of lessons per subject and the number of school days per year. It should be noted that the prescribed time for learning might differ significantly from the actual amount of time students are taught because of external circumstances such as unplanned school closings (for example due to severe weather conditions or civil unrest), teacher absenteeism, etc.
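As a simple illustration of how such a school-level indicator of scheduled instructional time might be operationalized, the sketch below combines the number of lessons per week, the lesson duration, and the number of school weeks per year into a yearly figure. The function name and the example values are hypothetical and are not taken from any of the datasets discussed in this chapter.

```python
# Illustrative sketch only: a yearly scheduled instructional time indicator
# built from hypothetical school-level questionnaire variables.

def scheduled_instruction_hours(lessons_per_week: float,
                                minutes_per_lesson: float,
                                school_weeks_per_year: float) -> float:
    """Scheduled instructional time for one subject, in hours per year."""
    return lessons_per_week * minutes_per_lesson * school_weeks_per_year / 60.0

# Example: five 45-minute mathematics lessons per week over 38 school weeks.
hours = scheduled_instruction_hours(5, 45, 38)
print(f"Scheduled mathematics instruction: {hours:.1f} hours per year")  # 142.5

# Note: the actual time students are taught may be considerably lower, e.g. due
# to unplanned school closings, teacher absenteeism, or time lost within lessons
# to administration and classroom management.
```

Such an indicator captures only the overall available time; as noted above, a comparable indicator of the time teachers actually spend on instruction cannot be derived from the data used here.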