
5.5 The processing of relative clauses

5.5.1 Accounts of the subject-object asymmetry in relative clauses

Many different accounts have been proposed to explain the observed asymmetry between syntactic derivations involving argument extraction from the subject position and from the object position. Different classifications of these accounts are possible, depending on the parameters and factors taken as guidelines. I classify each study according to the linguistic or cognitive aspect its account mostly focuses on, distinguishing between memory-based, processing-based and syntax-based accounts: the first group focuses on the role of working memory in sentence computation, the second on processing strategies, and the third on syntactic structures. Most ideas and hypotheses advanced in the past decades cut across the boundaries of this schematic classification and considered the interplay of different factors as the cause of the enhanced difficulty with ORs. I therefore classify the different accounts according to the component that most characterizes each of them.

Memory-based accounts (Ford, 1983; Frazier & Fodor, 1978) focus on the limits that working memory places on our capacity to compute syntactic structures of increasing complexity, i.e. recursive embedding. According to Frazier & Fodor (1978), the parser proceeds by analysing a limited number of words at a time, cutting the word string into sub-units and trying to assign them a structure.

If one or more elements in the string cannot receive a proper role in the structure, storage load increases sharply, reducing the memory resources available for further analysis, at least until a proper structure is reconstructed. Along similar lines, Ford (1983) claims that (embedded) ORs impose a heavier burden than SRs do, because they require the parser to keep the unassigned relative head in mind for a longer time span before assigning it to the proper gap. When a gap is met, the parser must search backwards for a proper filler. Gaps in the object position automatically require longer inspections of the previously processed sentence portion, because the filler is located further away. Gibson (1998) shifts the attention to integration costs and claims that parsing demands are determined by the number of elements that intervene between the filler and the gap.
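
To make the intuition concrete, here is a minimal sketch of a Gibson-style integration-cost metric. It is not Gibson's (1998) actual formalization: the token annotation and the one-unit-per-referent cost function are illustrative simplifications of my own.

```python
# A minimal sketch of a Gibson-style (1998) integration-cost metric.
# Assumption (mine, for illustration): each new discourse referent that
# intervenes between a filler and its gap adds one unit of integration cost.

def integration_cost(tokens, filler_index, gap_index):
    """Count new discourse referents between a filler and its gap."""
    intervening = tokens[filler_index + 1:gap_index]
    return sum(1 for tok in intervening if tok["new_referent"])

# OR: "the reporter [that the senator attacked __] ...":
# the filler 'reporter' must cross the new referent 'senator'.
tokens = [
    {"word": "reporter", "new_referent": True},   # filler (relative head)
    {"word": "that",     "new_referent": False},
    {"word": "the",      "new_referent": False},
    {"word": "senator",  "new_referent": True},   # intervening referent
    {"word": "attacked", "new_referent": False},
    {"word": "GAP",      "new_referent": False},  # object gap
]
print(integration_cost(tokens, 0, 5))  # -> 1
# In the SR counterpart ("the reporter that __ attacked the senator"),
# no referent intervenes between filler and gap, so the cost is 0.
```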

Building on Bever's (1974) idea that some syntactic items (e.g., pronouns) are easier to process than others (e.g., lexical referring expressions), Gibson claims that lexical referents are particularly demanding because they require higher levels of lexical, semantic and discourse activation for their integration into the structure. Gordon, Hendrick and Johnson (2001) further refine the proposal by hypothesising a similarity-based interference. In their study, the authors find reduced parsing difficulties in ORs when the subject and the object belong to different word classes, for instance a pronoun and a noun. In their view, working memory is particularly challenged by sentences in which the DPs involved are highly similar, because of the difficulty of keeping both in mind and discriminating between two (almost identical) elements while assigning them different syntactic functions and discourse roles. Accordingly, clearly different elements, e.g. a noun and a pronoun, facilitate processing. This contrasts with the idea that lexical categories are associated with specific levels of complexity per se, as proposed by Gibson (1998) for pronouns. In line with Gordon et al. (2001), I assume that the processing demand of an element is not determined a priori; rather, the context in which the element appears is crucial in determining its processing demand.
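
The logic of similarity-based interference can be illustrated with a toy measure. The feature set and the equal weighting below are my own illustrative assumptions, not the authors' model.

```python
# A toy illustration of similarity-based interference (Gordon et al., 2001).
# Assumption (mine): interference grows with the proportion of features the
# two DPs share; the feature inventory and weighting are purely illustrative.

def interference(dp1, dp2, features=("word_class", "definiteness", "animacy")):
    """Return the proportion of shared features between two DPs."""
    shared = sum(dp1[f] == dp2[f] for f in features)
    return shared / len(features)

noun_head = {"word_class": "noun",    "definiteness": "definite", "animacy": "animate"}
noun_subj = {"word_class": "noun",    "definiteness": "definite", "animacy": "animate"}
pron_subj = {"word_class": "pronoun", "definiteness": "definite", "animacy": "animate"}

print(interference(noun_head, noun_subj))  # 1.0  -> hard OR ("the banker that the lawyer saw")
print(interference(noun_head, pron_subj))  # ~0.67 -> easier OR ("the banker that you saw")
```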

As already mentioned, a second group of accounts ascribes the observed asymmetry between subject and object extraction to processing strategies. MacWhinney & Pléh (1988; but see also MacWhinney, 1987) claim that the parser builds expectations while processing a sentence; in particular, it tends to form and maintain perspectives in which elements keep fixed functions. If the parser meets a DP and analyses it as the subject of the matrix clause, it expects the item to maintain the same function when modified by a RC. If this is not the case and the element is assigned a different syntactic function, the parser is forced to change perspective, resulting in increased processing load. Perspective maintenance is economical and allows for smoother sentence processing.

Indeed, the hallmark of processing-based accounts is the claim that processing always takes place economically, with predetermined procedures or strategies playing a facilitating role. That is precisely the core idea of the Active Filler Strategy Hypothesis (Clifton & Frazier, 1989; Frazier & Clifton, 1989): the parser tends to assign a filler to the first possible gap in order to build the simplest possible structure. De Vincenzi (1991b, 1996) further specifies that the parser avoids postulating nodes that are not necessary but, at the same time, does not delay the insertion of required chain members. In other words, the parser builds just the amount of structure needed to represent the linguistic input. This strategy works particularly well for SRs, because the filler (the relative head) is assigned the first possible gap, namely the one corresponding to the subject position in the relative clause, and processing completes successfully. In contrast, the strategy is less efficient for ORs: in this case, after assuming a gap in the subject position, the analysis must be revised and corrected in order to allow the assignment of the head to the correct object position. Under this view, prolonged reading times for ORs are the manifestation of the ongoing correction of the initial structure, as sketched below.
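
The following is a schematic rendering of how such a first-gap assignment strategy could be simulated. The lexical cue used to detect an overt embedded subject and the reanalysis flag are deliberate simplifications of my own, not the authors' implementation.

```python
# A schematic rendering of the Active Filler Strategy (Clifton & Frazier, 1989):
# posit the filler at the FIRST available gap, reanalyse if the input later
# contradicts that guess.

def parse_relative_clause(words_after_that):
    """Assign the relative head to the first possible gap; reanalyse on failure."""
    reanalysis = False
    # First guess: subject gap right after the relativizer
    # ("the boy that __ kissed the girl").
    gap = "subject"
    # If an overt subject follows the relativizer, the first guess was wrong:
    # "the boy that the girl kissed __" forces reanalysis towards an object gap.
    if words_after_that[0] in ("the", "a"):  # crude cue for an overt embedded subject
        gap = "object"
        reanalysis = True
    return gap, reanalysis

print(parse_relative_clause(["kissed", "the", "girl"]))  # SR: ('subject', False)
print(parse_relative_clause(["the", "girl", "kissed"]))  # OR: ('object', True)
```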

However, the principle described does not predict that the asymmetry between subject and object relatives can be reduced by manipulating the arguments involved in the derivation. In this sense, it cannot account for some of the experimental results I review below.

Processing-based accounts also include a proposal that devotes particular attention to discourse constraints. The claim, formalized in Kidd, Brandt, Lieven & Tomasello (2007), states that speakers produce ORs only under specific circumstances. The first condition the authors set on the computation of ideally-formed ORs is the presence of an inanimate referent as the relative head (see also Fox & Thompson, 1990). Since animate referents strongly trigger SR representations, in their view inanimate nouns impose looser constraints on sentence interpretation and therefore favour an OR reading. The second condition for ideally-formed ORs concerns the subject. According to the authors, the subjects of ORs most often refer back to referents already introduced in the discourse and are therefore expressed as pronouns. According to Kidd et al. (2007), these two constraints taken together, namely inanimacy of the head and pronominal subjects, represent the basic conditions for the spontaneous production of ORs and indeed characterize the majority of uttered ORs. The authors take this issue to be relevant at the processing level because computation relies on statistical data. In other words, frequency of use determines speed and accuracy in processing, on the assumption that very frequent structures are processed faster and more accurately. In the authors' view, the same principle also holds for L1 acquisition, with children producing earlier and more accurately the structures they encounter more often in the linguistic input. The authors also review previous studies from this perspective and claim that the subject/object asymmetry widely found in a variety of tasks is ultimately due to ill-formed ORs in the experimental input. That is to say, the use of very rare, highly uncommon ORs is the primary cause of poor experimental performance.

I now return to the issue of the distance between filler and gap, because it is relevant to the first of the syntax-based accounts I would like to mention here, namely the Structural Distance Hypothesis formulated by O'Grady (1997; O'Grady, Lee & Choo, 2003). O'Grady conceives of distance neither in terms of the number of referents entering the discourse between the filler and the gap (as in Gibson, 1998; Gordon et al., 2001) nor in linear terms (De Vincenzi, 1991b, 1996), but rather in structural terms. The author claims that complexity is proportional to the degree of embedding and therefore to the depth of the gap. In his view, what is crucial is the number of syntactic nodes necessary to create SRs and ORs. Given that the latter involve more nodes in the structure assumed by O'Grady, the increased processing difficulty follows automatically.
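
A toy implementation can illustrate how gap depth could be measured. The miniature trees below are drastic simplifications of my own; the exact node inventory depends on the syntactic framework assumed.

```python
# A minimal sketch of the Structural Distance Hypothesis (O'Grady, 1997):
# difficulty tracks the number of syntactic nodes separating the filler
# (at the root) from the gap.

def gap_depth(tree, target="GAP", depth=0):
    """Count the nodes crossed on the path from the root down to the gap."""
    label, children = tree
    if label == target:
        return depth
    for child in children:
        found = gap_depth(child, target, depth + 1)
        if found is not None:
            return found
    return None

# SR: [CP that [TP GAP kissed the girl]] -> gap directly under TP.
sr = ("CP", [("TP", [("GAP", []), ("VP", [])])])
# OR: [CP that [TP the girl [VP kissed GAP]]] -> gap buried inside VP.
obj_rel = ("CP", [("TP", [("DP", []), ("VP", [("GAP", [])])])])

print(gap_depth(sr))       # 2 nodes deep
print(gap_depth(obj_rel))  # 3 nodes deep -> predicted harder
```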

Authors who build on the principle of Relativized Minimality (Rizzi, 1990, 2004) to explain the asymmetry take quite a different syntactic approach. Grillo (2009) and Friedmann, Belletti & Rizzi (2009) first observe that locality constraints are at play in ORs: the internal subject intervenes between the extracted object DP and its target position in the CP layer of the RC. This syntactic principle also accounts for a variety of phenomena observed in previous studies (as mentioned above), for example the asymmetry (raised by Gordon et al., 2001) between lexical referents and pronouns and the different ways in which they can or cannot improve OR comprehension, or the role played by the number of referents intervening between the filler and the gap (Gibson, 1998). All these issues find an explanation under the syntactic formalization offered by RM, as I will show below and in the subsequent sections.
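
The featural logic behind this intervention account can be sketched as follows. The three-way classification (identity, inclusion, disjunction) follows the general RM typology discussed in Friedmann, Belletti & Rizzi (2009), but the feature labels and the implementation itself are purely illustrative.

```python
# A schematic version of the featural intervention logic used by Grillo (2009)
# and Friedmann, Belletti & Rizzi (2009): the severity of the intervention
# effect depends on how the features of the moved element relate to those of
# the intervener. Feature labels below are illustrative, not the authors' own.

def intervention_effect(target_features, intervener_features):
    """Classify the target/intervener relation in Relativized Minimality terms."""
    target, intervener = set(target_features), set(intervener_features)
    if target == intervener:
        return "identity: movement blocked (severe difficulty)"
    if intervener < target:  # intervener's features are a proper subset
        return "inclusion: movement possible but costly (e.g., headed ORs)"
    return "disjunction: no intervention effect"

# Headed OR: a lexically restricted head crosses a lexically restricted subject.
print(intervention_effect({"+R", "+NP"}, {"+NP"}))    # inclusion -> costly
# OR with a pronominal subject: the feature sets are disjoint.
print(intervention_effect({"+R", "+NP"}, {"+pron"}))  # disjunction -> easier
```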