

In the document Voice at the interfaces (pages 33-38)

1.3 Architectural assumptions

1.3.2 Interfaces

When syntactic structure is spelled out, it is interpreted at LF (semantics) and PF (phonology). The Trivalent approach shares with other current work a certain view of the so-called autonomy of syntax (Marantz 2013; Wood 2015; Wood & Marantz 2017; Myler 2017). Essentially, the grammar (the syntax) is free to generate different syntactic structures, so long as these satisfy inherently syntactic requirements (for example Case licensing or feature valuation). The syntactic object must then still be interpreted by the interfaces at Spell-Out, at which point they can be said to interpret but also "filter" the output. At LF the semantic composition may or may not converge, and at PF the phonological calculation may or may not yield an optimal candidate. In both cases we may expect certain kinds of crosslinguistic variation.

Compositional semantics proceeds straightforwardly (the main operations are Functional Application and Event Identification, as mentioned above), as do linearization, prosodification and phonological evaluation. See the introductory chapters of Wood (2015) or Myler (2016) for additional details on the semantic composition. While I make repeated reference to semantic roles such as Agent, I do not assume that theta-roles are a primitive of the system. The phonological calculation may be implemented using Optimality Theory (Prince & Smolensky 1993/2004), as done in Kastner (2019b). Here are the other points that might require further elaboration.

1.3.2.1 Roots

I individuate roots based on their phonology (e.g. √ktb and √arrive), but it is more accurate to think of them as pointers to phonological and semantic information (Harley 2014a; Faust 2016; Kastner 2019b). Nevertheless, I will use the phonological shorthand for convenience.

Despite the crucial role of roots in determining the reading of a word, I cannot provide a theory of root meaning here. Not every root can appear in every template, meaning that a root has to somehow license the functional heads it combines with (Harley & Noyer 2000). Exactly how this happens is left vague.

Presumably, this licensing should be similar to the way that a root like √murder requires Voice in English, but a root like √arrive does not license Voice.

The idea that roots pick out meanings which are shared across forms will likewise not be formalized. I will be relatively comfortable talking about shared meaning in cases of alternations. In Sections 2.4 and 4.4 I will discuss cases where the shared meaning is slightly less easy to pin down. Neither of these points is particular to Hebrew within root-based approaches like DM (and both require engaging more seriously with the lexical semantics literature), but they do appear more prominent because of the nature of the morphological system.

It is important to delve a bit deeper into the idea of one root across a few templates. Consider √pk̯d in (15). One could find a general semantic notion of "counting" or "surveying" running through the use of this root, but the alternations are in no way obvious.

(15) a. XaYaZ: pakad 'ordered'.
     b. niXYaZ: nifkad 'was absent'.
     c. XiY̯eZ: piked 'commanded' (and a passive XuY̯aZ form).
     d. heXYiZ: hefkid 'deposited' (and a passive huXYaZ form).
     e. hitXaY̯eZ: hitpaked 'allied himself', 'conscripted'.

The problem is exacerbated when considering nominal forms as well: pakid 'clerk', mifkada 'headquarters', pikadon 'deposit'. Templates, then, do not provide us with deterministic mappings from phonological form (the template) to semantics (interpretation of a root), again with the exception of the passive templates.

So the question is whether verbs such as those in (15) do in fact share the same root. For example, it could be argued that (15a,b,c,e) as well as the noun 'headquarters' share one root that has to do with military concepts, and that (15d) as well as the nouns 'clerk' and 'deposit' stem from a homophonous root that has to do with financial concepts. There are a number of reasons to reject this claim.

First, there are no "doublets": if we were dealing with two roots, call them √pk̯d₁ and √pk̯d₂, then each should be able to instantiate any of the templates. But hefkid can only mean 'deposited', never something like 'installed into command'. The choice of verb for that root in that template has already been made. Second, experimental studies have found roots to behave uniformly across their different meanings (Deutsch 2016; Deutsch et al. 2016; Deutsch & Kuperman 2018; Kastner et al. 2018), although this is not a consensus yet (Moscoso del Prado Martín et al. 2005; Heller & Ben David 2015).

1.3.2.2 Contextual allomorphy

A morpheme is an abstract element, comprised of a bundle of syntactic features (or, in the case of roots, comprised of a pointer to lexical information). In DM, a morpheme is matched up with its exponent, or Vocabulary Item, in a postsyntactic process of Vocabulary Insertion. Which exponent is chosen depends on the phonological and syntactic environment the morpheme is in (see Bonet & Harbour 2012 and Gouskova & Bobaljik submitted for overviews).

It may be the case that a morpheme has a number of contextual variants or allomorphs. For example, the English past tense marker has a number of possible exponents, depending on the phonological environment it is inserted in.

(16) a. grade[əd]

b. jam[d]

c. jump[t]

This can be formalized as follows (regardless of what the default form is):

(17) T[Past] ↔ { əd / [+cor –cont –son]
                 d / [+voice]
                 t
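Purely as an expository illustration (not part of the linguistic proposal itself), the ordered rule in (17) can be mimicked in code: the exponents are tried in order, and the context-free form t serves as the elsewhere case. The feature specifications for the sample segments are simplified assumptions.

```python
# Illustrative sketch of the disjunctive allomorphy rule in (17).
# Feature sets for stem-final segments are simplified assumptions.
FEATURES = {
    "d": {"+cor", "-cont", "-son", "+voice"},  # coronal stop, as in "grade"
    "m": {"+voice"},                           # voiced, as in "jam"
    "p": set(),                                # voiceless, as in "jump"
}

def past_exponent(final_segment: str) -> str:
    """Pick the T[Past] exponent; cases are tried in order, 't' is the elsewhere form."""
    feats = FEATURES.get(final_segment, set())
    if {"+cor", "-cont", "-son"} <= feats:
        return "əd"   # grade -> grad[əd]
    if "+voice" in feats:
        return "d"    # jam -> jam[d]
    return "t"        # jump -> jump[t]

print(past_exponent("d"), past_exponent("m"), past_exponent("p"))  # əd d t
```

The ordering of the cases matters: a coronal stop is also [+voice], so the more specific context must be checked first, just as in the disjunctively ordered rule.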

The English definite article also has two contextual allomorphs conditioned by the phonological environment (but cf. Gouskova et al. 2015; Pak 2016).

(18) a. a dog
     b. an apple

Similarly:

(19) D[−def] ↔ { ə / #C
                 ən / #V

Some roots also supplete based on their environment. Here the context for allomorphy is not the phonological features of the local trigger but the syntactic features.

(20) a. go (today)
     b. went (yesterday)

(21) go⁵ ↔ { go / [prs]
             went / [past]

Similarly for adjectives:

(22) a. good
     b. better
     c. best

(23) good ↔ { good / [norm]
              better / [cmpr]
              best / [sprl]
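The same rule logic extends to suppletion conditioned by syntactic rather than phonological features, as in (21) and (23). Below is a toy sketch of Vocabulary Insertion as a lookup from item and feature context to exponent; the data structure and function names are assumptions made for exposition only.

```python
# Illustrative sketch: feature-conditioned suppletion as in (21) and (23).
# Each entry maps a syntactic feature context to the inserted exponent.
VOCABULARY = {
    "go":   {"prs": "go", "past": "went"},
    "good": {"norm": "good", "cmpr": "better", "sprl": "best"},
}

def insert(item: str, feature: str) -> str:
    """Return the exponent of `item` in the context of `feature`."""
    return VOCABULARY[item][feature]

print(insert("go", "past"))    # went
print(insert("good", "cmpr"))  # better
```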

The question occupying many theorists at the moment regards the exact nature of "/": is it linear adjacency, syntactic adjacency, or something else? In previous work I have adopted the idea that allomorphy can only be triggered under linear adjacency of overt elements (Embick 2010; Marantz 2013). This hypothesis helps explain a range of allomorphic interactions in Hebrew, as I argued in Kastner (2019b). Some of these points will be mentioned in the following chapters – in particular because I think the current analysis makes the right predictions – but the discussion does not revolve around them.

⁵This is a simplified version for expository purposes. The element to be spelled out should be something like go in the context of the verbalizer v, in addition to phi-features.

In my formal analysis I will assume that the stem vowels spell out Voice and that affixes spell out higher material (this can be seen as a Mirror Principle effect following directly from cyclic spell-out; Baker 1985; Muysken 1988; Wallace 2013; Zukoff 2017; Kastner 2019b). Alternatively, we may assume that a dissociated Theme node is projected ("sprouted") from Voice postsyntactically (Oltra-Massuet 1999; Embick 2010); the same holds for Agr (agreement suffixes based on phi-features), be it on T or sprouted from T. But for simplicity I will represent the stem vowels as the overt spell-out of Voice and agreement as the spell-out of a joint T+Agr head.

1.3.2.3 Contextual allosemy

The phenomenology of contextual allomorphy is fairly well understood, even if the exact mechanisms are under debate. A similar concept that has only recently gained currency is contextual allosemy. The idea is the same. One morpheme may have a number of exponents competing for insertion at PF; this is allomorphy. One morpheme may also have a number of interpretations competing for insertion at LF; this is allosemy. Recent discussions can be found in Wood & Marantz (2017) and Myler & Marantz (submitted).

Kratzer (1996) proposed that Voice introduces the Agent role for eventualities (24) and that Holder introduces the Holder role for states (25).

(24) feed the dog:
     a. ⟦feed the dog⟧ = λe.feed(the dog, e)
     b. ⟦Voice⟧ = λxλe.Agent(x, e)
     c. ⟦Voice feed the dog⟧ = λxλe.Agent(x, e) & feed(the dog, e)

(25) own the dog:
     a. ⟦own the dog⟧ = λs.own(the dog, s)
     b. ⟦Holder⟧ = λxλs.Holder(x, s)
     c. ⟦Holder own the dog⟧ = λxλs.Holder(x, s) & own(the dog, s)
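As a purely expository sketch (not part of Kratzer's formalism), the composition step from (24b) and (24a) to (24c) – Event Identification conjoining the Agent-introducing function with the event predicate – can be rendered with closures standing in for the lambda terms. The string representations are assumptions for illustration.

```python
# Illustrative sketch of Event Identification (Kratzer 1996):
# combine a <e,<s,t>>-type function with an <s,t>-type event predicate.
def feed_the_dog(e):
    return f"feed(the dog,{e})"    # λe.feed(the dog,e)

def voice(x):
    return lambda e: f"Agent({x},{e})"  # λxλe.Agent(x,e)

def event_identification(f, g):
    """λxλe. f(x)(e) & g(e)"""
    return lambda x: lambda e: f"{f(x)(e)} & {g(e)}"

combined = event_identification(voice, feed_the_dog)
print(combined("x")("e"))  # Agent(x,e) & feed(the dog,e)
```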

Yet nothing forces Voice and Holder to be separate heads; in fact, this would be surprising given that their syntax and morphology are identical. As explained by Wood (2015), we could just as well posit that Voice has two contextual allosemes: one when it combines with a dynamic event and another one when it combines with a stative event, (26).

(26) ⟦Voice⟧ ↔ { λxλe.Agent(x, e) / (eventuality)
                 λxλs.Holder(x, s) / (state)

Here the contexts are purely semantic, as they should be, given that we are now in LF.
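The choice in (26) can be sketched computationally as a function from semantic context to denotation. The event/state type tags and the string representations below are assumptions made for illustration only, not part of the formal proposal.

```python
# Illustrative sketch of contextual allosemy as in (26): one head, two
# interpretations, selected by the semantic type of the complement.
def agent(x, e):
    return f"Agent({x},{e})"   # λxλe.Agent(x,e)

def holder(x, s):
    return f"Holder({x},{s})"  # λxλs.Holder(x,s)

def voice_alloseme(complement_kind: str):
    """Pick the interpretation of Voice at LF based on its semantic context."""
    if complement_kind == "eventuality":
        return agent
    if complement_kind == "state":
        return holder
    raise ValueError("unknown semantic context")

print(voice_alloseme("eventuality")("x", "e"))  # Agent(x,e)
print(voice_alloseme("state")("x", "s"))        # Holder(x,s)
```

The point mirrored here is that the conditioning contexts are semantic objects (eventualities vs. states), not phonological or syntactic features, since the choice is made at LF.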

I make extensive use of this formalism in order to encode the semantics of functional heads in this book. An alternative could also be considered, whereby there is a proliferation of homophonous heads similar to Voice and Holder. I see no reason to adopt this perspective, especially considering how naturally contextual allosemy fits into the Trivalent framework. We can now survey what this framework does for the puzzles of Hebrew.
