Review of: Tesar, Bruce; Smolensky, Paul: Learnability in Optimality Theory. Cambridge: MIT Press, 2000

LINGUIST List 11.2024

Sat Sep 23 2000

Review: Tesar & Smolensky: Learnability in OT

Editor for this issue: Andrew Carnie <carnie@linguistlist.org>

What follows is another discussion note contributed to our Book Discussion Forum. We expect these discussions to be informal and interactive; and the author of the book discussed is cordially invited to join in. If you are interested in leading a book discussion, look for books announced on LINGUIST as "available for discussion." (This means that the publisher has sent us a review copy.) Then contact Andrew Carnie at carnie@linguistlist.org.

Directory

1. Tania Avgustinova, Book review: Tesar and Smolensky (2000): Learnability in OT

Message 1: Book review: Tesar and Smolensky (2000): Learnability in OT
Date: Wed, 20 Sep 2000 13:28:53 +0200
From: Tania Avgustinova <tania@CoLi.Uni-SB.DE>
Subject: Book review: Tesar and Smolensky (2000): Learnability in OT

Bruce Tesar and Paul Smolensky (2000) Learnability in Optimality Theory

The MIT Press, Cambridge, Massachusetts. 140 pages.

Reviewed by Tania Avgustinova, Saarland University

SYNOPSIS

The book is concerned with the application of formal learning theory to the problem of language acquisition. It examines the implications of Optimality Theory (OT) for language learnability. The main claim is that the very core principles of OT lead to learning principles of constraint demotion, which, in turn, are the basis for a family of algorithms for inferring constraint rankings from linguistic forms. The learning procedure proposed here by Tesar and Smolensky (henceforth T&S) learns both the correct interpretations and the correct grammar simultaneously.

The book is organised as follows.

Chapter 1 (pages 1-18) is devoted to laying out the larger context of this work, and addresses issues of learnability and Universal Grammar (UG), as well as the problem of learning hidden structure. The background, as presented, naturally leads to the central claim of the book, i.e. "that OT provides sufficient structure at the level of grammatical framework itself to allow general but grammatically informed learning algorithms to be formally defined". T&S's approach employs a decomposition of learning into two central subproblems: (i) assigning a structural description to an overt linguistic form given a grammar that may not be correct (RIP: Robust Interpretive Parsing), and (ii) learning a constraint ranking from a set of full structural descriptions (CD: Constraint Demotion).

Chapter 2 (pages 19-32) offers an overview of OT, including illustrations with OT analyses of syllable structure and clausal subject distribution.

Chapter 3 (pages 33-52) discusses the CD principle, stating that constraints violated by grammatical structural descriptions must be demoted in the ranking below constraints violated by competing structural descriptions.

Chapter 4 (pages 53-74) presents experimental results in overcoming ambiguity in overt forms, using a computer implementation of RIP/CD which is applied to an OT system for metrical stress. This is an illustration of how the strategy of iterating between structure assignment and ranking adjustment actually works.

Chapter 5 (pages 75-84) addresses key issues in language learning, e.g., the subset principle, richness of the base and acquisition theory. T&S consider the prospects for extending the same iterative strategy (embodied by RIP/CD) with respect to the language-specific inventory, in order to include the simultaneous learning of rankings and lexical underlying forms.

Chapter 6 (pages 85-90) revisits the relationship between learnability and linguistic theory (or UG).

Chapter 7 (pages 91-110) contains formalisation and proofs of the correctness and data complexity of CD.

Chapter 8 (pages 111-128) contains algorithms for performing production-directed parsing.

Finally, there are notes (pages 129-132), a list of references (pages 133-138) and an index (pages 139-140).

COMMENTS

As the learning proposal presented and evaluated in this book is tightly bound to the central principles of OT, its success can be taken as evidence in favour of T&S's major claim that OT makes possible a particularly strong union of the interests of language learnability and linguistic theory.

In OT, the interaction of constraints is not only possible but explanatorily crucial. Cross-linguistic variation is explained by variation in the relative ranking of the same constraints and, hence, is only possible to the extent that constraints interact. The CD learning algorithm not only tolerates constraint interaction but is based entirely on it. Operating on loser/winner pairs, CD deduces consequences for the grammar from the fact that the winner (a positive example provided to the grammar learner) must be more harmonic than the loser (an alternative, sub-optimal parse of the same input, presumably generated by the grammar learner). Whether a winner/loser pair is informative depends both on the winner and on the loser.
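To make "more harmonic" concrete, here is a minimal Python sketch (not from the book; the constraint names and violation counts are invented purely for illustration). A candidate is more harmonic than a competitor if it does better on the highest-ranked constraint on which their violation profiles differ:

def more_harmonic(hierarchy, winner_marks, loser_marks):
    """Return True if the first mark profile is strictly more harmonic
    than the second under `hierarchy`, a list of constraint names ordered
    from highest- to lowest-ranked. A mark profile maps constraint names
    to the number of violations a candidate incurs."""
    for constraint in hierarchy:
        w = winner_marks.get(constraint, 0)
        l = loser_marks.get(constraint, 0)
        if w != l:
            # The highest-ranked constraint on which the candidates
            # differ decides the comparison.
            return w < l
    return False  # identical profiles: neither is more harmonic

# Toy illustration: the winner violates only the lowest-ranked constraint,
# the loser violates the highest-ranked one, so the winner is preferred.
hierarchy = ["PARSE", "ONSET", "NOCODA"]   # invented ranking
print(more_harmonic(hierarchy, {"NOCODA": 1}, {"PARSE": 1}))  # True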

Importantly, constraint re-ranking is defined entirely in terms of demotion, i.e. all "movement" of constraints is downward in the hierarchy. This makes it possible to avoid disjunctions, which are notoriously problematic in general computational theory. Using demotion only, rather than promotion, means moving the constraints corresponding to the winner's violation marks (which are contained in a conjunction), whereas a hypothetical promotion would move the constraints corresponding to the loser's marks (which are contained in a disjunction). In the case of promotion, it is not clear which of the loser's violations should be promoted: all, some or just one of them. With demotion, there is no such choice to be made, since all constraints violated by the winner must be dominated by the highest-ranked loser mark. The impressive result is that, because CD only demotes constraints as far as necessary, a constraint never gets demoted below its target position and will not be demoted further once it has reached it.
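Stated procedurally, a single demotion step can be sketched as follows (a simplified Python illustration, not the book's implementation). A stratified hierarchy is represented as a list of sets of constraint names, highest-ranked stratum first; marks shared by winner and loser are first cancelled, as in T&S's formulation, and each remaining winner-marked constraint is then demoted to the stratum just below the highest-ranked remaining loser mark:

from collections import Counter

def constraint_demotion_step(strata, winner_marks, loser_marks):
    """One demotion step. `strata` is a list of sets of constraint names,
    highest-ranked first; the mark lists contain one constraint name per
    violation. Returns True if any constraint was demoted."""
    # Mark cancellation: violations shared by winner and loser carry no
    # ranking information and are removed pairwise.
    w, l = Counter(winner_marks), Counter(loser_marks)
    common = w & l
    w, l = w - common, l - common
    if not l:
        return False  # no uncancelled loser marks: nothing to demote

    def stratum_of(constraint):
        for i, stratum in enumerate(strata):
            if constraint in stratum:
                return i
        raise ValueError(f"unknown constraint {constraint!r}")

    # Highest-ranked stratum containing an uncancelled loser mark.
    target = min(stratum_of(c) for c in l)

    demoted = False
    for c in w:
        if stratum_of(c) <= target:
            # Demote only as far as necessary: to the stratum immediately
            # below the highest-ranked uncancelled loser mark.
            strata[stratum_of(c)].discard(c)
            if target + 1 >= len(strata):
                strata.append(set())
            strata[target + 1].add(c)
            demoted = True
    strata[:] = [s for s in strata if s]  # drop strata emptied by demotion
    return demoted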

Starting with all constraints in Con ranked in a top stratum, and applying CD to informative positive evidence as long as such exists, the process converges on a stratified hierarchy such that all totally ranked refinements of that hierarchy correctly account for the learning data. Note that while the target (e.g., adult) grammars are taken to be totally ranked hierarchies, CD operates within a hypothesis space constituted by stratified hierarchies, a space that is largely uncommitted as to the relative ranking of constraints.
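Reusing constraint_demotion_step from the sketch above, this convergence behaviour can be illustrated with three invented constraints and two invented loser/winner pairs; after a few passes no pair triggers any further demotion, and a stratified hierarchy consistent with both pairs remains:

pairs = [
    # (winner_marks, loser_marks): each pair says the winner must end up
    # more harmonic than the loser.
    (["C1"], ["C3"]),   # forces C3 to dominate C1
    (["C3"], ["C2"]),   # forces C2 to dominate C3
]

strata = [{"C1", "C2", "C3"}]   # all constraints start in one top stratum
changed = True
while changed:                   # iterate until no pair demotes anything
    changed = False
    for winner, loser in pairs:
        changed |= constraint_demotion_step(strata, winner, loser)

print(strata)   # [{'C2'}, {'C3'}, {'C1'}], i.e. C2 >> C3 >> C1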

The components of T&S's learning system are all strongly shaped by the optimisation character of the grammar being acquired. The particular structure of grammar under OT - optimisation relative to a hierarchy of constraints - enables them to tie learning the lexicon of underlying forms to the basic operation of the grammar - pairing output structures to inputs - as well as to the assignment of hidden structure to overt learning data.

Defining grammaticality in terms of optimisation over violable constraints, so that constraint interaction can be made the main explanatory mechanism, is an attractive feature of OT in general. The results of T&S provide convincing evidence that OT, linguistic explanation and learnability work together. The authors give a positive answer to the question of whether there are reliable, efficient means for finding a ranking of a given set of constraints that correctly yields a given set of grammatical structural descriptions. On the other hand, the question of whether it is necessary that informative sub-optimal forms or full structural descriptions of positive examples be provided to the learner is answered negatively.

T&S's work is an excellent and rigorous presentation of OT in action. It contains an interesting proposal for how a learner, provided with the universal elements of any OT UG system, and the overt parts of forms grammatical with respect to some grammar admitted by the UG, could learn the grammar, the structural descriptions and the lexicon.

This book can be strongly recommended for introductory and advanced courses in both theoretical and applied linguistics.

===================================

Dr. Tania Avgustinova

Computational Linguistics, Saarland University
Postfach 151150, 66041 Saarbruecken, Germany

tania@coli.uni-sb.de, http://www.coli.uni-sb.de/~tania/

(+49) (681) 302.4504 (phone), (+49) (681) 302 4115 (secretary), (+49) (681) 302.4700 (fax)

