2. Syntax and Type Analysis

(1)

Compilers and Language Processing Tools

Summer Term 2013

Arnd Poetzsch-Heffter Annette Bieniusa

Software Technology Group TU Kaiserslautern

(2)

Content of Lecture

1. Introduction

2. Syntax and Type Analysis
   2.1 Lexical Analysis
   2.2 Context-Free Syntax Analysis
   2.3 Context-Dependent Analysis (Semantic Analysis)

3. Translation to Intermediate Representation
   3.1 Languages for Intermediate Representation
   3.2 Translation of Imperative Language Constructs
   3.3 Translation of Object-Oriented Language Constructs
   3.4 Translation of Procedures

4. Optimization and Code Generation
   4.1 Assembly and Machine Code
   4.2 Optimization
   4.3 Register Allocation

(3)

Content of Lecture (2)

5. Selected Topics in Compiler Construction
   5.1 Garbage Collection
   5.2 Just-in-time Compilation
   5.3 XML Processing (DOM, SAX, XSLT)

(4)

2. Syntax and Type Analysis

(5)

Main learning objectives

Know the tasks of different syntax analysis phases

Understand how syntax analysis phases cooperate

Know the specification techniques for syntax analysis and how to apply them

Understand the generation techniques

Be able to use the mentioned tools

Understand lexical analysis

Understand context-free analysis (parsing)

Understand name and type analysis (context-sensitive)

(6)

Tasks of syntax analysis

Check if input is syntactically correct

Dependent on the result:
- Error message
- Generation of appropriate data structure for subsequent processing

(7)

Syntax and type analysis phases

Lexical analysis:

Character stream → token stream (or symbol stream)

Context-free analysis:

Token stream → syntax tree

Context-sensitive analysis/semantic analysis:

Syntax tree → syntax tree with cross references

Pipeline: Source Code (Character Stream) → Scanner → Token Stream → Parser → Syntax Tree → Name and Type Analysis → Attributed Syntax Tree
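To make the pipeline concrete, here is a minimal Java sketch of how the phase interfaces could be typed; the interface and method names (Scanner.nextToken, Parser.parse, NameAndTypeAnalysis.analyze) are illustrative assumptions, not part of the lecture material.

// Hypothetical Java interfaces mirroring the pipeline above (illustrative only).
interface Token {}                      // output of the scanner
interface SyntaxTree {}                 // output of the parser
interface AttributedSyntaxTree {}       // output of name and type analysis

interface Scanner {
    Token nextToken();                  // character stream -> token stream, on demand
}

interface Parser {
    SyntaxTree parse(Scanner scanner);  // token stream -> syntax tree
}

interface NameAndTypeAnalysis {
    AttributedSyntaxTree analyze(SyntaxTree tree);  // syntax tree -> tree with cross references
}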

(8)

Reasons for separation of phases

Lexical and context-free analysis
- Reduced load for context-free analysis, e.g., whitespace is not required for context-free analysis

Context-free and context-sensitive analysis
- Context-sensitive analysis uses the tree structure instead of the token stream
- Advantages for construction of the target data structure

For both cases
- Increased efficiency
- Natural process (cf. natural language)
- More appropriate tool support

(9)

Lexical Analysis

2.2 Lexical Analysis

(10)

Lexical Analysis Introduction

2.2.1 Introduction

(11)

Lexical Analysis Introduction

Tasks of lexical analysis

Break input character stream into a token stream wrt. language definition

Classify tokens into token classes

Representation of tokens
- Hashing of identifiers
- Conversion of constants

Elimination of
- whitespace (spaces, comments, ...)
- external constructs (compiler directives, ...)

(12)

Lexical Analysis Introduction

Tasks of lexical analysis (2)

Terminology

Token/symbol: a word over an alphabet of characters (often with additional information, e.g., token class, encoding, position, ...)

Token class: a set of tokens (identifiers, constants, ...); token classes correspond to the terminal symbols of a context-free grammar

Remark: The terms token and symbol refer to the same concept. The term token is generally used when talking about parsing technology, whereas the term symbol is used when talking about formal languages.

(13)

Lexical Analysis Introduction

Lexical analysis: Example

Input Line 23:

if( A <= 3.14 ) B = B

Token Class   String   Token Information        Line:Col

IF            "if"                               23:3
OPAR          "("                                23:5
ID            "A"      72 (hash)                 23:7
RELOP         "<="     4 (encoding)              23:9
FLOATCONST    "3.14"   3.14 (constant value)     23:12
CPAR          ")"                                23:16
ID            "B"      84 (hash)                 23:20
...

(14)

Lexical Analysis Specification of Scanners

2.2.2 Specification of Scanners

(15)

Lexical Analysis Specification of Scanners

Specification

The specification of the lexical analysis is a part of the language specification.

The two parts of lexical analysis specification:

Scanning algorithm (often only implicit)

Specification of tokens and token classes

(16)

Lexical Analysis Specification of Scanners

Examples: Scanning

1. Statement in C:
   B = B --- A;
   Problem: separation (-- and - are tokens)
   Solution: the longest token is chosen, i.e., B = B -- - A;

2. Java fragment:
   class public { public m() {...} }
   Problem: ambiguity (keyword vs. identifier)
   Solution: precedence rules

(17)

Lexical Analysis Specification of Scanners

Standard scan algorithm (concept)

Scanning is often implemented as a procedure:

The procedure returns the next token

Its state is the remainder of the input

In error cases, it returns the token UNDEF without updating the input

(18)

Lexical Analysis Specification of Scanners

Standard scan algorithm (pseudo code)

CharStream inputRest := input;

Token nextToken() {
  Token curToken := longestTokenPrefix(inputRest);
  inputRest := cut(curToken, inputRest);
  return curToken;
}

where cut is defined as follows:

if curToken ≠ UNDEF, curToken is removed from inputRest;

else inputRest remains unchanged.

(19)

Lexical Analysis Specification of Scanners

Standard scan algorithm (2)

Token longestTokenPrefix(CharStream ir) {
  requires availableChar(ir) > 0
  int curLength := 1;
  String curPrefix := prefix(curLength, ir);
  Token longestToken := UNDEF;
  while ( curLength <= availableChar(ir)
          && isTokenPrefix(curPrefix) ) {
    if (isToken(curPrefix)) {
      longestToken := token(curPrefix);
    }
    curLength++;
    curPrefix := prefix(curLength, ir);
  }
  return longestToken;
}
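As a concrete illustration, here is a minimal runnable Java sketch of nextToken and longestTokenPrefix for a deliberately tiny token set (the keyword do, identifiers, integer constants); all class and method names below are illustrative assumptions, not the lecture's code, and whitespace handling is omitted.

import java.util.regex.Pattern;

public class SimpleScanner {

    // Hypothetical token classes for this sketch.
    enum TokenClass { DO, IDENT, INTCONST, UNDEF }

    record Token(TokenClass cls, String lexeme) {}

    private static final Pattern IDENT_RE = Pattern.compile("[a-zA-Z_][a-zA-Z_0-9]*");
    private static final Pattern INT_RE   = Pattern.compile("[0-9]+");

    private String inputRest;                        // scanner state: remainder of the input

    SimpleScanner(String input) { this.inputRest = input; }

    // Standard scan algorithm: return the next token and cut it off the input on success.
    Token nextToken() {
        Token cur = longestTokenPrefix(inputRest);
        if (cur.cls() != TokenClass.UNDEF) {
            inputRest = inputRest.substring(cur.lexeme().length());
        }
        return cur;
    }

    // Longest-match loop over growing prefixes, as in the pseudocode above.
    private Token longestTokenPrefix(String ir) {
        Token longest = new Token(TokenClass.UNDEF, "");
        for (int len = 1; len <= ir.length(); len++) {
            String prefix = ir.substring(0, len);
            if (!isTokenPrefix(prefix)) break;       // no token can start like this
            if (isToken(prefix)) longest = new Token(classify(prefix), prefix);
        }
        return longest;
    }

    private boolean isToken(String s) { return classify(s) != TokenClass.UNDEF; }

    // For this token set, every prefix of a token is itself a token or a keyword prefix.
    private boolean isTokenPrefix(String s) {
        return "do".startsWith(s) || IDENT_RE.matcher(s).matches() || INT_RE.matcher(s).matches();
    }

    private TokenClass classify(String s) {
        if (s.equals("do")) return TokenClass.DO;                  // keyword precedence
        if (IDENT_RE.matcher(s).matches()) return TokenClass.IDENT;
        if (INT_RE.matcher(s).matches()) return TokenClass.INTCONST;
        return TokenClass.UNDEF;
    }

    public static void main(String[] args) {
        SimpleScanner s = new SimpleScanner("do12x");
        System.out.println(s.nextToken());           // IDENT "do12x": longest match wins over the keyword
    }
}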

(20)

Lexical Analysis Specification of Scanners

Standard scan algorithm (3)

Predicates to be defined:

isTokenPrefix: String → boolean

isToken: String → boolean

token: String → Token
- yields the token UNDEF if the argument does not represent a token
- selects one of the tokens if several tokens are possible

Remarks:

The standard scan algorithm is used for most modern languages, but not, e.g., for FORTRAN:
- DO 7 I = 1.25   Here "DO 7 I" is an identifier.
- DO 7 I = 1,25   Here "DO" is a keyword.

Error cases are not handled

The complete realization of longestTokenPrefix is discussed in the following subsections.

(21)

Lexical Analysis Specification of Scanners

Specification of token classes

Token classes are defined by regular expressions (REs).

REs specify the set of strings that belong to a certain token class.

(22)

Lexical Analysis Specification of Scanners

Regular Expressions

Let Σ be an alphabet, i.e., a non-empty set of characters. Σ* is the set of all words over Σ, ε is the empty word.

Definition (Regular expressions, regular languages)

ε is a RE and specifies the language L = {ε}.

Each a ∈ Σ is a RE and specifies the language L = {a}.

Let r and s be two REs specifying the languages R and S, resp.
Then the following are REs and specify the language L:
- (r|s) with L = R ∪ S (union)
- (rs) with L = { vw | v ∈ R, w ∈ S } (concatenation)
- r* with L = { v1 ... vn | vi ∈ R, n ≥ 0 } (Kleene star)

A language L ⊆ Σ* is called regular if there exists a RE r defining L.

(23)

Lexical Analysis Specification of Scanners

Regular Expressions (2)

Remarks:

L = ∅ is not regular according to the definition, but is often considered regular.

Other operators, e.g., +, ?, ., [], can be defined using the basic operators, e.g.
- r+ ≡ (r r*)
- [aBd] ≡ a|B|d
- [a-g] ≡ a|b|c|d|e|f|g

Caution: Regular expressions only define valid tokens and do not specify the program or translation units of a programming language.
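The same derived operators exist in Java's java.util.regex, which can be used to prototype token-class definitions; this is only an illustration of the notation, not how generated scanners work internally.

import java.util.regex.Pattern;

public class TokenClassDemo {
    public static void main(String[] args) {
        // Derived operators from the slide, written as java.util.regex patterns.
        Pattern id      = Pattern.compile("[a-zA-Z_][a-zA-Z_0-9]*"); // [] and * as above
        Pattern flt     = Pattern.compile("[0-9]+\\.[0-9]+");        // + instead of (r r*)
        Pattern letters = Pattern.compile("[a-g]");                  // [a-g] = a|b|...|g

        System.out.println(id.matcher("B_1").matches());    // true
        System.out.println(flt.matcher("3.14").matches());  // true
        System.out.println(letters.matcher("e").matches()); // true
        System.out.println(flt.matcher("3.").matches());    // false: REs only define valid tokens
    }
}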

(24)

Lexical Analysis Implementation of Scanners

2.2.3 Implementation of Scanners

(25)

Lexical Analysis Implementation of Scanners

Implementation of scanners

A scanner generator takes as input a sequence of regular expressions and actions (the input language of the scanner generator) and produces a scanner program (usually in a programming language).

(26)

Lexical Analysis Implementation of Scanners

Scanner generator: JFlex

Typical use of JFlex:

java -jar JFlex.jar Example.jflex
javac Yylex.java

Actions are written in Java

Examples:

1. Regular expression in JFlex
   [a-zA-Z_][a-zA-Z_0-9]*

2. JFlex input with abbreviations
   ZI = [0-9]
   BU = [a-zA-Z_]
   BUZI = [a-zA-Z_0-9]
   %%

(27)

Lexical Analysis Implementation of Scanners

A complete JFlex example

enum Token { DO, DOUBLE, IDENT, FLOATCONST, STRING; }

%%

%line
%column
%debug
%type Token   // declare token type

ZI = [0-9]
BU = [a-zA-Z_]
BUZI = [a-zA-Z_0-9]
ZE = [a-zA-Z_0-9!?\]\[\. \t...]
WhiteSpace = [ \t\n]

%%

{WhiteSpace}       { }
"double"           { return Token.DOUBLE; }
"do"               { return Token.DO; }
{BU}{BUZI}*        { return Token.IDENT; }
{ZI}+\.{ZI}+       { return Token.FLOATCONST; }
\"({ZE}|\\\")*\"   { return Token.STRING; }

(28)

Lexical Analysis Implementation of Scanners

Scanner generators

Scanner generation uses the equivalence between
- Regular expressions
- Non-deterministic finite automata (NFA)
- Deterministic finite automata (DFA)

The construction method is based on two steps:
- Regular expressions → NFA
- NFA → DFA

(29)

Lexical Analysis Implementation of Scanners

Definition of NFA

Definition (Non-deterministic finite automaton)

A non-deterministic finite automaton is defined as a 5-tuple M = (Σ, Q, Δ, q0, F) where

Σ is the input alphabet

Q is the set of states

q0 ∈ Q is the initial state

F ⊆ Q is the set of final states

Δ ⊆ Q × (Σ ∪ {ε}) × Q is the transition relation.
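A direct Java transcription of this 5-tuple could look as follows; encoding ε as a null character label and states as integers is an illustrative choice, not the lecture's representation.

import java.util.*;

// Sketch of an NFA as a data structure: states are ints, ε is encoded as null.
class NFA {
    static final Character EPSILON = null;

    final int initialState;                                   // q0
    final Set<Integer> finalStates = new HashSet<>();         // F
    // transition relation Δ ⊆ Q × (Σ ∪ {ε}) × Q, stored as an adjacency map
    final Map<Integer, Map<Character, Set<Integer>>> delta = new HashMap<>();

    NFA(int initialState) { this.initialState = initialState; }

    void addTransition(int from, Character label, int to) {   // label may be EPSILON
        delta.computeIfAbsent(from, k -> new HashMap<>())
             .computeIfAbsent(label, k -> new HashSet<>())
             .add(to);
    }

    Set<Integer> successors(int state, Character label) {
        Map<Character, Set<Integer>> row = delta.get(state);
        return (row == null) ? Set.of() : row.getOrDefault(label, Set.of());
    }
}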

(30)

Lexical Analysis Implementation of Scanners

Regular expressions → NFA

Principle: For each regular sub-expression, construct an NFA with exactly one start state and one final state that accepts the same language.

Translation scheme (figure): an NFA fragment with exactly one start and one final state for each base case ε and a, and compositions via ε-transitions for (r|s), (rs), and r*; diagrams omitted.

(31)

Lexical Analysis Implementation of Scanners

Regular expressions → NFA (2)

(Translation scheme continued: NFA fragment diagrams for the remaining cases; figure omitted.)
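The following is a compact Java sketch of this Thompson-style construction; the fragment representation and method names are illustrative assumptions, not the lecture's code.

import java.util.*;

// Thompson-style construction: each regular (sub)expression is translated into an
// NFA fragment with exactly one start and one final state, glued by ε-transitions.
public class ThompsonConstruction {

    // ε is encoded as null; transitions: from-state -> (label -> set of to-states)
    static final Character EPS = null;
    final Map<Integer, Map<Character, Set<Integer>>> delta = new HashMap<>();
    int nextState = 0;

    record Fragment(int start, int end) {}    // one start state, one final state

    int newState() { return nextState++; }

    void edge(int from, Character label, int to) {
        delta.computeIfAbsent(from, k -> new HashMap<>())
             .computeIfAbsent(label, k -> new HashSet<>()).add(to);
    }

    Fragment epsilon() {                      // RE ε
        int s = newState(), f = newState();
        edge(s, EPS, f);
        return new Fragment(s, f);
    }

    Fragment symbol(char a) {                 // RE a
        int s = newState(), f = newState();
        edge(s, a, f);
        return new Fragment(s, f);
    }

    Fragment union(Fragment r, Fragment s) {  // RE (r|s)
        int start = newState(), end = newState();
        edge(start, EPS, r.start()); edge(start, EPS, s.start());
        edge(r.end(), EPS, end);     edge(s.end(), EPS, end);
        return new Fragment(start, end);
    }

    Fragment concat(Fragment r, Fragment s) { // RE (rs)
        edge(r.end(), EPS, s.start());
        return new Fragment(r.start(), s.end());
    }

    Fragment star(Fragment r) {               // RE r*
        int start = newState(), end = newState();
        edge(start, EPS, r.start()); edge(start, EPS, end);
        edge(r.end(), EPS, r.start()); edge(r.end(), EPS, end);
        return new Fragment(start, end);
    }

    public static void main(String[] args) {
        ThompsonConstruction t = new ThompsonConstruction();
        // NFA for (a|b)*a, built bottom-up from its sub-expressions
        Fragment f = t.concat(t.star(t.union(t.symbol('a'), t.symbol('b'))), t.symbol('a'));
        System.out.println("start=" + f.start() + " final=" + f.end());
        System.out.println("transitions: " + t.delta);
    }
}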

(32)

Lexical Analysis Implementation of Scanners

Example: Construction of NFA

(Figure: NFA obtained by applying the translation scheme to the token definitions of the JFlex example, with states s0 to s26; diagram omitted.)

(33)

Lexical Analysis Implementation of Scanners

-closure

The function closure computes the ε-closure of a set of states s1, ..., sn.

Definition (ε-closure)

For an NFA M = (Σ, Q, Δ, q0, F) and a state q ∈ Q, the ε-closure of q is defined by

ε-closure(q) = { p ∈ Q | p reachable from q via ε-transitions }

For S ⊆ Q, the ε-closure of S is defined by

ε-closure(S) = ⋃_{s ∈ S} ε-closure(s)
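A small Java sketch of this closure computation as a graph search over ε-edges; the adjacency-map encoding of the ε-transitions is an assumption for illustration.

import java.util.*;

public class EpsilonClosure {

    // ε-closure(S): all states reachable from S using only ε-transitions
    // (including the states of S themselves). epsilonEdges maps a state to the
    // set of states reachable by a single ε-transition.
    static Set<Integer> closure(Set<Integer> states, Map<Integer, Set<Integer>> epsilonEdges) {
        Set<Integer> result = new HashSet<>(states);
        Deque<Integer> worklist = new ArrayDeque<>(states);
        while (!worklist.isEmpty()) {
            int q = worklist.pop();
            for (int p : epsilonEdges.getOrDefault(q, Set.of())) {
                if (result.add(p)) {       // newly reached state: explore it further
                    worklist.push(p);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // ε-edges: 0 -ε-> 1, 1 -ε-> 2, 3 -ε-> 0
        Map<Integer, Set<Integer>> eps = Map.of(0, Set.of(1), 1, Set.of(2), 3, Set.of(0));
        System.out.println(closure(Set.of(0), eps));    // [0, 1, 2]
        System.out.println(closure(Set.of(3), eps));    // [0, 1, 2, 3] (in some order)
    }
}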

(34)

Lexical Analysis Implementation of Scanners

Longest token prefix with NFA

Token longestTokenPrefix(char[] ir) {
  requires length(ir) > 0
  // ir[0] contains the first character
  StateSet curState := closure( {s0} );
  int curLength := 0;
  int tokenLength := undef;
  while (curLength <= length(ir) && !isEmptySet(curState)) {
    if (containsFinalState(curState)) {
      tokenLength := curLength;
    }
    curState := closure(successor(curState, ir[curLength]));
    curLength++;
  }
  return token(prefix(ir, tokenLength));
}

(35)

Lexical Analysis Implementation of Scanners

Longest token prefix with NFA (2)

Remark:

Problem of ambiguity:

If more than one token matches the longest input prefix, the procedure token nondeterministically returns one of them.

(36)

Lexical Analysis Implementation of Scanners

NFA → DFA

Principle:

For each NFA, a DFA can be constructed that accepts the same language. (In general, this does not hold for NFAs with output.)

Properties of a DFA:

No ε-transitions

Transitions are deterministic given the input character

(37)

Lexical Analysis Implementation of Scanners

NFA → DFA (2)

Definition (Deterministic finite automaton)

A deterministic finite automaton is defined as a 5-tuple M = (Σ, Q, Δ, q0, F) where

Σ is the input alphabet

Q is the set of states

q0 ∈ Q is the initial state

F ⊆ Q is the set of final states

Δ : Q × Σ → Q is the transition function.

(38)

Lexical Analysis Implementation of Scanners

NFA → DFA (3)

Construction (according to John Myhill):

The states of the DFA are subsets of NFA states (powerset construction). Subsets of finite sets are also finite.

The start state of the DFA is the ε-closure of the NFA start state.

The final states of the DFA are the sets of states that contain an NFA final state.

The successor state of a state S in the DFA under input a is obtained by
- computing all successors p of states q ∈ S under a in the NFA
- and adding the ε-closure of each such p.

(39)

Lexical Analysis Implementation of Scanners

NFA → DFA (4)

If working with character classes (e.g., [a-f]), the characters and character classes at outgoing transitions must be disjoint.

Completion of the automaton for error handling:
- Insert an additional (final) state nT (nonToken state)
- Add a transition from state s to nT for each character for which no outgoing transition from s exists.

(40)

Lexical Analysis Implementation of Scanners

NFA → DFA (5)

Definition (DFA for NFA)

Let M = (Σ, Q, Δ, q0, F) be an NFA. Then the DFA M′ corresponding to the NFA M is defined as M′ = (Σ, Q′, Δ′, q0′, F′) where

the set of states is Q′ ⊆ P(Q), the power set of Q

the initial state q0′ is the ε-closure of q0

the final states are F′ = { S ⊆ Q | S ∩ F ≠ ∅ }

Δ′(S, a) = ε-closure({ p | (q, a, p) ∈ Δ, q ∈ S }) for all a ∈ Σ.
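A Java sketch of the successor computation Δ′(S, a) at the heart of this powerset construction; the encoding of the NFA as two adjacency maps and the tiny example automaton are assumptions for illustration.

import java.util.*;

// Subset (powerset) construction sketch: DFA states are sets of NFA states.
public class SubsetConstruction {

    static Set<Integer> closure(Set<Integer> states, Map<Integer, Set<Integer>> eps) {
        Set<Integer> result = new HashSet<>(states);
        Deque<Integer> work = new ArrayDeque<>(states);
        while (!work.isEmpty()) {
            for (int p : eps.getOrDefault(work.pop(), Set.of()))
                if (result.add(p)) work.push(p);
        }
        return result;
    }

    // Δ'(S, a) = ε-closure({ p | (q, a, p) ∈ Δ, q ∈ S })
    static Set<Integer> dfaSuccessor(Set<Integer> S, char a,
                                     Map<Integer, Map<Character, Set<Integer>>> delta,
                                     Map<Integer, Set<Integer>> eps) {
        Set<Integer> succ = new HashSet<>();
        for (int q : S)
            succ.addAll(delta.getOrDefault(q, Map.of()).getOrDefault(a, Set.of()));
        return closure(succ, eps);
    }

    public static void main(String[] args) {
        // Tiny NFA for (a|b)*a : 0 -a-> {0,1}, 0 -b-> {0}; final state 1; no ε-edges.
        Map<Integer, Map<Character, Set<Integer>>> delta =
                Map.of(0, Map.of('a', Set.of(0, 1), 'b', Set.of(0)));
        Map<Integer, Set<Integer>> eps = Map.of();
        Set<Integer> q0 = closure(Set.of(0), eps);              // DFA start state {0}
        Set<Integer> onA = dfaSuccessor(q0, 'a', delta, eps);   // {0,1}: contains NFA final state 1
        System.out.println(q0 + " --a--> " + onA);
        System.out.println(onA + " --b--> " + dfaSuccessor(onA, 'b', delta, eps)); // back to {0}
    }
}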

(41)

Lexical Analysis Implementation of Scanners

Example: DFA

(Figure: DFA obtained by the powerset construction from the NFA of the previous example; DFA states are sets of NFA states such as {s0, s1, s2, s5, s12, s14, s18}. For readability, transitions to the nonToken state nT are only sketched.)

(42)

Lexical Analysis Implementation of Scanners

Longest token prefix with DFA

Token longestTokenPrefix(char[] ir) {
  requires length(ir) > 0
  // ir[0] contains the first character
  State curState := startState;
  int curLength := 0;
  int tokenLength := undef;
  while (curLength <= length(ir) && curState != nT) {
    if (curState is FinalState) {
      tokenLength := curLength;
    }
    curState := successor(curState, ir[curLength]);
    curLength++;
  }
  return token(prefix(ir, tokenLength));
}

(43)

Lexical Analysis Implementation of Scanners

Longest token prefix with DFA (2)

Remarks:

Computation of closure at construction time, not at runtime.

(Principle: Do as much statically as you can!)

Problem of ambiguity still not solved. However, many scanner generators allow the user to control which token is returned. For example, JFlex returns the token of the first rule in the JFlex file that matches the longest input prefix.

(44)

Lexical Analysis Implementation of Scanners

Longest token prefix with DFA (3)

Implementation Aspects:

The constructed DFA can be minimized.

Input buffering is important: cyclic arrays are often used (caution with the maximal token length, e.g., in the case of comments)

Encode the DFA in a table

Choose a suitable partitioning of the alphabet in order to reduce the number of transitions (i.e., the size of the table)

Interface with the parser: usually the parser asks proactively for the next token
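A small Java sketch of the "encode the DFA in a table" idea: states index rows, character classes index columns, and the longest-match loop from the previous slide runs over the table. The concrete automaton below (recognizing integer and float constants) is an illustrative assumption.

public class TableDrivenDfa {
    // Character classes: 0 = digit, 1 = '.', 2 = other (partitioning reduces table size)
    static int charClass(char c) {
        if (c >= '0' && c <= '9') return 0;
        return c == '.' ? 1 : 2;
    }

    // States: 0 = start, 1 = digits (final), 2 = digits '.', 3 = digits '.' digits (final), 4 = nT
    static final int NT = 4;
    static final int[][] TRANS = {
        // digit  '.'  other
        {  1,     4,   4 },   // 0: start
        {  1,     2,   4 },   // 1: INTCONST seen
        {  3,     4,   4 },   // 2: after '.'
        {  3,     4,   4 },   // 3: FLOATCONST seen
        {  4,     4,   4 },   // 4: nT (error state)
    };
    static final boolean[] FINAL = { false, true, false, true, false };

    // Longest-match loop over the table; returns the length of the longest token prefix.
    static int longestTokenPrefixLength(String input) {
        int state = 0, tokenLength = -1;
        for (int i = 0; i < input.length() && state != NT; i++) {
            if (FINAL[state]) tokenLength = i;
            state = TRANS[state][charClass(input.charAt(i))];
        }
        if (state != NT && FINAL[state]) tokenLength = input.length();
        return tokenLength;   // -1 means UNDEF
    }

    public static void main(String[] args) {
        System.out.println(longestTokenPrefixLength("3.14)"));  // 4  -> "3.14"
        System.out.println(longestTokenPrefixLength("3.x"));    // 1  -> "3" (INTCONST)
        System.out.println(longestTokenPrefixLength(".5"));     // -1 -> UNDEF
    }
}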

(45)

Lexical Analysis Implementation of Scanners

Recommended reading

Wilhelm, Seidl, Hack: Vol. 2, Chap. 2 (pp. 13–44)

Wilhelm, Maurer: Chap. 7, pp. 239–269 (more theoretical)

Appel: Chap. 2, pp. 16–37 (more practical)

Additional reading:

Aho, Sethi, Ullman: Chap. 3 (very detailed)
