(1)

Helmut Seidl

Program Optimization

TU München

Winter 2011/12

(2)

Organization

Dates:

Lecture: Monday, 12:30–14:00; Wednesday, 12:30–14:00
Tutorials: Thursday, 12:30–14:00 (Kalmer Apinis: apinis@in.tum.de)

Material: slides, recording :-)

simulator environment

Programmanalyse und Transformation, Springer, 2010

(3)

Grades:

• Bonus for homeworks

• written exam

(4)

Proposed Content:

1. Avoiding redundant computations

→ available expressions

→ constant propagation/array-bound checks

→ code motion

2. Replacing expensive with cheaper computations

→ peephole optimization

→ inlining

→ reduction of strength

(5)

3. Exploiting Hardware

→ Instruction selection

→ Register allocation

→ Scheduling

→ Memory management

(6)

0 Introduction

Observation 1:

Intuitive programs often are inefficient.

Example:

void swap (int i, int j) {
    int t;
    if (a[i] > a[j]) {
        t = a[j];
        a[j] = a[i];
        a[i] = t;
    }
}

(7)

Inefficiencies:

• Addresses a[i], a[j] are computed three times :-(

• Values a[i], a[j] are loaded twice :-(

Improvement:

• Use a pointer to traverse the array a;

• store the values of a[i], a[j]!

(8)

void swap (int *p, int *q) {
    int t, ai, aj;
    ai = *p; aj = *q;
    if (ai > aj) {
        t = aj;
        *q = ai;
        *p = t;   // t can also be eliminated!
    }
}

(9)

Observation 2:

Higher programming languages (even C :-) abstract from hardware and efficiency.

It is up to the compiler to adapt an intuitively written program to the hardware.

Examples:

→ Filling of delay slots;

→ Utilization of special instructions;

→ Re-organization of memory accesses for better cache behavior;

→ Removal of (useless) overflow/range checks.

(10)

Observation 3:

Program improvements need not always be correct :-(

Example:

y = f() + f();   ⇒   y = 2 * f();

Idea:

Save the second evaluation of f() ...

Problem:

The second evaluation may return a result different from the first; (e.g., because f() reads from the input :-)

(12)

Consequences:

=⇒ Optimizations have assumptions.

=⇒ The assumptions must be:

• formalized,

• checked :-)

=⇒ It must be proven that the optimization is correct, i.e., preserves the semantics !!!

(13)

Observation 4:

Optimization techniques depend on the programming language:

→ which inefficiencies occur;

→ how analyzable programs are;

→ how difficult/impossible it is to prove correctness ...

Example: Java

(14)

Unavoidable Inefficiencies:

∗ Array-bound checks;

∗ Dynamic method invocation;

∗ Bombastic object organization ...

Analyzability:

+ no pointer arithmetic;

+ no pointer into the stack;

− dynamic class loading;

− reflection, exceptions, threads, ...

(15)

Correctness proofs:

+ more or less well-defined semantics;

− features, features, features;

− libraries with changing behavior ...

(16)

... in this course:

a simple imperative programming language with:

• variables // registers

• R = e; // assignments

• R = M[e]; // loads

• M[e1] = e2; // stores

• if (e) s1 else s2 // conditional branching

• goto L; // no loops :-)

(17)

Note:

• To begin with, we omit procedures :-)

• External procedures are taken into account through a statement f() for an unknown procedure f.

==⇒ intra-procedural

==⇒ kind of an intermediate language into which (almost) everything can be translated.

Example: swap()

(18)

0 : A1 = A0 + 1 ∗ i;   // A0 == &a
1 : R1 = M[A1];        // R1 == a[i]
2 : A2 = A0 + 1 ∗ j;
3 : R2 = M[A2];        // R2 == a[j]
4 : if (R1 > R2) {
5 :   A3 = A0 + 1 ∗ j;
6 :   t = M[A3];
7 :   A4 = A0 + 1 ∗ j;
8 :   A5 = A0 + 1 ∗ i;
9 :   R3 = M[A5];
10 :  M[A4] = R3;
11 :  A6 = A0 + 1 ∗ i;
12 :  M[A6] = t;
}

(19)

Optimization 1:

1 ∗ R ==⇒ R

Optimization 2:

Reuse of subexpressions

A1 == A5 == A6
A2 == A3 == A4
M[A1] == M[A5]
M[A2] == M[A3]

R1 == R3

(20)

By this, we obtain:

A1 = A0 + i;
R1 = M[A1];
A2 = A0 + j;
R2 = M[A2];
if (R1 > R2) {
    t = R2;
    M[A2] = R1;
    M[A1] = t;
}

(21)

Optimization 3:

Contraction of chains of assignments :-)

Gain:

        before   after
+          6       2
∗          6       0
load       4       2
store      2       2
>          1       1

(22)

1 Removing superfluous computations

1.1 Repeated computations

Idea:

If the same value is computed repeatedly, then

→ store it after the first computation;

→ replace every further computation through a look-up!

(23)

Problem:

Identify repeated computations!

Example:

z = 1;

y = M[17];

A : x1 = y + z;
. . .
B : x2 = y + z;

(24)

Note:

B is a repeated computation of the value of y + z, if:

(1) A is always executed before B; and

(2) y and z at B have the same values as at A :-)

==⇒ We need:

→ an operational semantics :-)

→ a method which identifies at least some repeated computations ...

(25)

Background 1: An Operational Semantics

We choose a small-step operational approach.

Programs are represented as control-flow graphs.

In the example:

[Control-flow graph of swap(): from start, edges labeled A1 = A0 + 1 ∗ i;  R1 = M[A1];  A2 = A0 + 1 ∗ j;  R2 = M[A2]; then a branch with edges Pos (R1 > R2) and Neg (R1 > R2); the Pos branch continues with A3 = A0 + 1 ∗ j; …; both branches reach stop.]

(26)

Thereby, represent:

vertex : program point
start : program start
stop : program exit
edge : step of computation

Edge Labelings:

Test : Pos (e) or Neg (e)
Assignment : R = e;
Load : R = M[e];
Store : M[e1] = e2;

(28)

Computations follow paths.

Computations transform the current state

s = (ρ, µ)

where:

ρ : Vars → int   contents of registers
µ : N → int      contents of storage

Every edge k = (u, lab, v) defines a partial transformation [[k]] = [[lab]]

(29)

[[;]] (ρ, µ) = (ρ, µ)
[[Pos (e)]] (ρ, µ) = (ρ, µ)   if [[e]]ρ ≠ 0
[[Neg (e)]] (ρ, µ) = (ρ, µ)   if [[e]]ρ = 0

// [[e]] : evaluation of the expression e, e.g.
// [[x + y]] {x ↦ 7, y ↦ −1} = 6
// [[!(x == 4)]] {x ↦ 5} = 1

[[R = e;]] (ρ, µ) = (ρ ⊕ {R ↦ [[e]]ρ}, µ)

(32)

[[R = M[e];]] (ρ, µ) = (ρ ⊕ {R ↦ µ([[e]]ρ)}, µ)
[[M[e1] = e2;]] (ρ, µ) = (ρ, µ ⊕ {[[e1]]ρ ↦ [[e2]]ρ})

Example:

[[x = x + 1;]] ({x ↦ 5}, µ) = (ρ, µ) where:

ρ = {x ↦ 5} ⊕ {x ↦ [[x + 1]] {x ↦ 5}}
  = {x ↦ 5} ⊕ {x ↦ 6}
  = {x ↦ 6}

(33)

A path π = k1 k2 . . . km is a computation for the state s if:

s ∈ def ([[km]] ◦ . . . ◦ [[k1]])

The result of the computation is:

[[π]] s = ([[km]] ◦ . . . ◦ [[k1]]) s

Application:

Assume that we have computed the value of x + y at program point u:

[Figure: a path π leading from program point u to program point v; x + y is computed at u.]

(34)

Idea:

If x and y have not been modified in π, then evaluation of x + y at v must return the same value as evaluation at u :-)

We can check this property at every edge in π :-}

More generally:

Assume that the values of the expressions A = {e1, . . . , er} are available at u.

Every edge k transforms this set into a set [[k]] A of expressions whose values are available after execution of k ...

(37)

... and these transformations can be composed to the effect of a path π = k1 . . . kr:

[[π]] = [[kr]] ◦ . . . ◦ [[k1]]

The effect [[k]] of an edge k = (u, lab, v) only depends on the label lab, i.e., [[k]] = [[lab]] where:

[[;]] A = A
[[Pos(e)]] A = [[Neg(e)]] A = A ∪ {e}
[[x = e;]] A = (A ∪ {e}) \ Exprx   where Exprx is the set of all expressions which contain x

(42)

[[x = M[e];]] A = (A ∪ {e}) \ Exprx
[[M[e1] = e2;]] A = A ∪ {e1, e2}

By that, every path can be analyzed :-)

A given program may admit several paths :-(

For any given input, another path may be chosen :-((

==⇒ We require the set:

A[v] = ⋂ { [[π]] ∅ | π : start → v }

(43)

Concretely:

→ We consider all paths π which reach v.

→ For every path π, we determine the set of expressions which are available along π.

→ Initially at program start, nothing is available :-)

→ We compute the intersection ==⇒ safe information

How do we exploit this information ???

(45)

Transformation 1.1:

We provide a novel register Te as storage for the value of e:

[Figure: the edge  u --[x = e;]--> v  is replaced by the two edges  u --[Te = e;]--> --[x = Te;]--> v]
