

In the document Autonomy in the workplace (pages 60-64)

2.4 A model of investments in Autonomy Support

2.4.1 Model setup

We investigate a principal's optimal investments in providing Autonomy Support with a view to incentivizing innovative activity of the worker. We expect optimal investments to vary over time, as the leadership behaviour and actions required differ. For example, extensive methods training and occasional feedback require behaviour of different time intensity. We choose a two-period model to incorporate this notion into a principal-agent model. It allows us to keep the model as tractable as possible whilst capturing the dynamic aspect of Autonomy Support investments.

For the purpose of the research question, we concentrate only on innovative activity $i$ of the agent as incentivized through Autonomy Support by the principal. When the agent (he) engages in innovative activity $i$, the probability of his efforts resulting in a successful innovation increases according to the probability function

$$\Pr(i) = \frac{i}{1+i}, \qquad \Pr(i) \in [0,1) \ \text{for} \ i \ge 0.$$

The agent experiences effort costs from innovative activity, which decrease in the amount of Autonomy Support available to him. Discounted prior and current Autonomy Support constitute this available amount. The discount factor $\delta$ accounts for the fact that Autonomy Support investments fade out over time: a one-period intervention does not carry into future periods with undiminished motivational power. We factor in the Autonomy Support provided previously, either in employment or personal relationships, by assuming a personal consolidated start level $\bar{s}$ of an agent. Alternatively, the start value can be interpreted as the agent's personal autonomous motivation level for innovation prior to employment (Shalley et al., 2009). Different agents therefore have different initial levels of $\bar{s}$. The total value of Autonomy Support available in each period is

$$\bar{s}_1 = \delta\bar{s} + s_{a,1}$$

$$\bar{s}_2 = \delta^2\bar{s} + \delta s_{a,1} + s_{a,2}$$

denoting the sum of the discounted start value $\bar{s}$ and the principal's investments in the prior, discounted, and the current period.

The payoff the agent experiences from successful innovation is denoted as $v_A$ and normalized to one. We understand this payoff to be the utility derived from being creatively active and seeing one's innovative activity come to fruition, not as a monetary reward. Taken together, the agent's utility as a function of his innovative activity $i$ for periods $t = 1, 2$ is

$$U_A(i, 1) = v_A \frac{i}{1+i} - \frac{i}{\delta\bar{s} + s_{a,1}} = v_A \frac{i}{1+i} - \frac{i}{\bar{s}_1} \qquad (2.1)$$

$$U_A(i, 2) = v_A \frac{i}{1+i} - \frac{i}{\delta^2\bar{s} + \delta s_{a,1} + s_{a,2}} = v_A \frac{i}{1+i} - \frac{i}{\bar{s}_2} \qquad (2.2)$$

The principal (she) receives a payoff $v_P$ when the agent's innovation is successful, and bears costs $c(s_{a,t}) = \alpha s_{a,t}$ from providing Autonomy Support at time $t$. $\alpha$ denotes the principal's marginal cost. Alternatively, it can be interpreted as her ability to provide Autonomy Support.

The principal is forward looking and accounts for the fact that Autonomy Support provided in period 1 impacts innovative activity, and thus innovation profit, in both periods. The principal discounts her profit in period 2 with $\beta$ and includes it in her first-period considerations. The profit functions for periods $t = 1, 2$ are

$$\Pi_P(s_{a,1}, 1) = v_P \frac{i}{1+i} - \alpha s_{a,1} + \beta \Pi_P(s_{a,2}, 2) \qquad (2.3)$$

$$\Pi_P(s_{a,2}, 2) = v_P \frac{i}{1+i} - \alpha s_{a,2} \qquad (2.4)$$

We assume that the principal can perfectly observe the initial amount of Autonomy Support $\bar{s}$ the agent enters the work relationship with. We look at situations in which the benefits from Autonomy Support and its associated costs are such that the principal considers investment.

We achieve this by assuming that the benefit-cost ratio satisfies $\frac{v_P}{\alpha} > 2$. The timeline of the model is as follows. In each period, the agent's total Autonomy Support is depreciated before the principal decides on her investment. Given the resulting Autonomy Support level, the agent chooses his innovation effort, and the payoffs of the period are realized.

Timeline: $t=0$: start level $\bar{s}$ $\rightarrow$ (depreciation $\delta$) $\rightarrow$ $t=1$: investment $s_{a,1}$, effort $i_1$, payoffs $\rightarrow$ (depreciation $\delta$) $\rightarrow$ $t=2$: investment $s_{a,2}$, effort $i_2$, payoffs $\rightarrow$ end.

The principal seeks to maximize her profits from innovation by optimally investing in the agent's Autonomy Support. We solve the maximization problem via backward induction because profits are time-interdependent through the investment choices.

2.4.2 Solving the model

Solving for period $t = 2$

While the principal takes future effects of her own current investment into account, the agent is backward looking. He considers each period separately and only takes past investments in his available Autonomy Support into account. In period $t=2$, he maximizes $U_A(i, 2)$ from Equation (2.2) by his choice of innovative activity $i$, resulting in

$$i_2 = \left(\delta^2\bar{s} + \delta s_{a,1} + s_{a,2}\right)^{\frac{1}{2}} - 1 \qquad (2.5)$$

The agent's innovative effort $i_2$ depends on his initial Autonomy Support level, the discounted previous Autonomy Support investment, and the current one by the principal. The agent only becomes active if there is enough Autonomy Support overall. If previous Autonomy Support is low or heavily discounted such that $\delta^2\bar{s} + \delta s_{a,1} = 0$, no innovative activity takes place unless the principal invests sufficiently in the current period $t=2$. If the previous Autonomy Support is high and/or only mildly discounted, innovative effort is exerted even if the principal does not invest in the current period at all, with $s_{a,2} = 0$.
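The agent's best response in Equation (2.5) follows from the first-order condition $\frac{v_A}{(1+i)^2} = \frac{1}{\bar{s}_2}$. As a sanity check, the following sketch (all parameter values are illustrative, not taken from the text) confirms that a numerical grid maximization of $U_A(i, 2)$ reproduces the closed form:

```python
# Sanity check of the agent's period-2 effort choice (Equation 2.5).
# All parameter values below are illustrative, not taken from the text.
v_A = 1.0                      # agent's payoff from innovation, normalized to one
delta, s_bar = 0.8, 0.5        # discount factor and start level
s_a1, s_a2 = 1.0, 2.0          # principal's investments in t = 1 and t = 2

# Total Autonomy Support available in period 2
s_total = delta**2 * s_bar + delta * s_a1 + s_a2

def utility(i):
    """U_A(i, 2) = v_A * i/(1+i) - i/s_total, as in Equation (2.2)."""
    return v_A * i / (1 + i) - i / s_total

i_closed = s_total**0.5 - 1                # closed form from Equation (2.5)
grid = [k / 10000 for k in range(50001)]   # effort grid on [0, 5]
i_numeric = max(grid, key=utility)         # grid maximizer of U_A(i, 2)

print(round(i_closed, 3), round(i_numeric, 3))  # should agree to ~3 decimals
```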

The principal maximizes her second-period profit in Equation (2.4) through her choice of the current investment $s_{a,2}$, which results in

$$s_{a,2} = \left(\frac{v_P}{2\alpha}\right)^{\frac{2}{3}} - \delta^2\bar{s} - \delta s_{a,1} \ge 0 \qquad (2.6)$$

$s_{a,2}$ increases in the profit $v_P$ from successful innovation and decreases in the marginal cost parameter $\alpha$ for providing Autonomy Support. The benefit-cost ratio $\frac{v_P}{\alpha}$ also comes into play when determining whether the principal invests at all in the second period. Only if the agent's initial Autonomy Support level and previous investment are low or heavily discounted, such that the inherited stock does not exceed the threshold $\left(\frac{v_P}{2\alpha}\right)^{\frac{2}{3}}$ implied by the benefit-cost ratio, does the principal choose a positive investment $s_{a,2} > 0$. Otherwise, the principal does not need to replenish the stock of Autonomy Support and chooses $s_{a,2} = 0$.

The principal's profit in period $t=2$ when she engages in current Autonomy Support investments $s_{a,2} > 0$ is

$$\Pi_{P,2}(s_{a,2} \mid s_{a,2} > 0) = v_P\,\frac{\left(\frac{v_P}{2\alpha}\right)^{\frac{1}{3}} - 1}{\left(\frac{v_P}{2\alpha}\right)^{\frac{1}{3}}} - \alpha\left(\frac{v_P}{2\alpha}\right)^{\frac{2}{3}} + \alpha\left(\delta^2\bar{s} + \delta s_{a,1}\right)$$

$$= v_P - 3\alpha\left(\frac{v_P}{2\alpha}\right)^{\frac{2}{3}} + \alpha\left(\delta^2\bar{s} + \delta s_{a,1}\right)$$

and for her optimal choice of $s_{a,2} = 0$

$$\Pi_{P,2}(s_{a,2} \mid s_{a,2} = 0) = v_P - \frac{v_P}{\left(\delta^2\bar{s} + \delta s_{a,1}\right)^{\frac{1}{2}}} > 0$$

Solving for period $t = 1$

The agent maximizes his utility $U_A(i, 1)$ of period $t=1$ in Equation (2.1) by choosing

$$i_1 = \left(\delta\bar{s} + s_{a,1}\right)^{\frac{1}{2}} - 1 \ge 0$$

The agent’s innovation effort in period t=1 increases in both his initial Autonomy Support level and the principal’s investment in the current period.

The principal maximizes her profit in Equation (2.3), taking the future impact of her investments in period $t=1$ on the subsequent period into account. Her optimal investment choice is

$$s_{a,1} = \left(\frac{v_P}{2\alpha(1-\beta\delta)}\right)^{\frac{2}{3}} - \delta\bar{s} \ge 0 \qquad (2.7)$$

Her investment increases when the benefit-cost ratio increases, and increases in how strongly she values future periods, as described by $\beta$. The effect of the agent's discount parameter $\delta$ is ambiguous and will be part of the discussion on the different investment patterns. Contingent on the parameter constellation, the principal may or may not invest in Autonomy Support in period $t=1$.
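Equation (2.7) reflects the forward-looking objective: period-1 profit plus $\beta$ times the period-2 continuation profit, whose derivative with respect to $s_{a,1}$ is $\alpha\delta$, so the effective marginal cost becomes $\alpha(1-\beta\delta)$. A sketch with illustrative parameters (not from the text) can confirm this numerically, assuming an interior period-2 investment:

```python
# Sanity check of the forward-looking period-1 investment rule (Equation 2.7),
# assuming an interior period-2 investment s_a2 > 0. Parameters illustrative.
v_P, alpha, beta, delta, s_bar = 4.0, 1.0, 0.9, 0.5, 0.4

def objective(s_a1):
    """Period-1 profit plus beta times the period-2 continuation profit."""
    i1 = max((delta * s_bar + s_a1)**0.5 - 1, 0.0)   # agent's period-1 effort
    pi1 = v_P * i1 / (1 + i1) - alpha * s_a1
    # Continuation profit when s_a2 > 0 (see the period-2 solution):
    pi2 = (v_P - 3 * alpha * (v_P / (2 * alpha))**(2 / 3)
           + alpha * (delta**2 * s_bar + delta * s_a1))
    return pi1 + beta * pi2

# Closed form from Equation (2.7), truncated at zero
s_closed = max((v_P / (2 * alpha * (1 - beta * delta)))**(2 / 3) - delta * s_bar, 0.0)

grid = [k / 10000 for k in range(50001)]             # investment grid on [0, 5]
s_numeric = max(grid, key=objective)

print(round(s_closed, 3), round(s_numeric, 3))  # should agree to ~3 decimals
```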

The resulting profit in period $t=1$, if she invests in Autonomy Support in both periods such that $s_{a,1} > 0$ and $s_{a,2} > 0$, is

$$\Pi_{P,1}(s_{a,1} > 0) = v_P - \frac{v_P}{\left(\frac{v_P}{2\alpha(1-\beta\delta)}\right)^{\frac{1}{3}}} - \alpha\left[\left(\frac{v_P}{2\alpha(1-\beta\delta)}\right)^{\frac{2}{3}} - \delta\bar{s}\right] + \beta\left[v_P - 3\alpha\left(\frac{v_P}{2\alpha}\right)^{\frac{2}{3}}\right] + \beta\alpha\delta\left(\frac{v_P}{2\alpha(1-\beta\delta)}\right)^{\frac{2}{3}} \qquad (2.8)$$

and for $s_{a,1} = 0$

$$\Pi_1(s_{a,1} = 0) = v_P - v_P\left(\frac{1}{\delta\bar{s}}\right)^{\frac{1}{2}} + \beta\left[v_P - v_P\left(\frac{1}{\delta^2\bar{s}}\right)^{\frac{1}{2}}\right] \qquad (2.9)$$
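Since Equation (2.8) is obtained by substituting the optimal $s_{a,1}$ into the period-1 objective, the closed form can be cross-checked against direct evaluation at the optimum. A sketch with illustrative parameters (not taken from the text):

```python
# Cross-check of the closed-form profit in Equation (2.8): direct evaluation
# of the period-1 objective at the optimum should match it exactly.
# Parameters are illustrative and satisfy v_P/alpha > 2.
v_P, alpha, beta, delta, s_bar = 4.0, 1.0, 0.9, 0.5, 0.4

A = (v_P / (2 * alpha * (1 - beta * delta)))**(2 / 3)  # optimal total support in t=1
s_a1 = A - delta * s_bar                               # Equation (2.7), assumed > 0

# Direct evaluation: Pi_P1 = v_P*i1/(1+i1) - alpha*s_a1 + beta*Pi_P2
i1 = A**0.5 - 1
pi2 = (v_P - 3 * alpha * (v_P / (2 * alpha))**(2 / 3)
       + alpha * (delta**2 * s_bar + delta * s_a1))
direct = v_P * i1 / (1 + i1) - alpha * s_a1 + beta * pi2

# Closed form as written in Equation (2.8); note (v_P/(2a(1-bd)))^(1/3) = A^(1/2)
closed = (v_P
          - v_P / A**0.5
          - alpha * (A - delta * s_bar)
          + beta * (v_P - 3 * alpha * (v_P / (2 * alpha))**(2 / 3))
          + beta * alpha * delta * A)

print(abs(direct - closed) < 1e-9)  # prints True
```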
