
© Springer-Verlag 2003

Organizational structure and innovative activity

Dirk Sliwka

Betriebswirtschaftliche Abteilung, University of Bonn, Adenauerallee 24-42, 53113 Bonn, Germany (e-mail: dirk.sliwka@uni-bonn.de)

Received: June 2001 / Accepted: May 2002

Abstract. A model is analyzed in which agents exert effort to create innovations within an organization. When payments are infeasible, the decision on the implementation of a proposal is shown to be made by simple monotonic decision rules. Organizational structure is then determined by a collection of decision rules. A trade-off arises between the use of information and the incentives created by a rule. If the former dominates, it is optimal to install a hierarchy. Otherwise, decentralization by granting autonomy to innovators may be better. Requiring unanimous decision-making is optimal if a strong filtering of proposals is necessary.

Key words: Decision rules, delegation, authority, incentives, organizational structure

JEL classification: D23, D71, L22

1. Introduction

The way in which an organization deals with proposals for innovations made by its members is an important part of organizational policy. On the one hand, an organization only wants to adopt proposals that improve organizational performance. On the other hand, it should try to encourage initiative by subordinates to come up with new ideas and therefore not impose too narrow requirements for adopting innovations. But how should the selection of proposals proceed to achieve these two goals? These are the questions we want to analyze more closely in this paper.

I am grateful to Anke Kessler, Matthias Kräkel, Sabine Lindenthal, Patrick Schmitz, Urs Schweizer and Thomas Tröger as well as two anonymous referees for helpful comments and discussions. I am especially grateful to one anonymous referee who made numerous extremely useful suggestions.


To achieve the first goal – an optimal filtering of innovative proposals – a centralized structure seems reasonable. People at the top of an organization will usually have preferences more closely aligned with the organizational goals. For instance, an owner-manager of a firm may select only those proposals that increase the value of the firm – at least if a lack of expertise is not an issue – whereas an employee's choice may also be guided by private interests. One can therefore expect that if the implementation of innovations needs the approval of superiors, project selection should achieve better results.

But such centralized decision making may discourage the initiative of subordinates to come up with innovative ideas. About a large British enterprise that apparently had a structural problem in generating innovations, the following verse has been written: "Along this tree from foot to crown, ideas flow up and vetoes down".1 It is not only in recent years that people have tended to believe that more decentralized structures may encourage innovation. Chandler (1977, p. 181) summarizes for instance the arguments of Charles E. Perkins, who adopted a decentralized structure as a manager of the Chicago, Burlington and Quincy railroad in the 1880s, and claimed that "such an organization encouraged initiative and independent thought".

In this paper, we want to analyze the conflicting goals in a formal model and illustrate the trade-offs associated with the problem of designing optimal organizational structures and the impact on innovative activity. In our model, agents individually exert effort to create project ideas that affect the utility of the owner as well as the other members of the organization. Once an agent has made an innovation and disclosed his idea, his fellow members of the organization learn how this proposal affects their own utility and announce their opinion on whether this proposal should be implemented. A decision rule then simply determines the probability that a project is implemented conditional on the messages of the affected members of the organization. As a first step, we show that optimal mechanisms are simple monotonic voting rules if message-contingent payments are infeasible and the agents are required to have weakly dominant strategies. Each agent only announces whether he agrees or disagrees with the project or is indifferent on whether the project is to be implemented. The decision rule is simply a function of the agents' votes. We will determine optimal rules for proposals of each agent separately.

A key idea is that the collection of rules for all agents determines organizational structure. To understand this idea just note that a strict hierarchy is one example of a collection of very simple monotonic decision rules. It is simply the special case in which the implementation of proposals of any agent only depends on the vote of a single agent, who is then their superior. Hence, the roles of subordinate and superior are endogenously determined in the model in that case. A different collection of decision rules is a flat structure, in which each agent is allowed to decide autonomously on whether to implement his own proposals. Another one is a unanimity structure in which the implementation of a proposal can only go forward if all agents agree. We will see that in many cases such clear-cut structures naturally arise when we search for optimal voting rules. When designing a decision rule, the owner of the organization trades off the two goals mentioned above: she tries to achieve an optimal filtering of proposals such that only those yielding a positive expected benefit for her are selected. But she also should take into account the motivational consequences of such a rule.

1 Cited by Child (1984, p. 276).

As a benchmark case we first assume that the principal directly learns the consequences on profit of a proposal made by any agent. An optimal decision rule will be such that the principal will either retain decision making authority herself or delegate it to the subordinate with a certain probability. A key variable in this context is the incentive elasticity of effort spent by an agent on creating innovations – that is, the extent to which an increased expected benefit of an innovation for an agent increases the effort choice and, hence, the frequency of innovations made by him. We obtain the intuitive result that high incentive elasticities lead to more delegation: the principal then commits to block proposals by the subordinate less often.

Then we drop the assumption that the principal is able to see the impact of an innovation on her own payoffs directly. This is reasonable if decision making within large organizations is analyzed, where the owner is not able to see the consequences of each decision made in any department. Centralization is hence infeasible. But she can give an agent whose utility is most highly correlated with her own more power in decisions on proposals made by other agents. A two agent model is analyzed and we describe the properties of optimal decision rules in this context.

We then continue by analyzing optimal organizational structures defined by the collection of optimal decision rules. We show that for symmetric distributions of a project's payoffs it is optimal to set up a hierarchy if the incentive elasticities of effort are sufficiently small. That is, one of the agents gets full autonomy and proposals made by the other agent are only accepted if his colleague agrees. The first is then the latter's superior. At the other extreme, complete decentralization is optimal if the incentive elasticity is large enough. Then we explore a simple payoff distribution in more detail. In this case we obtain comparative statics results concerning the correlations between the principal's and the agents' utilities. We get the surprising result that sometimes a higher correlation of one agent's utility with the principal's may lead to a higher autonomy of the other agent. Finally, the optimality of the unanimity rule is explored. Although it will turn out that requiring unanimous decisions weakens incentives to come up with innovations, we will show that such a structure is beneficial if a strong filtering of proposals is necessary.

Among the most closely related papers to our work is Aghion and Tirole (1997).

We depart from their setup in several respects. First, in our model, agents independently exert effort to come up with innovations, whereas in Aghion-Tirole both principal and agent exert effort to get to know the same thing, i.e. the payoff structure of a given project list. Whereas Aghion and Tirole restrict the class of feasible contracts to those that either give the principal or the agent formal authority to select a project2, we allow for more general procedures – of course with the still strong restriction of not allowing payments conditional on messages. This allows us to give conditions for the optimality of decision structures other than simple

2 Tirole (1999) and Aghion and Tirole’s working paper (1994) contain possible justifications for restricting contracting to simple allocation of authority.


allocations of authority, including the opinion of third persons. In this regard, we can explore the design of decision rules if the principal herself is uninformed.

As a consequence, we can analyze how the information gained through the opinions of different agents should be used in a decision rule and how this problem interacts with the incentive problem.

Baker, Gibbons and Murphy (1999) discuss delegation of authority in a repeated game setting. They do not seek optimal decision rules but assume that the principal always retains formal authority. They show that in an infinitely repeated game the principal will be able to promise credibly to allow the agent the selection of a project that is bad for herself, as this promise will increase incentives. Whereas those models consider the impact of the allocation of decision rights within an organization on incentives for innovations, Aghion and Tirole (1994) study the impact of the allocation of property rights on incentives for innovations between firms. Sah and Stiglitz (1986) analyze the relative advantages of hierarchies and polyarchies in a model where incentive problems are ignored and the focus is set on the importance of judgement errors.3

This paper is organized as follows. Section 2 presents the model. In the following Sect. 3 we show the optimality of monotonic voting rules. We apply this to simple delegation problems when the principal is informed about a proposal's impact on her own utility in Sect. 4. Optimal decision making when the principal is uninformed is explored in Sect. 5. Section 6 concludes.

2. The model

In the following we describe the general framework we want to adapt later on in this paper to study different special cases. A principal P employs a group of agents A_i, where i = 1, …, n. The principal as well as the agents are assumed to be risk neutral.

The agents' task is to create project ideas. First, one agent is randomly drawn who gets the chance of making an innovation. Alternatively it can be assumed that all agents in a random order get the chance to come up with proposals from time to time and that the decision mechanism on one agent's proposal entails no memory, which seems an appropriate assumption to describe real organizational structures.

If an agent gets this opportunity, he chooses effort e_i ∈ [0,1] as a hidden action, which is the probability that he finds a new project. The agent incurs costs c_i(e_i) for exerting effort e_i. To obtain comparative statics results we use the following approximation for the agents' cost functions4:

$$c_i(e_i) = e_i^{\frac{\varepsilon_i+1}{\varepsilon_i}}.$$

3 A large branch of the theoretical literature has studied the optimality of multi-layer hierarchies focussing on the minimization of information processing and communication costs. See for instance Radner (1992) for an overview. A more recent contribution is Bolton and Dewatripont (1994). Three-layer hierarchies have been analyzed from a quite different perspective focussing on problems of delegated supervision and collusion, for instance by Tirole (1992) and the subsequent literature.

4 To use this approximation we of course have to ensure that the benefits from making innovations are sufficiently small, as efforts are probabilities.


Here ε_i > 0 is the incentive elasticity of effort; if the agent expects utility b from creating a proposal, increasing b by one percent increases his effort level by ε_i percent.
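As a quick numerical sketch of this elasticity property (the code and all numbers are purely illustrative, not part of the model), maximizing b·e − c_i(e) under the cost function above yields the closed form e*(b) = (ε b/(ε+1))^ε, and a finite-difference check confirms that the elasticity of effort with respect to b is indeed ε:

```python
# Numerical check of the incentive elasticity of effort (illustrative only).
# Cost function: c(e) = e**((eps + 1)/eps); the agent maximizes b*e - c(e).
# The first-order condition b = ((eps + 1)/eps) * e**(1/eps) yields
# e*(b) = (eps * b / (eps + 1))**eps.

def optimal_effort(b: float, eps: float) -> float:
    """Effort maximizing b*e - e**((eps + 1)/eps)."""
    return (eps * b / (eps + 1)) ** eps

eps, b = 2.0, 0.1  # a small b keeps effort a valid probability
e = optimal_effort(b, eps)

# Elasticity (de/db)*(b/e), approximated by a finite difference.
h = 1e-7
elasticity = (optimal_effort(b + h, eps) - e) / h * (b / e)

print(round(e, 6))           # optimal effort
print(round(elasticity, 2))  # approximately eps = 2.0
```

The same closed form reappears below as the incentive constraint derived from the principal's program.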

A project P_i ∈ R^{n+1} now constitutes simply a vector of changes in the utility of the principal as well as the agents if the project is carried out:

$$b^i = \left(b_0^i,\, b_1^i,\, \dots,\, b_n^i\right)'.$$

The b_j^i are drawn from a distribution initially known to all agents. The principal's benefit is denoted by b_0^i.

We make the important assumption that wages are exogenous to our model. This implies that no performance- or message-contingent payments can be made. We concentrate solely on finding optimal decisions depending on the agents' opinions on a certain project. Furthermore, the agents are assumed to receive enough utility from the exogenous wages such that, although realizations of b_j^i may well be negative, the agent still has an interest to work for the firm.

After agent i has made an innovation P_i he observes his private benefit b_i^i. He then decides whether to make his idea public. If he does so, the other agents learn their b_j^i. We will later analyze two cases: the principal may either also learn the project's effect on her utility b_0^i at this stage or she may not. After the agents have learned the impact of the project on their private utility, a decision is made on whether the project is carried out. The probability that a project proposed by agent i is carried out is denoted by η^i. We will look for optimal decision rules determining this η^i. We intentionally consider decision rules that are conditional on the proposer's identity, thereby allowing that proposals of different agents are treated differently. This is a natural feature of real organizations, as there are always some agents with high individual autonomy but others who need the consent of their superior to carry on with a project. Because no payments can be made contingent on messages sent by the agents, an optimal decision mechanism is then a mapping from the space of messages sent by the agents into the interval [0,1].

The timing of the model is as follows:

t = 0: decision mechanism chosen
t = 1: one agent i drawn
t = 2: agent i exerts effort e_i
t = 3: innovation occurs with probability e_i
t = 4: A_i decides on disclosure
t = 5: decision made

3. The optimality of simple decision rules

First, we look at optimal decision mechanisms at stage 5, when utility changes are known to the agents. Because of the assumption that no payments can be made, the decision mechanism gives just the probability that a project proposed by agent i is implemented as a function of the messages sent by all agents. The revelation principle tells us that we can focus on incentive compatible mechanisms where the members reveal their private information. For everyone, a message is now his private utility change b_j^i. If the principal receives information on her own payoff, we get a function for each of the agents' proposals,

$$\eta^i : \mathbb{R}^{n+1} \longrightarrow [0,1].$$

We consider dominant strategy mechanisms, such that each agent's optimal announcement is independent of the announcements of the other agents. The truth-telling5 conditions for the agents are then given by:

$$b_j^i\,\eta^i\!\left(b_j^i, b_{-j}^i\right) \;\ge\; b_j^i\,\eta^i\!\left(\tilde b_j^i, b_{-j}^i\right) \quad \forall\, b_j^i,\, \tilde b_j^i,\, b_{-j}^i. \qquad (1)$$

As a consequence we obtain that an optimal mechanism has a very simple structure, and therefore shall henceforth be called a 'monotonic voting rule':

Lemma 1. A dominant strategy decision mechanism without payments that induces truth telling can be replaced without loss of generality by a monotonic voting rule where the agents only report whether the project will have a positive, a negative or no impact on their private utility. Reporting a positive private utility weakly increases the probability η^i that the proposal is carried out, reporting a negative one weakly decreases it.

Proof. Take any two values b, b' > 0. By applying condition (1) twice (for b_j^i = b, b̃_j^i = b' and for b_j^i = b', b̃_j^i = b) we get that

$$\eta^i\!\left(b, b_{-j}^i\right) = \eta^i\!\left(b', b_{-j}^i\right) \quad \forall\, b_{-j}^i.$$

Hence, all reported strictly positive private benefits must affect the probability of implementation in the same way; otherwise the agent would lie about the realization. The same must be true for any b, b' < 0. By proceeding in the same way for b > 0 > b' (or b' > 0 > b) we get that the truth telling condition (1) simply requires monotonicity:

$$\eta^i\!\left(b, b_{-j}^i\right) \ge \eta^i\!\left(b', b_{-j}^i\right) \quad \forall\, b_{-j}^i,\; b > 0 > b'.$$

If an agent reports a positive benefit this must lead to a weakly higher implementation probability than a negative report.

5 Note that this condition corresponds to the incentive compatibility conditions for implementation in dominant strategies in mechanism design, with the difference that here payments conditional on messages are excluded. In the following we will refer to those conditions concerning the truthful revelation of private information as truth telling conditions, in contrast to the conditions assuring appropriate effort choice by the agents, which we will call incentive compatibility conditions.


Any dominant strategy mechanism without payments can hence be replaced by a monotonic voting rule, determining the probability that a project is carried out as a function of a vector of those messages. Saying "yes" weakly increases the probability that the project is implemented and saying "no" weakly reduces it. We can restrict ourselves to mechanisms where each agent can just send one out of three messages, which we interpret as "yes", "no" or "abstention":

$$s_j \in \{y, n, a\}.$$

For simplicity we denote the decision rule by η^i(s_j^i, s_{-j}^i), and truth-telling then simply requires that

$$\eta^i(y, s_{-j}) \ge \eta^i(a, s_{-j}) \ge \eta^i(n, s_{-j}) \quad \forall\, i, j, s_{-j}.$$

For ease of notation we define a new random variable S^i ∈ {y, n, a}^{n+1} which is induced by the private benefits random variable b.6 Hence

$$s_j^i = \begin{cases} y & \text{for } b_j^i > 0 \\ a & \text{for } b_j^i = 0 \\ n & \text{for } b_j^i < 0. \end{cases}$$

Thus η^i(s) is the probability that a project proposed by agent i is carried out given realization S^i = s. We will call a specific realization of S^i a voting constellation. The distribution of S^i is the distribution of all voting constellations.

Going back to stage t = 4 we observe that an agent only proposes a project when he expects a non-negative private benefit b_i^i ≥ 0. Since he gets no payment, an agent cannot be compensated for a project yielding a utility loss, so he will never propose it. Hence, we have that η^i(n, s_{-i}) = 0 for all values of s_{-i}. We define the following:

$$p^i(s) := P\!\left\{S^i = s\right\}, \qquad \beta_j^i(s) := E\!\left[b_j^i \mid S^i = s\right].$$

Hence, p^i(s) is the probability that a voting constellation s occurs given that a project has been proposed by agent i, and β_j^i(s) is the conditional expectation of agent j's benefit in such a constellation. These expressions can be calculated for a given distribution of the vector of utility changes b^i. Agent j's (for j = 0 the principal's) ex-ante expected benefit of a project proposed by agent i given a decision structure η^i is now:

$$\sum_s p^i(s)\,\eta^i(s)\,\beta_j^i(s). \qquad (2)$$

With probability p^i(s) constellation s occurs; with probability η^i(s) the project will then be implemented, giving expected utility β_j^i(s) to the agent.
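Expression (2) is a plain sum over voting constellations. A minimal sketch with hypothetical numbers (two voters, the proposer always voting "y") makes the computation concrete:

```python
# Ex-ante expected benefit, expression (2), as a sum over voting
# constellations. Hypothetical two-voter example: the proposer always
# votes "y", the second voter says "y" or "n".
# p:    probability of each constellation, given that a project is proposed
# eta:  implementation probability assigned by the decision rule
# beta: the evaluated agent's conditional expected benefit

p    = {("y", "y"): 0.6, ("y", "n"): 0.4}
eta  = {("y", "y"): 1.0, ("y", "n"): 0.5}   # consensus accepted, veto half the time
beta = {("y", "y"): 0.3, ("y", "n"): -0.2}

expected_benefit = sum(p[s] * eta[s] * beta[s] for s in p)
print(round(expected_benefit, 2))  # 0.6*1.0*0.3 + 0.4*0.5*(-0.2) = 0.14
```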

6 Or S^i ∈ {y, n, a}^n for the case that she does not learn her utility directly.


We can now separately look for the optimal decision rule for projects proposed by each agent – this is of course due to our assumption that one agent is drawn in the beginning who gets the opportunity to invent a project. Hence, there is at most one proposal at a time which can be implemented. The rules obtained in this way for each agent together determine the decision structure adopted, as will become clear in the following sections. The principal's optimization problem when designing an optimal rule for proposals made by agent i is:

$$\max_{e_i,\,\eta^i(\cdot)} \; e_i \cdot \sum_s p^i(s)\,\eta^i(s)\,\beta_0^i(s) \qquad (3)$$

$$\text{s.t.}\quad e_i = \arg\max_{\hat e_i} \; \hat e_i \cdot \sum_s p^i(s)\,\eta^i(s)\,\beta_i^i(s) - c_i(\hat e_i) \qquad (4)$$

$$1 \ge \eta^i(y, s_{-j}) \ge \eta^i(a, s_{-j}) \ge \eta^i(n, s_{-j}) \ge 0 \quad \forall\, j, s_{-j} \qquad (5)$$

$$\eta^i(n, s_{-i}) = 0 \quad \forall\, s_{-i}. \qquad (6)$$

Constraint (4) is the incentive compatibility constraint, (5) is the truth telling constraint, and constraint (6) is due to the fact that an agent will never propose a project he dislikes. Since the agent's objective is a strictly concave function in e_i, we can replace (4) by its first order condition. Applying the specification of the cost function, the incentive constraint (4) is equivalent to

$$e_i = \left[\frac{\varepsilon_i}{\varepsilon_i+1} \sum_s p^i(s)\,\eta^i(s)\,\beta_i^i(s)\right]^{\varepsilon_i}.$$

Agent i's effort clearly increases in his incentive elasticity ε_i. It also increases in the size of η^i(s): the higher the probability that a project he proposes is accepted given any voting constellation, the higher his motivation to work hard and create innovative ideas.

The principal's program can then be simplified to

$$\max_{\eta^i(\cdot)} \; \left[\frac{\varepsilon_i}{\varepsilon_i+1} \sum_s p^i(s)\,\eta^i(s)\,\beta_i^i(s)\right]^{\varepsilon_i} \sum_s p^i(s)\,\eta^i(s)\,\beta_0^i(s) \qquad (7)$$

$$\text{s.t.}\quad 1 \ge \eta^i(y, s_{-j}) \ge \eta^i(a, s_{-j}) \ge \eta^i(n, s_{-j}) \ge 0 \quad \forall\, j, s_{-j}$$

$$\eta^i(n, s_{-i}) = 0 \quad \forall\, s_{-i}.$$

If the effort incentive problem could be ignored and the principal perfectly learned her own profit, this of course would simply require that η^i(s) = 1 if and only if s_0^i = y and s_i^i ≠ n. The truth-telling condition clearly would impose no restriction in this case, as only her own vote matters.

When designing the optimal decision rule the principal trades off the optimal choice concerning her own private utility with the proposing agent's incentives. Increasing η^i(s) away from the optimal decision rule when incentives do not matter may turn out to be beneficial if thereby the agents' initiative and hence the rate of innovation is raised.
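This trade-off can be illustrated numerically. The sketch below (all parameter values hypothetical) substitutes the agent's effort choice into the principal's objective, in the spirit of program (7), for a single disagreement probability η_D and searches a grid; with these numbers the optimum is interior:

```python
# Illustrative grid search over the agent's autonomy eta_D in the spirit of
# program (7), for one proposing agent with two voting constellations
# (consensus and disagreement). All parameter values are hypothetical.

eps = 2.0                 # incentive elasticity of effort
p_c, p_d = 0.6, 0.4       # prob. of consensus / disagreement constellation
b0_c, b0_d = 0.10, -0.10  # principal's conditional payoffs beta_0
bi_c, bi_d = 0.08, 0.08   # proposing agent's conditional payoffs beta_i

def principal_payoff(eta_d: float, eta_c: float = 1.0) -> float:
    """Effort level induced by the rule, times expected per-project payoff."""
    agent_benefit = p_c * eta_c * bi_c + p_d * eta_d * bi_d
    effort = (eps / (eps + 1) * agent_benefit) ** eps
    return effort * (p_c * eta_c * b0_c + p_d * eta_d * b0_d)

grid = [k / 1000 for k in range(1001)]
best = max(grid, key=principal_payoff)
print(best)  # 0.5: accept half of the disputed proposals
```

Raising η_D beyond the optimum boosts effort further but destroys too much value per disputed project; lowering it filters better but dampens innovation.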


When examining the optimal design of such decision rules, in many cases it will turn out to be optimal to choose randomly whether a project should be implemented. That may seem strange at first glance. But we believe that such randomizing may be implemented by simple practical mechanisms, giving it a natural interpretation. Think for instance of a superior who reviews a subordinate's decisions from time to time. If he detects a decision he dislikes, he will block it. The autonomy of his agent is then determined by the frequency of such a review. Aghion and Tirole (1997) for instance argue that the probability of being overruled crucially depends on the span of control. A large span of control by a superior with limited time to interfere with his subordinates' decisions increases the autonomy of his subordinates, as the probability that their decision is overruled is reduced.

4. Delegation when the principal is informed

As a starting point we consider a very simple setting in which the principal perfectly learns her own benefit b_0 from a project once it is proposed. Since, as we assumed, participation constraints do not matter here, the optimal decision rule only depends on the principal's vote (s_0) and that of the proposing agent (s_i):7

$$\eta^i(s) = \eta^i(s_0, s_i).$$

From now on we will ignore the possibility of an agent being indifferent on whether the project should be carried out (b_j^i = 0). If for instance private benefits are drawn from a continuous distribution, those are zero-probability events and hence do not matter for the optimal solution.8

Recall that an agent only proposes projects giving him a non-negative benefit.

Hence, we have η^i(s_0, n) = 0. Clearly projects on which principal and agent agree should always be carried out, hence η^i(y, y) = 1. What remains to be determined is the value of η^i(n, y), which we will for simplicity denote by η_i. That is the probability that a project is adopted which has been proposed by agent i, but which the principal would prefer to block. The calculation of η_i hence determines the optimal extent of delegation and hence agent i's autonomy. For η_i = 1 the decision is delegated entirely to the agent: if he wants the project to be carried out, it is carried out. The principal's opinion does not matter at all. For η_i = 0 however the principal retains the decision-making authority; she can veto any project she dislikes. The higher agent i's autonomy η_i, the less frequently the principal interferes and therefore, we expect, the higher is also the agent's motivation to invent new projects.

Solving program (7) for this case we get the following result:

Proposition 1. If the principal perfectly learns how a proposed project affects her utility, an agent's autonomy is weakly increasing in his incentive elasticity ε_i. For ε_i sufficiently small the principal will retain complete authority. The probability

7 Correspondingly we will also write p^i(s) = p^i(s_0, s_i) and β_j^i(s) = β_j^i(s_0, s_i).

8 In the examples considered later we will look at discrete distributions, but there a private benefit of zero will not occur either.


that a proposal made by agent i is accepted although the principal disagrees is given by

$$\hat\eta_i = \frac{\varepsilon_i}{\varepsilon_i+1}\,\frac{p^i(y,y)}{p^i(n,y)}\,\frac{\beta_0^i(y,y)}{-\beta_0^i(n,y)} \;-\; \frac{1}{\varepsilon_i+1}\,\frac{p^i(y,y)}{p^i(n,y)}\,\frac{\beta_i^i(y,y)}{\beta_i^i(n,y)} \qquad (8)$$

if η̂_i ∈ [0,1]. For η̂_i < 0 it is optimal to set η_i = 0, for η̂_i > 1 to choose η_i = 1. The agent's autonomy increases (decreases) in the principal's (agent's) expected payoff in case of a consensus. It decreases in the principal's utility loss in case of a disagreement and increases in the agent's utility gain in this case.

Proof. See Appendix.

Recall that β_0^i(n, y) < 0. To understand that the agent's autonomy increases in the principal's payoff in case of a consensus β_0^i(y, y), consider the following: the more the principal expects to receive in this case, the more valuable innovations are to her. But the probability of an innovation can only be raised by granting more autonomy to the agent. Conversely, the principal will increase the frequency of intervention if her damage in case of a disagreement β_0^i(n, y) worsens. The agent's autonomy is reduced by a higher payoff in case the principal agrees with him, β_i^i(y, y). Then it is less necessary to motivate the agent by accepting projects the principal dislikes. But his autonomy is higher for higher values of β_i^i(n, y), as a veto by the principal then reduces the agent's expected private value of an innovation to a higher degree. The agent's autonomy increases in the incentive elasticity of effort. The larger ε_i, the stronger is the incentive effect of autonomy: a higher autonomy increases the agent's expected private benefit from an innovation, and for larger values of ε_i this has a stronger effect on his incentives to come up with innovations.

Finally note that if the agent has some autonomy, i.e. η_i > 0, then his autonomy increases in the probability of consensus and decreases in the probability of a disagreement.

To illustrate this result further consider the following simple example:

Example 1: Binary distributions

A project can either be good or bad for the principal as well as for the agent. The principal's utility from a good project is d_0 and her utility from a bad one is −d_0. Similarly the agent receives either d_i or −d_i. We assume that the principal's utility is determined as follows:

$$b_0 = \begin{cases} d_0 & \text{with prob. } \mu \\ -d_0 & \text{with prob. } 1-\mu. \end{cases}$$

Given that the project is good (bad) for the principal, we assume that it is also good (bad) for the agent with probability κ. Hence

$$\kappa = P\{b_i = d_i \mid b_0 = d_0\}.$$


Equation (8) is then simply

$$\eta_i = \frac{\varepsilon_i}{\varepsilon_i+1}\,\frac{\mu\kappa}{(1-\mu)(1-\kappa)} - \frac{1}{\varepsilon_i+1}\,\frac{\mu\kappa}{(1-\mu)(1-\kappa)} = \frac{\varepsilon_i-1}{\varepsilon_i+1}\,\frac{\mu\kappa}{(1-\mu)(1-\kappa)}.$$

For values of the incentive elasticity ε_i < 1, increasing the possible "reward" from creating an innovation by one percent increases the innovative activity of agent i by less than one percent. Hence, the agent gets no autonomy irrespective of the values of κ and μ. There is no point for the principal in giving up veto power to strengthen the agent's incentives. Sacrificing some of her own utility once a project is found by increasing η_i does not raise the rate of innovation enough to compensate for this loss.

For ε_i > 1 however the agent gets some autonomy. The agent's effort supply is elastic in the expected reward: increasing the prospective utility gain for the agent by one percent increases the rate of innovation by more than one percent. In this case the frequency of intervention decreases in κ and μ. The higher the correlation between the principal's and the agent's benefits of a project, the higher κ and thus the higher the agent's autonomy. Furthermore, higher values of μ make the creation of innovations more attractive and hence the agent's autonomy should increase in μ.
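A compact numerical sketch of the binary example (parameter values hypothetical): clamping the expression above to [0, 1] reproduces the three regimes, no autonomy for inelastic effort, then partial and finally full autonomy as ε_i or κ grows:

```python
# Agent autonomy in the binary example:
# eta_hat = (eps - 1)/(eps + 1) * mu*kappa / ((1 - mu)*(1 - kappa)),
# clamped to the unit interval. All parameter values are hypothetical.

def autonomy(eps: float, mu: float, kappa: float) -> float:
    """Optimal autonomy eta_i in the binary example, clamped to [0, 1]."""
    eta_hat = (eps - 1) / (eps + 1) * (mu * kappa) / ((1 - mu) * (1 - kappa))
    return min(max(eta_hat, 0.0), 1.0)

print(autonomy(0.8, 0.5, 0.6))            # 0.0: inelastic effort, full veto power
print(round(autonomy(2.0, 0.5, 0.6), 3))  # 0.5: partial delegation
print(autonomy(2.0, 0.5, 0.8))            # 1.0: high correlation, full autonomy
```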

5. An uninformed principal

In large organizations the owner or central designer can rarely monitor and view the direct consequences of decisions made lower in the hierarchy. This may be due either simply to a limited information processing capacity or to limited competencies in certain fields. However, it is one of her most important tasks to determine the organizational structure and decision procedures adopted in the organization. For instance she will decide on whether hierarchies are steep or flat, whether some decisions require the consent of certain people, and so on. To consider those cases we analyze in this section a situation in which the principal cannot learn her payoff directly when a project is proposed by an agent. Her task is then to design decision procedures appropriately such that on the one hand proposals are implemented that give her a high payoff, but on the other hand the agents have sufficient incentives to come up with new ideas.

To achieve the former goal, she may give more power to agents who have preferences closer to her own. For instance, an agent may be more "aligned" with the firm's interests than another one because his career prospects depend more strongly on the firm's performance. Hierarchical decision-making may thus lead to a higher quality of the selected projects. But this may discourage innovative activity. For our purposes we treat the extent of alignment with the firm's interests as exogenous, as it may be determined by many other factors which are not considered in our model.

We analyze a model with two agents. In the beginning a decision rule is set up for proposals made by either of them. Then nature selects one of them who gets the chance to exert effort to create an innovation. Once an agent has an idea, he can decide whether to disclose it. If he does so, his colleague learns the impact this project has on his private utility. We can now separately look for an optimal decision rule for each of the agents' proposals, which is a function9

$$\eta^i(s) = \eta^i(s_i, s_{-i})$$

determining the probability that a proposal made by agent i is implemented given both agents' opinions. As before, an agent will of course never propose a project reducing his own utility, hence η^i(n, s_{-i}) = 0. What remains to be determined are the probabilities that a project proposed by an agent i is implemented if his colleague either agrees (η^i(y, y)) or disagrees (η^i(y, n)) with his proposal. To simplify notation we will write η_C^i for the probability that the project is implemented if there is a consensus and η_D^i for the case of a disagreement, as defined in the following table:

i = 1, 2     probability of implementation
s_{-i} = y   η_C^i
s_{-i} = n   η_D^i

Different organizational structures can now be classified by their decision structures. In particular, corner solutions of the vector η = (η_C^1, η_D^1, η_C^2, η_D^2) imply the following structures:

        Full autonomy   Unanimity   1-Hierarchy   2-Hierarchy
η_C^1         1             1            1             1
η_D^1         1             0            1             0
η_C^2         1             1            1             1
η_D^2         1             0            0             1

When all proposals made by any agent are accepted, we say that both retain full autonomy. When each agent has veto power over the other's proposals, the unanimity rule is implemented. If however agent i retains full autonomy and gets the right to veto agent j's proposals, a hierarchy is installed where i is the superior of j. If η_C^1 = η_C^2 = 1 and η_D^i = 1 and η_D^{-i} = 0 we speak of a strict hierarchy. If however η_C^1 = η_C^2 = 1 and η_D^i = 1 and 0 < η_D^{-i} < 1 we speak of a weak hierarchy, since agent i has full autonomy on his own proposals and may sometimes veto proposals made by his colleague.
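The classification above can be encoded in a few lines (a hypothetical helper, not part of the model, which takes η_C^1 = η_C^2 = 1 as given so that only the disagreement probabilities matter):

```python
# Map the disagreement probabilities (eta_D1, eta_D2) to the organizational
# structures defined in the text, assuming consensus proposals are always
# accepted (eta_C1 = eta_C2 = 1). Illustrative encoding only.

def classify(eta_d1: float, eta_d2: float) -> str:
    """Classify a structure with eta_C1 = eta_C2 = 1 by its disagreement rules."""
    if eta_d1 == 1 and eta_d2 == 1:
        return "full autonomy"
    if eta_d1 == 0 and eta_d2 == 0:
        return "unanimity"
    if eta_d1 == 1:
        return "strict 1-hierarchy" if eta_d2 == 0 else "weak 1-hierarchy"
    if eta_d2 == 1:
        return "strict 2-hierarchy" if eta_d1 == 0 else "weak 2-hierarchy"
    return "intermediate rule"

print(classify(1, 1))    # full autonomy
print(classify(0, 0))    # unanimity
print(classify(1, 0))    # strict 1-hierarchy: agent 1 is agent 2's superior
print(classify(0.3, 1))  # weak 2-hierarchy: agent 2 sometimes vetoed agent 1
```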

9 Note that here the first component of the decision rule η^i(s_i, s_{-i}) denotes the signal sent by the proposing agent. Recall that correspondingly p^i(s) = p^i(s_i, s_{-i}) and β_j^i(s) = β_j^i(s_i, s_{-i}).


We start by analyzing the optimal decision rule for proposals by one of the agents. A result is derived that characterizes some properties of an optimal decision rule for arbitrary distribution functions. In the subsequent section we will then consider the collection of rules for both agents to analyze optimal organizational structure. First, we consider optimal structures for all symmetric distribution functions. Then we further specify the distribution of private benefits and consider a case with binary distributions.

5.1. Decision rules

When deciding about the optimal decision rule for projects proposed by the agents, the principal should consider the incentive and the information aspect of the decision rule. On the one hand, granting one of the agents more autonomy – by choosing high values of η_i^D – increases his motivation for making innovations and thereby the frequency of innovations. On the other hand, his colleague's opinion may be more highly correlated with the principal's own. Then giving the latter some veto power – i.e. by choosing lower values of η_i^D – may allow a better filtering of proposals made by agent i, and hence the quality of implemented innovations improves. In the following we will characterize this trade-off.

The principal chooses η_i^C and η_i^D to maximize her expected utility from innovations, taking into account the incentive compatibility and j's truth-telling constraint.

The latter here simply requires that η_i^C ≥ η_i^D, i.e. by announcing a positive opinion on the project proposed by agent i, his colleague must weakly increase the probability that it is implemented. Recall that by β_0^i(y, s_{−i}) we denote the principal's expected payoff of a proposal made by agent i on which agent −i voted s_{−i}, and p_i(y, s_{−i}) is the probability that such a voting constellation occurs given that i proposes a project.

max_{e_i, η_i^C, η_i^D}  e_i [p_i(y, y) η_i^C β_0^i(y, y) + p_i(y, n) η_i^D β_0^i(y, n)]      (9)

s.t.  e_i = argmax_{ê_i}  ê_i [p_i(y, y) η_i^C β_i^i(y, y) + p_i(y, n) η_i^D β_i^i(y, n)] − c(ê_i),

      1 ≥ η_i^C ≥ η_i^D ≥ 0.

We get the following characterization of the optimal rule for proposals made by an agent i:

Lemma 2. The optimal decision rule for agent i has the following properties:

(i) If both β_0^i(y, y) and β_0^i(y, n) are negative, then no proposal made by agent i is carried out.

(ii) If β_0^i(y, y) is positive, then η_i^C = 1. If in addition β_0^i(y, n) is positive, then also η_i^D = 1 and agent i gets full autonomy. If however β_0^i(y, n) < 0, then the probability that the proposal is accepted when his colleague disagrees is given by

η̂_i^D = (ε_i / (ε_i + 1)) · (p_i(y, y) / p_i(y, n)) · (β_0^i(y, y) / (−β_0^i(y, n))) − (1 / (ε_i + 1)) · (p_i(y, y) / p_i(y, n)) · (β_i^i(y, y) / β_i^i(y, n))      (10)

if η̂_i^D ∈ [0, 1]. For η̂_i^D < 0 it is optimal to set η_i^D = 0, for η̂_i^D > 1 to choose η_i^D = 1. Agent i's autonomy is weakly increasing in his incentive elasticity ε_i. For ε_i sufficiently small his colleague gets full veto power, i.e. η_i^D = 0.

(iii) If β_0^i(y, y) < 0 but β_0^i(y, n) ≥ 0, agent i will get full autonomy iff

E[b_0^i | b_i^i ≥ 0] ≥ 0,

otherwise no proposal made by him is carried out (η_i^C = η_i^D = 0).

Proof. See Appendix.

Claim (i) is straightforward, as the principal will clearly adopt no proposals if all yield a negative expected payoff. She will of course want all proposals to be accepted which give her a positive expected payoff, as in this case the incentive aspect and the information aspect coincide. This explains the first part of claim (ii). If however β_0^i(y, n) < 0, the principal's expected payoff from a proposal made by agent i is negative given that the other agent disagrees. In that case, she has to trade off both aspects. On the one hand, she loses ex post from raising η_i^D, as projects with a negative expected payoff are implemented. On the other hand she may gain, as the agent has higher autonomy and comes up with more proposals. If the incentive elasticity is sufficiently small, incentives are less important than the filtering of projects; hence, the other agent will be allowed to veto any proposal. If the incentive elasticity is larger, then it will be beneficial for the principal to grant more autonomy even at the cost of accepting bad projects with some probability. The intuition for expression (10) – the optimal probability that a project proposed by agent i is selected if his colleague disagrees – is similar to that given for Proposition 1.
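To see the trade-off concretely, one can check expression (10) numerically. The sketch below assumes a constant-elasticity effort response e(V) = V^ε (so that ε is the incentive elasticity) and purely illustrative parameter values; a grid search over η_i^D should then peak at the closed-form candidate:

```python
# Numerical check of the interior solution for eta_i^D (expression (10)).
# Assumption (not from the paper's general setup): the agent's effort response
# is e(V) = V**eps, which has constant incentive elasticity eps. All parameter
# values below are purely illustrative.

def eta_hat(eps, p_yy, p_yn, b0_yy, b0_yn, bi_yy, bi_yn):
    """Closed-form candidate from (10), before clipping to [0, 1]."""
    ratio_p = p_yy / p_yn
    return (eps / (eps + 1)) * ratio_p * (b0_yy / (-b0_yn)) \
         - (1 / (eps + 1)) * ratio_p * (bi_yy / bi_yn)

def principal_payoff(eta, eps, p_yy, p_yn, b0_yy, b0_yn, bi_yy, bi_yn):
    """e(V) * W with eta_i^C = 1: effort times expected payoff per proposal."""
    V = p_yy * bi_yy + eta * p_yn * bi_yn   # agent's stake in his proposals
    W = p_yy * b0_yy + eta * p_yn * b0_yn   # principal's expected payoff
    return (V ** eps) * W

params = dict(eps=2.0, p_yy=0.4, p_yn=0.2,
              b0_yy=1.0, b0_yn=-1.2, bi_yy=0.8, bi_yn=0.6)

# Grid search over eta in [0, 1] should peak at the closed-form value 2/9.
grid = [k / 10000 for k in range(10001)]
eta_star = max(grid, key=lambda eta: principal_payoff(eta, **params))
print(round(eta_hat(**params), 4), round(eta_star, 4))  # prints: 0.2222 0.2222
```

Raising eps in the example shifts the maximizer upward, in line with the claim that the agent's autonomy is weakly increasing in his incentive elasticity.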

To understand claim (iii), note that here the truth-telling constraint does impose a restriction. The principal would prefer to implement proposals by agent i only if the other agent votes against them, and reject those the latter supports. But clearly, if the principal used such a rule, this other agent would have an incentive to lie and announce "no" when he wants the project to be implemented. Hence, the principal can only either accept or reject all proposals made by agent i in this case; she cannot make use of his colleague's opinion. But then she will of course only accept proposals by agent i if her expected payoff is positive without this information.

From Lemma 2 we have learned some general properties of the optimal decision rule for one agent. The structure of our simple organization is determined by a combination of two rules – one for the proposals of each agent. In Sect. 5.2 we will consider symmetric payoff distributions and will show that only weak hierarchies or full autonomy can be optimal in that case. In Sect. 5.3 we will drop the symmetry assumption and characterize a situation in which the optimal structure requires unanimous decisions.

5.2. Hierarchy or autonomy

Here we impose the additional assumption that the distribution of payoffs is symmetric and the same for proposals made by both agents, that is, its density has


the property f(b) = f(−b) ∀ b ∈ R³.10 This clearly implies that E[b_j^i] = 0 ∀ i, j. Hence, without further information the principal is indifferent on whether a proposal should be implemented. We can then apply the results of the preceding Lemma to obtain a result on the optimal organizational structure.

First we will show that the symmetry of the distribution function implies that the principal's payoff in case of a disagreement of both agents has the property that β_0^i(y, n) ≥ 0 ≥ β_0^{−i}(y, n) for either i = 1 or i = 2: Since E[b_0^i] = 0, the following holds:

p_i(y, y) β_0^i(y, y) + p_i(y, n) β_0^i(y, n) + p_i(n, y) β_0^i(n, y) + p_i(n, n) β_0^i(n, n) = 0.

Because the distribution of the vector of private benefits does not depend on the identity of the agent who makes the proposal, we have that β_j^i(s) = β_j^{−i}(s) and p_i(s) = p_{−i}(s) for i, j = 1, 2. But symmetry also implies that p_i(y, y) = p_i(n, n) and β_0^i(y, y) = −β_0^i(n, n). Hence, we get that

p_1(y, n) β_0^1(y, n) = −p_2(y, n) β_0^2(y, n).      (11)

In the simplest case, β_0^i(y, n) = β_0^{−i}(y, n) = 0 and the principal is indifferent between implementing any project on which both agents disagree. If in addition β_0^i(y, y) > 0, granting full autonomy to both agents is clearly optimal as it maximizes incentives.

If, however, there is some asymmetry and β_0^i(y, n) ≠ β_0^{−i}(y, n), the principal can always learn more from the opinion of one agent, as (11) implies that β_0^i(y, n) > 0 > β_0^{−i}(y, n) for i either 1 or 2 in that case. We will then call agent i the more congruent agent. From claim (ii) of Lemma 2 we then know that the more congruent agent i will get full autonomy in that case, as all his proposals are implemented. On the other hand, from applying the same claim we know that for sufficiently small incentive elasticities, proposals made by the less congruent agent −i can be vetoed by the more congruent agent. But this combination of rules constitutes a hierarchy: the more congruent agent is the hierarchical superior, as he can decide autonomously on his own projects but can veto proposals by the other agent, who then will be his subordinate. Furthermore, again from claim (ii) of Lemma 2 we get that higher values of the incentive elasticity reduce the superior's vetoing probability, such that the hierarchy gets weaker. We can summarize those considerations in the following result:

Proposition 2. If the distribution of b_j^i is symmetric and independent of the proposer's identity, the optimal organizational structure has the following properties:

(i) If β_0^i(y, y) ≥ 0 and β_0^i(y, n) > β_0^{−i}(y, n), the optimal structure is a (weak) hierarchy in which agent i is the superior and agent −i the subordinate. The autonomy of the subordinate is increasing in ε_{−i}. If ε_{−i} is sufficiently small, then a strict hierarchy will be optimal.

(ii) If β_0^i(y, y) ≥ 0 and β_0^i(y, n) = β_0^{−i}(y, n), both agents get full autonomy.

10 Note that this does not imply that the marginal distributions of each agent's utility coincide, i.e. the distribution is not symmetric with respect to the agents. There may well be one agent whose private benefits, for instance, have a higher variation or a stronger correlation with the principal's.


It is important to note that the roles of subordinate and superior arise endogenously when we assume symmetric payoff distributions. It will now be interesting to analyze these ideas in a more specific environment to give a complete characterization of the optimal structure and to obtain comparative statics results with respect to the correlation between the agents' and the principal's preferences. To do this, we will again proceed by analyzing an example with a binary payoff distribution:

Example 2: Binary distributions

We assume that the utility changes from proposals made by both agents are drawn from identical binary distributions. As in Example 1, a project is either good for the principal, yielding a utility gain of d_0, or bad, giving her −d_0. Similarly, the project is either good or bad for each of both agents, resulting in a utility gain of d_i or a loss of −d_i. Both states for the principal's utility are assumed to be equally probable.11 But given that a project is good (bad) for the principal, we assume that it is also good (bad) for agent i with probability κ_i. We call κ_i agent i's congruence. Note that the correlation coefficient is ρ_{b_0 b_i} = 2κ_i − 1. Hence, for κ_i > 1/2 there is positive, for κ_i < 1/2 there is negative correlation with the principal's utility. For simplicity we assume that, conditional on the principal's benefit, both agents' utilities are independently drawn. Hence, correlation among the agents' utilities from a project stems only from a common correlation with the principal's utility, which may for instance be due to some common interest in the firm's well-being.

It is useful to compute the following expressions first:12

p_i(y, y) β_0^i(y, y) = (1/2)(κ_i + κ_{−i} − 1) d_0,
p_i(y, n) β_0^i(y, n) = (1/2)(κ_i − κ_{−i}) d_0.

Note that the principal's expected payoff in case of a consensus is positive (β_0^i(y, y) > 0) if κ_1 + κ_2 > 1. Furthermore, her expected payoff in case of a disagreement between both agents is positive if and only if the agent with the higher congruence is in favor of the project, that is, β_0^1(y, n) > 0 and β_0^1(n, y) < 0 if κ_1 > κ_2.
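Both expressions can be verified by brute-force enumeration of the eight sign configurations of Example 2 (the sketch below uses illustrative values of κ_1, κ_2 and d_0; the agents' own stakes d_i drop out of the principal's terms):

```python
from itertools import product

# Brute-force check of the Example 2 expressions: enumerate the eight sign
# configurations (b_0, b_1, b_2), where b_0 = +d0 or -d0 with probability 1/2
# and each agent's benefit has the same sign as b_0 with probability kappa_i,
# independently across agents. The kappa values below are illustrative.
def consensus_and_disagreement_terms(k1, k2, d0=1.0):
    term_yy = 0.0   # p_1(y,y) * beta_0^1(y,y): both agents in favour
    term_yn = 0.0   # p_1(y,n) * beta_0^1(y,n): agent 1 in favour, agent 2 against
    for s0, s1, s2 in product([1, -1], repeat=3):
        prob = 0.5                          # sign of the principal's benefit
        prob *= k1 if s1 == s0 else 1 - k1  # agent 1's sign given b_0
        prob *= k2 if s2 == s0 else 1 - k2  # agent 2's sign given b_0
        if s1 == 1 and s2 == 1:
            term_yy += prob * s0 * d0
        if s1 == 1 and s2 == -1:
            term_yn += prob * s0 * d0
    return term_yy, term_yn

k1, k2, d0 = 0.7, 0.6, 1.0
yy, yn = consensus_and_disagreement_terms(k1, k2, d0)
print(round(yy, 6), round(0.5 * (k1 + k2 - 1) * d0, 6))  # both equal 0.15
print(round(yn, 6), round(0.5 * (k1 - k2) * d0, 6))      # both equal 0.05
```

With κ_1 + κ_2 = 1.3 > 1 the consensus term is positive, and with κ_1 > κ_2 the disagreement term favoring agent 1 is positive, matching the observations above.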

It will be instructive to consider at first a benchmark case in which the incentive elasticity is sufficiently small, such that only the information aspect is considered.

We can then apply Lemma 2 and Proposition 2 to obtain the following full characterization of the optimal structure:

Proposition 3 (Binary distribution case). Let agent 1 be the more congruent agent (κ_1 ≥ κ_2). The optimal structure in the binary distribution case has the following properties:

(i) No project gets adopted if neither agent's utility is positively correlated with the principal's (κ_1, κ_2 < 1/2).

11 That is, µ = 1/2 in the notation of Example 1.

12 The derivation is given in the Appendix.


[Figure: regions of the (κ_1, κ_2) unit square, with κ_1 on the horizontal and κ_2 on the vertical axis, labeled "No projects implemented", "Only 1-Autonomy", "Only 2-Autonomy", "1-Hierarchy" and "2-Hierarchy".]

Fig. 1. Optimal structure in the binary distribution case (Example 2) with (ε_1, ε_2) very low

(ii) If κ_1 > 1/2 but κ_1 + κ_2 < 1, agent 1 gets full autonomy, but no proposal by agent 2 is accepted (η_1^C = η_1^D = 1 and η_2^C = η_2^D = 0).

(iii) If κ_1 + κ_2 ≥ 1 and the incentive elasticity is sufficiently small, the optimal decision rule is a hierarchy where agent 1 is agent 2's superior (i.e. η_1^C = η_2^C = 1 and η_1^D = 1, η_2^D = 0).

Proof. See Appendix.
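For negligible incentive elasticities, the case distinction of Proposition 3 (together with its mirror image for κ_2 > κ_1) can be written out directly; the boundary conventions in this sketch are our own, and the labels match the regions of Fig. 1:

```python
# Optimal structure in the binary distribution case when incentive
# elasticities are negligible (information aspect only), following the case
# distinction of Proposition 3 and its mirror image for kappa_2 > kappa_1.
def optimal_structure(k1, k2):
    hi, lo = max(k1, k2), min(k1, k2)
    leader = 1 if k1 >= k2 else 2          # the more congruent agent
    if hi < 0.5:
        return "no projects implemented"   # claim (i): both negatively correlated
    if hi + lo < 1:
        return f"only {leader}-autonomy"   # claim (ii): only the leader proposes
    return f"{leader}-hierarchy"           # claim (iii): leader becomes superior

for point in [(0.3, 0.3), (0.7, 0.2), (0.8, 0.6), (0.4, 0.9)]:
    print(point, "->", optimal_structure(*point))
```

For example, (0.8, 0.6) yields a 1-hierarchy, while the mirrored point (0.4, 0.9) yields a 2-hierarchy, reflecting the symmetry of Fig. 1 about the diagonal.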

Note that claims (i) and (ii) are independent of the incentive elasticity. The result can be illustrated as shown in Fig. 1 (which also depicts the cases with κ_1 ≤ κ_2). We draw κ_1 on the horizontal and κ_2 on the vertical axis and specify the optimal decision rule for any combination (κ_1, κ_2).13

The intuition for the result is the following: If an agent's congruence κ is less than 1/2, his utility is negatively correlated with the principal's own utility. Without further information, a project proposed by such an agent yields a negative expected payoff for her. If the other agent's utility is also negatively correlated with the principal's, she cannot use the latter to filter those projects: if he also recommends the project, her expectation is even lower. If the other agent however opposes the proposal, this is good news for the principal and she might want to implement such a project. But truth-telling requires that the probability to carry out a project should decrease if an agent votes against it. Hence, if both agents have congruences κ < 1/2, no proposal made by either of them is carried out. If however at least one agent has congruence κ > 1/2, his opinion is useful to filter projects proposed by his colleague. If κ_1 + κ_2 ≥ 1, all proposals on which both agents agree yield a positive expected payoff for the principal. From Proposition 2 we therefore know that the

13 Note that the hierarchies are strict hierarchies. If κ_1 = κ_2 > 1/2, the principal is indifferent between any hierarchy and full autonomy for both.
