
In this chapter, we studied the long-run effects of head starts in innovation contests in which each firm decides when to stop a privately observed repeated sampling process before a preset deadline.

Unlike an advantage in innovation cost or innovation ability, which encourages a firm to search more actively for innovations and discourages its opponent, a head start has non-monotone effects.

The head starter is discouraged from searching if the head start is large, and its strategy remains the same if the head start is small. The latecomer is discouraged from searching if the head start is large but is encouraged to search more actively if it is in the middle range. Our main finding is that, if the head start is in the middle range, in the long run the head starter is doomed to lose the competition with a payoff of zero, and the latecomer will take the entire surplus for the competing firms. As a consequence, our model can exhibit either the preemption effect or the replacement effect, depending on the value of the head start.

Our results have implications for antitrust issues. Market regulators are concerned that the existence of market-dominating firms, such as Google, may hinder competition, and they take measures to curb the monopoly power of these companies. For instance, the European Union voted to split Google into smaller companies.20 Our results imply that in many cases the positions of the dominant firms are precarious: in the long run, they will be knocked off their perch. These firms' current high positions may, in fact, promote competition in the long run, because they encourage their rivals to exert effort to innovate and to reach high targets. Curbing the power of the current dominating firms may benefit society and these firms' rivals in the short run, but in the long run it hurts society because it discourages innovation. However, if the dominating firms' positions are so high that they deter new firms from entering the market, a market regulator could take action.

The results also have implications for R&D policy. When selecting an R&D policy, policy makers have to consider both the nature of the R&D projects and the market structure. If the projects concern radical innovations, subsidizing innovation costs effectively increases competition when the market is blank (no advanced substitute technology exists in the market). However, when there is a current market-dominating firm with an existing advanced technology, a subsidy may not be effective: the dominating firm has no incentive to innovate, and the latecomer, even if it is subsidized, will not innovate more actively.

In our model we have only one head starter and one latecomer. The model can be extended to

20 "Google break-up plan emerges from Brussels," Financial Times, November 21, 2014.

include more than two firms, and similar results still hold. One extension is to study the design problem in our framework. For example, one question is how to set the deadline. If the designer is impatient, she may want to take the head starter's initial innovation directly without holding a contest; if she is patient, she may set a long deadline in order to obtain a better innovation. Other extensions include a model with a stochastic number of firms, and a model with cumulative scores, with or without regret, instead of repeated sampling.

3.A Appendix

3.A.1 Preliminaries

To justify Assumption 3.1, we show in the following that, for any given strategy played by a firm's opponent, there is a constant cut-off rule that is a best response for the firm. If the cut-off value is above zero, it is in fact the unique best response strategy, ignoring elements associated with zero-probability events. We argue only for the case in which both firms' initial states are 0. The arguments for the other cases are similar and thus omitted.

Suppose $a_1^I = a_2^I = 0$. For a given strategy played by Firm $j$, we say that at time $t$
\[
a_i^t := \inf\{\tilde a \in A \mid \text{Firm } i \text{ weakly prefers stopping to continuing searching in state } \tilde a\}
\]
is Firm $i$'s lower optimal cut-off and
\[
\bar a_i^t := \inf\{\tilde a \in A \mid \text{Firm } i \text{ strictly prefers stopping to continuing searching in state } \tilde a\}
\]
is Firm $i$'s upper optimal cut-off.

Lemma 3.4. Suppose $a_1^I = a_2^I = 0$. For any fixed strategy played by Firm $j$, Firm $i$'s best response belongs to one of the following three cases.

i. Not to search: $\bar a_i^t = a_i^t = -1$ for all $t \in [0, T]$;

ii. Search with a constant cut-off rule $\hat a_i \ge 0$: $\bar a_i^t = a_i^t = \hat a_i \ge 0$ for all $t \in [0, T]$;

iii. Both not to search and to search until reaching a state above 0 are best responses: $\bar a_i^t = 0$ and $a_i^t = -1$ for all $t \in [0, T)$.

Proof of Lemma 3.4. Fix a strategy of Firm $j$. Let $P(a)$ denote the probability of Firm $j$ ending up in a state below $a$ at time $T$. $P(a)$ is either constant in $a$ or strictly increasing in $a$. It is constant if and only if Firm $j$ does not search.21 If this is the case, Firm $i$'s best response is to continue searching with a fixed cut-off $\hat a_i^t = \bar a_i^t = a_i^t = 0$ for all $t$. In the following, we study the case in which $P(a)$ is strictly increasing in $a$.

Step 1. We argue that, given a fixed strategy played by Firm $j$, Firm $i$'s best response is a (potentially history-dependent) cut-off rule. Suppose at time $t$ Firm $i$ is in a state $\tilde a \in [0,1]$. If it is strictly marginally profitable to stop (continue) searching at $t$, then it is also strictly marginally profitable to stop (continue) searching in any state higher (lower) than $\tilde a$. Let the upper and lower optimal cut-offs at time $t$ be $\bar a_i^t$ and $a_i^t$, respectively, as defined previously.

Step 2. We show that $\{\bar a_i^t\}_{t=0}^T$ and $\{a_i^t\}_{t=0}^T$ are history-independent. We use a discrete version to approximate the continuous version. Take any $\tilde t \in [0, T)$. Let $\{t_l\}_{l=0}^k$, where $t_l - t_{l-1} = \frac{T - \tilde t}{k} =: \delta$ for $l = 1, \dots, k$, be a partition of the interval $[\tilde t, T]$. Suppose Firm $i$ can only make decisions at $\{t_l\}_{l=0}^k$ in the interval $[\tilde t, T]$. Let $\{\bar a^{t_l}\}_{l=0}^{k-1}$ and $\{a^{t_l}\}_{l=0}^{k-1}$ be the corresponding upper and lower optimal cut-offs, respectively, and let $G_\delta(a)$ be Firm $i$'s probability of discovering no innovation with a value above $a$ in an interval of length $\delta$.

At $t_{k-1}$, for Firm $i$ in a state $a$, if it stops searching, the expected payoff is $P(a)$; if it continues searching, the expected payoff is
\[
G_\delta(a)P(a) + \int_a^1 P(\tilde a)\,dG_\delta(\tilde a) - \delta c_i
= P(a) + \int_a^1 [P(\tilde a) - P(a)]\,dG_\delta(\tilde a) - \delta c_i.
\]

The firm strictly prefers continuing searching if and only if searching in the last period strictly increases its expected payoff, that is,
\[
e_\delta(a) := \int_a^1 [P(\tilde a) - P(a)]\,dG_\delta(\tilde a) - \delta c_i > 0.
\]
$e_\delta(a)$ strictly decreases in $a$, and $e_\delta(1) \le 0$. Because $e_\delta(0)$ can be either negative or positive, we have to distinguish several cases.

Case 1. If $e_\delta(0) < 0$, Firm $i$ is strictly better off stopping searching in any state $a \in [0,1]$. Thus, $\bar a^{t_{k-1}} = a^{t_{k-1}} = -1$.

Case 2. If $e_\delta(0) = 0$, Firm $i$ is indifferent between stopping searching and continuing searching with $0$ as the cut-off if it is in state $0$, and strictly prefers stopping searching if it is in any state above $0$. Then $\bar a^{t_{k-1}} = 0$ and $a^{t_{k-1}} = -1$.

Case 3. If $e_\delta(0) > 0 \ge \lim_{a \to 0} e_\delta(a)$, Firm $i$ is strictly better off continuing searching in state $0$, but stopping searching once it is in a state above $0$. Thus, $\bar a^{t_{k-1}} = a^{t_{k-1}} = 0$.

Case 4. If $\lim_{a \to 0} e_\delta(a) > 0$, then Firm $i$ is strictly better off stopping searching if it is in a state above $\hat a^{t_{k-1}}$ and continuing searching if it is in a state below $\hat a^{t_{k-1}}$, where the optimal cut-off $\hat a^{t_{k-1}} > 0$ is the unique value of $a$ that satisfies
\[
\int_a^1 [P(\tilde a) - P(a)]\,dG_\delta(\tilde a) - \delta c_i = 0.
\]
Thus, in this case $\bar a^{t_{k-1}} = a^{t_{k-1}} = \hat a^{t_{k-1}}$.

Hence, the continuation payoff at $t_{k-1}$ for Firm $i$ in a state $a \in [0,1]$ is
\[
\omega(a) =
\begin{cases}
P(a) + \int_a^1 [P(\tilde a) - P(a)]\,dG_\delta(\tilde a) - \delta c_i & \text{for } a < a^{t_{k-1}}, \\
P(a) & \text{for } a \ge a^{t_{k-1}}.
\end{cases}
\]

Then, we look at the time point $t_{k-2}$. In the following, we argue that $\bar a^{t_{k-2}} = \bar a^{t_{k-1}}$. The argument for $a^{t_{k-2}} = a^{t_{k-1}}$ is very similar and thus is omitted.

First, we show that $\bar a^{t_{k-2}} \le \bar a^{t_{k-1}}$. Suppose $\bar a^{t_{k-2}} > \bar a^{t_{k-1}}$, and suppose Firm $i$ is in state $\bar a^{t_{k-2}}$ at time $t_{k-2}$ and searches between $t_{k-2}$ and $t_{k-1}$. If it does not discover any innovation with a value higher than $\bar a^{t_{k-2}}$, then at the end of this period it stops searching and takes $\bar a^{t_{k-2}}$. However, $\bar a^{t_{k-2}} > \bar a^{t_{k-1}}$ implies
\[
0 = \int_{\bar a^{t_{k-2}}}^1 [P(\tilde a) - P(\bar a^{t_{k-2}})]\,dG_\delta(\tilde a) - \delta c_i
< \int_{\bar a^{t_{k-1}}}^1 [P(\tilde a) - P(\bar a^{t_{k-1}})]\,dG_\delta(\tilde a) - \delta c_i \le 0.
\]
The search cost is not compensated by the increase in the probability of winning from searching between $t_{k-2}$ and $t_{k-1}$, and thus the firm strictly prefers stopping searching to continuing searching at time $t_{k-2}$, which contradicts the assumption that $\bar a^{t_{k-2}}$ is the upper optimal cut-off. Hence, it must be the case that $\bar a^{t_{k-2}} \le \bar a^{t_{k-1}}$.

Next, we show that $\bar a^{t_{k-2}} = \bar a^{t_{k-1}}$.

In Case 1, it is straightforward that Firm $i$ strictly prefers stopping searching at $t_{k-2}$, since it is for sure not going to search between $t_{k-1}$ and $t_k$. Hence, Firm $i$ stops searching before $t_{k-1}$, and $\bar a^{t_{k-2}} = \bar a^{t_{k-1}} = a^{t_{k-2}} = a^{t_{k-1}} = -1$.

For $\bar a^{t_{k-1}} \ge 0$, we prove by contradiction that $\bar a^{t_{k-2}} < \bar a^{t_{k-1}}$ is not possible. Suppose the inequality holds. If Firm $i$ stops searching at $t_{k-2}$, it would choose to continue searching at $t_{k-1}$, and its expected continuation payoff at $t_{k-2}$ is $\omega(\bar a^{t_{k-2}})$. If the firm continues searching, its expected continuation payoff is
\[
\omega(\bar a^{t_{k-2}}) + \int_{\bar a^{t_{k-2}}}^1 [\omega(a) - \omega(\bar a^{t_{k-2}})]\,dG_\delta(a) - \delta c_i. \tag{3.6}
\]

In Cases 2 and 3, $\bar a^{t_{k-1}} = 0$ implies $\bar a^{t_{k-2}} = -1$. Then,
\[
\int_{\bar a^{t_{k-2}}}^1 [\omega(a) - \omega(\bar a^{t_{k-2}})]\,dG_\delta(a) - \delta c_i
= \int_{-1}^1 P(a)\,dG_\delta(a) - \delta c_i
= e_\delta(-1) \ge 0,
\]
which means that Firm $i$ in state $0$ is weakly better off continuing searching between $t_{k-2}$ and $t_{k-1}$; this implies $\bar a^{t_{k-2}} \ge 0$, resulting in a contradiction.

For Case 4, in which $\bar a^{t_{k-1}} > 0$, we have in (3.6)
\begin{align*}
& \int_{\bar a^{t_{k-2}}}^1 [\omega(a) - \omega(\bar a^{t_{k-2}})]\,dG_\delta(a) \\
&= \int_{\bar a^{t_{k-2}}}^{\bar a^{t_{k-1}}} \left\{ \left[ P(a) + \int_a^1 [P(\tilde a) - P(a)]\,dG_\delta(\tilde a) \right] - \left[ P(\bar a^{t_{k-2}}) + \int_{\bar a^{t_{k-2}}}^1 [P(\tilde a) - P(\bar a^{t_{k-2}})]\,dG_\delta(\tilde a) \right] \right\} dG_\delta(a) \\
&\quad + \int_{\bar a^{t_{k-1}}}^1 \left\{ P(a) - \left[ P(\bar a^{t_{k-2}}) + \int_{\bar a^{t_{k-2}}}^1 [P(\tilde a) - P(\bar a^{t_{k-2}})]\,dG_\delta(\tilde a) \right] \right\} dG_\delta(a) \\
&= \int_{\bar a^{t_{k-2}}}^1 \left[ P(a) - P(\bar a^{t_{k-2}}) \right] dG_\delta(a)
+ \int_{\bar a^{t_{k-2}}}^{\bar a^{t_{k-1}}} \left[ \int_a^1 [P(\tilde a) - P(a)]\,dG_\delta(\tilde a) \right] dG_\delta(a)
- \int_{\bar a^{t_{k-2}}}^1 \left[ \int_{\bar a^{t_{k-2}}}^1 [P(\tilde a) - P(\bar a^{t_{k-2}})]\,dG_\delta(\tilde a) \right] dG_\delta(a) \\
&= G_\delta(\bar a^{t_{k-2}}) \int_{\bar a^{t_{k-2}}}^1 \left[ P(a) - P(\bar a^{t_{k-2}}) \right] dG_\delta(a)
+ \int_{\bar a^{t_{k-2}}}^{\bar a^{t_{k-1}}} \left[ \int_a^1 [P(\tilde a) - P(a)]\,dG_\delta(\tilde a) \right] dG_\delta(a) \\
&> 0.
\end{align*}

Hence, at $t_{k-2}$ Firm $i$ would strictly prefer continuing searching, which again contradicts the assumption that $\bar a^{t_{k-2}}$ is the upper optimal cut-off. Consequently, $\bar a^{t_{k-2}} = \bar a^{t_{k-1}}$.

By backward induction from $t_{k-1}$ to $t_0$, we have $\bar a^{t_0} = \bar a^{t_{k-1}}$. Taking the limit, we get
\[
\bar a^t = \lim_{\delta \to 0} \bar a^{T - \delta} =: \bar a \quad \text{for all } t \in [0, T).
\]
Similarly,
\[
a^t = \lim_{\delta \to 0} a^{T - \delta} =: a \quad \text{for all } t \in [0, T).
\]
In addition, $\bar a \ne a$ when and only when $\bar a = 0$ and $a = -1$.

As a consequence, Firm $i$'s best response is not to search if $\bar a = a = -1$, and to continue searching if it is in a state below $\bar a$ and to stop searching once it is in a state above $\bar a$ if $\bar a = a \ge 0$. In brief, the above property is proved by backward induction. Take Case [ii] for example. If at the last moment a firm is indifferent between continuing and stopping searching when it is in a certain state, then the increase in the probability of winning from continuing searching equals the cost of searching, and therefore there is no gain from searching. Immediately before the last moment, the firm should also be indifferent between continuing searching and not, given the same state. This is because, if the firm reaches a higher state from continuing searching, it weakly prefers not to search at the last moment, and thus the increase in the probability of winning from continuing searching at this moment equals the cost of searching as well. By induction, the firm should be indifferent between continuing and stopping searching in the same state from the very beginning.

In Case [iii], Firm $i$ generally has uncountably many best response strategies. By Assumption 3.1, we rule out most strategies and consider only two natural strategies: not to search at all, and to search with $0$ as the cut-off.
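The history-independence argument above can also be checked numerically. The sketch below runs the discrete backward induction from the proof under illustrative assumptions that are not taken from the text: $F$ is uniform on $[0,1]$, the opponent plays a fixed cut-off $\hat a_j$, and $\lambda$, $T$, $c$ take arbitrary values. It reports the optimal cut-off at the first and last decision points, which should coincide up to grid resolution.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the text); F is uniform on [0, 1].
lam, T, c = 2.0, 3.0, 0.2
a_hat_j = 0.3            # opponent's fixed cut-off (hypothetical)
k = 300                  # number of decision points in the discretisation
delta = T / k
grid = np.linspace(0.0, 1.0, 801)   # states of Firm i

def Z(a, a_hat):
    """Lemma 3.5 with F uniform: probability of ending below a at the deadline."""
    a = np.asarray(a, dtype=float)
    below = np.exp(-lam * T * (1.0 - a))
    above = (np.exp(-lam * T * (1.0 - a_hat))
             + (1.0 - np.exp(-lam * T * (1.0 - a_hat))) * (a - a_hat) / (1.0 - a_hat))
    return np.where(a <= 0.0, 0.0, np.where(a <= a_hat, below, above))

P = Z(grid, a_hat_j)                      # P(a): opponent ends below a
G = np.exp(-lam * delta * (1.0 - grid))   # G_delta(a): no draw above a in one period
dG = np.gradient(G, grid)                 # density of the best draw within one period
da = grid[1] - grid[0]

def continue_value(V):
    """G_delta(a) V(a) + int_a^1 V(s) dG_delta(s) - delta * c."""
    tail = np.flip(np.cumsum(np.flip(V * dG))) * da
    return G * V + tail - delta * c

V = P.copy()              # value at the deadline: stop and receive P(a)
cutoffs = []
for _ in range(k):        # backward induction over the decision points
    C = continue_value(V)
    V = np.maximum(P, C)
    stop = P >= C         # states where stopping is (weakly) optimal
    cutoffs.append(grid[np.argmax(stop)])

print("cut-off at the last decision point :", round(float(cutoffs[0]), 4))
print("cut-off at the first decision point:", round(float(cutoffs[-1]), 4))
# Both cut-offs should agree up to grid resolution, illustrating Lemma 3.4.
```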

Lemma 3.5. Suppose a firm's initial state is $0$, and it searches with a cut-off $\hat a \ge 0$. Then the firm's probability of ending up in a state below $a \in [0,1]$ at time $T$ is
\[
Z(a \mid \hat a, T) =
\begin{cases}
0 & \text{if } a = 0, \\
e^{-\lambda T[1 - F(a)]} & \text{if } 0 < a \le \hat a, \\
e^{-\lambda T[1 - F(\hat a)]} + \left(1 - e^{-\lambda T[1 - F(\hat a)]}\right) \dfrac{F(a) - F(\hat a)}{1 - F(\hat a)} & \text{if } a > \hat a.
\end{cases}
\]
Here $1 - e^{-\lambda T[1 - F(\hat a)]}$ is the probability that the firm stops searching before time $T$, and $\frac{F(a) - F(\hat a)}{1 - F(\hat a)}$ is the conditional probability that the innovation above the threshold that the firm discovers lies between $\hat a$ and $a$.

Proof of Lemma 3.5. For $a = 0$, it is clear that $Z(a \mid \hat a, T) = 0$. For $0 < a \le \hat a$,
\[
Z(a \mid \hat a, T) = \sum_{l=0}^{\infty} \frac{e^{-\lambda T}(\lambda T)^l}{l!} F^l(a) = e^{-\lambda T[1 - F(a)]}.
\]

For $a > \hat a$, we approximate it by a discrete-time model. Let $\{t_l\}_{l=0}^k$, where $0 = t_0 < t_1 < \dots < t_k = T$, be a partition of the interval $[0, T]$, and let $\delta_l = t_l - t_{l-1}$ for $l = 1, 2, \dots, k$. Define the mesh of the partition $\pi$ as $\|\pi\| = \max_l |\delta_l|$.

Then,
\begin{align*}
Z(a \mid \hat a, T) &= Z(\hat a \mid \hat a, T) + \lim_{\|\pi\| \to 0} \sum_{l=1}^{k} Z(\hat a \mid \hat a, t_{l-1}) \left[ \sum_{n=1}^{\infty} \frac{e^{-\lambda \delta_l}(\lambda \delta_l)^n}{n!} \left[ F^n(a) - F^n(\hat a) \right] \right] \\
&= Z(\hat a \mid \hat a, T) + \lim_{\|\pi\| \to 0} \sum_{l=1}^{k} e^{-\lambda t_{l-1}[1 - F(\hat a)]} \left\{ \lambda e^{-\lambda \delta_l} \left[ F(a) - F(\hat a) \right] + O(\delta_l) \right\} \delta_l \\
&= Z(\hat a \mid \hat a, T) + \int_0^T \lambda e^{-\lambda t[1 - F(\hat a)]} \left[ F(a) - F(\hat a) \right] dt \\
&= Z(\hat a \mid \hat a, T) + \left[ 1 - e^{-\lambda T[1 - F(\hat a)]} \right] \frac{F(a) - F(\hat a)}{1 - F(\hat a)},
\end{align*}
where the second term on the right-hand side of each equality is the firm's probability of ending up in a state between $\hat a$ and $a$. The term $Z(\hat a \mid \hat a, t_{l-1})$ used here is a convenient approximation when $\delta_l$ is small. The second equality is derived from the fact that
\[
\sum_{n=2}^{\infty} \frac{(\lambda \delta_l)^n}{n!} \left[ F^n(a) - F^n(\hat a) \right] < \frac{\lambda^2 \delta_l^2}{2(1 - \lambda \delta_l)} = o(\delta_l).
\]
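As a quick sanity check, the closed form in Lemma 3.5 can be compared with a direct Monte Carlo simulation of the search process. The sketch below assumes, purely for illustration, that $F$ is uniform on $[0,1]$ and uses arbitrary values of $\lambda$, $T$ and $\hat a$ that do not come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, T, a_hat = 1.5, 2.0, 0.4   # illustrative values (assumptions); F uniform on [0, 1]
n_sim = 200_000

def simulate_final_state():
    """Draws arrive at Poisson rate lam; the firm stops at the first draw above a_hat."""
    t, best = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)
        if t > T:
            return best
        draw = rng.uniform()
        best = max(best, draw)
        if draw > a_hat:
            return best

finals = np.array([simulate_final_state() for _ in range(n_sim)])

def Z(a):
    """Closed form from Lemma 3.5 with F(x) = x."""
    if a <= 0.0:
        return 0.0
    if a <= a_hat:
        return np.exp(-lam * T * (1.0 - a))
    p_never = np.exp(-lam * T * (1.0 - a_hat))
    return p_never + (1.0 - p_never) * (a - a_hat) / (1.0 - a_hat)

for a in (0.2, 0.4, 0.6, 0.9):
    print(f"a = {a:.1f}: simulated {np.mean(finals < a):.4f} vs formula {Z(a):.4f}")
```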

Lemma 3.6. Given $a > a'$, $Z(a \mid \tilde a, T) - Z(a' \mid \tilde a, T)$

1. is constant in $\tilde a$ for $\tilde a \ge a$;
2. strictly decreases in $\tilde a$ for $\tilde a \in (a', a)$;
3. strictly increases in $\tilde a$ for $\tilde a \le a'$.

This single-peaked property says that the probability of ending up in a state between $a'$ and $a$ is maximized if a firm chooses strategy $a'$.

Proof of Lemma 3.6. First, we show that $\frac{1 - e^{-\lambda T x}}{x}$ strictly decreases in $x$ over $(0,1]$, as follows. Define $s := \lambda T$ and take $x_1, x_2$ with $0 < x_1 < x_2 \le 1$. We have
\[
\frac{1 - e^{-s x_1}}{x_1} > \frac{1 - e^{-s x_2}}{x_2},
\]
which is implied by
\[
\frac{\partial \left[ (1 - e^{-s x_1}) x_2 - (1 - e^{-s x_2}) x_1 \right]}{\partial s} = x_1 x_2 \left( e^{-s x_1} - e^{-s x_2} \right) \ge 0 \quad (= 0 \text{ iff } s = 0)
\]
and $(1 - e^{-s x_1}) x_2 - (1 - e^{-s x_2}) x_1 = 0$ for $s = 0$.

Next, define $x := 1 - F(a)$, $x' := 1 - F(a')$, and $\tilde x := 1 - F(\tilde a)$. We have
\[
Z(a \mid \tilde a, T) - Z(a' \mid \tilde a, T) =
\begin{cases}
e^{-\lambda T x} - e^{-\lambda T x'} & \text{for } \tilde a \ge a, \\
\left(1 - e^{-\lambda T x'}\right) - \left(1 - e^{-\lambda T \tilde x}\right)\dfrac{x}{\tilde x} & \text{for } \tilde a \in (a', a), \\
\left(1 - e^{-\lambda T \tilde x}\right)\dfrac{x' - x}{\tilde x} & \text{for } \tilde a \le a'.
\end{cases}
\]
It is independent of $\tilde a$ for $\tilde a \ge a$; strictly increasing in $\tilde x$, and thus strictly decreasing in $\tilde a$, for $\tilde a \in (a', a)$; and strictly decreasing in $\tilde x$, and thus strictly increasing in $\tilde a$, for $\tilde a \le a'$.
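The single-peaked property is easy to visualize numerically. The following sketch evaluates $Z(a \mid \tilde a, T) - Z(a' \mid \tilde a, T)$ on a grid of $\tilde a$ for one fixed pair $a > a'$, again under the illustrative assumption that $F$ is uniform on $[0,1]$; the peak sits at $\tilde a = a'$, as Lemma 3.6 states.

```python
import numpy as np

lam, T = 1.0, 2.0          # illustrative values (assumptions); F uniform on [0, 1]
a, a_prime = 0.7, 0.3      # a fixed pair with a > a'

def Z(x, a_tilde):
    """Lemma 3.5 with F(s) = s and cut-off a_tilde."""
    if x <= 0.0:
        return 0.0
    if x <= a_tilde:
        return np.exp(-lam * T * (1.0 - x))
    p0 = np.exp(-lam * T * (1.0 - a_tilde))
    return p0 + (1.0 - p0) * (x - a_tilde) / (1.0 - a_tilde)

tilde_grid = np.linspace(0.0, 1.0, 201)
diff = np.array([Z(a, t) - Z(a_prime, t) for t in tilde_grid])

print("difference is maximised at a_tilde =", round(float(tilde_grid[np.argmax(diff)]), 3),
      "(Lemma 3.6 predicts a' =", a_prime, ")")
print("constant on [a, 1]:", bool(np.allclose(diff[tilde_grid >= a], diff[-1])))
```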

Lemma 3.7. Suppose Firm $j$ with initial state $0$ plays a strategy $\hat a_j$. Then the instantaneous gain in payoff from searching for Firm $i$ in a state $a_i$ is
\[
\lambda \int_{a_i}^1 \left[ Z(a \mid \hat a_j, T) - Z(a_i \mid \hat a_j, T) \right] dF(a) - c.
\]

Proof of Lemma 3.7. For convenience, denote $H(a) := Z(a \mid \hat a_j, T)$ for short. The instantaneous gain from searching for Firm $i$ in a state $a_i$ is
\begin{align*}
& \lim_{\delta \to 0} \frac{ e^{-\lambda \delta} H(a_i) + \lambda \delta e^{-\lambda \delta} \left[ \int_{a_i}^1 H(a)\,dF(a) + F(a_i) H(a_i) \right] + o(\delta) - \delta c - H(a_i) }{\delta} \\
&= \lim_{\delta \to 0} \frac{ -(1 - e^{-\lambda \delta}) H(a_i) + \lambda \delta e^{-\lambda \delta} \left[ \int_{a_i}^1 H(a)\,dF(a) + F(a_i) H(a_i) \right] + o(\delta) - \delta c }{\delta} \\
&= \lim_{\delta \to 0} \frac{ -\lambda \delta e^{-\lambda \delta} H(a_i) + \lambda \delta e^{-\lambda \delta} \left[ \int_{a_i}^1 H(a)\,dF(a) + F(a_i) H(a_i) \right] + o(\delta) - \delta c }{\delta} \\
&= -\lambda H(a_i) + \lambda \left[ \int_{a_i}^1 H(a)\,dF(a) + F(a_i) H(a_i) \right] - c \\
&= \lambda \int_{a_i}^1 \left[ Z(a \mid \hat a_j, T) - Z(a_i \mid \hat a_j, T) \right] dF(a) - c.
\end{align*}
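For concreteness, the expression in Lemma 3.7 can be evaluated numerically. The sketch below computes the instantaneous gain for a few states $a_i$, assuming (as a pure illustration, not something the text specifies) that $F$ is uniform on $[0,1]$ and picking arbitrary values for $\lambda$, $T$, $c$ and the opponent's cut-off $\hat a_j$.

```python
import numpy as np

lam, T, c = 1.5, 2.0, 0.3   # illustrative values (assumptions); F uniform on [0, 1]
a_hat_j = 0.4               # Firm j's cut-off (hypothetical)

def Z(x, a_hat):
    """Lemma 3.5 with F(s) = s."""
    if x <= 0.0:
        return 0.0
    if x <= a_hat:
        return np.exp(-lam * T * (1.0 - x))
    p0 = np.exp(-lam * T * (1.0 - a_hat))
    return p0 + (1.0 - p0) * (x - a_hat) / (1.0 - a_hat)

def gain(a_i, n=2001):
    """lambda * int_{a_i}^1 [Z(a) - Z(a_i)] dF(a) - c, by the trapezoid rule (F uniform)."""
    grid = np.linspace(a_i, 1.0, n)
    vals = np.array([Z(a, a_hat_j) - Z(a_i, a_hat_j) for a in grid])
    integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))
    return lam * integral - c

for a_i in (0.0, 0.2, 0.5, 0.8):
    print(f"state a_i = {a_i:.1f}: instantaneous gain = {gain(a_i):+.4f}")
```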

3.A.2 Proofs for the Benchmark Case

Proof of Theorem 3.1. We prove the theorem case by case.

Case [i]. When Firm $i$ does not search, Firm $j$'s best response is to search with cut-off $0$. For $\frac{1}{2}\lambda(1 + e^{-\lambda T}) \le c$, when Firm $j$ searches with any cut-off $a_j \ge 0$, Firm $i$'s best response is not to search, since the instantaneous gain from searching for Firm $i$ in state $0$ is
\begin{align*}
\lambda \int_0^1 Z(a \mid a_j, T)\,dF(a) - c
&\le \lambda \int_0^1 Z(a \mid 0, T)\,dF(a) - c \\
&= \lambda \int_0^1 \left[ e^{-\lambda T} + (1 - e^{-\lambda T}) F(a) \right] dF(a) - c \\
&= \lambda \left[ e^{-\lambda T} + \tfrac{1}{2}(1 - e^{-\lambda T}) \right] - c \\
&= \tfrac{1}{2}\lambda (1 + e^{-\lambda T}) - c \\
&\le 0 \quad \left(= 0 \text{ iff } \tfrac{1}{2}\lambda(1 + e^{-\lambda T}) = c\right),
\end{align*}
where the first inequality follows from Lemma 3.6. Hence, there are two pure-strategy equilibria, in each of which one firm does not search and the other firm searches with $0$ as the cut-off; and if $\frac{1}{2}\lambda(1 + e^{-\lambda T}) = c$, there is also an equilibrium in which both firms search with $0$ as the cut-off.

Case [ii]. First, we show that any strategy with a cut-off value higher than zero is a dominated strategy. When Firm $j$ does not search, Firm $i$ prefers searching with $0$ as the cut-off to any other strategy. Suppose Firm $j$ searches with $\hat a_j \ge 0$ as the cut-off. The instantaneous gain from searching for Firm $i$ in a state $a_i > 0$ is
\begin{align*}
\lambda \int_{a_i}^1 \left[ Z(a \mid \hat a_j, T) - Z(a_i \mid \hat a_j, T) \right] dF(a) - c
&\le \lambda \int_{a_i}^1 \left[ Z(a \mid a_i, T) - Z(a_i \mid a_i, T) \right] dF(a) - c \\
&= \tfrac{1}{2}\lambda \left(1 - e^{-\lambda T[1 - F(a_i)]}\right) \left[1 - F(a_i)\right] - c \\
&< 0,
\end{align*}
where the first inequality follows from Lemma 3.6. Hence, once Firm $i$ is in a state above $0$, it has no incentive to continue searching. In this case, Firm $i$ prefers either not to search at all or to search with $0$ as the cut-off, rather than to use any strategy with a cut-off value higher than zero.

Second, we show that the prescribed strategy profile is the unique equilibrium. It is sufficient to show that searching with $0$ as the cut-off is the best response to searching with $0$ as the cut-off. Suppose Firm $j$ searches with $0$ as the cut-off; the instantaneous gain from searching for Firm $i$ in state $a = 0$ is
\[
\lambda \int_0^1 Z(a \mid 0, T)\,dF(a) - c = \tfrac{1}{2}\lambda(1 + e^{-\lambda T}) - c > 0.
\]
That is, Firm $i$ is strictly better off continuing searching if it is in state $0$, and strictly better off stopping searching once it is in a state above $0$. Hence, the prescribed strategy profile is the unique equilibrium.

Case [iii]. First, we prove that, among the strategy profiles in which each firm searches with a cut-off higher than $0$, the prescribed symmetric strategy profile is the unique equilibrium. Suppose a pair of cut-off rules $(a_1, a_2)$ with $a_1, a_2 > 0$ is an equilibrium. Then Firm $i$ in state $a_i$ is indifferent between continuing searching and not. That is, by Lemma 3.7, we have
\[
\lambda \int_{a_i}^1 \left[ Z(a \mid a_j, T) - Z(a_i \mid a_j, T) \right] dF(a) - c = 0. \tag{3.7}
\]
Suppose $a_1 \ne a_2$. W.l.o.g., assume $a_1 < a_2$. Then,
\begin{align*}
c &= \lambda \int_{a_1}^1 \left[ Z(a \mid a_2, T) - Z(a_1 \mid a_2, T) \right] dF(a) \\
&> \lambda \int_{a_2}^1 \left[ Z(a \mid a_2, T) - Z(a_2 \mid a_2, T) \right] dF(a) \\
&> \lambda \int_{a_2}^1 \left[ Z(a \mid a_1, T) - Z(a_2 \mid a_1, T) \right] dF(a) = c,
\end{align*}
resulting in a contradiction. Hence, it must be the case that $a_1 = a_2$.

Next, we show the existence of equilibrium by deriving the unique equilibrium cut-off value $a^* := a_1 = a_2$ explicitly. Applying Lemma 3.5 to (3.7), we have
\[
\lambda \int_{a^*}^1 \left[ 1 - e^{-\lambda T[1 - F(a^*)]} \right] \frac{F(a) - F(a^*)}{1 - F(a^*)}\,dF(a) = c
\quad \Longleftrightarrow \quad
\frac{1}{2}\left[1 - F(a^*)\right] \left[ 1 - e^{-\lambda T[1 - F(a^*)]} \right] = \frac{c}{\lambda}. \tag{3.8}
\]
The existence of a solution is ensured by the intermediate value theorem: when $F(a^*) = 1$, the left-hand side of (3.8) equals $0$, which is smaller than $\frac{c}{\lambda}$; when $F(a^*) = 0$, it equals $\frac{1 - e^{-\lambda T}}{2}$, which is larger than or equal to $\frac{c}{\lambda}$. The uniqueness of the solution is ensured by the fact that the left-hand side is strictly decreasing in $a^*$.
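Because the left-hand side of (3.8) is strictly decreasing, the equilibrium cut-off can be computed by simple bisection. The sketch below does so under the illustrative assumption that $F$ is uniform on $[0,1]$, with arbitrary parameter values chosen only so that an interior solution exists.

```python
import math

lam, T, c = 2.0, 3.0, 0.3   # illustrative values (assumptions) with c/lam < (1 - e^{-lam*T})/2

def lhs(a):
    """Left-hand side of (3.8) with F(a) = a."""
    return 0.5 * (1.0 - a) * (1.0 - math.exp(-lam * T * (1.0 - a)))

target = c / lam
assert lhs(0.0) >= target > 0.0   # existence, as in the intermediate value argument

lo, hi = 0.0, 1.0                 # lhs is strictly decreasing, so bisection applies
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if lhs(mid) < target else (mid, hi)

a_star = 0.5 * (lo + hi)
print(f"equilibrium cut-off a(c, T) ~ {a_star:.4f}, residual {lhs(a_star) - target:.2e}")
```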

Second, we show that there is no equilibrium in which one firm searches with $0$ as the cut-off. Suppose Firm $j$ searches with $0$ as the cut-off. The instantaneous gain from searching for Firm $i$ in a state $a_i > 0$ is
\begin{align*}
\lambda \int_{a_i}^1 \left[ Z(a \mid 0, T) - Z(a_i \mid 0, T) \right] dF(a) - c
&= \lambda \int_{a_i}^1 (1 - e^{-\lambda T}) \left[ F(a) - F(a_i) \right] dF(a) - c \\
&= \tfrac{1}{2}\lambda (1 - e^{-\lambda T}) \left[ 1 - F(a_i) \right]^2 - c, \tag{3.9}
\end{align*}
which is positive when $a_i = 0$ and negative when $a_i = 1$. By the intermediate value theorem, there must be a value $\hat a_i > 0$ such that (3.9) equals $0$ at $a_i = \hat a_i$. Hence, Firm $i$'s best response is to search with $\hat a_i$ as the cut-off. However, if Firm $i$ searches with $\hat a_i$ as the cut-off, it is not Firm $j$'s best response to search with $0$ as the cut-off, because
\begin{align*}
0 &= \lambda \int_{\hat a_j}^1 \left[ Z(a \mid 0, T) - Z(\hat a_j \mid 0, T) \right] dF(a) - c \\
&< \lambda \int_{\hat a_j}^1 \left[ Z(a \mid \hat a_j, T) - Z(\hat a_j \mid \hat a_j, T) \right] dF(a) - c \\
&< \lambda \int_0^1 \left[ Z(a \mid \hat a_j, T) - Z(0 \mid \hat a_j, T) \right] dF(a) - c,
\end{align*}
which means that Firm $j$ strictly prefers continuing searching when it is in a state slightly above $0$.

Last, we show that there is no equilibrium in which one firm does not search. Suppose Firm $j$ does not search. Firm $i$'s best response is to search with $0$ as the cut-off. However, Firm $j$ then strictly prefers searching when it is in state $0$, since the instantaneous gain from searching for the firm in state $0$ is again
\[
\lambda \int_0^1 Z(a \mid 0, T)\,dF(a) - c > 0.
\]

Proof of Lemma 3.2. The expected total cost of a firm that searches with a cut-off $a \ge 0$ is
\[
c \left[ \int_0^T \frac{\partial \left(1 - Z(a \mid a, t)\right)}{\partial t}\, t\,dt + T\, Z(a \mid a, T) \right]
= \left(1 - e^{-\lambda T[1 - F(a)]}\right) \frac{c}{\lambda \left[1 - F(a)\right]}, \tag{3.10}
\]
which strictly increases in $a$. In Regions 2 and 3, in equilibrium, the probability of winning for each firm is
\[
\frac{1}{2}\left[ 1 - Z^2(0 \mid a(c,T), T) \right] = \frac{1}{2}\left(1 - e^{-2\lambda T}\right), \tag{3.11}
\]
and thus the expected payoff to each firm is the difference between the expected probability of winning (3.11) and the expected search cost (3.10), setting $a$ to be $a(c,T)$:
\[
\frac{1}{2}\left(1 - e^{-2\lambda T}\right) - \left(1 - e^{-\lambda T[1 - F(a(c,T))]}\right) \frac{c}{\lambda \left[1 - F(a(c,T))\right]}. \tag{3.12}
\]
The limit of (3.12) as $T$ approaches infinity is $0$.

3.A.3 Proofs for the Head Start Case

First, we state two crucial lemmas for the whole section.

Lemma 3.8.

1. For $a_1^I > a(c,T)$, not to search is Firm 1's strictly dominant strategy.

2. For $a_1^I = a(c,T)$, not to search is Firm 1's weakly dominant strategy. If Firm 2 searches with cut-off $a_1^I$, Firm 1 is indifferent between not searching and searching with $a_1^I$ as the cut-off; otherwise, Firm 1 strictly prefers not to search.

Proof of Lemma 3.8. Suppose Firm 2 does not search; then Firm 1's best response is not to search. Suppose Firm 2 searches with cut-off $a_2 \ge a_1^I$. If Firm 1 searches with cut-off $a_1 \ge a_1^I$, then, by Lemma 3.6, the instantaneous gain from searching for Firm 1 in any state $a_1 \ge a_1^I \ge a(c,T)$ is
\[
\lambda \int_{a_1}^1 \left[ Z(a \mid a_2, T) - Z(a_1 \mid a_2, T) \right] dF(a) - c
\le \lambda \int_{a(c,T)}^1 \left[ Z(a \mid a(c,T), T) - Z(a(c,T) \mid a(c,T), T) \right] dF(a) - c, \tag{3.13}
\]
where equality holds if and only if $a_1 = a_2 = a(c,T)$. The right-hand side of inequality (3.13) is less than or equal to $0$ (it equals $0$ iff $c \le \frac{1}{2}\lambda\left[1 - e^{-\lambda T}\right]$). Hence, the desired results follow.

Lemma 3.9.

1. For $a_1^I > F^{-1}(1 - \frac{c}{\lambda})$, not to search is Firm 2's strictly dominant strategy.

2. For $a_1^I = F^{-1}(1 - \frac{c}{\lambda})$, not to search is Firm 2's weakly dominant strategy. If Firm 1 does not search, Firm 2 is indifferent between not searching and searching with any $\hat a_2 \in [a_2^I, a_1^I]$ as the cut-off. If Firm 1 searches, Firm 2 strictly prefers not to search.

Proof of Lemma 3.9. If Firm 1 does not search, the instantaneous gain from searching for Firm 2 in a state $a_2 \le a_1^I$ is
\[
\lim_{\delta \to 0} \frac{\lambda \delta e^{-\lambda \delta}\left[1 - F(a_1^I)\right] + o(\delta) - c\delta}{\delta}
= \lambda\left[1 - F(a_1^I)\right] - c
\begin{cases}
< 0 & \text{in Case [1]}, \\
= 0 & \text{in Case [2]}.
\end{cases}
\]
If Firm 1 searches, Firm 2's instantaneous gain is even lower. Hence, the desired results follow.

Proof of Theorem 3.2. [1], [2], and [3] directly follow from Lemmas 3.8 and 3.9. We only need to prove [4] and [5] in the following.

[4]. Following Lemma 3.8, if Firm 2 searches with cut-off $a_1^I$, Firm 1 has two best responses: not to search, and to search with cut-off $a_1^I$. If Firm 1 searches with cut-off $a_1^I$, the instantaneous gain from searching for Firm 2 is
\begin{align*}
\lambda \int_{a_1^I}^1 Z(a \mid a_1^I, T)\,dF(a) - c
&= \lambda \int_{a_1^I}^1 \left[ e^{-\lambda T[1 - F(a_1^I)]} + \left(1 - e^{-\lambda T[1 - F(a_1^I)]}\right) \frac{F(a) - F(a_1^I)}{1 - F(a_1^I)} \right] dF(a) - c \\
&= \tfrac{1}{2}\lambda \left(1 + e^{-\lambda T[1 - F(a_1^I)]}\right) \left[1 - F(a_1^I)\right] - c \\
&> \tfrac{1}{2}\lambda \left[1 - F(a_1^I)\right] - c \\
&> 0
\end{align*}
if it is in a state $a_2 < a_1^I = a(c,T)$; it is
\[
\lambda \int_{a_2}^1 \left[ Z(a \mid a_1^I, T) - Z(a_2 \mid a_1^I, T) \right] dF(a) - c < 0
\]
if it is in a state $a_2 > a_1^I = a(c,T)$. Hence, the two prescribed strategy profiles are equilibria.

[5]. First, there is no equilibrium in which either firm does not search. If Firm 2 does not search, Firm 1's best response is not to search. However, if Firm 1 does not search, Firm 2's best response is to search with cut-off $a_1^I$ rather than not to search. If Firm 2 searches with cut-off $a_1^I$, then not to search is not Firm 1's best response, because the instantaneous gain from searching for Firm 1 in state $a_1^I$ is
\[
\lambda \int_{a_1^I}^1 \left[ Z(a \mid a_1^I, T) - Z(a_1^I \mid a_1^I, T) \right] dF(a) - c
= \tfrac{1}{2}\lambda \left(1 - e^{-\lambda T[1 - F(a_1^I)]}\right) \left[1 - F(a_1^I)\right] - c > 0,
\]
where the inequality holds because $a_1^I < a(c,T)$.

Next, we argue that there is no equilibrium in which either firm searches with cut-off $a_1^I$. Suppose Firm $i$ searches with cut-off $a_1^I$. Firm $j$'s best response is to search with a cut-off $\hat a_j \in [a_1^I, a(c,T))$. This is because the instantaneous gain from searching for Firm $j$ in a state $a' \ge a_1^I$ is
\[
\lambda \int_{a'}^1 \left[ Z(a \mid a_1^I, T) - Z(a' \mid a_1^I, T) \right] dF(a) - c. \tag{3.14}
\]
(3.14) is larger than $0$ when $a' = a_1^I$. It is less than $0$ if $a' = a(c,T)$, because by Lemma 3.6 we have
\[
\lambda \int_{a(c,T)}^1 \left[ Z(a \mid a_1^I, T) - Z(a(c,T) \mid a_1^I, T) \right] dF(a) - c
< \lambda \int_{a(c,T)}^1 \left[ Z(a \mid a(c,T), T) - Z(a(c,T) \mid a(c,T), T) \right] dF(a) - c = 0.
\]
Then, the intermediate value theorem and the strict monotonicity yield the unique cut-off value $\hat a_j \in (a_1^I, a(c,T))$.

However, if Firm $j$ searches with cut-off $\hat a_j \in [a_1^I, a(c,T))$, Firm $i$'s best response is to search with a cut-off value $\hat a_i \in (\hat a_j, a(c,T))$ rather than $a_1^I$, because the instantaneous gain from searching for Firm $i$ in a state $\tilde a$ is
\[
\lambda \int_{\tilde a}^1 \left[ Z(a \mid \hat a_j, T) - Z(\tilde a \mid \hat a_j, T) \right] dF(a) - c
\begin{cases}
< 0 & \text{for } \tilde a = a(c,T), \\
> 0 & \text{for } \tilde a = \hat a_j,
\end{cases}
\]
and it is monotone in $\tilde a$. This results in a contradiction. Hence, there is no equilibrium in which either firm searches with $a_1^I$ as the cut-off.

Last, we only need to consider the case in which each firm searches with a cut-off higher than $a_1^I$. Following the same argument as in the proof of Theorem 3.1, we have $(a(c,T), a(c,T))$ as the unique equilibrium.

Proof of Proposition 3.2. We apply Theorem 3.2 for the analysis. We only need to show the case $a_1^I \in (a(c,T), F^{-1}(1 - \frac{c}{\lambda}))$. In this case, Firm 1 does not search and Firm 2 searches with $a_1^I$ as the cut-off. Now, take the limit $a_1^I \to a(c,T)$ from the right. In the limit, where Firm 2 searches with $a(c,T)$ as the cut-off, Firm 1 weakly prefers not to search. If $a_1^I = a(c,T)$, Firm 1 is in fact indifferent between searching and not. Hence, a head start in the limit makes Firm 1 weakly better off. Firm 1's payoff when it does not search, $e^{-\lambda T[1 - F(a_1^I)]}$, is the probability of Firm 2 ending up in a state below $a_1^I$ and is strictly increasing in $a_1^I$. Hence, a higher value of the head start makes Firm 1 even better off.

Proof of Proposition 3.3. [1]. For $T$ small, $a(c,T) = 0$. $DM(0, a_1^I) = 0$, and the partial derivative of $DM(T, a_1^I)$ with respect to $T$ when $T$ is small is
\[
\frac{\partial DM(T, a_1^I)}{\partial T}
= \lambda (1 - a_1^I) e^{-\lambda T (1 - a_1^I)} \left[ 1 - \frac{c}{\lambda (1 - a_1^I)} \right] - \lambda e^{-2\lambda T} + c e^{-\lambda T},
\]
which equals $-\lambda a_1^I < 0$ in the limit $T \to 0$.

[2]. Follows from Propositions 3.1 and 3.2.

Proof of Proposition 3.4. $DM(T, a_1^I)$ is strictly decreasing in $a_1^I$; it goes to the negative of (3.12), which is less than $0$, as $a_1^I$ goes to $F^{-1}(1 - \frac{c}{\lambda})$, and it goes to
\[
\left(1 - e^{-\lambda T[1 - F(a(c,T))]}\right) - \frac{1}{2}\left(1 - e^{-2\lambda T}\right) \tag{3.15}
\]
as $a_1^I$ goes to $a(c,T)$. Hence, if (3.15) is positive, Case 1 follows from the intermediate value theorem; Case 2 holds if (3.15) is negative.

3.A.4 Proofs for the Extended Models

Proof of Proposition 3.7. We argue that, to determine a subgame perfect equilibrium, we only need to consider two kinds of strategy profiles:

[a] Firm 1 retains its initial innovation and does not search, and Firm 2 searches with $a_1^I$ as the cut-off;

[b] Firm 1 discards its initial innovation and searches with $a_2^I$ as the cut-off, and Firm 2 retains its initial innovation.

First, suppose $c < \frac{1}{2}\lambda(1 + e^{-\lambda T})$. If Firm 1 retains the initial innovation, it will have no incentive to search, and Firm 2 is indifferent between discarding its initial innovation and not. In either case, Firm 2 searches with $a_1^I$ as the cut-off. Given that Firm 1 has discarded its initial innovation, Firm 2 has no incentive to discard its initial innovation, as shown in Proposition 3.2.

Second, suppose $c > \frac{1}{2}\lambda(1 + e^{-\lambda T})$. In the subgame in which both firms discard their initial innovations, there are two equilibria, in each of which one firm searches with $0$ as the cut-off and the other firm does not search. Hence, to determine a subgame perfect equilibrium, we have to consider another two strategy profiles, in addition to [a] and [b]:

[c] Firm 1 discards its initial innovation and searches with $0$ as the cut-off, and Firm 2 discards its initial innovation and does not search;

[d] Firm 1 discards its initial innovation and does not search, and Firm 2 discards its initial innovation and searches with $0$ as the cut-off.

However, we can easily rule out [c] and [d] as candidates for equilibria. In [c], Firm 2 obtains a payoff of $0$. It can deviate by retaining its initial innovation so as to obtain a positive payoff. Similarly, in [d], Firm 1 can deviate by retaining its initial innovation to obtain a positive payoff rather than $0$.

Last, it remains to compare Firm 1's payoffs in [a] and [b]. In [a], Firm 1's payoff is
\[
e^{-\lambda T[1 - F(a_1^I)]}. \tag{3.16}
\]
In [b], it is
\[
\left(1 - e^{-\lambda T[1 - F(a_2^I)]}\right)\left(1 - \frac{c}{\lambda\left[1 - F(a_2^I)\right]}\right). \tag{3.17}
\]
The difference between these two payoffs, (3.17) minus (3.16), is increasing in $T$; it equals $-1$ when $T = 0$ and goes to $1 - \frac{c}{\lambda[1 - F(a_2^I)]} > 0$ as $T$ approaches infinity. Hence, the desired result is implied by the intermediate value theorem.
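The threshold deadline at which Firm 1 switches from [a] to [b] can be located numerically from (3.16) and (3.17). The sketch below does so by bisection on the difference, using illustrative values (uniform $F$, arbitrary $\lambda$, $c$, $a_1^I$, $a_2^I$) that are assumptions for the example rather than values from the text.

```python
import math

# Illustrative values (assumptions): F uniform, head start a1 = 0.5, latecomer state a2 = 0.1.
lam, c, a1, a2 = 1.0, 0.2, 0.5, 0.1

retain = lambda T: math.exp(-lam * T * (1.0 - a1))                                            # (3.16)
discard = lambda T: (1.0 - math.exp(-lam * T * (1.0 - a2))) * (1.0 - c / (lam * (1.0 - a2)))  # (3.17)

# (3.17) - (3.16) runs from -1 at T = 0 toward 1 - c/(lam (1 - a2)) > 0, so a unique
# crossing T* exists; locate it by bisection on the sign of the difference.
lo, hi = 0.0, 100.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if discard(mid) - retain(mid) < 0.0 else (lo, mid)

T_star = 0.5 * (lo + hi)
print(f"Firm 1 prefers to discard its head start once T exceeds roughly {T_star:.3f}")
```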

Proof of Proposition 3.8. The backward induction is similar to the proof of Proposition 3.7, and thus is omitted.

Proof of Proposition 3.9. The equilibrium for the subgame starting from time $t_0$ follows from Theorem 3.2. Suppose that at time $t_0$, Firm $i$ is in a state $a_i'$, where $\max\{a_1', a_2'\} \ge a_1^I$. Assume $a_i' > a_j'$. If $a_i' > F^{-1}(1 - \frac{c}{\lambda})$, then Firm $i$ obtains a continuation payoff of $1$, and Firm $j$ obtains $0$. If $a_i' \in (a(c, T - t_0), F^{-1}(1 - \frac{c}{\lambda}))$, then Firm $i$ obtains a continuation payoff of $e^{-\lambda (T - t_0)[1 - F(a_i')]}$, and Firm $j$ obtains $\left(1 - e^{-\lambda (T - t_0)[1 - F(a_i')]}\right)\left(1 - \frac{c}{\lambda[1 - F(a_i')]}\right)$.

To prove this result, we first show that not to search before $t_0$ is Firm 2's best response regardless of Firm 1's action before time $t_0$. It is equivalent to showing that not to search before