Center for Mathematical Economics Working Papers

## 569

December 2016

### Communication Games with Optional Verification

### Simon Schopohl

Center for Mathematical Economics (IMW), Bielefeld University

Universitätsstraße 25

**Communication Games with Optional Verification**^{∗}

### Simon Schopohl^{†}

EDEEM, Université Paris 1, Universität Bielefeld and Université catholique de Louvain

### December 2016

**Abstract**

We consider a Sender-Receiver game in which the Sender can choose between sending a cheap-talk message, which is costless but also not verified, and a costly verified message.

While the Sender knows the true state of the world, the Receiver does not have this information, but has to choose an action depending on the message he receives. The action then yields some utility for Sender and Receiver. We make only a few assumptions about the utility functions of both players, so situations may arise where the Sender's preferences are such that she sends a message trying to convince the Receiver of a state of the world that is not the true one. In a finite setting we state conditions for full revelation, i.e. for the Receiver always learning the truth. Furthermore we describe the players' behavior if only partial revelation is possible. For a continuous setting we show that additional conditions have to hold and that these do not hold for "smooth" preferences and utility, e.g. in the classic example of quadratic loss utilities.

Keywords: cheap-talk, communication, costly disclosure, full revelation, increasing differences, Sender-Receiver game, verifiable information.

JEL Classification: C72, D82.

∗I would like to thank Agnieszka Rusinowska and Christoph Kuzmics for their ideas, advice and support. I also thank seminar participants at Bielefeld University and Université Paris 1, and especially Philippe Bich and Frederic Koessler, for helpful advice and comments. I am thankful for all the useful comments and suggestions at the 10^{th} Doctorissimes, the EDEEM Summer Meeting 2016, SING12 and the Game Theory World Congress 2016.

The author acknowledges financial support by the European Commission in the framework of the European Doctorate in Economics - Erasmus Mundus (EDEEM) and additional support for the mobility by the Deutsch-Französische Hochschule (DFH).

†mail: SimonSchopohl@gmail.com

**1 Introduction**

In the early 1980s the first papers on communication games were published, and nearly from the beginning there have been two strands dealing with different types of communication. Crawford and Sobel (1982) introduced the cheap-talk models, in which the content of the message can be whatever the Sender wants it to be. The Sender does not have to tell the truth, so the Receiver either believes the message or not. In this model the different types of the Sender (corresponding to our different states of the world) may either send the same messages, or different types may send different messages. While in the first case the Receiver cannot trust the messages and there is a so-called babbling equilibrium, in the second case the Receiver can extract information from the message.

While in this setting the Sender has no possibility to verify that she tells the truth, she can only tell the truth in the models of Grossman (1981) and Milgrom (1981). In these models of verifiable messages, each message consists of a set of states, which has to include the true state. In equilibrium all different types of the Sender are separated, driven by the unraveling argument. In our model we give the Sender the choice between sending a cheap-talk message or a verified message. The only restriction we impose is to allow only the entire truth as the verified message, i.e. the Sender cannot choose a set of states, but can either tell the complete truth or send a cheap-talk message.

In this paper we do not model communication in a novel way, but combine the most basic settings of cheap-talk and verifiable messages. There is one player who has detailed information about the state of the world. She is called the Sender, as she sends a message about the state to the second player, the Receiver. In our model the Sender can choose between cheap-talk messages corresponding to the different states of the world and a costly verified message. The Receiver reads this message and chooses an action, which yields some payoff for both players.

While the verified message reveals the true state of the world, the cheap-talk messages do not.

The content of a cheap-talk message can be the truth, can contain the truth, but can also be a lie. The Receiver's behavior after receiving a cheap-talk message may vary depending on the preferences of both players. In a situation where there is no benefit for the Sender from lying, the Receiver can trust the cheap-talk messages, while he should not do so if the Sender's preferences differ too much from his own.

There are many economic and non-economic examples of Sender-Receiver games: the most classic is the idea of an employer (the Receiver) and an agent (the Sender), going back to Spence (1973). The Sender has different interests than the Receiver, but still wants to be employed or get a contract. Her effort cannot be observed, but she reports it to the Receiver.

Our model adds the possibility of costly reporting a certified effort, e.g. giving a proof of skill enhancement or other training. The Receiver has to choose an action according to the message he gets. While the verifiable message gives him a guarantee, receiving a cheap-talk message also reveals further information, as the Sender has chosen not to invest in the verifiable message.

Another example is the Lemon Market by Akerlof (1970). There is a seller (the Sender), who knows the quality of the good she is selling, and a buyer (the Receiver), who can decide to buy the product or not. The cheap-talk message corresponds to the idea that the Sender simply states the quality of the good. We often see these messages in internet auctions or in online car advertisements. On the other hand the Sender could also invest some money and certify the quality of her product. In the real world there are several ways to do so, for example via independent consultants and experts. Of course the Sender will never verify if the quality is low, but for high quality it is reasonable to assume that the Sender is willing to invest some money into the verification to get her good sold.

**Related Literature**

Since the introduction of cheap-talk models, extensions in many different directions have been made. Farrell and Gibbons (1989) introduce an additional Receiver and observe how the existence of a second Receiver changes the report of the Sender. McGee and Yang (2013) and Ambrus and Lu (2014) take a similar step with multiple Senders. While McGee and Yang (2013) focus on two Senders with complementary information, Ambrus and Lu (2014) model several Senders who observe a noisy state. Noise is also added to the signaling game by Haan, Offerman, and Sloof (2011). The authors derive equilibria depending on the level of noise and confirm their results by an experiment. A different extension of cheap-talk is made by Blume and Arnold (2004). They model learning in cheap-talk games and derive a learning rule depending on common interest. The idea of Baliga and Morris (2002) is closer to our paper. The authors give conditions for full communication, but also state conditions under which cheap-talk does not change the equilibrium.

An overview of cheap-talk models and models with verifiable information can be found in Sobel (2009). The author describes several models and gives some economic examples. Most of these examples can be extended to fit our setting by adding a reasonable verifiable message.

At the same time other models focus more on disclosure of information and costly communication. Hagenbach, Koessler, and Perez-Richet (2014) analyse a game with a set of players, where each player can tell the truth about his type or can masquerade as some other type. As in the literature on verifiable messages, the player who deviates from telling the truth is punished by the other players. If a player masquerades, the other players assume a worst-case type and punish him by choosing the action this type of player dislikes. The authors state conditions for full revelation depending on the possible masquerades of each player.

Bull and Watson (2007) and Mookherjee and Tsumagari (2014) deal with communication and mechanism design. While Bull and Watson (2007) focus on costless disclosure, Mookherjee and Tsumagari (2014) add communication costs to prevent full revelation of information.

Communication costs are also introduced by Hedlund (2015). The author derives two types of equilibria: for high costs there exists a pooling equilibrium, while for not too high costs a separating equilibrium exists.

Verrecchia (2001) provides an overview of different models of disclosure, which is extended by Dye (2001).

It remains to mention that there are several works in which the authors have created their own way of modeling messages, most often in between the models of Crawford and Sobel (1982) and of Grossman (1981) and Milgrom (1981). Kartik (2009) introduces a model in which the Sender sends a message about her type, but has an incentive to make the Receiver believe that her type is higher than it actually is. If the Sender lies in the message, she has to pay some costs for lying, which depend on the distance between the true type and the type stated in the message.

Dewatripont and Tirole (2005) analyse communication in which the Sender has to invest effort to make a message understandable, while the Receiver's effort goes into understanding the message. They motivate this model by the idea that very confusingly written messages, as well as reading messages without paying much attention, lead to misunderstandings. The probability of understanding the message is influenced by the effort of both players.

Austen-Smith and Banks (2000) introduce the possibility for the Sender to send a costly message with the same content as a costless message. This way of burning money gives the Sender an additional possibility of signaling. The authors show that conditions exist under which both message types are used. Verrecchia (1983) deals with the idea that the disclosure of information works as a signal in itself.

Most closely related to this paper is the work by Eső and Galambos (2013). They start with a similar idea, but make some different assumptions in their model. While they focus on optimal actions for Sender and Receiver that are closely related to the paper of Crawford and Sobel (1982), we start with fewer assumptions and allow for a wider class of utility functions.

Under their assumptions, Eső and Galambos (2013) state conditions under which the equilibria split into different intervals depending on the Sender's type.

The paper is structured as follows. We start by defining a discrete model in Section 2, focusing on a finite set of states and actions, and state different conditions for full revelation.

Full revelation where the Sender sends only cheap-talk messages is possible only when the preferences of both players are similar. Even if the preferences differ, there can still be full revelation: the Receiver threatens the Sender by answering cheap-talk with an action the Sender dislikes. This gives the Sender an incentive to pay for the costly verifiable message. As some actions are incredible threats, we define a set of possible threat points. We provide conditions under which the Receiver can use an action from this set to enforce full revelation. Furthermore we look at the cases of partial revelation and state all the different maximization problems. We get conditions similar to those for the fully revealing equilibria, but for partial revelation these conditions only have to hold for some states. While solving these by hand may be complex, we give some detailed ideas for algorithms. We simplify all these conditions for utility functions that have increasing or decreasing differences.

In Section 3 we modify our setting to a continuous model. We provide necessary conditions for the existence of fully revealing equilibria and also give some examples. We illustrate and prove that in a fully revealing equilibrium the Sender will never use both types of messages if the utility functions of both players are continuous. A very detailed analysis is done for the quadratic loss function. Since full revelation is impossible in the continuous state space, we give conditions for full revelation in a discretized version. Extension possibilities are mentioned several times within the paper, but discussed in detail in Section 4. Section 5 concludes. All proofs are relegated to the Appendix.

**2 Discrete Model**

We start with a model with only a finite set of states of the world and a finite set of actions the Receiver can choose from. Let Ω = {ω_1, . . . , ω_L} denote the set of the L different states of the world, where each state ω_i has probability P[ω_i].

The timing is as follows: the Sender learns the true state of the world and then sends a message to the Receiver. We assume that the set of possible cheap-talk messages M corresponds to the states of the world Ω and that the verifiable message v is unique in each state of the world.

The Sender chooses a message from M ∪ {v}, so she either sends a cheap-talk message or the verifiable message v. There is no possibility for partial disclosure. While sending any cheap-talk message is free, the Sender has to pay costs c > 0 if she sends the verifiable message. An economic explanation for these costs can be either a payment for a certificate or an investment of effort. For simplicity we assume that the costs are state independent; state dependent costs are discussed later as an extension.

The Receiver reads the message and chooses an action from A = {a_1, . . . , a_N}. By ∆(A) we denote the set of mixed strategies. Both players have preferences over the actions, resulting in different von Neumann-Morgenstern utility functions for the two players, depending on the action and state of the world.

For the Sender it is given by ũ_S : A × (M ∪ {v}) × Ω → R with ũ_S(a, m, ω) = u_S(a, ω) ∀m ∈ M, ũ_S(a, v, ω) = u_S(a, ω) − c and u_S : A × Ω → R. So we can split the Sender's utility function into two parts: first a utility function depending on action and state of the world; additionally we have to subtract the costs for the message, if there are any.

For the Receiver the utility function does not depend on the type of the message, but only on the action and state: u_R : A × Ω → R. The utility functions show that there is neither a punishment for lying nor a direct reward for telling the truth. We assume that these utility functions are common knowledge. Let a^{∗}_{R}(ω_i) denote the action the Receiver prefers in state ω_i. We will make some more assumptions about this function later. We denote the Sender's behavior by the function f : Ω → M ∪ {v}. This function f maps each state of the world to the message she sends. We assume that the Sender does not mix different messages.

The Receiver chooses his action depending on the message he received: g : M ∪ {v} → ∆(A).

In equilibrium we have to define the behavior of the Sender for every state, so f(ω), and the Receiver's action after each message, i.e. g(m) ∀m ∈ M and g(v).
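To fix notation, the primitives of the model can be written down directly in code. The sketch below is only an illustration: the two states, two actions, uniform prior, common-interest utilities and all names are hypothetical choices, not part of the paper's model.

```python
# A minimal sketch of the model's primitives, with hypothetical numbers
# (two states, two actions) chosen purely for illustration.
states = ["w1", "w2"]                      # Omega
actions = ["a1", "a2"]                     # A
prob = {"w1": 0.5, "w2": 0.5}              # P[omega_i]
c = 0.5                                    # cost of the verifiable message

# u_S(a, omega) and u_R(a, omega), keyed by (action, state)
u_S = {("a1", "w1"): 1.0, ("a2", "w1"): 0.0,
       ("a1", "w2"): 0.0, ("a2", "w2"): 1.0}
u_R = dict(u_S)                            # common interest in this toy case

def best_action(state):
    """a*_R(omega): the Receiver's preferred action when the state is known."""
    return max(actions, key=lambda a: u_R[(a, state)])

# The Sender's message utility: u~_S(a, m, omega) equals u_S(a, omega) for
# any cheap-talk message and u_S(a, omega) - c for the verifiable message v.
def sender_payoff(action, message, state):
    cost = c if message == "v" else 0.0
    return u_S[(action, state)] - cost
```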

The equilibrium concept we use is the Perfect Bayesian Equilibrium.

Definition 1. A Perfect Bayesian Equilibrium in a dynamic game of incomplete information is a strategy profile (f^{∗}, g^{∗}) and a belief system µ^{∗} for the Receiver such that

• the strategy profile (f^{∗}, g^{∗}) is sequentially rational;

• the belief system µ^{∗} is consistent whenever possible, given (f^{∗}, g^{∗}).

In other words, each equilibrium consists of optimal strategies for Sender and Receiver, which are sequentially rational. Furthermore the Receiver has a belief system over the true state of the world depending on the message he receives. This belief system is updated by Bayes' rule whenever possible. For Perfect Bayesian Equilibria the actions off the equilibrium path have to be best actions for the Receiver under at least one belief system. We can neglect this if we limit our attention to actions that are undominated for the Receiver.

We are especially interested in equilibria with full revelation:

Definition 2. A Perfect Bayesian Equilibrium is fully revealing if the Receiver knows the true state of the world after reading the Sender's message.

If there is full revelation, the Sender either sends different cheap-talk messages in each state, or only verifiable messages, or different cheap-talk messages in some states and verifiable messages in the other states.

For the entire paper we make two assumptions:

Assumption 1. Let us assume that for each action a_j ∈ A there exists at least one belief system µ such that a_j is the Receiver's best strategy under the belief system µ.

By ∆̂(A) ⊆ ∆(A) we denote the set of mixed strategies that satisfy this assumption, i.e.

∀â ∈ ∆̂(A) : ∃µ : â ∈ arg max_{a} Σ_{ω∈Ω} µ(ω) · u_R(a, ω)

Assumption 1 requires that each action is optimal for the Receiver under at least one belief system, which means that there are no dominated actions. Our results depend on the idea that the Receiver uses an action as a threat and thereby forces the Sender to send verified messages.

The threat is only credible if this action is an element of ∆̂(A).

We can think about different equilibrium refinements as introduced in several papers. The most common ones are the Divinity Criterion by Banks and Sobel (1987) and the Intuitive Criterion by Cho and Kreps (1987). Using any of them adds more conditions for the threat points, so the set ∆̂(A) gets smaller and the Receiver has fewer possibilities to make a threat.

Assumption 2. Let us assume that in every state of the world the Receiver has strict preferences.

Under Assumption 2, a^{∗}_{R}(ω_i) is a single action, which will help for the upcoming results. This avoids the situation that the Receiver is indifferent between two actions.

**2.1 Full revelation**

In this first part we focus our attention on fully revealing equilibria. We will state conditions for full revelation where the Sender uses only cheap-talk messages, conditions where she uses only verified messages and conditions where she uses both types of messages, depending on the state. Even if the conditions for one of these fully revealing equilibria are satisfied, there might be other equilibria at the same time. By examples we show that the existence of each of these different types of full revelation is independent of the others.

Assumption 3. Let us assume that for all states ω_i ≠ ω_j the Receiver prefers different actions, i.e. a^{∗}_{R}(ω_i) ≠ a^{∗}_{R}(ω_j).

In other words, the function a^{∗}_{R} : Ω → A has to be injective.

This assumption ensures that there can be a fully revealing equilibrium even if the Sender uses cheap-talk messages in several states. For the case that the Sender uses only cheap-talk messages and we still have full revelation, the Sender must not have any incentive to deviate to another cheap-talk message. It is not necessary that the preferences of Sender and Receiver coincide in all states. What is crucial is that the action the Receiver chooses when he knows the true state, a^{∗}_{R}(ω_i), generates a higher utility for the Sender than the Receiver's most preferred action in any other state, a^{∗}_{R}(ω_j). There is also the possibility that there exists an action the Sender prefers, but which is never chosen by the Receiver as long as he knows the true state of the world.

Theorem 1 (Full Revelation just by Cheap-Talk Messages).

Let Assumption 3 hold. There is a fully revealing equilibrium with only costless messages sent if and only if:

∀ω_i ∈ Ω : u_S(a^{∗}_{R}(ω_i), ω_i) > u_S(a^{∗}_{R}(ω_j), ω_i) ∀ω_j ≠ ω_i (1)
Remark. If Assumption 3 does not hold, i.e. if there exist two states ω_i, ω_j such that a^{∗}_{R}(ω_i) = a^{∗}_{R}(ω_j), there is no fully revealing equilibrium. Still the Receiver can get the highest possible utility in every state while the Sender sends only cheap-talk messages.
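For a given finite game, Assumption 3 and condition (1) are mechanical to verify. The following sketch is one possible implementation; the function name and the dictionary representation of utilities, keyed by (action, state) pairs, are our own choices, not the paper's.

```python
def full_revelation_by_cheap_talk(states, actions, u_S, u_R):
    """Theorem 1: True iff a*_R is injective (Assumption 3) and in every
    state the Sender strictly prefers the Receiver's best reply to the
    truth over his best reply to any other state's message."""
    def best(w):
        return max(actions, key=lambda a: u_R[(a, w)])
    if len({best(w) for w in states}) < len(states):
        return False                       # Assumption 3 violated
    return all(u_S[(best(wi), wi)] > u_S[(best(wj), wi)]
               for wi in states for wj in states if wj != wi)
```

With perfectly aligned preferences the check succeeds; if the Sender prefers the same action in every state, it fails, as Theorem 1 predicts.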

Theorem 2 (Full Revelation just by Verifiable Messages).

There is a fully revealing equilibrium with only verifiable messages sent if and only if:

∃â ∈ ∆̂(A) : 1) ∀ω_i : â ≠ a^{∗}_{R}(ω_i)

2) ∀ω_i : u_S(a^{∗}_{R}(ω_i), ω_i) − c > u_S(â, ω_i)

The idea behind Theorem 2 is that the Receiver replies to cheap-talk messages with an action â the Sender really dislikes. With this threat point â the Receiver forces the Sender to use the verified message. The same idea can be found in several existing papers dealing with verifiable messages, e.g. in Hagenbach, Koessler, and Perez-Richet (2014), which we already mentioned in the introduction.
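The existence of a threat point can also be checked algorithmically. The sketch below restricts attention to pure-action threats, which form only a subset of the mixed strategies ∆̂(A) allowed by Theorem 2, and takes Assumption 1 as given for the candidates; all names and the data layout are hypothetical.

```python
def find_pure_threat(states, actions, u_S, u_R, c):
    """Theorem 2, restricted to pure threat points: return an action a_hat
    that differs from every a*_R(omega_i) and makes paying c for the
    verifiable message strictly better in every state, or None."""
    best = {w: max(actions, key=lambda a: u_R[(a, w)]) for w in states}
    for a_hat in actions:
        if all(a_hat != best[w] for w in states) and \
           all(u_S[(best[w], w)] - c > u_S[(a_hat, w)] for w in states):
            return a_hat
    return None
```

In a toy game with a third action that both players dislike, that action works as a threat for small c, while for large c no pure threat exists.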

We can combine both theorems and get conditions for full revelation, where the Sender uses both types of messages.

Theorem 3 (Full Revelation by Cheap-Talk and Verifiable Messages).

Let Assumption 3 hold. There is a fully revealing equilibrium with both message types used if and only if there exists Ω̂ ⊊ Ω with Ω̂ ≠ ∅ such that

∃â ∈ ∆̂(A) : 1) ∀ω_i ∉ Ω̂ : u_S(a^{∗}_{R}(ω_i), ω_i) − c > u_S(a^{∗}_{R}(ω_j), ω_i) ∀ω_j ∈ Ω̂

2) ∀ω_i ∈ Ω̂ : u_S(a^{∗}_{R}(ω_i), ω_i) > u_S(a^{∗}_{R}(ω_j), ω_i) ∀ω_j ∈ Ω̂, ω_j ≠ ω_i

Theorem 3 allows that the Receiver trusts the Sender in some states (Ω̂), but in the other states he enforces the use of verifiable messages as in Theorem 2. To have both message types used, Ω̂ has to be a strict subset of Ω and non-empty; otherwise just one message type is used. The two conditions in this theorem are quite similar to those of the previous theorems. Instead of a single threat point â, every a^{∗}_{R}(ω_j) with ω_j ∈ Ω̂ has to work as a threat. In addition the Sender must not have an incentive to deviate to another cheap-talk message if the true state is in Ω̂.

There might be several possibilities for the set of states Ω̂ where the Receiver trusts the cheap-talk messages. These possibilities do not have to be subsets of each other, but can also be disjoint. In case there are several such subsets, we can say that for smaller sets Condition 1) has to hold for more states, but Condition 2) for fewer states.
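For a small finite state space, the candidate sets Ω̂ of Theorem 3 can be enumerated directly. The sketch below checks Conditions 1) and 2) for every non-empty strict subset; as in the previous sketches, the names and the representation of utilities are hypothetical choices, and the credibility of the implied threats (Assumption 1) is not re-verified here.

```python
from itertools import combinations

def trusted_subsets(states, actions, u_S, u_R, c):
    """Enumerate candidate sets Omega_hat for Theorem 3: non-empty strict
    subsets of the states in which cheap talk can be trusted while the
    remaining states are forced to verify."""
    best = {w: max(actions, key=lambda a: u_R[(a, w)]) for w in states}
    result = []
    for r in range(1, len(states)):
        for omega_hat in combinations(states, r):
            trusted = set(omega_hat)
            # 1) outside Omega_hat, verifying beats mimicking any trusted state
            cond1 = all(u_S[(best[wi], wi)] - c > u_S[(best[wj], wi)]
                        for wi in states if wi not in trusted
                        for wj in trusted)
            # 2) inside Omega_hat, no profitable deviation to another message
            cond2 = all(u_S[(best[wi], wi)] > u_S[(best[wj], wi)]
                        for wi in trusted for wj in trusted if wj != wi)
            if cond1 and cond2:
                result.append(trusted)
    return result
```

Run on data in the spirit of Example 1 below (the Sender always prefers a_1, small c), the only surviving candidate is the state whose best reply the Sender likes least.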

Remarks.

• For the result of Theorem 3 we need Assumption 3 only for the states in Ω̂. So even if there exist two states ω_i, ω_j ∈ Ω\Ω̂ such that a^{∗}_{R}(ω_i) = a^{∗}_{R}(ω_j), Theorem 3 still holds.

• If Assumption 3 does not hold and there exist two states ω_i, ω_j ∈ Ω̂ such that a^{∗}_{R}(ω_i) = a^{∗}_{R}(ω_j), Theorem 3 does not hold, but under the conditions of that theorem the Receiver still gets the highest possible utility in every state.

Theorems 1, 2 and 3 give conditions for different types of fully revealing equilibria. It can happen that there is no fully revealing equilibrium just by cheap-talk or just by verifiable messages, but there is one by a combination of both message types:

Example 1. Assume that there are two states (ω_1, ω_2) and two actions (a_1, a_2).

The Receiver prefers a_1 in ω_1 and a_2 in ω_2, while the Sender always prefers a_1. Obviously there is no fully revealing equilibrium just with cheap-talk, because the Sender always wants the action a_1 and so she would lie. Furthermore there is no equilibrium just with verifiable messages, because there is no threat available:

For the mixed strategy that plays a_1 with probability p and a_2 with probability (1−p), we use the notation p·a_1 + (1−p)·a_2. Denote â = p·a_1 + (1−p)·a_2. For p = 0, the Sender will not use the verifiable message in ω_2, because she gets the same action by sending cheap-talk, but verifiable messages are costly. Also for p > 0 the Sender will not use the verifiable message in ω_2, because she prefers a_1 over a_2 and so she also prefers â over a_2.

Still, full revelation is possible if c is low enough. Let us assume that the costs c are small, i.e. c < u_S(a_1, ω_1) − u_S(a_2, ω_1). If the Receiver answers every cheap-talk message with a_2, the Sender will use the verifiable message in ω_1, yielding action a_1. The utility the Sender gets is u_S(a_1, ω_1) − c > u_S(a_2, ω_1), while her utility would be u_S(a_2, ω_1) if she sent the cheap-talk message. In the second state ω_2, the Sender will use the cheap-talk message: both message types result in action a_2, so the Sender prefers the costless one.

Even though we stated conditions for full revelation, it might happen that there is no revelation at all. The easiest example requires just two states and two actions:

Example 2. Assume that the Receiver prefers a_1 in ω_1 and a_2 in ω_2 and the Sender's preferences are switched, i.e. she prefers a_2 in ω_1 and a_1 in ω_2. Clearly there is no full revelation just by cheap-talk, because the Sender will always lie. Furthermore there can be no revelation just by verifiable messages. Assume that the threat point is â = p·a_1 + (1−p)·a_2, with the notation used as in the previous example.

For p = 0, the Sender will not use the verifiable message in ω_1, because she prefers a_2 over a_1. The same argument holds even for p > 0: using the cheap-talk message, resulting in â, gives the Sender at least a little chance of a_2. Therefore u_S(â, ω_1) > u_S(a_1, ω_1), and this implies u_S(â, ω_1) > u_S(a_1, ω_1) − c.

So the only remaining possibility is a fully revealing equilibrium with both message types used.

Repeating the same steps for Theorem 3 proves that there is no full revelation. So in this example, where the preferences of Sender and Receiver differ a lot, the Receiver has no possibility to enforce full revelation.

**2.1.1 Increasing and Decreasing Differences**

The previous results have to be checked for every state, which might not be easy to do. If the utility function of the Sender satisfies either the increasing or the decreasing differences property, we can state weaker conditions. The idea is that we only need to check the previous conditions for one state and then easily get full revelation for all states, if some additional properties are satisfied.

Definition 3. We say u_S(a, ω) has increasing (decreasing) differences in (a, ω) if

∀a′ ≥ a, ∀ω′ ≥ ω : u_S(a′, ω′) − u_S(a, ω′) ≥ (≤) u_S(a′, ω) − u_S(a, ω).
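On finite ordered grids, Definition 3 can be tested by brute force over all pairs. The helper below is a hypothetical sketch (utilities passed as a function of (a, ω); actions and states given in increasing order):

```python
def check_differences(actions, states, u):
    """Classify u(a, w) on ordered grids: 'increasing', 'decreasing',
    'both' (the inequality holds with equality everywhere) or 'neither'."""
    inc = dec = True
    for i, a in enumerate(actions):
        for a2 in actions[i + 1:]:
            for j, w in enumerate(states):
                for w2 in states[j + 1:]:
                    d = (u(a2, w2) - u(a, w2)) - (u(a2, w) - u(a, w))
                    if d < 0:
                        inc = False
                    if d > 0:
                        dec = False
    if inc and dec:
        return "both"
    return "increasing" if inc else "decreasing" if dec else "neither"
```

For instance, u(a, ω) = a·ω has increasing differences, its negative has decreasing differences, and any additively separable utility has both.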

Proposition 1 (Full revelation under increasing differences).

Let Ω = {ω_1, . . . , ω_L} and sort A such that A = {a^{∗}_{R}(ω_1), . . . , a^{∗}_{R}(ω_L)}. We can ignore all actions which are never the best reply for the Receiver in any state.

There is a fully revealing equilibrium with â = a^{∗}_{R}(ω_j) if:

1) u_S has increasing differences

2.1) u_S(a^{∗}_{R}(ω_{j+1}), ω_{j+1}) − c > u_S(a^{∗}_{R}(ω_j), ω_{j+1})
2.2) u_S(a^{∗}_{R}(ω_{j−1}), ω_{j−1}) − c > u_S(a^{∗}_{R}(ω_j), ω_{j−1})
3.1) ∀ω_i > ω_j : u_S(a^{∗}_{R}(ω_i), ω_i) ≥ u_S(a^{∗}_{R}(ω_{i−1}), ω_i)
3.2) ∀ω_i < ω_j : u_S(a^{∗}_{R}(ω_i), ω_i) ≥ u_S(a^{∗}_{R}(ω_{i+1}), ω_i)

The fully revealing equilibrium is such that the Sender sends cheap-talk in ω_j and verifiable messages in all other states.

Corollary 1.

We can replace Condition 3.1) by

3.1′) ∀ω_i > ω_j : u_S(a^{∗}_{R}(ω_i), ω_i) ≥ u_S(a^{∗}_{R}(ω_{j+1}), ω_i)

and Condition 3.2) by

3.2′) ∀ω_i < ω_j : u_S(a^{∗}_{R}(ω_i), ω_i) ≥ u_S(a^{∗}_{R}(ω_{j−1}), ω_i)

These properties can be interpreted easily if we look at the following corollary.

Corollary 2.

Let Ω = {ω_1, . . . , ω_L} and sort A such that A = {a^{∗}_{R}(ω_1), . . . , a^{∗}_{R}(ω_L)}. We can ignore all actions which are never the best reply for the Receiver in any state.

There is a fully revealing equilibrium with â = a^{∗}_{R}(ω_1) if:

1) u_S has increasing differences

2) u_S(a^{∗}_{R}(ω_2), ω_2) − c > u_S(a^{∗}_{R}(ω_1), ω_2)

3) ∀ω_i ∈ {ω_2, . . . , ω_L} : u_S(a^{∗}_{R}(ω_i), ω_i) ≥ u_S(a^{∗}_{R}(ω_{i−1}), ω_i)

The fully revealing equilibrium is such that the Sender sends cheap-talk in ω_1 and verifiable messages in all other states.

The threat point here is a^{∗}_{R}(ω_1). Condition 2) ensures that the Sender prefers the verifiable message in the next state, ω_2. Increasing differences mean that the gains from a higher action increase as the state gets higher. Combined with Condition 3), we get that the Sender also prefers the verifiable message in all states higher than ω_2. We can get a similar result with a^{∗}_{R}(ω_L), where we have to replace ω_2 in Condition 2) by ω_{L−1} and adjust Condition 3) as well.

An application can be found in Section 3.1.1.
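Conditions 2) and 3) of Corollary 2 reduce to a handful of inequalities that are easy to check numerically. The sketch below assumes states and actions are already given in increasing order and that increasing differences have been verified separately; the function name, data layout and the quadratic-loss-style example utilities in the test are all hypothetical.

```python
def corollary2_conditions(states, actions, u_S, u_R, c):
    """Check Conditions 2) and 3) of Corollary 2 for the threat point
    a*_R(omega_1); increasing differences are presupposed, not tested."""
    best = [max(actions, key=lambda a: u_R[(a, w)]) for w in states]
    # 2) in omega_2 the Sender prefers verifying over the threat reply
    cond2 = u_S[(best[1], states[1])] - c > u_S[(best[0], states[1])]
    # 3) truth-telling beats the next-lower state's best reply everywhere
    cond3 = all(u_S[(best[i], states[i])] >= u_S[(best[i - 1], states[i])]
                for i in range(1, len(states)))
    return cond2 and cond3
```

With a Receiver minimizing quadratic loss, u_R(a, ω) = −(a − ω)², and a Sender with u_S(a, ω) = a·ω (which has increasing differences), the conditions hold for small c and fail once c is too large.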

Similar changes for decreasing differences can be made easily:

Proposition 2 (Full revelation under decreasing differences).

Let Ω = {ω_1, . . . , ω_L} and sort A such that A = {a^{∗}_{R}(ω_1), . . . , a^{∗}_{R}(ω_L)}. We can ignore all actions which are never the best reply for the Receiver in any state.

There is a fully revealing equilibrium with â = a^{∗}_{R}(ω_j) if:

1) u_S has decreasing differences

2.1) u_S(a^{∗}_{R}(ω_L), ω_L) − c > u_S(a^{∗}_{R}(ω_j), ω_L)
2.2) u_S(a^{∗}_{R}(ω_1), ω_1) − c > u_S(a^{∗}_{R}(ω_j), ω_1)
3.1) ∀ω_i > ω_j : u_S(a^{∗}_{R}(ω_i), ω_i) ≥ u_S(a^{∗}_{R}(ω_{i+1}), ω_i)
3.2) ∀ω_i < ω_j : u_S(a^{∗}_{R}(ω_i), ω_i) ≥ u_S(a^{∗}_{R}(ω_{i−1}), ω_i)

The fully revealing equilibrium is such that the Sender sends cheap-talk in ω_j and verifiable messages in all other states.

Changing the conditions as in Corollary 1 and Corollary 2 is possible.

**2.2 Maximization without full revelation**

Whenever there is no full revelation, the Receiver cannot get the highest possible utility in all states. Depending on the preferences, utilities and costs, there might be partial revelation or no revelation at all. We start our analysis with an example with just three states and give conditions for partial revelation and no revelation. In a numerical example we will show that each of these possibilities can be the best strategy for the Receiver.

In the second part we generalize: in a setting with more than three states, partial revelation can be of one of three different types. Verifiable messages in some states reveal the true state. Unique cheap-talk messages can have the same effect, but cheap-talk messages can also partially reveal information to the Receiver by splitting the state space into disjoint subsets.

In that case the Receiver might just know whether he is in the first or second state, or in the third or fourth state.
We give conditions for all the different possibilities of partial revelation and also for com- binations of those. Furthermore we again use utility functions with increasing or decreasing differences to simplify these conditions.

**2.2.1 Three state examples**

Assume |Ω| = 3 and assume that the Receiver prefers different actions in different states.

In general the Receiver can maximize his utility by three different possibilities:

1. Use the same action in every state.

2. Reply with one action to one cheap-talk message and with another to the remaining messages.

3. Use the same action as a reply to any cheap-talk message, forcing the Sender to send the verifiable message in exactly one state, i.e. revelation of one state.

**First possibility**

max_{â} Σ^{3}_{i=1} P[ω_i] u_R(â, ω_i)

s.t. ∀ω_i : u_S(â, ω_i) > u_S(a^{∗}_{R}(ω_i), ω_i) − c
**Second possibility (Revelation in ω_1 (wlog) by cheap-talk)**

max_{â} P[ω_1] u_R(a^{∗}_{R}(ω_1), ω_1) + Σ^{3}_{i=2} P[ω_i] u_R(â, ω_i)

s.t. u_S(a^{∗}_{R}(ω_1), ω_1) > u_S(â, ω_1)

u_S(â, ω_i) > u_S(a^{∗}_{R}(ω_1), ω_i) ∀ω_i, i ∈ {2, 3}

**Third possibility (Revelation in ω_1 (wlog) by a verifiable message)**

max_{â} P[ω_1] u_R(a^{∗}_{R}(ω_1), ω_1) + Σ^{3}_{i=2} P[ω_i] u_R(â, ω_i)

s.t. u_S(a^{∗}_{R}(ω_1), ω_1) − c > u_S(â, ω_1)

u_S(â, ω_i) > u_S(a^{∗}_{R}(ω_i), ω_i) − c ∀ω_i, i ∈ {2, 3}

These are the different maximization problems the Receiver has to solve to find the best strategy.

With different examples we will show that any of these strategies can be the best choice. For that we have to keep in mind that the Receiver cannot commit to any strategy, but plays his best reply given his beliefs. In particular, in the case where he knows the true state, the Receiver will always play the action that yields the highest utility for him.
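The maximization problems above are linear in the mixture weights of â, so for small examples a coarse grid search over the simplex is enough to locate the best reply. The sketch below handles only the first possibility (one pooled reply â to all cheap-talk messages); the function name, the grid resolution and the data layout are our own choices, not part of the paper.

```python
from itertools import product

def best_pooling_reply(states, actions, P, u_S, u_R, c, steps=20):
    """First possibility: search a simplex grid for the mixed reply a_hat
    to all cheap-talk messages that maximizes the Receiver's expected
    utility subject to no Sender type preferring to verify.
    Returns (expected Receiver utility, mixture) or (None, None)."""
    best = {w: max(actions, key=lambda a: u_R[(a, w)]) for w in states}
    top_val, top_mix = None, None
    for ks in product(range(steps + 1), repeat=len(actions) - 1):
        if sum(ks) > steps:
            continue
        p = [k / steps for k in ks] + [1 - sum(ks) / steps]
        uS_hat = {w: sum(pi * u_S[(a, w)] for pi, a in zip(p, actions))
                  for w in states}
        # Sender constraint: a_hat beats paying c for a*_R(omega) everywhere
        if not all(uS_hat[w] > u_S[(best[w], w)] - c for w in states):
            continue
        val = sum(P[w] * sum(pi * u_R[(a, w)] for pi, a in zip(p, actions))
                  for w in states)
        if top_val is None or val > top_val:
            top_val, top_mix = val, dict(zip(actions, p))
    return top_val, top_mix
```

On the data of Example 3 below with c = 3, the search recovers the pure reply a_1 with expected utility 8/3.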

Example 3. Assume c > 2, |Ω| = |A| = 3,

$$u_R(a_3, \omega_1) = 4 \qquad u_R(a_1, \omega_1) = 2 \qquad u_R(a_2, \omega_1) = 1$$
$$u_R(a_1, \omega_2) = 4 \qquad u_R(a_2, \omega_2) = 2 \qquad u_R(a_3, \omega_2) = 1$$
$$u_R(a_2, \omega_3) = 4 \qquad u_R(a_1, \omega_3) = 2 \qquad u_R(a_3, \omega_3) = 1$$

and u_S(·, ω_1) = u_R(·, ω_1), but u_S(·, ω_2) = u_R(·, ω_3) and u_S(·, ω_3) = u_R(·, ω_2). So Sender and Receiver have the same preferences in ω_1, but switched preferences between ω_2 and ω_3.

No full revelation

Clearly there is no full revelation just by cheap-talk messages. The proofs that there is no full revelation just by verifiable messages, and none when both message types are used, follow the same idea: assume that â = π_1 a_1 + π_2 a_2 + (1 − π_1 − π_2) a_3; then

$$u_S(a_3, \omega_1) - c > u_S(\hat a, \omega_1)$$
$$u_S(a_1, \omega_2) - c > u_S(\hat a, \omega_2)$$
$$u_S(a_2, \omega_3) - c > u_S(\hat a, \omega_3)$$

have to hold. Rewriting this yields

$$4 - c > \pi_1 \cdot 2 + \pi_2 \cdot 1 + (1 - \pi_1 - \pi_2) \cdot 4$$
$$2 - c > \pi_1 \cdot 2 + \pi_2 \cdot 4 + (1 - \pi_1 - \pi_2) \cdot 1$$
$$2 - c > \pi_1 \cdot 4 + \pi_2 \cdot 2 + (1 - \pi_1 - \pi_2) \cdot 1$$

and finally

$$c < 2\pi_1 + 3\pi_2$$
$$1 - c > \pi_1 + 3\pi_2$$
$$1 - c > 3\pi_1 + \pi_2$$

This is impossible for c ≥ 1. For full revelation by both message types, â can also be equal to a_1, a_2 or a_3, but all these possibilities still contradict at least one condition.

Maximization 1.) No revelation:

The Receiver solves, for â = π_1 a_1 + π_2 a_2 + (1 − π_1 − π_2) a_3,

$$\max_{\pi_1, \pi_2}\ \tfrac{1}{3}\big(2\pi_1 + 1\pi_2 + 4(1 - \pi_1 - \pi_2)\big) + \tfrac{1}{3}\big(4\pi_1 + 2\pi_2 + 1(1 - \pi_1 - \pi_2)\big) + \tfrac{1}{3}\big(2\pi_1 + 4\pi_2 + 1(1 - \pi_1 - \pi_2)\big),$$

which yields â = a_1 and an expected utility for the Receiver of E[u_R] = (1/3)(2 + 4 + 2) = 8/3. The conditions for the Sender not to deviate are:

$$u_S(\hat a, \omega_1) > u_S(a_3, \omega_1) - c \iff 2 > 4 - c$$
$$u_S(\hat a, \omega_2) > u_S(a_1, \omega_2) - c \iff 2 > 2 - c$$
$$u_S(\hat a, \omega_3) > u_S(a_2, \omega_3) - c \iff 4 > 2 - c.$$

All these conditions hold for c > 2.

2.) Partial revelation of ω_1 by cheap-talk:

The Receiver has to answer with a_3 to one cheap-talk message and with â to the others. The maximization problem yields â = π_1 a_1 + (1 − π_1) a_2 with π_1 ∈ [0, 1]. The conditions for the Sender's utility are

$$u_S(a_3, \omega_1) > u_S(\hat a, \omega_1) \iff 4 > u_S(\hat a, \omega_1)$$
$$u_S(\hat a, \omega_2) > u_S(a_3, \omega_2) \iff u_S(\hat a, \omega_2) > 1$$
$$u_S(\hat a, \omega_3) > u_S(a_3, \omega_3) \iff u_S(\hat a, \omega_3) > 1,$$

which are clearly satisfied. Here the Receiver's expected utility is E[u_R] = (1/3)(4 + 6) = 10/3.
3.) Partial revelation of ω_1 by a verifiable message:

For example we can take â = a_2. The conditions for the Sender's utility are

$$u_S(a_3, \omega_1) - c > u_S(a_2, \omega_1) \iff 4 - c > 1$$
$$u_S(a_2, \omega_2) > u_S(a_1, \omega_2) - c \iff 4 > 2 - c$$
$$u_S(a_2, \omega_3) > u_S(a_2, \omega_3) - c \iff 2 > 2 - c,$$

which all hold for c < 3. The Receiver's expected utility is E[u_R] = (1/3)(4 + 6) = 10/3.

In this example it is possible to get partial revelation in ω_1 either by cheap-talk or by a verifiable message if c ∈ (2, 3). For c > 3 partial revelation is only possible by cheap-talk.
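Since the payoffs are fully specified, the expected utilities in this example can be checked by direct computation. The following minimal Python sketch (uniform prior; variable names are illustrative, not from the paper) evaluates the no-revelation benchmark and the value of partially revealing ω_1:

```python
from fractions import Fraction

# Receiver's payoffs of Example 3: u_R[state][action], indexed 0..2
u_R = [[2, 1, 4],   # omega_1: a_1 -> 2, a_2 -> 1, a_3 -> 4
       [4, 2, 1],   # omega_2
       [2, 4, 1]]   # omega_3
p = Fraction(1, 3)  # uniform prior

# 1) No revelation: one pure action played in every state
no_rev = max(sum(p * u_R[w][a] for w in range(3)) for a in range(3))

# 2)/3) Partial revelation of omega_1: a_3 is played there, plus the best
# single action on the remaining states omega_2 and omega_3
partial = p * u_R[0][2] + max(sum(p * u_R[w][a] for w in (1, 2))
                              for a in range(3))

print(no_rev, partial)  # 8/3 10/3
```

The computation confirms E[u_R] = 8/3 without revelation and E[u_R] = 10/3 with partial revelation of ω_1.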

Example 4. Assume c > 2, |Ω| = |A| = 3,

$$u_R(a_3, \omega_1) = 4 \qquad u_R(a_1, \omega_1) = 2 \qquad u_R(a_2, \omega_1) = 1$$
$$u_R(a_1, \omega_2) = 4 \qquad u_R(a_2, \omega_2) = 2 \qquad u_R(a_3, \omega_2) = 1$$
$$u_R(a_2, \omega_3) = 4 \qquad u_R(a_1, \omega_3) = 2 \qquad u_R(a_3, \omega_3) = 1$$

and

$$u_S(a_3, \omega_1) = 4 \qquad u_S(a_1, \omega_1) = 2 \qquad u_S(a_2, \omega_1) = 1$$
$$u_S(a_3, \omega_2) = 4 \qquad u_S(a_2, \omega_2) = 2 \qquad u_S(a_1, \omega_2) = 1$$
$$u_S(a_3, \omega_3) = 4 \qquad u_S(a_1, \omega_3) = 2 \qquad u_S(a_2, \omega_3) = 1.$$

No full revelation

Obviously there is no fully revealing equilibrium just by cheap-talk messages, because Sender and Receiver prefer different actions in two states. To see that full revelation just by verifiable messages is impossible, we take a closer look at ω_2: to have an incentive to send the verifiable message, the Sender has to prefer u_S(a_1, ω_2) − c over u_S(â, ω_2). Since the left part equals 1 − c and the right part is at least 1, this is impossible. The same argument rules out full revelation by both message types for mixed strategies.

For â = a_1 the Sender does not use the verifiable message in ω_3, and for â = a_2 she uses cheap-talk in ω_2. So no full revelation is possible.

Maximization 1.) No revelation:

The maximization here is the same as in the previous example. It is possible for c > 2 and the expected utility is E[u_R] = 8/3.

2.) Partial revelation by cheap-talk:

Getting partial revelation in ω_1 is impossible, because the Sender would use the same cheap-talk message in both other states. To get partial revelation in ω_2, u_S(a_1, ω_2) > u_S(â, ω_2) has to hold, which is impossible for any â ≠ a_1. The same argument works with a_2 in ω_3. This means that in this example it is not possible to achieve partial revelation by different answers to cheap-talk.

3.) Partial revelation of ω_1 by a verifiable message:

For example we can take â = a_2. The conditions for the Sender's utility are

$$u_S(a_3, \omega_1) - c > u_S(a_2, \omega_1) \iff 4 - c > 1$$
$$u_S(a_2, \omega_2) > u_S(a_1, \omega_2) - c \iff 2 > 1 - c$$
$$u_S(a_2, \omega_3) > u_S(a_2, \omega_3) - c \iff 1 > 1 - c,$$

which again all hold for c < 3. The Receiver's expected utility is E[u_R] = (1/3)(4 + 6) = 10/3.
In this example partial revelation is only possible by verifiable information and only if c ∈ (2, 3) holds. For c > 3 partial revelation is impossible and the Receiver maximizes his utility as he would without communication.

**2.2.2 General results**

Again we would like to underline that the Receiver cannot commit to any actions, but maximizes his utility. It should then be obvious that the Receiver always prefers partial revelation over no revelation at all. If one state is partially revealed, the Receiver maximizes his expected utility over the remaining states. It might happen that several actions (pure or mixed) solve this maximization problem.

Definition 4. For Ω′ ⊆ Ω we define

$$\dot A(\Omega') = \arg\max_{a} \sum_{\omega \in \Omega'} \mu[\omega]\, u_R(a, \omega).$$

$\dot A(\Omega')$ is the set of actions which maximize the Receiver's utility on a given state space Ω′ according to the Receiver's belief system µ.

In a general model with more than three states, there can be different types of partial revelation: partial revelation can be achieved either by verifiable messages, which then fully reveal a subset of states, or by cheap-talk messages. Partial revelation by cheap-talk creates a partition of the state space; each element of the partition can contain a single state or several states. Elements with just a single state have the same effect as verifiable messages: the Receiver knows whether a specific state is the true state. For simplicity we split partial revelation by cheap-talk into two cases.

1. Partial revelation by verifiable messages ⇒ full revelation of a subset of states.

2. A) Partial revelation by cheap-talk ⇒ full revelation of a subset of states.

   B) Partial revelation by cheap-talk ⇒ dividing the state space into disjoint subsets.

The case 2A contains just the special cases in which the partition consists of some subsets with a single element and one subset containing the remaining states. Note that also in case 2 there can be full revelation of subsets of states.

In a world with four states {ω_1, …, ω_4}, partial revelation of type 2B means for example that the Receiver just knows whether the true state is in {ω_1, ω_2} or in {ω_3, ω_4}. Of course there can also be a combination of type 1 with 2A or with 2B.

Even with just four states it is often impossible to see which kind of partial revelation is possible without checking all possibilities. We state conditions for each of the different types and their combinations. With these conditions it is easy to write an algorithm and let a computer check all the possibilities.
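As a sketch of such an algorithm, the Python snippet below enumerates all proper subsets Ω^vI of the state space and tests the two conditions of Proposition 3, restricted for simplicity to pure threat actions â and a uniform prior; the function name and the payoff encoding are illustrative, not from the paper. Run on the payoffs of Example 4 with c = 2.5, it recovers exactly the partial revelation (â = a_2, Ω^vI = {ω_1}) found in that example.

```python
from itertools import combinations

def check_verifiable_revelation(u_R, u_S, states, actions, c):
    """Enumerate every proper subset Omega_vI of the state space and test
    the two conditions of Proposition 3 against each Receiver-optimal pure
    action a_hat on the remaining states (uniform prior, pure actions only)."""
    # Receiver's best reply in each single state
    a_star = {w: max(actions, key=lambda a: u_R[(a, w)]) for w in states}
    found = []
    for size in range(1, len(states)):
        for omega_vI in combinations(states, size):
            rest = [w for w in states if w not in omega_vI]
            best = max(sum(u_R[(a, w)] for w in rest) for a in actions)
            for a_hat in (a for a in actions
                          if sum(u_R[(a, w)] for w in rest) == best):
                # 1) in the revealed states, verifying beats the threat a_hat
                ok_in = all(u_S[(a_star[w], w)] - c > u_S[(a_hat, w)]
                            for w in omega_vI)
                # 2) in the remaining states, the Sender prefers a_hat
                ok_out = all(u_S[(a_star[w], w)] - c < u_S[(a_hat, w)]
                             for w in rest)
                if ok_in and ok_out:
                    found.append((a_hat, omega_vI))
    return found

# Payoffs of Example 4 (states w1..w3, actions a1..a3)
u_R = {('a1', 'w1'): 2, ('a2', 'w1'): 1, ('a3', 'w1'): 4,
       ('a1', 'w2'): 4, ('a2', 'w2'): 2, ('a3', 'w2'): 1,
       ('a1', 'w3'): 2, ('a2', 'w3'): 4, ('a3', 'w3'): 1}
u_S = {('a1', 'w1'): 2, ('a2', 'w1'): 1, ('a3', 'w1'): 4,
       ('a1', 'w2'): 1, ('a2', 'w2'): 2, ('a3', 'w2'): 4,
       ('a1', 'w3'): 2, ('a2', 'w3'): 1, ('a3', 'w3'): 4}

result = check_verifiable_revelation(u_R, u_S,
                                     ['w1', 'w2', 'w3'],
                                     ['a1', 'a2', 'a3'], c=2.5)
print(result)  # [('a2', ('w1',))]
```

Mixed threat actions would require a search over the simplex instead of the pure-action loop, but the structure of the check stays the same.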

**Partial Revelation just by one message type**

Proposition 3 (Partial Revelation just by Verifiable Messages).

There is a partially revealing equilibrium, where the Sender uses verifiable messages to reveal the true state just in the states in Ω^vI ⊊ Ω, if

$$\exists\, \hat a \in \dot A(\Omega \setminus \Omega^{vI}):$$
$$1)\ \forall \omega \in \Omega^{vI} : u_S(a^*_R(\omega), \omega) - c > u_S(\hat a, \omega)$$
$$2)\ \forall \omega \in \Omega \setminus \Omega^{vI} : u_S(a^*_R(\omega), \omega) - c < u_S(\hat a, \omega).$$

With this proposition we can define the family of subsets in which partial revelation by verifiable information is possible.

Definition 5.

$$\Omega^{vI}(\Omega) = \Big\{\, \Omega^{vI} \subsetneq \Omega \;\Big|\; \exists\, \hat a \in \dot A(\Omega \setminus \Omega^{vI}) \text{ such that } \forall \omega \in \Omega^{vI} : u_S(a^*_R(\omega), \omega) - c > u_S(\hat a, \omega) \text{ and } \forall \omega \in \Omega \setminus \Omega^{vI} : u_S(a^*_R(\omega), \omega) - c < u_S(\hat a, \omega) \,\Big\}$$

This implies that partial revelation by verifiable information is impossible if Ω^vI(Ω) = {∅}. We can also define the set of all tuples of actions and subsets of states (â, Ω^vI), where the action maximizes the Receiver's utility on Ω \ Ω^vI, but for the states in Ω^vI this action works as a threat forcing the Sender to use the verifiable message.

Definition 6.

$$\Omega^{vI}_{A}(\Omega) = \Big\{\, (\hat a, \Omega^{vI}) \;\Big|\; 1)\ \hat a \in \dot A(\Omega \setminus \Omega^{vI}),\ 2)\ \forall \omega \in \Omega^{vI} : u_S(a^*_R(\omega), \omega) - c > u_S(\hat a, \omega),\ 3)\ \forall \omega \in \Omega \setminus \Omega^{vI} : u_S(a^*_R(\omega), \omega) - c < u_S(\hat a, \omega) \,\Big\}$$

This definition will help to combine different types of partial revelation. We can make similar statements for partial revelation of type 2A:

Proposition 4 (Partial Revelation just by Cheap-Talk).

There is a partially revealing equilibrium, where the Sender uses unique cheap-talk messages to reveal the true state just in the states in Ω^ct ⊊ Ω, if

$$\exists\, \hat a \in \dot A(\Omega \setminus \Omega^{ct}):$$
$$1)\ \forall \omega \in \Omega^{ct} : u_S(a^*_R(\omega), \omega) > u_S(\hat a, \omega)$$
$$2)\ \forall \omega \in \Omega \setminus \Omega^{ct} : u_S(a^*_R(\omega), \omega) < u_S(\hat a, \omega).$$

Definition 7.

$$\Omega^{ct}(\Omega) = \Big\{\, \Omega^{ct} \subsetneq \Omega \;\Big|\; \exists\, \hat a \in \dot A(\Omega \setminus \Omega^{ct}) \text{ such that } \forall \omega \in \Omega^{ct} : u_S(a^*_R(\omega), \omega) > u_S(\hat a, \omega) \text{ and } \forall \omega \in \Omega \setminus \Omega^{ct} : u_S(a^*_R(\omega), \omega) < u_S(\hat a, \omega) \,\Big\}$$
Definition 8.

$$\Omega^{ct}_{A}(\Omega) = \Big\{\, (\hat a, \Omega^{ct}) \;\Big|\; 1)\ \hat a \in \dot A(\Omega \setminus \Omega^{ct}),\ 2)\ \forall \omega \in \Omega^{ct} : u_S(a^*_R(\omega), \omega) > u_S(\hat a, \omega),\ 3)\ \forall \omega \in \Omega \setminus \Omega^{ct} : u_S(a^*_R(\omega), \omega) < u_S(\hat a, \omega) \,\Big\}$$
For partial revelation of type 2B the conditions look slightly different.

Proposition 5 (Partial Revelation just by Cheap-Talk).

There is a partially revealing equilibrium, where the state space Ω is split up into disjoint subsets, if there exists a series of sets (Ω^div_j)_{j=1,…,J} such that

$$1)\ \bigcup_{j=1}^{J} \Omega^{div}_{j} = \Omega \ \text{ and } \ \forall k \neq l : \Omega^{div}_{k} \cap \Omega^{div}_{l} = \emptyset.$$
$$2)\ \forall \Omega^{div}_{j}\ \exists\, \hat a_j \in \dot A(\Omega^{div}_{j}) \ \text{ such that } \ u_S(\hat a_k, \omega) > u_S(\hat a_l, \omega)\ \ \forall \omega \in \Omega^{div}_{k} \text{ with } k \neq l.$$

The first condition says that the subsets have to be disjoint and add up to the complete state space. The second condition ensures that the Sender has no incentive to lie if the Receiver chooses the actions that maximize his expected utility for each subset. As before, we can write this as a set, this time consisting of series of tuples of actions and subsets of the state space:

Definition 9.

$$\Omega^{div}_{A}(\Omega) = \Big\{\, (\hat a_j, \Omega^{div}_{j})_j \;\Big|\; 1)\ \forall \Omega^{div}_{k} : \hat a_k \in \dot A(\Omega^{div}_{k}),\ 2)\ \bigcup_j \Omega^{div}_{j} = \Omega \text{ and } \forall k \neq l : \Omega^{div}_{k} \cap \Omega^{div}_{l} = \emptyset,\ 3)\ \forall \omega \in \Omega^{div}_{k} : \forall \hat a_k \neq \hat a_l : u_S(\hat a_k, \omega) > u_S(\hat a_l, \omega) \,\Big\}$$

This set contains all the different possibilities of series of tuples that split the state space into subsets.
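The conditions of Definition 9 translate directly into a test of a candidate partition. The sketch below (again restricted to pure actions and uniform beliefs, with illustrative names and encoding) accepts the partition {{ω_1}, {ω_2, ω_3}} from Example 3 and rejects {{ω_2}, {ω_1, ω_3}}:

```python
def check_partition(u_R, u_S, states, actions, partition):
    """Test conditions 1)-3) of Definition 9 for a candidate partition,
    pairing each cell with one Receiver-optimal pure action on that cell
    (uniform beliefs, pure actions only)."""
    # disjoint cells that together cover the whole state space
    flat = [w for cell in partition for w in cell]
    if sorted(flat) != sorted(states):
        return False
    # one action per cell maximizing the Receiver's summed payoff there
    a_hat = [max(actions, key=lambda a: sum(u_R[(a, w)] for w in cell))
             for cell in partition]
    # in every state the Sender must prefer her own cell's action
    for k, cell in enumerate(partition):
        for w in cell:
            for l, a_l in enumerate(a_hat):
                if l != k and a_l != a_hat[k] \
                        and u_S[(a_hat[k], w)] <= u_S[(a_l, w)]:
                    return False
    return True

# Payoffs of Example 3 (u_S equals u_R in w1, switched between w2 and w3)
u_R = {('a1', 'w1'): 2, ('a2', 'w1'): 1, ('a3', 'w1'): 4,
       ('a1', 'w2'): 4, ('a2', 'w2'): 2, ('a3', 'w2'): 1,
       ('a1', 'w3'): 2, ('a2', 'w3'): 4, ('a3', 'w3'): 1}
u_S = {('a1', 'w1'): 2, ('a2', 'w1'): 1, ('a3', 'w1'): 4,
       ('a1', 'w2'): 2, ('a2', 'w2'): 4, ('a3', 'w2'): 1,
       ('a1', 'w3'): 4, ('a2', 'w3'): 2, ('a3', 'w3'): 1}

states, actions = ['w1', 'w2', 'w3'], ['a1', 'a2', 'a3']
print(check_partition(u_R, u_S, states, actions, [['w1'], ['w2', 'w3']]))  # True
print(check_partition(u_R, u_S, states, actions, [['w2'], ['w1', 'w3']]))  # False
```

Note that the sketch checks only one Receiver-optimal action per cell; with ties in $\dot A(\Omega^{div}_k)$ a full check would loop over all optimal actions.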

**Partial revelation by a combination of verifiable messages and cheap-talk**

For the combination of two types of partial revelation it is not sufficient to combine Ω^vI and Ω^ct, because we need to use the same action â for the states that are not revealed.

Theorem 4 (Partial Revelation by type 1 and 2A).

All combinations of revelation by verifiable message and cheap-talk (type 2A) are given by:

$$\Omega^{vI+ct}(\Omega) = \Big\{\, (\Omega^{vI}, \Omega^{ct}) \;\Big|\; 1)\ \Omega^{vI} \cap \Omega^{ct} = \emptyset,\ 2)\ \exists\, \hat a \in \dot A\big(\Omega \setminus (\Omega^{vI} \cup \Omega^{ct})\big) \text{ such that } (\hat a, \Omega^{vI}) \in \Omega^{vI}_{A}(\Omega \setminus \Omega^{ct}) \text{ and } (\hat a, \Omega^{ct}) \in \Omega^{ct}_{A}(\Omega \setminus \Omega^{vI}) \,\Big\}$$

This means that all states in Ω^vI are revealed by verifiable messages and those in Ω^ct by cheap-talk. Therefore it is necessary that â maximizes the Receiver's utility for the remaining states Ω \ (Ω^vI ∪ Ω^ct). By the definition of Ω^vI and Ω^ct it is ensured that Ω^vI ∪ Ω^ct ⊊ Ω holds, because otherwise there would be full revelation.

Similar to the combination of type 1 and type 2A, it is also possible to combine type 1 and type 2B. This means that there are some states revealed by verifiable information (type 1) and the remaining states are divided into subsets of the state space (type 2B).

Theorem 5 (Partial Revelation by type 1 and 2B).

All combinations of revelation by verifiable message and cheap-talk (type 2B) are given by:

$$\Omega^{vI+div}(\Omega) = \Big\{\, \big((\hat a_j, \Omega^{div}_{j})_j, \Omega^{vI}\big) \;\Big|\; 1)\ \Big(\bigcup_j \Omega^{div}_{j}\Big) \cup \Omega^{vI} = \Omega,\ 2)\ \forall \Omega^{div}_{k} : \Omega^{div}_{k} \cap \Omega^{vI} = \emptyset,\ 3)\ (\hat a_j, \Omega^{div}_{j})_j \in \Omega^{div}_{A}(\Omega \setminus \Omega^{vI}),\ 4)\ \forall \hat a_k : (\hat a_k, \Omega^{vI}) \in \Omega^{vI}_{A}\Big(\Omega \setminus \bigcup_{l \neq k} \Omega^{div}_{l}\Big) \,\Big\}$$

Conditions 1) and 2) ensure that the subsets of states are disjoint, but united equal the entire state space. Condition 3) makes sure that the states are split up if there is no revelation by a verifiable message, so the Receiver plays different actions on different subsets of states. With Condition 4) the Sender sends the verifiable message in all states in Ω^vI and does not deviate to an action â_k from the series (â_j).

**2.2.3 Increasing and Decreasing Differences**

For increasing (or decreasing) differences, we can state the existence of partially revealing equilibria with verifiable messages in a way similar to Proposition 1. The most important change is that the answer to cheap-talk is no longer a^*_R, but an â that maximizes the Receiver's utility on the non-revealed states.

Proposition 6 (Partial Revelation by verifiable messages under increasing differences).

Let Ω = {ω_1, …, ω_L} and sort A such that A = {a^*_R(ω_1), …, a^*_R(ω_L)}. We can ignore all actions which are never the best reply for the Receiver in a single state.

There is a partially revealing equilibrium that reveals the states just in $[\underline{\omega}, \overline{\omega}]$ by verifiable messages if

$$\exists\, \hat a \in \dot A(\Omega \setminus [\underline{\omega}, \overline{\omega}]) \text{ with } a^*_R(\underline{\omega}) > \hat a \text{ such that}$$
$$1)\ u_S \text{ has increasing differences on } \Omega' = [\underline{\omega}, \overline{\omega}] \text{ and } A' = [a^*_R(\underline{\omega}), a^*_R(\overline{\omega})]$$
$$2)\ u_S(a^*_R(\underline{\omega}), \underline{\omega}) - c > u_S(\hat a, \underline{\omega})$$
$$3)\ \forall \omega_i \in [\underline{\omega}, \overline{\omega}] : u_S(a^*_R(\omega_i), \omega_i) \geq u_S(a^*_R(\omega_{i-1}), \omega_i)$$
$$4)\ \forall \omega_j \in \Omega \setminus [\underline{\omega}, \overline{\omega}] : u_S(\hat a, \omega_j) > u_S(a^*_R(\omega_j), \omega_j) - c$$

Proposition 7 (Partial Revelation by verifiable messages under increasing differences).

Let Ω = {ω_1, …, ω_L} and sort A such that A = {a^*_R(ω_1), …, a^*_R(ω_L)}. We can ignore all actions which are never the best reply for the Receiver in a single state.

There is a partially revealing equilibrium that reveals the states just in $[\underline{\omega}, \overline{\omega}]$ by verifiable messages if

$$\exists\, \hat a \in \dot A(\Omega \setminus [\underline{\omega}, \overline{\omega}]) \text{ with } a^*_R(\overline{\omega}) < \hat a \text{ such that}$$
$$1)\ u_S \text{ has increasing differences on } \Omega' = [\underline{\omega}, \overline{\omega}] \text{ and } A' = [a^*_R(\underline{\omega}), a^*_R(\overline{\omega})]$$
$$2)\ u_S(a^*_R(\overline{\omega}), \overline{\omega}) - c > u_S(\hat a, \overline{\omega})$$
$$3)\ \forall \omega_i \in [\underline{\omega}, \overline{\omega}] : u_S(a^*_R(\omega_i), \omega_i) \geq u_S(a^*_R(\omega_{i+1}), \omega_i)$$
$$4)\ \forall \omega_j \in \Omega \setminus [\underline{\omega}, \overline{\omega}] : u_S(\hat a, \omega_j) > u_S(a^*_R(\omega_j), \omega_j) - c$$

We can make a similar change to Condition 3) as before and also get the same results for decreasing differences by the same changes as made between Propositions 1 and 2.

In addition it is possible to rewrite these conditions such that they hold not just for a single interval $[\underline{\omega}, \overline{\omega}]$, but for a disjoint series of intervals $([\underline{\omega}_k, \overline{\omega}_k])_k$.

**3 Continuous model**

In many settings it is not enough to focus on a finite action or state space; instead both of them are continuous. In wage negotiations, for example, or in any discussion concerning prices, we have to deal with a continuous interval. In this section we no longer limit our attention to the discrete setting, but switch to a continuous model, so in general we can assume that A = Ω = [0, 1]. We state different conditions under which there is no possibility of a fully revealing equilibrium. Afterwards we use the example of the quadratic loss function to visualize our results. Theorem 1 and Theorem 2, which give the conditions for fully revealing equilibria with only a single message type, still hold. The conditions in these theorems still have to hold for every state, which is more restrictive in the continuous model. The following Theorems 6 to 8 give necessary conditions for the existence of different fully revealing equilibria, where the continuity of u_S and a^*_R is the most important factor. Combined with the results from the discrete model we also get the sufficient conditions.

Theorem 6 (Full Revelation under continuous u_S and a^*_R).

Assume that a^*_R(ω) : Ω → A is continuous and that u_S(a, ω) : A × Ω → ℝ is continuous in both arguments. Then full revelation can only be achieved either by cheap-talk messages in every state or by verifiable messages only.

Theorem 7 (Full Revelation under continuous u_S(a, ω)).

Assume that u_S(a, ω) is continuous. There can be a fully revealing equilibrium with both message types used if there exists $[\underline{\omega}, \overline{\omega}] \subseteq [0,1]$ such that

$$1)\ \lim_{\omega \nearrow \underline{\omega}} a^*_R(\omega) \neq a^*_R(\underline{\omega})$$
$$2)\ \lim_{\omega \searrow \overline{\omega}} a^*_R(\omega) \neq a^*_R(\overline{\omega})$$
$$3)\ \text{If } \underline{\omega} \neq \overline{\omega} :\ \forall \omega_i \in [\underline{\omega}, \overline{\omega}] : u_S(a^*_R(\omega_i), \omega_i) > u_S(a^*_R(\omega_j), \omega_i)\ \ \forall \omega_j \in [\underline{\omega}, \overline{\omega}],\ \omega_j \neq \omega_i$$

holds.

Remarks.

• If $\underline{\omega} = 0$, then the first condition is always satisfied.

• If $\overline{\omega} = 1$, then the second condition is always satisfied.

• There may exist more than one interval satisfying the conditions of Theorem 7.

Theorem 7 states that if u_S is continuous in both arguments, the function a^*_R has to be discontinuous. The interval $[\underline{\omega}, \overline{\omega}]$ is the interval of states in which the Receiver believes the Sender's cheap-talk message. For that, a^*_R has to be either neither right-continuous nor left-continuous at a single ω̂, or not left-continuous at $\underline{\omega}$ and not right-continuous at $\overline{\omega} > \underline{\omega}$. In the second case the Sender is also not allowed to have any incentive to deviate to a different cheap-talk message for states in $[\underline{\omega}, \overline{\omega}]$.

Corollary 3.

There is a fully revealing equilibrium with both message types used if there exists $[\underline{\omega}, \overline{\omega}] \subseteq [0,1]$ such that:

• $[\underline{\omega}, \overline{\omega}]$ satisfies the conditions of Theorem 7.

• Theorem 3 is satisfied with $\hat\Omega = [\underline{\omega}, \overline{\omega}]$.

Figure 1 shows three different discontinuous functions a^*_R(ω). For the blue graph there can be a fully revealing equilibrium with both message types, where the threat point is at a^*_R(1/2). The red graph shows a situation where the possible threat point is at the border of the interval, here at a^*_R(1). So Condition 2) of Theorem 7 is always satisfied. As the function is discontinuous at ω = 1, Condition 1) also holds. An example where Theorem 7 implies that there can be no full revelation is given by the green graph. The function is continuous coming from below and so does not satisfy Condition 1).

If a^*_R is continuous, the previous theorem does not apply; instead u_S has to be discontinuous to achieve full revelation when both message types are used.