
The short-term welfare effects in a two-player signalling game

Malou Smink (10780114)
Master Thesis Econometrics
Faculty of Economics and Business, University of Amsterdam
Supervisor: A. Kopányi-Peuker


Abstract

I study the possible equilibria and welfare effects in an incomplete-information setting with a perfectly informed Sender and an uninformed Receiver. Sender and Receiver can have aligned preferences, but they can also have a conflict of interest. Sender sends a message about the preference relation, which may contain the truth or a lie, and afterwards a second message about the quality of a product, which again may be truthful or not. Based on her beliefs about Sender, Receiver decides to accept or reject the product, and that decision affects both players. I show that Receiver's action depends on the probability of having aligned preferences. Further, I construct equilibria in which Sender does not provide all his information to Receiver when preferences are not perfectly aligned. In addition, I show that telling the truth about the quality of the product when preferences are aligned is at least as good for total welfare as always sending a message of good quality under this preference relation.

Statement of Originality

This document is written by Student Malou Smink who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.

Contents

1 Introduction
2 Related literature
3 Model
  3.1 Setup
  3.2 Parameterization
4 Equilibrium
  4.1 Separating equilibrium
    4.1.1 Always tell the truth
    4.1.2 Pooling on good quality
    4.1.3 The truth with aligned preferences, pooling with a conflict of interest
  4.2 Pooling equilibrium
    4.2.1 Situation 1: Pooling on good quality
    4.2.2 Situation 2: The truth with aligned preferences, pooling with a conflict of interest
  4.3 Expected payoffs
    4.3.1 Sender
    4.3.2 Receiver
    4.3.3 Total welfare
5 Comparative statics
  5.1 Separating equilibrium
    5.1.1 Always tell the truth
    5.1.2 Pooling on good quality
    5.1.3 The truth with aligned preferences, pooling with a conflict of interest
  5.2 Pooling equilibrium
    5.2.1 Situation 1: Pooling on good quality
    5.2.2 Situation 2: The truth with aligned preferences, pooling with a conflict of interest
  5.3 Expected payoffs
6 Experimental design
7 Conclusion & discussion
8 References
Appendix A: Game tree


1 Introduction

Conflicts of interest occur in many different situations (Kottow, 2010). One way to model such situations is a signalling game: a game with two players in which one player has more information than the other. The player with the private information can be of different types and sends a signal to the less-informed player, who can then take an action. In the end, the two players receive a payoff that depends on the signal, on the type of the player with the private information and on the action of the less-informed player. Two seminal papers about signalling games are written by Spence (1973) and Cho and Kreps (1987). Spence (1973), for example, outlined a concept to determine the signalling power of observable personal characteristics such as education, race and sex. In these two seminal papers, the models represent a situation where the better-informed player receives all the private information at the same time. This is not the case for the signalling game in this thesis: here the better-informed player first gets information about the preference relation between the two players, and only after sending a signal about this relation does he receive additional information that is, again, known only to him.

As mentioned above, the better-informed player has to decide which signal he wants to send about the preference relation. This relation can be aligned or misaligned. In situations with a conflict of interest, the professional decisions of an individual are often not in line with his personal or emotional decisions (Kottow, 2010). Most of the time, these situations are also accompanied by an incomplete-information setting where, e.g., an Advisor is better informed than the Advisee (Crawford and Sobel, 1982). For example, a doctor can prescribe a medicine which gives him special benefits, or he can prescribe the medicine which is best for the patient. Bargainers can disclose or not disclose the


state of products they want to sell, and attorneys have to give advice in favour of their clients even when this advice contradicts their own feelings. The patient, the buyer or the client can then decide to buy the medicine, to buy the product or to follow the attorney's advice, respectively. Their actions will not only affect them but also their doctor, dealer or attorney. To make this clearer: in the doctor-patient relation with, for example, misaligned preferences, the doctor is worse off if the patient does not buy the advised medicine, while the patient is better off if she rejects it. So there is a better-informed party that sends a signal and a less-informed party that takes an action which affects both parties. This thesis deals with such situations.

Several studies have addressed the subject of disclosing conflicts of interest. Conflicts of interest might be harmful for both Sender and Receiver within the setting of signalling games. Therefore it is interesting to determine the effects of a conflict of interest on the payoff of both parties, and to investigate whether it is better to disclose or not to disclose the misalignment in preferences. Crawford and Sobel (1982) addressed this subject by investigating the actions taken by Sender and Receiver in a theoretical setting where preferences can be aligned or misaligned. They developed a model in which Sender has more information than Receiver, and they only looked at a situation where the preference relation is disclosed. Their results show that Sender will not fully disclose his private information about the state of the world if preferences are not completely aligned. Li and Madarasz (2008) extended this model by taking into account both mandatory disclosure and mandatory nondisclosure of the preference relation in a setting in which this relation is only known by Sender. Under mandatory disclosure, Sender has to tell the truth about the preference relation. They found, among other things, that nondisclosure leads to a higher payoff for both Sender and Receiver.

In many situations Sender can also choose to be honest or dishonest about the preference relation. For example, a doctor can say that the recommended medicine is


the best medicine for the patient, while in reality it is not the best one for the patient but the one that offers the doctor the highest payoff. In this situation the doctor lies about the conflict of interest. Alternatively, he can decide to reveal that there is a conflict of interest between him and the patient in order to gain the patient's trust and confidence. So Sender has to consider what is more important for him: reputation, confidence and trust, or a higher payoff for himself.

The main subject of this thesis is the effect on the payoffs of both parties under the extension that Sender can choose whether or not to reveal the truth about the preference relation. This adds a second signalling element. The model used in this thesis is similar to the one introduced by Li and Madarasz (2008), but differs in the way Sender can inform Receiver about the preference relation: this is no longer exogenously determined but chosen by Sender. First, the preference relation is determined by nature and is only known by Sender. Then, Sender can tell the truth about the preference relation, but he can also choose to lie. After the decision of telling the truth or not, he can again choose to tell the truth or to lie about his private information with respect to the state of the world. Based on the received information, Receiver takes an action that affects both players.

This game-theoretic model with incomplete information should give an answer to the following question: What are the possible equilibria and the short-term welfare effects in a situation with possible conflicts of interest between a well-informed Sender and a less-informed Receiver?

The results show, among other things, that it is better for welfare if Sender always sends a message of aligned preferences when a larger proportion of the population has aligned preferences. Next to that, it is shown that the actions of Receiver depend on the payoffs.

The rest of this thesis is organised as follows. In section 2 I summarize the related literature. In section 3 I introduce the model and in section 4 I show possible equilibria. Section 5 discusses comparative statics and in section 6 an experimental


design is described. Finally, section 7 concludes this thesis and introduces suggestions for further research.

2 Related literature

In this section I discuss two strands of literature my thesis contributes to. Mainly I discuss papers that belong to the cheap talk literature, but literature about lying is also discussed. The term “cheap talk” means that communication between two parties does not directly affect the payoffs of either party and that the information is not verifiable. This can refer to situations where, among other things, disclosure of information, nondisclosure of information or lying is costless. The model used in this thesis partly belongs to the cheap talk literature because the information is not verifiable. Next to that, Sender can freely lie at first because he only has to pay a lying cost afterwards.

Crawford and Sobel (1982) wrote a seminal paper in the cheap talk literature. They investigated the actions taken by Sender and Receiver while taking the preference relation into account. In their model, they describe a situation where Receiver is less informed than Sender, and they only discuss a scenario with disclosure and a possible misalignment in preferences. Their findings suggest that Sender is not expected to reveal all his information to Receiver when preferences are not fully aligned, but that the expected payoffs of both players increase when preferences become more aligned. Next to that, Crawford and Sobel (1982) have shown that direct communication becomes more valuable when the two agents agree more on the aim.

The model of Li and Madarasz (2008) is an extension of the study conducted by Crawford and Sobel (1982), and their model is closely related to mine. They introduced a model where an informed Sender with a conflict of interest has to give


advice to Receiver, who is uninformed. In the next step, Receiver has to make a decision which influences the payoffs of both players. In their study, two situations are compared: one where it is mandatory to disclose the conflict of interest, and one with mandatory nondisclosure in which Receiver can only gain information about the conflict of interest through a possibly noisy signal about the preference relation from Sender. The difference from the model of Crawford and Sobel (1982) is that Li and Madarasz (2008) distinguish between disclosure and nondisclosure. With this distinction, they have shown that nondisclosure often results in higher payoffs for both players, and that mandatory disclosure of the conflict of interest is likely to decrease the players' payoffs3.

As briefly mentioned in the introduction, there is one important difference between the model of this thesis and the model of Li and Madarasz (2008): I consider a situation in which Sender can decide to lie or to tell the truth about the preference relation, so Sender's choice is no longer determined beforehand. Next to that, no distinction is made between disclosure and nondisclosure of the preference relation; Sender can only lie or tell the truth about this relation. Lastly, I deviate from their paper in the type of preferences: whereas Li and Madarasz (2008) always assume a conflict of interest, this is not necessarily the case in the model I use.

While the two studies mentioned above theoretically analyse the effects of disclosing conflicts of interest, Cain, Loewenstein and Moore (2005) used an experimental setting to investigate these effects. In their experiment, Sender had to write suggestions about the value of coins in a jar on a report that was then given to a randomly matched Receiver. Senders could examine the jar very closely, but Receiver could only look at it from a distance and for a few seconds. The more

3 By disclosing the bias, Receiver knows the sign of the bias (negative or positive). When a negative sign is disclosed, the expected payoffs of the players decrease. In the case of nondisclosure the expected payoff is equal to 0 because the biases cancel each other out.


accurately Receiver estimated the value of the coins, the higher her payoff. The payoff of Sender depended on the treatment. In the control treatment, Sender received more when Receiver estimated the value of the coins more accurately, while in the two conflict-of-interest treatments4 Sender's payoff increased with Receiver's overestimation. The preference relations in my model follow these three treatments: the control treatment is similar to the situation with aligned preferences, while the conflict-of-interest treatments come close to the situation with misaligned preferences. The results of the experiment are related to the findings of Li and Madarasz (2008). Namely, Cain et al. (2005) also found that disclosure might not solve the problems caused by conflicts of interest, and might even hurt.

The results described in the papers so far are confirmed by other papers. For example, Morgan and Stocken (2003) found that an investor's uncertainty about the incentives of a financial analyst makes it impossible for the analyst to fully disclose all the extra information he has about stock reports. Next to that, Koch and Schmidt (2010) have replicated the findings of Cain et al. (2005) in a laboratory experiment.

However, Chiba and Leong (2015) found that communication between two parties was strengthened by an increase in the conflict of interest, which contradicts most of the literature. In their model there are two possible projects, and Receiver can decide to implement one of these projects or not to implement any project. Sender partially knows which project will attain a good outcome, and he can advise Receiver about a project. Also in this paper the communication is cheap talk, which here means that Sender can lie about this information for free. Pfattheicher and Schindler (2017) did not focus on the possibility to disclose or not, but investigated the dishonest behaviour of people. In their experimental study, they distinguish the following two situations: one where participants could be dishonest to prevent a loss and one where participants could be

4 In one of the two treatments, the conflict of interest was disclosed and in the other treatment it was not.


dishonest to earn a gain. Based on prospect theory, one would expect participants to be more willing to be dishonest when they can prevent a loss, because losses hurt more than gains (Cameron and Trivedi, 2005). Pfattheicher and Schindler (2017) have shown that their results are in accordance with prospect theory: people are more willing to be dishonest when they can prevent a loss than when they can earn a gain. This difference in dishonest behaviour is confirmed by Kern and Chugh (2009), who have also shown that the difference is greater in situations with time pressure.

Other authors have focussed on the reputation and reputation concerns of Sender when he might give reliable advice or might be selfish. Morris (2001) theoretically investigated the reputation of Sender in a repeated cheap talk game where Receiver is uncertain about Sender's incentives. Because the game is repeated, Sender can build a reputation. Morris (2001) shows that Sender's reputation will be reduced in comparison with a situation without uncertainty about the incentives, and this result does not depend on whether Sender is telling the truth or not. Two other results found by Morris (2001) are that Sender will not convey any information in equilibrium when reputation is very important, and that Sender wants Receiver to listen to him. Sobel (1985) also studied a situation where Receiver is not sure about the incentives of Sender in a repeated cheap talk game. He investigated what happens with the reputation of a person when someone who has to communicate or work with this person is not sure about his incentives. The striking point he found is that if someone is uncertain about the incentives of another person he has to deal with, the tendency to trust the other person depends on that person's earlier decisions or actions. This finding does not only hold for someone who has to make a decision but also for the one who has to convey information. Other authors who have addressed this subject are Bénabou and Laroque (1992), Koch and Schmidt (2010) and Spector (2000).


3 Model

In this section I introduce the model of this thesis. For simplicity it is based on a parameterization. First I explain the setup of the model5 and then I describe its parameterization.

3.1 Setup

The model used in this thesis mimics the situation described in the introduction about the interaction between two players: Sender and Receiver6. The preferences of these two players can be aligned, but the players can also have a conflict of interest. Only Sender knows the preference relation, and regardless of this relation he can choose to tell the truth about it or to lie.

After that, a homogeneous and indivisible product is introduced which is either of a good or a bad quality. So note that Sender gets information about the quality of the product only after he sends the message about the preference relation7. Again Sender is the only one who knows the quality of the product. So during the interaction between Sender and Receiver, Sender is better-informed about two aspects of the game, i.e., his own preference and the quality of the good. After knowing the quality of the product, Sender has to send a message about the quality to Receiver. Again, the message can contain the truth but it can also be a lie.

Receiver can accept or reject the product based on the message she received. This choice depends on the beliefs about Sender. Is this Sender reliable or not? The final choice of Receiver, the true preference relation and the quality of the product influence the payoffs of both players.

The setup described above includes 4 different information sets for Receiver. These information sets exist because Sender can lie and Receiver therefore cannot verify the messages. This leads, for example, to an information set where Sender has sent a message of aligned preferences and a message about a product of bad quality.

5 A game tree of the setup of this model is shown in Appendix A. Note that the payoffs are not represented in this game tree. The defined information sets belong to Receiver.

6 Sender is defined as male and Receiver as female.

7 Note that the game setup is not completely equivalent to a game with 4 states, because now Sender learns the quality of the product only after sending the message about the preference relation.



The players, strategies and information sets of the full model are summarized below. See appendix B for the explanation of the position of each character in the strategy spaces of the players.

• N = {Sender, Receiver} is the set of players.
• S_Sender = {* * + + + + + + + +} is the strategy space of Sender, with * ∈ {T, L} and + ∈ {G, B}. In total, there are 2^10 = 1024 possible strategies.
  o T = telling the truth about the preference relation
  o L = lying about the preference relation
  o G = sending a message about a product of good quality
  o B = sending a message about a product of bad quality
• S_Receiver = {* * * *} is the strategy space of Receiver, with * ∈ {A, R}. In total, there are 2^4 = 16 possible strategies.
  o A = accept the product
  o R = reject the product
• The information sets of Sender each consist of one node, as he is always fully informed. So in total there are 10 information sets for Sender. These nodes are shown in Appendix A.
• The 4 information sets of Receiver: two messages of Sender with two choices each, which gives 2 x 2 information sets.
  o Each information set contains 4 nodes depending on the true state of the world: 2 x 2 = preferences x quality8.

8 Two preference options: aligned and misaligned; two quality options: good and bad.
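To make the size of these strategy spaces concrete, the short sketch below simply enumerates them in Python; the positional meaning of each slot is defined in Appendix B and is abstracted away here, so the snippet only illustrates the counting argument (2^10 and 2^4).

    from itertools import product

    # Sender: 2 truth/lie slots ('T'/'L') for the preference message and
    # 8 quality-message slots ('G'/'B'), giving 2**10 strategies in total.
    sender_strategies = [tl + gb
                         for tl in product("TL", repeat=2)
                         for gb in product("GB", repeat=8)]
    assert len(sender_strategies) == 2 ** 10  # 1024

    # Receiver: accept ('A') or reject ('R') in each of her 4 information sets.
    receiver_strategies = list(product("AR", repeat=4))
    assert len(receiver_strategies) == 2 ** 4  # 16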


3.2 Parameterization

From now until section 5 I introduce and use parameterizations with given probabilities and payoffs to determine possible equilibria9. I set the probability of having a conflict of interest to 1-α and the probability of having aligned preferences to α. The probability of having a product of bad or good quality is equal, so both can occur with a probability of 50 percent.

In table 1 below, the payoffs of Sender and Receiver are shown for every possible situation. The first value refers to the payoff of the product for Sender and the second value to the payoff of the product for Receiver. The utility of accepting a product of good quality is higher than that of rejecting it; for a product of bad quality, however, it is better to reject than to accept. These two relations are included in the chosen payoffs. Later, in the comparative statics, another relation between the payoffs will be described. Sender's payoff, given in table 1, will be reduced by 1 if he lies about the preference relation.
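Table 1 is referenced but missing from this version of the text. The following reconstruction is pieced together from the expected-payoff calculations in sections 4.1 and 4.2 (e.g. 0.5 * 10 + 0.5 * 0 for accepting and 0.5 * -1 + 0.5 * 1 for a rejected misaligned Sender), so it is an inference rather than the original table. Each cell shows Sender's payoff before any lying cost, then Receiver's payoff.

    Table 1 (reconstructed)        Aligned      Misaligned
    Accept, good quality           10, 10        10, 10
    Accept, bad quality             0, 0          5, 0
    Reject, good quality           -1, -1        -1, -1
    Reject, bad quality             3, 3          1, 3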

Like several other studies, I assume that people have a preference for honesty (e.g., Matsushima, 2007; Kartik et al., 2014), and to stimulate honesty I introduce lying costs. Note that the reduction by means of lying costs is known by both Sender and Receiver. There are no lying costs for lying about the quality of the product, because lying about yourself is assumed to be worse than lying about a random product. As mentioned above, I normalize the lying cost to 1 at first, but later I extend my analysis to other costs.

9 Later, I will discuss the consequences of these choices.


The utility of both players depends on nature's random draws, on the payoffs described in table 1, and on the strategies and actions of Sender and Receiver; realized utility equals the payoff received at the end of the game. Receiver's beliefs are based on the behavioural strategies of Sender and on nature's draws, using Bayesian updating wherever possible.

4 Equilibrium

The model introduced in section 3 is an example of a signalling game. However, signalling games often have many possible sequential equilibria (Cho and Kreps, 1987). My model is complex and also has many possible equilibria, but the aim of this thesis is not to construct all of them. I only consider situations which are intuitive. Therefore I do not, for example, consider a situation where Sender always lies about both the preference relation and the quality of the product when preferences are aligned, even though I might be able to construct such equilibria. This situation is not intuitive because it does not make sense to lie about the preference relation when preferences are aligned and Sender has to pay a cost for lying.

For the strategies considered, I look at possible separating or pooling equilibria. A pooling equilibrium is an equilibrium in which Sender always sends the same message regardless of his true type. In this case, the message does not provide any information to Receiver and therefore her beliefs will not be updated after receiving the message. In contrast, a separating equilibrium leads to updated beliefs of Receiver: Sender sends a different message for every type, so Receiver can infer the type of Sender.

Next to that, I only consider pure strategies and I specify weak sequential equilibria. An equilibrium is a weak sequential equilibrium if it satisfies sequential


rationality10 and weak consistency (Kreps and Wilson, 1982)11. The figures which are used to show the actions for a given strategy include the payoffs. It is important to note that these payoffs are already reduced by any lying costs.

4.1 Separating equilibrium

In this subsection, I consider situations in which Sender sends a different message when preferences are aligned than when there is a conflict of interest.

4.1.1 Always tell the truth

People often want to know the truth but does this lead to an equilibrium in every situation? To see whether there is an equilibrium in the described parameterization, I consider a situation in which Sender always tells the truth about both preference relation and quality of the product.

In this situation, Receiver exactly knows what her position in the game is so she can make a decision based on the received messages. Receiver will always accept when preferences are misaligned and when the quality of the product is good because a payoff of 10 is larger than -1. On the other hand, Receiver will always reject when preferences are misaligned and when the quality of a product is bad because of the higher payoff of rejecting.

The same situation exists when preferences are aligned which means that Receiver accepts the product after a message of good quality and rejects the product after a message of bad quality. However, to have an equilibrium it is essential that both Receiver and Sender do not have an incentive to deviate. In the case of aligned preferences Sender does not have an incentive to tell a lie about the quality of the product because he will end up with a payoff of -1 instead of 10 or with 0 instead of 3.

10 Sequential rationality: given a player's beliefs and the strategies of the other players, the strategy of every player is optimal whenever he has to move.

11 Weak consistency: when a player reaches an information set, his beliefs about the probability distribution over the nodes in that information set are consistent with the behavioural strategies played.


In the situation with misaligned preferences, by contrast, it is better for Sender to lie about the quality of the product when the quality is bad. This deviation increases Sender's payoff from 1 to 5.

So there is no equilibrium in which Sender always tells the truth. Figure 1 gives a visual representation of the situation. The yellow lines correspond to the actions taken by Sender and Receiver. The red arrow in this figure represents the part where Sender’s payoff will improve by deviating from the message about a product of bad quality when the preferences are misaligned.


4.1.2 Pooling on good quality

In this subsection, I investigate possible equilibria in a situation where Sender tells the truth about the preference relation but always says that there is a product of good quality. So Receiver can update her beliefs after receiving the message about the preference relation but not after the message about the quality of the product. Figure 2 is used to explain the possible equilibria. Note that the yellow lines in this figure represent the actions of Sender.

The message of Sender about the quality of the product is not informative for Receiver, so she does not know whether the quality of the product is bad or good. She therefore maximizes her expected payoff, taking both possibilities into account. Below, the expected payoffs for Receiver are calculated; they hold for both aligned and misaligned preferences:


• Accepting: 0.5 * 10 + 0.5 * 0 = 5
• Rejecting: 0.5 * -1 + 0.5 * 3 = 1

As a result of these calculated values of the expected payoffs, it is clear that Receiver’s best action is to always accept the product. This is shown in figure 2 by the grey lines. To make sure that Sender with aligned preferences does not have an incentive to deviate, Receiver should always accept after a message of bad quality. However, Receiver will only accept if this gives her a higher expected payoff than the expected payoff of rejecting.

The probability of being at the upper node in information set 4 is denoted by “W”. This is the node reached after the lying message of misaligned preferences and the lying message of bad quality. The probability W is shown in figure 2, together with the probabilities “X”, “Y” and “Z” of being at the other nodes of the same information set, where W and Y correspond to the nodes with a product of good quality. These four probabilities sum to 1 and together form Receiver's belief system for this information set. To accept the product in this information set the following equation must hold:

(W + Y) * 10 + (X + Z) * 0 ≥ (W + Y) * -1 + (X + Z) * 3    (1)

A similar result is found after looking at the beliefs of Receiver in information set 2, reached when Sender sends a message of aligned preferences and a message about a product of bad quality. With beliefs (A; B; C; D) over the nodes of this information set, where A and C correspond to the good-quality nodes, equation (2) must apply:

(A + C) * 10 + (B + D) * 0 ≥ (A + C) * -1 + (B + D) * 3    (2)


So to achieve an equilibrium, equations (1) and (2) must hold which makes sure that Receiver will always accept.
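Because the acceptance conditions above are plain expected-payoff comparisons, they are easy to check mechanically. The sketch below does this for arbitrary beliefs; the function name and interface are my own, and the default payoffs are the section 3.2 Receiver payoffs (10 and 0 for accepting a good or bad product, -1 and 3 for rejecting one).

    from fractions import Fraction

    def receiver_best_response(p_good, accept=(10, 0), reject=(-1, 3)):
        """Return 'A' or 'R' given the believed probability that the product
        is of good quality; accept/reject hold the (good, bad) payoffs."""
        ev_accept = p_good * accept[0] + (1 - p_good) * accept[1]
        ev_reject = p_good * reject[0] + (1 - p_good) * reject[1]
        return "A" if ev_accept >= ev_reject else "R"

    # Pooling on good quality: both qualities equally likely, so accept (5 vs 1).
    assert receiver_best_response(Fraction(1, 2)) == "A"

    # Equations (1) and (2): accepting requires a good-quality belief of at least 3/14.
    assert receiver_best_response(Fraction(3, 14)) == "A"
    assert receiver_best_response(Fraction(3, 14) - Fraction(1, 100)) == "R"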

The equilibrium can be summarized as follows:
• Beliefs of Receiver:
  o Information set 1: [(0.5; 0.5; 0; 0)]
  o Information set 2: [(A; B; C; D)] with A+B+C+D = 1 and equation (2) satisfied
  o Information set 3: [(0; 0; 0.5; 0.5)]
  o Information set 4: [(W; X; Y; Z)] with W+X+Y+Z = 1 and equation (1) satisfied
• Actions of Receiver: {A, A, A, A}
• Actions of Sender: {T T + + G G + + G G} with + ∈ {G, B}

4.1.3 The truth with aligned preferences, pooling with a conflict of interest

This subsection considers the situation where Sender tells the truth about the preference relation and about the quality of the product when preferences are aligned. However, he always sends a message of good quality when preferences are misaligned. The yellow lines in figure 3 show these actions of Sender.

In the case of aligned preferences, Receiver exactly knows the quality of the product and therefore she can make a decision based on the quality when preferences are aligned. Receiver wants to maximize her expected payoff and therefore she accepts the product in information set 1 and she rejects the product in information set 2.

In the previous subsection, where Sender with different types always sends a message of good quality, it is shown that it is better for Receiver to accept the product


in information set 3 than to reject it. Namely, accepting the product gives her an expected payoff of 5, which is higher than the expected payoff of rejecting, which is 1.

Sender with aligned preferences does not have an incentive to deviate from the message about the quality of the product because deviating decreases his expected payoff. Next to that, this Sender also does not want to deviate from the message about the preferences because then he has to pay the lying costs.

Also Sender with misaligned preferences does not have an incentive to deviate from both the message about the quality of the product and from the message about the preference relation. By deviating from the message about the quality of the product his expected payoff can be at most as high as sending a message of good quality. In addition, Sender with misaligned preferences has to pay a lying cost if he does not tell the truth about the preference relation. Therefore he also does not want to deviate from the message about the preference relation. So this situation leads to an equilibrium because no one has an incentive to deviate. The equilibrium is summarized below.

• Beliefs of Receiver:
  o Information set 1: [(1; 0; 0; 0)]
  o Information set 2: [(0; 1; 0; 0)]
  o Information set 3: [(0; 0; 0.5; 0.5)]
  o Information set 4: [(W; X; Y; Z)] with W+X+Y+Z = 1
• Actions of Receiver: {A, R, A, *} with * = A if W + Y ≥ 3/14 and * = R if W + Y ≤ 3/14, where W and Y are the good-quality nodes
• Actions of Sender: {T T + + G B + + G G} with + ∈ {G, B}


4.2 Pooling equilibrium

So far, the probabilities α and 1-α did not matter for constructing the equilibria. However, this can have an influence when Sender pools on the message of the preference relation. It is not intuitive to lie about aligned preferences and therefore I do not consider the situation where Sender pools on the message of misaligned preferences. There is also no reason to lie about the quality of the product when the quality is good. So in the situations described below, Sender tells the truth about the quality if the quality is good.

Next to that, I assume that Sender lies about a product of bad quality if he also lied about the preference relation. In the case of aligned preferences, I consider two situations: one where he lies about the quality of a bad product and one where he tells the truth about it. The latter situation is plausible because a Sender who has already been honest once may simply be an honest Sender. Both situations for aligned preferences are discussed in this section.


4.2.1 Situation 1: Pooling on good quality

In a situation where Sender always sends a message of aligned preferences and a message about a product of good quality, Receiver cannot update her beliefs; she only knows the probability of being at each node of information set 1. These probabilities are shown in figure 4. Receiver accepts the product in information set 1 if the following equation holds:

0.5 * 10 + 0.5 * 0 ≥ 0.5 * -1 + 0.5 * 3    (3)

since the probability that the product is of good quality equals α/2 + (1-α)/2 = 0.5. This equation is true for every value of α ∈ (0,1), so Receiver will always accept. Sender does not have an incentive to deviate from the message about the quality of the product when Receiver also accepts in information set 2. With beliefs (K; L; M; N) over the nodes of that information set, where K and M denote the good-quality nodes, Receiver accepts there if K + M ≥ 3/14. Next to that, Sender will not lie about the preferences when the preferences are aligned and Receiver always accepts in information sets 3 and 4, because lying would reduce his payoff through the lying costs.

On the other hand, for misaligned preferences it is better for Sender to deviate by telling the truth about the preference relation instead of lying about it if Receiver always accepts in information sets 3 and 4. Sender then avoids the lying costs, so his expected payoff increases. So Receiver has to reject in information sets 3 and 4 to make sure that Sender with misaligned preferences does not want to deviate.

Note that Sender with aligned preferences also does not have an incentive to deviate from the message about the preference relation if Receiver rejects in information sets 3 and 4. Namely, his expected payoff of telling the truth about the preference relation is equal to 5, but if he lies it is only 0.


However, Receiver only rejects in information set 3 if W + Y ≤ 3/14, where W and Y are the probabilities of the good-quality nodes of that information set. In addition, she only rejects in information set 4 if A + C ≤ 3/14, with the letters defined analogously.

To conclude, there is only one equilibrium for this situation if the three equations mentioned above hold. This equilibrium can be summarized as follows:

• Beliefs of Receiver:
  o Information set 1: [(α/2; α/2; (1-α)/2; (1-α)/2)]
  o Information set 2: [(K; L; M; N)] with K+L+M+N = 1 and K + M ≥ 3/14
  o Information set 3: [(W; X; Y; Z)] with W+X+Y+Z = 1 and W + Y ≤ 3/14
  o Information set 4: [(A; B; C; D)] with A+B+C+D = 1 and A + C ≤ 3/14
• Actions of Receiver: {A, A, R, R}
• Actions of Sender: {T L + + G B + + G G} with + ∈ {G, B}

The equilibrium summarized above is based on the fixed lying cost of 1. However, Sender with misaligned preferences can have an incentive to deviate from the message about the preference relation if the lying costs, say P, are high enough and Receiver rejects in information sets 3 and 4. The calculations below show that Sender with misaligned preferences has an incentive to deviate if the lying costs are higher than 7.5, so for P > 7.5 there is no equilibrium.

• Expected payoff of telling the truth (and being rejected in information set 3 or 4): 0.5 * -1 + 0.5 * 1 = 0
• Expected payoff of lying (and being accepted in information set 1): 0.5 * (10-P) + 0.5 * (5-P) = 7.5 - P
• Incentive to deviate if: 0 > 7.5 - P ↔ P > 7.5


4.2.2 Situation 2: The truth with aligned preferences, pooling with a conflict of interest

This subsection describes a situation where Sender again pools on a message of aligned preferences. The difference with the previous subsection is that in this case Sender tells the truth if the preferences are aligned. The yellow lines in figure 5 show the actions of Sender. Next to that, the probabilities of reaching a node in information set 1, which belong to the belief system of Receiver, are represented in the figure; these probabilities are obtained with Bayesian updating. The probability of reaching the node in information set 1 where Sender with aligned preferences tells the truth about the preference relation and about a product of good quality is calculated as follows:

• Step 1: the probabilities of reaching the four states in information set 1 are equal to α/2, 0, (1-α)/2 and (1-α)/2 from the top node to the bottom node in figure 5, respectively.

• Step 2: use Bayesian updating:

P(top node | information set 1) = (α/2) / (α/2 + 0 + (1-α)/2 + (1-α)/2) = α / (2-α)

The other two nodes that are reached with positive probability each get posterior probability (1-α)/(2-α). Based on these beliefs, equation (4) indicates the condition under which Receiver will accept the product after a message of good quality in information set 1:

(1/(2-α)) * 10 + ((1-α)/(2-α)) * 0 ≥ (1/(2-α)) * -1 + ((1-α)/(2-α)) * 3    (4)

where 1/(2-α) = α/(2-α) + (1-α)/(2-α) is the probability of a product of good quality. Equation (4) simplifies to 11 ≥ 3(1-α), so it holds for every value of α between 0 and 1 and Receiver always accepts in this information set. In information set 2 Receiver rejects because a payoff of 3 is larger than a payoff of 0. The actions of Receiver in information sets 1 and 2 are shown in figure 5 by the grey and green lines respectively. As also found in the previous subsection, accepting is independent of α, but in this case Receiver rejects after the message about a product of bad quality in information set 2. Sender has no reason to deviate from his message about the quality of the product under either preference relation.

However, Sender with misaligned preferences wants to deviate from the message about the preference relation if Receiver accepts the product in information sets 3 and 4. So to reach an equilibrium, Receiver has to reject the product in these two sets, but she only rejects if that is optimal for her. This is the case for information set 3 if equation (5) holds; in the same way, it is calculated that equation (6) must hold for the probabilities in information set 4:

(W + Y) * -1 + (X + Z) * 3 ≥ (W + Y) * 10 + (X + Z) * 0    (5)

(A + C) * -1 + (B + D) * 3 ≥ (A + C) * 10 + (B + D) * 0    (6)

Here W, Y and A, C again denote the good-quality nodes of the respective information sets.


The equilibrium in this situation can be summarized as follows14:
• Beliefs of Receiver:
  o Information set 1: [(α/(2-α); 0; (1-α)/(2-α); (1-α)/(2-α))]
  o Information set 2: [(0; 1; 0; 0)]
  o Information set 3: [(W; X; Y; Z)] with W+X+Y+Z = 1 and equation (5) satisfied
  o Information set 4: [(A; B; C; D)] with A+B+C+D = 1 and equation (6) satisfied
• Actions of Receiver: {A, R, R, R}
• Actions of Sender: {T L + + G B G G + +} with + ∈ {G, B}

14 Note that this equilibrium does not exist if the lying costs are higher than 7.5. The calculations are the same as described in section 4.2.1.


4.3 Expected payoffs

Four equilibria are found in the five situations described above. In this part I calculate which of these equilibria is better for each player separately and which one is better for total welfare. I start with the expected payoffs of Sender, followed by the expected payoffs of Receiver, and in the end I show which equilibrium is optimal for total welfare.

4.3.1 Sender

First I compare the two equilibria found in the sections where Sender tells the truth about the preference relation. The expected payoff of Sender for the equilibrium found in section 4.1.2 is calculated as follows:

α * (0.5 * 10 + 0.5 * 0) + (1-α) * (0.5 * 10 + 0.5 * 5) = 5α + 7.5(1-α) = 7.5 - 2.5α    (7)

In a similar way, the expected payoff of Sender for the equilibrium in section 4.1.3 is calculated, which leads to a value of 7.5 - α. So for Sender it is better to reach the equilibrium of section 4.1.2 than that of section 4.1.3 if 7.5 - 2.5α ≥ 7.5 - α, which is equivalent to α ≤ 0. This indicates that it is better for Sender to reach the equilibrium of section 4.1.3 if α is positive, and that he is indifferent if α is equal to 0. This indifference is due to the fact that for α = 0 the whole population has misaligned preferences, so the situations of the two equilibria are identical.

For the two equilibria where Sender with different types always sends a message of aligned preferences, the expected payoff for Sender is equal to 6.5 - 1.5α for the equilibrium of section 4.2.1 and equal to 6.5 for the equilibrium of section 4.2.2. It can be shown that the equilibrium of section 4.2.2 is better for Sender than


the equilibrium of section 4.2.1 if α is positive. For α equal to 0 Sender is indifferent which is also due to the fact that the situations of the two equilibria are the same for a population with only misaligned preferences.

So the equilibrium of section 4.1.3 is at least as good as the equilibrium of section 4.1.2, and the equilibrium of section 4.2.2 is at least as good as the equilibrium of section 4.2.1. This is because in sections 4.1.3 and 4.2.2 Receiver exactly knows the quality of the product when preferences are aligned and can therefore make a better decision; the better decision for Receiver also gives the best expected payoff for Sender. In addition, I compare the equilibrium of section 4.1.3 with the one of section 4.2.2 to determine which of the two is best for Sender. This comparison is done in equation (8), which shows that the expected payoff of the equilibrium of section 4.1.3 is at least as high as that of the equilibrium in section 4.2.2. Therefore, it is optimal for Sender to tell the truth about the preference relation when preferences are aligned and 0 < α < 1.

7.5 - α ≥ 6.5 ⇔ α ≤ 1    (8)

4.3.2 Receiver

Similar calculations are done for Receiver to determine which equilibrium gives her the highest expected payoff. The expected payoff of Receiver for the equilibrium found in section 4.1.2 is equal to 5; the same value is found for the equilibrium in section 4.2.1. The expected payoffs of the equilibria in sections 4.1.3 and 4.2.2 are also equal to each other: both are 5 + 1.5α. By comparing these expected payoffs, it can be shown that the equilibria in sections 4.1.3 and 4.2.2 always give the highest expected payoff for Receiver. This is very intuitive because in these equilibria


there is a probability α of having aligned preferences, and when the preferences are aligned the quality of the product is revealed, so Receiver can make a better choice.

4.3.3 Total welfare

I combine the expected payoffs of Receiver and Sender to determine which equilibrium is better for total welfare. Equation (9) shows the comparison of the two equilibria in which Sender tells the truth about the preference relation, i.e. the equilibria of sections 4.1.2 and 4.1.3. The calculation shows for which values of α the equilibrium in section 4.1.2 is better for total welfare than the equilibrium in section 4.1.3; the outcome indicates that the equilibrium of section 4.1.3 is at least as good as the equilibrium of section 4.1.2.

(7.5 - 2.5α) + 5 ≥ (7.5 - α) + (5 + 1.5α) ⇔ 12.5 - 2.5α ≥ 12.5 + 0.5α ⇔ α ≤ 0    (9)

In a similar way it can be shown that for the total welfare the equilibrium of section 4.2.2 is at least as good as the equilibrium of section 4.2.1. These two equilibria are the ones where Sender with different types sends a message of aligned preferences.

So the equilibrium of section 4.1.3 is at least as good for the total welfare as the equilibrium of section 4.1.2 and the equilibrium of section 4.2.2 is at least as good as the equilibrium of section 4.2.1. This corresponds with the results found by comparing expected payoffs of Sender. Next to that, the expected payoff for the total welfare of the equilibrium of section 4.1.3 is higher than the expected payoff of the equilibrium of section 4.2.2 for 0<α<1. Also, this is the same result found by comparing the expected


payoffs of Sender for these two equilibria. For α equal to 1 the expected payoffs of the two equilibria are equal.

To conclude, it is better for the total welfare if Sender tells the truth about the preference relation when the probability of having aligned preferences is between 0 and 1. This result is due to the lying costs for Sender. Next to that, Sender has to tell the truth about the quality of the product when preferences are aligned but he has to send a message of good quality when preferences are misaligned to get the highest expected payoff for the total welfare.
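The comparisons in this section reduce to ranking linear functions of α. The sketch below tabulates the expected payoffs derived above for the four section-4 equilibria (including the 6.5 - 1.5α reconstructed for section 4.2.1), so the welfare ranking can be checked at a glance; the dictionary layout is my own.

    # (Sender payoff, Receiver payoff) as functions of alpha, per equilibrium.
    def payoffs(alpha):
        return {
            "4.1.2": (7.5 - 2.5 * alpha, 5.0),                # truth about prefs, pool on good
            "4.1.3": (7.5 - 1.0 * alpha, 5.0 + 1.5 * alpha),  # truth if aligned, pool if conflict
            "4.2.1": (6.5 - 1.5 * alpha, 5.0),                # pool on aligned and on good
            "4.2.2": (6.5, 5.0 + 1.5 * alpha),                # pool on aligned, truth if aligned
        }

    for alpha in (0.25, 0.5, 0.75):
        table = payoffs(alpha)
        best = max(table, key=lambda k: sum(table[k]))  # highest total welfare
        print(alpha, best, {k: sum(v) for k, v in table.items()})
    # Prints "4.1.3" for every 0 < alpha < 1, matching the conclusion above.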

5 Comparative statics

The situations and equilibria described in the previous section are independent of α. However, this does not have to be the case for any set of parameters, and other payoffs may also lead to other equilibria. Therefore I consider a new set of parameters with different payoffs but the same probabilities. In section 4 the expected payoff of accepting a product was higher than the expected payoff of rejecting it; this relation is reversed for the new payoffs, which are shown in table 2. Further, I still assume a cost of 1 for lying about the preference relation. The same strategies are discussed in this section, so the results of both cases can be compared with each other.
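Table 2 is also referenced but missing from this version of the text. The reconstruction below is pieced together from the calculations in sections 5.1 and 5.2: Receiver's payoffs are fully determined by those calculations, Sender's aligned payoffs are assumed to mirror Receiver's as in table 1, and the misaligned accept-bad cell follows the value 4 stated in section 5.1.1 (the calculation in section 5.2.2 instead treats it as 5). Each cell shows Sender's payoff before any lying cost, then Receiver's payoff.

    Table 2 (reconstructed)        Aligned      Misaligned
    Accept, good quality            5, 5          5, 5
    Accept, bad quality            -2, -2         4, -2
    Reject, good quality            0, 0          0, 0
    Reject, bad quality             4, 4          0, 4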


5.1 Separating equilibrium

In this section, I consider the same situations as described in section 4.1 to check for possible equilibria.

5.1.1 Always tell the truth

In a situation where Sender always tells the truth about both the preference relation and the quality of the product, the results are similar to those found in the previous example. Again, Receiver will accept the product after receiving a message about a product of good quality and reject after a message about a product of bad quality. However, Sender can increase his payoff by deviating in the situation with misaligned preferences and a product of bad quality: his utility is 0 when he tells the truth, but it increases to 4 when he deviates and says that the product is of good quality. So in this case there is no equilibrium where Sender is always honest.

5.1.2 Pooling on good quality

I first calculate the expected payoffs for Receiver to check whether there is an equilibrium where Sender tells the truth about the preference relation but always sends a message about a product of good quality. This calculation gives the following outcomes for both aligned and misaligned preferences:

• Accepting: 0.5 * 5 + 0.5 * -2 = 1.5
• Rejecting: 0.5 * 0 + 0.5 * 4 = 2

The expected payoffs show that it is better for Receiver to always reject the product after a message about a product of good quality, in contrast to the outcome in section 4.1.2. Next to that, Receiver has to reject the product after a message of bad quality, because otherwise Senders with aligned and with misaligned preferences both have a reason to deviate from their message about the quality


of the product. Receiver only rejects if the expected payoff of rejecting is higher than the expected payoff of accepting. This is the case if the two conditions below hold, where A, C and W, Y denote the good-quality nodes of the respective information sets:

• A + C ≤ 6/11 for information set 2
• W + Y ≤ 6/11 for information set 4

Given the mentioned actions of Receiver and given the lying costs, Sender will never lie about the preference relation. So again there is an equilibrium in which Sender tells the truth about the preference relation and always sends a message about a product of good quality.

The difference between this equilibrium and the equilibrium described in section 4.1.2 is that in the equilibrium of section 4.1.2 Receiver had to accept in every information set but in this case she always has to reject. The actions of Sender and Receiver are represented in figure 6 by the yellow and green lines respectively. The equilibrium is summarized below. Note that rejecting each time is not optimal for the welfare because, for example, accepting a product of good quality instead of rejecting this product will increase the payoffs of both players.

• Beliefs of Receiver:
  o Information set 1: [(0.5; 0.5; 0; 0)]
  o Information set 2: [(A; B; C; D)] with A+B+C+D = 1 and A + C ≤ 6/11
  o Information set 3: [(0; 0; 0.5; 0.5)]
  o Information set 4: [(W; X; Y; Z)] with W+X+Y+Z = 1 and W + Y ≤ 6/11
• Actions of Receiver: {R, R, R, R}
• Actions of Sender: {T T + + G G + + G G} with + ∈ {G, B}
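Under the table 2 payoffs the same best-response computation as in section 4.1.2 flips to rejection; a minimal check, reusing the earlier receiver_best_response sketch with the reconstructed payoffs (the indifference threshold moves from 3/14 to 6/11):

    from fractions import Fraction

    def receiver_best_response(p_good, accept=(5, -2), reject=(0, 4)):
        ev_accept = p_good * accept[0] + (1 - p_good) * accept[1]
        ev_reject = p_good * reject[0] + (1 - p_good) * reject[1]
        return "A" if ev_accept >= ev_reject else "R"

    assert receiver_best_response(Fraction(1, 2)) == "R"   # 1.5 < 2: reject the pool
    assert receiver_best_response(Fraction(6, 11)) == "A"  # indifference at 6/11
    assert receiver_best_response(Fraction(6, 11) - Fraction(1, 100)) == "R"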


5.1.3 The truth with aligned preferences, pooling with a conflict of interest

In this situation Sender tells the truth about the quality of the product when preferences are aligned. However, when preferences are misaligned he always sends a message of good quality.

In this case, Receiver exactly knows the quality of the product when preferences are aligned and therefore she accepts in information set 1 and rejects in information set 2. As shown in section 5.1.2, Receiver is better off rejecting than accepting in information set 3.

Regardless of the action of Receiver in information set 4, Sender with aligned preferences does not have an incentive to deviate, because then he would have to pay a lying cost. Next to that, he cannot improve his payoff by deviating from the message about the quality of the product.

However, the action of Receiver in information set 4 matters to make sure that Sender with misaligned preferences does not have an incentive to deviate from the message about the quality of the product. Namely, when Receiver accepts in information set 4, Sender with misaligned preferences is better off by sending a


message of bad quality instead of sending a message of good quality. Therefore Receiver has to reject in information set 4.

Given these actions of Receiver, still no equilibrium will be reached because now Sender with misaligned preferences has an incentive to deviate from the message about the preference relation. He can increase his expected payoff by sending a message of aligned preferences and by sending a message of good quality every time.

5.2 Pooling equilibrium

Again, I consider possible equilibria where Sender pools on the message of the preference relation.

5.2.1 Situation 1: Pooling on good quality

Despite the new payoffs, the actions of Sender and the probabilities in Receiver's belief system for information set 1 are the same as described in section 4.2.1. On the other hand, the condition under which Receiver accepts the product after a message of good quality differs. It is still based on the beliefs of Receiver, but now it is equal to the following equation:

0.5 * 5 + 0.5 * -2 ≥ 0.5 * 0 + 0.5 * 4    (10)

Equation (10) is impossible (1.5 ≥ 2) and therefore Receiver will always reject. Sender does not have an incentive to deviate from the message about the quality of the product when Receiver rejects in information set 2. Receiver rejects in that information set if A + C ≤ 6/11, with A+B+C+D = 1, the letters defined in figure 7 and A and C denoting the good-quality nodes. However, Sender with misaligned preferences can be better off by telling the truth about the preference relation when Receiver accepts in information set 3. In addition, Sender


with misaligned preferences also has an incentive to deviate when Receiver rejects in information set 3: this Sender can be better off by telling the truth about the preference relation to avoid the lying costs. So for any action of Receiver in information set 3 there is a Sender who has an incentive to deviate, and therefore this situation does not lead to an equilibrium.

5.2.2 Situation 2: The truth with aligned preferences, pooling with a conflict of interest

In this situation the actions of Sender and the probabilities of reaching the nodes in information set 1, which belong to Receiver's belief system, are the same as described in section 4.2.2. However, the condition under which Receiver accepts the product after a message of good quality is now equal to the following:

(1/(2-α)) * 5 + ((1-α)/(2-α)) * -2 ≥ (1/(2-α)) * 0 + ((1-α)/(2-α)) * 4 ⇔ α ≥ 1/6    (11)

So Receiver will accept the product after a message of good quality in most cases; only if a small part (less than 1/6) of the population has aligned preferences will she reject. In information set 2 Receiver will always reject because a payoff of 4 is larger than a payoff of -2. For both preference relations Sender does not have an incentive to deviate from the message about the quality of the product when Receiver rejects in information set 2 and α ≥ 1/6. However, Receiver has to reject the product in information sets 3 and 4 when α ≥ 1/6 to construct an equilibrium. Otherwise, Sender with misaligned preferences has an incentive to deviate from the message of aligned preferences, because by sending a message of misaligned preferences he avoids the lying costs. Receiver rejects in information sets 3 and 4 if W + Y ≤ 6/11 with W+X+Y+Z = 1 and A + C ≤ 6/11 with A+B+C+D = 1. The letters are defined in figure 8, with W, Y and A, C the good-quality nodes.

The equilibrium for α ≥ 1/6 can be summarized as follows:
• Beliefs of Receiver:
  o Information set 1: [(α/(2-α); 0; (1-α)/(2-α); (1-α)/(2-α))]
  o Information set 2: [(0; 1; 0; 0)]
  o Information set 3: [(W; X; Y; Z)] with W+X+Y+Z = 1 and W + Y ≤ 6/11
  o Information set 4: [(A; B; C; D)] with A+B+C+D = 1 and A + C ≤ 6/11
• Actions of Receiver: {A, R, R, R}
• Actions of Sender: {T L + + G B G G + +} with + ∈ {G, B}


The equilibrium summarized above does not exist if the lying costs are very high. Namely, Sender with misaligned preferences can have an incentive to deviate from the message about the preference relation if the lying costs are higher than 5. This can be shown as follows:

• Expected payoff of telling the truth (and being rejected in information set 3 or 4): 0.5 * 0 + 0.5 * 0 = 0
• Expected payoff of lying (and being accepted in information set 1), with P defined as the lying costs: 0.5 * (5-P) + 0.5 * (5-P) = 5 - P
• Incentive to deviate if: 0 > 5 - P ↔ P > 5

For the situation with α < 1/6, Sender does not have an incentive to deviate from the message about the quality of the product. However, there is no equilibrium for this


situation. This can be shown by looking at information set 3. When Receiver accepts in this information set then Sender with, for example, misaligned preferences has an incentive to deviate from the message of aligned preferences by sending a message of misaligned preferences but by still sending a message of a product of good quality. When Receiver always rejects in information set 3 then again, Sender with misaligned preferences has an incentive to deviate from the message about the preference relation. This Sender can avoid the lying costs by telling the truth instead of lying about the conflict of interest.
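The α ≥ 1/6 boundary in equation (11) can also be verified symbolically. A short check with sympy, again assuming the reconstructed table 2 Receiver payoffs (5 and -2 for accepting a good or bad product, 0 and 4 for rejecting one):

    import sympy as sp

    alpha = sp.symbols("alpha", positive=True)
    p_good = 1 / (2 - alpha)           # posterior of good quality in information set 1
    p_bad = (1 - alpha) / (2 - alpha)

    ev_accept = 5 * p_good - 2 * p_bad
    ev_reject = 0 * p_good + 4 * p_bad

    # Receiver accepts iff ev_accept >= ev_reject; solve for the boundary.
    print(sp.solve(sp.Eq(ev_accept, ev_reject), alpha))  # [1/6]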

5.3 Expected payoffs

For the new payoffs, there are only two equilibria found in the five situations described above so I can determine which of the two equilibria is better for the welfare. The calculations are done in a similar way as described in section 4.3.

The expected payoffs for Sender are equal to 2α and 4 - 0.5α for the equilibria found in sections 5.1.2 and 5.2.2 respectively. By comparing these two expected payoffs it can be shown that the equilibrium found in section 5.2.2 always leads to the highest expected payoff for α ∈ (0,1). Note, however, that α has to be at least as high as 1/6 to reach the equilibrium of section 5.2.2.

The equilibrium that leads to the highest expected payoff for Receiver depends on α. Receiver has an expected payoff of 2 for the equilibrium of section 5.1.2 and an expected payoff of 1.5 + α for the equilibrium found in section 5.2.2. By comparing these two, it is found that for α ≤ 0.5 the equilibrium in section 5.1.2 gives the highest expected payoff. So when at most half of the population has aligned preferences, it is better for Receiver's welfare to tell the truth about the preference relation and always send a message of good quality than to lie about the preference relation when preferences are misaligned. For α ≥ 0.5, the equilibrium found in section 5.2.2 is better for the welfare of Receiver.


By combining the expected payoffs of Receiver and Sender for both equilibria, and by looking for values of α for which the equilibrium of section 5.1.2 would be better for total welfare than the equilibrium of section 5.2.2, I find the relation shown in equation (12): the equilibrium of section 5.2.2 yields higher total welfare whenever α < 7/3, which holds for every α ∈ (0,1). However, α needs to be at least as high as the threshold derived above to construct the equilibrium of section 5.2.2.

2 + 2α < (4 − 0.5α) + (1.5 + α) ⟺ 1.5α < 3.5 ⟺ α < 7/3   (12)

6 Experimental design

It is interesting to see whether the equilibria found in the previous sections are also realized in practice. Therefore, this section contains an experimental design to test whether an equilibrium is reached and, if so, which one. The experiment can be executed using computers.

The experiment requires an even number of participants per session, and at least 5 sessions have to be run. The participants are randomly divided into two equal groups of, for example, about 10 participants each; they are not allowed to communicate with each other. The participants in one group all have the role of Sender and the participants in the other group all have the role of Receiver. Next, every participant is seated at a computer and each Sender is randomly matched with a Receiver. They play two rounds: one round with the payoffs given in section 3 and one round with the payoffs given in section 5. The order of the two rounds is randomly determined per session, so in some sessions the payoffs of section 3 are used first, while in other sessions this order is reversed. A Sender–Receiver pair does not change during a round, but after round 1 the pairs are randomly rematched so that participants cannot build a reputation. All participants receive a sheet with the payoffs for every possible situation in round 1 and a sheet with the payoffs for every possible situation in round 2. They can consult these sheets during the whole game.
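As an illustration of this matching protocol, the role assignment and (re)matching could look like the following minimal sketch; the function and variable names are hypothetical, not part of the thesis.

```python
import random

def assign_roles_and_match(participants):
    """Randomly split an even-sized group into Senders and Receivers
    and pair each Sender with a Receiver."""
    assert len(participants) % 2 == 0, "an even number of participants is required"
    shuffled = participants[:]              # copy, so the input list stays intact
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return list(zip(shuffled[:half], shuffled[half:]))

def rematch(senders, receivers):
    """Rematch existing Senders to Receivers between rounds,
    so that no pair can build a reputation across rounds."""
    receivers = receivers[:]
    random.shuffle(receivers)
    return list(zip(senders, receivers))
```

Calling rematch between round 1 and round 2 produces the fresh random pairs while keeping each participant's role fixed.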

Round 1 can start once the groups have been formed and the payoff sheets have been handed out. First, the computer randomly determines the preference relation within a pair, and this relation is shown on the screen of Sender. Next, Sender clicks on one of two options: lying about the preference relation or telling the truth about it. When Sender has chosen one of these options, Receiver sees a message with a preference relation on her screen. This message depends on the choice of Sender: for example, when preferences are aligned and Sender chooses to lie about the relation, Receiver sees a message of misaligned preferences.

After that, the computer also randomly determines the quality of the product. Again, this is only shown to Sender, and this time he clicks on one of the following two options: lying about the quality of the product or telling the truth about it. Just as after the first choice of Sender, Receiver gets a message on her screen based on the option Sender has chosen about the quality of the product.

Sender has then finished his actions in round 1, and only Receiver still has to decide whether to accept or reject the product. She ticks her choice on the screen, after which the computer shows Sender's payoff to Sender and Receiver's payoff to Receiver. After this, round 1 is finished.

For the second round, the computer generates new random Sender–Receiver pairs. Then all the previous steps are repeated, except that the payoffs are replaced by those on the sheet for round 2. When the final payoffs of round 2 have been shown to the participants, the game is finished. For each participant the payoff of round 2 is added to the payoff of round 1, and this total is the amount of money the participant has earned during the game.
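The flow of a single round can be summarized in the sketch below; the helper interfaces (sender_choice, receiver_choice, payoff_table) are hypothetical placeholders, and the actual payoff numbers are those on the round sheets, not shown here.

```python
import random

def play_round(sender_choice, receiver_choice, payoff_table):
    """One round for a single Sender-Receiver pair.

    sender_choice(stage, true_value)        -> "truth" or "lie"
    receiver_choice(pref_msg, quality_msg)  -> "accept" or "reject"
    payoff_table[(pref, quality, pref_msg, quality_msg, action)] -> (u_sender, u_receiver)
    """
    flip = {"aligned": "misaligned", "misaligned": "aligned",
            "good": "bad", "bad": "good"}

    # Stage 1: nature draws the preference relation; only Sender sees it.
    pref = random.choice(["aligned", "misaligned"])
    pref_msg = pref if sender_choice("preferences", pref) == "truth" else flip[pref]

    # Stage 2: nature draws the product quality; again only Sender sees it.
    quality = random.choice(["good", "bad"])
    quality_msg = quality if sender_choice("quality", quality) == "truth" else flip[quality]

    # Stage 3: Receiver accepts or rejects based on the two messages.
    action = receiver_choice(pref_msg, quality_msg)
    return payoff_table[(pref, quality, pref_msg, quality_msg, action)]
```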


After the game, the participants fill in a questionnaire with questions about their gender, age and residence. Next to that, they answer 10 questions that measure their risk aversion, based on the game described by Holt and Laury (2002); the participants are finished once this questionnaire is completed. Each of the 10 questions offers two options, and participants select the option they prefer. The options for all 10 questions are presented in table 3. The two payoffs of option B are more spread out than the two payoffs of option A, so option B is the riskier choice; the choices involve the most risk in the first questions and become less risky in the later ones.
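For concreteness, the 10-question menu and its expected values can be generated as in the sketch below. The monetary amounts are the standard ones from Holt and Laury (2002); the amounts actually used in table 3 may differ.

```python
def holt_laury_menu(high_a=2.00, low_a=1.60, high_b=3.85, low_b=0.10):
    """Build the ten Holt-Laury decisions with their expected values.

    Option A is the 'safe' lottery (its two payoffs lie close together),
    option B the 'risky' one (its two payoffs are far apart). The
    probability of the high payoff rises from 1/10 in question 1
    to 10/10 in question 10."""
    menu = []
    for k in range(1, 11):
        p = k / 10                              # probability of the high payoff
        menu.append({"question": k,
                     "p_high": p,
                     "EV_A": round(p * high_a + (1 - p) * low_a, 2),
                     "EV_B": round(p * high_b + (1 - p) * low_b, 2)})
    return menu
```

With these amounts, a risk-neutral participant switches from option A to option B at question 5, where the expected value of B first exceeds that of A; switching later than that indicates risk aversion.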

7 Conclusion & discussion

In this thesis I consider situations with a possible conflict of interest between a well-informed Sender and an uninformed Receiver. Sender can send messages to Receiver that contain the truth or a lie about the preference relation and about the quality of the product. The aim of this thesis is to construct possible equilibria and to look at the welfare effects of these equilibria.

In this model, I construct the separating and pooling equilibria for situations which are intuitive. For both chosen payoff specifications, there is an equilibrium where both types of Sender tell the truth about the preference relation but always send a message of a product of good quality. Next to that, for both cases there is an equilibrium in the situation where both types of Sender send a message of aligned preferences, tell the truth about the product when preferences are aligned, but always send a message of a product of good quality when there is a misalignment in the preferences. However, in the case where the expected payoffs of rejecting a product are higher than the expected payoffs of accepting that product, such an equilibrium only exists if α is sufficiently high: the proportion of the population with aligned preferences has to be at least a certain threshold to reach this equilibrium. Receiver can make a better decision when preferences are aligned, because then she knows the quality of the product; so the larger the proportion of the population with aligned preferences, the more likely it is that Receiver can make an optimal decision.

There are two more equilibria left for the situation where the expected payoffs of accepting a product are higher than the expected payoffs of rejecting that product. One of these is an equilibrium where Sender always tells the truth about the preference relation and about the quality of the product when preferences are aligned; however, in that equilibrium Sender with misaligned preferences always sends a message of a product of good quality. In the other equilibrium, both types of Sender always send a message of aligned preferences and always send a message of good quality.

The equilibrium that is better for welfare depends on α in the case where the expected payoffs of accepting a product are higher than the expected payoffs of rejecting that product. For 0 < α < 1 it is always better that Sender tells the truth about both the preference relation and the quality of the product when preferences are aligned, but sends a message of good quality when preferences are misaligned. So Sender with misaligned preferences does not tell the truth about the quality of the product when the quality is bad. This result is in line with the finding of Crawford and Sobel (1982) that Sender will not reveal all his information to Receiver when preferences are not fully aligned.

For the equilibria with the payoffs defined in table 2, it is shown that the equilibrium of section 5.2.2 is better for the total welfare. In that equilibrium Sender always sends a message of aligned preferences so Sender does not provide all his information about the preference relation. This result is also in agreement with the findings of Crawford and Sobel (1982) that Sender will not reveal all his information.

I have to mention that the actions of Receiver and the welfare-optimal equilibrium depend on the chosen parameters. For example, in the situation where both types of Sender tell the truth about the preference relation but always send a message of good quality, the actions of Receiver differ between the two payoff specifications. In that situation, Receiver has to accept in every information set to reach an equilibrium when the expected payoff of accepting a product of good quality is higher than the expected payoff of rejecting a product of bad quality; however, Receiver always has to reject to construct an equilibrium for the same situation when the relation between these expected payoffs is reversed.

Another point is that the equilibria found do not have to be welfare optimal. The equilibrium described in section 5.1 is an example of an equilibrium that is not optimal: in this equilibrium both players can, for example, increase their expected payoffs by coordinating on a different equilibrium.
