Advances in Network Theory

Paper Session

Saturday, Jan. 4, 2020 10:15 AM - 12:15 PM (PST)

Marriott Marquis, Cardiff
Hosted By: Econometric Society
  • Chair: Matthew O. Jackson, Stanford University

Learning Through the Grapevine: The Impact of Message Mutation, Transmission Failure, and Deliberate Bias

Matthew O. Jackson, Stanford University
Suraj Malladi, Stanford University
David McAdams, Duke University

Abstract

We examine how well someone learns when information from an original source only reaches them after repeated person-to-person noisy relay (oral or written). We consider three distortions in communication: random mutation of message content, random failure of message transmission, and deliberate biasing of message content. We characterize how many independent chains a learner needs to access in order to learn accurately. With only mutations and transmission failures, there is a sharp threshold such that a receiver fully learns if they have access to more chains than the threshold number, and learns nothing if they have fewer. A receiver learns not only from the content, but also from the number of received messages, which is informative if agents' propensity to relay a message depends on its content. We bound the relative learning that is possible from these two different forms of information. Finally, we show that learning can be completely precluded by the presence of biased agents who deliberately relay their preferred message regardless of what they have heard. Thus, the type of communication distortion determines whether learning is simply difficult or impossible: random mutations and transmission failures can be overcome with sufficiently many sources and chains, while biased agents (unless they can be identified and ignored) cannot. We show that partial learning can be recovered by limiting the number of contacts to whom an agent can pass along a given message, a policy that some platforms are starting to use.
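The role of the number of independent chains can be illustrated with a minimal simulation sketch. This is not the paper's model: the per-hop probabilities `p_mutate` and `p_drop`, the fixed chain length, and the majority-vote decision rule are all illustrative assumptions.

```python
import random

def relay_chain(true_state, length, p_mutate, p_drop):
    """Pass a binary message down one chain of relays; each hop may
    flip the message (mutation) or fail to pass it on (transmission
    failure). Returns the arriving message, or None if it was dropped."""
    msg = true_state
    for _ in range(length):
        if random.random() < p_drop:
            return None            # transmission failure ends the chain
        if random.random() < p_mutate:
            msg = 1 - msg          # random mutation of message content
    return msg

def learn(true_state, n_chains, length, p_mutate, p_drop):
    """Receiver takes a majority vote over the messages that survive
    n_chains independent chains; returns None if nothing arrives."""
    received = [m for m in (relay_chain(true_state, length, p_mutate, p_drop)
                            for _ in range(n_chains))
                if m is not None]
    if not received:
        return None
    return int(sum(received) > len(received) / 2)
```

Holding the per-hop error rates fixed, accuracy across repeated trials rises with the number of independent chains, in the spirit of the threshold characterization; a deliberately biased relay that always sends its preferred message would break this, since its output carries no information about the true state.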

Learning and Selfconfirming Equilibria in Network Games

Pierpaolo Battigalli, Bocconi University
Fabrizio Panebianco, Catholic University of the Sacred Heart-Milano
Paolo Pin, Bocconi University

Abstract

Consider a set of agents who play a network game repeatedly. Agents may not know the network. They may even be unaware that they are interacting with other agents in a network. Possibly, they just understand that their payoffs depend on an unknown state that in reality is an aggregate of the actions of their neighbors. Each time, every agent chooses an action that maximizes her subjective expected payoff and then updates her beliefs according to what she observes. In particular, we assume that each agent only observes her realized payoff. A steady state of such dynamics is a selfconfirming equilibrium given the assumed feedback. We characterize the structure of the set of selfconfirming equilibria in network games and we relate selfconfirming and Nash equilibria. Thus, we provide conditions on the network under which the Nash equilibrium concept has a learning foundation, despite the fact that agents may have incomplete information. In particular, we show that the choice of being active or inactive in a network is crucial in determining whether agents can make correct inferences about the payoff state and hence play the best reply to the truth in a selfconfirming equilibrium. We also study learning dynamics and show how agents can get stuck in non-Nash selfconfirming equilibria. In such dynamics, the set of inactive agents can only grow over time, because once an agent finds it optimal to be inactive, she gets no feedback about the payoff state, hence she does not change her beliefs and remains inactive.
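The inactivity trap can be illustrated with a minimal sketch. The linear-quadratic payoff u_i = x_i(1 - theta_i) - x_i^2/2 (so the best reply is max(0, 1 - theta_i)) and the three-node line network are illustrative assumptions, not the paper's specification.

```python
def best_reply(belief):
    """Best reply when payoff is u_i = x_i * (1 - theta_i) - x_i**2 / 2,
    evaluated at the agent's subjective belief about theta_i."""
    return max(0.0, 1.0 - belief)

def dynamics(adj, beliefs, rounds=20):
    """Best-reply dynamics with payoff-only feedback: an active agent's
    realized payoff reveals the true aggregate theta_i of her neighbors'
    actions; an inactive agent gets no feedback and keeps her belief."""
    beliefs = dict(beliefs)
    actions = {}
    for _ in range(rounds):
        actions = {i: best_reply(beliefs[i]) for i in adj}
        for i in adj:
            if actions[i] > 0:   # active: feedback corrects the belief
                beliefs[i] = sum(actions[j] for j in adj[i])
    return beliefs, actions
```

On the line network 0-1-2, an agent 0 who starts out pessimistic (belief 2.0) is inactive from the first round, never receives feedback, and so stays inactive forever; the resulting steady state is selfconfirming but not Nash, since agent 0's best reply to the true aggregate would be positive.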

Targeting Interventions in Networks

Andrea Galeotti, London Business School
Benjamin Golub, Harvard University
Sanjeev Goyal, University of Cambridge

Abstract

We study the design of optimal interventions in network games, where individuals' incentives to act are affected by their network neighbors' actions. A planner shapes individuals' incentives, seeking to maximize the group's welfare. We characterize how the planner's intervention depends on the network structure. A key tool is the decomposition of any possible intervention into principal components, which are determined by diagonalizing the adjacency matrix of interactions. There is a close connection between the strategic structure of the game and the emphasis of the optimal intervention on various principal components: in games of strategic complements (substitutes), interventions place more weight on the top (bottom) principal components. For large budgets, optimal interventions are simple: they target a single principal component.
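The decomposition step can be sketched numerically: diagonalizing a symmetric adjacency matrix yields orthonormal principal components, and any intervention vector splits uniquely into weights on those components. The 4-node line network and the intervention vector `y` below are hypothetical examples.

```python
import numpy as np

# Hypothetical 4-node line network (symmetric adjacency matrix).
G = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Diagonalize: the eigenvectors of G are the principal components
# of the interaction structure.
eigvals, U = np.linalg.eigh(G)        # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]     # sort from top component down
eigvals, U = eigvals[order], U[:, order]

# Any intervention vector y decomposes uniquely as y = U @ a,
# where a = U.T @ y gives its weight on each principal component.
y = np.array([1.0, 0.5, 0.5, 1.0])
a = U.T @ y
```

With strategic complements, the optimal intervention would load weight onto the first column of `U` (the top eigenvalue's component); with substitutes, onto the last.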

Naive Learning with Uninformed Agents

Abhijit Banerjee, Massachusetts Institute of Technology
Emily Breza, Harvard University
Arun Chandrasekhar, Stanford University
Markus Mobius, Microsoft Research

Abstract

The DeGroot model has emerged as a credible alternative to the standard Bayesian model for studying learning on networks, offering a natural way to model naive learning in a complex setting. One unattractive aspect of this model is the assumption that the process starts with every node in the network having a signal. We study a natural extension of the DeGroot model that can deal with sparse initial signals. We show that an agent's social influence in this generalized DeGroot model is essentially proportional to the number of uninformed nodes who will hear about an event for the first time via this agent. This characterization result then allows us to relate network geometry to information aggregation. We identify an example of a network structure where essentially only the signal of a single agent is aggregated, which helps us pinpoint a condition on the network structure necessary for almost full aggregation. We then simulate the modeled learning process on a set of real-world networks, which exhibit 21.6% information loss on average. We also explore how correlation in the location of seeds can exacerbate aggregation failure: simulations with the same real-world network data show that with clustered seeding, information loss climbs to 35%.
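The influence characterization, proportional to the number of uninformed nodes who first hear via an agent, suggests a simple breadth-first tally. The sketch below is an illustrative assumption, not the paper's algorithm: `adj` maps each node to its neighbor list, `seeds` are the initially informed nodes, and ties are broken by seed order.

```python
from collections import deque

def first_hearers(adj, seeds):
    """For each seed, count the uninformed nodes that first hear about
    the event through a path originating at that seed (ties broken by
    the order in which seeds appear)."""
    source = {s: s for s in seeds}   # seed each node first heard from
    queue = deque(seeds)
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in source:      # w hears for the first time
                source[w] = source[v]
                queue.append(w)
    counts = {s: 0 for s in seeds}
    for v, s in source.items():
        if v not in seeds:
            counts[s] += 1
    return counts
```

On a line network 0-1-2-3-4 seeded at the two endpoints, the left seed reaches nodes 1 and 2 first while the right seed reaches only node 3, so the tally is asymmetric even though the seeds look symmetric, a small instance of how geometry and seed location shape aggregation.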
JEL Classifications
  • D8 - Information, Knowledge, and Uncertainty