Network Economics Publications


By-subject quick links: [Cloud computing] [Computational complexity] [Cost sharing] [Cournot competition] [Electricity markets] [Epidemics] [Game theoretic control] [General equilibrium theory] [Learning in games] [Market power] [Matchings] [Mechanism design] [Price of anarchy] [Privacy] [Revealed preference theory] [Scheduling and routing games] [Social networks] [Theory of the consumer]


Cloud computing

  • Jonatha Anselmi, Danilo Ardagna and Adam Wierman
    Preprint.
    This paper proposes a model to study the interaction of pricing and congestion in the cloud computing marketplace. Specifically, we develop a three-tier market model that captures a marketplace with users purchasing services from Software-as-a-Service (SaaS) providers, which in turn purchase computing resources from either Platform-as-a-Service (PaaS) providers or Infrastructure-as-a-Service (IaaS) providers. Within each level, we define and characterize competitive equilibria. Further, we use these characterizations to understand the relative profitability of SaaSs and PaaSs/IaaSs and to understand the impact of price competition on the user-experienced performance. Our results highlight that both of these depend fundamentally on the degree to which congestion results from shared or dedicated resources in the cloud. We evaluate the inefficiency of user performance by studying the 'Price of Anarchy' and show that it grows with the non-linearity of the congestion functions for cloud resources.
  • Jayakrishnan Nair, Adam Wierman and Bert Zwart
    Preprint.
    In this paper, we consider the problem of capacity provisioning for an online service supported by advertising. We analyse the strategic interaction between the service provider and the user base in this setting, modeling positive network effects, as well as congestion sensitivity in the user base. We focus specifically on the influence of positive network effects, as well as the impact of non-cooperative behavior in the user base, on the firm's capacity provisioning decision and its profit. Our analysis reveals that stronger positive network effects, as well as non-cooperation in the user base, drive the service into a more congested state and lead to increased profit for the service provider. However, the impact of non-cooperation, or 'anarchy', in the user base strongly dominates the impact of network effects.
  • Jayakrishnan Nair, Adam Wierman and Bert Zwart
    Proceedings of IFIP Performance, 2011.
    Online services today are characterized by a highly congestion-sensitive user base that also experiences strong positive network effects. A majority of these services are supported by advertising and are offered for free to the end user. We study the problem of optimal capacity provisioning for a profit maximizing firm operating such an online service in the asymptotic regime of a large market size. We show that network effects heavily influence the optimal capacity provisioning strategy, as well as the profit of the firm. In particular, strong positive network effects allow the firm to operate the service with fewer servers, which translates to increased profit.
  • Jonatha Anselmi, Urtzi Ayesta and Adam Wierman
    Performance Evaluation, 2011. 68(11):986-1001.
    Also appeared in the Proceedings of IFIP Performance 2011.
    We study a nonatomic congestion game with N parallel links, with each link under the control of a profit maximizing provider. Within this 'load balancing game', each provider has the freedom to set a price, or toll, for access to the link and seeks to maximize its own profit. Given fixed prices, a Wardrop equilibrium among users is assumed, under which all users choose paths of minimal and identical effective cost. Within this model we have oligopolistic price competition which, in equilibrium, gives rise to situations where neither providers nor users have incentives to adjust their prices or routes, respectively. In this context, we provide new results about the existence and efficiency of oligopolistic equilibria. Our main theorem shows that, when the number of providers is small, oligopolistic equilibria can be extremely inefficient; however, as the number of providers N grows, the oligopolistic equilibria become increasingly efficient (at a rate of 1/N), approaching the socially optimal allocation in the limit.
  • Ho-Lin Chen, Jason Marden and Adam Wierman
    Proceedings of IEEE Infocom, 2009.
    Load balancing is a common approach to task assignment in distributed architectures. In this paper, we show that the degree of inefficiency in load balancing designs is highly dependent on the scheduling discipline used at each of the back-end servers. Traditionally, the back-end scheduler can be modeled as Processor Sharing (PS), in which case the degree of inefficiency grows linearly with the number of servers. However, if the back-end scheduler is changed to Shortest Remaining Processing Time (SRPT), the degree of inefficiency can be independent of the number of servers, instead depending only on the heterogeneity of the speeds of the servers. Further, switching the back-end scheduler to SRPT can provide significant improvements in the overall mean response time of the system as long as the heterogeneity of the server speeds is small.
  • Ho-Lin Chen, Jason Marden and Adam Wierman
    Proceedings of ACM MAMA, 2008.
    Load balancing is a common approach to distributed task assignment. In this paper we study the degree of inefficiency in load balancing designs. We show that the degree of inefficiency is highly dependent on the back-end scheduler.
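
The Anselmi, Ayesta and Wierman entry above analyzes price competition over parallel links under a Wardrop equilibrium, in which all used links have minimal and identical effective cost. As a purely illustrative sketch (not the paper's model or code), the following computes such an equilibrium for affine latencies and fixed tolls by bisecting on the common effective cost; the latency coefficients, tolls, and demand in the example are hypothetical.

    import numpy as np

    def wardrop_parallel_links(a, b, tolls, demand, eps=1e-10):
        """Wardrop equilibrium on parallel links with affine latency a[i]*x + b[i]
        plus a fixed toll. At equilibrium every used link has the same effective
        cost c, and link flows are x[i] = max(0, (c - b[i] - tolls[i]) / a[i])."""
        a, b, tolls = (np.asarray(v, dtype=float) for v in (a, b, tolls))
        base = b + tolls                      # effective cost of an empty link

        def total_flow(c):
            return np.sum(np.maximum(0.0, (c - base) / a))

        lo, hi = base.min(), base.max() + a.max() * demand + 1.0
        while hi - lo > eps:                  # bisection on the common cost c
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if total_flow(mid) < demand else (lo, mid)
        c = 0.5 * (lo + hi)
        return np.maximum(0.0, (c - base) / a), c

    # Hypothetical example: two links; the second is slower but charges no toll.
    flows, cost = wardrop_parallel_links(a=[1.0, 2.0], b=[0.0, 0.5],
                                         tolls=[0.3, 0.0], demand=1.0)
    print(flows, cost)    # roughly [0.733, 0.267] at common effective cost ~1.033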

Computational complexity

  • Federico Echenique, Daniel Golovin and Adam Wierman
    Preprint. Extension of a paper that appeared in the Proceedings of ACM EC, 2011.
    Consumption theory assumes that consumers possess infinite computational abilities. Proponents of bounded rationality want to instead require that any model of consumer behavior incorporate computational constraints. In this paper, we establish that this requirement has no testable implications. Any consumption data set that is compatible with a rational consumer is also compatible with a rational and computationally bounded consumer. The result extends to data on multiple agents interacting in a market economy. We present sufficient conditions on observed market outcomes such that they are compatible with an instance of the model for which Walrasian equilibrium is easy to compute. Our result motivates a general approach for posing questions about the empirical content of computational constraints: the revealed preference approach to computational complexity. The approach complements the conventional worst-case view of computational complexity in important ways, and is methodologically close to mainstream economics.
  • Federico Echenique and Adam Wierman
    Proceedings of ACM EC, 2012.
    In this work, we study the complexity of finding a Walrasian equilibrium. Our main result gives an algorithm which can compute an approximate Walrasian equilibrium in an exchange economy with general, but well-behaved, utility functions in time that is polynomial in the number of goods when the number of agents is held constant. This result has applications to macroeconomics and finance, where applications of Walrasian equilibrium theory tend to deal with many goods but a fixed number of agents.
  • Federico Echenique, Daniel Golovin and Adam Wierman
    Proceedings of ACM EC, 2011.
    One of the main building blocks of economics is the theory of the consumer, which postulates that consumers are utility maximizing. However, from a computational perspective, this model is called into question because the task of utility maximization subject to a budget constraint is computationally hard in the worst-case under reasonable assumptions. In this paper, we study the empirical consequences of strengthening consumer choice theory to enforce that utilities are computationally easy to maximize. We prove the possibly surprising result that computational constraints have no empirical consequences whatsoever for consumer choice theory. That is, a data set is consistent with a utility maximizing consumer if and only if it is consistent with a utility maximizing consumer having a utility function that can be maximized in strongly polynomial time.

    Our result motivates a general approach for posing questions about the empirical content of computational constraints: the revealed preference approach to computational complexity. The approach complements the conventional worst-case view of computational complexity in important ways, and is methodologically close to mainstream economics.
  • Federico Echenique, Daniel Golovin and Adam Wierman
    ACM SIGecom Exchanges, 2011.
    Recent results in complexity theory suggest that various economic theories require agents to solve intractable problems. However, such results assume the agents are optimizing explicit utility functions, whereas the economic theories merely assume the agents are rational, where rational behavior is defined via some optimization problem. Might making rational choices be easier than solving the corresponding optimization problem? For at least one major economic theory, the theory of the consumer, we find this is indeed the case.
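
The Echenique, Golovin and Wierman entries above concern the testable implications of utility maximization. The standard test in this literature is the Generalized Axiom of Revealed Preference (GARP): by Afriat's theorem, finite price/consumption data are consistent with maximization of some locally non-satiated utility function if and only if GARP holds. Below is a minimal GARP checker, included only as an illustration of the revealed preference machinery; it is not code from these papers, and the two-observation example is hypothetical.

    import numpy as np

    def satisfies_garp(prices, bundles):
        """Check the Generalized Axiom of Revealed Preference.
        prices, bundles: arrays of shape (T, n) for T observations of n goods.
        x^i is directly revealed preferred to x^j when p^i.x^i >= p^i.x^j;
        GARP forbids x^i being (indirectly) revealed preferred to x^j while
        x^j is strictly directly revealed preferred to x^i."""
        p = np.asarray(prices, dtype=float)
        x = np.asarray(bundles, dtype=float)
        T = p.shape[0]
        expend = p @ x.T                              # expend[i, j] = p^i . x^j
        own = expend.diagonal()
        R = own[:, None] >= expend                    # direct revealed preference
        for k in range(T):                            # transitive closure
            R = R | (R[:, [k]] & R[[k], :])
        strictly = own[:, None] > expend              # strictly[j, i]: p^j.x^j > p^j.x^i
        return not (R & strictly.T).any()

    # Hypothetical data violating GARP: bundle 1 is (weakly) revealed preferred to
    # bundle 2 at the first prices, yet bundle 2 is strictly revealed preferred to
    # bundle 1 at the second prices.
    prices = [[1.0, 1.0], [1.0, 3.0]]
    bundles = [[2.0, 0.0], [0.0, 2.0]]
    print(satisfies_garp(prices, bundles))            # False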

Cost sharing

  • Ragavendran Gopalakrishnan, Jason Marden and Adam Wierman
    Proceedings of ACM EC, 2013.
    We consider the problem of designing distribution rules to share 'welfare' (cost or revenue) among individually strategic agents. There are many known distribution rules that guarantee the existence of a (pure) Nash equilibrium in this setting, e.g., the Shapley value and its weighted variants; however, a characterization of the space of distribution rules that guarantee the existence of a Nash equilibrium is unknown. Our work provides an exact characterization of this space for a specific class of scalable and separable games, which includes a variety of applications such as facility location, routing, network formation, and coverage games. Given arbitrary local welfare functions W, we prove that a distribution rule guarantees equilibrium existence for all games (i.e., all possible sets of resources, agent action sets, etc.) if and only if it is equivalent to a generalized weighted Shapley value on some 'ground' welfare functions W´, which can be distinct from W. However, if budget-balance is required in addition to the existence of a Nash equilibrium, then W´ must be the same as W. We also provide an alternate characterization of this space in terms of 'generalized' marginal contributions, which is more appealing from the point of view of computational tractability. A possibly surprising consequence of our result is that, in order to guarantee equilibrium existence in all games with any fixed local welfare functions, it is necessary to work within the class of potential games.
  • Jason Marden and Adam Wierman
    IEEE Transactions on Automatic Control, 2013. To appear.
    Cooperative control focuses on deriving desirable collective behavior in multiagent systems through the design of local control algorithms. Game theory is beginning to emerge as a valuable set of tools for achieving this objective. A central component of this game theoretic approach is the assignment of utility functions to the individual agents. Here, the goal is to assign utility functions within an `admissible' design space such that the resulting game possesses desirable properties. Our first set of results illustrates the complexity associated with such a task. In particular, we prove that if we restrict the class of utility functions to be local, scalable, and budget-balanced then (i) ensuring that the resulting game possesses a pure Nash equilibrium requires computing a Shapley value, which can be computationally prohibitive for large-scale systems, and (ii) ensuring that the allocation which optimizes the system level objective is a pure Nash equilibrium is impossible. The last part of this paper demonstrates that both limitations can be overcome by introducing an underlying state space into the potential game structure.
  • Jason Marden and Adam Wierman
    INFORMS Operations Research, 2013. To appear.
    We consider a variation of the resource allocation problem. In the traditional problem, there is a global planner who would like to assign a set of players to a set of resources so as to maximize welfare. We consider the situation where the global planner does not have the authority to assign players to resources; rather, players are self-interested. The question that emerges is how can the global planner entice the players to settle on a desirable allocation with respect to the social welfare? To study this question, we focus on a class of games that we refer to as distributed welfare games. Within this context, we investigate how the global planner should distribute the welfare to the players. We measure the efficacy of a distribution rule in two ways: (i) Does a pure Nash equilibrium exist? (ii) How does the welfare associated with a pure Nash equilibrium compare to the social welfare associated with the optimal allocation? In this paper we explore the applicability of cost sharing methodologies for distributing welfare in such resource allocation problems. We demonstrate that obtaining desirable distribution rules, such as distribution rules that are budget balanced and guarantee the existence of a pure Nash equilibrium, often comes at a significant informational and computational cost. In light of this, we derive a systematic procedure for designing desirable distribution rules with a minimal informational and computational cost for a special class of distributed welfare games. Furthermore, we derive a bound on the price of anarchy for distributed welfare games in a variety of settings. Lastly, we highlight the implications of these results using the problem of sensor coverage.
  • Christine Chung, Katrina Ligett, Kirk Pruhs and Aaron Roth
    Algorithmica, 2012. 63:634-644.
    We explore the revenue capabilities of truthful, monotone ('fair') allocation and pricing functions for resource-constrained auction mechanisms within a general framework that encompasses unlimited supply auctions, knapsack auctions, and auctions with general non-decreasing convex production cost functions. We study and compare the revenue obtainable in each fair pricing scheme to the profit obtained by the ideal omniscient multi-price auction. We show that for capacitated knapsack auctions, no constant pricing scheme can achieve any approximation to the optimal profit, but proportional pricing is as powerful as general monotone pricing. In addition, for auction settings with arbitrary bounded non-decreasing convex production cost functions, we present a proportional pricing mechanism which achieves a poly-logarithmic approximation. Unlike existing approaches, all of our mechanisms have fair (monotone) prices, and all of our competitive analysis is with respect to the optimal profit extraction.
  • Ragavendran Gopalakrishnan, Jason Marden and Adam Wierman
    Proceedings of NetGCoop, 2011.
    We consider the problem of designing the distribution rule used to share welfare (cost or revenue) among individually strategic agents. There are many distribution rules known to guarantee the existence of a Nash equilibrium in this setting, e.g., the Shapley value and its weighted variants; however a characterization of the space of distribution rules that yield the existence of a Nash equilibrium is unknown. Our work provides a step towards such a characterization. We prove that when the welfare function is a submodular function, a budget-balanced distribution rule guarantees the existence of an equilibrium for all games (set of resources, agent action sets, etc.) if and only if it is a weighted Shapley value.
  • The Power of Fair Pricing Mechanisms
    Christine Chung, Katrina Ligett, Kirk Pruhs and Aaron Roth
    Proceedings of LATIN, 2010.
  • Ragavendran Gopalakrishnan, Jason Marden and Adam Wierman
    Proceedings of ACM Hotmetrics, 2010.
    Game-theoretic control is a promising new approach for distributed resource allocation. In this paper, we describe how game-theoretic control can be viewed as having an intrinsic layered architecture, which provides a modularization that simplifies the control design. We illustrate this architectural view by presenting details about one particular instantiation using potential games as an interface. This example serves to highlight the strengths and limitations of the proposed architecture while also illustrating the relationship between game-theoretic control and other existing approaches to distributed resource allocation.
  • George Christodoulou, Christine Chung, Katrina Ligett, Evangelia Pyrga and Rob van Stee
    Proceedings of WAOA, 2009. 86-97.
    We continue the study of the effects of selfish behavior in the network design problem. We provide new bounds for the price of stability for network design with fair cost allocation for undirected graphs. We consider the most general case, for which the best known upper bound is the Harmonic number Hn, where n is the number of agents, and the best previously known lower bound is 12/7 ≈ 1.714.
    We present a nontrivial lower bound of 42/23 ≈ 1.8261. Furthermore, we show that for two players, the price of stability is exactly 4/3, while for three players it is at least 74/48 ≈ 1.542 and at most 1.65. These are the first improvements on the bound of Hn for general networks. In particular, this demonstrates a separation between the price of stability on undirected graphs and that on directed graphs, where Hn is tight. Previously, such a gap was only known for the cases where all players have a shared source, and for weighted players.
  • Jason Marden and Adam Wierman
    Proceedings of IEEE CDC, 2009.
    Recently, game theory has been proposed as a tool for cooperative control. Specifically, the interactions of a multi-agent distributed system are modeled as a non-cooperative game where agents are self-interested. In this work, we prove that this approach of non-cooperative control has limitations with respect to engineering multi-agent systems. In particular, we prove that it is not possible to design budget balanced agent utilities that also guarantee that the optimal control is a Nash equilibrium. However, it is important to realize that game-theoretic designs are not restricted to the framework of non-cooperative games. In particular, we demonstrate that these limitations can be overcome by conditioning each player's utility on additional information, i.e., a state. This utility design fits into the framework of a particular form of stochastic games termed state-based games and is applicable in many application domains.
  • Jason Marden and Adam Wierman
    Proceedings of IEEE CDC, 2008.
    We consider a variation of the resource allocation problem. In the traditional problem, there is a global planner who would like to assign a set of players to a set of resources so as to maximize social welfare. We consider the situation where the global planner does not have the authority to assign players to resources; rather, players are self-interested. The question that emerges is how can the global planner entice the players to settle on a desirable allocation with respect to the social welfare? To study this question, we focus on a class of games that we refer to as distributed welfare games. Within this context, we investigate how the global planner should distribute the social welfare to the players. We measure the efficacy of a distribution rule in two ways: (i) Does a pure Nash equilibrium exist? (ii) How does the social welfare of a pure Nash equilibrium compare to the social welfare of the optimal allocation? This comparison is referred to as the price of anarchy. To answer these questions, we derive sufficient conditions on the distribution rule that ensure the existence of a pure Nash equilibrium in any single-selection distributed welfare game. Furthermore, we derive a price of anarchy for distributed welfare games in a variety of settings. Lastly, we highlight the implications of these results using the sensor coverage problem.
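
Several of the entries above (Gopalakrishnan, Marden and Wierman; Marden and Wierman) characterize which distribution rules guarantee pure Nash equilibria in terms of the Shapley value and its weighted variants. For concreteness, here is a minimal sketch of the (unweighted) Shapley-value distribution rule for a single resource's local welfare function, computed by averaging marginal contributions over agent orderings; the welfare numbers are made up, and the brute-force enumeration is only practical for a handful of agents.

    from itertools import permutations
    from math import factorial

    def shapley_shares(agents, welfare):
        """Shapley-value shares of welfare(S) among the agents in S: each agent
        receives its marginal contribution averaged over all join orders.
        `welfare` maps a frozenset of agents to a number, with welfare(empty) = 0."""
        agents = list(agents)
        shares = {i: 0.0 for i in agents}
        for order in permutations(agents):
            joined = frozenset()
            for i in order:
                shares[i] += welfare(joined | {i}) - welfare(joined)
                joined = joined | {i}
        n_orders = factorial(len(agents))
        return {i: s / n_orders for i, s in shares.items()}

    # Hypothetical coverage-style welfare with diminishing returns in the number
    # of agents choosing the resource.
    value_by_size = {0: 0.0, 1: 1.0, 2: 1.5, 3: 1.75}
    welfare = lambda S: value_by_size[len(S)]
    shares = shapley_shares(['a', 'b', 'c'], welfare)
    print(shares)                   # symmetric agents receive equal shares
    print(sum(shares.values()))     # budget balance: shares sum to welfare of all agents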

Cournot competition

  • Amos Fiat, Elias Koutsoupias, Katrina Ligett, Yishay Mansour and Svetlana Olonetsky
    Proceedings of SODA, 2012.
    A Nash Equilibrium is a joint strategy profile at which each agent myopically plays a best response to the other agents' strategies, ignoring the possibility that deviating from the equilibrium could lead to an avalanche of successive changes by other agents. However, such changes could potentially be beneficial to the agent, creating incentive to act non-myopically, so as to take advantage of others' responses.

    To study this phenomenon, we consider a non-myopic Cournot competition, where each firm selects whether it wants to maximize profit (as in the classical Cournot competition) or to maximize revenue (by masquerading as a firm with zero production costs).

    The key observation is that profit may actually be higher when acting to maximize revenue, (1) which will depress market prices, (2) which will reduce the production of other firms, (3) which will gain market share for the revenue maximizing firm, (4) which will, overall, increase profits for the revenue maximizing firm. Implicit in this line of thought is that one might take other firms' responses into account when choosing a market strategy. The Nash Equilibria of the non-myopic Cournot competition capture this action/response issue appropriately, and this work is a step towards understanding the impact of such strategic manipulative play in markets.

    We study the properties of Nash Equilibria of non-myopic Cournot competition with linear demand functions and show existence of pure Nash Equilibria, that simple best response dynamics will produce such an equilibrium, and that for some natural dynamics this convergence is within linear time. This is in contrast to the well known fact that best response dynamics need not converge in the standard myopic Cournot competition.

    Furthermore, we compare the outcome of the non-myopic Cournot competition with that of the standard myopic Cournot competition. Not surprisingly, perhaps, prices in the non-myopic game are lower and the firms, in total, produce more and have a lower aggregate utility.
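
The SODA 2012 entry above studies best-response dynamics in a non-myopic Cournot competition with linear demand. As background only, the sketch below iterates classical myopic best responses, one firm at a time, under a linear inverse demand P(Q) = a - bQ with constant marginal costs; the demand parameters and costs are hypothetical, and this is not the paper's non-myopic model.

    import numpy as np

    def cournot_sequential_best_responses(a, b, costs, rounds=200):
        """Iterate myopic best responses in a Cournot game with inverse demand
        P(Q) = a - b*Q and constant marginal costs. Firm i's best response to the
        others' total output Q_other is q_i = max(0, (a - c_i - b*Q_other) / (2b)),
        from the first-order condition of (a - b*(q_i + Q_other) - c_i) * q_i."""
        costs = np.asarray(costs, dtype=float)
        q = np.zeros_like(costs)
        for _ in range(rounds):
            for i in range(len(costs)):       # firms update one at a time
                others = q.sum() - q[i]
                q[i] = max(0.0, (a - costs[i] - b * others) / (2.0 * b))
        return q

    # Hypothetical market: three firms, two low-cost and one high-cost.
    q = cournot_sequential_best_responses(a=100.0, b=1.0, costs=[10.0, 10.0, 20.0])
    print(q)                  # approaches the Cournot-Nash quantities (25, 25, 15)
    print(100.0 - q.sum())    # resulting market price (35)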

Electricity markets

  • Subhonmesh Bose, Chenye Wu, Yunjian Xu, Adam Wierman and Hamed Mohsenian-Rad
    Preprint. Submitted to Trans. on Power Systems.
    Market power assessment is a prime concern when designing a deregulated electricity market. In this paper, we propose a new functional market power measure, termed transmission constrained network flow (TCNF), that unifies three large classes of long-term transmission constrained market power indices in the literature: residual supply based, network flow based, and minimal generation based. Furthermore, it is suitable for demand-response and renewable integration and hence more amenable to identifying market power in the future smart grid. The measure is defined abstractly, and so allows incorporation of power flow equations in multiple ways; this is illustrated using: (a) a DC approximation, (b) a semidefinite relaxation, and (c) interior-point algorithms from Matpower. Finally, we provide extensive simulations on IEEE benchmark systems, highlight the complex interaction of engineering constraints with market power assessment, and compare the different approaches to assessing market power.
  • Jayakrishnan Nair, Sachin Adlakha and Adam Wierman
    Preprint.
    The increasing penetration of intermittent, unpredictable renewable energy sources, such as wind energy, poses significant challenges for utility companies trying to incorporate renewable energy in their portfolio. In this paper, we discuss inventory management issues that arise in the presence of intermittent renewable resources. We model the problem as a three-stage newsvendor problem with uncertain supply and model the estimates of wind using a martingale model of forecast evolution. We describe the optimal procurement strategy and use it to study the impact of proposed market changes and of increased renewable penetration. A key insight from our results is to show a separation between the impact of the structure of electricity markets and the impact of increased penetration. In particular, the effect of market structure on the optimal procurement policy is independent of the level of wind penetration. Additionally, we study two proposed changes to the market structure: the addition and the placement of an intermediate market. Importantly, we show that the addition of an intermediate market does not necessarily reduce the total amount of energy procured by the utility company.
  • Chenye Wu, Subhonmesh Bose, Adam Wierman and Hamed Mohsenian-Rad
    Proceedings of IEEE Power & Energy Society General Meeting, 2013. "Best Paper on System Operations and Market Economics" award recipient.
    A competitive deregulated electricity market with increasingly active market players is foreseen to be the future of the electricity industry. In such settings, market power assessment is a primary concern. In this paper, we propose a novel functional approach for measuring long term market power that unifies a variety of popular market power indices. Specifically, the new measure, termed transmission constrained network flow (TCNF), unifies three large classes of market power measures: residual supply based, network flow based, and minimal generation based. Further, TCNF provides valuable information about market power not captured by prior indices. We derive its analytic properties and test its efficacy on IEEE test systems.
  • Integrating distributed energy resource pricing and control
    Paul de Martini, Adam Wierman, Sean Meyn and Eilyan Bitar
    Proceedings of CIGRE USNC Grid of the Future Symposium, 2012.
    As the market adoption of distributed energy resources (DER) reaches regional scale, it will create significant challenges in the management of the distribution system related to existing protection and control systems. This is likely to lead to issues with power quality and reliability for several reasons. In this paper, we describe a framework for the development of a class of pricing mechanisms that both induce deep customer participation and enable efficient management of their end-use devices to provide both distribution and transmission side support. The basic challenge resides in reliably extracting the desired response from customers on short time-scales. Thus, new pricing mechanisms are needed to create effective closed loop systems that are tightly coupled with distribution control systems to ensure reliability and power quality.
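
The Nair, Adlakha and Wierman entry above casts wind-aware procurement as a multi-stage newsvendor problem. As a much simpler illustration of the newsvendor logic only (a single procurement stage, no forecast evolution, hypothetical prices and wind distribution), the sketch below computes the forward purchase that minimizes expected cost when any shortfall must be bought in real time at a higher price; the optimum is the classical critical-fractile quantile.

    import numpy as np

    def optimal_forward_purchase(net_demand_samples, p_forward, p_realtime):
        """Single-stage newsvendor: choose q to minimize
            p_forward * q + p_realtime * E[(X - q)^+],
        where X is the (random) net demand, i.e. demand minus wind. The first-order
        condition P(X > q) = p_forward / p_realtime gives the critical fractile."""
        level = 1.0 - p_forward / p_realtime
        return np.quantile(net_demand_samples, level)

    rng = np.random.default_rng(0)
    demand = 100.0                                   # known load (hypothetical units)
    wind = rng.normal(30.0, 10.0, size=200_000)      # hypothetical wind distribution
    X = demand - wind                                # net demand to be covered

    q_star = optimal_forward_purchase(X, p_forward=40.0, p_realtime=100.0)

    # Sanity check: expected cost near q_star versus a naive purchase of E[X].
    cost = lambda q: 40.0 * q + 100.0 * np.maximum(X - q, 0.0).mean()
    print(q_star, cost(q_star), cost(X.mean()))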

Epidemics

  • The cost of an epidemic over a complex network: A random matrix approach
    Subhonmesh Bose, Elizabeth Bodine-Baron, Babak Hassibi and Adam Wierman
    Preprint.
    In this paper we quantify the total cost of an epidemic spreading through a social network, accounting for both the immunization and disease costs. Previous research has typically focused on determining the optimal strategy to limit the lifetime of a disease, without considering the cost of such strategies. In the large graph limit, we calculate the exact expected disease cost for a general random graph, and we illustrate it for the specific example of an Erdos-Renyi network. We also give an upper bound on the expected disease cost for finite-size graphs. Finally, we study how to optimally perform a one-shot immunization to minimize the social cost of a disease, including both the cost of the disease and the cost of immunization.
  • Elizabeth Bodine-Baron, Subhonmesh Bose, Babak Hassibi and Adam Wierman
    Proceedings of ICST Gamenets, 2011.
    In this paper we quantify the total cost of an epidemic spreading through a social network, accounting for both the immunization and disease costs. Previous research has typically focused on determining the optimal strategy to limit the lifetime of a disease, without considering the cost of such strategies. In the large graph limit, we calculate the exact expected disease cost for a general random graph, and we illustrate it for the specific example of an Erdos-Renyi network. We also give an upper bound on the expected disease cost for finite-size graphs. Finally, we study how to optimally perform a one-shot immunization to minimize the social cost of a disease, including both the cost of the disease and the cost of immunization.

Game theoretic control

  • Ragavendran Gopalakrishnan, Jason Marden and Adam Wierman
    Proceedings of ACM EC, 2013.
    We consider the problem of designing distribution rules to share 'welfare' (cost or revenue) among individually strategic agents. There are many known distribution rules that guarantee the existence of a (pure) Nash equilibrium in this setting, e.g., the Shapley value and its weighted variants; however, a characterization of the space of distribution rules that guarantee the existence of a Nash equilibrium is unknown. Our work provides an exact characterization of this space for a specific class of scalable and separable games, which includes a variety of applications such as facility location, routing, network formation, and coverage games. Given arbitrary local welfare functions W, we prove that a distribution rule guarantees equilibrium existence for all games (i.e., all possible sets of resources, agent action sets, etc.) if and only if it is equivalent to a generalized weighted Shapley value on some 'ground' welfare functions W´, which can be distinct from W. However, if budget-balance is required in addition to the existence of a Nash equilibrium, then W´ must be the same as W. We also provide an alternate characterization of this space in terms of 'generalized' marginal contributions, which is more appealing from the point of view of computational tractability. A possibly surprising consequence of our result is that, in order to guarantee equilibrium existence in all games with any fixed local welfare functions, it is necessary to work within the class of potential games.
  • Jason Marden and Adam Wierman
    IEEE Transactions on Automatic Control, 2013. To appear.
    Cooperative control focuses on deriving desirable collective behavior in multiagent systems through the design of local control algorithms. Game theory is beginning to emerge as a valuable set of tools for achieving this objective. A central component of this game theoretic approach is the assignment of utility functions to the individual agents. Here, the goal is to assign utility functions within an `admissible' design space such that the resulting game possesses desirable properties. Our first set of results illustrates the complexity associated with such a task. In particular, we prove that if we restrict the class of utility functions to be local, scalable, and budget-balanced then (i) ensuring that the resulting game possesses a pure Nash equilibrium requires computing a Shapley value, which can be computationally prohibitive for large-scale systems, and (ii) ensuring that the allocation which optimizes the system level objective is a pure Nash equilibrium is impossible. The last part of this paper demonstrates that both limitations can be overcome by introducing an underlying state space into the potential game structure.
  • Jason Marden and Adam Wierman
    INFORMS Operations Research, 2013. To appear.
    We consider a variation of the resource allocation problem. In the traditional problem, there is a global planner who would like to assign a set of players to a set of resources so as to maximize welfare. We consider the situation where the global planner does not have the authority to assign players to resources; rather, players are self-interested. The question that emerges is how can the global planner entice the players to settle on a desirable allocation with respect to the social welfare? To study this question, we focus on a class of games that we refer to as distributed welfare games. Within this context, we investigate how the global planner should distribute the welfare to the players. We measure the efficacy of a distribution rule in two ways: (i) Does a pure Nash equilibrium exist? (ii) How does the welfare associated with a pure Nash equilibrium compare to the social welfare associated with the optimal allocation? In this paper we explore the applicability of cost sharing methodologies for distributing welfare in such resource allocation problems. We demonstrate that obtaining desirable distribution rules, such as distribution rules that are budget balanced and guarantee the existence of a pure Nash equilibrium, often comes at a significant informational and computational cost. In light of this, we derive a systematic procedure for designing desirable distribution rules with a minimal informational and computational cost for a special class of distributed welfare games. Furthermore, we derive a bound on the price of anarchy for distributed welfare games in a variety of settings. Lastly, we highlight the implications of these results using the problem of sensor coverage.
  • Ragavendran Gopalakrishnan, Jason Marden and Adam Wierman
    Proceedings of NetGCoop, 2011.
    We consider the problem of designing the distribution rule used to share welfare (cost or revenue) among individually strategic agents. There are many distribution rules known to guarantee the existence of a Nash equilibrium in this setting, e.g., the Shapley value and its weighted variants; however a characterization of the space of distribution rules that yield the existence of a Nash equilibrium is unknown. Our work provides a step towards such a characterization. We prove that when the welfare function is a submodular function, a budget-balanced distribution rule guarantees the existence of an equilibrium for all games (set of resources, agent action sets, etc.) if and only if it is a weighted Shapley value.
  • Ragavendran Gopalakrishnan, Jason Marden and Adam Wierman
    Proceedings of ACM Hotmetrics, 2010.
    Game-theoretic control is a promising new approach for distributed resource allocation. In this paper, we describe how game-theoretic control can be viewed as having an intrinsic layered architecture, which provides a modularization that simplifies the control design. We illustrate this architectural view by presenting details about one particular instantiation using potential games as an interface. This example serves to highlight the strengths and limitations of the proposed architecture while also illustrating the relationship between game-theoretic control and other existing approaches to distributed resource allocation.
  • Jason Marden and Adam Wierman
    Proceedings of IEEE CDC, 2009.
    Recently, game theory has been proposed as a tool for cooperative control. Specifically, the interactions of a multi-agent distributed system are modeled as a non-cooperative game where agents are self-interested. In this work, we prove that this approach of non-cooperative control has limitations with respect to engineering multi-agent systems. In particular, we prove that it is not possible to design budget balanced agent utilities that also guarantee that the optimal control is a Nash equilibrium. However, it is important to realize that game-theoretic designs are not restricted to the framework of non-cooperative games. In particular, we demonstrate that these limitations can be overcome by conditioning each player's utility on additional information, i.e., a state. This utility design fits into the framework of a particular form of stochastic games termed state-based games and is applicable in many application domains.
  • Jason Marden and Adam Wierman
    Proceedings of IEEE CDC, 2008.
    We consider a variation of the resource allocation problem. In the traditional problem, there is a global planner who would like to assign a set of players to a set of resources so as to maximize social welfare. We consider the situation where the global planner does not have the authority to assign players to resources; rather, players are self-interested. The question that emerges is how can the global planner entice the players to settle on a desirable allocation with respect to the social welfare? To study this question, we focus on a class of games that we refer to as distributed welfare games. Within this context, we investigate how the global planner should distribute the social welfare to the players. We measure the efficacy of a distribution rule in two ways: (i) Does a pure Nash equilibrium exist? (ii) How does the social welfare of a pure Nash equilibrium compare to the social welfare of the optimal allocation? This comparison is referred to as the price of anarchy. To answer these questions, we derive sufficient conditions on the distribution rule that ensure the existence of a pure Nash equilibrium in any single-selection distributed welfare game. Furthermore, we derive a price of anarchy for distributed welfare games in a variety of settings. Lastly, we highlight the implications of these results using the sensor coverage problem.

General equilibrium theory

  • Federico Echenique and Adam Wierman
    Proceedings of ACM EC, 2012.
    In this work, we study the complexity of finding a Walrasian equilibrium. Our main result gives an algorithm which can compute an approximate Walrasian equilibrium in an exchange economy with general, but well-behaved, utility functions in time that is polynomial in the number of goods when the number of agents is held constant. This result has applications to macroeconomics and finance, where applications of Walrasian equilibrium theory tend to deal with many goods but a fixed number of agents.
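
The Echenique and Wierman entry above is about computing Walrasian equilibria in exchange economies. For intuition only, here is a minimal sketch for the special case of Cobb-Douglas utilities, where market clearing reduces to a linear fixed point in prices that can be found by simple iteration; the expenditure shares and endowments in the example are hypothetical, and none of this reflects the paper's algorithm for general utilities.

    import numpy as np

    def cobb_douglas_exchange_equilibrium(shares, endowments, iters=1000):
        """Walrasian equilibrium prices of an exchange economy in which agent i has
        Cobb-Douglas utility with expenditure shares `shares[i]` (each row sums to 1)
        and endowment `endowments[i]`. Demand is x[i,k] = shares[i,k] * income_i / p_k,
        so market clearing gives p_k proportional to sum_i shares[i,k] * (p . e_i),
        a linear fixed point solved here by iteration (power method)."""
        A = np.asarray(shares, dtype=float)
        E = np.asarray(endowments, dtype=float)
        supply = E.sum(axis=0)
        p = np.ones(A.shape[1])
        for _ in range(iters):
            incomes = E @ p
            p = (A * incomes[:, None]).sum(axis=0) / supply
            p = p / p.sum()                       # normalize (prices are relative)
        demand = A * (E @ p)[:, None] / p
        return p, demand

    # Hypothetical two-agent, two-good economy.
    shares = [[0.7, 0.3], [0.2, 0.8]]
    endowments = [[1.0, 0.0], [0.0, 1.0]]
    p, demand = cobb_douglas_exchange_equilibrium(shares, endowments)
    print(p)                    # equilibrium prices, roughly (0.4, 0.6)
    print(demand.sum(axis=0))   # markets clear: aggregate demand equals supply (1, 1)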

Learning in games

  • Amos Fiat, Elias Koutsoupias, Katrina Ligett, Yishay Mansour and Svetlana Olonetsky
    Proceedings of SODA, 2012.
    A Nash Equilibrium is a joint strategy profile at which each agent myopically plays a best response to the other agents' strategies, ignoring the possibility that deviating from the equilibrium could lead to an avalanche of successive changes by other agents. However, such changes could potentially be beneficial to the agent, creating incentive to act non-myopically, so as to take advantage of others' responses.

    To study this phenomenon, we consider a non-myopic Cournot competition, where each firm selects whether it wants to maximize profit (as in the classical Cournot competition) or to maximize revenue (by masquerading as a firm with zero production costs).

    The key observation is that profit may actually be higher when acting to maximize revenue, (1) which will depress market prices, (2) which will reduce the production of other firms, (3) which will gain market share for the revenue maximizing firm, (4) which will, overall, increase profits for the revenue maximizing firm. Implicit in this line of thought is that one might take other firms' responses into account when choosing a market strategy. The Nash Equilibria of the non-myopic Cournot competition capture this action/response issue appropriately, and this work is a step towards understanding the impact of such strategic manipulative play in markets.

    We study the properties of Nash Equilibria of non-myopic Cournot competition with linear demand functions and show existence of pure Nash Equilibria, that simple best response dynamics will produce such an equilibrium, and that for some natural dynamics this convergence is within linear time. This is in contrast to the well known fact that best response dynamics need not converge in the standard myopic Cournot competition.

    Furthermore, we compare the outcome of the non-myopic Cournot competition with that of the standard myopic Cournot competition. Not surprisingly, perhaps, prices in the non-myopic game are lower and the firms, in total, produce more and have a lower aggregate utility.
  • Katrina Ligett and Georgios Piliouras
    ACM SIGecom Exchanges, 2011. 10:23-26.
  • Robert Kleinberg, Katrina Ligett, Georgios Piliouras and Éva Tardos
    Proceedings of ICS, 2011.
    Nash equilibrium analysis has become the de facto standard for judging the solution quality achieved in systems composed of selfish users. This mindset is so pervasive in computer science that even the few papers devoted to directly analyzing outcomes of dynamic processes in repeated games (e.g., best-response or no-regret learning dynamics) have focused on showing that the performance of these dynamics is comparable to that of Nash equilibria. By assuming that equilibria are representative of the outcomes of selfish behavior, do we ever reach qualitatively wrong conclusions about those outcomes?

    In this paper, we argue that there exist games whose equilibria represent unnatural outcomes that are hard to coordinate on, and that the solution quality achieved by selfish users in such games is more accurately reflected in the disequilibrium represented by dynamics such as those produced by natural families of on-line learning algorithms. We substantiate this viewpoint by studying a game with a unique Nash equilibrium, but where natural learning dynamics exhibit non-convergent cycling behavior rather than converging to this equilibrium. We show that the outcome of this learning process is optimal and has much better social welfare than the unique Nash equilibrium, dramatically illustrating that natural learning processes have the potential to significantly outperform equilibrium-based analysis.
  • Avrim Blum, Eyal Even-Dar and Katrina Ligett
    Theory of Computing, 2010. 6:179-199.
    There has been substantial work developing simple, efficient regret-minimizing algorithms for a wide class of repeated decision-making problems including online routing. These are adaptive strategies an individual can use that give strong guarantees on performance even in adversarially-changing environments. There has also been substantial work on analyzing properties of Nash equilibria in routing games. In this paper, we consider the question: if each player in a routing game uses a regret-minimizing strategy, will behavior converge to a Nash equilibrium? In general games the answer to this question is known to be no in a strong sense, but routing games have substantially more structure.

    In this paper we show that in the Wardrop setting of multicommodity flow and infinitesimal agents, behavior will approach Nash equilibrium (on most days, the cost of the flow will be close to the cost of the cheapest paths possible given that flow) at a rate that depends polynomially on the players' regret bounds and the maximum slope of any latency function. We also show that Price of Anarchy results may be applied to these approximate equilibria, and also consider the finite-size (non-infinitesimal) load-balancing model of Azar (1998). Our nonatomic results also apply to the more general class of congestion games.
  • Sham Kakade, Adam Tauman Kalai and Katrina Ligett
    SIAM Journal on Computing, 2009. 39:1088-1106.
    In an online linear optimization problem, on each period t, an online algorithm chooses s_t ∈ S from a fixed (possibly infinite) set S of feasible decisions. Nature (who may be adversarial) chooses a weight vector w_t ∈ ℝ^n, and the algorithm incurs cost c(s_t, w_t), where c is a fixed cost function that is linear in the weight vector. In the full-information setting, the vector w_t is then revealed to the algorithm, and in the bandit setting, only the cost experienced, c(s_t, w_t), is revealed. The goal of the online algorithm is to perform nearly as well as the best fixed s ∈ S in hindsight. Many repeated decision-making problems with weights fit naturally into this framework, such as online shortest-path, online traveling salesman problem (TSP), online clustering, and online weighted set cover. Previously, it was shown how to convert any efficient exact offline optimization algorithm for such a problem into an efficient online algorithm in both the full-information and the bandit settings, with average cost nearly as good as that of the best fixed s ∈ S in hindsight. However, in the case where the offline algorithm is an approximation algorithm with ratio α > 1, the previous approach worked only for special types of approximation algorithms. We show how to convert any offline approximation algorithm for a linear optimization problem into a corresponding online approximation algorithm, with a polynomial blowup in runtime. If the offline algorithm has an α-approximation guarantee, then the expected cost of the online algorithm on any sequence is not much larger than α times that of the best s ∈ S, where the best is chosen with the benefit of hindsight. Our main innovation is combining Zinkevich's algorithm for convex optimization with a geometric transformation that can be applied to any approximation algorithm. Standard techniques generalize the above result to the bandit setting, except that a 'barycentric spanner' for the problem is also (provably) necessary as input. Our algorithm can also be viewed as a method for playing large repeated games, where one can compute only approximate best responses, rather than best responses.
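
Several entries above (Blum, Even-Dar and Ligett; Kakade, Kalai and Ligett) build on regret-minimizing online algorithms. As a generic illustration of what such an algorithm looks like (not the algorithms from these papers), the sketch below runs the standard multiplicative-weights (Hedge) strategy over a finite action set and reports its regret against the best fixed action in hindsight; the loss sequence is random and hypothetical.

    import numpy as np

    def hedge(losses, eta=None):
        """Multiplicative-weights (Hedge) over K actions for T rounds.
        losses[t, k] in [0, 1] is the loss of action k in round t. Returns the
        algorithm's expected cumulative loss and its regret against the best
        fixed action in hindsight, which is O(sqrt(T log K)) for this step size."""
        losses = np.asarray(losses, dtype=float)
        T, K = losses.shape
        if eta is None:
            eta = np.sqrt(np.log(K) / T)
        weights = np.ones(K)
        total = 0.0
        for t in range(T):
            probs = weights / weights.sum()       # play action k with prob probs[k]
            total += probs @ losses[t]
            weights *= np.exp(-eta * losses[t])   # exponentially downweight losses
        best_fixed = losses.sum(axis=0).min()
        return total, total - best_fixed

    # Hypothetical loss sequence: 2000 rounds, 5 actions, i.i.d. uniform losses.
    rng = np.random.default_rng(0)
    total, regret = hedge(rng.random((2000, 5)))
    print(total, regret)      # regret is small relative to 2000 rounds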

Market power

  • Chenye Wu, Subhonmesh Bose, Adam Wierman and Hamed Mohsenian-Rad
    Proceedings of IEEE Power & Energy Society General Meeting, 2013. "Best Paper on System Operations and Market Economics" award recipient.
    A competitive deregulated electricity market with increasingly active market players is foreseen to be the future of the electricity industry. In such settings, market power assessment is a primary concern. In this paper, we propose a novel functional approach for measuring long term market power that unifies a variety of popular market power indices. Specifically, the new measure, termed transmission constrained network flow (TCNF), unifies three large classes of market power measures: residual supply based, network flow based, and minimal generation based. Further, TCNF provides valuable information about market power not captured by prior indices. We derive its analytic properties and test its efficacy on IEEE test systems.

Matchings

  • Elizabeth Bodine-Baron, Christina Lee, Anthony Chong, Babak Hassibi and Adam Wierman
    Proceedings of SAGT, 2011.
    Many-to-one matching markets exist in numerous different forms, such as college admissions, matching medical interns to hospitals for residencies, assigning housing to college students, and the classic firms and workers market. In these markets, externalities such as complementarities and peer effects severely complicate the preference ordering of each agent. Further, research has shown that externalities lead to serious problems for market stability and for developing efficient algorithms for finding stable matchings.

    In this paper we make the observation that peer effects are often the result of underlying social connections, and we explore a formulation of the many-to-one matching market where peer effects are derived from an underlying social network. Under this assumption, we prove that stable matchings always exist and characterize the set of stable matchings. Further, we prove bounds on how far the welfare of the worst-case stable matching can be from the welfare of the optimal matching, and find that the structure of the clustering of the social network plays a large role.
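
The Bodine-Baron, Lee, Chong, Hassibi and Wierman entry above studies many-to-one matching when peer effects come from a social network. As a baseline without any externalities (and not the paper's model), the sketch below runs student-proposing deferred acceptance with college quotas, which produces a stable matching in the classical sense; the preference lists and quotas are hypothetical.

    def deferred_acceptance(student_prefs, college_prefs, quotas):
        """Student-proposing deferred acceptance for many-to-one matching.
        student_prefs[s]: colleges in s's order of preference.
        college_prefs[c]: students in c's order of preference (all acceptable).
        quotas[c]: capacity of college c. Returns the held sets, a stable matching
        in the classical sense (no peer effects or other externalities)."""
        rank = {c: {s: i for i, s in enumerate(prefs)} for c, prefs in college_prefs.items()}
        next_choice = {s: 0 for s in student_prefs}
        held = {c: [] for c in college_prefs}
        unmatched = list(student_prefs)
        while unmatched:
            s = unmatched.pop()
            if next_choice[s] >= len(student_prefs[s]):
                continue                              # s has exhausted their list
            c = student_prefs[s][next_choice[s]]
            next_choice[s] += 1
            held[c].append(s)
            held[c].sort(key=lambda t: rank[c][t])    # best-ranked students first
            if len(held[c]) > quotas[c]:
                unmatched.append(held[c].pop())       # reject the worst-ranked proposer
        return held

    # Hypothetical instance: three students, two colleges.
    students = {'s1': ['c1', 'c2'], 's2': ['c1', 'c2'], 's3': ['c2', 'c1']}
    colleges = {'c1': ['s1', 's3', 's2'], 'c2': ['s2', 's1', 's3']}
    print(deferred_acceptance(students, colleges, quotas={'c1': 1, 'c2': 2}))
    # {'c1': ['s1'], 'c2': ['s2', 's3']}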

Mechanism design

  • Christine Chung, Katrina Ligett, Kirk Pruhs and Aaron Roth
    Algorithmica, 2012. 63:634-644.
    We explore the revenue capabilities of truthful, monotone ('fair') allocation and pricing functions for resource-constrained auction mechanisms within a general framework that encompasses unlimited supply auctions, knapsack auctions, and auctions with general non-decreasing convex production cost functions. We study and compare the revenue obtainable in each fair pricing scheme to the profit obtained by the ideal omniscient multi-price auction. We show that for capacitated knapsack auctions, no constant pricing scheme can achieve any approximation to the optimal profit, but proportional pricing is as powerful as general monotone pricing. In addition, for auction settings with arbitrary bounded non-decreasing convex production cost functions, we present a proportional pricing mechanism which achieves a poly-logarithmic approximation. Unlike existing approaches, all of our mechanisms have fair (monotone) prices, and all of our competitive analysis is with respect to the optimal profit extraction.
  • The Power of Fair Pricing Mechanisms
    Christine Chung, Katrina Ligett, Kirk Pruhs and Aaron Roth
    Proceedings of LATIN, 2010.

Price of anarchy

  • Jonatha Anselmi, Danilo Ardagna and Adam Wierman
    Preprint.
    This paper proposes a model to study the interaction of pricing and congestion in the cloud computing marketplace. Specifically, we develop a three-tier market model that captures a marketplace with users purchasing services from Software-as-a-Service (SaaS) providers, which in turn purchase computing resources from either Platform-as-a-Service (PaaS) providers or Infrastructure-as-a-Service (IaaS) providers. Within each level, we define and characterize competitive equilibria. Further, we use these characterizations to understand the relative profitability of SaaSs and PaaSs/IaaSs and to understand the impact of price competition on the user-experienced performance. Our results highlight that both of these depend fundamentally on the degree to which congestion results from shared or dedicated resources in the cloud. We evaluate the inefficiency of user performance by studying the 'Price of Anarchy' and show that it grows with the non-linearity of the congestion functions for cloud resources.
  • Jayakrishnan Nair, Adam Wierman and Bert Zwart
    Preprint.
    In this paper, we consider the problem of capacity provisioning for an online service supported by advertising. We analyse the strategic interaction between the service provider and the user base in this setting, modeling positive network effects, as well as congestion sensitivity in the user base. We focus specifically on the influence of positive network effects, as well as the impact of non-cooperative behavior in the user base, on the firm's capacity provisioning decision and its profit. Our analysis reveals that stronger positive network effects, as well as non-cooperation in the user base, drive the service into a more congested state and lead to increased profit for the service provider. However, the impact of non-cooperation, or 'anarchy', in the user base strongly dominates the impact of network effects.
  • Vinod Ramaswamy, Sachin Adlakha, Srinivas Shakkottai and Adam Wierman
    Proceedings of Allerton, 2012.
    The rapid growth of content distribution on the Internet has brought with it proportional increases in the costs of distributing content. Adding to distribution costs is the fact that digital content is easily duplicable, and hence can be shared in an illicit peer-to-peer (P2P) manner that generates no revenue for the content provider. In this paper, we study whether the content provider can recover lost revenue through a more innovative approach to distribution. In particular, we evaluate the benefits of a hybrid revenue-sharing system that combines a legitimate P2P swarm and a centralized client-server approach. We show how the revenue recovered by the content provider using a server-supported legitimate P2P swarm can exceed that of the monopolistic scheme by an order of magnitude. Our analytical results are obtained in a fluid model, and supported by stochastic simulations.
  • Katrina Ligett and Georgios Piliouras
    ACM SIGecom Exchanges, 2011. 10:23-26.
  • Robert Kleinberg, Katrina Ligett, Georgios Piliouras and Éva Tardos
    Proceedings of ICS, 2011.
    Nash equilibrium analysis has become the de facto standard for judging the solution quality achieved in systems composed of selfish users. This mindset is so pervasive in computer science that even the few papers devoted to directly analyzing outcomes of dynamic processes in repeated games (e.g., best-response or no-regret learning dynamics) have focused on showing that the performance of these dynamics is comparable to that of Nash equilibria. By assuming that equilibria are representative of the outcomes of selfish behavior, do we ever reach qualitatively wrong conclusions about those outcomes?

    In this paper, we argue that there exist games whose equilibria represent unnatural outcomes that are hard to coordinate on, and that the solution quality achieved by selfish users in such games is more accurately reflected in the disequilibrium represented by dynamics such as those produced by natural families of on-line learning algorithms. We substantiate this viewpoint by studying a game with a unique Nash equilibrium, but where natural learning dynamics exhibit non-convergent cycling behavior rather than converging to this equilibrium. We show that the outcome of this learning process is optimal and has much better social welfare than the unique Nash equilibrium, dramatically illustrating that natural learning processes have the potential to significantly outperform equilibrium-based analysis.
  • Elizabeth Bodine-Baron, Christina Lee, Anthony Chong, Babak Hassibi and Adam Wierman
    Proceedings of SAGT, 2011.
    Many-to-one matching markets exist in numerous different forms, such as college admissions, matching medical interns to hospitals for residencies, assigning housing to college students, and the classic firms and workers market. In these markets, externalities such as complementarities and peer effects severely complicate the preference ordering of each agent. Further, research has shown that externalities lead to serious problems for market stability and for developing efficient algorithms for finding stable matchings.

    In this paper we make the observation that peer effects are often the result of underlying social connections, and we explore a formulation of the many-to-one matching market where peer effects are derived from an underlying social network. Under this assumption, we prove that stable matchings always exist and characterize the set of stable matchings. Further, we prove bounds on how far the welfare of the worst-case stable matching can be from the welfare of the optimal matching, and find that the structure of the clustering of the social network plays a large role.
  • Jayakrishnan Nair, Adam Wierman and Bert Zwart
    Proceedings of IFIP Performance, 2011.
    Online services today are characterized by a highly congestion-sensitive user base that also experiences strong positive network effects. A majority of these services are supported by advertising and are offered for free to the end user. We study the problem of optimal capacity provisioning for a profit maximizing firm operating such an online service in the asymptotic regime of a large market size. We show that network effects heavily influence the optimal capacity provisioning strategy, as well as the profit of the firm. In particular, strong positive network effects allow the firm to operate the service with fewer servers, which translates to increased profit.
  • Jonatha Anselmi, Urtzi Ayesta and Adam Wierman
    Performance Evaluation, 2011. 68(11):986-1001.
    Also appeared in the Proceedings of IFIP Performance 2011.
    [show/hide abstract]
    We study a nonatomic congestion game with N parallel links, with each link under the control of a profit maximizing provider. Within this 'load balancing game', each provider has the freedom to set a price, or toll, for access to the link and seeks to maximize its own profit. Given fixed prices, a Wardrop equilibrium among users is assumed, under which all users choose paths of minimal and identical effective cost. Within this model we have oligopolistic price competition which, in equilibrium, gives rise to situations where neither providers nor users have incentives to adjust their prices or routes, respectively. In this context, we provide new results about the existence and efficiency of oligopolistic equilibria. Our main theorem shows that, when the number of providers is small, oligopolistic equilibria can be extremely inefficient; however, as the number of providers N grows, the oligopolistic equilibria become increasingly efficient (at a rate of 1/N) and, in the limit, match the socially optimal allocation.
  • Sherwin Doroudi, Ragavendran Gopalakrishnan and Adam Wierman
    Proceedings of ACM MAMA, 2011.
  • Avrim Blum, Eyal Even-Dar and Katrina Ligett
    Theory of Computing, 2010. 6:179-199.
    [show/hide abstract]
    There has been substantial work developing simple, efficient regret-minimizing algorithms for a wide class of repeated decision-making problems including online routing. These are adaptive strategies an individual can use that give strong guarantees on performance even in adversarially-changing environments. There has also been substantial work on analyzing properties of Nash equilibria in routing games. In this paper, we consider the question: if each player in a routing game uses a regret-minimizing strategy, will behavior converge to a Nash equilibrium? In general games the answer to this question is known to be no in a strong sense, but routing games have substantially more structure.

    In this paper we show that in the Wardrop setting of multicommodity flow and infinitesimal agents, behavior will approach Nash equilibrium (on most days, the cost of the flow will be close to the cost of the cheapest paths possible given that flow) at a rate that depends polynomially on the players' regret bounds and the maximum slope of any latency function. We also show that Price of Anarchy results may be applied to these approximate equilibria, and also consider the finite-size (non-infinitesimal) load-balancing model of Azar (1998). Our nonatomic results also apply to the more general class of congestion games.
  • George Christodoulou, Christine Chung, Katrina Ligett, Evangelia Pyrga and Rob van Stee
    Proceedings of WAOA, 2009. 86-97.
    [show/hide abstract]
    We continue the study of the effects of selfish behavior in the network design problem. We provide new bounds for the price of stability for network design with fair cost allocation for undirected graphs. We consider the most general case, for which the best known upper bound is the Harmonic number Hn, where n is the number of agents, and the best previously known lower bound is 12/7 ≈ 1.778.
    We present a nontrivial lower bound of 42/23 ≈ 1.8261. Furthermore, we show that for two players, the price of stability is exactly 4/3, while for three players it is at least 74/48 ≈ 1.542 and at most 1.65. These are the first improvements on the bound of Hn for general networks. In particular, this demonstrates a separation between the price of stability on undirected graphs and that on directed graphs, where Hn is tight. Previously, such a gap was only known for the cases where all players have a shared source, and for weighted players.
  • Sham Kakade, Adam Tauman Kalai and Katrina Ligett
    SIAM Journal on Computing, 2009. 39:1088-1106.
    [show/hide abstract]
    In an online linear optimization problem, on each period t, an online algorithm chooses st ∈ S from a fixed (possibly infinite) set S of feasible decisions. Nature (who may be adversarial) chooses a weight vector wt ∈ ℝn, and the algorithm incurs cost c(st, wt), where c is a fixed cost function that is linear in the weight vector. In the full-information setting, the vector wt is then revealed to the algorithm, and in the bandit setting, only the cost experienced, c(st, wt), is revealed. The goal of the online algorithm is to perform nearly as well as the best fixed s ∈ S in hindsight. Many repeated decision-making problems with weights fit naturally into this framework, such as online shortest-path, online traveling salesman problem (TSP), online clustering, and online weighted set cover. Previously, it was shown how to convert any efficient exact offline optimization algorithm for such a problem into an efficient online algorithm in both the full-information and the bandit settings, with average cost nearly as good as that of the best fixed s ∈ S in hindsight. However, in the case where the offline algorithm is an approximation algorithm with ratio α > 1, the previous approach worked only for special types of approximation algorithms. We show how to convert any offline approximation algorithm for a linear optimization problem into a corresponding online approximation algorithm, with a polynomial blowup in runtime. If the offline algorithm has an α-approximation guarantee, then the expected cost of the online algorithm on any sequence is not much larger than α times that of the best s ∈ S, where the best is chosen with the benefit of hindsight. Our main innovation is combining Zinkevich's algorithm for convex optimization with a geometric transformation that can be applied to any approximation algorithm. Standard techniques generalize the above result to the bandit setting, except that a 'barycentric spanner' for the problem is also (provably) necessary as input. Our algorithm can also be viewed as a method for playing large repeated games, where one can compute only approximate best responses, rather than best responses.
  • Ho-Lin Chen, Jason Marden and Adam Wierman
    Proceedings of IEEE Infocom, 2009.
    [show/hide abstract]
    Load balancing is a common approach to task assignment in distributed architectures. In this paper, we show that the degree of inefficiency in load balancing designs is highly dependent on the scheduling discipline used at each of the back-end servers. Traditionally, the back-end scheduler can be modeled as Processor Sharing (PS), in which case the degree of inefficiency grows linearly with the number of servers. However, if the back-end scheduler is changed to Shortest Remaining Processing Time (SRPT), the degree of inefficiency can be independent of the number of servers, instead depending only on the heterogeneity of the speeds of the servers. Further, switching the back-end scheduler to SRPT can provide significant improvements in the overall mean response time of the system as long as the heterogeneity of the server speeds is small.
  • Avrim Blum, MohammadTaghi Hajiaghayi, Katrina Ligett and Aaron Roth
    Proceedings of STOC, 2008.
    [show/hide abstract]
    We propose weakening the assumption made when studying the price of anarchy: Rather than assume that self-interested players will play according to a Nash equilibrium (which may even be computationally hard to find), we assume only that selfish players play so as to minimize their own regret. Regret minimization can be done via simple, efficient algorithms even in many settings where the number of action choices for each player is exponential in the natural parameters of the problem. We prove that despite our weakened assumptions, in several broad classes of games, this 'price of total anarchy' matches the Nash price of anarchy, even though play may never converge to Nash equilibrium. In contrast to the price of anarchy and the recently introduced price of sinking [15], which require all players to behave in a prescribed manner, we show that the price of total anarchy is in many cases resilient to the presence of Byzantine players, about whom we make no assumptions. Finally, because the price of total anarchy is an upper bound on the price of anarchy even in mixed strategies, for some games our results yield as corollaries previously unknown bounds on the price of anarchy in mixed strategies.
  • Christine Chung, Katrina Ligett, Kirk Pruhs and Aaron Roth
    Proceedings of SAGT, 2008.
    [show/hide abstract]
    We consider the solution concept of stochastic stability, and propose the price of stochastic anarchy as an alternative to the price of (Nash) anarchy for quantifying the cost of selfishness and lack of coordination in games. As a solution concept, the Nash equilibrium has disadvantages that the set of stochastically stable states of a game avoid: unlike Nash equilibria, stochastically stable states are the result of natural dynamics of computationally bounded and decentralized agents, and are resilient to small perturbations from ideal play. The price of stochastic anarchy can be viewed as a smoothed analysis of the price of anarchy, distinguishing equilibria that are resilient to noise from those that are not. To illustrate the utility of stochastic stability, we study the load balancing game on unrelated machines. This game has an unboundedly large price of Nash anarchy even when restricted to two players and two machines. We show that in the two player case, the price of stochastic anarchy is 2, and that even in the general case, the price of stochastic anarchy is bounded. We conjecture that the price of stochastic anarchy is 𝑂(𝑚), matching the price of strong Nash anarchy without requiring player coordination. We expect that stochastic stability will be useful in understanding the relative stability of Nash equilibria in other games where the worst equilibria seem to be inherently brittle.
  • Ho-Lin Chen, Jason Marden and Adam Wierman
    Proceedings of ACM MAMA, 2008.
    [show/hide abstract]
    Load balancing is a common approach to distributed task assignment. In this paper we study the degree of inefficiency in load balancing designs. We show that the degree of inefficiency is highly dependent on the back-end scheduler.
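
Several entries in this section analyze what happens when players use regret-minimizing (no-regret) learning rules rather than playing a Nash equilibrium outright. The following minimal sketch is written purely for illustration and is not taken from any of the papers above: it runs the multiplicative-weights (Hedge) update for a nonatomic traffic population choosing between two parallel links in the classic Pigou example. The cost functions, learning rate, and horizon are assumptions chosen only so that the drift toward the Wardrop equilibrium is visible.

    # Illustrative sketch (not from the papers above): Hedge/no-regret dynamics
    # for a nonatomic population split across two parallel links. In this
    # Pigou-style example the flow drifts toward the Wardrop equilibrium
    # (all traffic on link A), the kind of behavior the regret-based analyses
    # in this section study.
    import math

    def link_costs(x_a):
        """Latencies when a fraction x_a of the traffic uses link A.
        Assumed Pigou example: c_A(x) = x, c_B(x) = 1."""
        return x_a, 1.0

    def hedge_dynamics(rounds=2000, eta=0.1):
        w = [1.0, 1.0]                     # weights for links A and B
        avg_flow_a = 0.0
        for t in range(1, rounds + 1):
            x_a = w[0] / (w[0] + w[1])     # current fraction routed on link A
            c_a, c_b = link_costs(x_a)
            # Hedge update: exponentially down-weight the costlier link.
            w[0] *= math.exp(-eta * c_a)
            w[1] *= math.exp(-eta * c_b)
            avg_flow_a += (x_a - avg_flow_a) / t
        return w[0] / (w[0] + w[1]), avg_flow_a

    if __name__ == "__main__":
        final, average = hedge_dynamics()
        print(f"final fraction on link A: {final:.3f} (Wardrop equilibrium = 1.0)")
        print(f"time-averaged fraction on link A: {average:.3f}")

With these assumed choices the fraction of traffic on link A climbs toward 1, echoing the kind of convergence-to-Wardrop guarantee discussed in the Blum, Even-Dar and Ligett entry.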

Privacy

  • Moritz Hardt, Katrina Ligett and Frank McSherry
    Proceedings of NIPS, 2012.
    [show/hide abstract]
    We present a new algorithm for differentially private data release, based on a simple combination of the Exponential Mechanism with the Multiplicative Weights update rule. Our MWEM algorithm achieves the best known, and nearly optimal, theoretical guarantees, while at the same time being simple to implement and experimentally more accurate on actual data sets than existing techniques.
  • Katrina Ligett and Aaron Roth
    Proceedings of WINE, 2012.
    [show/hide abstract]
    In this paper, we consider the problem of estimating a potentially sensitive (individually stigmatizing) statistic on a population. In our model, individuals are concerned about their privacy, and experience some cost as a function of their privacy loss. Nevertheless, they would be willing to participate in the survey if they were compensated for their privacy cost. These cost functions are not publicly known, however, nor do we make Bayesian assumptions about their form or distribution. Individuals are rational and will misreport their costs for privacy if doing so is in their best interest. Ghosh and Roth recently showed that in this setting, when costs for privacy loss may be correlated with private types and individuals value differential privacy, no individually rational direct revelation mechanism can compute any non-trivial estimate of the population statistic. In this paper, we circumvent this impossibility result by proposing a modified notion of how individuals experience cost as a function of their privacy loss, and by giving a mechanism which does not operate by direct revelation. Instead, our mechanism has the ability to randomly approach individuals from a population and make them a take-it-or-leave-it offer. This is intended to model the abilities of a surveyor who may stand on a street corner and approach passers-by.
  • Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth and Kunal Talwar
    Proceedings of SODA, 2010.
    [show/hide abstract]
    Consider the following problem: given a metric space, some of whose points are 'clients,' select a set of at most k facility locations to minimize the average distance from the clients to their nearest facility. This is just the well-studied k-median problem, for which many approximation algorithms and hardness results are known. Note that the objective function encourages opening facilities in areas where there are many clients, and given a solution, it is often possible to get a good idea of where the clients are located. This raises the following quandary: what if the locations of the clients are sensitive information that we would like to keep private? Is it even possible to design good algorithms for this problem that preserve the privacy of the clients?

    In this paper, we initiate a systematic study of algorithms for discrete optimization problems in the framework of differential privacy (which formalizes the idea of protecting the privacy of individual input elements). We show that many such problems indeed have good approximation algorithms that preserve differential privacy; this is even in cases where it is impossible to preserve cryptographic definitions of privacy while computing any non-trivial approximation to even the value of an optimal solution, let alone the entire solution.

    Apart from the k-median problem, we consider the problems of vertex and set cover, min-cut, facility location, and Steiner tree, and give approximation algorithms and lower bounds for these problems. We also consider the recently introduced submodular maximization problem, 'Combinatorial Public Projects' (CPP), shown by Papadimitriou et al. [PSS08] to be inapproximable to subpolynomial multiplicative factors by any efficient and truthful algorithm. We give a differentially private (and hence approximately truthful) algorithm that achieves a logarithmic additive approximation.
  • Shuheng Zhou, Katrina Ligett and Larry Wasserman
    Proceedings of ISIT, 2009.
    [show/hide abstract]
    This work studies formal utility and privacy guarantees for a simple multiplicative database transformation, where the data are compressed by a random linear or affine transformation, reducing the number of data records substantially, while preserving the number of original input variables. We provide an analysis framework inspired by a recent concept known as differential privacy. Our goal is to show that, despite the general difficulty of achieving the differential privacy guarantee, it is possible to publish synthetic data that are useful for a number of common statistical learning applications. This includes high dimensional sparse regression [24], principal component analysis (PCA), and other statistical measures [16] based on the covariance of the initial data.
  • Avrim Blum, Katrina Ligett and Aaron Roth
    Proceedings of STOC, 2008.
    [show/hide abstract]
    We demonstrate that, ignoring computational constraints, it is possible to release privacy-preserving databases that are useful for all queries over a discretized domain from any given concept class with polynomial VC-dimension. We show a new lower bound for releasing databases that are useful for halfspace queries over a continuous domain. Despite this, we give a privacy-preserving polynomial time algorithm that releases information useful for all halfspace queries, for a slightly relaxed definition of usefulness. Inspired by learning theory, we introduce a new notion of data privacy, which we call distributional privacy, and show that it is strictly stronger than the prevailing privacy notion, differential privacy.
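
The MWEM entry above (Hardt, Ligett and McSherry) is built from a simple combination of the Exponential Mechanism with the Multiplicative Weights update rule. The sketch below is an illustrative, simplified rendering of that combination for counting queries over a small discrete domain; the query representation, the way Laplace noise is generated, and the even split of the privacy budget across rounds are assumptions made for brevity, so consult the paper for the actual algorithm and its guarantees.

    # Simplified, illustrative MWEM-style sketch for counting queries over a
    # small discrete domain (assumed: queries are 0/1 vectors over the domain,
    # privacy budget split evenly across T rounds).
    import math
    import random

    def exponential_mechanism(scores, eps):
        """Sample an index with probability proportional to exp(eps * score / 2)."""
        m = max(scores)
        weights = [math.exp(eps * (s - m) / 2.0) for s in scores]
        r = random.uniform(0, sum(weights))
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                return i
        return len(weights) - 1

    def laplace(scale):
        """Laplace noise generated as the difference of two exponentials."""
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def mwem(true_hist, queries, eps, T):
        """true_hist: counts over the domain; queries: list of 0/1 vectors."""
        n = sum(true_hist)
        synth = [n / len(true_hist)] * len(true_hist)   # start from uniform

        def evaluate(q, hist):
            return sum(qi * hi for qi, hi in zip(q, hist))

        for _ in range(T):
            # 1. Privately select a query with large error (exponential mechanism).
            errors = [abs(evaluate(q, true_hist) - evaluate(q, synth)) for q in queries]
            q = queries[exponential_mechanism(errors, eps / (2 * T))]
            # 2. Noisy measurement of the selected query (Laplace mechanism).
            measurement = evaluate(q, true_hist) + laplace(2 * T / eps)
            # 3. Multiplicative-weights update of the synthetic histogram.
            estimate = evaluate(q, synth)
            synth = [s * math.exp(qi * (measurement - estimate) / (2.0 * n))
                     for s, qi in zip(synth, q)]
            scale = n / sum(synth)
            synth = [s * scale for s in synth]
        return synth

The synthetic histogram returned here can then be queried in place of the private data, which is the intended use of the data-release algorithms discussed in this section.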

Revealed preference theory

  • Federico Echenique, Daniel Golovin and Adam Wierman
    Preprint. Extension of a paper that appeared in the Proceedings of ACM EC, 2011.
    [show/hide abstract]
    Consumption theory assumes that consumers possess infinite computational abilities. Proponents of bounded rationality want to instead require that any model of consumer behavior incorporate computational constraints. In this paper, we establish that this requirement has no testable implications. Any consumption data set that is compatible with a rational consumer is also compatible with a rational and computationally bounded consumer. The result extends to data on multiple agents interacting in a market economy. We present sufficient conditions on observed market outcomes such that they are compatible with an instance of the model for which Walrasian equilibrium is easy to compute. Our result motivates a general approach for posing questions about the empirical content of computational constraints: the revealed preference approach to computational complexity. The approach complements the conventional worst-case view of computational complexity in important ways, and is methodologically close to mainstream economics.
  • Federico Echenique, Daniel Golovin and Adam Wierman
    Proceedings of ACM EC, 2011.
    [show/hide abstract]
    One of the main building blocks of economics is the theory of the consumer, which postulates that consumers are utility maximizing. However, from a computational perspective, this model is called into question because the task of utility maximization subject to a budget constraint is computationally hard in the worst-case under reasonable assumptions. In this paper, we study the empirical consequences of strengthening consumer choice theory to enforce that utilities are computationally easy to maximize. We prove the possibly surprising result that computational constraints have no empirical consequences whatsoever for consumer choice theory. That is, a data set is consistent with a utility maximizing consumer if and only if a data set is consistent with a utility maximizing consumer having a utility function that can be maximized in strongly polynomial time.

    Our result motivates a general approach for posing questions about the empirical content of computational constraints: the revealed preference approach to computational complexity. The approach complements the conventional worst-case view of computational complexity in important ways, and is methodologically close to mainstream economics.
  • Federico Echenique, Daniel Golovin and Adam Wierman
    ACM SigEcom Exchange, 2011.
    [show/hide abstract]
    Recent results in complexity theory suggest that various economic theories require agents to solve intractable problems. However, such results assume the agents are optimizing explicit utility functions, whereas the economic theories merely assume the agents are rational, where rational behavior is defined via some optimization problem. Might making rational choices be easier than solving the corresponding optimization problem? For at least one major economic theory, the theory of the consumer, we find this is indeed the case.
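
The entries above ask when finite consumption data are consistent with utility maximization. By Afriat's theorem, such consistency is equivalent to the data satisfying the Generalized Axiom of Revealed Preference (GARP); the sketch below is the standard GARP check, included only for illustration (the example prices and bundles are made up and do not come from any of the papers).

    # Standard revealed-preference consistency check (GARP), illustrative only.
    def dot(p, x):
        return sum(pi * xi for pi, xi in zip(p, x))

    def satisfies_garp(prices, bundles):
        """prices[t], bundles[t]: price vector and chosen bundle at observation t."""
        T = len(prices)
        # Direct revealed preference: x^t R x^s if bundle s was affordable at t.
        R = [[dot(prices[t], bundles[t]) >= dot(prices[t], bundles[s]) for s in range(T)]
             for t in range(T)]
        # Transitive closure (Warshall).
        for k in range(T):
            for i in range(T):
                for j in range(T):
                    R[i][j] = R[i][j] or (R[i][k] and R[k][j])
        # GARP: if x^t is (indirectly) revealed preferred to x^s, then x^s must
        # not be strictly directly revealed preferred to x^t.
        for t in range(T):
            for s in range(T):
                if R[t][s] and dot(prices[s], bundles[s]) > dot(prices[s], bundles[t]):
                    return False
        return True

    if __name__ == "__main__":
        prices = [(1.0, 2.0), (2.0, 1.0)]      # hypothetical observations
        bundles = [(3.0, 1.0), (1.0, 3.0)]
        print("consistent with utility maximization:", satisfies_garp(prices, bundles))

In this made-up example, each observed bundle was unaffordable at the other observation's prices, so the data impose no cross-restrictions and the check passes.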

Scheduling and routing games

  • Jonatha Anselmi, Danilo Ardagna and Adam Wierman
    Preprint.
    [show/hide abstract]
    This paper proposes a model to study the interaction of pricing and congestion in the cloud computing marketplace. Specifically, we develop a three-tier market model that captures a marketplace with users purchasing services from Software-as-Service (SaaS) providers, which in turn purchase computing resources from either Provider-as-a-Service (PaaS) providers or Infrastructure-as-a-Service (IaaS) providers. Within each level, we define and characterize competitive equilibria. Further, we use these characterizations to understand the relative profitability of SaaSs and PaaSs/IaaSs and to understand the impact of price competition on the user experienced performance. Our results highlight that both of these depend fundamentally on the degree to which congestion results from shared or dedicated resources in the cloud. We evaluate the inefficiency of user performance by studying the 'Price of Anarchy' and show that it grows with the non-linearity of the congestion functions for cloud resources.
  • Jayakrishnan Nair, Adam Wierman and Bert Zwart
    Preprint.
    [show/hide abstract]
    In this paper, we consider the problem of capacity provisioning for an online service supported by advertising. We analyse the strategic interaction between the service provider and the user base in this setting, modeling positive network effects, as well as congestion sensitivity in the user base. We focus specifically on the influence of positive network effects, as well as the impact of non-cooperative behavior in the user base on the firm's capacity provisioning decision and its profit. Our analysis reveals that stronger positive network effects, as well as non-cooperation in the user base, drive the service into a more congested state and lead to increased profit for the service provider. However, the impact of non-cooperation, or 'anarchy' in the user base strongly dominates the impact of network effects.
  • Vinod Ramaswamy, Sachin Adlakha, Srinivas Shakkottai and Adam Wierman
    Proceedings of Allerton, 2012.
    [show/hide abstract]
    The rapid growth of content distribution on the Internet has brought with it proportional increases in the costs of distributing content. Adding to distribution costs is the fact that digital content is easily duplicable, and hence can be shared in an illicit peer-to-peer (P2P) manner that generates no revenue for the content provider. In this paper, we study whether the content provider can recover lost revenue through a more innovative approach to distribution. In particular, we evaluate the benefits of a hybrid revenue-sharing system that combines a legitimate P2P swarm and a centralized client-server approach. We show how the revenue recovered by the content provider using a server-supported legitimate P2P swarm can exceed that of the monopolistic scheme by an order of magnitude. Our analytical results are obtained in a fluid model, and supported by stochastic simulations.
  • Jayakrishnan Nair, Adam Wierman and Bert Zwart
    Proceedings of IFIP Performance, 2011.
    [show/hide abstract]
    Online services today are characterized by a highly congestion sensitive user base, that also experiences strong positive network effects. A majority of these services are supported by advertising and are offered for free to the end user. We study the problem of optimal capacity provisioning for a profit maximizing firm operating such an online service in the asymptotic regime of a large market size. We show that network effects heavily influence the optimal capacity provisioning strategy, as well as the profit of the firm. In particular, strong positive network effects allow the firm to operate the service with fewer servers, which translates to increased profit.
  • Jonatha Anselmi, Urtzi Ayesta and Adam Wierman
    Performance Evaluation, 2011. 68(11):986-1001.
    Also appeared in the Proceedings of IFIP Performance 2011.
    [show/hide abstract]
    We study a nonatomic congestion game with N parallel links, with each link under the control of a profit maximizing provider. Within this 'load balancing game', each provider has the freedom to set a price, or toll, for access to the link and seeks to maximize its own profit. Given fixed prices, a Wardrop equilibrium among users is assumed, under which all users choose paths of minimal and identical effective cost. Within this model we have oligopolistic price competition which, in equilibrium, gives rise to situations where neither providers nor users have incentives to adjust their prices or routes, respectively. In this context, we provide new results about the existence and efficiency of oligopolistic equilibria. Our main theorem shows that, when the number of providers is small, oligopolistic equilibria can be extremely inefficient; however, as the number of providers N grows, the oligopolistic equilibria become increasingly efficient (at a rate of 1/N) and, in the limit, match the socially optimal allocation.
  • Sherwin Doroudi, Ragavendran Gopalakrishnan and Adam Wierman
    Proceedings of ACM MAMA, 2011.
  • Avrim Blum, Eyal Even-Dar and Katrina Ligett
    Theory of Computing, 2010. 6:179-199.
    [show/hide abstract]
    There has been substantial work developing simple, efficient regret-minimizing algorithms for a wide class of repeated decision-making problems including online routing. These are adaptive strategies an individual can use that give strong guarantees on performance even in adversarially-changing environments. There has also been substantial work on analyzing properties of Nash equilibria in routing games. In this paper, we consider the question: if each player in a routing game uses a regret-minimizing strategy, will behavior converge to a Nash equilibrium? In general games the answer to this question is known to be no in a strong sense, but routing games have substantially more structure.

    In this paper we show that in the Wardrop setting of multicommodity flow and infinitesimal agents, behavior will approach Nash equilibrium (on most days, the cost of the flow will be close to the cost of the cheapest paths possible given that flow) at a rate that depends polynomially on the players' regret bounds and the maximum slope of any latency function. We also show that Price of Anarchy results may be applied to these approximate equilibria, and also consider the finite-size (non-infinitesimal) load-balancing model of Azar (1998). Our nonatomic results also apply to the more general class of congestion games.
  • George Christodoulou, Katrina Ligett and Evangelia Pyrga
    Proceedings of ICALP, 2010.
    [show/hide abstract]
    In many communications settings, such as wired and wireless local-area networks, when multiple users attempt to access a communication channel at the same time, a conflict results and none of the communications are successful. Contention resolution is the study of distributed transmission and retransmission protocols designed to maximize notions of utility such as channel utilization in the face of blocking communications.

    An additional issue to be considered in the design of such protocols is that selfish users may have incentive to deviate from the prescribed behavior, if another transmission strategy increases their utility. The work of Fiat et al. [8] addresses this issue by constructing an asymptotically optimal incentive-compatible protocol. However, their protocol assumes the cost of any single transmission is zero, and the protocol completely collapses under non-zero transmission costs.

    In this paper, we treat the case of non-zero transmission cost 𝑐. We present asymptotically optimal contention resolution protocols that are robust to selfish users, in two different channel feedback models. Our main result is in the Collision Multiplicity Feedback model, where after each time slot, the number of attempted transmissions is returned as feedback to the users. In this setting, we give a protocol that has expected cost 2𝑛 + 𝑐 log 𝑛 and is in 𝑜(1)-equilibrium, where 𝑛 is the number of users.
  • George Christodoulou, Christine Chung, Katrina Ligett, Evangelia Pyrga and Rob van Stee
    Proceedings of WAOA, 2009. 86-97.
    [show/hide abstract]
    We continue the study of the effects of selfish behavior in the network design problem. We provide new bounds for the price of stability for network design with fair cost allocation for undirected graphs. We consider the most general case, for which the best known upper bound is the Harmonic number Hn, where n is the number of agents, and the best previously known lower bound is 12/7 ≈ 1.778.
    We present a nontrivial lower bound of 42/23 ≈ 1.8261. Furthermore, we show that for two players, the price of stability is exactly 4/3, while for three players it is at least 74/48 ≈ 1.542 and at most 1.65. These are the first improvements on the bound of Hn for general networks. In particular, this demonstrates a separation between the price of stability on undirected graphs and that on directed graphs, where Hn is tight. Previously, such a gap was only known for the cases where all players have a shared source, and for weighted players.
  • Ho-Lin Chen, Jason Marden and Adam Wierman
    Proceedings of IEEE Infocom, 2009.
    [show/hide abstract]
    Load balancing is a common approach to task assignment in distributed architectures. In this paper, we show that the degree of inefficiency in load balancing designs is highly dependent on the scheduling discipline used at each of the back-end servers. Traditionally, the back-end scheduler can be modeled as Processor Sharing (PS), in which case the degree of inefficiency grows linearly with the number of servers. However, if the back-end scheduler is changed to Shortest Remaining Processing Time (SRPT), the degree of inefficiency can be independent of the number of servers, instead depending only on the heterogeneity of the speeds of the servers. Further, switching the back-end scheduler to SRPT can provide significant improvements in the overall mean response time of the system as long as the heterogeneity of the server speeds is small.
  • Avrim Blum, MohammadTaghi Hajiaghayi, Katrina Ligett and Aaron Roth
    Proceedings of STOC, 2008.
    [show/hide abstract]
    We propose weakening the assumption made when studying the price of anarchy: Rather than assume that self-interested players will play according to a Nash equilibrium (which may even be computationally hard to find), we assume only that selfish players play so as to minimize their own regret. Regret minimization can be done via simple, efficient algorithms even in many settings where the number of action choices for each player is exponential in the natural parameters of the problem. We prove that despite our weakened assumptions, in several broad classes of games, this 'price of total anarchy' matches the Nash price of anarchy, even though play may never converge to Nash equilibrium. In contrast to the price of anarchy and the recently introduced price of sinking [15], which require all players to behave in a prescribed manner, we show that the price of total anarchy is in many cases resilient to the presence of Byzantine players, about whom we make no assumptions. Finally, because the price of total anarchy is an upper bound on the price of anarchy even in mixed strategies, for some games our results yield as corollaries previously unknown bounds on the price of anarchy in mixed strategies.
  • Christine Chung, Katrina Ligett, Kirk Pruhs and Aaron Roth
    Proceedings of SAGT, 2008.
    [show/hide abstract]
    We consider the solution concept of stochastic stability, and propose the price of stochastic anarchy as an alternative to the price of (Nash) anarchy for quantifying the cost of selfishness and lack of coordination in games. As a solution concept, the Nash equilibrium has disadvantages that the set of stochastically stable states of a game avoid: unlike Nash equilibria, stochastically stable states are the result of natural dynamics of computationally bounded and decentralized agents, and are resilient to small perturbations from ideal play. The price of stochastic anarchy can be viewed as a smoothed analysis of the price of anarchy, distinguishing equilibria that are resilient to noise from those that are not. To illustrate the utility of stochastic stability, we study the load balancing game on unrelated machines. This game has an unboundedly large price of Nash anarchy even when restricted to two players and two machines. We show that in the two player case, the price of stochastic anarchy is 2, and that even in the general case, the price of stochastic anarchy is bounded. We conjecture that the price of stochastic anarchy is 𝑂(𝑚), matching the price of strong Nash anarchy without requiring player coordination. We expect that stochastic stability will be useful in understanding the relative stability of Nash equilibria in other games where the worst equilibria seem to be inherently brittle.
  • Ho-Lin Chen, Jason Marden and Adam Wierman
    Proceedings of ACM MAMA, 2008.
    [show/hide abstract]
    Load balancing is a common approach to distributed task assignment. In this paper we study the degree of inefficiency in load balancing designs. We show that the degree of inefficiency is highly dependent on the back-end scheduler.
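
Many of the entries in this section measure the inefficiency of selfish routing or load balancing through the price of anarchy. The sketch below is illustrative and not drawn from any of the papers above: it computes the Wardrop equilibrium and the socially optimal split of one unit of traffic over two parallel links and reports their cost ratio. The affine latency functions are an assumption chosen to reproduce the classic Pigou bound of 4/3.

    # Illustrative price-of-anarchy computation for two parallel links
    # (assumed latencies; Pigou example by default).
    def total_cost(x, latencies):
        """Total latency when a fraction x uses link 0 and 1 - x uses link 1."""
        l0, l1 = latencies
        return x * l0(x) + (1 - x) * l1(1 - x)

    def wardrop_split(latencies):
        """Split at which no user can reduce its own latency by switching links."""
        l0, l1 = latencies
        if l0(1.0) <= l1(0.0):          # even fully loaded, link 0 is no worse
            return 1.0
        if l1(1.0) <= l0(0.0):          # symmetric corner case for link 1
            return 0.0
        lo, hi = 0.0, 1.0               # interior equilibrium: equalize latencies
        for _ in range(100):            # bisection on l0(x) - l1(1 - x)
            mid = (lo + hi) / 2
            if l0(mid) < l1(1 - mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    def optimal_split(latencies, steps=10**5):
        """Brute-force search for the socially optimal split (fine for a sketch)."""
        best = min(range(steps + 1), key=lambda i: total_cost(i / steps, latencies))
        return best / steps

    if __name__ == "__main__":
        pigou = (lambda x: x, lambda x: 1.0)     # l0(x) = x, l1(x) = 1
        eq = wardrop_split(pigou)
        opt = optimal_split(pigou)
        poa = total_cost(eq, pigou) / total_cost(opt, pigou)
        print(f"equilibrium split: {eq:.3f}, optimal split: {opt:.3f}, PoA: {poa:.3f}")

Swapping in other latency functions (the lambdas in pigou) gives a quick feel for how the gap between equilibrium and optimum behaves in such parallel-link games.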

Social networks

  • The cost of an epidemic over a complex network: A random matrix approach
    Subhonmesh Bose, Elizabeth Bodine-Baron, Babak Hassibi and Adam Wierman
    Preprint.
    [show/hide abstract]
    In this paper we quantify the total cost of an epidemic spreading through a social network, accounting for both the immunization and disease costs. Previous research has typically focused on determining the optimal strategy to limit the lifetime of a disease, without considering the cost of such strategies. In the large graph limit, we calculate the exact expected disease cost for a general random graph, and we illustrate it for the specific example of an Erdos-Renyi network. We also give an upper bound on the expected disease cost for finite-size graphs. Finally, we study how to optimally perform a one-shot immunization to minimize the social cost of a disease, including both the cost of the disease and the cost of immunization.
  • Elizabeth Bodine-Baron, Christina Lee, Anthony Chong, Babak Hassibi and Adam Wierman
    Proceedings of SAGT, 2011.
    [show/hide abstract]
    Many-to-one matching markets exist in numerous different forms, such as college admissions, matching medical interns to hospitals for residencies, assigning housing to college students, and the classic firms and workers market. In these markets, externalities such as complementarities and peer effects severely complicate the preference ordering of each agent. Further, research has shown that externalities lead to serious problems for market stability and for developing efficient algorithms for finding stable matchings.

    In this paper we make the observation that peer effects are often the result of underlying social connections, and we explore a formulation of the many-to-one matching market where peer effects are derived from an underlying social network. Under this assumption, we prove that stable matchings always exist and characterize the set of stable matchings. Further, we prove bounds on how far welfare of the worst-case stable matching can be from the welfare of the optimal matching, and find that the structure of the clustering of the social network plays a large role.
  • Elizabeth Bodine-Baron, Subhonmesh Bose, Babak Hassibi and Adam Wierman
    Proceedings of ICST Gamenets, 2011.
    [show/hide abstract]
    In this paper we quantify the total cost of an epidemic spreading through a social network, accounting for both the immunization and disease costs. Previous research has typically focused on determining the optimal strategy to limit the lifetime of a disease, without considering the cost of such strategies. In the large graph limit, we calculate the exact expected disease cost for a general random graph, and we illustrate it for the specific example of an Erdos-Renyi network. We also give an upper bound on the expected disease cost for finite-size graphs. Finally, we study how to optimally perform a one-shot immunization to minimize the social cost of a disease, including both the cost of the disease and the cost of immunization.
  • Elizabeth Bodine-Baron, Babak Hassibi and Adam Wierman
    IEEE J. of Selected Topics in Signal Processing, 2010.
    [show/hide abstract]
    This paper focuses on a generalization of stochastic Kronecker graphs, introducing a Kronecker-like operator and defining a family of generator matrices H dependent on distances between nodes in a specified graph embedding. We prove that any lattice-based network model with sufficiently small distance-dependent connection probability will have a Poisson degree distribution and provide a general framework to prove searchability for such a network. Using this framework, we focus on a specific example of an expanding hypercube and discuss the similarities and differences of such a model with recently proposed network models based on a hidden metric space. We also prove that a greedy forwarding algorithm can find very short paths of length O((log log 𝑛)²) on the hypercube with 𝑛 nodes, demonstrating that distance-dependent Kronecker graphs can generate searchable network models.
  • Elizabeth Bodine-Baron, Babak Hassibi and Adam Wierman
    Proceedings of Allerton, 2009.
    [show/hide abstract]
    This paper describes an extension to stochastic Kronecker graphs that provides the special structure required for searchability, by defining a 'distance'-dependent Kronecker operator. We show how this extension of Kronecker graphs can generate several existing social network models, such as the Watts-Strogatz small-world model and Kleinberg's lattice-based model. We focus on a specific example of an expanding hypercube, reminiscent of recently proposed social network models based on a hidden hyperbolic metric space, and prove that a greedy forwarding algorithm can find very short paths of length O((log log n)^2) for graphs with n nodes.
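
The searchability entries above study when greedy, purely local forwarding finds short paths. The sketch below is an illustrative experiment in the spirit of Kleinberg's lattice model referenced in the last entry, not code from the papers: it augments a torus grid with one distance-biased long-range contact per node and measures the length of greedily forwarded routes. The grid size, exponent, and number of sampled pairs are assumptions.

    # Illustrative small-world routing experiment: torus grid plus one
    # long-range contact per node, drawn with probability proportional to
    # (lattice distance)^(-alpha), routed greedily by lattice distance.
    import random

    def torus_distance(u, v, n):
        dx = abs(u[0] - v[0]); dy = abs(u[1] - v[1])
        return min(dx, n - dx) + min(dy, n - dy)

    def build_long_range_contacts(n, alpha=2.0):
        nodes = [(i, j) for i in range(n) for j in range(n)]
        contacts = {}
        for u in nodes:
            others = [v for v in nodes if v != u]
            weights = [torus_distance(u, v, n) ** (-alpha) for v in others]
            contacts[u] = random.choices(others, weights=weights, k=1)[0]
        return contacts

    def greedy_route(source, target, contacts, n):
        hops, current = 0, source
        while current != target:
            i, j = current
            neighbours = [((i + 1) % n, j), ((i - 1) % n, j),
                          (i, (j + 1) % n), (i, (j - 1) % n), contacts[current]]
            # Forward to the neighbour closest to the target in lattice distance;
            # a grid neighbour always makes progress, so the loop terminates.
            current = min(neighbours, key=lambda v: torus_distance(v, target, n))
            hops += 1
        return hops

    if __name__ == "__main__":
        n = 20
        contacts = build_long_range_contacts(n)
        pairs = [(tuple(random.choices(range(n), k=2)), tuple(random.choices(range(n), k=2)))
                 for _ in range(200)]
        lengths = [greedy_route(s, t, contacts, n) for s, t in pairs]
        print("average greedy path length:", sum(lengths) / len(lengths))

Because every hop strictly decreases lattice distance to the target, the greedy routes here can never exceed the plain grid distance and are typically shorter, which is the qualitative effect the entries above make precise.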

Theory of the consumer

  • Federico Echenique, Daniel Golovin and Adam Wierman
    Preprint. Extension of a paper that appeared in the Proceedings of ACM EC, 2011.
    [show/hide abstract]
    Consumption theory assumes that consumers possess infinite computational abilities. Proponents of bounded rationality want to instead require that any model of consumer behavior incorporate computational constraints. In this paper, we establish that this requirement has no testable implications. Any consumption data set that is compatible with a rational consumer is also compatible with a rational and computationally bounded consumer. The result extends to data on multiple agents interacting in a market economy. We present sufficient conditions on observed market outcomes such that they are compatible with an instance of the model for which Walrasian equilibrium is easy to compute. Our result motivates a general approach for posing questions about the empirical content of computational constraints: the revealed preference approach to computational complexity. The approach complements the conventional worst-case view of computational complexity in important ways, and is methodologically close to mainstream economics.
  • Federico Echenique, Daniel Golovin and Adam Wierman
    Proceedings of ACM EC, 2011.
    [show/hide abstract]
    One of the main building blocks of economics is the theory of the consumer, which postulates that consumers are utility maximizing. However, from a computational perspective, this model is called into question because the task of utility maximization subject to a budget constraint is computationally hard in the worst-case under reasonable assumptions. In this paper, we study the empirical consequences of strengthening consumer choice theory to enforce that utilities are computationally easy to maximize. We prove the possibly surprising result that computational constraints have no empirical consequences whatsoever for consumer choice theory. That is, a data set is consistent with a utility maximizing consumer if and only if a data set is consistent with a utility maximizing consumer having a utility function that can be maximized in strongly polynomial time.

    Our result motivates a general approach for posing questions about the empirical content of computational constraints: the revealed preference approach to computational complexity. The approach complements the conventional worst-case view of computational complexity in important ways, and is methodologically close to mainstream economics.