Sustainable IT Publications

Sort by year, by subject, by author, or by venue.

By-year quick links: [Favorites] [Preprints] [2015] [2014] [2013] [2012] [2011] [2010] [2009] [2008]

By-subject quick links: [Algorithmic Game Theory] [Awardee] [Congestion Control] [Control Theory] [Data Centers] [Demand Response] [Dynamic Capacity Provisioning] [Electricity Markets] [Geographical Load Balancing] [Large Deviations] [Market Power] [Network Economics] [Networking] [Online Optimization] [Queueing Games] [Renewable Energy] [Scheduling] [Scheduling Theory] [Smart Grid] [Speed Scaling] [Sustainable IT]


Algorithmic Game Theory

  • Ho-Lin Chen, Jason Marden and Adam Wierman
    Proceedings of ACM MAMA, 2008.
    [show/hide abstract]
    Load balancing is a common approach to distributed task assignment. In this paper we study the degree of inefficiency in load balancing designs. We show that the degree of inefficiency is highly dependent on the back-end scheduler.
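
As a rough numerical companion to the entry above, the sketch below compares the selfish, delay-equalizing split of load across two servers with the split that minimizes overall mean delay. The M/M/1 server model and all parameter values are assumptions made for illustration; the paper's analysis covers general scheduler designs.

```python
# Assumed two-server M/M/1 model; not the paper's general setting.
MU1, MU2 = 10.0, 5.0      # service rates of two heterogeneous servers
LAM = 9.0                 # total arrival rate

def avg_delay(l1):
    """Mean response time when load l1 goes to server 1 and LAM - l1 to server 2."""
    l2 = LAM - l1
    if l1 < 0 or l2 < 0 or l1 >= MU1 or l2 >= MU2:
        return float("inf")
    return (l1 / (MU1 - l1) + l2 / (MU2 - l2)) / LAM

# Wardrop (selfish) equilibrium: every user sees the same delay, so
# 1/(MU1 - l1) = 1/(MU2 - l2), i.e. l1 = (LAM + MU1 - MU2) / 2 (clamped).
l1_eq = min(max((LAM + MU1 - MU2) / 2, 0.0), LAM)

# Socially optimal split: brute-force search over feasible splits.
l1_opt = min((i * LAM / 10000 for i in range(10001)), key=avg_delay)

print("equilibrium split :", round(l1_eq, 3), " mean delay", round(avg_delay(l1_eq), 3))
print("optimal split     :", round(l1_opt, 3), " mean delay", round(avg_delay(l1_opt), 3))
print("inefficiency ratio:", round(avg_delay(l1_eq) / avg_delay(l1_opt), 3))
```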

Awardee

  • Minghong Lin, Jian Tan, Adam Wierman and Li Zhang
    Performance Evaluation, 2013. 70(10):720-735. This work also appeared in the Proceedings of IFIP Performance, 2013, where it received the "Best Student Paper" award. It was one of the ten most downloaded papers of Performance Evaluation during Fall and Winter 2013.
    [show/hide abstract]
    MapReduce is a scalable parallel computing framework for big data processing. It exhibits multiple processing phases, and thus an efficient job scheduling mechanism is crucial for ensuring efficient resource utilization. This paper studies the scheduling challenge that results from the overlapping of the "map" and "shuffle" phases in MapReduce. We propose a new, general model for this scheduling problem, and validate this model using cluster experiments. Further, we prove that scheduling to minimize average response time in this model is strongly NP-hard in the offline case and that no online algorithm can be constant-competitive. However, we provide two online algorithms that match the performance of the offline optimal when given a slightly faster service rate (i.e., in the resource augmentation framework). Finally, we validate the algorithms using a workload trace from a Google cluster and show that the algorithms are near optimal in practical settings.
  • Minghong Lin, Adam Wierman, Lachlan L. H. Andrew and Eno Thereska
    IEEE Transactions on Networking, 2013. 21:1378-1391. Received the 2014 IEEE Communication Society William R. Bennett Prize. Extended version of a paper that appeared in IEEE Infocom, 2011.
    [show/hide abstract]
    Power consumption imposes a significant cost for data centers implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of predictably low load. This paper investigates how much can be saved by dynamically `right-sizing' the data center by turning off servers during such periods, and how to achieve that saving via an online algorithm. We prove that the optimal offline algorithm for dynamic right-sizing has a simple structure when viewed in reverse time, and this structure is exploited to develop a new `lazy' online algorithm, which is proven to be 3-competitive. We validate the algorithm using traces from two real data center workloads and show that significant cost-savings are possible.
  • Chenye Wu, Subhonmesh Bose, Adam Wierman and Hamed Mohsenian-Rad
    Performance Evaluation, 2013. 70(10). "Best Paper on System Operations and Market Economics" award recipient.
    [show/hide abstract]
    A competitive deregulated electricity market with increasingly active market players is foreseen to be the future of the electricity industry. In such settings, market power assessment is a primary concern. In this paper, we propose a novel functional approach for measuring long term market power that unifies a variety of popular market power indices. Specifically, the new measure, termed transmission constrained network flow (TCNF), unifies three large classes of market power measures: residual supply based, network flow based, and minimal generation based. Further, TCNF provides valuable information about market power not captured by prior indices. We derive its analytic properties and test its efficacy on IEEE test systems.
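
The map/shuffle coupling studied in the MapReduce entry above can be illustrated with a toy discrete-time simulation. The model below (shuffle can only consume data its map phase has already produced, one unit-rate processor per phase, FIFO vs. a least-remaining-work policy) is a simplification assumed here for illustration, not the paper's model or its online algorithms.

```python
def simulate(jobs, pick, dt=0.01):
    """jobs: list of (arrival, map_work, shuffle_work). pick: chooses which
    eligible job each unit-rate processor serves. Returns mean response time."""
    state = [dict(a=a, m=m, s=s, m_done=0.0, s_done=0.0, finish=None)
             for a, m, s in jobs]
    t = 0.0
    while any(j["finish"] is None for j in state):
        active = [j for j in state if j["a"] <= t and j["finish"] is None]
        mappable = [j for j in active if j["m_done"] < j["m"]]
        if mappable:
            pick(mappable)["m_done"] += dt
        # shuffle may only work on data its map phase has released so far
        shufflable = [j for j in active
                      if j["s_done"] < j["s"] * min(1.0, j["m_done"] / j["m"])]
        if shufflable:
            pick(shufflable)["s_done"] += dt
        for j in active:
            if j["m_done"] >= j["m"] and j["s_done"] >= j["s"]:
                j["finish"] = t + dt
        t += dt
    return sum(j["finish"] - j["a"] for j in state) / len(state)

# three toy jobs: (arrival time, map work, shuffle work), all values assumed
jobs = [(0.0, 3.0, 3.0), (0.5, 1.0, 1.0), (1.0, 0.5, 0.5)]
fifo = lambda js: min(js, key=lambda j: j["a"])
least_work = lambda js: min(js, key=lambda j: (j["m"] - j["m_done"]) + (j["s"] - j["s_done"]))
print("FIFO mean response time       :", round(simulate(jobs, fifo), 2))
print("least-work-first response time:", round(simulate(jobs, least_work), 2))
```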

Congestion Control

  • Qiuyu Peng, Anwar Walid and Steven Low
    Preprint.
    [show/hide abstract]
    Multi-path TCP (MP-TCP) has the potential to greatly improve application performance by using multiple paths transparently. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties on the behavior of existing algorithms and motivate a new design that generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We illustrate our analysis and the behavior of the new algorithm using ns2 simulations.
  • Lingwen Gan, Anwar Walid and Steven Low
    Proceedings of ACM Sigmetrics, 2012. Sigmetrics held jointly with IFIP Performance.
    [show/hide abstract]
    Various link bandwidth adjustment mechanisms are being developed to save network energy. However, their interaction with congestion control can significantly reduce throughput, and is not well understood. We first put forward an easily implementable dynamic bandwidth adjustment (DBA) mechanism for links, and then study its interaction with congestion control. In DBA, each link updates its bandwidth according to an integral control law to match its average buffer size with a target buffer size. We prove that, in the presence of congestion control, DBA reduces link bandwidth without sacrificing throughput: it only turns off excess bandwidth. Preliminary ns2 simulations confirm this result.
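
The DBA entry above describes an integral control law that matches a link's average buffer size to a target. The following sketch is a minimal fluid-level illustration of that idea; the gain, target, smoothing constant, and workload below are assumed values, and the loop is not the paper's exact mechanism.

```python
# Fluid sketch of integral-control bandwidth adjustment: capacity is nudged up
# when the average queue exceeds the target and down when it falls below, so
# excess bandwidth is shed without starving the flow. All parameters assumed.
target_q = 20.0      # target average buffer size (packets)
gain = 0.005         # integral gain
capacity = 100.0     # initial link bandwidth (packets per ms)
queue, avg_q = 0.0, 0.0
alpha = 0.05         # smoothing for the average-queue estimate

def offered_load(t):
    # time-varying demand: 60 pkt/ms, with a burst to 90 pkt/ms in the middle
    return 90.0 if 2000 <= t < 4000 else 60.0

for t in range(6000):                      # 6000 one-ms steps
    arrivals = offered_load(t)
    served = min(queue + arrivals, capacity)
    queue = queue + arrivals - served
    avg_q = (1 - alpha) * avg_q + alpha * queue
    # integral control law: move capacity toward the queue target
    capacity = max(1.0, capacity + gain * (avg_q - target_q))
    if t % 1000 == 0:
        print(f"t={t:5d} ms  capacity={capacity:6.1f}  avg queue={avg_q:6.1f}")
```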

Control Theory

  • Online Convex Optimization Using Predictions
    Niangjun Chen, Anish Agarwal, Adam Wierman, Siddharth Barman and Lachlan L. H. Andrew
    Proceedings of ACM Sigmetrics, 2015.
    [show/hide abstract]
    Making use of predictions is a crucial, but under-explored, area of online algorithms. This paper studies a class of online optimization problems where we have external noisy predictions available. We propose a stochastic prediction error model that generalizes prior models in the learning and stochastic control communities, incorporates correlation among prediction errors, and captures the fact that predictions improve as time passes. We prove that achieving sublinear regret and constant competitive ratio for online algorithms requires the use of an unbounded prediction window in adversarial settings, but that under more realistic stochastic prediction error models it is possible to use Averaging Fixed Horizon Control (AFHC) to simultaneously achieve sublinear regret and constant competitive ratio using only a constant-sized prediction window. Furthermore, we show that typical performance of AFHC is tightly concentrated around its mean.
  • Zhenhua Liu, Minghong Lin, Adam Wierman, Steven Low and Lachlan L. H. Andrew
    IEEE Transactions on Networking, 2014. Extension of a paper that appeared in ACM Sigmetrics, 2011.
    [show/hide abstract]
    Energy expenditure has become a significant fraction of data center operating costs. Recently, `geographical load balancing' has been suggested as an approach for taking advantage of the geographical diversity of Internet-scale distributed systems in order to reduce energy expenditures by exploiting the electricity price differences across regions. However, the fact that such designs reduce energy costs does not imply that they reduce energy usage. In fact, such designs often increase energy usage.

    This paper explores whether the geographical diversity of Internet-scale systems can be used to provide environmental gains in addition to reducing data center costs. Specifically, we explore whether geographical load balancing can encourage usage of `green' energy from renewable sources and reduce usage of `brown' energy from fossil fuels. We make two contributions. First, we derive three algorithms, with varying degrees of distributed computation, for achieving optimal geographical load balancing. Second, using these algorithms, we show that if dynamic pricing of electricity is done in proportion to the fraction of the total energy that is brown at each time, then geographical load balancing provides significant reductions in brown energy usage. However, the benefits depend strongly on the degree to which systems accept dynamic energy pricing and the form of pricing used.
  • Minghong Lin, Adam Wierman, Lachlan L. H. Andrew and Eno Thereska
    IEEE Transactions on Networking, 2013. 21:1378-1391. Received the 2014 IEEE Communication Society William R. Bennett Prize. Extended version of a paper that appeared in IEEE Infocom, 2011.
    [show/hide abstract]
    Power consumption imposes a significant cost for data centers implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of predictably low load. This paper investigates how much can be saved by dynamically `right-sizing' the data center by turning off servers during such periods, and how to achieve that saving via an online algorithm. We prove that the optimal offline algorithm for dynamic right-sizing has a simple structure when viewed in reverse time, and this structure is exploited to develop a new `lazy' online algorithm, which is proven to be 3-competitive. We validate the algorithm using traces from two real data center workloads and show that significant cost-savings are possible.
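
Averaging Fixed Horizon Control (AFHC), used in the first entry above, averages the actions of several phase-shifted fixed-horizon controllers, each of which commits to a full plan over its prediction window. The sketch below implements that idea on an assumed toy objective (quadratic tracking cost plus a quadratic switching cost) with predictions whose noise grows with lookahead; the cost model and all parameters are illustrative assumptions, not the paper's.

```python
import numpy as np

def fhc_plan(x_prev, preds, beta):
    """Minimize sum_t (x_t - y_t)^2 + beta*(x_t - x_{t-1})^2 over the window,
    given the previous action x_prev and predicted targets preds."""
    w = len(preds)
    A = np.zeros((w, w))
    b = np.array(preds, dtype=float)
    for i in range(w):
        A[i, i] = 1 + 2 * beta if i < w - 1 else 1 + beta
        if i + 1 < w:
            A[i, i + 1] = A[i + 1, i] = -beta
    b[0] += beta * x_prev
    return np.linalg.solve(A, b)          # stationarity conditions of the quadratic

def afhc(T, w, beta, predict):
    """AFHC: run w phase-shifted fixed-horizon controllers, each committing to a
    full w-step plan, and average their actions at every time step."""
    plans = np.zeros((w, T))
    last = np.zeros(w)                    # each copy's most recently committed action
    for k in range(w):
        t = 0
        while t < T:
            h = k if (t == 0 and k > 0) else w   # copy k's first window is truncated
            h = min(h, T - t)
            plan = fhc_plan(last[k], predict(t, h), beta)
            plans[k, t:t + h] = plan
            last[k] = plan[-1]
            t += h
    return plans.mean(axis=0)

rng = np.random.default_rng(0)
T, w, beta = 200, 5, 2.0
y = 10 + 5 * np.sin(np.arange(T) / 10.0)          # trajectory to track (e.g., demand)

def predict(t, h):                                # noisier the further ahead we look
    return y[t:t + h] + rng.normal(0, 0.3, h) * np.arange(1, h + 1)

x = afhc(T, w, beta, predict)
cost = np.sum((x - y) ** 2 + beta * np.diff(np.concatenate(([0.0], x))) ** 2)
x_opt = fhc_plan(0.0, y, beta)                    # clairvoyant offline optimum
opt = np.sum((x_opt - y) ** 2 + beta * np.diff(np.concatenate(([0.0], x_opt))) ** 2)
print(f"AFHC cost: {cost:.1f}  offline optimal: {opt:.1f}  ratio: {cost / opt:.2f}")
```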

Data Centers

  • Online Convex Optimization Using Predictions
    Niangjun Chen, Anish Agarwal, Adam Wierman, Siddharth Barman and Lachlan L. H. Andrew
    Proceedings of ACM Sigmetrics, 2015.
    [show/hide abstract]
    Making use of predictions is a crucial, but under-explored, area of online algorithms. This paper studies a class of online optimization problems where we have external noisy predictions available. We propose a stochastic prediction error model that generalizes prior models in the learning and stochastic control communities, incorporates correlation among prediction errors, and captures the fact that predictions improve as time passes. We prove that achieving sublinear regret and constant competitive ratio for online algorithms requires the use of an unbounded prediction window in adversarial settings, but that under more realistic stochastic prediction error models it is possible to use Averaging Fixed Horizon Control (AFHC) to simultaneously achieve sublinear regret and constant competitive ratio using only a constant-sized prediction window. Furthermore, we show that typical performance of AFHC is tightly concentrated around its mean.
  • Characterizing the impact of the workload on the value of dynamic resizing in data centers
    Kai Wang, Minghong Lin, Florin Ciucu, Adam Wierman and Chuang Lin
    Performance Evaluation, 2015. This is an extension of a paper that appeared in IEEE Infocom, 2013.
    [show/hide abstract]
    Energy consumption imposes a significant cost for data centers; yet much of that energy is used to maintain excess service capacity during periods of predictably low load. As a result, there has recently been interest in developing designs that allow the service capacity to be dynamically resized to match the current workload. However, there is still much debate about the value of such approaches in real settings. In this paper, we show that the value of dynamic resizing is highly dependent on statistics of the workload process. In particular, both slow time-scale non-stationarities of the workload (e.g., the peak-to-mean ratio) and the fast time-scale stochasticity (e.g., the burstiness of arrivals) play key roles. To illustrate the impact of these factors, we combine optimization-based modeling of the slow time-scale with stochastic modeling of the fast time scale. Within this framework, we provide both analytic and numerical results characterizing when dynamic resizing does (and does not) provide benefits.
  • Jayakrishnan Nair, Vijay Subramanian and Adam Wierman
    Proceedings of IFIP Performance, 2014. Extended abstract.
    [show/hide abstract]
    Motivated by cloud services, we consider the interplay of network effects, congestion, and competition in ad-supported services. We study the strategic interactions between competing service providers and a user base, modeling congestion sensitivity and two forms of positive network effects: network effects that are either 'firm-specific' or 'industry-wide.' Our analysis reveals that users are generally no better off due to the competition in a marketplace with positive network effects. Further, our analysis highlights an important contrast between firm-specific and industry-wide network effects: Firms can coexist in a marketplace with industry-wide network effects, but near-monopolies tend to emerge in marketplaces with firm-specific network effects.
  • Ganesh Ananthanarayanan, Michael Chien-Chun Hung, Xiaoqi Ren, Ion Stoica, Adam Wierman and Minlan Yu
    Proceedings of USENIX NSDI, 2014.
    [show/hide abstract]
    In big data analytics, timely results, even if based on only part of the data, are often good enough. For this reason, approximation jobs, which have deadline or error bounds and require only a subset of their tasks to complete, are projected to dominate big data workloads. In this paper, we present GRASS, which carefully uses speculation to mitigate the impact of stragglers in approximation jobs. The design of GRASS is based on first principles analysis of the impact of speculative copies. GRASS delicately balances immediacy of improving the approximation goal with the long term implications of using extra resources for speculation. Evaluations with production workloads from Facebook and Microsoft Bing in an EC2 cluster of 200 nodes show that GRASS increases the accuracy of deadline-bound jobs by 47% and speeds up error-bound jobs by 38%. GRASS's design also speeds up exact computations, making it a unified solution for straggler mitigation.
  • Zhenhua Liu, Minghong Lin, Adam Wierman, Steven Low and Lachlan L. H. Andrew
    IEEE Transactions on Networking, 2014. Extension of a paper that appeared in ACM Sigmetrics, 2011.
    [show/hide abstract]
    Energy expenditure has become a significant fraction of data center operating costs. Recently, `geographical load balancing' has been suggested as an approach for taking advantage of the geographical diversity of Internet-scale distributed systems in order to reduce energy expenditures by exploiting the electricity price differences across regions. However, the fact that such designs reduce energy costs does not imply that they reduce energy usage. In fact, such designs often increase energy usage.

    This paper explores whether the geographical diversity of Internet-scale systems can be used to provide environmental gains in addition to reducing data center costs. Specifically, we explore whether geographical load balancing can encourage usage of `green' energy from renewable sources and reduce usage of `brown' energy from fossil fuels. We make two contributions. First, we derive three algorithms, with varying degrees of distributed computation, for achieving optimal geographical load balancing. Second, using these algorithms, we show that if dynamic pricing of electricity is done in proportion to the fraction of the total energy that is brown at each time, then geographical load balancing provides significant reductions in brown energy usage. However, the benefits depend strongly on the degree to which systems accept dynamic energy pricing and the form of pricing used.
  • Adam Wierman, Zhenhua Liu, Iris Liu and Hamed Mohsenian-Rad
    Proceedings of IEEE IGCC, 2014.
    [show/hide abstract]
    This paper surveys the opportunities and challenges in an emerging area of research that has the potential to significantly ease the incorporation of renewable energy into the grid as well as electric power peak-load shaving: data center demand response. Data center demand response sits at the intersection of two growing fields: energy efficient data centers and demand response in the smart grid. As such, the literature related to data center demand response is sprinkled across multiple areas and worked on by diverse groups. Our goal in this survey is to demonstrate the potential of the field while also summarizing the progress that has been made and the challenges that remain.
  • Optimal power procurement for data centers in day-ahead and real-time electricity markets
    Mahdi Ghamkhari, Hamed Mohsenian-Rad and Adam Wierman
    Proceedings of Infocom Workshop on Smart Data Pricing, 2014.
    [show/hide abstract]
    With the growing amount of power consumed by data centers, finding ways to cut electricity bills has become an important and challenging problem. In this paper, we seek to understand the cost reductions that data centers may achieve by exploiting the diversity in the price of electricity in the day-ahead and real-time electricity markets. Based on a stochastic optimization framework, we propose to jointly select the data centers' service rates and their demand bids to the day-ahead and real-time electricity markets. In our analysis, we take into account service-level agreements, risk management constraints, and also the statistical characteristics of the workload and the electricity prices. Using empirical electricity price and Internet workload data, our numerical studies show that directly participating in the day-ahead and real-time electricity markets can significantly help data centers to reduce their energy expenditure.
  • Zhenhua Liu, Iris Liu, Steven Low and Adam Wierman
    Proceedings of ACM Sigmetrics, 2014.
    [show/hide abstract]
    Demand response is a crucial tool for the incorporation of renewable energy into the grid. In this paper, we focus on a particularly promising industry for demand response: data centers. We use simulations to show that, not only are data centers large loads, but they can provide as much flexibility as large-scale storage (or possibly more), if given the proper incentives. However, due to the market power most data centers maintain, it is difficult to design programs that are efficient for data center demand response. To that end, we propose that prediction-based pricing is an appealing market design, and show that it outperforms more traditional supply-function bidding mechanisms in situations where market power is an issue. However, prediction-based pricing may be inefficient when predictions are not accurate, and so we provide analytic, worst-case bounds on the impact of prediction accuracy on the efficiency of prediction-based pricing. These bounds hold even when network constraints are considered, and highlight that prediction-based pricing is surprisingly robust to prediction error.
  • Minghong Lin, Jian Tan, Adam Wierman and Li Zhang
    Performance Evaluation, 2013. 70(10):720-735. This work also appeared in the Proceedings of IFIP Performance, 2013, where it received the "Best Student Paper" award. It was one of the ten most downloaded papers of Performance Evaluation during Fall and Winter 2013.
    [show/hide abstract]
    MapReduce is a scalable parallel computing framework for big data processing. It exhibits multiple processing phases, and thus an efficient job scheduling mechanism is crucial for ensuring efficient resource utilization. This paper studies the scheduling challenge that results from the overlapping of the "map" and "shuffle" phases in MapReduce. We propose a new, general model for this scheduling problem, and validate this model using cluster experiments. Further, we prove that scheduling to minimize average response time in this model is strongly NP-hard in the offline case and that no online algorithm can be constant-competitive. However, we provide two online algorithms that match the performance of the offline optimal when given a slightly faster service rate (i.e., in the resource augmentation framework). Finally, we validate the algorithms using a workload trace from a Google cluster and show that the algorithms are near optimal in practical settings.
  • Jonatha Anselmi, Danilo Ardagna, John C. S. Lui, Adam Wierman, Yunjian Xu and Zichao Yang
    Proceedings of NetEcon, 2013. NetEcon was held jointly with ACM WPIN, 2013.
    [show/hide abstract]
    This paper proposes a model to study the interaction of price competition and congestion in the cloud computing marketplace. Specifically, we propose a three-tier market model that captures a marketplace with users purchasing services from Software-as-a-Service (SaaS) providers, which in turn purchase computing resources from either Platform-as-a-Service (PaaS) providers or Infrastructure-as-a-Service (IaaS) providers. Within each level, we define and characterize competitive equilibria. Further, we use these characterizations to understand the relative profitability of SaaSs and PaaSs/IaaSs, and to understand the impact of price competition on the user-experienced performance, i.e., the `price of anarchy' of the cloud marketplace. Our results highlight that both of these depend fundamentally on the degree to which congestion results from shared or dedicated resources in the cloud.
  • Data center demand response: Avoiding the coincident peak via workload shifting and local generation
    Zhenhua Liu, Adam Wierman, Yuan Chen, Benjamin Razon and Niangjun Chen
    Proceedings of ACM Sigmetrics, 2013. Accepted as a poster. The full version of this work appeared in Performance Evaluation, 2013.
    [show/hide abstract]
    Demand response is a crucial aspect of the future smart grid. It has the potential to provide significant peak demand reduction and to ease the incorporation of renewable energy into the grid. Data centers' participation in demand response is becoming increasingly important given their high and increasing energy consumption and their flexibility in demand management compared to conventional industrial facilities. In this paper, we study two demand response schemes to reduce a data center's peak loads and energy expenditure: workload shifting and the use of local power generation. We conduct a detailed characterization study of coincident peak data over two decades from Fort Collins Utilities, Colorado and then develop two optimization-based algorithms by combining workload scheduling and local power generation to avoid the coincident peak and reduce the energy expenditure. The first algorithm optimizes the expected cost and the second one provides the optimal worst-case guarantee. We evaluate these algorithms via trace-based simulations. The results show that using workload shifting in combination with local generation can provide significant cost savings compared to either alone.
  • Minghong Lin, Adam Wierman, Lachlan L. H. Andrew and Eno Thereska
    IEEE Transactions on Networking, 2013. 21:1378-1391. Received the 2014 IEEE Communication Society William R. Bennett Prize. Extended version of a paper that appeared in IEEE Infocom, 2011.
    [show/hide abstract]
    Power consumption imposes a significant cost for data centers implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of predictably low load. This paper investigates how much can be saved by dynamically `right-sizing' the data center by turning off servers during such periods, and how to achieve that saving via an online algorithm. We prove that the optimal offline algorithm for dynamic right-sizing has a simple structure when viewed in reverse time, and this structure is exploited to develop a new `lazy' online algorithm, which is proven to be 3-competitive. We validate the algorithm using traces from two real data center workloads and show that significant cost-savings are possible.
  • Zhenhua Liu, Yuan Chen, Cullen Bash, Adam Wierman, Daniel Gmach, Zhikui Wang, Manish Marwah and Chris Hyser
    Proceedings of ACM Sigmetrics, 2012. Sigmetrics held jointly with IFIP Performance.
    [show/hide abstract]
    The demand for data center computing has increased significantly in recent years, resulting in huge energy consumption. Data centers typically comprise three main subsystems: IT equipment provides services to customers; power infrastructure supports the IT and cooling equipment; and the cooling infrastructure removes the generated heat. This work presents a novel approach to model the energy flows in a data center and optimize its holistic operation. Traditionally, supply-side constraints such as energy or cooling availability were largely treated independently from IT workload management. This work reduces cost and environmental impact using a holistic approach that integrates energy supply, e.g., renewable supply and dynamic pricing, and cooling supply, e.g., chiller and outside air cooling, with IT workload planning to improve the overall attainability of data center operations. Specifically, we predict renewable energy as well as IT demand and design an IT workload management plan that schedules IT workload and allocates IT resources within a data center according to time varying power supply and cooling efficiency. We have implemented and evaluated our approach using traces from real data centers and production systems. The results demonstrate that our approach can reduce the recurring power costs and the use of non-renewable energy by as much as 60% compared to existing, non-integrated techniques, while still meeting operational goals and Service Level Agreements.
  • Kai Wang, Minghong Lin, Florin Ciucu, Adam Wierman and Chuang Lin
    Proceedings of ACM Sigmetrics, 2012. (Accepted as a poster.) Sigmetrics held jointly with IFIP Performance. An extended version of this work appeared in IEEE Infocom 2013.
    [show/hide abstract]
    Energy consumption imposes a significant cost for data centers; yet much of that energy is used to maintain excess service capacity during periods of predictably low load. As a result, there has recently been interest in developing designs that allow the service capacity to be dynamically resized to match the current workload. However, there is still much debate about the value of such approaches in real settings. In this paper, we show that the value of dynamic resizing is highly dependent on statistics of the workload process. In particular, both slow time-scale non-stationarities of the workload (e.g., the peak-to-mean ratio) and the fast time-scale stochasticity (e.g., the burstiness of arrivals) play key roles. To illustrate the impact of these factors, we combine optimization-based modeling of the slow time-scale with stochastic modeling of the fast time scale. Within this framework, we provide both analytic and numerical results characterizing when dynamic resizing does (and does not) provide benefits.
  • Lachlan L. H. Andrew, Minghong Lin, Zhenhua Liu and Adam Wierman
    Proceedings of IEEE COIN, 2012. Invited paper.
    [show/hide abstract]
    Data center power consumption can be reduced by switching off servers during low load. However, excess switching is wasteful. This paper reviews online algorithms for optimizing this tradeoff, including the benefits of shifting load between geographically distant data centers. These algorithms can also adjust a link's number of parallel lightpaths.
  • Minghong Lin, Zhenhua Liu, Adam Wierman and Lachlan L. H. Andrew
    Proceedings of IEEE IGCC, 2012. "Best Paper" award recipient.
    [show/hide abstract]
    A common approach for dynamic capacity provisioning is 'receding horizon control (RHC)', which computes the provisioning for the current time by optimizing over a given window of predictions about the future arrivals. In this work, we provide new results characterizing the performance of RHC. We prove that RHC performs well when servers are homogeneous, i.e., it has performance that quickly tends toward optimality as the prediction window increases. However, we also prove that RHC performs badly when servers are heterogeneous, regardless of the length of the prediction window. This is problematic given that in practice data centers are heterogeneous. To address this issue, we introduce a variant of RHC that performs well in the heterogeneous setting. In fact, we prove that, in both the homogeneous and heterogeneous settings, it matches the competitive ratio that RHC achieves in the homogeneous setting.
  • Minghong Lin, Adam Wierman, Lachlan L. H. Andrew and Eno Thereska
    Proceedings of IEEE Infocom, 2011. "Best Paper" award recipient.
    [show/hide abstract]
    Power consumption imposes a significant cost for data centers implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of predictably low load. This paper investigates how much can be saved by dynamically `right-sizing' the data center by turning off servers during such periods, and how to achieve that saving via an online algorithm. We prove that the optimal offline algorithm for dynamic right-sizing has a simple structure when viewed in reverse time, and this structure is exploited to develop a new `lazy' online algorithm, which is proven to be 3-competitive. We validate the algorithm using traces from two real data center workloads and show that significant cost-savings are possible.
  • Zhenhua Liu, Minghong Lin, Adam Wierman, Steven Low and Lachlan L. H. Andrew
    Proceedings of ACM Greenmetrics, 2011. "Best Student Paper" award recipient.
    [show/hide abstract]
    Given the significant energy consumption of data centers, improving their energy efficiency is an important social problem. However, energy efficiency is necessary but not sufficient for sustainability, which demands reduced usage of energy from fossil fuels. This paper investigates the feasibility of powering internet-scale systems using (nearly) entirely renewable energy. We perform a trace-based study to evaluate three issues related to achieving this goal: the impact of geographical load balancing, the role of storage, and the optimal mix of renewables. Our results highlight that geographical load balancing can significantly reduce the required capacity of renewable energy by using the energy more efficiently with "follow the renewables" routing. Further, our results show that small-scale storage can be useful, especially in combination with geographical load balancing, and that an optimal mix of renewables includes significantly more wind than photovoltaic solar.
  • Minghong Lin, Adam Wierman, Lachlan L. H. Andrew and Eno Thereska
    Proceedings of Allerton, 2011.
    [show/hide abstract]
    Power consumption imposes a significant cost for implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of low load. In this work, we study how to avoid such waste via online dynamic capacity provisioning. We overview recent results showing that the optimal offline algorithm for dynamic capacity provisioning has a simple structure when viewed in reverse time, and this structure can be exploited to develop a new `lazy' online algorithm which is 3-competitive. Additionally, we analyze the performance of the more traditional approach of receding horizon control and introduce a new variant with a significantly improved worst-case performance guarantee.
  • Jayakrishnan Nair, Adam Wierman and Bert Zwart
    Proceedings of IFIP Performance, 2011.
    [show/hide abstract]
    Online services today are characterized by a highly congestion-sensitive user base that also experiences strong positive network effects. A majority of these services are supported by advertising and are offered for free to the end user. We study the problem of optimal capacity provisioning for a profit maximizing firm operating such an online service in the asymptotic regime of a large market size. We show that network effects heavily influence the optimal capacity provisioning strategy, as well as the profit of the firm. In particular, strong positive network effects allow the firm to operate the service with fewer servers, which translates to increased profit.
  • Jonatha Anselmi, Urtzi Ayesta and Adam Wierman
    Performance Evaluation, 2011. 68(11):986-1001. Also appeared in the Proceedings of IFIP Performance, 2011.
    [show/hide abstract]
    We study a nonatomic congestion game with N parallel links, with each link under the control of a profit-maximizing provider. Within this 'load balancing game', each provider has the freedom to set a price, or toll, for access to the link and seeks to maximize its own profit. Given fixed prices, a Wardrop equilibrium among users is assumed, under which users all choose paths of minimal and identical effective cost. Within this model we have oligopolistic price competition which, in equilibrium, gives rise to situations where neither providers nor users have incentives to adjust their prices or routes, respectively. In this context, we provide new results about the existence and efficiency of oligopolistic equilibria. Our main theorem shows that, when the number of providers is small, oligopolistic equilibria can be extremely inefficient; however, as the number of providers N grows, the oligopolistic equilibria become increasingly efficient (at a rate of 1/N) and, in the limit, the oligopolistic equilibrium matches the socially optimal allocation.
  • Zhenhua Liu, Minghong Lin, Adam Wierman, Steven Low and Lachlan L. H. Andrew
    Proceedings of ACM Sigmetrics, 2011.
    [show/hide abstract]
    Energy expenditure has become a significant fraction of data center operating costs. Recently, `geographical load balancing' has been suggested as an approach for taking advantage of the geographical diversity of Internet-scale distributed systems in order to reduce energy expenditures by exploiting the electricity price differences across regions. However, the fact that such designs reduce energy costs does not imply that they reduce energy usage. In fact, such designs often increase energy usage.

    This paper explores whether the geographical diversity of Internet-scale systems can be used to provide environmental gains in addition to reducing data center costs. Specifically, we explore whether geographical load balancing can encourage usage of `green' energy from renewable sources and reduce usage of `brown' energy from fossil fuels. We make two contributions. First, we derive three algorithms, with varying degrees of distributed computation, for achieving optimal geographical load balancing. Second, using these algorithms, we show that if dynamic pricing of electricity is done in proportion to the fraction of the total energy that is brown at each time, then geographical load balancing provides significant reductions in brown energy usage. However, the benefits depend strongly on the degree to which systems accept dynamic energy pricing and the form of pricing used.
  • Ho-Lin Chen, Jason Marden and Adam Wierman
    Proceedings of IEEE Infocom, 2009.
    [show/hide abstract]
    Load balancing is a common approach to task assignment in distributed architectures. In this paper, we show that the degree of inefficiency in load balancing designs is highly dependent on the scheduling discipline used at each of the back-end servers. Traditionally, the back-end scheduler can be modeled as Processor Sharing (PS), in which case the degree of inefficiency grows linearly with the number of servers. However, if the back-end scheduler is changed to Shortest Remaining Processing Time (SRPT), the degree of inefficiency can be independent of the number of servers, instead depending only on the heterogeneity of the speeds of the servers. Further, switching the back-end scheduler to SRPT can provide significant improvements in the overall mean response time of the system as long as the heterogeneity of the server speeds is small.
  • Ho-Lin Chen, Jason Marden and Adam Wierman
    Proceedings of ACM MAMA, 2008.
    [show/hide abstract]
    Load balancing is a common approach to distributed task assignment. In this paper we study the degree of inefficiency in load balancing designs. We show that the degree of inefficiency is highly dependent on the back-end scheduler.
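
Several entries above (the Infocom 2011 paper and its IEEE Transactions on Networking extension) revolve around the right-sizing tradeoff between the energy saved by turning servers off and the cost of switching them back on. The sketch below is a toy cost comparison between static peak provisioning and a simple 'lazy' scaling heuristic on a synthetic diurnal workload; it illustrates the tradeoff only, not the papers' LCP algorithm or its 3-competitive guarantee, and all cost values are assumptions.

```python
import math

ENERGY = 1.0        # energy cost per server per slot (assumed)
SWITCH = 6.0        # cost of turning one server on (assumed)
CAP = 10.0          # requests one server handles per slot (assumed)
SLOTS = 288         # one day of five-minute slots

def demand(t):      # synthetic diurnal workload
    return max(0.0, 40 + 35 * math.sin(2 * math.pi * t / SLOTS)
                      + 10 * math.sin(2 * math.pi * t / 36))

def cost(traj):
    """Operating cost + switching cost of a provisioning trajectory, with a
    heavy penalty for any under-provisioned demand."""
    total, prev = 0.0, 0
    for t, x in enumerate(traj):
        total += ENERGY * x + 50.0 * max(0.0, demand(t) / CAP - x)
        total += SWITCH * max(0, x - prev)
        prev = x
    return total

# static provisioning: keep enough servers on for the daily peak at all times
static = [max(math.ceil(demand(t) / CAP) for t in range(SLOTS))] * SLOTS

# "lazy" scaling: turn servers on as soon as they are needed, but only turn one
# off after its idle energy would have paid for switching it back on
lazy, x, idle = [], 0, 0.0
for t in range(SLOTS):
    need = math.ceil(demand(t) / CAP)
    if need > x:
        x, idle = need, 0.0                # scale up immediately
    elif need < x:
        idle += ENERGY                     # one excess server burning energy this slot
        if idle >= SWITCH:                 # waste has reached the cost of a restart
            x, idle = x - 1, 0.0
    lazy.append(x)

print("static (peak) provisioning cost:", round(cost(static), 1))
print("lazy right-sizing cost         :", round(cost(lazy), 1))
```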

Demand Response

  • Online Convex Optimization Using Predictions
    Niangjun Chen, Anish Agarwal, Adam Wierman, Siddharth Barman and Lachlan L. H. Andrew
    Proceedings of ACM Sigmetrics, 2015.
    [show/hide abstract]
    Making use of predictions is a crucial, but under-explored, area of online algorithms. This paper studies a class of online optimization problems where we have external noisy predictions available. We propose a stochastic prediction error model that generalizes prior models in the learning and stochastic control communities, incorporates correlation among prediction errors, and captures the fact that predictions improve as time passes. We prove that achieving sublinear regret and constant competitive ratio for online algorithms requires the use of an unbounded prediction window in adversarial settings, but that under more realistic stochastic prediction error models it is possible to use Averaging Fixed Horizon Control (AFHC) to simultaneously achieve sublinear regret and constant competitive ratio using only a constant-sized prediction window. Furthermore, we show that typical performance of AFHC is tightly concentrated around its mean.
  • Adam Wierman, Zhenhua Liu, Iris Liu and Hamed Mohsenian-Rad
    Proceedings of IEEE IGCC, 2014.
    [show/hide abstract]
    This paper surveys the opportunities and challenges in an emerging area of research that has the potential to significantly ease the incorporation of renewable energy into the grid as well as electric power peak-load shaving: data center demand response. Data center demand response sits at the intersection of two growing fields: energy efficient data centers and demand response in the smart grid. As such, the literature related to data center demand response is sprinkled across multiple areas and worked on by diverse groups. Our goal in this survey is to demonstrate the potential of the field while also summarizing the progress that has been made and the challenges that remain.
  • Optimal power procurement for data centers in day-ahead and real-time electricity markets
    Mahdi Ghamkhari, Hamed Mohsenian-Rad and Adam Wierman
    Proceedings of Infocom Workshop on Smart Data Pricing, 2014.
    [show/hide abstract]
    With the growing amount of power consumed by data centers, finding ways to cut electricity bills has become an important and challenging problem. In this paper, we seek to understand the cost reductions that data centers may achieve by exploiting the diversity in the price of electricity in the day-ahead and real-time electricity markets. Based on a stochastic optimization framework, we propose to jointly select the data centers' service rates and their demand bids to the day-ahead and real-time electricity markets. In our analysis, we take into account service-level agreements, risk management constraints, and also the statistical characteristics of the workload and the electricity prices. Using empirical electricity price and Internet workload data, our numerical studies show that directly participating in the day-ahead and real-time electricity markets can significantly help data centers to reduce their energy expenditure.
  • Zhenhua Liu, Iris Liu, Steven Low and Adam Wierman
    Proceedings of ACM Sigmetrics, 2014.
    [show/hide abstract]
    Demand response is a crucial tool for the incorporation of renewable energy into the grid. In this paper, we focus on a particularly promising industry for demand response: data centers. We use simulations to show that, not only are data centers large loads, but they can provide as much flexibility as large-scale storage (or possibly more), if given the proper incentives. However, due to the market power most data centers maintain, it is difficult to design programs that are efficient for data center demand response. To that end, we propose that prediction-based pricing is an appealing market design, and show that it outperforms more traditional supply-function bidding mechanisms in situations where market power is an issue. However, prediction-based pricing may be inefficient when predictions are not accurate, and so we provide analytic, worst-case bounds on the impact of prediction accuracy on the efficiency of prediction-based pricing. These bounds hold even when network constraints are considered, and highlight that prediction-based pricing is surprisingly robust to prediction error.
  • Unifying market power measure for deregulated transmission-constrained electricity markets
    Subhonmesh Bose, Chenye Wu, Yunjian Xu, Adam Wierman and Hamed Mohsenian-Rad
    IEEE Transactions on Power Systems, 2014. To appear.
    [show/hide abstract]
    Market power assessment is a prime concern when designing a deregulated electricity market. In this paper, we propose a new functional market power measure, termed transmission constrained network flow (TCNF), that takes into account an AC model of the network. The measure unifies three large classes of long-term transmission constrained market power indices in the literature: residual supply based, network flow based, and minimal generation based. Furthermore, it is built upon recent advances in semidefinite relaxations of AC power flow equations to model the underlying power network. Previously, market power measures that took the network into account did so via DC approximations of power flow models. Our results highlight that using the more accurate AC model can yield fundamentally different conclusions both about whether market power exists and about which generators can exploit market power.
  • Data center demand response: Avoiding the coincident peak via workload shifting and local generation
    Zhenhua Liu, Adam Wierman, Yuan Chen, Benjamin Razon and Niangjun Chen
    Proceedings of ACM Sigmetrics, 2013. Accepted as a poster. The full version of this work appeared in Performance Evaluation, 2013.
    [show/hide abstract]
    Demand response is a crucial aspect of the future smart grid. It has the potential to provide significant peak demand reduction and to ease the incorporation of renewable energy into the grid. Data centers' participation in demand response is becoming increasingly important given their high and increasing energy consumption and their flexibility in demand management compared to conventional industrial facilities. In this paper, we study two demand response schemes to reduce a data center's peak loads and energy expenditure: workload shifting and the use of local power generation. We conduct a detailed characterization study of coincident peak data over two decades from Fort Collins Utilities, Colorado and then develop two optimization-based algorithms by combining workload scheduling and local power generation to avoid the coincident peak and reduce the energy expenditure. The first algorithm optimizes the expected cost and the second one provides the optimal worst-case guarantee. We evaluate these algorithms via trace-based simulations. The results show that using workload shifting in combination with local generation can provide significant cost savings compared to either alone.
  • Desmond Cai and Adam Wierman
    Proceedings of IEEE CDC, 2013.
    [show/hide abstract]
    The growth of renewable resources will introduce significant variability and uncertainty into the grid. It is likely that "peaker" plants will be a crucial dispatchable resource for compensating for the variations in renewable supply. Thus, it is important to understand the strategic incentives of peaker plants and their potential for exploiting market power due to having responsive supply. To this end, we study an oligopolistic two-settlement market comprising two types of generation (baseloads and peakers) where there is perfect foresight. We characterize symmetric equilibria in this context via closed-form expressions. However, we also show that, when the system is capacity-constrained, there may not exist equilibria in which baseloads and peakers play symmetric strategies. This happens because of opportunities for both types of generation to exploit market power to increase prices.
  • Chenye Wu, Subhonmesh Bose, Adam Wierman and Hamed Mohsenian-Rad
    Performance Evaluation, 2013. 70(10): ``Best Paper on System Operations and Market Economics'' award recipient.
    [show/hide abstract]
    A competitive deregulated electricity market with increasingly active market players is foreseen to be the future of the electricity industry. In such settings, market power assessment is a primary concern. In this paper, we propose a novel functional approach for measuring long term market power that unifies a variety of popular market power indices. Specifically, the new measure, termed transmission constrained network flow (TCNF), unifies three large classes of market power measures: residual supply based, network flow based, and minimal generation based. Further, TCNF provides valuable information about market power not captured by prior indices. We derive its analytic properties and test its efficacy on IEEE test systems.
  • The need for new measures to assess market power in deregulated electricity markets
    Subhonmesh Bose, Chenye Wu, Adam Wierman and Hamed Mohsenian-Rad
    IEEE Smart Grid Newsletter, 2013.
  • Integrating distributed energy resource pricing and control
    Paul de Martini, Adam Wierman, Sean Meyn and Eilyan Bitar
    Proceedings of CIGRE USNC Grid of the Future Symposium, 2012.
    [show/hide abstract]
    As the market adoption of distributed energy resources (DER) reaches regional scale, it will create significant challenges in the management of the distribution system related to existing protection and control systems, and this is likely to lead to power quality and reliability issues. In this paper, we describe a framework for the development of a class of pricing mechanisms that both induce deep customer participation and enable efficient management of their end-use devices to provide both distribution- and transmission-side support. The basic challenge resides in reliably extracting the desired response from customers on short time-scales. Thus, new pricing mechanisms are needed to create effective closed loop systems that are tightly coupled with distribution control systems to ensure reliability and power quality.
  • Na Li, Lijun Chen and Steven Low
    Proceedings of IEEE Power & Energy Society General Meeting, 2011.
    [show/hide abstract]
    Demand side management will be a key component of the future smart grid that can help reduce peak load and adapt elastic demand to fluctuating generation. In this paper, we consider households that operate different appliances including PHEVs and batteries and propose a demand response approach based on utility maximization. Each appliance provides a certain benefit depending on the pattern or volume of power it consumes. Each household wishes to optimally schedule its power consumption so as to maximize its individual net benefit subject to various consumption and power flow constraints. We show that there exist time-varying prices that can align individual optimality with social optimality, i.e., under such prices, when the households selfishly optimize their own benefits, they automatically also maximize the social welfare. The utility company can thus use dynamic pricing to coordinate demand responses to the benefit of the overall system. We propose a distributed algorithm for the utility company and the customers to jointly compute these optimal prices and demand schedules. Finally, we present simulation results that illustrate several interesting properties of the proposed scheme.
  • Libin Jiang and Steven Low
    Proceedings of Allerton, 2011.
    [show/hide abstract]
    We consider a set of users served by a single load-serving entity (LSE) in the electricity grid. The LSE procures capacity a day ahead. When random renewable energy is realized at delivery time, it actively manages user load through real-time demand response and purchases balancing power on the spot market to meet the aggregate demand. Hence, to maximize the social welfare, decisions must be coordinated over two timescales (a day ahead and in real time), in the presence of supply uncertainty, and computed jointly by the LSE and the users since the necessary information is distributed among them. We formulate the problem as a dynamic program. We propose a distributed heuristic algorithm and prove its optimality when the welfare function is quadratic and the LSE’s decisions are strictly positive. Otherwise, we bound the gap between the welfare achieved by the heuristic algorithm and the maximum in certain cases. Simulation results suggest that the performance gap is small. As we scale up the size of a renewable generation plant, both its mean production and its variance will likely increase. We characterize the impact of the mean and variance of renewable energy on the maximum welfare. This paper is a continuation of [2], focusing on time-correlated demand.
  • Libin Jiang and Steven Low
    Proceedings of IEEE CDC, 2011.
    [show/hide abstract]
    We propose a simple model that integrates two-period electricity markets, uncertainty in renewable generation, and real-time dynamic demand response. A load-serving entity decides its day-ahead procurement to optimize expected social welfare a day before energy delivery. At delivery time when renewable generation is realized, it sets prices to manage demand and purchase additional power on the real-time market, if necessary, to balance supply and demand. We derive the optimal day-ahead decision, propose a real-time demand response algorithm, and study the effect of volume and variability of renewable generation on these optimal decisions and on social welfare.
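
Several entries above study prices that align individual household decisions with social optimality. The sketch below is a standard dual-decomposition illustration of that coordination: a single price is adjusted by subgradient ascent until selfish best responses meet the available supply. The logarithmic utilities, supply level, and step size are assumptions made for illustration; this is not the algorithm of any one paper above.

```python
import numpy as np

# Each household i has concave utility u_i(x) = a_i * log(1 + x) for consuming x;
# aggregate consumption must not exceed the available supply C. The coordinator
# adjusts one price by subgradient ascent on the dual; households respond selfishly.
a = np.array([4.0, 6.0, 8.0, 10.0])   # households' utility coefficients (assumed)
C = 6.0                               # available supply for this period (assumed)
price, step = 1.0, 0.2

def best_response(a_i, p):
    # argmax_x  a_i*log(1+x) - p*x  over x >= 0  gives  x = max(0, a_i/p - 1)
    return max(0.0, a_i / p - 1.0)

for _ in range(200):
    x = np.array([best_response(ai, price) for ai in a])
    price = max(1e-6, price + step * (x.sum() - C))   # raise price if demand exceeds supply

print("price:", round(price, 3), " demands:", np.round(x, 3), " total:", round(x.sum(), 3))
```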

Dynamic Capacity Provisioning

  • Lingwen Gan, Anwar Walid and Steven Low
    Proceedings of ACM Sigmetrics, 2012. Sigmetrics held jointly with IFIP Performance.
    [show/hide abstract]
    Various link bandwidth adjustment mechanisms are being developed to save network energy. However, their interaction with congestion control can significantly reduce throughput, and is not well understood. We first put forward an easily implementable dynamic bandwidth adjustment (DBA) mechanism for links, and then study its interaction with congestion control. In DBA, each link updates its bandwidth according to an integral control law to match its average buffer size with a target buffer size. We prove that, in the presence of congestion control, DBA reduces link bandwidth without sacrificing throughput: it only turns off excess bandwidth. Preliminary ns2 simulations confirm this result.
  • Zhenhua Liu, Yuan Chen, Cullen Bash, Adam Wierman, Daniel Gmach, Zhikui Wang, Manish Marwah and Chris Hyser
    Proceedings of ACM Sigmetrics, 2012. Sigmetrics held jointly with IFIP Performance.
    [show/hide abstract]
    The demand for data center computing has increased significantly in recent years, resulting in huge energy consumption. Data centers typically comprise three main subsystems: IT equipment provides services to customers; power infrastructure supports the IT and cooling equipment; and the cooling infrastructure removes the generated heat. This work presents a novel approach to model the energy flows in a data center and optimize its holistic operation. Traditionally, supply-side constraints such as energy or cooling availability were largely treated independently from IT workload management. This work reduces cost and environmental impact using a holistic approach that integrates energy supply, e.g., renewable supply and dynamic pricing, and cooling supply, e.g., chiller and outside air cooling, with IT workload planning to improve the overall attainability of data center operations. Specifically, we predict renewable energy as well as IT demand and design an IT workload management plan that schedules IT workload and allocates IT resources within a data center according to time varying power supply and cooling efficiency. We have implemented and evaluated our approach using traces from real data centers and production systems. The results demonstrate that our approach can reduce the recurring power costs and the use of non-renewable energy by as much as 60% compared to existing, non-integrated techniques, while still meeting operational goals and Service Level Agreements.
  • Lachlan L. H. Andrew, Minghong Lin, Zhenhua Liu and Adam Wierman
    Proceedings of IEEE COIN, 2012. Invited paper.
    [show/hide abstract]
    Data center power consumption can be reduced by switching off servers during low load. However, excess switching is wasteful. This paper reviews online algorithms for optimizing this tradeoff, including the benefits of shifting load between geographically distant data centers. These algorithms can also adjust a link's number of parallel lightpaths.
  • Minghong Lin, Adam Wierman, Lachlan L. H. Andrew and Eno Thereska
    Proceedings of IEEE Infocom, 2011. "Best Paper" award recipient.
    [show/hide abstract]
    Power consumption imposes a significant cost for data centers implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of predictably low load. This paper investigates how much can be saved by dynamically `right-sizing' the data center by turning off servers during such periods, and how to achieve that saving via an online algorithm. We prove that the optimal offline algorithm for dynamic right-sizing has a simple structure when viewed in reverse time, and this structure is exploited to develop a new `lazy' online algorithm, which is proven to be 3-competitive. We validate the algorithm using traces from two real data center workloads and show that significant cost-savings are possible.
  • Minghong Lin, Adam Wierman, Lachlan L. H. Andrew and Eno Thereska
    Proceedings of Allerton, 2011.
    [show/hide abstract]
    Power consumption imposes a significant cost for implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of low load. In this work, we study how to avoid such waste via online dynamic capacity provisioning. We overview recent results showing that the optimal offline algorithm for dynamic capacity provisioning has a simple structure when viewed in reverse time, and this structure can be exploited to develop a new `lazy' online algorithm which is 3-competitive. Additionally, we analyze the performance of the more traditional approach of receding horizon control and introduce a new variant with a significantly improved worst-case performance guarantee.
  • Jayakrishnan Nair, Adam Wierman and Bert Zwart
    Proceedings of IFIP Performance, 2011.
    [show/hide abstract]
    Online services today are characterized by a highly congestion-sensitive user base that also experiences strong positive network effects. A majority of these services are supported by advertising and are offered for free to the end user. We study the problem of optimal capacity provisioning for a profit-maximizing firm operating such an online service in the asymptotic regime of a large market size. We show that network effects heavily influence the optimal capacity provisioning strategy, as well as the profit of the firm. In particular, strong positive network effects allow the firm to operate the service with fewer servers, which translates to increased profit.
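  As a simplified illustration of the receding horizon control idea discussed in the Allerton 2011 entry above, the sketch below provisions integer server counts against a short window of demand predictions (assumed perfect), implementing only the first decision of each window. The per-server cost, switching cost, demand trace, and brute-force window solve are illustrative stand-ins for the richer convex models analyzed in these papers.

      # Minimal receding horizon control (RHC) sketch for dynamic capacity provisioning.
      # Assumptions (all hypothetical): integer server counts, unit demand per server,
      # perfect predictions inside the window.
      from itertools import product

      def window_cost(plan, x_prev, server_cost, switch_cost):
          """Operating cost plus switching cost of a candidate plan."""
          cost, prev = 0.0, x_prev
          for x in plan:
              cost += server_cost * x + switch_cost * max(0, x - prev)
              prev = x
          return cost

      def rhc(demand, window, max_servers, server_cost=1.0, switch_cost=6.0):
          """At each slot, optimize over the prediction window; keep only the first decision."""
          x_prev, schedule = 0, []
          for t in range(len(demand)):
              horizon = demand[t:t + window]
              best = None
              for plan in product(range(max_servers + 1), repeat=len(horizon)):
                  if all(x >= d for x, d in zip(plan, horizon)):  # capacity must cover demand
                      c = window_cost(plan, x_prev, server_cost, switch_cost)
                      if best is None or c < best[0]:
                          best = (c, plan[0])
              x_prev = best[1]
              schedule.append(x_prev)
          return schedule

      demand = [2, 3, 5, 5, 2, 1, 1, 4, 6, 3]   # hypothetical per-slot load
      print(rhc(demand, window=3, max_servers=6))

  Shrinking the switching cost makes the schedule track demand exactly, while raising it makes the schedule hold capacity across dips; that is precisely the tradeoff the competitive analyses above quantify.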

Electricity Markets

  • Adam Wierman, Zhenhua Liu, Iris Liu and Hamed Mohsenian-Rad
    Proceedings of IEEE IGCC, 2014.
    [show/hide abstract]
    This paper surveys the opportunities and challenges in an emerging area of research that has the potential to significantly ease the incorporation of renewable energy into the grid as well as electric power peak-load shaving: data center demand response. Data center demand response sits at the intersection of two growing fields: energy efficient data centers and demand response in the smart grid. As such, the literature related to data center demand response is sprinkled across multiple areas and worked on by diverse groups. Our goal in this survey is to demonstrate the potential of the field while also summarizing the progress that has been made and the challenges that remain.
  • Subhonmesh Bose, Desmond Cai, Steven Low and Adam Wierman
    Proceedings of IEEE CDC, 2014. (A simplified single-node Cournot sketch appears at the end of this section.)
    [show/hide abstract]
    We study the role of a market maker (or system operator) in a transmission constrained electricity market. We model the market as a one-shot networked Cournot competition where generators supply quantity bids and load serving entities provide downward sloping inverse demand functions. This mimics the operation of a spot market in a deregulated market structure. In this paper, we focus on possible mechanisms employed by the market maker to balance demand and supply. In particular, we consider three candidate objective functions that the market maker optimizes -- social welfare, residual social welfare, and consumer surplus. We characterize the existence of Generalized Nash Equilibrium (GNE) in this setting and demonstrate that market outcomes at equilibrium can be very different under the candidate objective functions.
  • Optimal power procurement for data centers in day-ahead and real-time electricity markets
    Mahdi Ghamkhari, Hamed Mohsenian-Rad and Adam Wierman
    Proceedings of Infocom Workshop on Smart Data Pricing, 2014.
    [show/hide abstract]
    With the growing amount of power consumed by data centers, finding ways to cut electricity bills has become an important and challenging problem. In this paper, we seek to understand the cost reductions that data centers may achieve by exploiting the diversity in the price of electricity in the day-ahead and real-time electricity markets. Based on a stochastic optimization framework, we propose to jointly select the data centers' service rates and their demand bids to the day-ahead and real-time electricity markets. In our analysis, we take into account service-level agreements, risk management constraints, and also the statistical characteristics of the workload and the electricity prices. Using empirical electricity price and Internet workload data, our numerical studies show that directly participating in the day-ahead and real-time electricity markets can significantly help data centers reduce their energy expenditure.
  • Zhenhua Liu, Iris Liu, Steven Low and Adam Wierman
    Proceedings of ACM Sigmetrics, 2014.
    [show/hide abstract]
    Demand response is a crucial tool for the incorporation of renewable energy into the grid. In this paper, we focus on a particularly promising industry for demand response: data centers. We use simulations to show that, not only are data centers large loads, but they can provide as much flexibility as (or possibly more than) large-scale storage, if given the proper incentives. However, due to the market power most data centers maintain, it is difficult to design programs that are efficient for data center demand response. To that end, we propose that prediction-based pricing is an appealing market design, and show that it outperforms more traditional supply-function bidding mechanisms in situations where market power is an issue. However, prediction-based pricing may be inefficient when predictions are not accurate, and so we provide analytic, worst-case bounds on the impact of prediction accuracy on the efficiency of prediction-based pricing. These bounds hold even when network constraints are considered, and highlight that prediction-based pricing is surprisingly robust to prediction error.
  • Jayakrishnan Nair, Sachin Adlakha and Adam Wierman
    Proceedings of ACM Sigmetrics, 2014.
    [show/hide abstract]
    The increasing penetration of intermittent, unpredictable renewable energy sources, such as wind energy, poses significant challenges for utility companies trying to incorporate renewable energy into their portfolios. In this work, we discuss inventory management issues that arise in the presence of intermittent renewable resources. We model the problem as a three-stage newsvendor problem with uncertain supply and model the estimates of wind using a martingale model of forecast evolution. We describe the optimal procurement strategy and use it to study the impact of proposed market changes and of increased renewable penetration. A key insight from our results is a separation between the impact of the structure of electricity markets and the impact of increased penetration. In particular, the effect of market structure on the optimal procurement policy is independent of the level of wind penetration. Additionally, we study two proposed changes to the market structure: the addition and the placement of an intermediate market. Importantly, we show that the addition of an intermediate market does not necessarily reduce the total amount of energy procured by the utility company.
  • Unifying market power measure for deregulated transmission-constrained electricity markets
    Subhonmesh Bose, Chenye Wu, Yunjian Xu, Adam Wierman and Hamed Mohsenian-Rad
    IEEE Transactions on Power Systems, 2014. To appear.
    [show/hide abstract]
    Market power assessment is a prime concern when designing a deregulated electricity market. In this paper, we propose a new functional market power measure, termed transmission constrained network flow (TCNF), that takes into account an AC model of the network. The measure unifies three large classes of long-term transmission constrained market power indices in the literature: residual supply based, network flow based, and minimal generation based. Furthermore, it is built upon recent advances in semidefinite relaxations of AC power flow equations to model the underlying power network. Previously, market power measures that took the network into account did so via DC approximations of the power flow model. Our results highlight that using the more accurate AC model can yield fundamentally different conclusions, both about whether market power exists and about which generators can exploit market power.
  • Data center demand response: Avoiding the coincident peak via workload shifting and local generation
    Zhenhua Liu, Adam Wierman, Yuan Chen, Benjamin Razon and Niangjun Chen
    Proceedings of ACM Sigmetrics, 2013. Accepted as a poster. The full version of this work appeared in Performance Evaluation, 2013.
    [show/hide abstract]
    Demand response is a crucial aspect of the future smart grid. It has the potential to provide significant peak demand reduction and to ease the incorporation of renewable energy into the grid. Data centers' participation in demand response is becoming increasingly important given their high and increasing energy consumption and their flexibility in demand management compared to conventional industrial facilities. In this paper, we study two demand response schemes to reduce a data center's peak loads and energy expenditure: workload shifting and the use of local power generation. We conduct a detailed characterization study of coincident peak data over two decades from Fort Collins Utilities, Colorado, and then develop two optimization-based algorithms that combine workload scheduling and local power generation to avoid the coincident peak and reduce the energy expenditure. The first algorithm optimizes the expected cost and the second provides the optimal worst-case guarantee. We evaluate these algorithms via trace-based simulations. The results show that using workload shifting in combination with local generation can provide significant cost savings compared to either alone.
  • Desmond Cai and Adam Wierman
    Proceedings of IEEE CDC, 2013.
    [show/hide abstract]
    The growth of renewable resources will introduce significant variability and uncertainty into the grid. It is likely that ``peaker'' plants will be a crucial dispatchable resource for compensating for the variations in renewable supply. Thus, it is important to understand the strategic incentives of peaker plants and their potential for exploiting market power due to having responsive supply. To this end, we study an oligopolistic two-settlement market comprising two types of generation (baseloads and peakers) in which there is perfect foresight. We characterize symmetric equilibria in this context via closed-form expressions. However, we also show that, when the system is capacity-constrained, there may not exist equilibria in which baseloads and peakers play symmetric strategies. This happens because of opportunities for both types of generation to exploit market power to increase prices.
  • Chenye Wu, Subhonmesh Bose, Adam Wierman and Hamed Mohsenian-Rad
    Performance Evaluation, 2013. 70(10): ``Best Paper on System Operations and Market Economics'' award recipient.
    [show/hide abstract]
    A competitive deregulated electricity market with increasingly active market players is foreseen to be the future of the electricity industry. In such settings, market power assessment is a primary concern. In this paper, we propose a novel functional approach for measuring long term market power that unifies a variety of popular market power indices. Specifically, the new measure, termed transmission constrained network flow (TCNF), unifies three large classes of market power measures: residual supply based, network flow based, and minimal generation based. Further, TCNF provides valuable information about market power not captured by prior indices. We derive its analytic properties and test its efficacy on IEEE test systems.
  • The need for new measures to assess market power in deregulated electricity markets
    Subhonmesh Bose, Chenye Wu, Adam Wierman and Hamed Mohsenian-Rad
    IEEE Smart Grid Newsletter, 2013. The full version of this work appeared in Performance Evaluation, 2013.
  • Chenye Wu, Subhonmesh Bose, Adam Wierman and Hamed Mohsenian-Rad
    Proceedings of IEEE Power & Energy Society General Meeting, 2013. ``Best Paper on System Operations and Market Economics'' award recipient.
    [show/hide abstract]
    A competitive deregulated electricity market with increasingly active market players is foreseen to be the future of the electricity industry. In such settings, market power assessment is a primary concern. In this paper, we propose a novel functional approach for measuring long term market power that unifies a variety of popular market power indices. Specifically, the new measure, termed transmission constrained network flow (TCNF), unifies three large classes of market power measures: residual supply based, network flow based, and minimal generation based. Further, TCNF provides valuable information about market power not captured by prior indices. We derive its analytic properties and test its efficacy on IEEE test systems.
  • Integrating distributed energy resource pricing and control
    Paul de Martini, Adam Wierman, Sean Meyn and Eilyan Bitar
    Proceedings of CIGRE USNC Grid of the Future Symposium, 2012.
    [show/hide abstract]
    As the market adoption of distributed energy resources (DER) reaches regional scale, it will create significant challenges in the management of the distribution system related to existing protection and control systems, and it is likely to lead to issues for power quality and reliability. In this paper, we describe a framework for the development of a class of pricing mechanisms that both induce deep customer participation and enable efficient management of their end-use devices to provide both distribution and transmission side support. The basic challenge resides in reliably extracting the desired response from customers on short time-scales. Thus, new pricing mechanisms are needed to create effective closed-loop systems that are tightly coupled with distribution control systems to ensure reliability and power quality.
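  As a simplified, single-node analogue of the networked Cournot competition with a market maker studied in the CDC 2014 entry above (no transmission constraints and no market maker objective), the sketch below iterates best responses for two quantity-setting generators facing a linear inverse demand curve. The demand intercept, slope, and marginal costs are hypothetical.

      # Minimal two-firm Cournot sketch with linear inverse demand p = a - b*(q1 + q2).
      def best_response(a, b, c_i, q_j):
          """Profit-maximizing quantity of firm i given the rival's quantity q_j."""
          return max(0.0, (a - c_i - b * q_j) / (2 * b))

      def cournot_equilibrium(a=100.0, b=1.0, c1=20.0, c2=40.0, iters=50):
          """Iterate simultaneous best responses; with linear demand this converges."""
          q1 = q2 = 0.0
          for _ in range(iters):
              q1, q2 = best_response(a, b, c1, q2), best_response(a, b, c2, q1)
          price = a - b * (q1 + q2)
          return q1, q2, price

      print(cournot_equilibrium())
      # Closed form for comparison: q1 = (a - 2*c1 + c2) / (3*b), q2 = (a - 2*c2 + c1) / (3*b).

  For two firms with linear demand the best-response map is a contraction, so the iteration converges to the closed-form equilibrium noted in the final comment; the papers above study how transmission constraints and the market maker's objective change such outcomes.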

Geographical Load Balancing

  • Lachlan L. H. Andrew, Minghong Lin, Zhenhua Liu and Adam Wierman
    Proceedings of IEEE COIN, 2012. Invited paper.
    [show/hide abstract]
    Data center power consumption can be reduced by switching off servers during low load. However, excess switching is wasteful. This paper reviews online algorithms for optimizing this tradeoff, including the benefits of shifting load between geographically distant data centers. These algorithms can also adjust a link's number of parallel lightpaths.
  • Minghong Lin, Zhenhua Liu, Adam Wierman and Lachlan L. H. Andrew
    Proceedings of IEEE IGCC, 2012. ``Best Paper'' award recipient.
    [show/hide abstract]
    A common approach to dynamic capacity provisioning is `receding horizon control' (RHC), which computes the provisioning for the current time by optimizing over a given window of predictions about future arrivals. In this work, we provide new results characterizing the performance of RHC. We prove that RHC performs well when servers are homogeneous, i.e., its performance quickly tends toward optimality as the prediction window increases. However, we also prove that RHC performs badly when servers are heterogeneous, regardless of the length of the prediction window. This is problematic given that in practice data centers are heterogeneous. To address this issue, we introduce a variant of RHC that performs well in the heterogeneous setting. In fact, we prove that, in both the homogeneous and heterogeneous settings, it matches the competitive ratio that RHC achieves in the homogeneous setting.
  • Zhenhua Liu, Minghong Lin, Adam Wierman, Steven Low and Lachlan L. H. Andrew
    Proceedings of ACM Greenmetrics, 2011. ``Best Student Paper'' award recipient.
    [show/hide abstract]
    Given the significant energy consumption of data centers, improving their energy efficiency is an important social problem. However, energy efficiency is necessary but not sufficient for sustainability, which demands reduced usage of energy from fossil fuels. This paper investigates the feasibility of powering internet-scale systems using (nearly) entirely renewable energy. We perform a trace-based study to evaluate three issues related to achieving this goal: the impact of geographical load balancing, the role of storage, and the optimal mix of renewables. Our results highlight that geographical load balancing can significantly reduce the required capacity of renewable energy by using the energy more efficiently with ``follow the renewables'' routing. Further, our results show that small-scale storage can be useful, especially in combination with geographical load balancing, and that an optimal mix of renewables includes significantly more wind than photovoltaic solar.
  • Zhenhua Liu, Minghong Lin, Adam Wierman, Steven Low and Lachlan L. H. Andrew
    Proceedings of ACM Sigmetrics, 2011. (An illustrative ``follow the renewables'' routing sketch appears at the end of this section.)
    [show/hide abstract]
    Energy expenditure has become a significant fraction of data center operating costs. Recently, `geographical load balancing' has been suggested as an approach for taking advantage of the geographical diversity of Internet-scale distributed systems in order to reduce energy expenditures by exploiting the electricity price differences across regions. However, the fact that such designs reduce energy costs does not imply that they reduce energy usage. In fact, such designs often increase energy usage.

    This paper explores whether the geographical diversity of Internet-scale systems can be used to provide environmental gains in addition to reducing data center costs. Specifically, we explore whether geographical load balancing can encourage usage of `green' energy from renewable sources and reduce usage of `brown' energy from fossil fuels. We make two contributions. First, we derive three algorithms, with varying degrees of distributed computation, for achieving optimal geographical load balancing. Second, using these algorithms, we show that if dynamic pricing of electricity is done in proportion to the fraction of the total energy that is brown at each time, then geographical load balancing provides significant reductions in brown energy usage. However, the benefits depend strongly on the degree to which systems accept dynamic energy pricing and the form of pricing used.
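  As a rough illustration of the ``follow the renewables'' routing idea in the entries above, the sketch below greedily sends load to the sites with unused renewable supply and spills the remainder onto the sites with the cheapest brown energy. Site names, capacities, and prices are hypothetical, load and energy are conflated into a single unit, and the delay costs and distributed algorithms of the papers are ignored.

      # Minimal "follow the renewables" geographical load balancing sketch.
      def follow_the_renewables(total_load, sites):
          """sites: list of dicts with keys name, capacity, renewable, brown_price."""
          allocation = {s["name"]: 0.0 for s in sites}
          remaining = total_load
          # Pass 1: absorb load with renewable energy wherever it is available.
          for s in sorted(sites, key=lambda site: -site["renewable"]):
              take = min(remaining, s["capacity"], s["renewable"])
              allocation[s["name"]] += take
              remaining -= take
          # Pass 2: place any leftover load where brown energy is cheapest.
          for s in sorted(sites, key=lambda site: site["brown_price"]):
              spare = s["capacity"] - allocation[s["name"]]
              take = min(remaining, spare)
              allocation[s["name"]] += take
              remaining -= take
          return allocation, remaining   # remaining > 0 means load had to be dropped

      sites = [
          {"name": "west",    "capacity": 40, "renewable": 25, "brown_price": 0.06},
          {"name": "midwest", "capacity": 30, "renewable": 10, "brown_price": 0.04},
          {"name": "east",    "capacity": 50, "renewable":  5, "brown_price": 0.08},
      ]
      print(follow_the_renewables(70, sites))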

Large Deviations

  • Jayakrishnan Nair, Sachin Adlakha and Adam Wierman
    Proceedings of ACM Sigmetrics, 2014. (A simplified newsvendor sketch appears at the end of this section.)
    [show/hide abstract]
    The increasing penetration of intermittent, unpredictable renewable energy sources, such as wind energy, poses significant challenges for utility companies trying to incorporate renewable energy into their portfolios. In this work, we discuss inventory management issues that arise in the presence of intermittent renewable resources. We model the problem as a three-stage newsvendor problem with uncertain supply and model the estimates of wind using a martingale model of forecast evolution. We describe the optimal procurement strategy and use it to study the impact of proposed market changes and of increased renewable penetration. A key insight from our results is a separation between the impact of the structure of electricity markets and the impact of increased penetration. In particular, the effect of market structure on the optimal procurement policy is independent of the level of wind penetration. Additionally, we study two proposed changes to the market structure: the addition and the placement of an intermediate market. Importantly, we show that the addition of an intermediate market does not necessarily reduce the total amount of energy procured by the utility company.
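  As a much-simplified, single-stage analogue of the multi-stage procurement problem above, the sketch below computes the classic newsvendor critical-fractile quantity for day-ahead energy purchases against Gaussian net demand (load minus wind). All prices and forecast parameters are hypothetical, and the three-stage structure and martingale forecast model of the paper are not represented.

      # Minimal single-stage newsvendor sketch for forward energy procurement.
      from statistics import NormalDist

      def forward_procurement(mu, sigma, forward_price, realtime_price, salvage=0.0):
          """Critical-fractile quantity to buy in the day-ahead market."""
          underage = realtime_price - forward_price   # cost of being short in real time
          overage = forward_price - salvage           # cost of over-procuring
          fractile = underage / (underage + overage)
          return NormalDist(mu, sigma).inv_cdf(fractile)

      # Net demand forecast: mean 100 MWh, standard deviation 15 MWh (hypothetical).
      q = forward_procurement(mu=100, sigma=15, forward_price=40, realtime_price=90, salvage=10)
      print(f"procure {q:.1f} MWh in the day-ahead market")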

Market Power

  • Chenye Wu, Subhonmesh Bose, Adam Wierman and Hamed Mohsenian-Rad
    Proceedings of IEEE Power & Energy Society General Meeting, 2013. ``Best Paper on System Operations and Market Economics'' award recipient.
    [show/hide abstract]
    A competitive deregulated electricity market with increasingly active market players is foreseen to be the future of the electricity industry. In such settings, market power assessment is a primary concern. In this paper, we propose a novel functional approach for measuring long term market power that unifies a variety of popular market power indices. Specifically, the new measure, termed transmission constrained network flow (TCNF), unifies three large classes of market power measures: residual supply based, network flow based, and minimal generation based. Further, TCNF provides valuable information about market power not captured by prior indices. We derive its analytic properties and test its efficacy on IEEE test systems.

Network Economics

  • Jayakrishnan Nair, Vijay Subramanian and Adam Wierman
    Proceedings of IFIP Performance, 2014. Extended abstract.
    [show/hide abstract]
    Motivated by cloud services, we consider the interplay of network effects, congestion, and competition in ad-supported services. We study the strategic interactions between competing service providers and a user base, modeling congestion sensitivity and two forms of positive network effects: network effects that are either 'firm-specific' or 'industry-wide.' Our analysis reveals that users are generally no better off due to the competition in a marketplace with positive network effects. Further, our analysis highlights an important contrast between firm-specific and industry-wide network effects: Firms can coexist in a marketplace with industry-wide network effects, but near-monopolies tend to emerge in marketplaces with firm-specific network effects.
  • Adam Wierman, Zhenhua Liu, Iris Liu and Hamed Mohsenian-Rad
    Proceedings of IEEE IGCC, 2014.
    [show/hide abstract]
    This paper surveys the opportunities and challenges in an emerging area of research that has the potential to significantly ease the incorporation of renewable energy into the grid as well as electric power peak-load shaving: data center demand response. Data center demand response sits at the intersection of two growing fields: energy efficient data centers and demand response in the smart grid. As such, the literature related to data center demand response is sprinkled across multiple areas and worked on by diverse groups. Our goal in this survey is to demonstrate the potential of the field while also summarizing the progress that has been made and the challenges that remain.
  • Subhonmesh Bose, Desmond Cai, Steven Low and Adam Wierman
    Proceedings of IEEE CDC, 2014.
    [show/hide abstract]
    We study the role of a market maker (or system operator) in a transmission constrained electricity market. We model the market as a one-shot networked Cournot competition where generators supply quantity bids and load serving entities provide downward sloping inverse demand functions. This mimics the operation of a spot market in a deregulated market structure. In this paper, we focus on possible mechanisms employed by the market maker to balance demand and supply. In particular, we consider three candidate objective functions that the market maker optimizes -- social welfare, residual social welfare, and consumer surplus. We characterize the existence of Generalized Nash Equilibrium (GNE) in this setting and demonstrate that market outcomes at equilibrium can be very different under the candidate objective functions.
  • Optimal power procurement for data centers in day-ahead and real-time electricity markets
    Mahdi Ghamkhari, Hamed Mohsenian-Rad and Adam Wierman
    Proceedings of Infocom Workshop on Smart Data Pricing, 2014.
    [show/hide abstract]
    With the growing amount of power consumed by data centers, finding ways to cut electricity bills has become an important and challenging problem. In this paper, we seek to understand the cost reductions that data centers may achieve by exploiting the diversity in the price of electricity in the day-ahead and real-time electricity markets. Based on a stochastic optimization framework, we propose to jointly select the data centers' service rates and their demand bids to the day-ahead and real-time electricity markets. In our analysis, we take into account service-level agreements, risk management constraints, and also the statistical characteristics of the workload and the electricity prices. Using empirical electricity price and Internet workload data, our numerical studies show that directly participating in the day-ahead and real-time electricity markets can significantly help data centers reduce their energy expenditure.
  • Zhenhua Liu, Iris Liu, Steven Low and Adam Wierman
    Proceedings of ACM Sigmetrics, 2014.
    [show/hide abstract]
    Demand response is a crucial tool for the incorporation of renewable energy into the grid. In this paper, we focus on a particularly promising industry for demand response: data centers. We use simulations to show that, not only are data centers large loads, but they can provide as much flexibility as (or possibly more than) large-scale storage, if given the proper incentives. However, due to the market power most data centers maintain, it is difficult to design programs that are efficient for data center demand response. To that end, we propose that prediction-based pricing is an appealing market design, and show that it outperforms more traditional supply-function bidding mechanisms in situations where market power is an issue. However, prediction-based pricing may be inefficient when predictions are not accurate, and so we provide analytic, worst-case bounds on the impact of prediction accuracy on the efficiency of prediction-based pricing. These bounds hold even when network constraints are considered, and highlight that prediction-based pricing is surprisingly robust to prediction error.
  • Jonatha Anselmi, Danilo Ardagna, John C. S. Lui, Adam Wierman, Yunjian Xu and Zichao Yang
    Proceedings of NetEcon, 2013. NetEcon was held jointly with ACM WPIN, 2013.
    [show/hide abstract]
    This paper proposes a model to study the interaction of price competition and congestion in the cloud computing marketplace. Specifically, we propose a three-tier market model that captures a marketplace with users purchasing services from Software-as-a-Service (SaaS) providers, which in turn purchase computing resources from either Platform-as-a-Service (PaaS) providers or Infrastructure-as-a-Service (IaaS) providers. Within each level, we define and characterize competitive equilibria. Further, we use these characterizations to understand the relative profitability of SaaSs and PaaSs/IaaSs, and to understand the impact of price competition on the user-experienced performance, i.e., the `price of anarchy' of the cloud marketplace. Our results highlight that both of these depend fundamentally on the degree to which congestion results from shared or dedicated resources in the cloud.
  • Ho-Lin Chen, Jason Marden and Adam Wierman
    Proceedings of ACM MAMA, 2008.
    [show/hide abstract]
    Load balancing is a common approach to distributed task assignment. In this paper we study the degree of inefficiency in load balancing designs. We show that the degree of inefficiency is highly dependent on the back-end scheduler.

Networking

  • Characterizing the impact of the workload on the value of dynamic resizing in data centers
    Kai Wang, Minghong Lin, Florin Ciucu, Adam Wierman and Chuang Lin
    Performance Evaluation, 2015. This is an extension of a paper that appeared in IEEE Infocom, 2013.
    [show/hide abstract]
    Energy consumption imposes a significant cost for data centers; yet much of that energy is used to maintain excess service capacity during periods of predictably low load. As a result, there has recently been interest in developing designs that allow the service capacity to be dynamically resized to match the current workload. However, there is still much debate about the value of such approaches in real settings. In this paper, we show that the value of dynamic resizing is highly dependent on statistics of the workload process. In particular, both slow time-scale non-stationarities of the workload (e.g., the peak-to-mean ratio) and the fast time-scale stochasticity (e.g., the burstiness of arrivals) play key roles. To illustrate the impact of these factors, we combine optimization-based modeling of the slow time scale with stochastic modeling of the fast time scale. Within this framework, we provide both analytic and numerical results characterizing when dynamic resizing does (and does not) provide benefits.
  • Jayakrishnan Nair, Vijay Subramanian and Adam Wierman
    Proceedings of IFIP Performance, 2014. Extended abstract.
    [show/hide abstract]
    Motivated by cloud services, we consider the interplay of network effects, congestion, and competition in ad-supported services. We study the strategic interactions between competing service providers and a user base, modeling congestion sensitivity and two forms of positive network effects: network effects that are either 'firm-specific' or 'industry-wide.' Our analysis reveals that users are generally no better off due to the competition in a marketplace with positive network effects. Further, our analysis highlights an important contrast between firm-specific and industry-wide network effects: Firms can coexist in a marketplace with industry-wide network effects, but near-monopolies tend to emerge in marketplaces with firm-specific network effects.
  • Zhenhua Liu, Minghong Lin, Adam Wierman, Steven Low and Lachlan L. H. Andrew
    IEEE Transactions on Networking, 2014. Extension of a paper that appeared in ACM Sigmetrics, 2011.
    [show/hide abstract]
    Energy expenditure has become a significant fraction of data center operating costs. Recently, `geographical load balancing' has been suggested as an approach for taking advantage of the geographical diversity of Internet-scale distributed systems in order to reduce energy expenditures by exploiting the electricity price differences across regions. However, the fact that such designs reduce energy costs does not imply that they reduce energy usage. In fact, such designs often increase energy usage.

    This paper explores whether the geographical diversity of Internet-scale systems can be used to provide environmental gains in addition to reducing data center costs. Specifically, we explore whether geographical load balancing can encourage usage of `green' energy from renewable sources and reduce usage of `brown' energy from fossil fuels. We make two contributions. First, we derive three algorithms, with varying degrees of distributed computation, for achieving optimal geographical load balancing. Second, using these algorithms, we show that if dynamic pricing of electricity is done in proportion to the fraction of the total energy that is brown at each time, then geographical load balancing provides significant reductions in brown energy usage. However, the benefits depend strongly on the degree to which systems accept dynamic energy pricing and the form of pricing used.
  • Jonatha Anselmi, Danilo Ardagna, John C. S. Lui, Adam Wierman, Yunjian Xu and Zichao Yang
    Proceedings of NetEcon, 2013. NetEcon was held jointly with ACM WPIN, 2013.
    [show/hide abstract]
    This paper proposes a model to study the interaction of price competition and congestion in the cloud computing marketplace. Specifically, we propose a three-tier market model that captures a marketplace with users purchasing services from Software-as-a-Service (SaaS) providers, which in turn purchase computing resources from either Platform-as-a-Service (PaaS) providers or Infrastructure-as-a-Service (IaaS) providers. Within each level, we define and characterize competitive equilibria. Further, we use these characterizations to understand the relative profitability of SaaSs and PaaSs/IaaSs, and to understand the impact of price competition on the user-experienced performance, i.e., the `price of anarchy' of the cloud marketplace. Our results highlight that both of these depend fundamentally on the degree to which congestion results from shared or dedicated resources in the cloud.
  • Minghong Lin, Adam Wierman, Lachlan L. H. Andrew and Eno Thereska
    IEEE Transactions on Networking, 2013. 21:1378-1391. Received the 2014 IEEE Communication Society William R. Bennett Prize. Extended version of a paper that appeared in IEEE Infocom, 2011.
    [show/hide abstract]
    Power consumption imposes a significant cost for data centers implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of predictably low load. This paper investigates how much can be saved by dynamically `right-sizing' the data center by turning off servers during such periods, and how to achieve that saving via an online algorithm. We prove that the optimal offline algorithm for dynamic right-sizing has a simple structure when viewed in reverse time, and this structure is exploited to develop a new `lazy' online algorithm, which is proven to be 3-competitive. We validate the algorithm using traces from two real data center workloads and show that significant cost-savings are possible.
  • Kai Wang, Minghong Lin, Florin Ciucu, Adam Wierman and Chuang Lin
    Proceedings of ACM Sigmetrics, 2012. (Accepted as a poster.) Sigmetrics held jointly with IFIP Performance. An extended version of this work appeared in IEEE Infocom 2013.
    [show/hide abstract]
    Energy consumption imposes a significant cost for data centers; yet much of that energy is used to maintain excess service capacity during periods of predictably low load. As a result, there has recently been interest in developing designs that allow the service capacity to be dynamically resized to match the current workload. However, there is still much debate about the value of such approaches in real settings. In this paper, we show that the value of dynamic resizing is highly dependent on statistics of the workload process. In particular, both slow time-scale non-stationarities of the workload (e.g., the peak-to-mean ratio) and the fast time-scale stochasticity (e.g., the burstiness of arrivals) play key roles. To illustrate the impact of these factors, we combine optimization-based modeling of the slow time scale with stochastic modeling of the fast time scale. Within this framework, we provide both analytic and numerical results characterizing when dynamic resizing does (and does not) provide benefits.
  • Ho-Lin Chen, Jason Marden and Adam Wierman
    Proceedings of ACM MAMA, 2008.
    [show/hide abstract]
    Load balancing is a common approach to distributed task assignment. In this paper we study the degree of inefficiency in load balancing designs. We show that the degree of inefficiency is highly dependent on the back-end scheduler.

Online Optimization

  • Online Convex Optimization Using Predictions
    Niangjun Chen, Anish Agarwal, Adam Wierman, Siddharth Barman and Lachlan L. H. Andrew
    Proceedings of ACM Sigmetrics, 2015. (An illustrative AFHC sketch appears at the end of this section.)
    [show/hide abstract]
    Making use of predictions is a crucial but under-explored area of online algorithms. This paper studies a class of online optimization problems where we have external noisy predictions available. We propose a stochastic prediction error model that generalizes prior models in the learning and stochastic control communities, incorporates correlation among prediction errors, and captures the fact that predictions improve as time passes. We prove that achieving sublinear regret and a constant competitive ratio for online algorithms requires the use of an unbounded prediction window in adversarial settings, but that under more realistic stochastic prediction error models it is possible to use Averaging Fixed Horizon Control (AFHC) to simultaneously achieve sublinear regret and a constant competitive ratio using only a constant-sized prediction window. Furthermore, we show that the typical performance of AFHC is tightly concentrated around its mean.
  • Data center demand response: Avoiding the coincident peak via workload shifting and local generation
    Zhenhua Liu, Adam Wierman, Yuan Chen, Benjamin Razon and Niangjun Chen
    Proceedings of ACM Sigmetrics, 2013. Accepted as a poster. The full version of this work appeared in Performance Evaluation, 2013.
    [show/hide abstract]
    Demand response is a crucial aspect of the future smart grid. It has the potential to provide significant peak demand reduction and to ease the incorporation of renewable energy into the grid. Data centers' participation in demand response is becoming increasingly important given their high and increasing energy consumption and their flexibility in demand management compared to conventional industrial facilities. In this paper, we study two demand response schemes to reduce a data center's peak loads and energy expenditure: workload shifting and the use of local power generation. We conduct a detailed characterization study of coincident peak data over two decades from Fort Collins Utilities, Colorado, and then develop two optimization-based algorithms that combine workload scheduling and local power generation to avoid the coincident peak and reduce the energy expenditure. The first algorithm optimizes the expected cost and the second provides the optimal worst-case guarantee. We evaluate these algorithms via trace-based simulations. The results show that using workload shifting in combination with local generation can provide significant cost savings compared to either alone.
  • Minghong Lin, Adam Wierman, Lachlan L. H. Andrew and Eno Thereska
    IEEE Transactions on Networking, 2013. 21:1378-1391. Received the 2014 IEEE Communication Society William R. Bennett Prize. Extended version of a paper that appeared in IEEE Infocom, 2011.
    [show/hide abstract]
    Power consumption imposes a significant cost for data centers implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of predictably low load. This paper investigates how much can be saved by dynamically `right-sizing' the data center by turning off servers during such periods, and how to achieve that saving via an online algorithm. We prove that the optimal offline algorithm for dynamic right-sizing has a simple structure when viewed in reverse time, and this structure is exploited to develop a new `lazy' online algorithm, which is proven to be 3-competitive. We validate the algorithm using traces from two real data center workloads and show that significant cost-savings are possible.
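  As a toy illustration of Averaging Fixed Horizon Control (AFHC) from the first entry in this section, the sketch below averages the actions of w fixed-horizon controllers with staggered restart times on a one-dimensional problem with quadratic hitting costs and an L1 switching cost. Predictions are taken to be exact inside each window, and each window is solved by brute force over a coarse grid rather than by a convex solver; all numbers are hypothetical.

      # Minimal AFHC sketch for online convex optimization with switching costs.
      from itertools import product

      GRID = [i / 10 for i in range(21)]   # candidate actions in [0, 2]

      def window_plan(x_prev, targets, beta):
          """Grid-optimal plan minimizing (x - y)^2 hitting cost plus beta*|dx| switching cost."""
          best_cost, best_plan = float("inf"), None
          for plan in product(GRID, repeat=len(targets)):
              cost, prev = 0.0, x_prev
              for x, y in zip(plan, targets):
                  cost += (x - y) ** 2 + beta * abs(x - prev)
                  prev = x
              if cost < best_cost:
                  best_cost, best_plan = cost, plan
          return best_plan

      def afhc(targets, w=3, beta=0.5):
          """Average the actions of w fixed-horizon controllers with staggered restarts."""
          T = len(targets)
          actions = [[0.0] * T for _ in range(w)]
          for k in range(w):
              x_prev = 0.0
              windows = ([(0, k)] if k else []) + [(s, min(s + w, T)) for s in range(k, T, w)]
              for lo, hi in windows:
                  if lo >= hi:
                      continue
                  plan = window_plan(x_prev, targets[lo:hi], beta)
                  for i, x in enumerate(plan):
                      actions[k][lo + i] = x
                  x_prev = plan[-1]
          return [sum(col) / w for col in zip(*actions)]

      targets = [0.2, 0.8, 1.0, 1.4, 1.3, 0.5, 0.3]   # hypothetical per-slot cost minimizers
      print([round(a, 2) for a in afhc(targets)])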

Queueing Games

  • Jayakrishnan Nair, Vijay Subramanian and Adam Wierman
    Proceedings of IFIP Performance, 2014. Extended abstract.
    [show/hide abstract]
    Motivated by cloud services, we consider the interplay of network effects, congestion, and competition in ad-supported services. We study the strategic interactions between competing service providers and a user base, modeling congestion sensitivity and two forms of positive network effects: network effects that are either 'firm-specific' or 'industry-wide.' Our analysis reveals that users are generally no better off due to the competition in a marketplace with positive network effects. Further, our analysis highlights an important contrast between firm-specific and industry-wide network effects: Firms can coexist in a marketplace with industry-wide network effects, but near-monopolies tend to emerge in marketplaces with firm-specific network effects.
  • Jonatha Anselmi, Danilo Ardagna, John C. S. Lui, Adam Wierman, Yunjian Xu and Zichao Yang
    Proceedings of NetEcon, 2013. NetEcon was held jointly with ACM WPIN, 2013.
    [show/hide abstract]
    This paper proposes a model to study the interaction of price competition and congestion in the cloud computing marketplace. Specifically, we propose a three-tier market model that captures a marketplace with users purchasing services from Software-as-a-Service (SaaS) providers, which in turn purchase computing resources from either Platform-as-a-Service (PaaS) providers or Infrastructure-as-a-Service (IaaS) providers. Within each level, we define and characterize competitive equilibria. Further, we use these characterizations to understand the relative profitability of SaaSs and PaaSs/IaaSs, and to understand the impact of price competition on the user-experienced performance, i.e., the `price of anarchy' of the cloud marketplace. Our results highlight that both of these depend fundamentally on the degree to which congestion results from shared or dedicated resources in the cloud.
  • Ho-Lin Chen, Jason Marden and Adam Wierman
    Proceedings of ACM MAMA, 2008.
    [show/hide abstract]
    Load balancing is a common approach to distributed task assignment. In this paper we study the degree of inefficiency in load balancing designs. We show that the degree of inefficiency is highly dependent on the back-end scheduler.

Renewable Energy

  • Jayakrishnan Nair, Sachin Adlakha and Adam Wierman
    Preprint.
    [show/hide abstract]
    The increasing penetration of intermittent, unpredictable renewable energy sources, such as wind energy, poses significant challenges for utility companies trying to incorporate renewable energy into their portfolios. In this work, we discuss inventory management issues that arise in the presence of intermittent renewable resources. We model the problem as a three-stage newsvendor problem with uncertain supply and model the estimates of wind using a martingale model of forecast evolution. We describe the optimal procurement strategy and use it to study the impact of proposed market changes and of increased renewable penetration. A key insight from our results is a separation between the impact of the structure of electricity markets and the impact of increased penetration. In particular, the effect of market structure on the optimal procurement policy is independent of the level of wind penetration. Additionally, we study two proposed changes to the market structure: the addition and the placement of an intermediate market. Importantly, we show that the addition of an intermediate market does not necessarily reduce the total amount of energy procured by the utility company.
  • Zhenhua Liu, Yuan Chen, Cullen Bash, Adam Wierman, Daniel Gmach, Zhikui Wang, Manish Marwah and Chris Hyser
    Proceedings of ACM Sigmetrics, 2012. Sigmetrics held jointly with IFIP Performance.
    [show/hide abstract]
    The demand for data center computing has increased significantly in recent years, resulting in huge energy consumption. Data centers typically comprise three main subsystems: the IT equipment, which provides services to customers; the power infrastructure, which supports the IT and cooling equipment; and the cooling infrastructure, which removes the generated heat. This work presents a novel approach to model the energy flows in a data center and optimize its holistic operation. Traditionally, supply-side constraints such as energy or cooling availability were largely treated independently from IT workload management. This work reduces cost and environmental impact using a holistic approach that integrates energy supply, e.g., renewable supply and dynamic pricing, and cooling supply, e.g., chiller and outside air cooling, with IT workload planning to improve the overall sustainability of data center operations. Specifically, we predict renewable energy as well as IT demand and design an IT workload management plan that schedules IT workload and allocates IT resources within a data center according to time-varying power supply and cooling efficiency. We have implemented and evaluated our approach using traces from real data centers and production systems. The results demonstrate that our approach can reduce the recurring power costs and the use of non-renewable energy by as much as 60% compared to existing, non-integrated techniques, while still meeting operational goals and Service Level Agreements.
  • Libin Jiang and Steven Low
    Proceedings of Allerton, 2011.
    [show/hide abstract]
    We consider a set of users served by a single load-serving entity (LSE) in the electricity grid. The LSE procures capacity a day ahead. When random renewable energy is realized at delivery time, it actively manages user load through real-time demand response and purchases balancing power on the spot market to meet the aggregate demand. Hence, to maximize the social welfare, decisions must be coordinated over two timescales (a day ahead and in real time), in the presence of supply uncertainty, and computed jointly by the LSE and the users since the necessary information is distributed among them. We formulate the problem as a dynamic program. We propose a distributed heuristic algorithm and prove its optimality when the welfare function is quadratic and the LSE’s decisions are strictly positive. Otherwise, we bound the gap between the welfare achieved by the heuristic algorithm and the maximum in certain cases. Simulation results suggest that the performance gap is small. As we scale up the size of a renewable generation plant, both its mean production and its variance will likely increase. We characterize the impact of the mean and variance of renewable energy on the maximum welfare. This paper is a continuation of [2], focusing on time-correlated demand.
  • Libin Jiang and Steven Low
    Proceedings of IEEE CDC, 2011.
    [show/hide abstract]
    We propose a simple model that integrates two-period electricity markets, uncertainty in renewable generation, and real-time dynamic demand response. A load-serving entity decides its day-ahead procurement to optimize expected social welfare a day before energy delivery. At delivery time, when renewable generation is realized, it sets prices to manage demand and purchases additional power on the real-time market, if necessary, to balance supply and demand. We derive the optimal day-ahead decision, propose a real-time demand response algorithm, and study the effect of volume and variability of renewable generation on these optimal decisions and on social welfare.
  • Desmond Cai, Sachin Adlakha and K. Mani Chandy
    Proceedings of IEEE CDC, 2011.
    [show/hide abstract]
    The growth of wind energy production poses several challenges for its integration into current electric power systems. In this work, we study how a wind power producer can bid optimally in existing electricity markets. We derive the optimal contract size and expected profit for a wind producer under an arbitrary penalty function and generation costs. A key feature of our analysis is to allow the wind producer to strategically withhold production once the day-ahead contract is signed. Such strategic behavior is detrimental to the smooth functioning of electricity markets. We show that under simple conditions on the offered price and marginal imbalance penalty, a risk-neutral, profit-maximizing wind power producer will produce as much wind power as is available (up to its contract size).
  • Zhenhua Liu, Minghong Lin, Adam Wierman, Steven Low and Lachlan L. H. Andrew
    Proceedings of ACM Greenmetrics, 2011. ``Best Student Paper'' award recipient.
    [show/hide abstract]
    Given the significant energy consumption of data centers, improving their energy efficiency is an important social problem. However, energy efficiency is necessary but not sufficient for sustainability, which demands reduced usage of energy from fossil fuels. This paper investigates the feasibility of powering internet-scale systems using (nearly) entirely renewable energy. We perform a trace-based study to evaluate three issues related to achieving this goal: the impact of geographical load balancing, the role of storage, and the optimal mix of renewables. Our results highlight that geographical load balancing can significantly reduce the required capacity of renewable energy by using the energy more efficiently with ``follow the renewables'' routing. Further, our results show that small-scale storage can be useful, especially in combination with geographical load balancing, and that an optimal mix of renewables includes significantly more wind than photovoltaic solar.
  • Zhenhua Liu, Minghong Lin, Adam Wierman, Steven Low and Lachlan L. H. Andrew
    Proceedings of ACM Sigmetrics, 2011.
    [show/hide abstract]
    Energy expenditure has become a significant fraction of data center operating costs. Recently, `geographical load balancing' has been suggested as an approach for taking advantage of the geographical diversity of Internet-scale distributed systems in order to reduce energy expenditures by exploiting the electricity price differences across regions. However, the fact that such designs reduce energy costs does not imply that they reduce energy usage. In fact, such designs often increase energy usage.

    This paper explores whether the geographical diversity of Internet-scale systems can be used to provide environmental gains in addition to reducing data center costs. Specifically, we explore whether geographical load balancing can encourage usage of `green' energy from renewable sources and reduce usage of `brown' energy from fossil fuels. We make two contributions. First, we derive three algorithms, with varying degrees of distributed computation, for achieving optimal geographical load balancing. Second, using these algorithms, we show that if dynamic pricing of electricity is done in proportion to the fraction of the total energy that is brown at each time, then geographical load balancing provides significant reductions in brown energy usage. However, the benefits depend strongly on the degree to which systems accept dynamic energy pricing and the form of pricing used.

Scheduling

  • Ganesh Ananthanarayanan, Michael Chien-Chun Hung, Xiaoqi Ren, Ion Stoica, Adam Wierman and Minlan Yu
    Proceedings of USENIX NSDI, 2014.
    [show/hide abstract]
    In big data analytics, timely results, even if based on only part of the data, are often good enough. For this reason, approximation jobs, which have deadline or error bounds and require only a subset of their tasks to complete, are projected to dominate big data workloads. In this paper, we present GRASS, which carefully uses speculation to mitigate the impact of stragglers in approximation jobs. The design of GRASS is based on a first-principles analysis of the impact of speculative copies. GRASS delicately balances immediacy of improving the approximation goal with the long-term implications of using extra resources for speculation. Evaluations with production workloads from Facebook and Microsoft Bing in an EC2 cluster of 200 nodes show that GRASS increases the accuracy of deadline-bound jobs by 47% and speeds up error-bound jobs by 38%. GRASS's design also speeds up exact computations, making it a unified solution for straggler mitigation.
  • Minghong Lin, Jian Tan, Adam Wierman and Li Zhang
    Performance Evaluation, 2013. 70(10):720-735. This work also appeared in the Proceedings of IFIP Performance, 2013, where it received the ``Best Student Paper'' award. It was one of the ten most downloaded papers of Performance Evaluation during Fall and Winter 2013.
    [show/hide abstract]
    MapReduce is a scalable parallel computing framework for big data processing. It exhibits multiple processing phases, and thus an efficient job scheduling mechanism is crucial for ensuring efficient resource utilization. This paper studies the scheduling challenge that results from the overlapping of the ``map'' and ``shuffle'' phases in MapReduce. We propose a new, general model for this scheduling problem, and validate this model using cluster experiments. Further, we prove that scheduling to minimize average response time in this model is strongly NP-hard in the offline case and that no online algorithm can be constant-competitive. However, we provide two online algorithms that match the performance of the offline optimal when given a slightly faster service rate (i.e., in the resource augmentation framework). Finally, we validate the algorithms using a workload trace from a Google cluster and show that the algorithms are near optimal in practical settings.
  • Data center demand response: Avoiding the coincident peak via workload shifting and local generation
    Zhenhua Liu, Adam Wierman, Yuan Chen, Benjamin Razon and Niangjun Chen
    Proceedings of ACM Sigmetrics, 2013. Accepted as a poster. The full version of this work appeared in Performance Evaluation, 2013.
    [show/hide abstract]
    Demand response is a crucial aspect of the future smart grid. It has the potential to provide significant peak demand reduction and to ease the incorporation of renewable energy into the grid. Data centers' participation in demand response is becoming increasingly important given their high and increasing energy consumption and their flexibility in demand management compared to conventional industrial facilities. In this paper, we study two demand response schemes to reduce a data center's peak loads and energy expenditure: workload shifting and the use of local power generations. We conduct a detailed characterization study of coincident peak data over two decades from Fort Collins Utilities, Colorado and then develop two optimization based algorithms by combining workload scheduling and local power generation to avoid the coincident peak and reduce the energy expenditure. The first algorithm optimizes the expected cost and the second one provides the optimal worst-case guarantee. We evaluate these algorithms via trace-based simulations. The results show that using workload shifting in combination with local generation can provide significant cost savings compared to either alone.
  • Adam Wierman, Lachlan L. H. Andrew and Minghong Lin
    Chapter in Handbook on Energy-Aware and Green Computing, 2012.
    [show/hide abstract]
    Speed scaling has long been used as a power-saving mechanism at a chip level. However, in recent years, speed scaling has begun to be used as an approach for trading off energy usage and performance throughout all levels of computer systems. This widespread use of speed scaling has motivated significant research on the topic, but many fundamental questions about speed scaling are only beginning to be understood. In this chapter, we focus on a simple, but general, model of speed scaling and provide an algorithmic perspective on four fundamental questions: (i) What is the structure of the optimal speed scaling algorithm? (ii) How does speed scaling interact with scheduling? (iii) What is the impact of the sophistication of speed scaling algorithms? and (iv) Does speed scaling have any unintended consequences? For each question we provide a summary of insights from recent work on the topic in both worst-case and stochastic analysis as well as a discussion of interesting open questions that remain.
  • Speed scaling for processor sharing systems: Optimality and robustness
    Lachlan L. H. Andrew, Adam Wierman and Ao Tang
    Performance Evaluation, 2012. To appear.
    [show/hide abstract]
    Adapting the speed of a processor is an effective method to reduce energy consumption. This paper studies the optimal way to scale speed to balance response time and energy consumption under processor sharing scheduling. It is shown that using a static rate while the system is busy provides nearly optimal performance, but having more available speeds increases robustness to different traffic loads. In particular, the dynamic speed scaling optimal for Poisson arrivals is also constant-competitive in the worst case. The scheme which equates power consumption with queue occupancy is shown to be 10-competitive when power is cubic in speed.
  • Jonatha Anselmi, Urtzi Ayesta and Adam Wierman
    Performance Evaluation, 2011. 68(11):986-1001. Also appeared in the Proceedings of IFIP Performance, 2011.
    [show/hide abstract]
    We study a nonatomic congestion game with N parallel links, with each link under the control of a profit-maximizing provider. Within this 'load balancing game', each provider has the freedom to set a price, or toll, for access to the link and seeks to maximize its own profit. Given fixed prices, a Wardrop equilibrium among users is assumed, under which users all choose paths of minimal and identical effective cost. Within this model we have oligopolistic price competition which, in equilibrium, gives rise to situations where neither providers nor users have incentives to adjust their prices or routes, respectively. In this context, we provide new results about the existence and efficiency of oligopolistic equilibria. Our main theorem shows that, when the number of providers is small, oligopolistic equilibria can be extremely inefficient; however, as the number of providers N grows, the oligopolistic equilibria become increasingly efficient (at a rate of 1/N) and, in the limit, match the socially optimal allocation.
  • Lachlan L. H. Andrew, Minghong Lin and Adam Wierman
    Proceedings of ACM Sigmetrics, 2010.
    [show/hide abstract]
    System design must strike a balance between energy and performance by carefully selecting the speed at which the system will run. In this work, we examine fundamental tradeoffs incurred when designing a speed scaler to minimize a weighted sum of expected response time and energy use per job. We prove that a popular dynamic speed scaling algorithm is 2-competitive for this objective and that no ``natural'' speed scaler can improve on this. Further, we prove that energy-proportional speed scaling works well across two common scheduling policies: Shortest Remaining Processing Time (SRPT) and Processor Sharing (PS). Third, we show that under SRPT and PS, gated-static speed scaling is nearly optimal when the mean workload is known, but that dynamic speed scaling provides robustness against uncertain workloads. Finally, we prove that speed scaling magnifies unfairness, notably SRPT's bias against large jobs and the bias against short jobs in non-preemptive policies. However, PS remains fair under speed scaling. Together, these results show that the speed scalers studied here can achieve any two, but only two, of optimality, fairness, and robustness.
  • Lachlan L. H. Andrew, Minghong Lin, Ao Tang and Adam Wierman
    IEEE COMSOC Multimedia Communications Technical Committee, 2010.
  • Wei Chen, Dayu Huang, Ankur Kulkarni, Jayakrishnan Unnikrishnan, Quanyan Zhu, Prashant Mehta, Sean Meyn and Adam Wierman
    Proceedings of IEEE CDC, 2009.
    [show/hide abstract]
    TD learning and its refinements are powerful tools for approximating the solution to dynamic programming problems. However, the techniques provide the approximate solution only within a prescribed finite-dimensional function class. Thus, the question that always arises is: how should the function class be chosen? The goal of this paper is to propose an approach for TD learning based on choosing the function class using the solutions to associated fluid and diffusion approximations. In order to illustrate this new approach, the paper focuses on an application to dynamic speed scaling for power management.
  • Ho-Lin Chen, Jason Marden and Adam Wierman
    Proceedings of IEEE Infocom, 2009.
    [show/hide abstract]
    Load balancing is a common approach to task assignment in distributed architectures. In this paper, we show that the degree of inefficiency in load balancing designs is highly dependent on the scheduling discipline used at each of the back-end servers. Traditionally, the back-end scheduler can be modeled as Processor Sharing (PS), in which case the degree of inefficiency grows linearly with the number of servers. However, if the back-end scheduler is changed to Shortest Remaining Processing Time (SRPT), the degree of inefficiency can be independent of the number of servers, instead depending only on the heterogeneity of the speeds of the servers. Further, switching the back-end scheduler to SRPT can provide significant improvements in the overall mean response time of the system as long as the heterogeneity of the server speeds is small.
  • Lachlan L. H. Andrew, Adam Wierman and Ao Tang
    Proceedings of ACM MAMA, 2009.
    [show/hide abstract]
    This paper investigates the performance of online dynamic speed scaling algorithms for the objective of minimizing a linear combination of energy and response time. We prove that (SRPT, $P^{-1}(n)$), which uses Shortest Remaining Processing Time (SRPT) scheduling and processes at speed such that the power used is equal to the queue length, is 2-competitive for a very wide class of power-speed tradeoff functions. Further, we prove that there exist tradeoff functions such that no algorithm can attain a competitive ratio less than 2.
  • Adam Wierman, Lachlan L. H. Andrew and Ao Tang
    Proceedings of IEEE Infocom, 2009.
    [show/hide abstract]
    Energy usage of computer communications systems has quickly become a vital design consideration. One effective method for reducing energy consumption is dynamic speed scaling, which adapts the processing speed to the current load. This paper studies how to optimally scale speed to balance mean response time and mean energy consumption under processor sharing scheduling. Both bounds and asymptotics for the optimal speed scaling scheme are provided. These results show that a simple scheme that halts when the system is idle and uses a static rate while the system is busy provides nearly the same performance as the optimal dynamic speed scaling. However, the results also highlight that dynamic speed scaling provides at least one key benefit -- significantly improved robustness to bursty traffic and mis-estimation of workload parameters.
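    The gated-static scheme mentioned above admits a textbook calculation: with Poisson(lambda) arrivals, unit-mean jobs, processor sharing, and a static speed s used only while busy, the mean response time is 1/(s - lambda) and the energy per job is P(s)/s, so the best static speed can be found numerically. The snippet below is a minimal sketch under those standard assumptions (P(s) = s^3, made-up lambda and weight), not the paper's derivation.

      # Minimal sketch: pick the gated-static speed s minimizing
      #   cost(s) = E[response time] + beta * E[energy per job]
      #           = 1/(s - lam)      + beta * P(s)/s
      # for an M/G/1-PS queue with unit-mean jobs and P(s) = s^3 (assumed parameters).
      import numpy as np

      lam = 0.8    # arrival rate (assumed)
      beta = 1.0   # weight on energy relative to delay (assumed)

      speeds = np.linspace(lam + 1e-3, 5.0, 100_000)
      costs = 1.0 / (speeds - lam) + beta * speeds ** 2    # P(s)/s = s^2 for cubic power
      best = speeds[np.argmin(costs)]
      print(f"best gated-static speed ~ {best:.3f}, cost ~ {costs.min():.3f}")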
  • Adam Wierman, Lachlan L. H. Andrew and Ao Tang
    Proceedings of Allerton, 2008.
    [show/hide abstract]
    Energy consumption in a computer system can be reduced by dynamic speed scaling, which adapts the processing speed to the current load. This paper studies the optimal way to adjust speed to balance mean response time and mean energy consumption, when jobs arrive as a Poisson process and processor sharing scheduling is used. Both bounds and asymptotics for the optimal speeds are provided. Interestingly, a simple scheme that halts when the system is idle and uses a static rate while the system is busy provides nearly the same performance as the optimal dynamic speed scaling. However, dynamic speed scaling which allocates a higher speed when more jobs are present significantly improves robustness to bursty traffic and mis-estimation of workload parameters.

Scheduling Theory

  • Ganesh Ananthanarayanan, Michael Chien-Chun Hung, Xiaoqi Ren, Ion Stoica, Adam Wierman and Minlan Yu
    Proceedings of USENIX NSDI, 2014.
    [show/hide abstract]
    In big data analytics timely results, even if based on only part of the data, are often good enough. For this reason, approximation jobs, which have deadline or error bounds and require only a subset of their tasks to complete, are projected to dominate big data workloads. In this paper, we present GRASS, which carefully uses speculation to mitigate the impact of stragglers in approximation jobs. The design of GRASS is based on first-principles analysis of the impact of speculative copies. GRASS delicately balances immediacy of improving the approximation goal with the long-term implications of using extra resources for speculation. Evaluations with production workloads from Facebook and Microsoft Bing in an EC2 cluster of 200 nodes show that GRASS increases the accuracy of deadline-bound jobs by 47% and speeds up error-bound jobs by 38%. GRASS's design also speeds up exact computations, making it a unified solution for straggler mitigation.
  • Minghong Lin, Jian Tan, Adam Wierman and Li Zhang
    Performance Evaluation, 2013. 70(10):720-735. This work also appeared in the Proceedings of IFIP Performance, 2013, where it received the ``Best Student Paper'' award. It was one of the ten most downloaded papers of Performance Evaluation during Fall and Winter 2013.
    [show/hide abstract]
    MapReduce is a scalable parallel computing framework for big data processing. It exhibits multiple processing phases, and thus an efficient job scheduling mechanism is crucial for ensuring efficient resource utilization. This paper studies the scheduling challenge that results from the overlapping of the ``map'' and ``shuffle'' phases in MapReduce. We propose a new, general model for this scheduling problem, and validate this model using cluster experiments. Further, we prove that scheduling to minimize average response time in this model is strongly NP-hard in the offline case and that no online algorithm can be constant-competitive. However, we provide two online algorithms that match the performance of the offline optimal when given a slightly faster service rate (i.e., in the resource augmentation framework). Finally, we validate the algorithms using a workload trace from a Google cluster and show that the algorithms are near optimal in practical settings.
  • Data center demand response: Avoiding the coincident peak via workload shifting and local generation
    Zhenhua Liu, Adam Wierman, Yuan Chen, Benjamin Razon and Niangjun Chen
    Proceedings of ACM Sigmetrics, 2013. Accepted as a poster. The full version of this work appeared in Performance Evaluation, 2013.
    [show/hide abstract]
    Demand response is a crucial aspect of the future smart grid. It has the potential to provide significant peak demand reduction and to ease the incorporation of renewable energy into the grid. Data centers' participation in demand response is becoming increasingly important given their high and increasing energy consumption and their flexibility in demand management compared to conventional industrial facilities. In this paper, we study two demand response schemes to reduce a data center's peak loads and energy expenditure: workload shifting and the use of local power generation. We conduct a detailed characterization study of coincident peak data over two decades from Fort Collins Utilities, Colorado, and then develop two optimization-based algorithms that combine workload scheduling and local power generation to avoid the coincident peak and reduce energy expenditure. The first algorithm optimizes the expected cost and the second provides the optimal worst-case guarantee. We evaluate these algorithms via trace-based simulations. The results show that using workload shifting in combination with local generation can provide significant cost savings compared to either alone.

Smart Grid

  • Adam Wierman, Zhenhua Liu, Iris Liu and Hamed Mohsenian-Rad
    Proceedings of IEEE IGCC, 2014.
    [show/hide abstract]
    This paper surveys the opportunities and challenges in an emerging area of research that has the potential to significantly ease the incorporation of renewable energy into the grid as well as electric power peak-load shaving: data center demand response. Data center demand response sits at the intersection of two growing fields: energy efficient data centers and demand response in the smart grid. As such, the literature related to data center demand response is sprinkled across multiple areas and worked on by diverse groups. Our goal in this survey is to demonstrate the potential of the field while also summarizing the progress that has been made and the challenges that remain.
  • Subhonmesh Bose, Desmond Cai, Steven Low and Adam Wierman
    Proceedings of IEEE CDC, 2014.
    [show/hide abstract]
    We study the role of a market maker (or system operator) in a transmission constrained electricity market. We model the market as a one-shot networked Cournot competition where generators supply quantity bids and load serving entities provide downward sloping inverse demand functions. This mimics the operation of a spot market in a deregulated market structure. In this paper, we focus on possible mechanisms employed by the market maker to balance demand and supply. In particular, we consider three candidate objective functions that the market maker optimizes -- social welfare, residual social welfare, and consumer surplus. We characterize the existence of Generalized Nash Equilibrium (GNE) in this setting and demonstrate that market outcomes at equilibrium can be very different under the candidate objective functions.
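    For readers unfamiliar with the Cournot side of this model, the sketch below computes a standard single-node Cournot equilibrium (linear inverse demand, no network, made-up parameters) by best-response iteration. The networked, transmission-constrained setting with a market maker studied in the paper is substantially richer; this is only a reference point.

      # Single-node Cournot sketch (not the networked model of the paper):
      # two generators choose quantities q1, q2; the price is p(Q) = a - b*Q and
      # generator i with marginal cost c_i earns (p(Q) - c_i) * q_i.
      a, b = 100.0, 1.0      # made-up inverse demand p(Q) = a - b*Q
      costs = [20.0, 30.0]   # made-up marginal costs

      def best_response(c_i, q_other):
          # argmax_q (a - b*(q + q_other) - c_i) * q  =>  q = (a - c_i - b*q_other) / (2b)
          return max(0.0, (a - c_i - b * q_other) / (2.0 * b))

      q = [0.0, 0.0]
      for _ in range(200):   # iterate best responses to (numerically) reach the equilibrium
          q = [best_response(costs[0], q[1]), best_response(costs[1], q[0])]

      price = a - b * sum(q)
      print("equilibrium quantities:", [round(x, 2) for x in q], "price:", round(price, 2))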
  • Optimal power procurement for data centers in day-ahead and real-time electricity markets
    Mahdi Ghamkhari, Hamed Mohsenian-Rad and Adam Wierman
    Proceedings of Infocom Workshop on Smart Data Pricing, 2014.
    [show/hide abstract]
    With the growing amount of power consumed by data centers, finding ways to cut electricity bills has become an important and challenging problem. In this paper, we seek to understand the cost reductions that data centers may achieve by exploiting the diversity in the price of electricity in the day-ahead and real-time electricity markets. Based on a stochastic optimization framework, we propose to jointly select the data centers' service rates and their demand bids to the day-ahead and real-time electricity markets. In our analysis, we take into account service-level agreements, risk management constraints, and also the statistical characteristics of the workload and the electricity prices. Using empirical electricity price and Internet workload data, our numerical studies show that directly participating in the day-ahead and real-time electricity markets can significantly help data centers reduce their energy expenditure.
  • Zhenhua Liu, Iris Liu, Steven Low and Adam Wierman
    Proceedings of ACM Sigmetrics, 2014.
    [show/hide abstract]
    Demand response is a crucial tool for the incorporation of renewable energy into the grid. In this paper, we focus on a particularly promising industry for demand response: data centers. We use simulations to show that, not only are data centers large loads, but they can provide as much flexibility as large-scale storage (or possibly more), if given the proper incentives. However, due to the market power most data centers maintain, it is difficult to design programs that are efficient for data center demand response. To that end, we propose that prediction-based pricing is an appealing market design, and show that it outperforms more traditional supply-function bidding mechanisms in situations where market power is an issue. However, prediction-based pricing may be inefficient when predictions are not accurate, and so we provide analytic, worst-case bounds on the impact of prediction accuracy on the efficiency of prediction-based pricing. These bounds hold even when network constraints are considered, and highlight that prediction-based pricing is surprisingly robust to prediction error.
  • Jayakrishnan Nair, Sachin Adlakha and Adam Wierman
    Proceedings of ACM Sigmetrics, 2014.
    [show/hide abstract]
    The increasing penetration of intermittent, unpredictable renewable energy sources, such as wind energy, poses significant challenges for utility companies trying to incorporate renewable energy in their portfolio. In this talk, we discuss inventory management issues that arise in the presence of intermittent renewable resources. We model the problem as a three-stage newsvendor problem with uncertain supply and model the estimates of wind using a martingale model of forecast evolution. We describe the optimal procurement strategy and use it to study the impact of proposed market changes and of increased renewable penetration. A key insight from our results is to show a separation between the impact of the structure of electricity markets and the impact of increased penetration. In particular, the effect of market structure on the optimal procurement policy is independent of the level of wind penetration. Additionally, we study two proposed changes to the market structure: the addition and the placement of an intermediate market. Importantly, we show that the addition of an intermediate market does not necessarily reduce the total amount of energy procured by the utility company.
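    The single-stage version of the procurement problem is a textbook newsvendor: if net demand (load minus wind) has distribution F, shortages cost c_u per unit and surpluses cost c_o per unit, the optimal forward purchase is the critical fractile F^{-1}(c_u / (c_u + c_o)). The snippet below checks that against a brute-force search on simulated wind (all numbers made up); the paper's three-stage, martingale-forecast model is considerably more involved.

      # Single-stage newsvendor sketch for forward energy procurement under wind
      # uncertainty (made-up numbers; the paper studies a richer three-stage model).
      import numpy as np

      rng = np.random.default_rng(0)
      load = 100.0
      wind = rng.normal(30.0, 10.0, size=50_000)   # hypothetical wind output (MWh)
      net_demand = load - wind                     # what the forward purchase must cover
      c_short = 80.0   # $/MWh for last-minute balancing energy (shortage)
      c_over = 20.0    # $/MWh lost on energy procured but not needed (overage)

      def expected_cost(q):
          shortage = np.maximum(net_demand - q, 0.0)
          overage = np.maximum(q - net_demand, 0.0)
          return (c_short * shortage + c_over * overage).mean()

      grid = np.linspace(net_demand.min(), net_demand.max(), 500)
      q_best = grid[np.argmin([expected_cost(q) for q in grid])]
      q_fractile = np.quantile(net_demand, c_short / (c_short + c_over))
      print(f"brute force: {q_best:.1f} MWh, critical fractile: {q_fractile:.1f} MWh")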
  • Unifying market power measure for deregulated transmission-constrained electricity markets
    Subhonmesh Bose, Chenye Wu, Yunjian Xu, Adam Wierman and Hamed Mohsenian-Rad
    IEEE Transactions on Power Systems, 2014. To appear.
    [show/hide abstract]
    Market power assessment is a prime concern when designing a deregulated electricity market. In this paper, we propose a new functional market power measure, termed transmission constrained network flow (TCNF), that takes into account an AC model of the network. The measure unifies three large classes of long-term transmission constrained market power indices in the literature: residual supply based, network flow based, and minimal generation based. Furthermore, it builds upon recent advances in semidefinite relaxations of AC power flow equations to model the underlying power network. Previously, market power measures that took the network into account did so via DC approximations of power flow models. Our results highlight that using the more accurate AC model can yield fundamentally different conclusions both about whether market power exists and about which generators can exploit market power.
  • Data center demand response: Avoiding the coincident peak via workload shifting and local generation
    Zhenhua Liu, Adam Wierman, Yuan Chen, Benjamin Razon and Niangjun Chen
    Proceedings of ACM Sigmetrics, 2013. Accepted as a poster. The full version of this work appeared in Performance Evaluation, 2013.
    [show/hide abstract]
    Demand response is a crucial aspect of the future smart grid. It has the potential to provide significant peak demand reduction and to ease the incorporation of renewable energy into the grid. Data centers' participation in demand response is becoming increasingly important given their high and increasing energy consumption and their flexibility in demand management compared to conventional industrial facilities. In this paper, we study two demand response schemes to reduce a data center's peak loads and energy expenditure: workload shifting and the use of local power generation. We conduct a detailed characterization study of coincident peak data over two decades from Fort Collins Utilities, Colorado, and then develop two optimization-based algorithms that combine workload scheduling and local power generation to avoid the coincident peak and reduce energy expenditure. The first algorithm optimizes the expected cost and the second provides the optimal worst-case guarantee. We evaluate these algorithms via trace-based simulations. The results show that using workload shifting in combination with local generation can provide significant cost savings compared to either alone.
  • Desmond Cai and Adam Wierman
    Proceedings of IEEE CDC, 2013.
    [show/hide abstract]
    The growth of renewable resources will introduce significant variability and uncertainty into the grid. It is likely that ``peaker'' plants will be a crucial dispatchable resource for compensating for the variations in renewable supply. Thus, it is important to understand the strategic incentives of peaker plants and their potential for exploiting market power due to having responsive supply. To this end, we study an oligopolistic two-settlement market comprising two types of generation (baseload and peaker plants) under perfect foresight. We characterize symmetric equilibria in this context via closed-form expressions. However, we also show that, when the system is capacity-constrained, there may not exist equilibria in which baseloads and peakers play symmetric strategies. This happens because of opportunities for both types of generation to exploit market power to increase prices.
  • Chenye Wu, Subhonmesh Bose, Adam Wierman and Hamed Mohsenian-Rad
    Performance Evaluation, 2013. 70(10): ``Best Paper on System Operations and Market Economics'' award recipient.
    [show/hide abstract]
    A competitive deregulated electricity market with increasingly active market players is foreseen to be the future of the electricity industry. In such settings, market power assessment is a primary concern. In this paper, we propose a novel functional approach for measuring long term market power that unifies a variety of popular market power indices. Specifically, the new measure, termed transmission constrained network flow (TCNF), unifies three large classes of market power measures: residual supply based, network flow based, and minimal generation based. Further, TCNF provides valuable information about market power not captured by prior indices. We derive its analytic properties and test its efficacy on IEEE test systems.
  • The need for new measures to assess market power in deregulated electricity markets
    Subhonmesh Bose, Chenye Wu, Adam Wierman and Hamed Mohsenian-Rad
    IEEE Smart Grid Newsletter, 2013. The full version of this work appeared in Performance Evaluation, 2013.
  • Integrating distributed energy resource pricing and control
    Paul de Martini, Adam Wierman, Sean Meyn and Eilyan Bitar
    Proceedings of CIGRE USNC Grid of the Future Symposium, 2012.
    [show/hide abstract]
    As the market adoption of distributed energy resources (DER) reaches regional scale, it will create significant challenges in the management of the distribution system related to existing protection and control systems, which is likely to lead to power quality and reliability issues. In this paper, we describe a framework for the development of a class of pricing mechanisms that both induce deep customer participation and enable efficient management of their end-use devices to provide both distribution and transmission side support. The basic challenge resides in reliably extracting the desired response from customers on short time-scales. Thus, new pricing mechanisms are needed to create effective closed loop systems that are tightly coupled with distribution control systems to ensure reliability and power quality.

Speed Scaling

  • Adam Wierman, Lachlan L. H. Andrew and Minghong Lin
    Chapter in Handbook on Energy-Aware and Green Computing, 2012.
    [show/hide abstract]
    Speed scaling has long been used as a power-saving mechanism at a chip level. However, in recent years, speed scaling has begun to be used as an approach for trading off energy usage and performance throughout all levels of computer systems. This wide-spread use of speed scaling has motivated significant research on the topic, but many fundamental questions about speed scaling are only beginning to be understood. In this chapter, we focus on a simple, but general, model of speed scaling and provide an algorithmic perspective on four fundamental questions: (i) What is the structure of the optimal speed scaling algorithm? (ii) How does speed scaling interact with scheduling? (iii) What is the impact of the sophistication of speed scaling algorithms? and (iv) Does speed scaling have any unintended consequences? For each question we provide a summary of insights from recent work on the topic in both worst-case and stochastic analysis as well as a discussion of interesting open questions that remain.
  • Speed scaling for processor sharing systems: Optimality and robustness
    Lachlan L. H. Andrew, Adam Wierman and Ao Tang
    Performance Evaluation, 2012. To appear.
    [show/hide abstract]
    Adapting the speed of a processor is an effective method to reduce energy consumption. This paper studies the optimal way to scale speed to balance response time and energy consumption under processor sharing scheduling. It is shown that using a static rate while the system is busy provides nearly optimal performance, but having more available speeds increases robustness to different traffic loads. In particular, the dynamic speed scaling optimal for Poisson arrivals is also constant-competitive in the worst case. The scheme which equates power consumption with queue occupancy is shown to be 10-competitive when power is cubic in speed.
  • Lachlan L. H. Andrew, Minghong Lin and Adam Wierman
    Proceedings of ACM Sigmetrics, 2010.
    [show/hide abstract]
    System design must strike a balance between energy and performance by carefully selecting the speed at which the system will run. In this work, we examine fundamental tradeoffs incurred when designing a speed scaler to minimize a weighted sum of expected response time and energy use per job. We prove that a popular dynamic speed scaling algorithm is 2-competitive for this objective and that no ``natural'' speed scaler can improve on this. Further, we prove that energy-proportional speed scaling works well across two common scheduling policies: Shortest Remaining Processing Time (SRPT) and Processor Sharing (PS). Third, we show that under SRPT and PS, gated-static speed scaling is nearly optimal when the mean workload is known, but that dynamic speed scaling provides robustness against uncertain workloads. Finally, we prove that speed scaling magnifies unfairness, notably SRPT's bias against large jobs and the bias against short jobs in non-preemptive policies. However, PS remains fair under speed scaling. Together, these results show that the speed scalers studied here can achieve any two, but only two, of optimality, fairness, and robustness.
  • Lachlan L. H. Andrew, Minghong Lin, Ao Tang and Adam Wierman
    IEEE COMSOC Multimedia Communications Technical Committee, 2010.
  • Wei Chen, Dayu Huang, Ankur Kulkarni, Jayakrishnan Unnikrishnan, Quanyan Zhu, Prashant Mehta, Sean Meyn and Adam Wierman
    Proceedings of IEEE CDC, 2009.
    [show/hide abstract]
    TD learning and its refinements are powerful tools for approximating the solution to dynamic programming problems. However, the techniques provide the approximate solution only within a prescribed finite-dimensional function class. Thus, the question that always arises is: how should the function class be chosen? The goal of this paper is to propose an approach for TD learning based on choosing the function class using the solutions to associated fluid and diffusion approximations. In order to illustrate this new approach, the paper focuses on an application to dynamic speed scaling for power management.
  • Lachlan L. H. Andrew, Adam Wierman and Ao Tang
    Proceedings of ACM MAMA, 2009.
    [show/hide abstract]
    This paper investigates the performance of online dynamic speed scaling algorithms for the objective of minimizing a linear combination of energy and response time. We prove that (SRPT, $P^{-1}(n)$), which uses Shortest Remaining Processing Time (SRPT) scheduling and processes at speed such that the power used is equal to the queue length, is 2-competitive for a very wide class of power-speed tradeoff functions. Further, we prove that there exist tradeoff functions such that no algorithm can attain a competitive ratio less than 2.
  • Adam Wierman, Lachlan L. H. Andrew and Ao Tang
    Proceedings of IEEE Infocom, 2009.
    [show/hide abstract]
    Energy usage of computer communications systems has quickly become a vital design consideration. One effective method for reducing energy consumption is dynamic speed scaling, which adapts the processing speed to the current load. This paper studies how to optimally scale speed to balance mean response time and mean energy consumption under processor sharing scheduling. Both bounds and asymptotics for the optimal speed scaling scheme are provided. These results show that a simple scheme that halts when the system is idle and uses a static rate while the system is busy provides nearly the same performance as the optimal dynamic speed scaling. However, the results also highlight that dynamic speed scaling provides at least one key benefit -- significantly improved robustness to bursty traffic and mis-estimation of workload parameters.
  • Adam Wierman, Lachlan L. H. Andrew and Ao Tang
    Proceedings of Allerton, 2008.
    [show/hide abstract]
    Energy consumption in a computer system can be reduced by dynamic speed scaling, which adapts the processing speed to the current load. This paper studies the optimal way to adjust speed to balance mean response time and mean energy consumption, when jobs arrive as a Poisson process and processor sharing scheduling is used. Both bounds and asymptotics for the optimal speeds are provided. Interestingly, a simple scheme that halts when the system is idle and uses a static rate while the system is busy provides nearly the same performance as the optimal dynamic speed scaling. However, dynamic speed scaling which allocates a higher speed when more jobs are present significantly improves robustness to bursty traffic and mis-estimation of workload parameters.

Sustainable IT

  • Online Convex Optimization Using Predictions
    Niangjun Chen, Anish Agarwal, Adam Wierman, Siddharth Barman and Lachlan L. H. Andrew
    Proceedings of ACM Sigmetrics, 2015.
    [show/hide abstract]
    Making use of predictions is a crucial, but under-explored, area of online algorithms. This paper studies a class of online optimization problems where we have external noisy predictions available. We propose a stochastic prediction error model that generalizes prior models in the learning and stochastic control communities, incorporates correlation among prediction errors, and captures the fact that predictions improve as time passes. We prove that achieving sublinear regret and constant competitive ratio for online algorithms requires the use of an unbounded prediction window in adversarial settings, but that under more realistic stochastic prediction error models it is possible to use Averaging Fixed Horizon Control (AFHC) to simultaneously achieve sublinear regret and constant competitive ratio using only a constant-sized prediction window. Furthermore, we show that typical performance of AFHC is tightly concentrated around its mean.
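    The averaging idea behind AFHC is easy to sketch: run several fixed-horizon controllers whose replanning times are staggered, and play the average of their actions. The snippet below does this for an assumed one-dimensional tracking problem with quadratic hitting and switching costs and synthetic noisy forecasts; it illustrates the averaging construction only, not the paper's general setting or prediction error model.

      # Minimal AFHC sketch on an assumed 1-d tracking problem: hitting cost
      # (x_t - y_t)^2, switching cost beta*(x_t - x_{t-1})^2, synthetic noisy forecasts.
      import numpy as np

      rng = np.random.default_rng(1)
      T, W, beta = 60, 5, 2.0                  # horizon, prediction window, switching weight
      y = 10.0 * np.sin(np.arange(T) / 6.0)    # true targets (made up)

      def solve_window(x_prev, y_pred):
          # Minimize sum_i (x_i - y_i)^2 + beta*sum_i (x_i - x_{i-1})^2 with x_0 = x_prev;
          # the optimality conditions form a small tridiagonal linear system.
          n = len(y_pred)
          A = np.zeros((n, n))
          b = np.array(y_pred, dtype=float)
          for i in range(n):
              A[i, i] = 1.0 + (2.0 * beta if i < n - 1 else beta)
              if i > 0:
                  A[i, i - 1] = A[i - 1, i] = -beta
          b[0] += beta * x_prev
          return np.linalg.solve(A, b)

      def predictions(t, horizon):
          # Hypothetical noisy forecasts of y[t:t+horizon].
          return y[t:t + horizon] + rng.normal(0.0, 1.0, size=horizon)

      fhc_actions = np.zeros((W, T))
      for k in range(W):                       # W fixed-horizon controllers, staggered starts
          x_prev, plan, plan_start = 0.0, None, 0
          for t in range(T):
              if t % W == k:                   # this copy replans every W steps
                  plan = solve_window(x_prev, predictions(t, min(W, T - t)))
                  plan_start = t
              action = x_prev if plan is None else plan[t - plan_start]
              fhc_actions[k, t] = action
              x_prev = action

      afhc = fhc_actions.mean(axis=0)          # AFHC plays the average of the staggered plans
      hit = np.sum((afhc - y) ** 2)
      switch = beta * np.sum(np.diff(np.concatenate([[0.0], afhc])) ** 2)
      print("AFHC total cost:", round(hit + switch, 1))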
  • Characterizing the impact of the workload on the value of dynamic resizing in data centers
    Kai Wang, Minghong Lin, Florin Ciucu, Adam Wierman and Chuang Lin
    Performance Evaluation, 2015. This is an extension of a paper that appeared in IEEE Infocom, 2013.
    [show/hide abstract]
    Energy consumption imposes a significant cost for data centers; yet much of that energy is used to maintain excess service capacity during periods of predictably low load. As a result, there has recently been interest in developing designs that allow the service capacity to be dynamically resized to match the current workload. However, there is still much debate about the value of such approaches in real settings. In this paper, we show that the value of dynamic resizing is highly dependent on statistics of the workload process. In particular, both slow time-scale non-stationarities of the workload (e.g., the peak-to-mean ratio) and the fast time-scale stochasticity (e.g., the burstiness of arrivals) play key roles. To illustrate the impact of these factors, we combine optimization-based modeling of the slow time-scale with stochastic modeling of the fast time scale. Within this framework, we provide both analytic and numerical results characterizing when dynamic resizing does (and does not) provide benefits.
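    The two workload statistics emphasized above are easy to compute from a trace: the peak-to-mean ratio of the slowly varying load, and an index-of-dispersion-style measure of fast time-scale burstiness. The snippet below computes both on synthetic traces (made-up rates) purely to make the quantities concrete; where the break-even points for resizing lie is the subject of the paper, not this sketch.

      # Make the two workload statistics above concrete on synthetic traces:
      # peak-to-mean ratio of a diurnal load, and an index of dispersion for
      # short time-scale burstiness (=1 for Poisson arrivals, >1 if bursty).
      import numpy as np

      rng = np.random.default_rng(2)

      hours = np.arange(24 * 7)
      diurnal = 100.0 * (1.0 + 0.6 * np.sin(2 * np.pi * hours / 24.0))   # made-up hourly load
      print("peak-to-mean ratio of the mean load:", round(diurnal.max() / diurnal.mean(), 2))

      poisson_counts = rng.poisson(10.0, size=100_000)                        # ~10 jobs/sec
      bursty_counts = rng.negative_binomial(n=2, p=2.0 / 12.0, size=100_000)  # same mean, overdispersed
      for name, counts in [("poisson", poisson_counts), ("bursty", bursty_counts)]:
          print(f"{name:8s} index of dispersion = {counts.var() / counts.mean():.2f}")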
  • Ganesh Ananthanarayanan, Michael Chien-Chun Hung, Xiaoqi Ren, Ion Stoica, Adam Wierman and Minlan Yu
    Proceedings of USENIX NSDI, 2014.
    [show/hide abstract]
    In big data analytics timely results, even if based on only part of the data, are often good enough. For this reason, approximation jobs, which have deadline or error bounds and require only a subset of their tasks to complete, are projected to dominate big data workloads. In this paper, we present GRASS, which carefully uses speculation to mitigate the impact of stragglers in approximation jobs. The design of GRASS is based on first-principles analysis of the impact of speculative copies. GRASS delicately balances immediacy of improving the approximation goal with the long-term implications of using extra resources for speculation. Evaluations with production workloads from Facebook and Microsoft Bing in an EC2 cluster of 200 nodes show that GRASS increases the accuracy of deadline-bound jobs by 47% and speeds up error-bound jobs by 38%. GRASS's design also speeds up exact computations, making it a unified solution for straggler mitigation.
  • Zhenhua Liu, Minghong Lin, Adam Wierman, Steven Low and Lachlan L. H. Andrew
    IEEE Transactions on Networking, 2014. Extension of a paper that appeared in ACM Sigmetrics, 2011.
    [show/hide abstract]
    Energy expenditure has become a significant fraction of data center operating costs. Recently, `geographical load balancing' has been suggested as an approach for taking advantage of the geographical diversity of Internet-scale distributed systems in order to reduce energy expenditures by exploiting the electricity price differences across regions. However, the fact that such designs reduce energy costs does not imply that they reduce energy usage. In fact, such designs often increase energy usage.

    This paper explores whether the geographical diversity of Internet-scale systems can be used to provide environmental gains in addition to reducing data center costs. Specifically, we explore whether geographical load balancing can encourage usage of `green' energy from renewable sources and reduce usage of `brown' energy from fossil fuels. We make two contributions. First, we derive three algorithms, with varying degrees of distributed computation, for achieving optimal geographical load balancing. Second, using these algorithms, we show that if dynamic pricing of electricity is done in proportion to the fraction of the total energy that is brown at each time, then geographical load balancing provides significant reductions in brown energy usage. However, the benefits depend strongly on the degree to which systems accept dynamic energy pricing and the form of pricing used.
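    Stripped to its simplest cost-only form, geographical load balancing is a small allocation problem: split total demand across data centers to minimize price-weighted energy subject to capacity. The sketch below solves that simplified version with made-up prices and capacities; the formulation in the paper additionally models delay, the brown/green energy mix, and distributed solution algorithms.

      # Simplified geographical load balancing sketch: route total demand D across
      # data centers to minimize sum_i price_i * load_i subject to capacity limits.
      # (Made-up numbers; the paper's model also captures delay and the energy mix.)
      import numpy as np
      from scipy.optimize import linprog

      price = np.array([42.0, 35.0, 50.0, 38.0])    # $/MWh at each site (made up)
      capacity = np.array([10.0, 6.0, 12.0, 8.0])   # MW capacity per site (made up)
      D = 20.0                                      # total demand to serve

      res = linprog(c=price,
                    A_eq=np.ones((1, len(price))), b_eq=[D],
                    bounds=[(0.0, cap) for cap in capacity])
      print("allocation (MW):", res.x.round(2), "cost ($/h):", round(res.fun, 1))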
  • Adam Wierman, Zhenhua Liu, Iris Liu and Hamed Mohsenian-Rad
    Proceedings of IEEE IGCC, 2014.
    [show/hide abstract]
    This paper surveys the opportunities and challenges in an emerging area of research that has the potential to significantly ease the incorporation of renewable energy into the grid as well as electric power peak-load shaving: data center demand response. Data center demand response sits at the intersection of two growing fields: energy efficient data centers and demand response in the smart grid. As such, the literature related to data center demand response is sprinkled across multiple areas and worked on by diverse groups. Our goal in this survey is to demonstrate the potential of the field while also summarizing the progress that has been made and the challenges that remain.
  • Optimal power procurement for data centers in day-ahead and real-time electricity markets
    Mahdi Ghamkhari, Hamed Mohsenian-Rad and Adam Wierman
    Proceedings of Infocom Workshop on Smart Data Pricing, 2014.
    [show/hide abstract]
    With the growing amount of power consumed by data centers, finding ways to cut electricity bills has become an important and challenging problem. In this paper, we seek to understand the cost reductions that data centers may achieve by exploiting the diversity in the price of electricity in the day-ahead and real-time electricity markets. Based on a stochastic optimization framework, we propose to jointly select the data centers' service rates and their demand bids to the day-ahead and real-time electricity markets. In our analysis, we take into account service-level agreements, risk management constraints, and also the statistical characteristics of the workload and the electricity prices. Using empirical electricity price and Internet workload data, our numerical studies show that directly participating in the day-ahead and real-time electricity markets can significantly help data centers reduce their energy expenditure.
  • Zhenhua Liu, Iris Liu, Steven Low and Adam Wierman
    Proceedings of ACM Sigmetrics, 2014.
    [show/hide abstract]
    Demand response is a crucial tool for the incorporation of renewable energy into the grid. In this paper, we focus on a particularly promising industry for demand response: data centers. We use simulations to show that, not only are data centers large loads, but they can provide as much flexibility as large-scale storage (or possibly more), if given the proper incentives. However, due to the market power most data centers maintain, it is difficult to design programs that are efficient for data center demand response. To that end, we propose that prediction-based pricing is an appealing market design, and show that it outperforms more traditional supply-function bidding mechanisms in situations where market power is an issue. However, prediction-based pricing may be inefficient when predictions are not accurate, and so we provide analytic, worst-case bounds on the impact of prediction accuracy on the efficiency of prediction-based pricing. These bounds hold even when network constraints are considered, and highlight that prediction-based pricing is surprisingly robust to prediction error.
  • Jayakrishnan Nair, Sachin Adlakha and Adam Wierman
    Proceedings of ACM Sigmetrics, 2014.
    [show/hide abstract]
    The increasing penetration of intermittent, unpredictable renewable energy sources, such as wind energy, poses significant challenges for utility companies trying to incorporate renewable energy in their portfolio. In this talk, we discuss inventory management issues that arise in the presence of intermittent renewable resources. We model the problem as a three-stage newsvendor problem with uncertain supply and model the estimates of wind using a martingale model of forecast evolution. We describe the optimal procurement strategy and use it to study the impact of proposed market changes and of increased renewable penetration. A key insight from our results is to show a separation between the impact of the structure of electricity markets and the impact of increased penetration. In particular, the effect of market structure on the optimal procurement policy is independent of the level of wind penetration. Additionally, we study two proposed changes to the market structure: the addition and the placement of an intermediate market. Importantly, we show that the addition of an intermediate market does not necessarily reduce the total amount of energy procured by the utility company.
  • Minghong Lin, Jian Tan, Adam Wierman and Li Zhang
    Performance Evaluation, 2013. 70(10):720-735. This work also appeared in the Proceedings of IFIP Performance, 2013, where it received the ``Best Student Paper'' award. It was one of the ten most downloaded papers of Performance Evaluation during Fall and Winter 2013.
    [show/hide abstract]
    MapReduce is a scalable parallel computing framework for big data processing. It exhibits multiple processing phases, and thus an efficient job scheduling mechanism is crucial for ensuring efficient resource utilization. This paper studies the scheduling challenge that results from the overlapping of the ``map'' and ``shuffle'' phases in MapReduce. We propose a new, general model for this scheduling problem, and validate this model using cluster experiments. Further, we prove that scheduling to minimize average response time in this model is strongly NP-hard in the offline case and that no online algorithm can be constant-competitive. However, we provide two online algorithms that match the performance of the offline optimal when given a slightly faster service rate (i.e., in the resource augmentation framework). Finally, we validate the algorithms using a workload trace from a Google cluster and show that the algorithms are near optimal in practical settings.
  • Data center demand response: Avoiding the coincident peak via workload shifting and local generation
    Zhenhua Liu, Adam Wierman, Yuan Chen, Benjamin Razon and Niangjun Chen
    Proceedings of ACM Sigmetrics, 2013. Accepted as a poster. The full version of this work appeared in Performance Evaluation, 2013.
    [show/hide abstract]
    Demand response is a crucial aspect of the future smart grid. It has the potential to provide significant peak demand reduction and to ease the incorporation of renewable energy into the grid. Data centers' participation in demand response is becoming increasingly important given their high and increasing energy consumption and their flexibility in demand management compared to conventional industrial facilities. In this paper, we study two demand response schemes to reduce a data center's peak loads and energy expenditure: workload shifting and the use of local power generation. We conduct a detailed characterization study of coincident peak data over two decades from Fort Collins Utilities, Colorado, and then develop two optimization-based algorithms that combine workload scheduling and local power generation to avoid the coincident peak and reduce energy expenditure. The first algorithm optimizes the expected cost and the second provides the optimal worst-case guarantee. We evaluate these algorithms via trace-based simulations. The results show that using workload shifting in combination with local generation can provide significant cost savings compared to either alone.
  • Minghong Lin, Adam Wierman, Lachlan L. H. Andrew and Eno Thereska
    IEEE Transactions on Networking, 2013. 21:1378-1391. Received the 2014 IEEE Communication Society William R. Bennett Prize. Extended version of a paper that appeared in IEEE Infocom, 2011.
    [show/hide abstract]
    Power consumption imposes a significant cost for data centers implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of predictably low load. This paper investigates how much can be saved by dynamically `right-sizing' the data center by turning off servers during such periods, and how to achieve that saving via an online algorithm. We prove that the optimal offline algorithm for dynamic right-sizing has a simple structure when viewed in reverse time, and this structure is exploited to develop a new `lazy' online algorithm, which is proven to be 3-competitive. We validate the algorithm using traces from two real data center workloads and show that significant cost-savings are possible.
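    The offline problem behind this result can be written as choosing the number of active servers in each time slot to balance operating cost against a cost for switching servers on. The snippet below solves a small discretized instance by dynamic programming with made-up costs and loads; it illustrates the optimization being approximated, not the paper's 'lazy' online algorithm.

      # Offline right-sizing sketch: choose active servers x_t in {0..X} to minimize
      #   sum_t [ operating_cost(x_t, load_t) + beta * max(x_t - x_{t-1}, 0) ]
      # by dynamic programming (made-up costs/loads; not the paper's online algorithm).
      import numpy as np

      load = np.array([3.0, 5.0, 9.0, 14.0, 12.0, 6.0, 4.0, 2.0])   # jobs/slot, made up
      X, beta = 20, 6.0                 # max servers; cost to switch a server on
      energy_per_server, delay_weight = 1.0, 2.0

      def op_cost(x, lam):
          # Each server is assumed to serve one job per slot; M/M/1-style delay proxy.
          if lam > 0 and x <= lam:
              return np.inf             # overloaded: infeasible
          delay = 0.0 if lam == 0 else lam / (x - lam)
          return energy_per_server * x + delay_weight * delay

      T = len(load)
      cost = np.full((T, X + 1), np.inf)            # best cost ending slot t with x servers
      prev = np.zeros((T, X + 1), dtype=int)
      for x in range(X + 1):
          cost[0, x] = op_cost(x, load[0]) + beta * x       # start from zero servers
      for t in range(1, T):
          for x in range(X + 1):
              oc = op_cost(x, load[t])
              for x0 in range(X + 1):
                  c = cost[t - 1, x0] + oc + beta * max(x - x0, 0)
                  if c < cost[t, x]:
                      cost[t, x], prev[t, x] = c, x0

      xs = [int(np.argmin(cost[T - 1]))]            # recover the optimal trajectory
      for t in range(T - 1, 0, -1):
          xs.append(int(prev[t, xs[-1]]))
      xs.reverse()
      print("optimal servers per slot:", xs, "total cost:", round(cost[T - 1].min(), 1))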
  • Integrating distributed energy resource pricing and control
    Paul de Martini, Adam Wierman, Sean Meyn and Eilyan Bitar
    Proceedings of CIGRE USNC Grid of the Future Symposium, 2012.
    [show/hide abstract]
    As the market adoption of distributed energy resources (DER) reaches regional scale, it will create significant challenges in the management of the distribution system related to existing protection and control systems, which is likely to lead to power quality and reliability issues. In this paper, we describe a framework for the development of a class of pricing mechanisms that both induce deep customer participation and enable efficient management of their end-use devices to provide both distribution and transmission side support. The basic challenge resides in reliably extracting the desired response from customers on short time-scales. Thus, new pricing mechanisms are needed to create effective closed loop systems that are tightly coupled with distribution control systems to ensure reliability and power quality.
  • Kai Wang, Minghong Lin, Florin Ciucu, Adam Wierman and Chuang Lin
    Proceedings of ACM Sigmetrics, 2012. Accepted as a poster; Sigmetrics was held jointly with IFIP Performance. An extended version of this work appeared in IEEE Infocom, 2013.
    [show/hide abstract]
    Energy consumption imposes a significant cost for data centers; yet much of that energy is used to maintain excess service capacity during periods of predictably low load. As a result, there has recently been interest in developing designs that allow the service capacity to be dynamically resized to match the current workload. However, there is still much debate about the value of such approaches in real settings. In this paper, we show that the value of dynamic resizing is highly dependent on statistics of the workload process. In particular, both slow time-scale non-stationarities of the workload (e.g., the peak-to-mean ratio) and the fast time-scale stochasticity (e.g., the burstiness of arrivals) play key roles. To illustrate the impact of these factors, we combine optimization-based modeling of the slow time-scale with stochastic modeling of the fast time scale. Within this framework, we provide both analytic and numerical results characterizing when dynamic resizing does (and does not) provide benefits.