XRDS: Crossroads, The ACM Magazine for Students

Flexible computing for intermittent energy

By Jennifer Switzer


Tags: Distributed computing methodologies, Renewable energy


Coping with the intermittency of renewables is a fundamental challenge, with load shifting and grid-scale storage as key responses. In our recent work, we argue that computing, which is inherently flexible in both time and space, can play an important role in this mitigation.

Renewables are no longer limited by their price: In 2018 it became more cost-effective to build new solar capacity than to continue operating existing coal-fired power plants. Wind is similarly inexpensive, and yet three years later non-hydro renewables make up just 11% of total electricity generation in the U.S. [1].

Lack of infrastructure and political opposition also present their own barriers, but renewables suffer from an even more fundamental issue: intermittency. Solar and wind availability fluctuate widely on a daily and seasonal basis, and this fluctuation is oftentimes not in phase with energy demands (see Figure 1).

The resulting problem is two-fold. During periods of underproduction, there isn't enough renewable energy to meet demands, and consumers have no choice but to rely on non-renewable sources. Then, during periods of overproduction, too much energy is available, to the point that the excess must be discarded (known as curtailed power) or sold at a negative price point.

Curtailed and negatively-priced power are together referred to as "opportunity power" and represent renewable energy that has the potential to be useful but is instead thrown away. The problem is significant, with an estimated 7.5–20 TWh/year worth of opportunity power available across California and the Midwest [1, 2]. At its current compound annual growth rate of 40%, in 2025 California could generate 22 TWh of opportunity power, enough to power the city of Los Angeles for a year.

Battery storage could mitigate this, but it is currently far too expensive. At $209/kWh (excluding installation costs), adding just one hour's worth of battery storage capacity to the Midcontinent Independent System Operator (MISO), the transmission system that services much of the midwestern and southern United States, would cost between $140 and $260 million. Furthermore, Chien et al. found that adding more storage yields diminishing returns: adding 50 hours of storage (the amount needed to significantly reduce opportunity power) in wind-dominated MISO would cost $50–400 million per wind generation site, a price tag on par with the cost of the turbines themselves [1].

An alternative approach is possible: Rather than shifting the renewable supply to meet demands, we can instead modulate our energy use to meet the renewable supply. This is accomplished via so-called flexible loads, which scale their energy demand up and down as needed. The ideal flexible load is high energy, latency insensitive, and available often. It mitigates opportunity power by matching its demand to the availability of renewables. Currently deployed products that meet this specification include smart washing machines, which can be loaded with clothing at any time but will wait until renewable energy is available before running.
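In code, a flexible load amounts to a job that polls an availability signal and defers execution until surplus renewable power appears. The sketch below is illustrative only: `power_signal` is a hypothetical stand-in for a real grid-price or availability feed, and the sleep function is injected so the loop can be simulated.

```python
def run_when_power_is_green(job, power_signal, sleep, poll_seconds=300):
    """Defer a latency-insensitive job until opportunity power is available.

    power_signal: hypothetical callable returning True when surplus renewable
                  power is available (a real system might poll a grid API).
    sleep: injected for testability; production code would pass time.sleep.
    """
    while not power_signal():
        sleep(poll_seconds)  # like the smart washer: loaded, but waiting
    return job()

# Simulated run: power becomes available on the third poll.
polls = iter([False, False, True])
result = run_when_power_is_green(
    job=lambda: "laundry done",
    power_signal=lambda: next(polls),
    sleep=lambda s: None,
)
```

The same pattern generalizes from washing machines to any latency-insensitive workload: the job is ready at any time, but only runs when the signal says renewables are in surplus.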

Cloud providers have begun to adopt this idea. At Google, long-running, time-insensitive jobs—like creating photo filters or adding words to Google Translate—are scheduled for times of day during which renewables are available [3]. While this works well for the internal scheduling of long-running, latency insensitive jobs, it is difficult to apply this framework to general cloud computing, where workload types vary and results are expected quickly.

One solution is to build a limited service that specializes in long-running, latency-insensitive tasks, which can easily be scheduled for whenever opportunity power is available.

Or we can try something else—redesigned computer systems built to work within the limitations of intermittency. We've proposed two different (but complementary) approaches to this, based on observations of the specific possibilities afforded by flexible load computing.

Observation 1: Unlike Other Flexible Loads, We Can Precompute Results for When They Are Needed

It's not possible to wash your clothes before they get dirty, but it is possible to compute a result before it is explicitly requested. If we can intelligently predict which computational tasks will be submitted, then we can precompute the results of these tasks whenever opportunity power is available, storing the results for when they are needed. We call this approach "information batteries" since we are essentially converting and storing energy as information—the results of completed computations.

For instance, imagine a cloud storage service such as Dropbox. Based on a given user's previous requests, the service provider knows there is a high chance the user will download a particular large folder. Before the user submits the download request, the provider preemptively queues the folder's compression for execution during a period when opportunity power is available.
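A minimal sketch of this workflow, with a toy predictor and zlib compression standing in for whatever work the provider actually performs (all function and variable names here are illustrative, not part of our system):

```python
import zlib

ib_cache = {}  # (task, argument) -> precomputed result

def predict_next_tasks(request_history):
    """Toy predictor: assume recently requested folders will be requested again."""
    return [("compress", name) for name in request_history[-3:]]

def precompute(tasks, folders):
    """Run predicted tasks now, while opportunity power is free or cheap."""
    for task, name in tasks:
        if task == "compress" and (task, name) not in ib_cache:
            ib_cache[(task, name)] = zlib.compress(folders[name])

def handle_download(name, folders):
    """At request time, a correct prediction costs only a cache fetch."""
    hit = ib_cache.get(("compress", name))
    return hit if hit is not None else zlib.compress(folders[name])
```

A wrong prediction wastes only opportunity power, which is exactly the point: the energy spent was free or negatively priced.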

An important observation is that, when implementing this framework, we do not always have to be right. Since opportunity power is free or negatively priced, there is relatively little risk in using it to optimistically pre-compute results. Therefore, the problem of prediction becomes a matter of maximizing accuracy rather than attempting to ensure that every prediction is sound.

In our recent paper [4], we provide a design and proof of concept implementation of an information batteries system. It consists of four central components: a renewable prediction engine, which predicts the availability of opportunity power; a task predictor, which predicts which computations have a high probability of being requested next based on a trace of previously completed tasks; a key-value store called the IB cache, where the results of precomputed functions are stored; and a modified compiler to enable the automatic storage and retrieval of these results.


Rather than shifting the renewable supply to meet demands, we can instead modulate our energy use to meet the renewable supply.


Jobs submitted to the information batteries system are instrumented with the custom compiler, which augments function calls to first check the IB cache for previously computed results. We populate the cache during periods of opportunity power when the task predictor generates a list of tasks that are likely to be requested soon. While opportunity power is still available, the results of these tasks are computed and cached. The outcome is that, for any correctly predicted task, we have shifted the computational load of the task to opportunity power, replacing it at runtime with a single fetch from the IB cache. Our experiments indicate this system has the potential to offload significant computational load to opportunity power, even when task prediction accuracy is only 50%.
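The compiler pass can be approximated in spirit by a memoizing wrapper that consults the cache before each call. In our system this check is inserted automatically at compile time; the hand-written Python decorator below merely conveys the runtime behavior, with a plain dict standing in for the IB key-value store:

```python
import functools
import hashlib
import pickle

ib_cache = {}  # stand-in for the IB key-value store

def ib_instrument(fn):
    """Wrap a pure function so that calls first consult the IB cache."""
    @functools.wraps(fn)
    def wrapper(*args):
        key = hashlib.sha256(pickle.dumps((fn.__name__, args))).hexdigest()
        if key in ib_cache:
            return ib_cache[key]   # hit: the work was done on opportunity power
        result = fn(*args)         # miss: pay the computational cost now
        ib_cache[key] = result
        return result
    return wrapper

@ib_instrument
def thumbnail_dims(width, height, max_dim=256):
    scale = max_dim / max(width, height)
    return (round(width * scale), round(height * scale))
```

During opportunity-power windows, the task predictor simply invokes the predicted calls through the same wrapper, populating the cache in advance of any user request.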

While our experience points to the feasibility of this approach, there are still many systems problems to overcome. In particular, task prediction and caching are significant challenges. Improving prediction efficiency is a multifaceted challenge, involving: 1) compiler toolchains that enable the segmentation of programs into pre-computable components, perhaps allowing for memoization at different granularities (e.g., not just function-level); 2) better machine learning models for job prediction; and 3) improved cache efficiency, including the ability for multiple processes to securely share a single cache.

Observation 2: Compute Can Happen Anywhere, and Information Is Easier to Transfer Than Energy

Wind and solar availability is highly location-dependent. At any given moment, the sun is shining and the wind is blowing somewhere. Even within North America, there is a wide variation in the temporal availability of renewables across the country, with Midwestern wind farms overproducing at night while Californian solar cells hit their peak at midday.

We use this observation to motivate TerraWatt, a geographically distributed, zero-carbon compute infrastructure [5]. The TerraWatt design consists of many small, distributed data centers, each located in a region of the country where wind and solar production is known to be high, and distributed such that, at any given time, renewable energy is likely to be available at a subset of the chosen locations.

Here, the problem lies in geographic rather than temporal load shifting. At a predefined interval, we predict the future availability of opportunity power at each of our locations. We then migrate jobs accordingly in order to maximize the amount of computation being performed on opportunity power.
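A greedy version of this placement step might look like the following sketch, where `predicted_power` is a hypothetical per-site forecast of opportunity energy for the next interval (the names and units are assumptions for illustration):

```python
def place_jobs(jobs, predicted_power):
    """Greedily assign jobs to the site with the most forecast opportunity power.

    jobs: list of (job_id, energy_needed_kwh), placed largest-first.
    predicted_power: dict mapping site -> forecast opportunity energy (kWh).
    Returns a dict mapping job_id -> site.
    """
    remaining = dict(predicted_power)
    placement = {}
    for job_id, need in sorted(jobs, key=lambda j: -j[1]):
        site = max(remaining, key=remaining.get)
        placement[job_id] = site
        remaining[site] -= need  # may go negative: that job falls back to grid power
    return placement
```

The placement decision is the easy part; actually suspending, compressing, and shipping the VMs to the chosen sites within minutes is where the systems challenges lie.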

When a deployment site's power availability changes, jobs must be quickly suspended, compressed, and sent over the network to the closest location where opportunity power is available. This involves the movement of not one but an entire set of VMs across the wide-area network, performed within a few minutes and without a significant impact to the end-user. This approach brings its own host of system challenges, including minimizing the state information associated with each job and implementing effective networking for fast and low-power data transmission.

Despite these challenges, it is worth noting that transferring information is fundamentally easier than transferring energy. The cost to deliver power grows non-linearly with distance, and the electrical characteristics of the grid mean that it is oftentimes not physically possible to move excess energy across it. On the other hand, high-speed data transmission lines like optical fiber are widely available and relatively inexpensive to use.

Conclusion

Renewable energy has an intermittency problem, and batteries alone aren't going to fix it. The sustainable revolution requires something more—a fundamental shift in the way we consume energy, with flexible loads taking central importance. In this article, we have provided two strategies for building flexible computing systems, taking advantage of computing's inherent flexibility in time and space. Widespread adoption of these techniques has the potential to not only lower computing's carbon footprint, but also to increase the overall economic viability of intermittent renewables.

References

[1] Chien, A. A., Yang, F., and Zhang, C. Characterizing curtailed and uneconomic renewable power in the mid-continent independent system operator. arXiv preprint arXiv:1702.05403. 2017.

[2] Chien, A. A. Characterizing opportunity power in the California Independent System Operator (CAISO) in years 2015–2017. Technical Report TR-2018-07. University of Chicago, 2018.

[3] Radovanovic, A. Our data centers now work harder when the sun shines and wind blows. The Keyword. Google. April 22, 2020; https://blog.google/inside-google/infrastructure/data-centers-work-harder-sun-shines-wind-blows.

[4] Switzer, J. and Raghavan, B. Information batteries. Under review.

[5] Switzer, J., McGuinness, R., Pannuto, P., Porter, G., Schulman, A., and Raghavan, B. TerraWatt: Sustaining sustainable computing of containers in containers. 2021.

Author

Jennifer Switzer is a first-year Ph.D. candidate in the Computer Science and Engineering (CSE) department at the University of California, San Diego. She received a Master of Engineering (2020) and Bachelor of Science (2018) in electrical engineering and computer science from the Massachusetts Institute of Technology. Her research interests include sustainable computing and e-waste reduction, environmental sensing, and the design of resource-constrained computing systems.

Figures

Figure 1. Gas generation (black diamonds) can be adjusted to meet demand; wind generation (green triangles) is dependent on environmental conditions and is often out of phase with demand. Prices (grey) tend to be anticorrelated with wind generation. (Based on 2019 price and fuel mix data from MISO.)

Figure 2. Sunlight peeks over a solar PV array on UCSD's Gilman Parking Structure. The building's solar arrays produce 400 kWh/day, the majority of this (65%) between the hours of 10 a.m. and 2 p.m.


This work is licensed under a Creative Commons Attribution 4.0 International License.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2021 ACM, Inc.
