TerraWatt: Sustaining Sustainable Computing of Containers in Containers
Jennifer Switzer, Rob McGuinness, Pat Pannuto, George Porter, Aaron Schulman (UC San Diego)
Barath Raghavan (USC)
Abstract
Each day the world inches closer to a climate catastrophe and a sustainability revolution. To avoid the former and achieve the latter, we must transform our use of energy. Surprisingly, today's growing problem is that there is too much wind and solar power generation at the wrong times and in the wrong places.

We argue for the construction of TerraWatt: a geographically-distributed, large-scale, zero-carbon compute infrastructure using renewable energy and older hardware. Delivering zero-carbon compute for general cloud workloads is challenging due to spatiotemporal power variability. We describe the systems challenges in using intermittent renewable power at scale to fuel such an older, decentralized compute infrastructure.
To reduce the carbon impact of datacenters, operators have pledged to rely on renewable energy. Sometimes they have placed datacenters near cheap, renewable generation, such as Google's facility at The Dalles, Oregon. They also time-shift workloads to use periodically-abundant renewable power [20].

However, few cloud service providers have demonstrated that they can achieve zero-carbon compute that uses only renewable energy and hardware whose embodied energy—the energy used in mining and manufacturing—is zero or near zero. Zero-carbon compute avoids the waste of (1) renewable energy that is thrown away or sold at a negative price and (2) embodied energy of hardware that is thrown away when machines are retired. It is this challenge we tackle in this paper.

The electric grid is often unable to collect and store energy or transport it to where it could be used. These issues are both fundamental (e.g., intermittent renewable sources, losses due to transport inefficiency, lack of grid interconnection) and practical (e.g., high energy storage costs). While it is hard to move remote power to datacenters, moving data and compute near renewable generation is feasible. At the same time, compute, storage, and networking gear within data centers have a fast hardware refresh cycle. Since the vast majority of a computer's carbon footprint stems from its manufacture [3], short lifespans waste significant embodied energy [30].

Researchers [5, 10, 37] and industry [24, 34] have found that bulk-compute workloads are the easiest to run on such zero-carbon compute. Such workloads have loose deadlines and can be paused, which allows them to move toward renewable power and stop when it becomes unavailable. Although bulk workloads are not particularly efficient on older compute, they make use of hardware that would otherwise go to waste. However, prior work has not considered the hardware and software abstractions needed either to support zero-carbon compute for general-purpose cloud workloads or to tolerate the data movement and intermittent power inherent in this context.

Carrying these ideas to fruition gives license to a seemingly-fantastical dream: that there is no fundamental barrier to building (distributed) hyper-scale data centers that have zero or near-zero carbon footprint. Since there is often surplus power available somewhere, such data centers could provide high availability (e.g., the wind is blowing in the US Midwest at night while the sun is shining in the US West during the day).

To realize this dream we must rethink the systems and networking abstractions that underpin the infrastructure of this new type of cloud platform. In this paper we imagine the construction and deployment of TerraWatt, a geographically-distributed, zero-carbon compute infrastructure, based on the trends highlighted above. This infrastructure is instantiated as individual shipping containers called SunDrops with predominantly old hardware.

The three central challenges in designing and building TerraWatt are (1) designing abstractions and infrastructure for distributed, intermittently-powered compute that expose some but not all of the vagaries of the underlying power availability, (2) addressing the systems challenges in using legacy compute infrastructure to still service meaningful workloads, and (3) designing a framework and metrics for evaluating the energy and carbon footprint of TerraWatt at macro-scale and individual tasks run on it at micro-scale.

Figure 1: Gas-fired plants can be adjusted to meet demand (e.g., produce more when prices are higher); wind energy is weather-dependent and often not in phase with demand.

We begin with the current limits and opportunities of renewable energy sources. We then describe how we can use curtailed and negative-priced power with recycled hardware to build zero-carbon compute infrastructure.
Renewables became cheaper than coal in 2018 [25]. In 2019, 40% of new U.S. generation capacity came from solar [2]. However, adding more capacity alone isn't enough. Renewables' intermittency is their Achilles heel. Solar and wind, in particular, fluctuate with weather and solar irradiance [6, 7] and are not in phase with one another. This puts them at a disadvantage compared to fossil fuel-based energy sources, which adjust production to meet demand, as shown in Figure 1.
Renewables have periods of underproduction, during which not enough renewable energy is available to meet demand, and periods of overproduction, during which too much renewable energy is available, to the point that the excess must be discarded (curtailed) or sold at a negative price [37]. Negative-price power and curtailed power are together referred to as opportunity power [7].

Opportunity power in the U.S. is significant, growing, and often available. North America's two largest ISOs—the California Independent System Operator (CAISO) and the Midcontinent Independent System Operator (MISO)—together produced between 7–20 TWh of opportunity power in 2017 [6, 7]. Opportunity power is available somewhere in MISO 99% of the time—since prices are location-dependent—and often in intervals of >100 hours [7]. In CAISO, some solar generators experience 3.3 hours of opportunity power per day [6].

As solar and wind generation grow, so too will the amount of curtailed and negative-priced power. Indeed, CAISO estimates the compound annual growth rate of its opportunity power to be 40% [6]. Assuming 1.5 TWh of opportunity power in CAISO in 2017 (a conservative estimate) and a constant growth rate, CAISO alone could provide 22 TWh of opportunity power by 2025, enough to power the city of Los Angeles.
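A back-of-the-envelope check of this projection (a sketch; the 1.5 TWh baseline and 40% growth rate are the paper's stated assumptions, not new data):

```python
# Projection of CAISO opportunity power, assuming 1.5 TWh in 2017
# and a constant 40% compound annual growth rate [6].
BASELINE_TWH = 1.5   # conservative 2017 estimate
CAGR = 0.40          # CAISO's estimated growth rate

for year in range(2017, 2026):
    projected = BASELINE_TWH * (1 + CAGR) ** (year - 2017)
    print(f"{year}: {projected:5.1f} TWh")
# 2025: ~22.1 TWh -- roughly the 22 TWh figure cited above.
```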
Energy storage is expensive: grid-scale lithium-ion batteries cost approximately $209 per kWh [13], not including installation costs. A basic analysis of CAISO and MISO, assuming 1.5 TWh/year and 6 TWh/year of opportunity power, respectively, yields a conservative estimate of $35 million to add one hour of storage to CAISO and $140 million to add one hour of storage to MISO. A more advanced analysis by MISO suggests that adding grid-level storage provides diminishing returns, and that adding 50 hours of grid-level storage to MISO would cost $50–400M per wind generation site [7], a price tag on par with the cost of the wind turbines themselves.
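The one-hour storage figures can be reproduced directly from these numbers (a sketch; "one hour of storage" is read here as buffering one average hour of the annual opportunity power):

```python
# Cost to buffer one average hour of opportunity power with Li-ion storage
# at $209/kWh [13], excluding installation. The annual totals are the
# paper's assumptions: 1.5 TWh/year (CAISO) and 6 TWh/year (MISO).
COST_PER_KWH = 209
HOURS_PER_YEAR = 8760

for iso, twh_per_year in [("CAISO", 1.5), ("MISO", 6.0)]:
    avg_kwh_per_hour = twh_per_year * 1e9 / HOURS_PER_YEAR  # TWh -> kWh
    cost = avg_kwh_per_hour * COST_PER_KWH
    print(f"{iso}: ${cost / 1e6:.0f}M for one hour of storage")
# CAISO: ~$36M, MISO: ~$143M -- matching the ~$35M/$140M estimates above.
```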
Another simple response to excess renewable power is to deliver it to existing datacenter deployments. Unfortunately, the cost to deliver power grows non-linearly with distance; transmission lines alone vary widely in cost due to land and construction expenses, often costing anywhere from $2,500 per MW-mile to $16,000 per MW-mile [22]. Other work [11] finds further combined transmission, distribution, and administrative costs for power over time.
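To make this concrete, consider delivering a fixed block of power over increasing distances at the quoted per-MW-mile range (a sketch; the distances and the 100 MW figure are hypothetical, only the $2,500–$16,000 per MW-mile range comes from [22]):

```python
# Transmission-line cost for delivering 100 MW over various distances,
# using the $2,500-$16,000 per MW-mile range [22]. Distances are
# hypothetical; land and construction costs drive the wide spread.
LOW, HIGH = 2_500, 16_000   # USD per MW-mile
POWER_MW = 100

for miles in (50, 200, 500, 1000):
    lo = LOW * POWER_MW * miles
    hi = HIGH * POWER_MW * miles
    print(f"{miles:5d} mi: ${lo / 1e6:6.1f}M - ${hi / 1e6:7.1f}M")
# A 1000-mile line for a single 100 MW feed runs $250M-$1.6B,
# which is why moving compute to the power is attractive.
```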
Renewable energy suffers from a supply-demand mismatch, but shifting the supply in time and space is infeasible. Many renewable energy proponents have recognized the need for so-called flexible loads, which adapt in real time to the carbon intensity of the grid [9, 27, 35]. Such flexible loads scale their energy consumption up or down in response to supply, are spatially located near the site of overproduction, and absorb significant excess energy.
Figure 2: Power usage by machine year and CPU load (y-axis: watts per CPU core). Ratio of server power usage to 100% utilized core count. Three 2012 server configurations are shown.

Data centers represent a potential power sink. The amount of compute being performed can be scaled up or down with relative ease; data centers can be located anywhere and can be distributed geographically. American data centers consume approximately 70 TWh yearly, 1.8% of national energy consumption [33]. With estimates of opportunity power in CAISO and MISO sitting at 7–20 TWh available per year [6, 7], opportunity power in just these two regions has the potential to provide between 10–30% of the power needed by data centers.
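The 10–30% figure follows directly from the paper's own numbers (a quick check):

```python
# Fraction of U.S. data center demand (~70 TWh/year [33]) coverable by
# CAISO+MISO opportunity power (7-20 TWh/year [6, 7]).
DATACENTER_TWH = 70
for opportunity_twh in (7, 20):
    print(f"{opportunity_twh} TWh -> {opportunity_twh / DATACENTER_TWH:.0%}")
# 7 TWh -> 10%, 20 TWh -> 29%
```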
Zero-carbon computing also requires decarbonizing the embodied energy of that computation. Embodied energy includes energy for mining, refining, and manufacturing, which can be quite significant [30]. Unfortunately, these processes may result in the majority of the lifetime CO2 emissions associated with a device [3].

While very old servers can be acquired for cheap, many servers are in a "midlife crisis." Their embodied energy would be wasted if they were recycled for their raw materials, but they are too old to deploy in most high-end data centers. Therefore, a zero-carbon cloud will need to make use, as much as possible, of existing computing equipment, notably equipment that has been replaced after a hardware refresh and would otherwise have ended up as eWaste [10].

Some in industry have sought to squeeze every embodied Joule out of these "midlife" compute platforms by using them to run bulk compute tasks, such as training machine learning models [24]. However, in this work, we argue that these machines can also benefit general cloud compute tasks, namely the common cloud tasks that do not saturate every CPU core in a server. Indeed, it has proved difficult to saturate all available CPU cores with typical cloud workloads [15, 26].

To evaluate "midlife" servers at reduced CPU load for zero-carbon cloud tasks, we compared the watts per number of active CPU cores for servers manufactured from 2009 to 2017 (Figure 2). The average power per core when using only a few cores has not changed significantly over time, likely due to static thermal limitations in processor packages. Thus "midlife crisis" servers that have been decommissioned from high-performance workloads can run more dynamic jobs on a zero-carbon cloud platform with similar operating energy as more modern servers, without incurring any additional embodied energy.
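The trend in Figure 2 can be approximated with a standard linear server power model (a sketch; the idle/peak wattages and core counts below are illustrative placeholders, not the paper's measurements):

```python
# Watts per active core under a linear power model:
# P(n) = P_idle + (P_peak - P_idle) * n / n_cores
# Server configs are illustrative, not measured values from the paper.
def watts_per_active_core(p_idle, p_peak, n_cores, n_active):
    power = p_idle + (p_peak - p_idle) * n_active / n_cores
    return power / n_active

servers = {
    "2009-era": dict(p_idle=150, p_peak=300, n_cores=8),
    "2017-era": dict(p_idle=100, p_peak=400, n_cores=32),
}
for name, s in servers.items():
    low = watts_per_active_core(s["p_idle"], s["p_peak"], s["n_cores"], 2)
    full = watts_per_active_core(s["p_idle"], s["p_peak"], s["n_cores"], s["n_cores"])
    print(f"{name}: {low:.0f} W/core at 2 active cores, {full:.0f} W/core fully loaded")
# At low active-core counts, idle power dominates, so per-core energy of
# old and new servers is of the same order -- consistent with Figure 2.
```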
Abundant renewable energy and discarded hardware are available and can be used to build low-carbon computing. However, there are many design challenges faced in building TerraWatt as a truly zero-carbon platform. Here we describe those challenges and point to potential solutions.

Our compute clusters must be physically located in regions with substantial renewable energy. We encapsulate compute, storage, and networking close to these power sources in a component we call a SunDrop. Sun's and Google's modular data center architectures are likely models, due to their portable and weatherproof design.

Within each SunDrop we imagine mostly legacy equipment with a minimal addition of new equipment. Legacy equipment has a zero carbon footprint and low or zero capital cost, whereas newer equipment has a high upfront cost but lower operational cost due to density and power efficiency. Thus, what mixture of old and new equipment maximizes energy efficiency while minimizing overall cost and footprint? As described below, we assume a fiber interconnect to the Internet. The "core" of a SunDrop's network should be built from new, high-speed networking gear (in 2021 this would likely be 100 Gbps switches), with long-reach optical transceivers facing the outer world. Each rack should have a comparable (e.g., 100 Gbps) ToR switch, with short-reach optics between racks and optics or copper to servers within a rack.

Although deploying entirely legacy server equipment would maximize cost savings and minimize the embodied carbon/energy aspect of the SunDrop, modern data-intensive workloads are almost always memory-constrained rather than CPU-constrained: according to data published by Alibaba in 2019 [18], memory was fully utilized over 80% of the time, and often nearly all the time, while the average CPU load was under 40%, peaking at under 70%.

We propose a hybrid deployment of servers within each SunDrop: the majority consisting of older machines (with fewer cores and less memory), as well as a small number of modern RAM-dense machines. Today, machines with several terabytes of RAM are commercially available, and through techniques such as network disaggregation [14, 31] and network-enabled swapping [17], older machines could "spill" some of their working set across the network to these RAM machines. Gao et al. [14] study the feasibility of applying this idea to data-intensive processing jobs. A single RAM-dense machine with one or two 100 Gbps NICs could interconnect anywhere from 4 to 20 legacy servers at sustained bandwidths of 10 to 25 Gbps. In addition to RAM-dense modern servers, a small number of servers dense with flash storage could logically connect to the SunDrop's high-speed network core. Intel has a 1U chassis that can potentially host up to 1 petabyte of M.2 flash storage modules [38].

Thus, as workloads have increased their memory demands at a rate faster than their compute demands, a hybrid design with a large number of older "compute" servers coupled with a small number of newer RAM- and flash-storage nodes has the potential to support the widest possible set of workloads at the lowest monetary and embodied-energy costs.
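The 4-to-20 interconnect sizing above is simple bandwidth division (a sketch of the arithmetic, using only the NIC and per-server rates stated in the text):

```python
# How many legacy servers one RAM-dense machine can serve at a sustained
# per-server bandwidth, given its aggregate NIC capacity.
def servers_supported(nic_gbps_total: int, per_server_gbps: int) -> int:
    return nic_gbps_total // per_server_gbps

for nics, per_server in [(100, 25), (200, 10)]:
    print(f"{nics} Gbps of NICs at {per_server} Gbps/server -> "
          f"{servers_supported(nics, per_server)} legacy servers")
# 100/25 -> 4 servers; 200/10 -> 20 servers, spanning the 4-20 range above.
```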
Unlike stable data center environments that can offer long-running VMs to end users, SunDrops have to respond to increases and decreases of demand, both on short timescales and potentially with little warning. Thus we argue that the overhead of setting up, deploying, and migrating VMs might be too heavyweight for rapidly changing environmental conditions. Instead, the recent development of FaaS (Function-as-a-Service) and "serverless" computing better fits the timescales and statelessness we target.

A range of applications have been ported to serverless platforms, including video compression [12], video processing [1], and data analysis [21]. Common among these applications is the ability to rapidly burst the level of parallelism to match available compute or energy resources. Backing the activation of these functions are container images and requisite datasets for processing. These inputs could be stored on the flash-dense infrastructure nodes described above. This permits them high-speed access to the legacy servers and to the high-speed core for when entire jobs need to be migrated.
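One way to picture this burst behavior is a scheduler that derives a job's parallelism from the instantaneous power budget (a minimal sketch; the per-invocation wattage, reserve fraction, and function name are assumptions, not part of the paper):

```python
# Scale serverless parallelism to the available opportunity power.
# PER_INVOCATION_WATTS is a hypothetical per-function power cost.
PER_INVOCATION_WATTS = 40.0

def target_parallelism(available_watts: float, reserve_fraction: float = 0.1) -> int:
    """Concurrent invocations the power budget allows, holding back a
    reserve for infrastructure (network core, RAM/flash nodes)."""
    usable = available_watts * (1 - reserve_fraction)
    return max(0, int(usable // PER_INVOCATION_WATTS))

# As a SunDrop's power swings, the invocation count tracks it:
for watts in (0, 5_000, 50_000, 200_000):
    print(f"{watts:>7} W available -> {target_parallelism(watts)} invocations")
```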
Figure 3 shows how renewable power production changes over time in five-minute intervals. Each region does not always have curtailed power ready. Also, the regions are not in phase with one another. Because each region has ephemeral availability of curtailed power, we must support dynamically relocating customer data and workloads to other locations when a site's available power changes. VM migration is one response, but we discuss in this section how our needs for migration differ from those of a traditional cloud platform. We argue that new reliability metrics must be created and communicated to users in TerraWatt; we hypothesize that we may face the reality of fluctuating power generation by dynamically shifting workloads from one location to another.

Figure 3: Hourly renewable generation for CAISO (solar-dominant) and MISO (wind-dominant). Wind and solar production tend to be out of phase.
For reliability. Reliability is key for a cloud platform. Users expect that containers or VMs will meet SLOs without being affected by external factors like node and network failures. We focus on platform reliability in two contexts: within a deployment site and between sites, each of which must meet different requirements.

Cloud platform reliability inside of a deployment site can build upon a wealth of prior work. VM migration is well studied, and previous solutions can guarantee smooth site operation [8, 23, 32]. Leveraging the state replication and fault tolerance techniques described in such work is more than sufficient to ensure that any one deployment site can be maintained within the bounds of its own walls. The more challenging and significant issue is how reliability is achieved between deployment sites as a whole. The core issues derive from the inescapable fact that the compute capacity of a deployment site is non-static and may even be taken completely offline.
In response to power down/up events. We must ensure that when a deployment site's power availability changes, we can proportionally shift the deployment load appropriately. More concretely, if site A loses renewable power while site B gains power, we need a way to transparently move customer jobs from site A to B.

Previous work in VM migration has considered movement of a single VM, which typically takes tens of milliseconds [8]. However, in our case, we must understand how to move a set of VMs across a WAN without a significant impact to the user over the course of minutes. To shift users from one region to another, we must know the job dependency graph to detect what data and services must be updated between regions. Detecting the DAG in cloud environments is a well-studied problem, with recent work [36] revealing that these DAGs have predictable and detectable forms. Modern cloud job schedulers [15, 26] have used DAG information to efficiently schedule large sets of jobs in cloud datacenters.

Using power production data similar to that in Figure 3, we can predict how available power will change over time. We can similarly predict how long it will take to move VMs and user data from one region to another. Using these two metrics along with DAG analysis and scheduling, it becomes possible to freeze and move VMs and data early enough, before the region becomes unavailable.

However, meeting homogeneous VM needs is challenging. We discuss in Section 3.1 that we plan to deploy older hardware at each location. Even if site B has more power, it may not have an equivalent amount of compute relative to that power. Site B may also not have any curtailed power, as shown by the simultaneous fluctuations in price in Figure 3. Thus, it may become necessary to freeze jobs, write them to cold storage, and resume them when power returns. For those who pay more, jobs could be moved between ISOs to follow renewable capacity.
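A minimal sketch of the freeze-or-migrate decision this implies (function and argument names are hypothetical; the paper's point is only that predicted power lapses and predicted migration times can be compared):

```python
# Decide what to do with a job when its site's power is predicted to lapse.
# Names and thresholds are illustrative; the comparison is the point.
def plan_for_power_loss(seconds_until_power_loss: float,
                        migration_seconds: float,
                        destination_has_capacity: bool) -> str:
    if not destination_has_capacity:
        # No site with spare opportunity power and compute: checkpoint to
        # cold storage and resume when power returns.
        return "freeze-to-cold-storage"
    if migration_seconds < seconds_until_power_loss:
        # Enough lead time to move VM state and data across the WAN.
        return "migrate-early"
    # Too late to move everything before the lapse: freeze first.
    return "freeze-then-transfer"

print(plan_for_power_loss(3600, 600, True))    # migrate-early
print(plan_for_power_loss(120, 600, True))     # freeze-then-transfer
print(plan_for_power_loss(3600, 600, False))   # freeze-to-cold-storage
```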
Traditional data center networking assumes an always-on model. Such fat-tree deployments assume all-or-nothing power availability. SunDrops do not have constant power, so we must scale the network both to take advantage of opportunity power and to scale back when power is scarce. A fat-tree model still provides significant advantages for intra-container networking. ElasticTree [19] provides a way to flexibly enable and disable parts of a fat-tree network based on job demand, which we can extend to scale the network based on available power.

Inter-datacenter networking requires high-bandwidth, always-available links to move data between regions. Such links are likely to begin with wide-area fiber links already laid in parallel with construction of renewable energy generators [16]. The control and network of these links depends on the specifics of the deployment location. It is possible that dedicated links between locations can be provided, which will be used as necessary to shift jobs between locations as discussed previously. If no links are available, it remains an open question whether links through ISP exchanges can provide sufficient spot bandwidth to transfer data and jobs between regions. (Independently, we must make the Internet itself zero-carbon [29].)
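In the ElasticTree spirit, the subset of powered-on switches can be chosen from the power budget rather than from traffic alone (a sketch of that extension; the per-switch wattage and switch names are placeholders):

```python
# Power down aggregation/core switches to fit a power budget, keeping a
# minimal connected subset first (per ElasticTree's approach [19],
# extended to power availability). Per-switch wattage is a placeholder.
SWITCH_WATTS = 150

def switches_to_enable(spanning_set: list, extras: list,
                       power_budget_watts: float) -> list:
    enabled = list(spanning_set)           # minimum connected topology first
    budget = power_budget_watts - len(enabled) * SWITCH_WATTS
    for sw in extras:                      # add capacity while budget allows
        if budget < SWITCH_WATTS:
            break
        enabled.append(sw)
        budget -= SWITCH_WATTS
    return enabled

core = ["agg0", "agg1"]                    # minimal spanning switches
extra = ["agg2", "agg3", "core1"]          # capacity-only switches
print(switches_to_enable(core, extra, power_budget_watts=600))
# -> ['agg0', 'agg1', 'agg2', 'agg3'] at a 600 W budget
```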
We must carefully track carbon emissions due to each individual job. Thus we require a fine-grained energy and carbon accounting system in software and hardware, to track the energy use "billable" to each task and customer. Given the physical hardware of a specific SunDrop, we must include its embodied energy, new and old alike. Performing such calculations is known to be challenging and error-prone, as there is often scarce data on life-cycle energy for hardware, energy-accounting system boundaries are inherently fuzzy, and amortization is subjective. We do not expect to innovate on this matter but rather to build upon the best available data for the hardware in use.
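A minimal accounting sketch for the energy "billable" to one task, combining measured operational energy with amortized embodied energy (the embodied figure and amortization horizon are assumptions of the kind the paper explicitly calls subjective):

```python
# Energy billable to a task = operational energy + amortized embodied energy.
# Embodied figures and the amortization horizon are illustrative assumptions.
def billable_kwh(task_hours: float, avg_watts: float,
                 server_embodied_kwh: float, amortization_hours: float) -> float:
    operational = task_hours * avg_watts / 1000.0               # Wh -> kWh
    embodied = server_embodied_kwh * (task_hours / amortization_hours)
    return operational + embodied

# Example: a 10-hour task at 200 W on a server with 1,500 kWh embodied
# energy, amortized over 5 years of useful life.
print(f"{billable_kwh(10, 200, 1500, 5 * 8760):.2f} kWh")
# Legacy servers' embodied energy is already sunk, so their embodied term
# can be taken as ~0 -- the core of TerraWatt's hardware argument.
```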
Many organizations have begun to track their carbon footprints. Our aim with TerraWatt is to deliver near-zero carbon computation. To this end, users can specify their target renewable energy usage (e.g., "no less than X% renewable") and/or carbon targets (e.g., "no more than Y kg of CO2e"), which, given TerraWatt's goals, are likely always to be achievable given job completion flexibility. Users of TerraWatt are also likely to be price-sensitive. Thus we must expose real-time power price information from the relevant ISO(s) and an estimate of ongoing fixed costs (e.g., cooling, bandwidth). Both CAISO and MISO report real-time, regional prices [4, 28]. Furthermore, since prices follow patterns, price prediction is feasible.

Siting of SunDrops must be grid-specific. For grids with large-scale renewable deployments far from loads (e.g., wind farms in rural areas), we would likely locate more SunDrops at higher density near generation sources. By focusing on these high-production sites first, we can get large gains at relatively low cost: MISO's top-producing wind farms each had as much as 250 MW of opportunity power available at duty factors of up to 70% [7]. In other areas with more diffuse renewable sources, we would scatter SunDrops across the region. This overall approach aims to ensure that opportunity power will be available at some subset of SunDrops at any time.
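Tying these threads together, a user's renewable/carbon targets could be enforced as a simple admission check when placing a job at a SunDrop (a sketch; the field names and numbers are hypothetical, not a proposed API):

```python
# Check a candidate placement against user-specified renewable and carbon
# targets, e.g. "no less than 90% renewable" / "no more than 2 kg CO2e".
# Field names and numbers are illustrative.
def placement_ok(renewable_fraction: float, est_kg_co2e: float,
                 min_renewable: float, max_kg_co2e: float) -> bool:
    return renewable_fraction >= min_renewable and est_kg_co2e <= max_kg_co2e

# A SunDrop running on opportunity power easily meets a strict target:
print(placement_ok(renewable_fraction=0.99, est_kg_co2e=0.1,
                   min_renewable=0.90, max_kg_co2e=2.0))   # True
# A grid-mix fallback site may not:
print(placement_ok(renewable_fraction=0.40, est_kg_co2e=5.0,
                   min_renewable=0.90, max_kg_co2e=2.0))   # False
```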
Anthropogenic climate change, caused in large part by fossil fuel-based power generation and manufacturing processes, poses an existential threat to our civilization. Over the next decade, with solar and wind installed prices already below those of virtually all fossil fuels, we may face a glut of renewable electricity yet struggle to use it. Computing has a high electricity and embodied-energy footprint, yet its flexible nature across place, time, and type of equipment lets us mitigate its climate impact. We argue for the creation of large-scale compute platforms that can use renewable generation and older hardware to sustain sustainable computing for the next generation.
References

[1] Lixiang Ao, Liz Izhikevich, Geoffrey M. Voelker, and George Porter. Sprocket: A serverless video processing framework. In Proceedings of the ACM Symposium on Cloud Computing (SoCC), pages 263–274, Carlsbad, CA, October 2018.

[2] Solar Energy Industries Association. Solar accounts for 40% of U.S. electric generating capacity additions in 2019, adds 13.3 GW. March 2020.

[3] Dustin Benton, Emily Coats, and Jonny Hazell. A circular economy for smart devices. Technical report, Green Alliance, London, UK, January 2015.

[4] CAISO. California ISO open access same-time information system (OASIS). http://oasis.caiso.com/mrioasis/logon.do.

[5] Andrew A. Chien. New opportunities for PODC? Massive, volatile, but highly predictable resources. In Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing (PODC '16), page 1, New York, NY, USA, 2016. Association for Computing Machinery.

[6] Andrew A. Chien. Characterizing opportunity power in the California Independent System Operator (CAISO) in years 2015–2017. Technical Report TR-2018-07, University of Chicago, 2018.

[7] Andrew A. Chien, Fan Yang, and Chaojie Zhang. Characterizing curtailed and uneconomic renewable power in the Midcontinent Independent System Operator. arXiv preprint arXiv:1702.05403, 2017.

[8] Michael Dalton, David Schultz, Jacob Adriaens, Ahsan Arefin, Anshuman Gupta, Brian Fahs, Dima Rubinstein, Enrique Cauich Zermeno, Erik Rubow, James Alexander Docauer, et al. Andromeda: Performance, isolation, and velocity at scale in cloud network virtualization. In Proceedings of the 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI), pages 373–387, 2018.

[9] Paul Denholm and Maureen Hand. Grid flexibility and storage required to achieve very high penetration of variable renewable electricity. Energy Policy, 39(3):1817–1830, 2011.

[10] Mark Dietrich and Andrew A. Chien. Options for extending the life of scientific computing equipment. Technical Report TR-2020-08, University of Chicago, October 2020.

[11] Robert L. Fares and Carey W. King. Trends in transmission, distribution, and administration costs for U.S. investor-owned electric utilities. Energy Policy, 105:354–362, 2017.

[12] Sadjad Fouladi, Riad S. Wahby, Brennan Shacklett, Karthikeyan Vasuki Balasubramaniam, William Zeng, Rahul Bhalerao, Anirudh Sivaraman, George Porter, and Keith Winstein. Encoding, fast and slow: Low-latency video processing using thousands of tiny threads. In Proceedings of the 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI), Boston, MA, March 2017.

[13] Ran Fu, Timothy W. Remo, and Robert M. Margolis. 2018 U.S. utility-scale photovoltaics-plus-energy storage system costs benchmark. Technical report, National Renewable Energy Laboratory (NREL), Golden, CO, 2018.

[14] Peter X. Gao, Akshay Narayan, Sagar Karandikar, Joao Carreira, Sangjin Han, Rachit Agarwal, Sylvia Ratnasamy, and Scott Shenker. Network requirements for resource disaggregation. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), pages 249–264, Savannah, GA, November 2016. USENIX Association.

[15] Robert Grandl, Mosharaf Chowdhury, Aditya Akella, and Ganesh Ananthanarayanan. Altruistic scheduling in multi-resource clusters. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), pages 65–80, 2016.

[16] ABB Group. Communication solutions for mission critical applications. BU PNSM, page 1218, April 2015.

[17] Juncheng Gu, Youngmoon Lee, Yiwen Zhang, Mosharaf Chowdhury, and Kang G. Shin. Efficient memory disaggregation with Infiniswap. In Proceedings of the 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI), pages 649–667, 2017.

[18] Jing Guo, Zihao Chang, Sa Wang, Haiyang Ding, Yihui Feng, Liang Mao, and Yungang Bao. Who limits the resource efficiency of my datacenter: An analysis of Alibaba datacenter traces. In Proceedings of the IEEE/ACM International Symposium on Quality of Service (IWQoS), pages 1–10. IEEE, 2019.

[19] Brandon Heller, Srinivasan Seetharaman, Priya Mahadevan, Yiannis Yiakoumis, Puneet Sharma, Sujata Banerjee, and Nick McKeown. ElasticTree: Saving energy in data center networks. In Proceedings of the 7th USENIX Symposium on Networked Systems Design and Implementation (NSDI), pages 249–264, 2010.

[20] Urs Hölzle. Announcing 'round-the-clock clean energy for cloud. https://cloud.google.com/blog/topics/inside-google-cloud/announcing-round-the-clock-clean-energy-for-cloud.

[21] Eric Jonas, Qifan Pu, Shivaram Venkataraman, Ion Stoica, and Benjamin Recht. Occupy the cloud: Distributed computing for the 99%. In Proceedings of the ACM Symposium on Cloud Computing (SoCC), 2017.

[22] Juan Andrade and Ross Baldick. Estimation of transmission costs for new generation. Technical report, The University of Texas at Austin, September 2016.

[23] Chinmay Kulkarni, Aniraj Kesavan, Tian Zhang, Robert Ricci, and Ryan Stutsman. Rocksteady: Fast migration for low-latency in-memory storage. In Proceedings of the 26th Symposium on Operating Systems Principles (SOSP), pages 390–405, 2017.

[24] Lancium. https://lancium.com/, February 2021.

[25] Lazard. Levelized cost of energy and levelized cost of storage 2018. 2018.

[26] Hongzi Mao, Malte Schwarzkopf, Shaileshh Bojja Venkatakrishnan, Zili Meng, and Mohammad Alizadeh. Learning scheduling algorithms for data processing clusters. In Proceedings of the ACM Special Interest Group on Data Communication (SIGCOMM '19), pages 270–288, New York, NY, USA, 2019. Association for Computing Machinery.

[27] Daud Mustafa Minhas, Raja Rehan Khalid, and Georg Frey. Load control for supply-demand balancing under renewable energy forecasting. Pages 365–370. IEEE, 2017.

[28] Midcontinent Independent System Operator (MISO). Markets and operations.

[29] Barath Raghavan, David Irwin, Jeannie Albrecht, Justin Ma, and Adam Streed. An intermittent energy internet architecture. In Proceedings of the 3rd International Conference on Future Energy Systems: Where Energy, Computing and Communication Meet (e-Energy), pages 1–4, 2012.

[30] Barath Raghavan and Justin Ma. The energy and emergy of the Internet. In Proceedings of the 10th ACM Workshop on Hot Topics in Networks (HotNets), pages 1–6, 2011.

[31] Pramod Subba Rao and George Porter. Is memory disaggregation feasible? A case study with Spark SQL. In Proceedings of the ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), Santa Clara, CA, March 2016.

[32] Daniel J. Scales, Mike Nelson, and Ganesh Venkitachalam. The design of a practical system for fault-tolerant virtual machines. SIGOPS Operating Systems Review, 44(4):30–39, December 2010.

[33] Arman Shehabi, Sarah Smith, Dale Sartor, Richard Brown, Magnus Herrlin, Jonathan Koomey, Eric Masanet, Nathaniel Horner, Inês Azevedo, and William Lintner. United States data center energy usage report. Technical report, Lawrence Berkeley National Laboratory (LBNL), 2016.

[34] Soluna. February 2021.

[35] Sun Sun, Min Dong, and Ben Liang. Distributed real-time power balancing in renewable-integrated power grids with storage and flexible loads. IEEE Transactions on Smart Grid, 7(5):2337–2349, 2015.

[36] Huangshi Tian, Yunchuan Zheng, and Wei Wang. Characterizing and synthesizing task dependencies of data-parallel jobs in Alibaba Cloud. In Proceedings of the ACM Symposium on Cloud Computing (SoCC '19), pages 139–151, New York, NY, USA, 2019. Association for Computing Machinery.

[37] Fan Yang and Andrew A. Chien. ZCCloud: Exploring wasted green power for high-performance computing. In Proceedings of the IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 1051–1060, 2016.

[38] ZDNet.com. Intel ruler form-factor chassis. August 2018.