Disaggregated Memory at the Edge
Luis M Vaquero, [email protected], University of Bristol, United Kingdom
Yehia Elkhatib, [email protected], University of Glasgow, United Kingdom
Felix Cuadrado, [email protected], Universidad Politecnica de Madrid, Spain
ABSTRACT
This paper describes how to augment techniques such as Distributed Shared Memory with recent trends on disaggregated Non-Volatile Memory in the data centre so that the combination can be used in an edge environment with potentially volatile and mobile resources. This article identifies the main advantages and challenges, and offers an architectural evolution to incorporate recent research trends into production-ready disaggregated edges. We also present two prototypes showing the feasibility of this proposal.
CCS CONCEPTS
• Applied computing; • Hardware → Emerging architectures

KEYWORDS
edge, cloud, disaggregation, NVM
ACM Reference Format:
Luis M Vaquero, Yehia Elkhatib, and Felix Cuadrado. 2021. Disaggregated Memory at the Edge. In Proceedings of EdgeSys '21, Edinburgh, UK (EdgeSys '21). ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
1 INTRODUCTION
Resource disaggregation started with storage (e.g. storage volumes are dynamically attached to virtual machines), but it has also reached memory systems [9]. Memory disaggregation makes idle memory available to other nodes; this available memory can be from the same physical node (node-level memory disaggregation) or from remote nodes in the same cluster (cluster-level memory disaggregation) [10]. New non-volatile memory (NVM) and optical communication technologies are making it possible to realise the vision of disaggregated memory in the data centre [3, 21].

One key element in data centre disaggregation has been NVM technologies, which enable lower energy consumption and the ability to preserve state independently of compute nodes. These advantages fit the idea of the Internet-of-Things (IoT) and the resource constraints of most edge devices [5]. Our work takes memory disaggregation to a new level by expanding it to the edge (edge-level disaggregation).
EdgeSys '21, Apr 26, 2021, Edinburgh, UK. © 2021 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00. https://doi.org/10.1145/nnnnnnn.nnnnnnn
Beyond the boundaries of the data centre, latency limits are one of the drivers towards smaller units of computation and disaggregated devices, and volatility and reliability call for NVM as a safe harbour for data. Also, on the privacy front, with increasing awareness and regulation, the most cost-effective mechanism of all is not to store/process data centrally. The ability to dynamically and seamlessly integrate edge memory nodes to provide persistence resources (via NVM exposed as a service) to moving and potentially battery-limited compute nodes could simplify the handling of state and help benefit from physical locality.

To illustrate the potential of disaggregated NVM at the edge, let us imagine each fixed element in a street (e.g. buildings, lamp posts, etc.) has its own addressable non-volatile memory including its GPS coordinates and a specification of its spatial position, volume, and other physical properties. This information could be used by:
• A self-driven vehicle to reconstruct the scene with much less required computation/sensors, better performance under adverse weather conditions, and simpler (energy-efficient) machine learning models.
• An advertisement company trying to show interactive holographic ads to pedestrians walking in the area. Some of the elements of the ads to be projected could be pre-stored in specific memory locations on the edge (based on the angle of approach).

These two use cases highlight the need for ultra-low latency (preload of some frames across local memory locations and direct memory access for the nearby projectors/cars) to reconstruct the holographic image or a map of obstacles.
They also show how NVM access can help expedite local computations in the edge. Many businesses such as energy, airlines, and hotel markets exhibit substantial demand fluctuations and high capacity costs. Increasing utilisation via disaggregation of resources and higher reliance on edge deployment becomes a strong economic incentive [7, 22]. Here, we present our effort towards building a new breed of systems that brings the advantage of data centre memory disaggregation to the edge.
Disaggregated edges provide consistent, low-latency, distributed access to data stored in distributed NVM by addressing it with no centralised control. The remainder of this paper is organised as follows: Section 2 presents related work that is a precursor of disaggregated edges. In Section 3 we introduce our framework to enable dynamically expandable virtual memory space and seamless integration of edge devices. Then, two prototypes are described that demonstrate the feasibility of this approach (see Section 4). Section 5 discusses the main pros and cons of our framework in the context of recent work, and we summarise the main conclusions in Section 6.

2 RELATED WORK
Our work builds on the concept of distributed shared memory, adapting it to edge systems that are mobile and composed of highly volatile resources.
Software distributed shared memory (DSM) systems provide shared memory abstractions for clusters. DSM systems have long held the promise of a distributed virtual memory space that enables several processors to share data with each other. Confined to a cluster in a data centre, traditional DSMs rely on static address spaces (e.g. the classic partitioned global address space, PGAS [2]), where each address is a tuple of a rank in the job (or global process identifier) and an address within that same process. Dynamic and expandable address spaces will be essential to enable dynamic virtual memory space integration across edge devices.

Our work builds on the notions of request-centric DSMs and, thus, shares some of their challenges (such as consistency, coherence, and latency). At the same time, adding DSM to mobile and volatile edge resources imposes a few additional challenges:
• Memory management modules (MMMs) in DSMs do not route requests and learn memory addresses dynamically.
• The links between different MMMs are predefined, often hard-wired, and confined to nearby processors in most DSM systems.
• The size of the virtual memory space is not dynamic, and new nodes cannot join and leave the shared memory space as they move around.

While some works have explored DSMs built in at the web browser level [18], latency has been a traditional problem for DSMs to function at scale [13]. It builds up when working at the top of the stack and byte-addressable load and store operations are not possible. Edge scenarios beget unpredictability; mobility, geographical location and its effect on the network, and reliability would advise minimising the encapsulation and system calls required to make remote bytes available as if they were local. Hence our work explores techniques that are closer to managing specific hardware (e.g. bespoke NVMs).
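To make the contrast concrete, the following toy sketch (illustrative Python, not taken from any PGAS runtime; all names are ours) shows why a static (rank, offset) address space cannot accommodate nodes that join and leave:

```python
# Hypothetical sketch of classic PGAS-style addressing: each global address
# is a (rank, offset) tuple fixed at job launch, so the set of participating
# ranks cannot grow or shrink once the job has started.

class StaticPGAS:
    def __init__(self, num_ranks, bytes_per_rank):
        # The whole address space is partitioned when the job starts.
        self.mem = [bytearray(bytes_per_rank) for _ in range(num_ranks)]

    def store(self, rank, offset, value):
        self.mem[rank][offset] = value

    def load(self, rank, offset):
        return self.mem[rank][offset]

pgas = StaticPGAS(num_ranks=4, bytes_per_rank=256)
pgas.store(2, 10, 0x7F)
assert pgas.load(2, 10) == 0x7F

# A node "joining" is impossible without re-partitioning: rank 4 does not exist.
try:
    pgas.store(4, 0, 1)
except IndexError:
    pass  # exactly the rigidity that edge-level disaggregation must remove
```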
Memory disaggregation detaches the physical memory allocated to virtual servers at initialisation from the runtime management of that memory [10], so that virtual servers can use idle memory from the same physical node (known as node-level memory disaggregation) or from remote nodes in the same cluster (known as cluster-level memory disaggregation) [10]. The present work takes memory disaggregation to a new level by expanding it to the edge (we refer to this as edge-level disaggregation).

Accessing NVM from several compute units requires coherence protocols. This challenge is not new, since NICs and GPUs do not maintain cache coherence with CPUs. Many multi-core processors have already demonstrated the use of non-coherent shared memory. Data is usually accessed through some form of logical hierarchy that helps to support access rights and multi-tenancy. Thus, a software booking system, in the form of a distributed memory controller, is required (e.g. HPE's 'librarian', https://github.com/FabricAttachedMemory/). [3, 21] propose data centre level disaggregation with a routable protocol to direct load and store operations to the right machine in the data centre. In contrast, [10] use non-routable remote memory access (RDMA) for memory disaggregation. We build on routable memory abstractions but take this ability beyond the boundaries of the data centre.

There is experimental evidence backing up the feasibility of the memory-disaggregated, accelerated, optical data centre. These prototypes apply a few changes to the hypervisor and operating system [1], or propose a hierarchical orchestration of memory resources in a cluster [6]. Rather than working within the boundaries of a data centre, our work builds on the self-organising principles that govern routed networks to help nodes in the edge self-organise in the creation of a dynamic virtual memory space.

[11] centralise application management of edge-deployed containers.
In their work, disaggregation does not refer to individual hardware resources. Going one step further into hardware disaggregation without central coordination, edge devices have discovery and negotiation 'agents' that enable them to request distributed resources and expose them to the applications as if they were local. These agents can be part of a decomposed operating system [17] and supported by local purpose-specific accelerators. The 'software-defined machine' [19] is one of the precursors of this type of system. Our work expands on these ideas to enable a software-defined dynamic virtual memory space. Unlike many recent approaches (see [23] for example), we do not assume the edge is just a nearby cloud where resources can be assigned and tasks run.

3 FRAMEWORK
We now give an overview of our framework design and its internal mechanisms.
Edge resources are inherently different to cloud resources in that they tend to be more volatile; the nodes they serve are often mobile and their presence in the physical neighbourhood of the edge is ephemeral. This paper proposes flattening the communication stack to move data between edge devices. While traditional computing moves data to the nearby compute unit, or compute to pre-existent data, our proposal looks for a middle ground.

In our model, data is persisted in NVM and can be dynamically accessed by passing-by devices, which seamlessly and dynamically federate their virtual memory address space with that exposed by nearby NVM modules. NVM modules in close proximity participate in the creation of a mesh/overlay. This is a shared virtual memory space, rather than a set physical topology.

Any device can access local NVM modules as part of its memory hierarchy in a classic, static way (as NVM RAM memory). Devices can also access memory that is remote to the local machine via a set of protocols, the local operating system expanding its virtual memory hierarchy to consider remote memory modules as if they were local. Memory requests from the CPU are routed to the appropriate remote module or a gateway memory module.

Our work assumes memory requests are sent over wireless protocols such as Zigbee, although other wireless (e.g. Z-Wave) or wired communication protocols (see the Gen-Z Phy specification) would also be employable.

Figure 1 describes a high-level view of the architectural elements of a disaggregated edge. As can be observed, it consists of: 1) An aerial (wireless) layer in charge of encapsulating memory load/store requests and delivering them to memory modules in the range of the sender memory block.
2) A set of memory modules in charge of storing the data and converting aerial messages to electrical signals. 3) Memory routing protocols (predefined sequences of messages exchanged by memory modules) in charge of forwarding memory load/store requests through the appropriate memory module in range. Note that routing towards devices not in the immediate range of the emitter memory module is possible thanks to this set of routing protocols.
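As an illustrative sketch of element 1), the aerial layer's job amounts to packing load/store primitives into a radio frame payload. The field layout below is our own assumption for illustration, not the Zigbee or Gen-Z wire format:

```python
import struct

# Assumed (illustrative) encoding of a memory request as an aerial payload:
# op (1 byte), destination module MAC (8 bytes), offset (8 bytes),
# data length (2 bytes), followed by the data itself for stores.

OP_LOAD, OP_STORE = 0, 1
HEADER = struct.Struct(">B8sQH")  # op, mac, offset, length (big-endian)

def encode_request(op, mac, offset, data=b""):
    return HEADER.pack(op, mac, offset, len(data)) + data

def decode_request(frame):
    op, mac, offset, length = HEADER.unpack(frame[:HEADER.size])
    return op, mac, offset, frame[HEADER.size:HEADER.size + length]

mac = bytes.fromhex("0013a20040a01234")          # example 64-bit radio MAC
frame = encode_request(OP_STORE, mac, 0x40, b"\x2a")
assert decode_request(frame) == (OP_STORE, mac, 0x40, b"\x2a")
```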
Figure 1: Architectural view of the proposed system, including hardware elements (memory modules and aerial layer/antennas and transducers), protocols (exchanges of messages for routing or for data transmission), and interactions using those protocols (arrows).
Memory routing protocols are in charge of three fundamental tasks:
• Discovery of nearby available NVM modules that can be dynamically and seamlessly integrated into the local virtual memory, so that the local operating system sees them as if they were local. These protocols are also in charge of creating a unified virtual memory address space between all the modules participating in the mesh (see Figure 3).
• Memory routing protocols for delivering load and store requests to a remote memory module and retrieving the data.
• Data coherence protocols that cope with several CPUs accessing the same remote memory (several implementations are possible: e.g. assuming immutable data and versioning for updates, distributed locks, etc.).

In our initial implementation of the aerial layer, we encapsulated classic load/store memory primitives into the Zigbee protocol. Zigbee is defined by layer 3 and above and relies on 802.15.4 for layers 1 and 2. Zigbee allows us to create mesh topologies for devices within range, so that any node can communicate with any other node either directly or by relaying the transmission through multiple additional memory modules.

We have defined a hardware memory interface to enable memory modules to interact with each other and create a virtual address space. This interface receives aerial layer load/store requests and extracts them as the payload of an aerial protocol such as Zigbee.

The memory management protocol is based on the Gen-Z standard (https://genzconsortium.org/; physical layer specification: https://genzconsortium.org/specification/gen-z-physical-layer-specification-1-1/), enabling memory modules to join/leave and discover nearby memory modules, with "the goal of expanding a simple compute node with additional fabric-based memory, storage, networking, or accelerators", which itself relies on the Distributed Management Task Force's Redfish. This subsection defines some distinctive elements our memory modules had to implement to perform several of the roles defined in the Redfish specification (mainly the Gen-Z Fabric Manager).
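The discovery task can be sketched, in a much simplified form, as a membership service that grants each joining module a region of the unified virtual address space. The data structures below are our simplification for illustration, not the Gen-Z/Redfish join flow:

```python
# Toy membership/discovery sketch: modules join the mesh and receive a
# region of a federated virtual address space keyed by their identifier.

class MemoryMesh:
    def __init__(self):
        self.regions = {}   # module id -> (base address, capacity)
        self.next_base = 0

    def join(self, mac, capacity):
        base = self.next_base
        self.regions[mac] = (base, capacity)
        self.next_base += capacity
        return base

    def leave(self, mac):
        # Departing modules drop out and their region is retired,
        # mirroring how volatile edge nodes come and go.
        del self.regions[mac]

    def resolve(self, vaddr):
        # Map a federated virtual address back to (module, local offset).
        for mac, (base, cap) in self.regions.items():
            if base <= vaddr < base + cap:
                return mac, vaddr - base
        raise KeyError("address not backed by any live module")

mesh = MemoryMesh()
mesh.join("mod-a", 1024)
mesh.join("mod-b", 2048)
assert mesh.resolve(1500) == ("mod-b", 476)
```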
Figure 2: Logical view of the hardware components of a memory router.
Figure 2 shows the main components of the memory module. Memory modules can act as destinations or relays. They consist of 1) an interface for receiving memory-addressing requests from other modules in the memory fabric or end devices, and 2) an addressing-request forwarder to control routing of memory-addressing requests. The interface extracts load/store operations from the Zigbee message and passes them on to the forwarder.

The forwarder accesses a dynamically created overlay routing table to determine a value/cost associated with inter-overlay-node paths and make routing decisions. The interface enables a number of memory modules to be connected (10 in its current implementation), effectively forming a mesh of memory modules that are directly connected to each other. The forwarder also accesses a memory management unit in charge of maintaining a list of modules that are part of the same overlay (thus creating a shared virtual memory space).

Overlay routers and membership managers may exchange addressing requests over the overlay forwarding mesh itself, rather than over direct in-memory paths. This way, even if some underlying fabric paths fail, these messages can still be forwarded. Figure 3 shows how a load and store operation from the top left memory module (1) is forwarded to a nearby (i.e., within wireless range) module (2), which decides the best route (3) and forwards it to (4), its final destination.

Figure 3: Mesh of hardware modules [21].

The memory module supports a traffic classification element that enables traffic to be routed over preferential paths (e.g. low congestion, high throughput, etc.),
supporting quality of service for some memory requests.

Virtual addresses in this vast and dynamically federated memory fabric (modules can come in and out of the overlay) are unique and consist of a MAC address uniquely identifying the module and a number of bits that depends on the capacity of the modules in the mesh. This way, the MAC address can be used to route memory requests and an offset in bytes is then applied over the capacity of that memory module [21].
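A minimal sketch of this addressing scheme follows. The bit widths (48-bit MAC, 32-bit offset) and the shape of the routing table are assumptions for illustration; in our design the offset width depends on module capacity:

```python
# Illustrative MAC-plus-offset virtual address and a toy cost-based
# next-hop choice for the addressing-request forwarder.

MAC_BITS, OFFSET_BITS = 48, 32   # assumed widths, for illustration only

def split_address(vaddr):
    mac = vaddr >> OFFSET_BITS          # routes the request to a module
    offset = vaddr & ((1 << OFFSET_BITS) - 1)  # byte offset within it
    return mac, offset

def next_hop(routing_table, dest_mac):
    # routing_table: dest_mac -> {neighbour: path cost}; pick cheapest.
    candidates = routing_table[dest_mac]
    return min(candidates, key=candidates.get)

vaddr = (0x0013A200 << OFFSET_BITS) | 0x80
mac, offset = split_address(vaddr)
assert (mac, offset) == (0x0013A200, 0x80)
assert mac < (1 << MAC_BITS)
assert next_hop({mac: {"relay-1": 3, "relay-2": 1}}, mac) == "relay-2"
```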
4 PROTOTYPES
This section presents two prototypes with two clear objectives, namely to demonstrate that it is possible to effectively (1) create a dynamically sized virtual memory federation and (2) route messages between NVM modules in mobile targets.
Reducing the power consumption of a modern building requires continuous monitoring of various environmental parameters inside and outside the building. The key requirement for efficient monitoring and control is that all sensors and actuators are addressable over the network. Smart buildings include highly configurable setups for thermostats, personal voice assistants, and also lighting. In commercial settings, light is used to deliver marketing messages or affect shoppers' behaviour. In this experiment, we wanted to ensure commercial messages can be delivered via a set of bulbs deployed in the glass window of a shop and, eventually, a skyscraper.

We attached a set of colour-configurable bulbs to a Raspberry Pi running a modified version of Ubuntu v16.04. We created a virtual address consisting of one more bit than the number of available hardware bits. When this additional virtual bit was used, a remote location was employed. In practice, we simply redirected addresses corresponding to the last addressable bit to point to the remote record on the Raspberry Pi. We assumed a simple protocol by which devices configure each other with a colour code to change the light of the bulb and a timestamp (the latest version of the configuration prevails). Here, we used an implementation of the NVM hardware memory module.

As shown in Figure 4, a simple implementation of our protocol delivers better performance than equivalent TCP connections. The panel on the left-hand side shows the throughput with a single NVM: as distance from the NVM increases, throughput decreases to a point that makes communications unfeasible. Adding a second NVM 16 m away (shown on the right-hand side panel) means there is a valley of performance nearly halfway between the two antennas.
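The extra-virtual-bit redirection can be sketched as follows. The 8-bit hardware address width and the dict standing in for the remote record are our assumptions for illustration:

```python
# Toy sketch of the bulb prototype's trick: one virtual address bit beyond
# the hardware width; when set, the access is redirected to a remote store
# (here a dict standing in for the Raspberry Pi's remote NVM record).

HW_BITS = 8                      # hypothetical local address width
REMOTE_BIT = 1 << HW_BITS        # the extra virtual bit

local_mem = bytearray(1 << HW_BITS)
remote_mem = {}                  # stand-in for the remote record

def store(vaddr, value):
    if vaddr & REMOTE_BIT:
        remote_mem[vaddr & (REMOTE_BIT - 1)] = value
    else:
        local_mem[vaddr] = value

def load(vaddr):
    if vaddr & REMOTE_BIT:
        return remote_mem[vaddr & (REMOTE_BIT - 1)]
    return local_mem[vaddr]

store(0x12, 7)                   # ordinary local store
store(REMOTE_BIT | 0x12, 9)      # same offset, redirected to the remote record
assert (load(0x12), load(REMOTE_BIT | 0x12)) == (7, 9)
```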
Figure 4: Comparison of the throughput obtained for a single static NVM module (left) and two NVM modules (right) set 16 m apart, for Disaggregated Edge protocols (simplified Gen-Z over Zigbee) vs. TCP over WiFi.
We also wanted to investigate how a fully working prototype would behave in a more dynamic environment. We aimed to simulate urban elements that would not need to be recomputed every time (e.g. buildings and other non-movable obstacles). Hence, as shown in Figure 5, we used a set of 30 Raspberry Pis with attached SD cards and a similarly modified version of Ubuntu v16.04 (labelled as NVM in the figure). We placed them in strategic locations in a room, with sticky tape defining the lane limit lines for a scaled-down self-driving vehicle. In this prototype, we also used a modified version of the scaled self-driven vehicle by [15], which includes cameras and a Lidar, with our modified Ubuntu v16.04 running on a Jetson TX2 device (not represented in Figure 5).
Figure 5: A simple setup for simulating an urban environment with fixed elements (e.g. lamp posts) including NVM modules.
As the car moves along the track, the simulated NVM module (SD card) on the Jetson TX2 tries to sync up with nearby memory modules to become part of the memory fabric. The car tries to load information for bytes associated with a fixed standpoint tag, which can then be used to identify obstacles as the car drives along the track.

We implemented a convolutional neural network and concatenated the image input with an embedding of the obstacles declared in the NVM modules representing route obstacles. The proposed method enables the reinforcement learning algorithm to process 60% less information on a shallower neural architecture, as the algorithm can read static road properties (bends, trees, lamp posts, etc.) directly from memory. The simulated obstacles can be changed dynamically by simply updating the obstacle coordinates in the track-side NVM.

The model is trained with the Q-learning algorithm. After continuous training for 240 minutes, the model learns the control policies to stay on track and avoid changing obstacles. We placed the car on the track (schematic shown in Figure 5) in order to check the behaviour of the car in a dynamically federated virtual memory space.

Latency stays under 200 ms for payloads under 200 bytes and fewer than 5 hops between memory modules (…th percentile). While latency tends to be a dominant problem in the edge, high throughput is essential for self-driving cars. As the device moves, the re-connections and re-sending of data can severely affect throughput. Figure 6 shows experiments comparing data transfers in a Disaggregated Edge (right) versus traditional TCP (WiFi) transfers (left). Data requests in a Disaggregated Edge have a smaller stack, and reduced encapsulation results in smaller data sizes and slightly higher throughput. Transfer rates for TCP connections present more variability because the transfers are more susceptible to connection transitions between WiFi nodes.
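The input-fusion step can be sketched as below. The dimensions and the hashing-style embedding are illustrative assumptions, not the prototype's actual network:

```python
# Illustrative sketch: camera features concatenated with an embedding of
# the obstacle records read from track-side NVM, so the policy network can
# consume static road properties without recomputing them from pixels.

EMBED_DIM = 16   # assumed embedding width

def embed(obstacle_ids):
    # Toy deterministic embedding: average of per-id pseudo-random vectors.
    vecs = [[((oid * 2654435761 + d) % 1000) / 1000 for d in range(EMBED_DIM)]
            for oid in obstacle_ids]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def fuse(image_features, obstacle_ids):
    # Concatenate visual features with the NVM-derived obstacle embedding.
    return list(image_features) + embed(obstacle_ids)

x = fuse([0.0] * 128, [3, 17, 42])
assert len(x) == 128 + EMBED_DIM
```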
Figure 6: Effect of speed on throughput for TCP (left) and Disaggregated Memory Modules (right). Error bars show the estimated standard error of the mean (SEM), calculated with at least 5 samples per speed.
The synchronisation of the different elements worked well at low speed, but it failed for speeds faster than 6 km/h, as the protocol did not have the time to converge on the inclusion of the car in the overlay. In some cases, the request is made available to the overlay and the car joins the overlay, but the response cannot be delivered if the car is out of the range of all the memory modules, causing it to crash against obstacles. Note that this prototype does not use a real NVM hardware chip; we simulate its behaviour with an SD card. The simulation means there will be higher latencies and lower throughput than in a setup with real NVM deployed.
Note that our goal here was not to prove the feasibility of NVM dynamic aerial fabrics for safe self-driving cars; rather, we wanted to show the feasibility of this architecture to reduce the processing needs in reality enhancement scenarios.
5 DISCUSSION
Disaggregated edges can provide differentiating performance for essential infrastructure, such as: reduced VM/container booting times, container acceleration using NVM-supported data replication (instead of building application-specific solutions), or live VM migration. This can be done by changing the ownership of remote NVM pages associated with a migrating VM transferred between compute hosts [4, 9].

Having a set of resources that can be dynamically put together when needed in very small units of execution means that the distinction between horizontal and vertical scalability disappears, which is comparable to aggregating more resources for their application or service. The illusion of a dynamically expandable virtual memory space, where edge nodes route memory requests even when the requester is moving, simplifies data sharing and enables more stability in distributed memory requests.

In the edge, a portion of the resources will likely fail; some execution units will have to be re-executed (unless preemptive techniques are applied) and some of the data in failed memory units will have to be replicated so as to ensure data availability. For instance, the performance of straggler memory nodes (or high-latency/unreliable networks) dominates response time, which is very critical in latency-sensitive applications.
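The live-migration point can be made concrete with a small sketch: with disaggregated NVM, "moving" a VM's memory is a metadata update that re-points page ownership rather than a bulk copy. Names and structures here are illustrative, not from [4, 9]:

```python
# Toy sketch of migration over disaggregated NVM: the pages stay put in the
# memory fabric, and migration only rewrites the ownership metadata.

class PageOwnershipService:
    def __init__(self):
        self.owner = {}  # page id -> compute host currently owning it

    def allocate(self, page_id, host):
        self.owner[page_id] = host

    def migrate_vm(self, page_ids, dst_host):
        # No data moves across the network: only ownership changes hands.
        for pid in page_ids:
            self.owner[pid] = dst_host

pts = PageOwnershipService()
for pid in range(4):
    pts.allocate(pid, "host-a")     # VM boots on host-a
pts.migrate_vm(range(4), "host-b")  # live migration = metadata update
assert all(pts.owner[pid] == "host-b" for pid in range(4))
```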
Security is essential in hierarchical or fully disaggregated NVM edge clouds. These are highly dynamic and multi-tenant environments where acceleration of encryption, policy engines and enforcement controls tightly integrated with networking, and decentralised identity management infrastructures will prove crucial [16]. NVM introduces some security considerations of its own. As data is persistently stored in memory, new mechanisms for naming and access control would be needed to prevent the possibility of cross-talk. Also, large-scale hardware-accelerated ubiquitous encryption requires local services to help encrypt/decrypt data and new techniques for distributed identity management.
New programming models are required that take into account NVM memory locality, coherent access, churn, energy efficiency, resource constraints, and heterogeneity at the edge of the network. Thus, system software engineers will have to adapt current operating systems and middleware to hide changes in the hardware/firmware of cloud providers [17]. Programming abstractions are also needed to help with NVM edge memory management. In the case of remote byte-addressable memory, examples of these mechanisms are: dealing with different coherence domains, durable atomic updates, memory garbage collection/zeroing, etc. [4].

Consistent updates to remote memory may also be delegated to libraries that free developers from handling functions like memory allocation, leak prevention, type checking, durable transactions, and atomic updates. These libraries tend to be general and can preclude low-level application-specific optimisations and result in conservative ordering constraints [12].
In order to perform a query or transaction across multiple objects, the application needs to do some extra work. [20] shows how to enable commit-based operations in a distributed fabric of NVM memory modules. We see a future of edge databases that store each piece of information as an NVM byte array and databases that operate at the edge. In future work, we will explore how the main building blocks of databases will have to be adapted to work over disaggregated edge clouds.

As shown in our car prototype, disaggregated edges can operate well at human speed, but fall short of supporting devices on the move at speeds slightly above the average human pace. Future research on more efficient aerial protocols and faster overlay convergence is required. We also plan to benefit from knowledge of the direction and speed of a moving object to predict future location and preemptively allocate data that may be needed in nearby NVMs. This has been explored as a library sitting on top of traditional operating systems, such as DAL [14]. Also, we foresee the exploration of Conflict-free Replicated Data Types (CRDTs) to deal with coherence and concurrent access to shared data [8].
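The timestamp-prevails rule used in our bulb prototype is, in effect, a last-writer-wins register, one of the simplest CRDTs. The sketch below (illustrative code, not from [8]) shows how such a register lets concurrently updated replicas converge without coordination:

```python
# Last-writer-wins (LWW) register: the latest (timestamp, node id) pair
# wins, so replicas converge regardless of the order merges happen in.

class LWWRegister:
    def __init__(self):
        self.value, self.ts = None, (0, "")

    def set(self, value, ts, node_id):
        # node_id breaks timestamp ties so all replicas order updates the same
        if (ts, node_id) > self.ts:
            self.value, self.ts = value, (ts, node_id)

    def merge(self, other):
        # Merging is commutative, associative and idempotent.
        if other.ts > self.ts:
            self.value, self.ts = other.value, other.ts

a, b = LWWRegister(), LWWRegister()
a.set("red", 1, "module-a")    # concurrent colour updates on two modules
b.set("blue", 2, "module-b")
a.merge(b)
b.merge(a)
assert a.value == b.value == "blue"   # both replicas converge
```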
6 CONCLUSIONS
New hardware technologies like NVM, DSM systems, photonic interconnects, and hardware disaggregation at the edge of the network are reshaping how future services will be supported and built. These technologies concur in disaggregated edges, which provide the means for a dynamically expandable virtual memory space where edge nodes route memory requests even when the requester is moving. Disaggregated edges simplify data sharing and enable more stability in distributed memory requests with moving or volatile resources.

We have presented an architectural realisation of the amalgamation of these technologies and proved its feasibility with two prototypes: an initial implementation on an NVM hardware module and a simulation of an NVM module. Some of the main challenges are shared with DSM systems, such as keeping developers unaware of the complexity, heterogeneity, and high churn rates in unreliable networks with objects moving at high speeds.
REFERENCES
[1] Blake Caldwell, Youngbin Im, Sangtae Ha, Richard Han, and Eric Keller. 2017. FluidMem: Memory as a Service for the Datacenter. arXiv:1707.07780 [cs.OS]
[2] Tarek El-Ghazawi, William Carlson, Thomas Sterling, and Katherine Yelick. 2005. Dynamic Shared Memory Allocation. John Wiley & Sons, Ltd, Chapter 5, 73–90. https://doi.org/10.1002/0471478369.ch5
[3] Paolo Faraboschi, Kimberly Keeton, Tim Marsland, and Dejan Milojicic. 2015. Beyond Processor-Centric Operating Systems. In Proceedings of the 15th USENIX Conference on Hot Topics in Operating Systems (HOTOS'15). USENIX Association.
[4] Peter X. Gao, Akshay Narayan, Sagar Karandikar, Joao Carreira, Sangjin Han, Rachit Agarwal, Sylvia Ratnasamy, and Scott Shenker. 2016. Network Requirements for Resource Disaggregation. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation (OSDI'16). USENIX Association.
[5] D. Georgakopoulos, P. P. Jayaraman, M. Fazia, M. Villari, and R. Ranjan. 2016. Internet of Things and Edge Cloud Computing Roadmap for Manufacturing. IEEE Cloud Computing 3, 4 (2016), 66–73. https://doi.org/10.1109/MCC.2016.91
[6] GIT-DiSL. 2019. XMemPod. https://github.com/git-disl/XMemPod
[7] Cinar Kilcioglu, Justin M. Rao, Aadharsh Kannan, and R. Preston McAfee. 2017. Usage Patterns and the Economics of the Public Cloud. In Proceedings of the 26th International Conference on World Wide Web (WWW '17). 83–91. https://doi.org/10.1145/3038912.3052707
[8] Michał Król, Spyridon Mastorakis, David Oran, and Dirk Kutscher. 2019. Compute First Networking: Distributed Computing Meets ICN. In Proceedings of the 6th ACM Conference on Information-Centric Networking (ICN '19). Association for Computing Machinery, 67–77. https://doi.org/10.1145/3357150.3357395
[9] K. Lim, Y. Turner, J. R. Santos, A. AuYoung, J. Chang, P. Ranganathan, and T. F. Wenisch. 2012. System-level implications of disaggregated memory. In IEEE International Symposium on High-Performance Computer Architecture. https://doi.org/10.1109/HPCA.2012.6168955
[10] L. Liu, W. Cao, S. Sahin, Q. Zhang, J. Bae, and Y. Wu. 2019. Memory Disaggregation: Research Problems and Opportunities. In 2019 IEEE International Conference on Distributed Computing Systems (ICDCS). 1664–1673. https://doi.org/10.1109/ICDCS.2019.00165
[11] R. Moreno-Vozmediano, E. Huedo, R. S. Montero, and I. M. Llorente. 2019. A Disaggregated Cloud Architecture for Edge Computing. IEEE Internet Computing 23, 3 (2019), 31–36. https://doi.org/10.1109/MIC.2019.2918079
[12] Sanketh Nalli, Swapnil Haria, Mark D. Hill, Michael M. Swift, Haris Volos, and Kimberly Keeton. 2017. An Analysis of Persistent Memory Use with WHISPER. SIGPLAN Not. 52, 4 (April 2017), 135–148. https://doi.org/10.1145/3093336.3037730
[13] Jacob Nelson, Brandon Holt, Brandon Myers, Preston Briggs, Luis Ceze, Simon Kahan, and Mark Oskin. 2015. Latency-Tolerant Software Distributed Shared Memory. In USENIX Annual Technical Conference (ATC).
[15] CoRR abs/1901.08567 (2019). http://arxiv.org/abs/1901.08567
[16] Rodrigo Roman, Jianying Zhou, and Javier Lopez. 2013. On the Features and Challenges of Security and Privacy in Distributed Internet of Things. Comput. Netw. 57, 10 (July 2013), 2266–2279. https://doi.org/10.1016/j.comnet.2012.12.018
[17] Yizhou Shan, Yutong Huang, Yilun Chen, and Yiying Zhang. 2018. LegoOS: A Disseminated, Distributed OS for Hardware Resource Disaggregation. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18). USENIX Association.
[18] In Cloud Computing, Big Data & Emerging Topics, Enzo Rucci, Marcelo Naiouf, Franco Chichizola, and Laura De Giusti (Eds.). Springer International Publishing, 16–29.
[19] H. Truong and S. Dustdar. 2015. Principles for Engineering IoT Cloud Systems. IEEE Cloud Computing 2, 2 (2015), 68–76. https://doi.org/10.1109/MCC.2015.23
[20] Luis M. Vaquero and Suksant Sae Lor. U.S. Patent Application US15/572,996, Jun. 2020. Commit based memory operation in a memory system. https://patents.google.com/patent/US20180165165A1/en
[21] Luis M. Vaquero and Suksant Sae Lor. U.S. Patent US10701152B2, Jun. 2020. Memory system management. https://patents.google.com/patent/US10701152B2/en
[22] Hong Xu and Baochun Li. 2013. A Study of Pricing for Cloud Resources. SIGMETRICS Perform. Eval. Rev. 40, 4 (April 2013), 3–12. https://doi.org/10.1145/2479942.2479944
[23] Aleksandr Zavodovski, Nitinder Mohan, Suzan Bayhan, Walter Wong, and Jussi Kangasharju. 2019. ExEC: Elastic Extensible Edge Cloud. In