
Publication


Featured research published by Wyatt Lloyd.


symposium on operating systems principles | 2013

An analysis of Facebook photo caching

Qi Huang; Kenneth P. Birman; Robbert van Renesse; Wyatt Lloyd; Sanjeev Kumar; Harry C. Li

This paper examines the workload of Facebook's photo-serving stack and the effectiveness of the many layers of caching it employs. Facebook's image-management infrastructure is complex and geographically distributed. It includes browser caches on end-user systems, Edge Caches at ~20 PoPs, an Origin Cache, and for some kinds of images, additional caching via Akamai. The underlying image storage layer is widely distributed, and includes multiple data centers. We instrumented every Facebook-controlled layer of the stack and sampled the resulting event stream to obtain traces covering over 77 million requests for more than 1 million unique photos. This permits us to study traffic patterns, cache access patterns, geolocation of clients and servers, and to explore correlation between properties of the content and accesses. Our results (1) quantify the overall traffic percentages served by different layers: 65.5% browser cache, 20.0% Edge Cache, 4.6% Origin Cache, and 9.9% Backend storage, (2) reveal that a significant portion of photo requests are routed to remote PoPs and data centers as a consequence both of load-balancing and peering policy, (3) demonstrate the potential performance benefits of coordinating Edge Caches and adopting S4LRU eviction algorithms at both Edge and Origin layers, and (4) show that the popularity of photos is highly dependent on content age and conditionally dependent on the social-networking metrics we considered.
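
The S4LRU eviction algorithm evaluated here is a segmented LRU with four queues: items enter the lowest segment, move up one segment on each hit, and a segment that overflows demotes its least-recently-used item into the segment below. The following is a minimal, illustrative Python sketch of that policy; the class name and the even split of capacity across segments are assumptions, not details from the paper.

from collections import OrderedDict

class S4LRU:
    """Sketch of segmented LRU with four queues: index 0 is the lowest segment
    and the front of each OrderedDict holds the most recently used key."""

    def __init__(self, capacity, levels=4):
        self.per_level = max(1, capacity // levels)    # assumed even split of capacity
        self.segments = [OrderedDict() for _ in range(levels)]

    def access(self, key):
        """Return True on a hit (promoting the key one segment), False on a miss."""
        for level, seg in enumerate(self.segments):
            if key in seg:
                del seg[key]
                self._insert(min(level + 1, len(self.segments) - 1), key)
                return True
        self._insert(0, key)                           # misses enter the lowest segment
        return False

    def _insert(self, level, key):
        seg = self.segments[level]
        seg[key] = True
        seg.move_to_end(key, last=False)               # most recent at the front
        while len(seg) > self.per_level:
            victim, _ = seg.popitem()                  # least recent key of this segment
            if level > 0:
                self._insert(level - 1, victim)        # demote one segment down
            # victims of the lowest segment simply leave the cache

# Hypothetical trace: repeatedly accessed keys climb to higher segments and survive longer.
cache = S4LRU(capacity=8)
for k in [1, 2, 1, 3, 1, 2, 4, 1]:
    cache.access(k)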


ieee international conference on pervasive computing and communications | 2008

IP Address Passing for VANETs

Todd Arnold; Wyatt Lloyd; Jing Zhao; Guohong Cao

In vehicular ad-hoc networks (VANETs), vehicles can gain short connections to the Internet by using wireless access points (APs). A significant part of the connection time is the time required for acquiring an IP address via the dynamic host configuration protocol (DHCP). Depending on a vehicle's speed and the AP coverage area, DHCP can consume up to 100 percent of a vehicle's available connection time. We propose the IP Passing Protocol to reduce the overhead of obtaining an IP address to under one-tenth of a second. This is done without modifying either DHCP or AP software. We explore scalable implementations and describe the dynamics of the IP Passing Protocol. We also show that our protocol significantly improves efficiency, reduces latency, and increases vehicle connectivity.
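
The core idea is that a vehicle leaving an AP's coverage hands its still-valid DHCP lease to an arriving vehicle over the ad-hoc channel, so the newcomer can use the address immediately rather than running a full DHCP exchange. The Python sketch below is illustrative only; the Lease fields, the pass_lease helper, and the one-second cutoff are hypothetical and not the paper's message format.

import time
from dataclasses import dataclass

@dataclass
class Lease:
    ip: str
    gateway: str
    expires_at: float                    # absolute time the DHCP lease runs out

def pass_lease(departing, now=None):
    """A departing vehicle offers its lease to an arriving vehicle; the arriving
    vehicle adopts it only if enough lease time remains to be useful."""
    now = time.time() if now is None else now
    if departing.expires_at - now < 1.0:             # hypothetical one-second cutoff
        return None
    return Lease(departing.ip, departing.gateway, departing.expires_at)

# Hypothetical handoff: adopt the passed address, otherwise fall back to DHCP.
offered = pass_lease(Lease("10.0.0.42", "10.0.0.1", time.time() + 30))
print("use passed address" if offered else "fall back to DHCP")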


symposium on operating systems principles | 2015

Existential consistency: measuring and understanding consistency at Facebook

Haonan Lu; Kaushik Veeraraghavan; Philippe Vincent Ajoux; Jim Hunt; Yee Jiun Song; Wendy Tobagus; Sanjeev Kumar; Wyatt Lloyd

Replicated storage for large Web services faces a trade-off between stronger forms of consistency and higher performance properties. Stronger consistency prevents anomalies, i.e., unexpected behavior visible to users, and reduces programming complexity. There is much recent work on improving the performance properties of systems with stronger consistency, yet the flip-side of this trade-off remains elusively hard to quantify. To the best of our knowledge, no prior work does so for a large, production Web service. We use measurement and analysis of requests to Facebook's TAO system to quantify how often anomalies happen in practice, i.e., when results returned by eventually consistent TAO differ from what is allowed by stronger consistency models. For instance, our analysis shows that 0.0004% of reads to vertices would return different results in a linearizable system. This in turn gives insight into the benefits of stronger consistency; 0.0004% of reads are potential anomalies that a linearizable system would prevent. We directly study local consistency models---i.e., those we can analyze using requests to a sample of objects---and use the relationships between models to infer bounds on the others. We also describe a practical consistency monitoring system that tracks φ-consistency, a new consistency metric ideally suited for health monitoring. In addition, we give insight into the increased programming complexity of weaker consistency by discussing bugs our monitoring uncovered, and anti-patterns we teach developers to avoid.
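
As a rough illustration of the kind of metric the monitoring system tracks, a φ-consistency-style check reads the same object from every replica at about the same moment and reports the fraction of replicas that returned the most common value. The function below is a simplified Python sketch and not the paper's exact definition.

from collections import Counter

def phi_consistency(replica_reads):
    """Fraction of replicas that returned the most common value for one object;
    1.0 means every replica agreed. Simplified relative to the paper's metric."""
    counts = Counter(replica_reads)
    return counts.most_common(1)[0][1] / len(replica_reads)

# Hypothetical sample: five replicas of one object, one of which is stale.
print(phi_consistency(["v2", "v2", "v2", "v1", "v2"]))   # 0.8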


ACM Queue | 2014

Don't settle for eventual consistency

Wyatt Lloyd; Michael J. Freedman; Michael Kaminsky; David G. Andersen

Geo-replicated storage provides copies of the same data at multiple, geographically distinct locations. Facebook, for example, geo-replicates its data (profiles, friends lists, likes, etc.) to data centers on the east and west coasts of the United States, and in Europe. In each data center, a tier of separate Web servers accepts browser requests and then handles those requests by reading and writing data from the storage system.
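
The same authors' COPS line of work (the SOSP 2011 and NSDI 2013 entries below) replaces plain eventual consistency with causal consistency by tracking dependencies in a client library: each write carries the versions the client has already observed, and a remote data center applies the write only after those dependencies are visible. The Python sketch below is a hypothetical illustration of that bookkeeping; the class names, version counter, and elided replication step are not the systems' actual APIs.

import itertools

class LocalStore:
    """Hypothetical single-data-center key-value store; geo-replication is elided."""
    _clock = itertools.count(1)

    def __init__(self):
        self.data = {}                       # key -> (value, version)

    def get(self, key):
        return self.data.get(key, (None, 0))

    def put(self, key, value, dependencies):
        version = next(self._clock)
        self.data[key] = (value, version)
        # A real system would ship (key, value, version, dependencies) to remote
        # data centers, which apply the write only after its dependencies are visible.
        return version

class CausalClient:
    """Client-library sketch: reads record dependencies, writes carry them."""

    def __init__(self, store):
        self.store = store
        self.deps = {}                       # key -> version this session has observed

    def get(self, key):
        value, version = self.store.get(key)
        if version:
            self.deps[key] = version
        return value

    def put(self, key, value):
        version = self.store.put(key, value, dependencies=dict(self.deps))
        self.deps = {key: version}           # the new write subsumes earlier dependencies
        return version

# Hypothetical usage: the comment causally depends on the post it replies to.
alice = CausalClient(LocalStore())
alice.put("post:1", "hello")
alice.put("comment:1", "first!")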


dependable systems and networks | 2011

Coercing clients into facilitating failover for object delivery

Wyatt Lloyd; Michael J. Freedman

Application-level protocols used for object delivery, such as HTTP, are built atop TCP/IP and inherit its host-to-host abstraction. Given that these services are replicated for scalability, this unnecessarily exposes failures of individual servers to their clients. While changes to both client and server applications can be used to mask such failures, this paper explores the feasibility of transparent recovery for unmodified object delivery services (TRODS). The key insight in TRODS is cross-layer visibility and control: TRODS carefully derives reliable storage for application-level state from the mechanics of the transport layer. This state is used to reconstruct object delivery sessions, which are then transparently spliced into the client's ongoing connection. TRODS is fully backwards-compatible, requiring no changes to client or server applications. Its performance is competitive with unmodified HTTP services, providing nearly identical throughput while enabling timely failover.
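
One ingredient of such cross-layer recovery is that a server taking over a connection must learn how much of the response the client has already received, which can be recovered from the acknowledged TCP sequence number. The Python sketch below illustrates only that arithmetic; the function name, fields, and numbers are hypothetical, not TRODS's actual mechanism.

def resume_offset(client_ack_seq, initial_seq, header_len):
    """Bytes of the object the client has already acknowledged, derived from the
    connection's initial sequence number and the length of the response headers."""
    delivered = client_ack_seq - initial_seq - 1     # -1 because the SYN consumes one sequence number
    return max(0, delivered - header_len)

# Hypothetical numbers: 150 response bytes ACKed, of which 100 were headers,
# so a backup server would resume sending the object at byte offset 50.
print(resume_offset(client_ack_seq=1151, initial_seq=1000, header_len=100))   # 50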


symposium on operating systems principles | 2017

SVE: Distributed Video Processing at Facebook Scale

Qi Huang; Petchean Ang; Peter Knowles; Tomasz Nykiel; Iaroslav Tverdokhlib; Amit Yajurvedi; Paul Dapolito IV; Xifan Yan; Maxim Bykov; Chuen Liang; Mohit Talwar; Abhishek Mathur; Sachin Kulkarni; Matthew Burke; Wyatt Lloyd

Videos are an increasingly utilized part of the experience of the billions of people that use Facebook. These videos must be uploaded and processed before they can be shared and downloaded. Uploading and processing videos at our scale, and across our many applications, brings three key requirements: low latency to support interactive applications; a flexible programming model for application developers that is simple to program, enables efficient processing, and improves reliability; and robustness to faults and overload. This paper describes the evolution from our initial monolithic encoding script (MES) system to our current Streaming Video Engine (SVE) that overcomes each of the challenges. SVE has been in production since the fall of 2015, provides lower latency than MES, supports many diverse video applications, and has proven to be reliable despite faults and overload.
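
The central latency idea is to start processing a video while it is still being uploaded, rather than waiting for the whole file as the monolithic script did. The Python sketch below illustrates that overlap with a plain thread pool; upload, encode_chunk, and streaming_ingest are hypothetical stand-ins, not SVE's DAG execution engine.

import concurrent.futures

def upload(num_chunks, chunk_size=1024):
    """Stand-in for a client upload that arrives one chunk at a time."""
    for _ in range(num_chunks):
        yield b"\x00" * chunk_size

def encode_chunk(index, chunk):
    """Stand-in for re-encoding one video segment; the real work is elided."""
    return "encoded[%d]:%dB" % (index, len(chunk))

def streaming_ingest(chunk_stream):
    """Hand each chunk to an encoder as soon as it arrives, so encoding overlaps
    with the upload instead of starting only after the full file is stored."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(encode_chunk, i, c) for i, c in enumerate(chunk_stream)]
        return [f.result() for f in futures]

print(streaming_ingest(upload(3)))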


international conference on acoustics, speech, and signal processing | 2016

Context adaptive thresholding and entropy coding for very low complexity JPEG transcoding

Xing Xu; Zahaib Akhtar; Ramesh Govindan; Wyatt Lloyd; Antonio Ortega

The ever-increasing quantity of user-generated photos, nearly all compressed using JPEG, has created a growing storage burden on photo storage and sharing services. This creates the need for compression techniques that take JPEG-compressed images as inputs. In this paper we propose two novel very low complexity codecs, ROMP and L-ROMP, to recompress JPEG photos, achieving increased coding efficiency by making use of very large entropy coding tables. ROMP is a lossless JPEG recompression codec that achieves 15% average gains over JPEG, while L-ROMP is a lossy codec that can achieve 29% average compression gains over JPEG by applying coefficient thresholding, based on a perceptual criterion, to a JPEG image before using the entropy coding of ROMP.
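
To make the lossy step concrete: coefficient thresholding zeroes small quantized AC coefficients so that zero runs grow and the entropy-coded output shrinks. The Python sketch below is illustrative only; the fixed threshold stands in for the perceptual criterion the paper actually uses.

def threshold_block(coeffs, threshold):
    """Zero the AC coefficients of one zig-zag-ordered 8x8 block whose magnitude
    falls below the threshold; the DC term at index 0 is always kept."""
    return [coeffs[0]] + [c if abs(c) >= threshold else 0 for c in coeffs[1:]]

# Hypothetical block: small AC terms are dropped, lengthening runs of zeros.
print(threshold_block([35, -3, 7, 1, 0, -2, 6, 1], threshold=3))
# -> [35, -3, 7, 0, 0, 0, 6, 0]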


symposium on operating systems principles | 2011

Don't settle for eventual: scalable causal consistency for wide-area storage with COPS

Wyatt Lloyd; Michael J. Freedman; Michael Kaminsky; David G. Andersen


networked systems design and implementation | 2013

Stronger semantics for low-latency geo-replicated storage

Wyatt Lloyd; Michael J. Freedman; Michael Kaminsky; David G. Andersen


;login:, the magazine of USENIX & SAGE | 2013

PRObE: A Thousand-Node Experimental Cluster for Computer Systems Research

Garth A. Gibson; Gary Grider; Andree Jacobson; Wyatt Lloyd

Collaboration


Dive into Wyatt Lloyd's collaborations.

Top Co-Authors

David G. Andersen

Carnegie Mellon University

Haonan Lu

University of Southern California

Kai Li

Princeton University
