Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jian Yin is active.

Publication


Featured research published by Jian Yin.


Symposium on Operating Systems Principles | 2003

Separating agreement from execution for Byzantine fault tolerant services

Jian Yin; Jean-Philippe Martin; Arun Venkataramani; Lorenzo Alvisi; Michael Dahlin

We describe a new architecture for Byzantine fault tolerant state machine replication that separates agreement that orders requests from execution that processes requests. This separation yields two fundamental and practically significant advantages over previous architectures. First, it reduces replication costs because the new architecture can tolerate faults in up to half of the state machine replicas that execute requests. Previous systems can tolerate faults in at most a third of the combined agreement/state machine replicas. Second, separating agreement from execution allows a general privacy firewall architecture to protect confidentiality through replication. In contrast, replication in previous systems hurts confidentiality because exploiting the weakest replica can be sufficient to compromise the system. We have constructed a prototype and evaluated it running both microbenchmarks and an NFS server. Overall, we find that the architecture adds modest latencies to unreplicated systems and that its performance is competitive with existing Byzantine fault tolerant systems.
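The split is easy to see in miniature. The following is a minimal sketch, not the authors' implementation; all names and the fault threshold are assumptions. An agreement cluster only assigns sequence numbers, 2f + 1 execution replicas (here f = 1) process the ordered requests, and a reply is accepted once f + 1 of them agree on it, so faults in up to half of the execution replicas are masked.

```python
# Hypothetical sketch of the agreement/execution split: agreement only
# orders requests; 2f + 1 execution replicas process them; a reply is
# trusted once f + 1 replicas agree, masking up to f faulty executors.
from collections import Counter

F = 1                                  # assumed fault threshold
NUM_EXECUTORS = 2 * F + 1              # vs. 3f + 1 in combined designs


class AgreementCluster:
    """Stands in for the BFT agreement protocol: it only orders requests."""
    def __init__(self):
        self.next_seq = 0

    def order(self, request):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return seq, request


class ExecutionReplica:
    """Executes ordered requests against its own copy of the state."""
    def __init__(self, faulty=False):
        self.state, self.faulty = {}, faulty

    def execute(self, request):
        op, key, value = request
        if self.faulty:
            return "garbage"           # a Byzantine replica may reply anything
        if op == "put":
            self.state[key] = value
            return "ok"
        return self.state.get(key)


def vote(replies, f=F):
    """Accept a reply once f + 1 replicas agree on it exactly."""
    value, count = Counter(replies).most_common(1)[0]
    return value if count >= f + 1 else None


agreement = AgreementCluster()
replicas = [ExecutionReplica(faulty=(i == 0)) for i in range(NUM_EXECUTORS)]

for request in [("put", "x", 42), ("get", "x", None)]:
    seq, req = agreement.order(request)
    replies = [r.execute(req) for r in replicas]
    print(seq, req, "->", vote(replies))  # correct despite one faulty replica
```

The replication saving shows up in the replica count: voting needs only 2f + 1 executors, while a combined agreement/execution design needs 3f + 1 of the (typically much heavier) state machine replicas.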


IEEE Transactions on Knowledge and Data Engineering | 1999

Volume leases for consistency in large-scale systems

Jian Yin; Lorenzo Alvisi; Michael Dahlin; Calvin Lin

This article introduces volume leases as a mechanism for providing server-driven cache consistency for large-scale, geographically distributed networks. Volume leases retain the good performance, fault tolerance, and server scalability of the semantically weaker client-driven protocols that are now used on the Web. Volume leases are a variation of object leases, which were originally designed for distributed file systems. However, whereas traditional object leases amortize overheads over long lease periods, volume leases exploit spatial locality to amortize overheads across multiple objects in a volume. This approach allows systems to maintain good write performance even in the presence of failures. Using trace-driven simulation, we compare three volume lease algorithms against four existing cache consistency algorithms and show that our new algorithms provide strong consistency while maintaining scalability and fault-tolerance. For a trace-based workload of Web accesses, we find that volumes can reduce message traffic at servers by 40 percent compared to a standard lease algorithm, and that volumes can considerably reduce the peak load at servers when popular objects are modified.
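A minimal sketch of the client-side read check, with hypothetical names and constants (the paper specifies the protocol, not this code): an object may be served from cache only while both the short volume lease and the long per-object lease are valid, and renewing one volume lease re-validates every cached object in that volume, which is where the amortization comes from.

```python
# Hypothetical client-side read path for volume leases: a cached object
# is usable only while BOTH its short volume lease and its long object
# lease are valid; one volume renewal covers every object in the volume.
import time

VOLUME_LEASE_SECS = 10        # short: bounds how long a failure blocks writes
OBJECT_LEASE_SECS = 3600      # long: per-object renewals stay rare


class ClientCache:
    def __init__(self, server):
        self.server = server
        self.objects = {}             # key -> (value, object_lease_expiry)
        self.volume_expiry = 0.0      # single lease covering the whole volume

    def read(self, key):
        if time.time() >= self.volume_expiry:
            # One cheap renewal re-validates all objects in the volume.
            self.volume_expiry = self.server.renew_volume_lease(VOLUME_LEASE_SECS)
        entry = self.objects.get(key)
        if entry and time.time() < entry[1]:
            return entry[0]                        # hit under both leases
        value, expiry = self.server.fetch(key, OBJECT_LEASE_SECS)
        self.objects[key] = (value, expiry)
        return value


class Server:
    """Toy stand-in for the lease-granting server."""
    def __init__(self):
        self.data = {"a": 1, "b": 2}

    def renew_volume_lease(self, secs):
        return time.time() + secs

    def fetch(self, key, secs):
        return self.data[key], time.time() + secs


cache = ClientCache(Server())
print(cache.read("a"))   # miss: fetch plus object lease grant
print(cache.read("a"))   # hit: both leases still valid
```

The short volume lease bounds how long a failed client can block writes, while the long object lease keeps per-object renewal traffic rare; the simulation results above quantify that tradeoff.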


International Conference on Distributed Computing Systems | 1998

Using leases to support server-driven consistency in large-scale systems

Jian Yin; Lorenzo Alvisi; Michael Dahlin; Calvin Lin

The paper introduces volume leases as a mechanism for providing cache consistency for large-scale, geographically distributed networks. Volume leases are a variation of leases, which were originally designed for distributed file systems. Using trace-driven simulation, we compare two new algorithms against four existing cache consistency algorithms and show that our new algorithms provide strong consistency while maintaining scalability and fault tolerance. For a trace-based workload of Web accesses, we find that volumes can reduce message traffic at servers by 40% compared to a standard lease algorithm, and that volumes can considerably reduce the peak load at servers when popular objects are modified.
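The write path is where the fault tolerance shows up. The sketch below is an assumed structure, not the evaluated system: before modifying an object, the server invalidates clients holding valid object leases, but an unreachable client delays the write only until its short volume lease expires, rather than for the full object-lease term.

```python
# Hypothetical server-side write path for volume leases: a write waits on
# clients with valid leases, but an unreachable client blocks it only until
# the SHORT volume lease lapses, after which its cached copy is unusable.
import time


class LeaseServer:
    def __init__(self):
        self.data = {}
        self.object_leases = {}   # key -> {client: object_lease_expiry}
        self.volume_leases = {}   # client -> volume_lease_expiry

    def write(self, key, value, send_invalidation):
        now = time.time()
        for client, obj_expiry in self.object_leases.get(key, {}).items():
            vol_expiry = self.volume_leases.get(client, 0.0)
            if now >= obj_expiry or now >= vol_expiry:
                continue                  # lease lapsed: client rechecks anyway
            if not send_invalidation(client, key):
                # Unresponsive client: wait out the short volume lease,
                # not the long object lease, so writes stay available.
                time.sleep(max(0.0, vol_expiry - time.time()))
        self.data[key] = value


server = LeaseServer()
server.volume_leases["c1"] = time.time() + 0.1            # short volume lease
server.object_leases["x"] = {"c1": time.time() + 3600}    # long object lease
server.write("x", 99, send_invalidation=lambda c, k: False)  # c1 unreachable
print(server.data)   # write completes after ~0.1 s, not after an hour
```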


International World Wide Web Conference | 2001

Engineering server-driven consistency for large scale dynamic Web services

Jian Yin; Lorenzo Alvisi; Michael Dahlin; Arun Iyengar

Many researchers have shown that server-driven consistency protocols can potentially reduce read latency. Server-driven consistency protocols are particularly attractive for large-scale dynamic Web workloads because dynamically generated data can change rapidly and unpredictably. However, there have been no reports on engineering server-driven consistency for such a workload. This paper reports our experience in engineering server-driven consistency for a sporting and event Web site hosted by IBM, one of the most popular Web sites on the Internet for the duration of the event. Our study focuses on scalability and cachability of dynamic content. To assess scalability, we measure both the amount of state that a server needs to maintain to ensure consistency and the bursts of load that a server sustains to send out invalidation messages when a popular object is modified. We find that it is possible to limit the size of the server's state without significant performance costs and that bursts of load can be smoothed out with minimal impact on the consistency guarantees. To improve performance, we systematically investigate several design issues for which prior research has suggested widely different solutions, including how long servers should send invalidations to idle clients. Finally, we quantify the performance impact of caching dynamic data with server-driven consistency protocols and find that it can reduce read latency by more than 10%. We have implemented a prototype of a server-driven consistency protocol based on our findings on top of the popular Squid cache.
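The two scalability levers the paper measures can be sketched directly. The code below is hypothetical; the constants and names are assumptions, not values from the study: the server caps its consistency state by evicting the least-recently-active client records, and smooths invalidation bursts by draining a queue at a bounded rate.

```python
# Hypothetical sketch of bounded server state plus burst smoothing: client
# callback records are evicted LRU-style past a cap, and invalidations for
# a hot object are sent at a paced rate instead of as one burst.
from collections import OrderedDict, deque
import time

MAX_CLIENT_RECORDS = 1000     # assumed cap on server consistency state
INVALIDATIONS_PER_SEC = 200   # assumed pacing rate for invalidation bursts


class ConsistencyServer:
    def __init__(self):
        self.clients = OrderedDict()   # client -> cached keys (LRU order)
        self.pending = deque()         # queued (client, key) invalidations

    def register(self, client, key):
        keys = self.clients.pop(client, set())
        keys.add(key)
        self.clients[client] = keys    # move to most-recently-active slot
        while len(self.clients) > MAX_CLIENT_RECORDS:
            self.clients.popitem(last=False)  # evicted client reverts to polling

    def invalidate(self, key):
        for client, keys in self.clients.items():
            if key in keys:
                self.pending.append((client, key))

    def drain(self, send):
        """Send queued invalidations at a bounded rate, not in one burst."""
        while self.pending:
            send(*self.pending.popleft())
            time.sleep(1.0 / INVALIDATIONS_PER_SEC)


server = ConsistencyServer()
server.register("c1", "/scores")
server.invalidate("/scores")
server.drain(lambda client, key: print("invalidate", key, "->", client))
```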


ACM Transactions on Internet Technology | 2002

Engineering web cache consistency

Jian Yin; Lorenzo Alvisi; Michael Dahlin; Arun Iyengar

Server-driven consistency protocols can reduce read latency and improve data freshness for a given network and server overhead, compared to the traditional consistency protocols that rely on client polling. Server-driven consistency protocols appear particularly attractive for large-scale dynamic Web workloads because dynamically generated data can change rapidly and unpredictably. However, there have been few reports on engineering server-driven consistency for such workloads. This article reports our experience in engineering server-driven consistency for a sporting and event Web site hosted by IBM, one of the most popular sites on the Internet for the duration of the event. We also examine an e-commerce site for a national retail store. Our study focuses on scalability and cachability of dynamic content. To assess scalability, we measure both the amount of state that a server needs to maintain to ensure consistency and the bursts of load in sending out invalidation messages when a popular object is modified. We find that server-driven protocols can cap the size of the server's state to a given amount without significant performance costs, and can smooth the bursts of load with minimal impact on the consistency guarantees. To improve performance, we systematically investigate several design issues for which prior research has suggested widely different solutions, including whether servers should send invalidations to idle clients. Finally, we quantify the performance impact of caching dynamic data with server-driven consistency protocols and the benefits of server-driven consistency protocols for large-scale dynamic Web services. We find that (i) caching dynamically generated data can increase cache hit rates by up to 10%, compared to systems that do not cache dynamically generated data; and (ii) server-driven consistency protocols can increase cache hit rates by a factor of 1.5-3 for large-scale dynamic Web services, compared to client polling protocols. We have implemented a prototype of a server-driven consistency protocol based on our findings by augmenting the popular Squid cache.
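One way to frame the idle-client question the article studies is as a lease-style cutoff. The following sketch is an assumption-laden illustration, not the deployed design: the server promises invalidations only for a bounded window after a client's last request, so invalidation fan-out tracks the active client population, and a dropped client simply revalidates its cache on its next access.

```python
# Hypothetical idle-client cutoff: clients receive invalidations only for
# a bounded window after their last request; idle clients are pruned from
# the callback list and revalidate their cache on the next access instead.
import time

IDLE_WINDOW_SECS = 300   # assumed: how long an idle client keeps callbacks


class CallbackList:
    def __init__(self):
        self.last_active = {}  # client -> time of most recent request

    def touch(self, client):
        self.last_active[client] = time.time()

    def recipients(self):
        cutoff = time.time() - IDLE_WINDOW_SECS
        # Prune idle clients so a hot object's invalidation fan-out stays
        # proportional to the *active* client population.
        self.last_active = {c: t for c, t in self.last_active.items()
                            if t >= cutoff}
        return list(self.last_active)


callbacks = CallbackList()
callbacks.touch("active-client")
callbacks.last_active["idle-client"] = time.time() - 3600  # idle for an hour
print(callbacks.recipients())   # ['active-client']
```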


Lecture Notes in Computer Science | 2003

Towards a practical approach to confidential Byzantine fault tolerance

Jian Yin; Jean-Philippe Martin; Arun Venkataramani; Lorenzo Alvisi; Michael Dahlin

As the world becomes increasingly interconnected, more and more important services such as business transactions are deployed as access-anywhere services: services that are accessible from remote devices through the Internet and mobile networks. Such services often must access confidential data to provide service. For example, an online banking service must access a user's checking account to process an online transfer request. In such a scenario, guarantees of availability, integrity, and confidentiality are essential. By availability, we mean that services must provide service around the clock without interruption. By integrity, we mean that services must process clients' requests correctly. By confidentiality, we mean that services must restrict who sees what data. Given that today's economics and technology make it infeasible to rigorously test and verify complex components, it is more attractive to allow untrustworthy components to be assembled into a trustworthy system. A traditional Byzantine fault-tolerant (BFT) system runs different implementations of the same service on several replicas and ensures that correct computation is performed by enough correct replicas to mask the incorrect ones [10.1], [10.7], [10.8]. Recent research has shown that BFT systems can be practical for several important services, as they can be implemented with low overhead compared to the corresponding unreplicated services [10.3].
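The confidentiality mechanism can be illustrated with a toy filter; this sketch is hypothetical and omits the multi-row firewall topology of the full design: a reply is released only when f + 1 execution replicas produce byte-identical output, so a single compromised replica cannot leak confidential data through a reply that deviates from the agreed answer.

```python
# Toy privacy-firewall filter (hypothetical): a reply escapes the system
# only if f + 1 replicas produced byte-identical output, so a compromised
# replica cannot smuggle confidential data out in a divergent reply.
from collections import Counter

F = 1   # assumed fault threshold


def privacy_filter(replies, f=F):
    """Release a reply only when f + 1 replicas agree on it exactly."""
    value, count = Counter(replies).most_common(1)[0]
    if count >= f + 1:
        return value
    return None          # disagreement: nothing is released to the client


# Three execution replicas; one is compromised and tries to leak a secret.
honest = "balance: 100"
leaky = "balance: 100; ssn=123-45-6789"
print(privacy_filter([honest, honest, leaky]))  # only the agreed reply escapes
```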


USENIX Symposium on Internet Technologies and Systems | 1999

Hierarchical cache consistency in a WAN

Jian Yin; Lorenzo Alvisi; Michael Dahlin; Calvin Lin


Archive | 2007

Method and system for integrating model-based and search-based automatic software configuration

Arun Iyengar; Jian Yin


Archive | 2007

Secure media broadcasting using temporal access control

Arun Iyengar; Mudhakar Srivatsa; Jian Yin


2003

Volume lease: a scalable cache consistency framework

Jian Yin; Michael Dahlin; Lorenzo Alvisi

Collaboration


Dive into Jian Yin's collaborations.

Top Co-Authors

Lorenzo Alvisi
University of Texas at Austin

Michael Dahlin
University of Texas at Austin

Arun Venkataramani
University of Massachusetts Amherst

Calvin Lin
University of Texas at Austin

Jean-Philippe Martin
University of Texas at Austin