Publication


Featured research published by Michael Rabinovich.


International World Wide Web Conference | 2002

Flash crowds and denial of service attacks: characterization and implications for CDNs and web sites

Jaeyeon Jung; Balachander Krishnamurthy; Michael Rabinovich

The paper studies two types of events that often overload Web sites to the point where their services are degraded or disrupted entirely: flash events (FEs) and denial-of-service (DoS) attacks. The former are created by legitimate requests, while the latter contain malicious requests whose goal is to subvert the normal operation of the site. We study the properties of both types of events, with special attention to the characteristics that distinguish the two. Identifying these characteristics allows a formulation of a strategy for Web sites to quickly discard malicious requests. We also show that some content distribution networks (CDNs) may not provide the desired level of protection to Web sites against flash events. We therefore propose an enhancement to CDNs that offers better protection and use trace-driven simulations to study the effect of our enhancement on CDNs and Web sites.
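The characterization hinges on features that separate legitimate surges from attack traffic. A toy sketch of one such discriminator, under the assumption (not stated in the abstract) that flash crowds come largely from clients the site has served before while DoS traffic arrives from previously unseen addresses; the function name and thresholds are illustrative:

```python
def classify_event(event_ips, known_ips, seen_threshold=0.5):
    """Classify a traffic surge as a flash event or a suspected DoS attack.

    Heuristic sketch: if most requesting addresses were seen in normal
    operation, treat the surge as a flash event; if most are new, treat it
    as suspicious. The 0.5 threshold is illustrative, not from the paper.
    """
    if not event_ips:
        return "no-event"
    seen = sum(1 for ip in event_ips if ip in known_ips)
    return "flash-event" if seen / len(event_ips) >= seen_threshold else "suspected-dos"

# A surge dominated by familiar clients vs. one from fresh addresses.
known = {"10.0.0.1", "10.0.0.2", "10.0.0.3"}
print(classify_event(["10.0.0.1", "10.0.0.2", "1.2.3.4"], known))  # flash-event
print(classify_event(["5.5.5.5", "6.6.6.6", "10.0.0.1"], known))   # suspected-dos
```

A site (or CDN front end) could apply such a filter per surge window to quickly discard the bulk of malicious requests while continuing to serve repeat visitors.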


Workshop on Hot Topics in Operating Systems | 1997

Reduce, reuse, recycle: an approach to building large Internet caches

Syam Gadde; Michael Rabinovich; Jeffrey S. Chase

New demands brought by the continuing growth of the Internet will be met in part by more effective use of caching in the Web and other services. We have developed CRISP, a distributed Internet object cache targeted to the needs of the organizations that aggregate the end users of Internet services, particularly the commercial Internet Service Providers (ISPs) where much of the new growth occurs. A CRISP cache consists of a group of cooperating caching servers sharing a central directory of cached objects. This simple and obvious strategy is easily overlooked due to the well known drawbacks of a centralized structure. However, we show that these drawbacks are easily overcome for well configured CRISP caches. We outline the rationale behind the CRISP design, and report on early studies of CRISP caches in actual use and under synthetic load. While our experience with CRISP to date is at the scale of hundreds or thousands of clients, CRISP caches could be deployed to maximize capacity at any level of a regional or global cache hierarchy.
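The core of the CRISP design is a group of caching servers consulting one shared directory of cached objects. A minimal sketch of that lookup path, with illustrative class and return-value conventions (the real system must also handle directory failures and object eviction):

```python
class CrispDirectory:
    """Central directory mapping object URLs to the caching server that
    holds them. A sketch of the CRISP idea; names are illustrative."""

    def __init__(self):
        self._holder = {}  # url -> server id

    def lookup(self, url):
        return self._holder.get(url)  # None => no cooperating cache has it

    def register(self, url, server):
        self._holder[url] = server


def fetch(url, local_server, directory):
    """On a local miss, ask the central directory; serve from a peer cache
    if one holds the object, otherwise go to the origin and register the
    new local copy so later requests anywhere in the group hit it."""
    holder = directory.lookup(url)
    if holder is not None:
        return f"hit:{holder}"        # served from a cooperating cache
    directory.register(url, local_server)
    return "miss:origin"              # fetched from origin, now cached here

directory = CrispDirectory()
print(fetch("http://example.com/a", "cache-1", directory))  # miss:origin
print(fetch("http://example.com/a", "cache-2", directory))  # hit:cache-1
```

The single map is exactly the "simple and obvious strategy" the abstract mentions: one directory probe replaces the multicast queries or hash partitioning of other cooperative-cache designs.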


International Conference on Distributed Computing Systems | 1999

A dynamic object replication and migration protocol for an Internet hosting service

Michael Rabinovich; Irina Rabinovich; Rajmohan Rajaraman; Amit Aggarwal

The paper proposes a protocol suite for dynamic replication and migration of Internet objects. It consists of an algorithm for deciding on the number and location of object replicas and an algorithm for distributing requests among currently available replicas. Our approach attempts to place replicas in the vicinity of a majority of requests, while ensuring at the same time that no servers are overloaded. The request distribution algorithm uses the same simple mechanism to take into account both server proximity and load, without actually knowing the latter. The replica placement algorithm executes autonomously on each node, without the knowledge of other object replicas in the system. The proposed algorithms rely on the information available in databases maintained by Internet routers. A simulation study using synthetic workloads and the network backbone of UUNET, one of the largest Internet service providers, shows that the proposed protocol is effective in eliminating hot spots and achieves a significant reduction in backbone traffic and server response time at the expense of creating only a small number of extra replicas.
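The placement algorithm runs autonomously on each node, deciding from local observations whether to keep, migrate, replicate, or delete its replica. A hedged sketch of that decision shape, with invented thresholds and a simplified per-neighbor request tally (the actual protocol uses router-derived information and more nuanced rules):

```python
def placement_decision(requests_by_neighbor, local_load, capacity,
                       deletion_threshold=10, migration_fraction=0.8):
    """Autonomous per-node replica decision, in the spirit of the protocol:
    drop a replica nobody wants, migrate it toward the neighbor supplying
    most of its demand, or replicate when this server is overloaded.
    All thresholds here are illustrative, not from the paper."""
    total = sum(requests_by_neighbor.values())
    if total < deletion_threshold:
        return ("delete", None)                # too little demand to keep
    top_neighbor, top_count = max(requests_by_neighbor.items(),
                                  key=lambda kv: kv[1])
    if top_count / total >= migration_fraction:
        return ("migrate", top_neighbor)       # demand is mostly remote
    if local_load > capacity:
        return ("replicate", top_neighbor)     # overloaded: spawn a replica
    return ("keep", None)

print(placement_decision({"a": 90, "b": 10}, 10, 100))   # ('migrate', 'a')
print(placement_decision({"a": 60, "b": 40}, 150, 100))  # ('replicate', 'a')
```

Because each node decides from its own request mix, replicas drift toward the majority of requests without any global view of where other replicas sit, which is what the abstract means by executing "without the knowledge of other object replicas in the system."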


Business Information Systems | 1999

Time Management in Workflow Systems

Johann Eder; Euthimios Panagos; Heinz Pozewaunig; Michael Rabinovich

Management of workflow processes is more than just enactment of process activities according to business rules. Time management functionality should be provided to control the lifecycle of processes. Time management should address planning of workflow executions in time, provide various estimates about activity execution durations, avoid violations of deadlines assigned to activities and the entire process, and react to deadline violations when they occur. In this paper we describe how time information can be captured in the workflow definition, and we propose a technique for calculating internal activity deadlines with the goal to meet the overall deadlines during process execution.
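The calculation of internal activity deadlines from an overall process deadline can be sketched for the simplest case, a sequential workflow, by spreading the available slack across activities in proportion to their estimated durations. This is an illustrative simplification; the paper's technique also has to handle branching and joins in the workflow graph:

```python
def internal_deadlines(durations, overall_deadline):
    """Assign each sequential activity an internal deadline so that the
    spare time (overall deadline minus total estimated duration) is
    distributed in proportion to the activities' durations."""
    total = sum(durations)
    slack = overall_deadline - total          # spare time to distribute
    deadlines, elapsed = [], 0.0
    for d in durations:
        elapsed += d + slack * (d / total)    # each step gets its slack share
        deadlines.append(round(elapsed, 6))
    return deadlines

# Three activities of 2, 3 and 5 hours against a 12-hour overall deadline:
print(internal_deadlines([2, 3, 5], 12))  # [2.4, 6.0, 12.0]
```

The last internal deadline always coincides with the overall deadline, so meeting each per-activity deadline during execution implies meeting the process deadline.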


International World Wide Web Conference | 1999

RaDaR: a scalable architecture for a global Web hosting service

Michael Rabinovich; Amit Aggarwal

As commercial interest in the Internet grows, more and more companies are offering the service of hosting and providing access to information that belongs to third-party information providers. In the future, successful hosting services may host millions of objects on thousands of servers deployed around the globe. To provide reasonable access performance to popular resources, these resources will have to be mirrored on multiple servers. In this paper, we identify some challenges due to the scale that a platform for such global services would face, and propose an architecture capable of handling this scale. The proposed architecture has no bottleneck points. A trace-driven simulation using an access trace from AT&T's hosting service shows very promising results for our approach.


Computer Networks and ISDN Systems | 1998

Not all hits are created equal: cooperative proxy caching over a wide-area network

Michael Rabinovich; Jeffrey S. Chase; Syam Gadde

Given the benefits of sharing a cache among large user populations, Internet service providers will likely enter into peering agreements to share their caches. This position paper describes an approach for inter-proxy cooperation in this environment. While existing cooperation models focus on maximizing global hit ratios, the value of a cache hit in this environment depends on peering agreements and the access latency of the various proxies. It may well be that obtaining an object directly from the Internet is less expensive and faster than from a distant cache. Our approach takes advantage of these distinctions to reduce the overhead for locating objects in the global cache.
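The title's point, that not all hits are created equal, reduces to a cost comparison: a hit at a peering proxy is only worth taking when its cost under the peering agreement beats fetching directly from the origin. A minimal sketch with abstract cost units (the names and values are illustrative):

```python
def cheapest_source(origin_cost, peer_hits):
    """Pick the cheapest place to fetch an object from. A peer cache that
    holds the object still loses to the origin when its agreed transfer
    cost (or latency) is higher.

    peer_hits: list of (peer_name, cost_under_peering_agreement).
    Returns the (source, cost) pair with the lowest cost.
    """
    best = ("origin", origin_cost)
    for peer, cost in peer_hits:
        if cost < best[1]:
            best = (peer, cost)
    return best

# A distant cache (cost 9) loses to the origin (cost 5); a nearby one wins.
print(cheapest_source(5, [("distant-proxy", 9), ("nearby-proxy", 2)]))
```

Ranking by per-source cost rather than by hit/miss alone is what lets the scheme skip distant-cache lookups that a pure hit-ratio objective would force.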


Web Content Caching and Distribution | 2004

Computing on the edge: a platform for replicating internet applications

Michael Rabinovich; Zhen Xiao; Amit Aggarwal

Content delivery networks (CDNs) improve the scalability of accessing static and, recently, streaming content. However, proxy caching can improve access to these types of content as well. A unique value of CDNs is therefore in improving performance of accesses to dynamic content and other computer applications. We describe an architecture, algorithms, and a preliminary performance study of a CDN for applications (ACDN). Our system includes novel algorithms for automatic redeployment of applications on networked servers as required by changing demand and for distributing client requests among application replicas based on their load and proximity. The system also incorporates a mechanism for keeping application replicas consistent in the presence of developer updates to the content. A prototype of the system has been implemented.
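The request-distribution part of such a system picks an application replica by combining proximity and load. A hedged sketch of one plausible policy, with invented inputs and threshold (the ACDN algorithms themselves are described in the paper, not reproduced here):

```python
def pick_replica(replicas, load_threshold=0.7):
    """Choose the closest replica whose load is below a threshold; if all
    replicas are busy, fall back to the least-loaded one. Distances, loads
    and the 0.7 threshold are illustrative.

    replicas: list of (name, distance_to_client, load in [0, 1]).
    """
    usable = [r for r in replicas if r[2] < load_threshold]
    if usable:
        return min(usable, key=lambda r: r[1])[0]  # closest non-busy replica
    return min(replicas, key=lambda r: r[2])[0]    # all busy: least loaded

# The close replica is overloaded, so the request goes to the farther one.
print(pick_replica([("replica-a", 1, 0.9), ("replica-b", 5, 0.2)]))  # replica-b
```

The same load signal that steers requests can also feed the redeployment algorithm: sustained overflow past all nearby replicas is the cue to deploy a new application copy closer to the demand.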


International World Wide Web Conference | 2004

Characterization of a large web site population with implications for content delivery

Leeann Bent; Michael Rabinovich; Geoffrey M. Voelker; Zhen Xiao

This paper presents a systematic study of the properties of a large number of Web sites hosted by a major ISP. To our knowledge, ours is the first comprehensive study of a large server farm that contains thousands of commercial Web sites. We also perform a simulation analysis to estimate potential performance benefits of content delivery networks (CDNs) for these Web sites. We make several interesting observations about the current usage of Web technologies and Web site performance characteristics. First, compared with previous client workload studies, the Web server farm workload contains a much higher degree of uncacheable responses and responses that require mandatory cache validations. A significant reason for this is that cookie use is prevalent among our population, especially among more popular sites. However, we found an indication of widespread indiscriminate usage of cookies, which unnecessarily impedes the use of many content delivery optimizations. We also found that most Web sites do not utilize the cache-control features of the HTTP/1.1 protocol, resulting in suboptimal performance. Moreover, the implicit expiration time in client caches for responses is constrained by the maximum values allowed in the Squid proxy. Finally, our simulation results indicate that most Web sites benefit from the use of a CDN. The amount of the benefit depends on site popularity, and, somewhat surprisingly, a CDN may increase the peak-to-average request ratio at the origin server because the CDN can decrease the average request rate more than the peak request rate.
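The cacheability findings (indiscriminate cookies, unused cache-control features) boil down to a few header checks a shared cache or CDN applies to each response. A rough sketch of such a check; real HTTP caching rules (RFC 9111) have many more cases, and the cookie rule here is a coarse approximation of the paper's observation:

```python
def is_cacheable(headers):
    """Rough test of whether a response can be served by a shared cache.

    Simplifications: substring matching on Cache-Control, and treating any
    Set-Cookie as blocking shared caching (the indiscriminate-cookie case
    the study flags). Real rules are in RFC 9111.
    """
    cc = headers.get("Cache-Control", "").lower()
    if "no-store" in cc or "private" in cc:
        return False                       # explicitly barred from shared caches
    if "Set-Cookie" in headers:
        return False                       # cookied responses: play it safe
    return "public" in cc or "max-age" in cc or "Expires" in headers

print(is_cacheable({"Cache-Control": "public, max-age=3600"}))        # True
print(is_cacheable({"Cache-Control": "max-age=60", "Set-Cookie": "id=1"}))  # False
```

Under a rule like this, a site that sets cookies on every response forfeits CDN and proxy caching even for content that never varies per user, which is exactly the wasted optimization opportunity the paper points out.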


ACM Special Interest Group on Data Communication | 2013

On modern DNS behavior and properties

Thomas Richard Callahan; Mark Allman; Michael Rabinovich

The Internet crucially depends on the Domain Name System (DNS) both to allow users to interact with the system in human-friendly terms and, increasingly, as a way to direct traffic to the best content replicas at the instant the content is requested. This paper is an initial study into the behavior and properties of the modern DNS system. We passively monitor DNS and related traffic within a residential network in an effort to understand server behavior (as viewed through DNS responses) and client behavior (as viewed through both DNS requests and the traffic that follows DNS responses). We present an initial set of wide-ranging findings.


NATO Advanced Study Institute on Workflow Management Systems | 1998

Reducing Escalation-Related Costs in WFMSs

Euthimios Panagos; Michael Rabinovich

Escalations refer to the actions taken when workflow activities miss their deadlines. Typically, escalations increase the cost of business processes due to the execution of additional activities, the compensation of finished activities, or the intervention of highly paid workers. In this paper, we present two techniques for reducing escalation-related costs: dynamic deadline adjustment and preemptive escalation. The former mechanism uses the slack accumulated during process execution to adjust the deadlines of the remaining activities, i.e., to delay escalations. The latter mechanism predicts whether a process is going to escalate at some future point, and it decides whether and when to force escalation at an early stage during execution. Preliminary experimental results show the effectiveness of our techniques.
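Dynamic deadline adjustment can be sketched for a sequential process: when an activity finishes ahead of its internal deadline, the accumulated slack is passed on to the deadlines of the remaining activities. This is an illustrative simplification of the first technique only; preemptive escalation is not modeled here:

```python
def adjust_remaining_deadlines(deadlines, actual_finish, activity_index):
    """When activity `activity_index` finishes at `actual_finish`, push any
    slack (its internal deadline minus the finish time) onto the deadlines
    of all remaining activities, delaying possible escalations."""
    slack = deadlines[activity_index] - actual_finish
    if slack <= 0:
        return deadlines                     # no slack gained: unchanged
    return (deadlines[:activity_index + 1] +
            [d + slack for d in deadlines[activity_index + 1:]])

# Activity 0 finishes at t=1 against an internal deadline of 3: the two
# spare hours are granted to the remaining activities.
print(adjust_remaining_deadlines([3, 6, 10], 1, 0))  # [3, 8, 12]
```

An activity that would have triggered an escalation at t=6 now escalates only past t=8, so transient early delays can be absorbed instead of paid for with compensation activities or manual intervention.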

Collaboration


Dive into Michael Rabinovich's collaborations.

Top Co-Authors

Mark Allman, International Computer Science Institute
Hussein A. Alzoubi, Case Western Reserve University
Yin Zhang, University of Texas System