
Publication


Featured research published by Assaf Natanzon.


ACM Transactions on Storage | 2013

Dynamic Synchronous/Asynchronous Replication

Assaf Natanzon; Eitan Bachmat

Online, remote data replication is critical for today’s enterprise IT organization. Availability of data is key to the success of the organization, and a few hours of downtime can cost from thousands to millions of dollars. With increasing frequency, companies are instituting disaster recovery plans to ensure appropriate data availability in the event of a catastrophic failure or disaster that destroys a site (e.g., flood, fire, or earthquake). Synchronous and asynchronous replication technologies have been available for a long period of time. Synchronous replication has the advantage of no data loss, but due to latency it is limited by distance and bandwidth. Asynchronous replication, on the other hand, has no distance limitation, but leads to some data loss which is proportional to the data lag. We present a novel method, implemented within EMC RecoverPoint, which allows the system to move dynamically between these replication options without any disruption to the I/O path. As latency grows, the system moves from synchronous replication to semi-synchronous replication and then to snapshot shipping; it returns to synchronous replication as more bandwidth becomes available and latency allows.
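As a rough illustration of the mode-switching behavior described above, the following sketch chooses a replication mode from measured round-trip latency and available bandwidth. The thresholds and names are hypothetical and do not reflect the actual RecoverPoint implementation.

```python
from enum import Enum

class Mode(Enum):
    SYNCHRONOUS = "synchronous"        # every write acknowledged by the remote site
    SEMI_SYNCHRONOUS = "semi-sync"     # bounded lag, writes acknowledged locally
    SNAPSHOT_SHIPPING = "snapshots"    # periodic transfer of modified regions

# Hypothetical thresholds; a real system would tune these per link and workload.
SYNC_LATENCY_MS = 5.0        # above this round-trip latency, leave synchronous mode
SEMI_SYNC_LATENCY_MS = 50.0  # above this, fall back to snapshot shipping
MIN_BANDWIDTH_MBPS = 100.0   # minimum bandwidth to sustain synchronous replication

def choose_mode(rtt_ms: float, bandwidth_mbps: float) -> Mode:
    """Pick a replication mode from current link measurements."""
    if rtt_ms <= SYNC_LATENCY_MS and bandwidth_mbps >= MIN_BANDWIDTH_MBPS:
        return Mode.SYNCHRONOUS
    if rtt_ms <= SEMI_SYNC_LATENCY_MS:
        return Mode.SEMI_SYNCHRONOUS
    return Mode.SNAPSHOT_SHIPPING

# Example: as latency grows the system steps down, and steps back up when it recovers.
for rtt, bw in [(2.0, 500.0), (20.0, 500.0), (120.0, 500.0), (3.0, 800.0)]:
    print(rtt, bw, choose_mode(rtt, bw).value)
```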


High Performance Computing and Communications | 2013

Automated Tiering in a QoS Environment Using Coarse Data

Gal Lipetz; Etai Hazan; Assaf Natanzon; Eitan Bachmat

We present a method for providing automated performance management for a storage array with quality of service demands. Our method dynamically assigns application data to heterogeneous storage devices, taking into account response time targets for the different applications. The optimization process uses widely available coarse counter statistics. We use analytical modeling of the storage system components and a gradient descent mechanism to minimize the target function. We verify the effectiveness of our optimization procedure using traces from real production systems, and show that our method improves the performance of the more significant applications, and can do so using only the coarse statistics. Our results show substantial improvements for preferred applications over an optimization procedure which does not take QoS requirements into account. We also present evidence of the complex behavior of real production storage systems, which has to be taken into account in the optimization process.
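The following toy loop illustrates the flavor of the optimization: a made-up linear response-time model, hypothetical QoS weights, and a projected gradient descent step over the fraction of each application's data placed on the fast tier. The paper's analytical model and counter statistics are not reproduced here.

```python
import numpy as np

# Hypothetical setup: 3 applications, each with an I/O rate and a QoS weight.
io_rate = np.array([400.0, 250.0, 100.0])   # IOPS per application (coarse counters)
qos_weight = np.array([5.0, 2.0, 1.0])      # preferred applications weigh more
FAST_MS, SLOW_MS = 1.0, 8.0                 # assumed device response times
FAST_CAPACITY = 1.2                         # total fraction of app data the fast tier can hold

def weighted_response_time(frac):
    """Simple linear model: response time mixes fast and slow tiers by placement."""
    rt = frac * FAST_MS + (1.0 - frac) * SLOW_MS
    return float(np.sum(qos_weight * io_rate * rt))

def project(frac):
    """Keep fractions in [0, 1] and respect the fast-tier capacity."""
    frac = np.clip(frac, 0.0, 1.0)
    if frac.sum() > FAST_CAPACITY:
        frac *= FAST_CAPACITY / frac.sum()
    return frac

frac = np.full(3, 0.1)      # start with a small fraction of each app on the fast tier
step = 1e-6
for _ in range(200):
    grad = qos_weight * io_rate * (FAST_MS - SLOW_MS)   # gradient of the target function
    frac = project(frac - step * grad)                  # step toward lower weighted response time

print("fast-tier fractions:", np.round(frac, 2))
print("weighted response time:", round(weighted_response_time(frac), 1))
```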


Measurement and Modeling of Computer Systems | 2012

Analysis of SITA queues with many servers and spacetime geometry

Eitan Bachmat; Assaf Natanzon

SITA queues were introduced in [4] as a means for reducing job size variance at individual hosts in a server farm. It turns out that SITA queues are mathematically very interesting. For example, they satisfy a duality that is typical of automorphic forms in number theory. This leads to useful queueing theoretic insights [1] and is also related to interesting number theoretic questions [7], [5]. In this paper we consider other aspects of SITA queues which turn out to be related to two-dimensional Lorentzian geometry. In particular we are interested in the behavior of SITA queues as h → ∞. The tail of the waiting time function was studied in detail in [8]. We concentrate on the average waiting time E(W), rather than the tail, since it leads to some interesting analogies and insights. A SITA queue consists of h hosts, numbered 1, ..., h, and a set of cutoffs 0 = s0 < s1 < ... < sh = ∞ which partition incoming jobs by size among the hosts. Hosts may have different service time functions tj(x); hosts are proportional if there are constants cj > 1 such that tj(x)/t1(x) = cj for all x. In this case, we again take t1 to be the identity (job sizes are measured by the time they take on host 1).
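A small simulation can illustrate the SITA dispatching rule itself (size-based routing to hosts by cutoffs) and an estimate of E(W); the cutoff, arrival rate, and job-size distribution below are hypothetical and unrelated to the paper's spacetime analysis.

```python
import random

random.seed(0)

# SITA with h = 2 identical hosts: jobs of size <= CUTOFF go to host 1, larger jobs to host 2.
CUTOFF = 2.0
ARRIVAL_RATE = 0.5          # jobs per unit time (hypothetical)

def simulate(num_jobs=100_000):
    host_free = [0.0, 0.0]   # time at which each host next becomes idle
    clock, total_wait = 0.0, 0.0
    for _ in range(num_jobs):
        clock += random.expovariate(ARRIVAL_RATE)       # Poisson arrivals
        size = random.paretovariate(2.5)                 # heavy-tailed job sizes
        host = 0 if size <= CUTOFF else 1                # the SITA cutoff rule
        wait = max(0.0, host_free[host] - clock)         # FCFS queue at the chosen host
        total_wait += wait
        host_free[host] = clock + wait + size
    return total_wait / num_jobs

print("estimated E(W):", round(simulate(), 3))
```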


Networking, Architecture and Storage | 2016

Hybrid Replication: Optimizing Network Bandwidth and Primary Storage Performance for Remote Replication

Assaf Natanzon; Philip Shilane; Mark Abashkin; Leehod Baruch; Eitan Bachmat

Traditionally, there are two main forms of data replication to secondary locations: continuous and snapshot-based. Continuous replication mirrors every I/O to a remote server, which maintains the most up-to-date state, though at a large network bandwidth cost. Snapshot replication periodically transfers modified regions, so it has lower network bandwidth requirements since repeatedly overwritten regions are only transferred once. Snapshot replication, though, comes with larger I/O loads on primary storage since it must read modified regions to transfer. To achieve the benefits of both approaches, we present hybrid replication, which is a novel mix of continuous replication and snapshot-based replication. Hybrid replication selects data regions with high overwrite characteristics to be protected by snapshots, while data regions with fewer overwrites are protected by continuous replication. In experiments with real-world storage traces, hybrid replication reduces network bandwidth up to 40% relative to continuous replication and I/O requirements on primary storage up to 90% relative to snapshot replication.
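A hedged sketch of the region-selection idea: regions are classified by an overwrite counter against a hypothetical threshold, with hot regions protected by snapshots and cold regions replicated continuously. The actual selection policy in the paper is more involved.

```python
# Classify fixed-size data regions by overwrite rate: regions rewritten often are
# protected by periodic snapshots (overwrites collapse between snapshots), while
# rarely overwritten regions are mirrored continuously.
OVERWRITE_THRESHOLD = 3          # hypothetical overwrites-per-interval cutoff

def classify_regions(overwrite_counts: dict[int, int]) -> dict[int, str]:
    policy = {}
    for region, overwrites in overwrite_counts.items():
        policy[region] = "snapshot" if overwrites >= OVERWRITE_THRESHOLD else "continuous"
    return policy

# Example: region 7 is a hot, repeatedly overwritten block range; region 3 is cold.
counts = {3: 0, 7: 12, 11: 2, 19: 5}
print(classify_regions(counts))
# {3: 'continuous', 7: 'snapshot', 11: 'continuous', 19: 'snapshot'}
```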


ACM International Conference on Systems and Storage | 2016

Black box replication: Breaking the latency limits

Assaf Natanzon; Alex Winokur; Eitan Bachmat

Synchronous replication is critical for today's enterprise IT organization. It is mandated by regulation in several countries for some types of organizations, including banks and insurance companies. The technology has been available for a long period of time, but due to the speed of light and maximum latency limitations, it is usually limited to a distance of 50-100 miles. Flight data recorders, also known as black boxes, have long been used to record the last actions which happened in airplanes at times of disasters. We present an integration between an Enterprise Data Recorder and an asynchronous replication mechanism, which allows breaking the functional limits that light speed imposes on synchronous replication.
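A speculative sketch of the write path such an integration implies: writes are persisted to a nearby data recorder before being acknowledged, while shipment to the distant replica proceeds asynchronously. Class and method names are illustrative, not the paper's design.

```python
from collections import deque

class BlackBoxReplicator:
    """Illustrative write path: synchronous to a nearby disaster-proof recorder,
    asynchronous to the distant replica."""

    def __init__(self):
        self.recorder_log = []        # nearby data recorder (assumed to survive site loss)
        self.remote_queue = deque()   # writes awaiting asynchronous shipment
        self.remote_state = {}        # distant replica, updated with some lag

    def write(self, block: int, data: bytes) -> None:
        # Synchronous part: persist to the local recorder before acknowledging.
        self.recorder_log.append((block, data))
        # Asynchronous part: queue for the distant site; no long-haul latency on the I/O path.
        self.remote_queue.append((block, data))

    def ship(self) -> None:
        """Drain the async queue toward the distant replica (runs in the background)."""
        while self.remote_queue:
            block, data = self.remote_queue.popleft()
            self.remote_state[block] = data

r = BlackBoxReplicator()
r.write(0, b"journal entry")
r.ship()
print(r.remote_state)
```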


International Performance Computing and Communications Conference | 2015

Integrated caching and tiering according to use and QoS requirements

Mark Abashkin; Assaf Natanzon; Eitan Bachmat

In this paper we consider the management of a tiered storage system consisting of disk and flash drive storage and a DRAM cache, with the challenge of taking into account heterogeneous quality of service (QoS) requirements. We integrate and adjust methods, developed in previous work, which control the use of fast resources such as flash drives and cache according to the user access patterns of different data extents and the QoS requirements of the extents. Using traces from real production systems, we show that the benefits of the integrated system are substantially larger than those provided by either method alone. Our method is able to substantially improve the performance of data with high QoS demands, with little or no damage to data with low QoS demands. Thus we are able to exploit the resources of the storage system to the advantage of all data types. We show improvements in the range of 9%-71% for datasets with the highest QoS requirements, and 0%-70% response time improvement overall, compared to a QoS-optimized system that takes only disk drive resource allocation into consideration. In workloads where cache is useful we obtain large gains, showing that it is important to integrate back-end (drives) and front-end (cache) optimization.
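As a toy illustration of integrated placement, the sketch below ranks data extents by a combined access-frequency and QoS score and fills the fastest resources first. The scores, capacities, and names are hypothetical and not the method developed in the paper.

```python
# Rank data extents by a combined score of access frequency and QoS weight, then fill
# the fastest resources first: DRAM cache, then flash tier, then disk.
CACHE_SLOTS, FLASH_SLOTS = 2, 3   # hypothetical capacities, in extents

def place_extents(extents):
    """extents: list of (name, accesses_per_hour, qos_weight)."""
    ranked = sorted(extents, key=lambda e: e[1] * e[2], reverse=True)
    placement = {}
    for i, (name, _, _) in enumerate(ranked):
        if i < CACHE_SLOTS:
            placement[name] = "cache"
        elif i < CACHE_SLOTS + FLASH_SLOTS:
            placement[name] = "flash"
        else:
            placement[name] = "disk"
    return placement

extents = [("db-index", 900, 5), ("db-log", 700, 5), ("mail", 400, 2),
           ("archive", 50, 1), ("scratch", 300, 1), ("reports", 120, 2)]
print(place_extents(extents))
```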


ACM International Conference on Systems and Storage | 2013

Virtual point in time access

Assaf Natanzon; Eitan Bachmat

Continuous Data Protection (CDP) is a method for capturing all changes occurring to a storage device, allowing fine-granularity restore of objects from crash-consistent images. In this paper we introduce a method for creating a virtual image of a block storage device, using a CDP journal log and an image of the device at one point in time. The disk image for any point in time is created on demand. The creation algorithm is very efficient and takes only a few minutes for multiple terabytes of changes. The algorithm for creating the image can be formalized as a map/reduce computation and can be parallelized easily over multiple machines to reduce the creation time. On-demand creation of the virtual image using journaling methods minimizes the effect on the production volumes, allowing the use of CDP for enterprise-class applications.
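A minimal sketch of constructing a virtual point-in-time image from a base image and a CDP journal, expressed in the map/reduce style mentioned above: map journal entries by block, reduce to the latest write at or before the requested time, and overlay onto the base. Block granularity, journal format, and names are assumptions.

```python
from collections import defaultdict

def virtual_image(base_image: dict[int, bytes], journal, point_in_time: float) -> dict[int, bytes]:
    """base_image: block -> data at time 0; journal: iterable of (timestamp, block, data)."""
    # Map: keep only journal entries at or before the requested point in time, keyed by block.
    # Reduce: for each block, the latest write wins.
    latest = defaultdict(lambda: (-1.0, None))
    for ts, block, data in journal:
        if ts <= point_in_time and ts > latest[block][0]:
            latest[block] = (ts, data)
    # Overlay the reduced journal onto the base image to form the virtual image.
    image = dict(base_image)
    for block, (_, data) in latest.items():
        image[block] = data
    return image

base = {0: b"A", 1: b"B", 2: b"C"}
journal = [(1.0, 1, b"B1"), (2.0, 2, b"C1"), (3.0, 1, b"B2")]
print(virtual_image(base, journal, point_in_time=2.5))   # {0: b'A', 1: b'B1', 2: b'C1'}
```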


Archive | 2006

Methods and apparatus for optimal journaling for continuous data replication

Shlomo Ahal; Assaf Natanzon; Tzach Sechner; Oded Kedem; Evgeny Drukh


Archive | 2006

Cross tagging to data for consistent recovery

Michael Lewin; Yair Heller; Ziv Kedem; Shlomo Ahal; Assaf Natanzon; Evgeny Drukh


Archive | 2006

Methods and apparatus for point in time data access and recovery

Michael Lewin; Yair Heller; Ziv Kedem; Shlomo Ahal; Assaf Natanzon; Avi Shoshan; Evgeny Drukh; Efrat Angel; Oded Weber

Collaboration


Dive into Assaf Natanzon's collaborations.

Top Co-Authors


Eitan Bachmat

Ben-Gurion University of the Negev
