Publication


Featured research published by Gala Yadgar.


International Conference on Distributed Computing Systems | 2008

MC2: Multiple Clients on a Multilevel Cache

Gala Yadgar; Michael Factor; Kai Li; Assaf Schuster

In today's networked storage environment, it is common to have a hierarchy of caches where the lower levels of the hierarchy are accessed by multiple clients. This sharing can have both positive and negative effects. While data fetched by one client can be used by another client without incurring additional delays, clients competing for cache buffers can evict each other's blocks and interfere with exclusive caching schemes. Our algorithm, MC2, combines local, per-client management with a global, system-wide scheme to emphasize the positive effects of sharing and reduce the negative ones. The local scheme uses readily available information about the client's future access profile to save the most valuable blocks, and to choose the best replacement policy for them. The global scheme uses the same information to divide the shared cache space between clients, and to manage this space. Exclusive caching is maintained for non-shared data and is disabled when sharing is identified. Our simulation results show that the combined algorithm significantly reduces the overall I/O response times of the system.


ACM Transactions on Computer Systems | 2011

Management of Multilevel, Multiclient Cache Hierarchies with Application Hints

Gala Yadgar; Michael Factor; Kai Li; Assaf Schuster

Multilevel caching, common in many storage configurations, introduces new challenges to traditional cache management: data must be kept in the appropriate cache and replication avoided across the various cache levels. Additional challenges are introduced when the lower levels of the hierarchy are shared by multiple clients. Sharing can have both positive and negative effects. While data fetched by one client can be used by another client without incurring additional delays, clients competing for cache buffers can evict each other’s blocks and interfere with exclusive caching schemes. We present a global, noncentralized, dynamic, and informed management policy for multiple levels of cache, accessed by multiple clients. Our algorithm, MC2, combines local, per-client management with a global, system-wide scheme, to emphasize the positive effects of sharing and reduce the negative ones. Our local management scheme, Karma, uses readily available information about the client’s future access profile to save the most valuable blocks, and to choose the best replacement policy for them. The global scheme uses the same information to divide the shared cache space between clients, and to manage this space. Exclusive caching is maintained for nonshared data and is disabled when sharing is identified. Previous studies have partially addressed these challenges through minor changes to the storage interface. We show that all these challenges can in fact be addressed by combining minor interface changes with smart allocation and replacement policies. We show the superiority of our approach through comparison to existing solutions, including LRU, ARC, MultiQ, LRU-SP, and Demote, as well as a lower bound on optimal I/O response times. Our simulation results demonstrate better cache performance than all other solutions and up to 87% better performance than LRU on representative workloads.
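The core idea of dividing a shared cache between clients, with each partition managed by its own replacement policy, can be sketched as follows. This is a minimal illustration, not the paper's MC2/Karma algorithm: the `PartitionedCache` class and fixed quotas are assumptions for the sketch, and each partition simply uses LRU rather than a hint-chosen policy.

```python
from collections import OrderedDict

class PartitionedCache:
    """Illustrative sketch: a shared cache whose space is divided into
    per-client LRU partitions, so one client's misses cannot evict
    another client's blocks (not the actual MC2/Karma algorithm)."""

    def __init__(self, quotas):
        # quotas: {client_id: max_blocks} -- fixed here; MC2 derives the
        # division from application hints instead.
        self.quotas = quotas
        self.parts = {c: OrderedDict() for c in quotas}

    def access(self, client, block):
        """Return True on a cache hit, False on a miss (block inserted)."""
        part = self.parts[client]
        if block in part:
            part.move_to_end(block)        # hit: refresh LRU position
            return True
        if len(part) >= self.quotas[client]:
            part.popitem(last=False)       # evict this client's own LRU block
        part[block] = True
        return False
```

For example, with quotas `{"a": 2, "b": 2}`, client `a` filling its two slots never evicts client `b`'s blocks.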


International Symposium on Information Theory | 2015

When do WOM codes improve the erasure factor in flash memories?

Eitan Yaakobi; Alexander Yucovich; Gal Maor; Gala Yadgar

Flash memory is a write-once medium in which re-programming cells requires first erasing the block that contains them. The lifetime of the flash is a function of the number of block erasures and can be as small as several thousand. To reduce the number of block erasures, pages, which are the smallest write unit, are rewritten out-of-place in the memory. A write-once memory (WOM) code is a coding scheme that enables writing multiple times to the block before an erasure. However, these codes come with significant rate loss. For example, the rate for writing twice (with the same rate) is at most 0.77. In this paper, we study WOM codes and their tradeoff between rate loss and reduction in the number of block erasures, when pages are written uniformly at random. First, we introduce a new measure, called the erasure factor, that reflects both the number of block erasures and the amount of data that can be written on each block. A key point in our analysis is that this tradeoff depends upon the specific implementation of WOM codes in the memory. We consider two systems that use WOM codes: a conventional scheme that was commonly used, and a new recent design that preserves the overall storage capacity. While the first system can improve the erasure factor only when the storage rate is at most 0.6442, we show that the second scheme always improves this figure of merit.
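The rate loss discussed above can be seen concretely in the classic two-write WOM code of Rivest and Shamir, which stores 2 data bits twice in 3 write-once cells, for a per-write rate of 2/3 (below the 0.77 bound mentioned in the abstract). A minimal sketch, not taken from the paper:

```python
# Rivest-Shamir two-write WOM code: 2 bits stored twice in 3 cells,
# where cells may only change 0 -> 1 between erasures.
FIRST = {(0, 0): (0, 0, 0), (0, 1): (0, 0, 1),
         (1, 0): (0, 1, 0), (1, 1): (1, 0, 0)}
# Second-write codewords are the bitwise complements of the first-write ones.
SECOND = {d: tuple(1 - c for c in w) for d, w in FIRST.items()}

def decode(cells):
    """Low-weight words belong to the first generation, high-weight to the second."""
    table = FIRST if sum(cells) <= 1 else SECOND
    return next(d for d, w in table.items() if w == cells)

def write(data, cells):
    """Encode `data` given the current cell state, using only 0 -> 1 transitions."""
    if sum(cells) == 0:
        target = FIRST[data]               # fresh (erased) cells
    elif decode(cells) == data:
        target = cells                     # same data: leave cells untouched
    else:
        target = SECOND[data]              # second write: complement codeword
    assert all(t >= c for t, c in zip(target, cells)), "illegal 1 -> 0 transition"
    return target
```

Distinct first-write codewords have disjoint supports, which is what guarantees that any second write only raises cells from 0 to 1.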


IEEE Conference on Mass Storage Systems and Technologies | 2013

Cooperative caching with return on investment

Gala Yadgar; Michael Factor; Assaf Schuster

Large-scale consolidation of distributed systems introduces data sharing between consumers that are not centrally managed, but may be physically adjacent. For example, shared global data sets can be jointly used by different services of the same organization, possibly running on different virtual machines in the same data center. Similarly, neighboring CDNs provide fast access to the same content from the Internet. Cooperative caching, in which data are fetched from a neighboring cache instead of from the disk or from the Internet, can significantly improve resource utilization and performance in such scenarios. However, existing cooperative caching approaches fail to address the selfish nature of cache owners and their conflicting objectives. This calls for a new storage model that explicitly considers the cost of cooperation, and provides a framework for calculating the utility each owner derives from its cache and from cooperating with others. We define such a model, and construct four representative cooperation approaches to demonstrate how (and when) cooperative caching can be successfully employed in such large-scale systems. We present principal guidelines for cooperative caching derived from our experimental analysis. We show that choosing the best cooperative approach can decrease the system's I/O delay by as much as 87%, while imposing cooperation when unwarranted might increase it by as much as 92%.
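The return-on-investment reasoning behind such a cost model can be illustrated with a toy calculation. Everything here, including the function name, parameters, and latency figures, is a hypothetical sketch and not the paper's actual utility framework:

```python
def cooperation_utility(local_hits, peer_hits, misses,
                        t_local=0.2, t_peer=1.0, t_disk=10.0,
                        serve_cost=0.5, served_to_peers=0):
    """Hypothetical ROI estimate for one cache owner (all times in ms).
    Compares total I/O time when working alone vs. when cooperating,
    charging `serve_cost` for each request served to a neighbor.
    Illustrative only; not the cost model from the paper."""
    # Alone: requests that peers would have served become disk/Internet misses.
    alone = local_hits * t_local + (peer_hits + misses) * t_disk
    # Cooperating: peer hits are cheaper than disk, but serving peers costs time.
    coop = (local_hits * t_local + peer_hits * t_peer + misses * t_disk
            + served_to_peers * serve_cost)
    return alone - coop  # positive => cooperation pays off for this owner
```

With many peer hits the utility is positive; an owner that mostly serves others while gaining few peer hits sees a negative utility, matching the observation that imposing cooperation when unwarranted can hurt performance.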


ACM International Conference on Systems and Storage | 2018

How to Best Share a Big Secret

Roman Shor; Gala Yadgar; Wentao Huang; Eitan Yaakobi; Jehoshua Bruck

When sensitive data is stored in the cloud, the only way to ensure its secrecy is by encrypting it before it is uploaded. The emerging multi-cloud model, in which data is stored redundantly in two or more independent clouds, provides an opportunity to protect sensitive data with secret-sharing schemes. Both data-protection approaches are considered computationally expensive, but recent advances reduce their costs considerably: (1) Hardware acceleration methods promise to eliminate the computational complexity of encryption, but leave clients with the challenge of securely managing encryption keys. (2) Secure RAID, a recently proposed scheme, minimizes the computational overheads of secret sharing, but requires non-negligible storage overhead and random data generation. Each data-protection approach offers different tradeoffs and security guarantees. However, when comparing them, it is difficult to determine which approach will provide the best application-perceived performance, because previous studies were performed before their recent advances were introduced. To bridge this gap, we present the first end-to-end comparison of state-of-the-art encryption-based and secret-sharing data-protection approaches. Our evaluation on a local cluster and on a multi-cloud prototype identifies the tipping point at which the bottleneck of data protection shifts from the computational overhead of encoding and random data generation to storage and network bandwidth and global availability.
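The secret-sharing idea underlying the multi-cloud model can be illustrated with the simplest (n, n) XOR scheme, in which all n shares are required to recover the secret and any n-1 shares reveal nothing. This is a minimal illustration of the general approach, not the Secure RAID scheme evaluated in the paper:

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list:
    """(n, n) XOR secret sharing: n-1 uniformly random shares, plus one
    share that XORs with them back to the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, secret))
    return shares

def combine(shares: list) -> bytes:
    """XOR all shares together to recover the secret."""
    return reduce(xor, shares)
```

Note that `split` draws n-1 random strings as long as the secret itself; this is exactly the random-data-generation cost the abstract identifies as a bottleneck. Threshold schemes (e.g., Shamir's) allow recovery from a subset of shares, at additional computational cost.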


ACM Transactions on Storage | 2018

An Analysis of Flash Page Reuse With WOM Codes

Gala Yadgar; Eitan Yaakobi; Fabio Margaglia; Yue Li; Alexander Yucovich; Nachum Bundak; Lior Gilon; Nir Yakovi; Assaf Schuster; André Brinkmann

Flash memory is prevalent in modern servers and devices. Coupled with the scaling down of flash technology, the popularity of flash memory motivates the search for methods to increase flash reliability and lifetime. Erasures are the dominant cause of flash cell wear, but reducing them is challenging because flash is a write-once medium: memory cells must be erased prior to writing. An approach that has recently received considerable attention relies on write-once memory (WOM) codes, designed to accommodate additional writes on write-once media. However, the techniques proposed for reusing flash pages with WOM codes are limited in their scope. Many focus on the coding theory alone, whereas others suggest FTL designs that are application specific, or not applicable due to their complexity, overheads, or specific constraints of multilevel cell (MLC) flash. This work is the first that addresses all aspects of page reuse within an end-to-end analysis of a general-purpose FTL on MLC flash. We use a hardware evaluation setup to directly measure the short- and long-term effects of page reuse on SSD durability and energy consumption, and show that FTL design must explicitly take them into account. We then provide a detailed analytical model for deriving the optimal garbage collection policy for such FTL designs, and for predicting the benefit from reuse on realistic hardware and workload characteristics.
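For context on the garbage collection policies mentioned above: an FTL's garbage collector picks a victim block, relocates its still-valid pages, and then erases the block. A common baseline is the greedy policy, sketched below; this is a standard illustration, not the analytical model derived in the paper:

```python
def greedy_victim(blocks):
    """Greedy GC baseline: choose the block with the fewest valid pages,
    since erasing it requires relocating the least data.
    `blocks` maps block id -> set of valid page ids."""
    return min(blocks, key=lambda b: len(blocks[b]))

def collect(blocks, victim, free_block):
    """Relocate the victim's valid pages to a free block, then erase the
    victim (each call to this models one block erasure of wear)."""
    blocks[free_block] = set(blocks[victim])  # copy valid pages out
    blocks[victim] = set()                    # victim erased, now free
    return blocks
```

Page reuse with WOM codes changes this calculus, because some invalid pages can absorb a second write instead of waiting for an erasure, which is why the choice of GC policy must be revisited.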


ACM Transactions on Storage | 2017

Experience from Two Years of Visualizing Flash with SSDPlayer

Gala Yadgar; Roman Shor

Data visualization is a thriving field of computer science, with widespread impact on diverse scientific disciplines, from medicine and meteorology to visual data mining. Advances in large-scale storage systems, as well as low-level storage technology, played a significant role in accelerating the applicability and adoption of modern visualization techniques. Ironically, “the cobbler’s children have no shoes”: Researchers who wish to analyze storage systems and devices are usually limited to a variety of static histograms and basic displays. The dynamic nature of data movement on flash has motivated the introduction of SSDPlayer, a graphical tool for visualizing the various processes that cause data movement on solid-state drives (SSDs). In 2015, we used the initial version of SSDPlayer to demonstrate how visualization can assist researchers and developers in their understanding of modern, complex flash-based systems. While we continued to use SSDPlayer for analysis purposes, we found it extremely useful for education and presentation purposes as well. In this article, we describe our experience from two years of using, sharing, and extending SSDPlayer and how similar techniques can further advance storage systems research and education.


File and Storage Technologies | 2007

Karma: know-it-all replacement for a multilevel cache

Gala Yadgar; Michael Factor; Assaf Schuster


File and Storage Technologies | 2015

Write once, get 50% free: saving SSD erase costs using WOM codes

Gala Yadgar; Eitan Yaakobi; Assaf Schuster


File and Storage Technologies | 2016

The devil is in the details: implementing flash page reuse with WOM codes

Fabio Margaglia; Gala Yadgar; Eitan Yaakobi; Yue Li; Assaf Schuster; André Brinkmann

Collaboration


Dive into Gala Yadgar's collaborations.

Top Co-Authors

Assaf Schuster | Technion – Israel Institute of Technology
Eitan Yaakobi | Technion – Israel Institute of Technology
Roman Shor | Technion – Israel Institute of Technology
Alexander Yucovich | Technion – Israel Institute of Technology
Lior Gilon | Technion – Israel Institute of Technology
Nachum Bundak | Technion – Israel Institute of Technology