Dalit Naor
IBM
Publication
Featured research published by Dalit Naor.
2005 IEEE International Symposium on Mass Storage Systems and Technology | 2005
Michael Factor; Kalman Z. Meth; Dalit Naor; Ohad Rodeh; Julian Satran
The concept of object storage was introduced in the early 1990s by the research community. Since then it has greatly matured and is now in its early stages of adoption by the industry. Yet, object storage is still not widely accepted. Viewing object store technology as the future building block, particularly for large storage systems, our team in IBM Haifa Research Lab has invested substantial efforts in this area. In this position paper we survey the latest developments in the area of object store technology, focusing on standardization, research prototypes, and technology adoption and deployment. A major step has been the approval of the T10 OSD protocol (version 1) as an OSD standard in late 2004. We also report on prototyping efforts carried out in IBM Haifa Research Lab in building an object store. Our latest prototype is compliant with a large subset of the T10 standard. To facilitate deployment of the new technology and protocol in the community at large, our team also implemented a T10-compliant OSD (iSCSI) initiator for Linux. The initiator is interoperable with object disks of other vendors and is available as an open source driver for Linux.
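To give a feel for the abstraction shift the paper discusses, the sketch below contrasts a block-style view of storage with an object-style one: clients address variable-length objects by identifier rather than fixed-size blocks by address. This is a minimal Python sketch, not the actual T10 OSD command set; the class and method names are invented for illustration.

```python
# Toy object-store interface, loosely in the spirit of the OSD model.
# Objects live in partitions and grow on demand; names are illustrative only.

class ToyObjectStore:
    def __init__(self):
        self._objects = {}  # (partition_id, object_id) -> bytearray

    def create(self, partition_id, object_id):
        self._objects[(partition_id, object_id)] = bytearray()

    def write(self, partition_id, object_id, offset, data):
        buf = self._objects[(partition_id, object_id)]
        if len(buf) < offset + len(data):
            buf.extend(b"\x00" * (offset + len(data) - len(buf)))
        buf[offset:offset + len(data)] = data

    def read(self, partition_id, object_id, offset, length):
        return bytes(self._objects[(partition_id, object_id)][offset:offset + length])

    def remove(self, partition_id, object_id):
        del self._objects[(partition_id, object_id)]
```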
Proceedings of SYSTOR 2009: The Israeli Experimental Systems Conference on | 2009
Miriam Allalouf; Yuriy Arbitman; Michael Factor; Ronen I. Kat; Kalman Z. Meth; Dalit Naor
Power consumption is a major issue in today's datacenters. Storage typically comprises a significant percentage of datacenter power. Thus, understanding, managing, and reducing storage power consumption is an essential aspect of any effort that addresses the total power consumption of datacenters. We developed a scalable power modeling method that estimates the power consumption of storage workloads. The modeling concept is based on identifying the major workload contributors to the power consumed by the disk arrays. To estimate the power consumed by a given host workload, our method translates the workload into the primitive activities it induces on the disks. In addition, we identified that I/O queues have a fundamental influence on power consumption. Our power estimation results are highly accurate, with only 2% deviation for typical random workloads with small transfer sizes (up to 8K), and a deviation of up to 8% for workloads with large transfer sizes. We successfully integrated our modeling into a power-aware capacity planning tool to predict system power requirements, and into an online storage system to provide online estimation of the power consumed.
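The abstract describes translating a host workload into the primitive activities it induces on the disks and accounting for the effect of I/O queues. The toy sketch below makes that structure concrete; the coefficient values, the even-spread assumption, and the queue-depth discount are invented placeholders, not the paper's measured model.

```python
# Hypothetical per-activity power coefficients (watts) for one disk in an array;
# a real model would use values measured per drive type.
COEFF = {
    "idle": 7.0,             # spinning, no I/O
    "seek_per_iops": 0.02,   # incremental watts per random I/O per second
    "xfer_per_mbps": 0.01,   # incremental watts per MB/s transferred
}

def estimate_array_power(n_disks, iops, mb_per_sec, queue_depth):
    # Translate the host workload into per-disk primitive activity rates,
    # assuming I/O is spread evenly across the disks in the array.
    per_disk_iops = iops / n_disks
    per_disk_mbps = mb_per_sec / n_disks
    # Deeper queues let the drive reorder requests and shorten seeks; this
    # crude discount stands in for the queueing effect noted in the abstract.
    seek_factor = 1.0 / (1.0 + 0.1 * max(queue_depth - 1, 0))
    per_disk = (COEFF["idle"]
                + COEFF["seek_per_iops"] * per_disk_iops * seek_factor
                + COEFF["xfer_per_mbps"] * per_disk_mbps)
    return n_disks * per_disk

print(estimate_array_power(n_disks=8, iops=2000, mb_per_sec=64, queue_depth=16))
```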
international parallel and distributed processing symposium | 2009
Danny Harnik; Dalit Naor; Itai Segall
We consider large-scale, distributed storage systems with a redundancy mechanism, cloud storage being a prime example. We investigate how such systems can reduce their power consumption during low-utilization time intervals by operating in a low-power mode. In a low-power mode, a subset of the disks or nodes is powered down, yet we ask that each data item remain accessible in the system; this is called full coverage. The objective is to incorporate this option into an existing system rather than redesign the system. When doing so, it is crucial that the low-power option not affect the performance or other important characteristics of the system during full-power (normal) operation. This work is a comprehensive study of what can or cannot be achieved with respect to full coverage low power modes. The paper addresses this question for generic distributed storage systems (where the key component under investigation is the placement function of the system) as well as for specific popular system designs in the realm of storing data in the cloud. Our observations and techniques are instrumental for a wide spectrum of systems, ranging from distributed storage systems for the enterprise to cloud data services. In the cloud environment, where low cost is imperative, the effects of such savings are magnified by the large scale.
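A toy version of the full-coverage question: given the placement function's replica map, which disks can be powered down so that every item keeps at least one live replica? The greedy sketch below is only meant to make the constraint concrete; it ignores load, performance, and the specific system designs analyzed in the paper.

```python
def low_power_subset(placement):
    """placement: dict mapping data item -> set of disks holding a replica.
    Returns a set of disks that can be powered down while every item still
    has at least one replica on a powered-on disk (full coverage)."""
    powered_off = set()
    all_disks = set().union(*placement.values())
    for disk in sorted(all_disks):  # a real system would order by load/placement
        keeps_coverage = all(
            len(replicas - powered_off - {disk}) >= 1
            for replicas in placement.values() if disk in replicas)
        if keeps_coverage:
            powered_off.add(disk)
    return powered_off

# Example: three items, two replicas each, spread over four disks.
placement = {"a": {0, 1}, "b": {1, 2}, "c": {2, 3}}
print(low_power_subset(placement))  # {0, 2}: disks 1 and 3 stay on and cover a, b, c
```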
ieee international conference on cloud computing technology and science | 2011
Elliot K. Kolodner; Sivan Tal; Dimosthenis Kyriazis; Dalit Naor; Miriam Allalouf; Lucia Bonelli; Per Brand; Albert Eckert; Erik Elmroth; Spyridon V. Gogouvitis; Danny Harnik; Francisco Hernández; Michael C. Jaeger; Ewnetu Bayuh Lakew; José Manuel López López; Mirko Lorenz; Alberto Messina; Alexandra Shulman-Peleg; Roman Talyansky; Athanasios Voulodimos; Yaron Wolfsthal
The emergence of cloud environments has made feasible the delivery of Internet-scale services by addressing a number of challenges such as live migration, fault tolerance and quality of service. However, current approaches do not tackle key issues related to cloud storage, which are of increasing importance given the enormous amount of data being produced in today's rich digital environment (e.g. by smart phones, social networks, sensors, user-generated content). In this paper we present the architecture of a scalable and flexible cloud environment addressing the challenge of providing data-intensive storage cloud services by raising the abstraction level of storage, enabling data mobility across providers, allowing computational and content-centric access to storage, and deploying new data-oriented mechanisms for QoS and security guarantees. We also demonstrate the added value and effectiveness of the proposed architecture through two real-life application scenarios from the healthcare and media domains.
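To make "raising the abstraction level of storage" and "content-centric access" more concrete, here is a small hypothetical sketch of storing objects with rich metadata and retrieving them by metadata query rather than by location or path. The API names are invented for illustration and are not the project's actual interface.

```python
class MetadataStore:
    """Toy content-centric store: objects are addressed by what they are,
    via metadata queries, rather than by where they live."""
    def __init__(self):
        self._objects = []  # list of (metadata dict, payload bytes)

    def put(self, payload, **metadata):
        self._objects.append((metadata, payload))

    def query(self, **criteria):
        return [payload for meta, payload in self._objects
                if all(meta.get(k) == v for k, v in criteria.items())]

store = MetadataStore()
store.put(b"...", kind="xray", patient="p17", modality="CT")
store.put(b"...", kind="clip", program="news", date="2011-06-01")
print(len(store.query(kind="xray", patient="p17")))  # 1
```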
First International IEEE Security in Storage Workshop, 2002. Proceedings. | 2002
Alain Azagury; Ran Canetti; Michael Factor; Shai Halevi; Ealan Henis; Dalit Naor; Noam Rinetzky; Ohad Rodeh; Julian Satran
Storage Area Networks (SAN) are based on direct interaction between clients and storage servers. This unmediated access exposes the storage server to network attacks, necessitating a verification, by the server, that the client requests conform with the system protection policy. Solutions today can only enforce access control at the granularity of entire storage servers. This is an outcome of the way storage servers abstract storage: an array of fixed-size blocks. Providing access control at the granularity of blocks is infeasible, since there are too many active blocks in the server, so the coarser granularity of entire servers is used. Object stores (e.g., the NASD system), on the other hand, provide means to address these issues. An object store control unit presents an abstraction of a dynamic collection of objects, each of which can be seen as a different array of blocks, thus providing the basis for protection at the object level. In this paper we present a security model for the object store which leverages existing security infrastructure. We give a simple generic mechanism capable of enforcing an arbitrary access control policy at object granularity. This mechanism is specifically designed to achieve low overhead by minimizing the cost of validating an operation along the critical data path, and lends itself to optimizations such as caching. The key idea of the model is to separate the mechanisms for transport security from those used for access control, and to maximize the use of standard security protocols where possible. We utilize a standard industry protocol for authentication, integrity and privacy on the communication channel (IPSec for IP networks) and define a proprietary protocol for authorization on top of the secure communication layer.
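The following sketch shows the general shape of such a credential-based mechanism: a policy/security manager issues a capability whose integrity is protected by a MAC, and the object store validates it on the data path with a single MAC computation. It is a simplified illustration, not the paper's exact protocol or message formats.

```python
import hashlib
import hmac
import os

SECRET = os.urandom(32)  # shared by the security manager and the object store

def make_capability(object_id, rights, expiry):
    # The security/policy manager issues a capability binding an object to rights.
    msg = f"{object_id}|{rights}|{expiry}".encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return {"object_id": object_id, "rights": rights, "expiry": expiry, "tag": tag}

def validate(cap, object_id, op, now):
    # The object store checks the capability on the I/O path: one MAC computation.
    msg = f"{cap['object_id']}|{cap['rights']}|{cap['expiry']}".encode()
    tag_ok = hmac.compare_digest(
        cap["tag"], hmac.new(SECRET, msg, hashlib.sha256).hexdigest())
    return (tag_ok and cap["object_id"] == object_id
            and op in cap["rights"] and now < cap["expiry"])

cap = make_capability("obj-42", rights="r", expiry=1_700_000_000)
print(validate(cap, "obj-42", "w", now=1_600_000_000))  # False: write not granted
```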
ieee conference on mass storage systems and technologies | 2012
Danny Harnik; Oded Margalit; Dalit Naor; Dmitry Sotnikov; Gil Vernik
We study the problem of accurately estimating the data reduction ratio achieved by deduplication and compression on a specific data set. This turns out to be a challenging task: it has been shown, both empirically and analytically, that essentially all of the data at hand needs to be inspected in order to come up with an accurate estimation when deduplication is involved. Moreover, even when permitted to inspect all the data, there are challenges in devising an efficient, yet accurate, method. Efficiency in this case refers to the demanding CPU, memory and disk usage associated with deduplication and compression. Our study focuses on what can be done when scanning the entire data set. We present a novel two-phased framework for such estimations. Our techniques are provably accurate, yet run with very low memory requirements and avoid the overheads associated with maintaining large deduplication tables. We give formal proofs of the correctness of our algorithm, compare it to existing techniques from the database and streaming literature, and evaluate our technique on a number of real-world workloads. For example, we estimate the data reduction ratio of a 7 TB data set with accuracy guarantees of at most a 1% relative error while using as little as 1 MB of RAM (and no additional disk access). In the interesting case of full-file deduplication, our framework readily accepts optimizations that allow estimation on a large data set without reading most of the actual data. For one of the workloads used in this work we achieved an accuracy guarantee of 2% relative error while reading only 27% of the data from disk. Our technique is practical, simple to implement, and useful for multiple scenarios, including estimating the number of disks to buy, choosing a deduplication technique, deciding whether or not to dedupe, and conducting large-scale academic studies related to deduplication ratios.
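A minimal sketch of the sample-then-scan flavor of estimation discussed above: phase one draws a uniform random sample of chunk fingerprints, phase two rescans the data counting only occurrences of the sampled fingerprints, and each sampled chunk contributes 1/frequency to the distinct-chunk estimate. Fixed-size chunking, SHA-1 fingerprints, and the sample size are illustrative choices, not the paper's exact algorithm or its accuracy guarantees.

```python
import hashlib
import random

CHUNK = 8 * 1024  # fixed-size chunking for the toy sketch

def chunks(paths):
    for p in paths:
        with open(p, "rb") as f:
            while True:
                buf = f.read(CHUNK)
                if not buf:
                    break
                yield hashlib.sha1(buf).hexdigest()

def estimate_dedup_ratio(paths, sample_size=1000, seed=0):
    # Phase 1: reservoir-sample chunk fingerprints uniformly at random.
    rng = random.Random(seed)
    sample, total = [], 0
    for fp in chunks(paths):
        total += 1
        if len(sample) < sample_size:
            sample.append(fp)
        else:
            j = rng.randrange(total)
            if j < sample_size:
                sample[j] = fp
    # Phase 2: one more full scan, counting occurrences of sampled fingerprints only,
    # so memory stays bounded by the sample size rather than a full dedup table.
    counts = {fp: 0 for fp in sample}
    for fp in chunks(paths):
        if fp in counts:
            counts[fp] += 1
    # Each sampled chunk contributes 1/frequency to the distinct-chunk estimate.
    return sum(1.0 / counts[fp] for fp in sample) / len(sample)  # distinct/total
```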
computer and communications security | 1999
Boaz Barak; Amir Herzberg; Dalit Naor; Eldad Shai
Existing security mechanisms focus on prevention of penetrations, detection of a penetration, and (manual) recovery tools. Indeed, attackers focus their penetration efforts on breaking into critical modules and on avoiding detection of the attack. As a result, security tools and procedures may cause the attackers to lose control over a specific module (computer, account), since the attacker would rather lose control than risk detection of the attack. While controlling the module, the attacker may learn critical secret information or modify the module in ways that make it much easier for the attacker to regain control over that module later. Recent results in cryptography give some hope of improving this situation; they show that many fundamental security tasks can be achieved with proactive security. Proactive security does not assume that any module is completely secure against penetration. Instead, we assume that at any given time period (day, week, ...), a sufficient number of the modules in the system are secure (not penetrated). The results obtained so far include some of the most important cryptographic primitives such as signatures, secret sharing, and secure communication. However, there was no usable implementation, and several critical issues (for actual use) were not addressed. In this work we report on a practical toolkit implementing the key proactive security mechanisms. The toolkit provides secure interfaces to make it easy for applications to recover from penetrations. The toolkit also addresses other critical implementation issues, such as the initialization of the proactive secure system. We describe the toolkit and discuss some of the potential applications. Some applications require minimal enhancements to the existing implementations, e.g. secure logging (especially for intrusion detection), secure end-to-end communication, and timestamping. Other applications require more significant enhancements, mainly distribution over multiple servers; examples are certification authority, key recovery, and secure file system or archive.
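One of the core primitives mentioned, proactive secret sharing, can be illustrated in a few lines: a secret is Shamir-shared among n servers, and each period the shares are refreshed by adding a fresh sharing of zero, so shares stolen in different periods cannot be combined. The sketch below is a toy single-process illustration (fixed prime field, no verifiability or distributed protocol), not the toolkit's implementation.

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is in the field GF(P)

def _eval(coeffs, x):
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def share(secret, k, n):
    # Random degree-(k-1) polynomial with the secret as its constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return {x: _eval(coeffs, x) for x in range(1, n + 1)}

def refresh(shares, k):
    # Periodic proactive refresh: add a random sharing of zero to every share.
    zero = share(0, k, len(shares))
    return {x: (s + zero[x]) % P for x, s in shares.items()}

def reconstruct(shares, k):
    # Lagrange interpolation at x = 0 using any k shares.
    pts = list(shares.items())[:k]
    secret = 0
    for i, (xi, yi) in enumerate(pts):
        num = den = 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, k=3, n=5)
shares = refresh(shares, k=3)            # old-period shares are now useless
assert reconstruct(shares, k=3) == 123456789
```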
workshop on storage security and survivability | 2005
Dalit Naor; Amir Shenhav; Avishai Wool
Adding security capabilities to shared, remote and untrusted storage file systems leads to performance degradation that limits their use. Public-key cryptographic primitives, widely used in such file systems, are known to have worse performance than their symmetric key counterparts. In this paper we examine design alternatives that avoid public-key cryptography operations to achieve better performance. We present the trade-offs and limitations that are introduced by these substitutions.
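As one example of the kind of substitution examined, symmetric constructions such as hash chains can replace public-key operations in key management: a reader holding the newest group key can derive all earlier key versions locally, with no signatures or public-key decryptions. This is a generic illustration of the trade-off, not necessarily one of the paper's specific designs.

```python
import hashlib

def derive_older_key(latest_key, latest_version, wanted_version):
    # Backward key derivation by hash chain: key_i = H(key_{i+1}).
    # Holding the newest key suffices to read data protected under older keys,
    # so lazy re-encryption after revocation needs no public-key operations.
    key = latest_key
    for _ in range(latest_version - wanted_version):
        key = hashlib.sha256(key).digest()
    return key

k3 = b"\x01" * 32                        # current (version 3) group key
k1 = derive_older_key(k3, 3, 1)          # version-1 key derived with two hashes
```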
IEEE Computer | 2003
Dalit Naor; Moni Naor
The authors describe two methods for protecting content by creating a legitimate distribution channel. One method broadcasts encrypted data to a selected set of users, and a tracing algorithm uncovers a compromised key's owner. The other method updates user keys to resecure a compromised network.
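The broadcast method can be illustrated with the complete-subtree idea: users are leaves of a binary tree and hold the keys of the nodes on their root path; to revoke a set of users, the content key is sent under a cover of subtree roots containing no revoked leaf. The sketch below only computes the cover and checks who can still decrypt; the actual encryption under node keys (and the tracing algorithm) is omitted.

```python
def cover_for_revoked(n_leaves, revoked):
    """Complete-subtree cover: node 1 is the root, node i has children 2i and 2i+1,
    and user u sits at leaf n_leaves + u (n_leaves is a power of two). Returns the
    subtree roots that together cover exactly the non-revoked users."""
    steiner = set()                        # union of paths from revoked leaves to root
    for u in revoked:
        node = n_leaves + u
        while node >= 1:
            steiner.add(node)
            node //= 2
    if not steiner:
        return {1}                         # nobody revoked: one key covers everyone
    return {child
            for node in steiner if node < n_leaves       # internal Steiner nodes
            for child in (2 * node, 2 * node + 1)
            if child not in steiner}       # siblings hanging off the Steiner tree

def can_decrypt(n_leaves, user, cover):
    node = n_leaves + user
    while node >= 1:                       # a user decrypts iff an ancestor is in the cover
        if node in cover:
            return True
        node //= 2
    return False

cover = cover_for_revoked(8, revoked={2, 5})
assert not can_decrypt(8, 2, cover) and can_decrypt(8, 3, cover)
```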
ieee conference on mass storage systems and technologies | 2007
Michael Factor; Dalit Naor; Eran Rom; Julian Satran
Today, access control security for storage area networks (zoning and masking) is implemented by mechanisms that are inherently insecure and tied to the physical network components. However, what we want to secure is at a higher, logical level, independent of the transport network; raising security to a logical level simplifies management, provides a more natural fit to a virtualized infrastructure, and enables finer-grained access control. In this paper, we describe the problems with existing access control security solutions and present our approach, which leverages the OSD (Object-based Storage Device) security model to provide logical, cryptographically secured, in-band access control for today's existing devices. We then show how this model can easily be integrated into existing systems and demonstrate that this in-band security mechanism has negligible performance impact while simplifying management, providing a clean match to compute virtualization, and enabling fine-grained access control.