Norio Shimozono
Hitachi
Publications
Featured research published by Norio Shimozono.
Parallel, Distributed and Network-Based Processing | 2015
Hiroaki Akutsu; Kazunori Ueda; Takeru Chiba; Tomohiro Kawaguchi; Norio Shimozono
In recent data centres, large-scale storage systems storing big data comprise thousands of large-capacity drives. Our goal is to establish a method for building highly reliable storage systems using more than a thousand low-cost large-capacity drives. Some large-scale storage systems protect data by erasure coding to prevent data loss. As the redundancy level of erasure coding is increased, the probability of data loss decreases, but at the cost of additional write operations and extra storage for coding. We therefore need to achieve high reliability at the lowest possible redundancy level. There are two concerns regarding reliability in large-scale storage systems: (i) as the number of drives increases, systems are more subject to multiple drive failures, and (ii) distributing stripes among many drives can speed up the rebuild time but increases the risk of data loss due to multiple drive failures. These concerns were not addressed in prior quantitative reliability studies based on realistic settings. In this work, we analyze the reliability of large-scale storage systems with distributed stripes, focusing on an effective rebuild method which we call Dynamic Refuging. Dynamic Refuging rebuilds failed storage areas starting from those with the lowest redundancy and strategically selects blocks to read for repairing lost data. We modeled the dynamically changing amount of storage at each redundancy level under multiple drive failures, and performed a reliability analysis with Monte Carlo simulation using realistic drive failure characteristics. When stripes with redundancy level 3 were sufficiently distributed and rebuilt by Dynamic Refuging, the probability of data loss decreased by two orders of magnitude for systems with 384 or more drives compared to normal RAID. The technique scales well: a system with 1536 inexpensive drives attained a lower data loss probability than a RAID 6 system with 16 enterprise-class drives.
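As a rough illustration of the kind of analysis the abstract describes, the sketch below runs a small Monte Carlo simulation of a fully distributed, triple-redundancy fleet in which the most-degraded areas are always rebuilt first, the core idea behind Dynamic Refuging. This is not the authors' simulator: the drive MTTF, rebuild time, mission length, exponential lifetimes, and the simplification that every failure degrades the minimum-redundancy area are placeholder assumptions chosen for brevity.

```python
import random

def run_trial(rng: random.Random,
              n_drives: int = 384,          # fleet size (assumption)
              redundancy: int = 3,          # failures a stripe tolerates
              mttf_hours: float = 1e6,      # per-drive MTTF (assumption)
              rebuild_hours: float = 10.0,  # time to restore one level (assumption)
              mission_hours: float = 5 * 365 * 24) -> bool:
    """Return True if data loss occurs during the mission.

    Simplified model: stripes are fully distributed, so each drive failure
    removes one level of redundancy from the worst-off stripes, and rebuild
    always works on the lowest-redundancy areas first (the essence of
    Dynamic Refuging). Drive lifetimes are exponential; a rebuild in
    progress restarts whenever another drive fails, a pessimistic
    simplification.
    """
    fail_rate = n_drives / mttf_hours  # aggregate failure rate of the fleet
    t, lost = 0.0, 0                   # elapsed time, redundancy levels lost
    while t < mission_hours:
        t_next_fail = rng.expovariate(fail_rate)
        if lost == 0:
            t += t_next_fail           # jump ahead to the next failure
            lost = 1
        elif t_next_fail < rebuild_hours:
            t += t_next_fail           # another failure beats the rebuild
            lost += 1
            if lost > redundancy:
                return True            # redundancy exhausted: data loss
        else:
            t += rebuild_hours         # rebuild restores one redundancy level
            lost -= 1
    return False

rng = random.Random(42)
trials = 10_000
losses = sum(run_trial(rng) for _ in range(trials))
print(f"estimated data-loss probability over 5 years: {losses / trials:.2e}")
```

With these placeholder parameters data loss is extremely rare, so a study like the paper's needs far more trials (or variance-reduction techniques) together with realistic, non-exponential drive failure characteristics rather than this toy model.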
International Conference on Advanced Applied Informatics | 2016
Miho Imazaki; Norio Shimozono; Tomohiro Yoshihara; Norihisa Komoda
RAID5/6 is widely adopted in enterprise storage systems despite its write-penalty problem, which consumes cache memory bandwidth. One existing way to reduce the data transfer overhead is to use the XDWRITE/XPWRITE commands defined in the SCSI specification. However, this method still consumes bandwidth because the XOR data is duplicated into cache memories for failure recovery. In this paper, we propose a new destaging method that transfers the XOR data through buffer memory. In addition, we propose new commands that provide a rollback function exploiting the FTL in SSDs, which simplifies failure recovery when a buffer memory fails during destaging. We evaluated the throughput improvement using an analytical model. Our proposal improved random-write throughput by 33% and SPC Benchmark-1™ throughput by 29% over the conventional method.
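To make the data path concrete, here is a minimal sketch of the XOR-delta (read-modify-write) parity update that XDWRITE/XPWRITE-style destaging builds on. It is a plain-Python illustration, not the proposed controller design: the block size, the in-memory "drives", and the function names are assumptions, and the paper's buffer-memory routing and FTL-based rollback commands are not modeled.

```python
BLOCK = 16  # bytes per block (assumption; real systems use 4 KiB or larger)

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length buffers."""
    return bytes(x ^ y for x, y in zip(a, b))

def destage_write(data_drive: bytearray, parity_drive: bytearray,
                  new_data: bytes) -> None:
    """Update one data block and its RAID5 parity using the XOR delta.

    Step 1 (XDWRITE-like): the data drive computes old_data XOR new_data
    and hands back only the delta instead of the full old contents.
    Step 2 (XPWRITE-like): the parity drive XORs the delta into the
    existing parity. Only the delta crosses the interconnect, which is
    what makes it possible to route it through buffer memory instead of
    duplicating it across cache memories.
    """
    old_data = bytes(data_drive)
    delta = xor(old_data, new_data)                     # XDWRITE: delta on the data drive
    data_drive[:] = new_data                            # commit the new data
    parity_drive[:] = xor(bytes(parity_drive), delta)   # XPWRITE: fold delta into parity

# Usage: a 3-drive RAID5-style stripe (two data blocks plus parity).
d0 = bytearray(b"A" * BLOCK)
d1 = bytearray(b"B" * BLOCK)
p = bytearray(xor(bytes(d0), bytes(d1)))

destage_write(d0, p, b"C" * BLOCK)
assert bytes(p) == xor(bytes(d0), bytes(d1))  # parity invariant still holds
print("parity consistent after XOR-delta destage")
```

The point of the flow is that only the XOR delta moves between components; the paper's contribution is routing that delta through buffer memory, with an SSD rollback mechanism for recovery, rather than keeping duplicate copies in cache.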
Archive | 2003
Norio Shimozono; Naoko Iwami; Kiyoshi Honda
Archive | 2006
Norio Shimozono; Akira Fujibayashi
Archive | 2008
Norio Shimozono; Shintaro Ito
Archive | 2009
Hirofumi Inomata; Tomoki Sekiguchi; Futoshi Haga; Machiko Asaie; Takayuki Nagai; Norio Shimozono
Archive | 2006
Norio Shimozono; Kazuyoshi Serizawa; Yoshiaki Eguchi
Archive | 2006
Norio Shimozono
Archive | 2011
Shintaro Kudo; Norio Shimozono; Akira Deguchi
Archive | 2012
Sadahiro Sugimoto; Norio Shimozono; Kazuyoshi Serizawa