Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jianxi Chen is active.

Publication


Featured research published by Jianxi Chen.


International Parallel and Distributed Processing Symposium | 2010

HPDA: A hybrid parity-based disk array for enhanced performance and reliability

Bo Mao; Hong Jiang; Dan Feng; Suzhen Wu; Jianxi Chen; Lingfang Zeng; Lei Tian

A single flash-based Solid State Drive (SSD) cannot satisfy the capacity, performance, and reliability requirements of a modern storage system supporting increasingly demanding data-intensive computing applications. Applying RAID schemes to SSDs to meet these requirements, while a logical and viable solution, faces many challenges. In this paper, we propose a Hybrid Parity-based Disk Array architecture, HPDA, which combines a group of SSDs and two hard disk drives (HDDs) to improve the performance and reliability of SSD-based storage systems. In HPDA, the SSDs (data disks) and part of one HDD (parity disk) compose a RAID4 disk array. Meanwhile, a second HDD and the free space of the parity disk are mirrored to form a RAID1-style write buffer that temporarily absorbs small write requests and acts as a surrogate set during recovery when a disk fails. The write data is reclaimed back to the data disks during lightly loaded or idle periods of the system. Reliability analysis shows that the reliability of HPDA, in terms of MTTDL (Mean Time To Data Loss), is better than that of either a pure HDD-based or a pure SSD-based disk array. Our prototype implementation of HPDA and performance evaluations show that HPDA significantly outperforms both HDD-based and SSD-based disk arrays.
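The write path the abstract describes can be sketched in a few lines of Python. This is an illustrative toy model under our own naming, with a hypothetical small-write threshold, not the authors' implementation: small writes are absorbed by the RAID1-style buffer, and a reclaim step destages them to the RAID4 data disks during idle periods.

```python
STRIPE_UNIT = 64 * 1024  # hypothetical small-write threshold (bytes)

class HPDA:
    """Toy model of HPDA's write path.

    SSDs plus part of one HDD form a RAID4 array; a second HDD and the
    parity disk's free space mirror each other as a RAID1-style buffer.
    """

    def __init__(self):
        self.raid4 = {}   # addr -> data on the SSD data disks
        self.buffer = {}  # addr -> data held in the mirrored HDD buffer

    def write(self, addr, data):
        if len(data) < STRIPE_UNIT:
            self.buffer[addr] = data   # absorb small write in RAID1 buffer
        else:
            self.raid4[addr] = data    # large write goes straight to RAID4

    def reclaim(self):
        """Destage buffered writes back to the data disks when idle."""
        self.raid4.update(self.buffer)
        self.buffer.clear()

    def read(self, addr):
        # The buffer holds the freshest copy of recently written blocks.
        return self.buffer.get(addr, self.raid4.get(addr))
```

Because the buffer is mirrored across two HDD regions, it can also stand in as the surrogate set during recovery, which the sketch omits.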


IEEE Conference on Mass Storage Systems and Technologies | 2013

Improving flash-based disk cache with Lazy Adaptive Replacement

Sai Huang; Qingsong Wei; Jianxi Chen; Cheng Chen; Dan Feng

The increasing popularity of flash memory has changed storage systems. Flash-based solid state drives (SSDs) are now widely deployed as caches for magnetic hard disk drives (HDDs) to speed up data-intensive applications. However, existing cache algorithms focus exclusively on performance improvements and ignore the write endurance of SSDs. In this paper, we propose a novel cache management algorithm for flash-based disk caches, named Lazy Adaptive Replacement Cache (LARC). LARC filters out seldom-accessed blocks and prevents them from entering the cache. This avoids cache pollution and keeps popular blocks in the cache for a longer period of time, leading to a higher hit rate. Meanwhile, LARC reduces the number of cache replacements and thus incurs less write traffic to the SSD, especially for read-dominant workloads. In this way, LARC improves performance and extends SSD lifetime at the same time. LARC is self-tuning and has low overhead. It has been extensively evaluated by both trace-driven simulations and a prototype implementation in flashcache. Our experiments show that LARC outperforms state-of-the-art algorithms and reduces write traffic to the SSD by up to 94.5% for read-dominant workloads and by 11.2-40.8% for write-dominant workloads.
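The lazy admission idea can be illustrated with a minimal sketch (our own simplification; the real algorithm additionally self-tunes the size of the filter queue): a block enters the cache only on a repeated miss, so one-shot blocks never pollute it or cost an SSD write.

```python
from collections import OrderedDict

class LARC:
    """Simplified Lazy Adaptive Replacement Cache.

    A first miss only records the block id in a ghost LRU queue; a second
    miss while the id is still in the queue promotes the block into the
    real cache. Seldom-accessed blocks are thereby filtered out.
    """

    def __init__(self, cache_size, ghost_size):
        self.cache_size = cache_size
        self.ghost_size = ghost_size
        self.cache = OrderedDict()  # resident blocks, LRU order
        self.ghost = OrderedDict()  # block ids only, no data

    def access(self, block):
        if block in self.cache:            # hit: refresh LRU position
            self.cache.move_to_end(block)
            return True
        if block in self.ghost:            # second miss: admit to cache
            del self.ghost[block]
            self.cache[block] = True
            if len(self.cache) > self.cache_size:
                self.cache.popitem(last=False)   # evict LRU block
        else:                              # first miss: ghost queue only
            self.ghost[block] = True
            if len(self.ghost) > self.ghost_size:
                self.ghost.popitem(last=False)
        return False
```

Note that evictions (and hence SSD writes) happen only when an admitted block displaces another, which is why filtering admissions also reduces write traffic.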


ACM Transactions on Storage | 2016

Improving Flash-Based Disk Cache with Lazy Adaptive Replacement

Sai Huang; Qingsong Wei; Dan Feng; Jianxi Chen; Cheng Chen

The increasing popularity of flash memory has changed storage systems. Flash-based solid state drives (SSDs) are now widely deployed as caches for magnetic hard disk drives (HDDs) to speed up data-intensive applications. However, existing cache algorithms focus exclusively on performance improvements and ignore the write endurance of SSDs. In this paper, we propose a novel cache management algorithm for flash-based disk caches, named Lazy Adaptive Replacement Cache (LARC). LARC filters out seldom-accessed blocks and prevents them from entering the cache. This avoids cache pollution and keeps popular blocks in the cache for a longer period of time, leading to a higher hit rate. Meanwhile, LARC reduces the number of cache replacements and thus incurs less write traffic to the SSD, especially for read-dominant workloads. In this way, LARC improves performance and extends SSD lifetime at the same time. LARC is self-tuning and has low overhead. It has been extensively evaluated by both trace-driven simulations and a prototype implementation in flashcache. Our experiments show that LARC outperforms state-of-the-art algorithms and reduces write traffic to the SSD by up to 94.5% for read-dominant workloads and by 11.2-40.8% for write-dominant workloads.


Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2008

GRAID: A Green RAID Storage Architecture with Improved Energy Efficiency and Reliability

Bo Mao; Dan Feng; Hong Jiang; Suzhen Wu; Jianxi Chen; Lingfang Zeng

Existing power-aware optimization schemes for disk-array systems tend to strike a delicate balance between energy consumption and performance while ignoring reliability. To achieve a reasonably good trade-off among these three important design objectives, in this paper we introduce an energy-efficient disk array architecture, called Green RAID (GRAID), which extends the data-mirroring redundancy of RAID 10 by incorporating a dedicated log disk. The goal of GRAID is to significantly improve the energy efficiency or reliability of existing RAID-based systems without noticeably sacrificing their reliability or energy efficiency. The main idea behind GRAID is to update the mirroring disks only periodically while storing all updates since the last mirror-disk update on a log disk, thus being able to spin down all the mirroring disks (half of the total disks) to a lower power mode most of the time to save energy without sacrificing reliability. Reliability analysis shows that the reliability of GRAID, in terms of MTTDL (Mean Time To Data Loss), is only slightly worse than that of RAID 10. On the other hand, our prototype implementation of GRAID and performance evaluation show that GRAID's energy efficiency is better than that of RAID 10 by up to 32.1%, with an average of 25.4%.
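The periodic mirror-update policy can be sketched as follows (a toy model under our own naming, not the authors' code): every write reaches the primary disks and the log disk immediately, so redundancy is never lost, while the spun-down mirror disks are brought up only occasionally to catch up from the log.

```python
class GRAID:
    """Toy model of GRAID's logging policy.

    Writes go to the primary disks and a log disk right away; the mirror
    disks stay spun down and are synchronized only periodically, after
    which the log can be discarded.
    """

    def __init__(self):
        self.primary = {}
        self.mirror = {}
        self.log = []             # updates since the last mirror sync
        self.mirror_spun_up = False

    def write(self, addr, data):
        self.primary[addr] = data
        self.log.append((addr, data))  # log disk keeps the redundancy

    def sync_mirror(self):
        """Periodic catch-up: spin up, replay the log, spin down."""
        self.mirror_spun_up = True
        for addr, data in self.log:
            self.mirror[addr] = data
        self.log.clear()
        self.mirror_spun_up = False    # back to low-power mode
```

Between syncs the system still survives a single disk failure (primary plus log hold every update), which is why the MTTDL degrades only slightly relative to RAID 10.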


IEEE Conference on Mass Storage Systems and Technologies | 2013

FSMAC: A file system metadata accelerator with non-volatile memory

Jianxi Chen; Qingsong Wei; Cheng Chen; Lingkun Wu

File system performance is dominated by metadata access, because metadata is small and frequently accessed. Metadata is stored as blocks in the file system, so a partial metadata update results in a whole-block read and write, which amplifies disk I/O. The huge performance gap between CPU and disk aggravates this problem. In this paper, a file system metadata accelerator (referred to as FSMAC) is proposed to optimize metadata access by efficiently exploiting the advantages of Nonvolatile Memory (NVM). FSMAC decouples the data and metadata I/O paths, putting data on disk and metadata in NVM at runtime. Thus, data is accessed in blocks over the I/O bus and metadata is accessed in a byte-addressable manner over the memory bus. Metadata access is significantly accelerated, and metadata I/O is eliminated because metadata in NVM is no longer flushed back to disk periodically. A lightweight consistency mechanism combining fine-grained versioning and transactions is introduced in FSMAC. FSMAC is implemented on the basis of the Linux Ext4 file system and intensively evaluated under different workloads. Evaluation results show that FSMAC accelerates the file system by up to 49.2 times for synchronous I/O and 7.22 times for asynchronous I/O.
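The split I/O path can be sketched like this (an illustrative model with hypothetical field names, not the Ext4-based implementation): data still moves in whole blocks to disk, while a metadata update touches a single field in byte-addressable NVM, avoiding the whole-block read-modify-write that amplifies disk I/O.

```python
class FSMAC:
    """Toy sketch of FSMAC's decoupled data/metadata paths."""

    BLOCK = 4096  # data still travels the block I/O path

    def __init__(self):
        self.disk = {}  # block number -> 4 KiB data block (block bus)
        self.nvm = {}   # (inode, field) -> value (byte-addressable)

    def write_data(self, blockno, data):
        assert len(data) == self.BLOCK  # data moves in whole blocks
        self.disk[blockno] = data

    def update_metadata(self, inode, field, value):
        # Partial update at field granularity: no read-modify-write of a
        # full metadata block, and no later flush back to disk.
        self.nvm[(inode, field)] = value
```

Because NVM is persistent, the metadata copy there is authoritative, which is what eliminates the periodic metadata write-back.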


International Conference on Parallel and Distributed Systems | 2009

JOR: A Journal-guided Reconstruction Optimization for RAID-Structured Storage Systems

Suzhen Wu; Dan Feng; Hong Jiang; Bo Mao; Lingfang Zeng; Jianxi Chen

This paper proposes a simple and practical RAID reconstruction optimization scheme, called JOurnal-guided Reconstruction (JOR). JOR exploits the fact that significant portions of data blocks in typical disk arrays are unused. JOR monitors the storage space utilization status at the block level to guide the reconstruction process so that only failed data on the used stripes is recovered to the spare disk. In JOR, data consistency is ensured by the requirement that all blocks in a disk array be initialized to zero (written with value zero) during synchronization while all blocks in the spare disk also be initialized to zero in the background. JOR can be easily incorporated into any existing reconstruction approach to optimize it, because the former is independent of and orthogonal to the latter. Experimental results obtained from our JOR prototype implementation demonstrate that JOR reduces reconstruction times of two state-of-the-art reconstruction schemes by an amount that is approximately proportional to the percentage of unused storage space while ensuring data consistency.
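JOR's core idea reduces to skipping unused stripes during rebuild; a minimal sketch (hypothetical names, our own simplification) looks like this. Skipping is safe precisely because both the array and the spare disk are zero-initialized, so an unused stripe on the spare already holds the correct (zero) content.

```python
def reconstruct(stripes, used, spare):
    """Rebuild only the used stripes onto the spare disk.

    `stripes` maps stripe number -> data recovered for the failed disk;
    `used` is the set of stripe numbers the block-level monitor marked
    as allocated. Returns the number of stripes actually rebuilt.
    """
    rebuilt = 0
    for s, data in stripes.items():
        if s in used:
            spare[s] = data
            rebuilt += 1
        # else: stripe is unused; the spare already holds zeros
    return rebuilt
```

The reconstruction time saved is roughly proportional to the fraction of unused stripes, matching the paper's reported results.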


IEEE Conference on Mass Storage Systems and Technologies | 2012

HRAID6ML: A hybrid RAID6 storage architecture with mirrored logging

Lingfang Zeng; Dan Feng; Jianxi Chen; Qingsong Wei; Bharadwaj Veeravalli; Wenguo Liu

RAID6 provides high reliability through double parity updates, at the cost of a high write penalty. In this paper, we propose HRAID6ML, a new logging architecture for RAID6 systems with enhanced energy efficiency, performance, and reliability. HRAID6ML combines a group of Solid State Drives (SSDs) and Hard Disk Drives (HDDs): two HDDs (parity disks) and several SSDs form a RAID6 array. The free space of the two parity disks is used as a mirrored log region for the whole system to absorb writes. The mirrored logging policy helps to recover the system from a parity disk failure, and the mirrored logging operation does not introduce noticeable performance overhead to the whole system. HRAID6ML eliminates additional hardware and energy costs, a potential single point of failure, and a performance bottleneck. Furthermore, HRAID6ML prolongs the life cycle of the SSDs and improves the system's energy efficiency by reducing the SSDs' write frequency. We have implemented the proposed HRAID6ML. Extensive trace-driven evaluations demonstrate the advantages of the HRAID6ML system over both traditional SSD-based and HDD-based RAID6 systems.


ACM Transactions on Storage | 2015

Accelerating File System Metadata Access with Byte-Addressable Nonvolatile Memory

Qingsong Wei; Jianxi Chen; Cheng Chen

File system performance is dominated by small and frequent metadata accesses. Metadata is stored as blocks on the hard disk drive. A partial metadata update results in a whole-block read or write, which significantly amplifies disk I/O. Furthermore, the huge performance gap between the CPU and the disk aggravates this problem. In this article, a file system metadata accelerator (referred to as FSMAC) is proposed to optimize metadata access by efficiently exploiting the persistency and byte-addressability of Nonvolatile Memory (NVM). The FSMAC decouples the data and metadata access paths, putting data on disk and metadata in byte-addressable NVM at runtime. Thus, data is accessed in blocks over the I/O bus and metadata is accessed in a byte-addressable manner over the memory bus. Metadata access is significantly accelerated, and metadata I/O is eliminated because metadata in NVM is no longer flushed back to the disk periodically. A lightweight consistency mechanism combining fine-grained versioning and transactions is introduced in the FSMAC. The FSMAC is implemented on a real NVDIMM platform and intensively evaluated under different workloads. Evaluation results show that the FSMAC accelerates the file system by up to 49.2 times for synchronous I/O and 7.22 times for asynchronous I/O. Moreover, it achieves significant performance speedups in network storage and database environments, especially for metadata-intensive or write-dominated workloads.


International Conference on Cluster Computing | 2012

HerpRap: A Hybrid Array Architecture Providing Any Point-in-Time Data Tracking for Datacenter

Lingfang Zeng; Dan Feng; Bo Mao; Jianxi Chen; Qingsong Wei; Wenguo Liu

Both physical disk failures and logical errors such as software errors, user abuse, and virus attacks may cause data loss, and the risk of logical errors is far greater than that of physical disk failure. Moreover, existing RAID solutions cannot satisfy the reliability requirements of data centers in the face of logical errors. It is therefore becoming increasingly important for RAID-based storage systems to be able to recover data to any point in time when logical errors occur. We propose a novel storage array architecture, HerpRap, which is able to recover data from both physical disk failures and logical errors. We have implemented a prototype of HerpRap and carried out extensive performance measurements using DBT-2 and file system benchmarks. Our experiments demonstrate that HerpRap is able to track or recover data to any point in time quickly by tracing back the history of block logs. Moreover, HerpRap outperforms existing HDD-based and SSD-based RAID5 with copy-on-write (COW) snapshots in terms of performance, energy efficiency, failure recovery ability, and reliability.
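Any-point-in-time recovery from a block-log history, as described above, amounts to replaying logged writes up to a target timestamp; a minimal sketch (our own simplification of the idea, not HerpRap's implementation):

```python
def recover_to(logs, t):
    """Rebuild the block state as of timestamp t from a block log.

    `logs` is a list of (timestamp, block_addr, data) records. Replaying
    them in time order up to t yields the array contents at that instant,
    which is how a logical error (e.g. a virus overwrite) can be undone.
    """
    state = {}
    for ts, addr, data in sorted(logs):
        if ts > t:
            break
        state[addr] = data
    return state
```

A COW snapshot only offers the discrete instants at which snapshots were taken; a continuous block log lets the target `t` be any instant covered by the log.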


Asia-Pacific Magnetic Recording Conference | 2012

A Popularity-Aware Buffer Management to Improve Buffer Hit Ratio and Write Sequentiality for Solid-State Drive

Qingsong Wei; Lingfang Zeng; Jianxi Chen; Cheng Chen

Random writes significantly limit the application of flash memory in enterprise environments due to their poor latency, shortened lifetime, and high garbage collection overhead. A solid-state drive (SSD) uses a small part of its memory as a buffer to reduce random writes and extend lifetime. Existing block-based buffer management schemes exploit spatial locality to improve write sequentiality at the cost of a low buffer hit ratio. In this paper, we propose a novel buffer management scheme, referred to as PAB, which adopts both buffer hit ratio and write sequentiality as design objectives. Leveraging block popularity, PAB makes full use of both temporal and spatial locality at the block level. When a replacement happens, PAB selects the victim block based on block popularity, page counter, and block dirty flag. As a universal buffer, PAB serves both read and write requests to increase the possibility of forming sequential writes. PAB has been extensively evaluated under real enterprise workloads. Our benchmark results conclusively demonstrate that PAB can achieve up to 72% performance improvement and 308% block erasure reduction compared to existing buffer management schemes.
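Victim selection from the three signals the abstract names (popularity, page counter, dirty flag) might be sketched as below. The field names and, in particular, the priority ordering of the three criteria are our assumptions for illustration, not taken from the paper.

```python
def pick_victim(blocks):
    """Pick a replacement victim from buffered blocks.

    Each block is a dict with hypothetical fields: `popularity` (access
    count), `pages` (resident pages in the block), and `dirty`. The
    assumed preference: least popular first; among those, clean before
    dirty (no write-back needed); then the fullest block, since flushing
    a nearly full block produces the most sequential write.
    """
    return min(blocks,
               key=lambda b: (b["popularity"], b["dirty"], -b["pages"]))
```

Evicting unpopular, page-rich blocks is what lets a scheme pursue a high hit ratio and sequential writes at the same time.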

Collaboration


Dive into Jianxi Chen's collaborations.

Top Co-Authors

Dan Feng, Huazhong University of Science and Technology
Lingfang Zeng, Huazhong University of Science and Technology
Jingning Liu, Huazhong University of Science and Technology
Sai Huang, Huazhong University of Science and Technology
Lei Tian, University of Nebraska–Lincoln
Wenguo Liu, Huazhong University of Science and Technology