Publication


Featured research published by Keewon Cho.


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2014

A Delay Test Architecture for TSV With Resistive Open Defects in 3-D Stacked Memories

Hyungsu Sung; Keewon Cho; Kunsang Yoon; Sungho Kang

The limits of technology scaling for smaller chip size, higher performance, and lower power consumption are being reached. For this reason, the memory semiconductor industry is searching for new technologies. 3-D stacked memory using through-silicon vias (TSVs) has been considered a promising solution to this challenge. However, to guarantee quality and yield in the mass production of 3-D stacked memories, effective TSV test techniques are required. In this paper, a new test architecture for testing TSVs in 3-D stacked memories is proposed. By comparing the voltage changes caused by resistive open defects with an externally applied reference voltage, the test circuit estimates the delay across the TSV, which makes a delay test possible with low-frequency test equipment. Experimental results demonstrate that the proposed test architecture is effective in testing TSVs with resistive open defects and has lower area overhead and lower peak current consumption.
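
To make the voltage-comparison idea concrete, here is a minimal Python sketch of how a resistive open lengthens the RC charging of a TSV and how sampling the TSV voltage at a fixed time against an external reference can flag a slow via. All component values, the sample time, and the reference level are illustrative assumptions, not figures from the paper.

```python
import math

# Illustrative RC model of a TSV driven by a step input
# (all values below are assumptions for this sketch).
VDD = 1.2          # supply voltage (V)
C_TSV = 50e-15     # TSV load capacitance (F)
R_DRIVER = 1e3     # driver output resistance (ohm)

def tsv_voltage(t, r_open=0.0):
    """Far-end TSV voltage at time t for a step input, with an
    optional resistive-open defect in series with the driver."""
    tau = (R_DRIVER + r_open) * C_TSV
    return VDD * (1.0 - math.exp(-t / tau))

def delay_test(r_open, v_ref=0.9 * VDD, t_sample=300e-12):
    """Emulate the comparison the test circuit performs: sample the TSV
    voltage at a fixed time and compare it with an externally supplied
    reference voltage.  Returns True when the TSV charges too slowly,
    i.e. a resistive open is suspected."""
    return tsv_voltage(t_sample, r_open) < v_ref

if __name__ == "__main__":
    # Small opens still reach the reference in time; large opens do not.
    for r in (0.0, 1e3, 10e3, 100e3):   # defect resistances to try (ohm)
        print(f"R_open = {r:>8.0f} ohm -> defective: {delay_test(r)}")
```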


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2017

Hardware-Efficient Built-In Redundancy Analysis for Memory With Various Spares

Joo Young Kim; Woosung Lee; Keewon Cho; Sungho Kang

Memory capacity continues to increase, and many semiconductor manufacturers are stacking memory dice for larger capacities. Built-in redundancy analysis (BIRA) is therefore of utmost importance, because the probability of fault occurrence grows with memory capacity. A traditional spare structure consisting of simple rows and columns is inadequate for BIRA across multiple memory blocks, because its hardware overhead and spare allocation efficiency degrade. Herein, we propose a BIRA that uses various spare types and can achieve an optimal repair rate and a higher yield than a simple row and column spare structure. The proposed analyzer exhaustively searches not only row and column spare types but also global and local spare types. In addition, this paper proposes a fault-storing content-addressable memory (CAM) structure; the proposed CAM is small and collects faults efficiently. The experimental results show a high repair rate with small hardware overhead and short analysis time.
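
As a rough illustration of what a fault-storing CAM gives the spare-allocation logic, the following Python sketch groups incoming fault addresses by row and column and applies the standard must-repair rule. It is a behavioral toy under assumed interfaces, not the paper's hardware CAM design.

```python
from collections import defaultdict

class FaultCAM:
    """Toy model of a fault-storing CAM for BIRA: each incoming fault is
    looked up by its row and column addresses so that faults sharing a
    line are grouped, which is what spare-allocation logic needs.
    (The paper's CAM is a compact hardware structure; this dictionary
    version only mirrors its lookup behaviour.)"""
    def __init__(self):
        self.row_hits = defaultdict(int)
        self.col_hits = defaultdict(int)
        self.faults = set()

    def store(self, row, col):
        if (row, col) in self.faults:          # fault already captured
            return
        self.faults.add((row, col))
        self.row_hits[row] += 1
        self.col_hits[col] += 1

    def must_repair_rows(self, spare_cols):
        """Rows holding more faults than the remaining column spares
        could cover must be repaired with a row spare."""
        return [r for r, n in self.row_hits.items() if n > spare_cols]

cam = FaultCAM()
for fault in [(2, 1), (2, 5), (2, 9), (7, 1)]:    # hypothetical fault addresses
    cam.store(*fault)
print(cam.must_repair_rows(spare_cols=2))          # [2]
```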


ACM Computing Surveys | 2016

A Survey of Repair Analysis Algorithms for Memories

Keewon Cho; Wooheon Kang; Hyungjun Cho; Changwook Lee; Sungho Kang

Current rapid advancements in deep submicron technologies have enabled the implementation of very large memory devices and embedded memories. However, this memory growth increases the number of defects, reducing the yield and reliability of such devices. Faulty cells are commonly repaired with redundant cells, which are embedded in memory arrays as spare rows and columns. The repair process requires an efficient redundancy analysis (RA) algorithm. Spare architectures for repairing faulty memory include one-dimensional (1D) spare architectures, two-dimensional (2D) spare architectures, and configurable spare architectures. Of these, 2D spare architectures, which provide extra rows and columns for repair, are popular because they repair more efficiently than 1D spare architectures and are easier to implement than configurable spare architectures. However, because the RA problem is NP-complete, an RA algorithm must consider various factors to determine a repair solution. Its performance depends on three factors: analysis time, repair rate, and area overhead. In this article, we survey RA algorithms for memory devices as well as built-in repair algorithms that improve these performance factors. Built-in redundancy analysis techniques for emerging three-dimensional integrated circuits are also discussed. Based on this analysis, we then discuss future research challenges for faulty-memory repair studies.
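
To see why RA is hard, the sketch below shows the brute-force view of 2D spare allocation: every fault must be covered by either a spare row or a spare column, and the search over all such assignments grows exponentially with the fault count. The fault map and spare budgets are hypothetical.

```python
from itertools import product

def repairable(faults, spare_rows, spare_cols):
    """Exhaustive redundancy analysis for a 2D (row/column) spare
    architecture: try every way of covering each fault with either a
    spare row or a spare column and check whether some assignment fits
    the spare budget.  Exponential in the number of faults, which is
    why practical RA algorithms prune or approximate this search."""
    for choice in product(("row", "col"), repeat=len(faults)):
        rows = {r for (r, c), kind in zip(faults, choice) if kind == "row"}
        cols = {c for (r, c), kind in zip(faults, choice) if kind == "col"}
        if len(rows) <= spare_rows and len(cols) <= spare_cols:
            return True
    return False

# Hypothetical fault map: (row, column) addresses of faulty cells.
faults = [(0, 1), (0, 5), (3, 1), (7, 7)]
print(repairable(faults, spare_rows=1, spare_cols=1))   # False
print(repairable(faults, spare_rows=1, spare_cols=2))   # True
```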


Asian Test Symposium | 2013

A Die Selection and Matching Method with Two Stages for Yield Enhancement of 3-D Memories

Wooheon Kang; Chang-Wook Lee; Keewon Cho; Sungho Kang

Three-dimensional (3-D) memories that use through-silicon vias (TSVs) as vertical buses across memory layers are regarded as one of the key 3-D integrated circuit (IC) technologies. The memory dies to be stacked together in a 3-D memory are chosen by a die selection method. To improve the yield of 3-D memories, inter-die redundancy sharing through TSVs is an effective strategy: bad memory dies can form good 3-D memories when they are matched with good memory dies. To support die selection and matching efficiently, a novel redundancy analysis (RA) algorithm that considers various repair solutions is proposed. Because the repair solutions can vary, the proposed die selection and matching is performed in two stages: a general die selection and matching method in the first stage, and re-matching of the memory dies remaining after the first stage using other repair solutions in the second stage. Thus, the proposed die selection and matching algorithm, using the proposed RA algorithm, can improve the yield of 3-D memories. The experimental results show that the proposed method achieves a higher 3-D memory yield than previous state-of-the-art die selection and matching methods.
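
The sketch below only illustrates the general flavor of die matching with inter-die redundancy sharing, under the assumption that each die can be summarized by its unused spares and its locally unrepairable faults; the paper's two-stage flow and its RA algorithm that enumerates multiple repair solutions per die are not reproduced here.

```python
# Assumed model: a die is a pair (unused spares, unrepaired faults); a
# two-die stack is "good" when each die's leftover faults fit within the
# partner's unused spares shared over TSVs.
def can_stack(die_a, die_b):
    spare_a, unrepaired_a = die_a
    spare_b, unrepaired_b = die_b
    return unrepaired_a <= spare_b and unrepaired_b <= spare_a

def greedy_match(dies):
    """First-fit pairing of dies into good two-die stacks."""
    stacks, used = [], set()
    for i, die in enumerate(dies):
        if i in used:
            continue
        for j in range(i + 1, len(dies)):
            if j not in used and can_stack(die, dies[j]):
                stacks.append((i, j))
                used.update({i, j})
                break
    return stacks

dies = [(2, 0), (0, 1), (1, 2), (3, 0)]   # hypothetical (spares, unrepaired faults)
print(greedy_match(dies))                  # [(0, 1), (2, 3)]
```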


Journal of Semiconductor Technology and Science | 2017

Low cost endurance test-pattern generation for multi-level cell flash memory

Jaewon Cha; Keewon Cho; Seunggeon Yu; Sungho Kang

A new endurance test-pattern generation method for NAND flash memory is proposed to reduce test cost. We focus mainly on the correlation between the data pattern and the device error rate during endurance testing. The novelty is a testing method that uses quasi-random patterns based on the device architecture to increase test efficiency during time-consuming endurance testing. The method has been validated by experiments on a commercial 32 nm NAND flash memory. Using the proposed method, the error rate increases by up to 18.6% compared with the conventional method, which uses pseudo-random patterns. Endurance testing with the proposed quasi-random patterns is also faster than with conventional pseudo-random patterns because the target error rate is reached more quickly. Accordingly, the proposed method provides a lower-cost testing solution than previous pseudo-random test patterns.


International SoC Design Conference | 2016

Discussion of cost-effective redundancy architectures

Keewon Cho; Joo Young Kim; Ha-young Lee; Sungho Kang

To obtain a reasonable yield, memories incorporate redundancies to substitute for faulty cells. As the performance of repair algorithms approaches a saturation point, recent studies have focused on various redundancy architectures for higher repair rates. In this paper, three kinds of spares, i.e., local, common, and global spares, are discussed to analyze the efficiency of redundancy architectures with respect to repair cost. To estimate the impact of each spare type, more than a hundred redundancy architectures are simulated with different fault patterns. This paper performs a data analysis and suggests cost-effective redundancy architectures.


International Symposium on Quality Electronic Design | 2015

Near optimal repair rate built-in redundancy analysis with very small hardware overhead

Woosung Lee; Keewon Cho; Joo Young Kim; Sungho Kang

As memory density and capacity grow, the number of defects is likely to increase. For this reason, repair analysis is widely used to improve memory yield, and built-in redundancy analysis (BIRA) is regarded as one solution for doing so. However, previous BIRA approaches require large hardware overhead to achieve an optimal repair rate, which is the main obstacle to using BIRA in practice. Therefore, a new BIRA is proposed that significantly reduces the hardware overhead by using a spare allocation probability based on the number of faults on a sparse faulty line. The experimental results show that the hardware overhead of the proposed approach is reduced considerably with only a slight loss of repair rate. Therefore, it can serve as a practical BIRA solution.


International SoC Design Conference | 2015

A new built-in redundancy analysis algorithm based on multiple memory blocks

Joo Young Kim; Keewon Cho; Woosung Lee; Sungho Kang

As memory density increases, the probability of faults occurring in memory also increases. To overcome this problem, many built-in redundancy analysis (BIRA) algorithms have been proposed to repair faults using redundant cells in the memory. Most previous algorithms have focused on a single memory block with a local spare cell architecture. However, many memories in a system consist of multiple local memory blocks with various spare cell architectures. Thus, the proposed algorithm handles not only local spare cells but also various spare cell architectures. The experimental results show the repair rate and hardware overhead of BIRA with various spare cell architectures in multiple memory blocks. The proposed algorithm is a practical solution for multiple memory blocks that have global and common spare cells.


International SoC Design Conference | 2015

Failure bitmap compression method for 3D-IC redundancy analysis

Keewon Cho; Woosung Lee; Joo Young Kim; Sungho Kang

As the likelihood of memory faults has increased, many redundancy analysis (RA) techniques are widely used to obtain an acceptable manufacturing yield. To find appropriate repair solutions, the external automatic test equipment (ATE) receives the fault information and stores it in a 2-D failure bitmap. This paper presents a new failure bitmap compression method that utilizes modified run-length codes. The proposed idea reduces the storage overhead of a failure bitmap while preserving all the fault information needed to obtain an optimal repair rate. Experimental results show that the proposed method achieves more than an 80% reduction in failure bitmap size.
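
The paper's codes are modified run-length codes tailored to failure bitmaps; as a baseline illustration of the idea, the following Python sketch run-length encodes a small hypothetical failure bitmap and checks that decoding is lossless, so no repair-relevant information is dropped.

```python
def rle_encode(bitmap):
    """Plain run-length encoding of a binary failure bitmap, scanned
    row-major.  (The paper uses modified run-length codes optimized for
    failure bitmaps; this sketch only shows the basic mechanism.)"""
    flat = [bit for row in bitmap for bit in row]
    runs, current, count = [], flat[0], 1
    for bit in flat[1:]:
        if bit == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = bit, 1
    runs.append((current, count))
    return runs

def rle_decode(runs, width):
    flat = [bit for bit, count in runs for _ in range(count)]
    return [flat[i:i + width] for i in range(0, len(flat), width)]

# Hypothetical 4x8 failure bitmap: 1 marks a faulty cell.
bitmap = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, 0, 0],
]
runs = rle_encode(bitmap)
assert rle_decode(runs, width=8) == bitmap   # lossless: repair info preserved
print(runs)
```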


International SoC Design Conference | 2015

A new in-field bad block detection scheme for NAND flash chips

Dongho Kang; Keewon Cho; Sungho Kang

NAND flash has been widely adopted as storage. However, due to its distinctive operating mechanisms, it endures only a limited number of program/erase cycles, so bad blocks inevitably develop during the lifetime of the storage system. A bad block is a block that contains faulty bits that cannot be corrected by ECC. In this paper, a novel in-field bad block detection scheme is proposed. Through simple write verifications, the proposed bad block detector finds bad blocks in real time and ensures that written data are reliable. The detection method requires neither costly data mirroring nor complex ECC processing, but only an additional detection module whose size is less than 0.15% of the controller size.
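
A minimal sketch of the write-verification idea, using a simulated flash device: program a page, read it back, and mark the block bad when the raw bit-error count exceeds what the ECC could correct. The page size, ECC strength, and error model below are assumptions for illustration only, not parameters from the paper.

```python
import random

ECC_CORRECTABLE_BITS = 8   # bits per page the ECC can correct (assumed)

class SimulatedFlash:
    """Toy stand-in for a NAND device: pages come back with a few bit
    flips; a worn-out block flips far more bits than ECC can fix."""
    def __init__(self, flips_per_page):
        self.flips_per_page = flips_per_page
        self.pages = {}
        self.bad_blocks = set()

    def program(self, block, page, data):
        stored = bytearray(data)
        for _ in range(self.flips_per_page):       # inject raw bit errors
            i = random.randrange(len(stored))
            stored[i] ^= 1 << random.randrange(8)
        self.pages[(block, page)] = bytes(stored)

    def read(self, block, page):
        return self.pages[(block, page)]

    def mark_bad(self, block):
        self.bad_blocks.add(block)

def bit_errors(written, read_back):
    """Count bit positions that differ between written and read-back data."""
    return sum(bin(w ^ r).count("1") for w, r in zip(written, read_back))

def verified_write(flash, block, page, data):
    """Program a page, read it back, and flag the block as bad when the
    raw bit-error count exceeds what the ECC can correct."""
    flash.program(block, page, data)
    if bit_errors(data, flash.read(block, page)) > ECC_CORRECTABLE_BITS:
        flash.mark_bad(block)
        return False
    return True

data = bytes(range(256)) * 8                                  # one 2 KiB page
healthy = SimulatedFlash(flips_per_page=2)
worn = SimulatedFlash(flips_per_page=40)
print(verified_write(healthy, block=0, page=0, data=data))    # True
print(verified_write(worn, block=0, page=0, data=data))       # False, block marked bad
```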
