
Publication


Featured research published by Kyungmee O. Kim.


Reliability Engineering & System Safety | 2013

A new reliability allocation weight for reducing the occurrence of severe failure effects

Kyungmee O. Kim; Yoonjung Yang; Ming J. Zuo

A reliability allocation weight is used during the early design stage of a system to apportion the system reliability requirement to its individual subsystems. Since some failures have serious effects on public safety, cost, and the environment, especially in mission-critical systems, the failure effect must be considered an important factor in determining the allocation weight. Previously, the risk priority number or the criticality number was used to account for the failure effect in the allocation weight. In this paper, we identify the limitations of the previous approach and propose a new allocation weight based on the subsystem failure severity and its relative frequency. An example is given to illustrate that the proposed method is more effective than the previous method for reducing the occurrence of unacceptable failure effects in a newly designed system.
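As a hedged illustration of how an allocation weight is typically applied (not the paper's severity-based weight itself), the sketch below apportions a series-system failure-rate requirement to subsystems in proportion to normalized weights; the subsystem names and weight values are hypothetical placeholders.

```python
# Minimal sketch of weight-based reliability allocation for a series system.
# The weights below are hypothetical; the paper derives its weights from
# subsystem failure severity and relative frequency.

def allocate_failure_rates(system_failure_rate, weights):
    """Apportion a system failure-rate requirement to subsystems.

    Each subsystem receives lambda_i = w_i * lambda_sys, where the normalized
    weights w_i sum to 1 (valid for a series structure whose failure rate is
    the sum of the subsystem failure rates).
    """
    total = sum(weights.values())
    return {name: system_failure_rate * w / total for name, w in weights.items()}

if __name__ == "__main__":
    # A higher weight permits a higher allocated failure rate, so a
    # severity-aware scheme would give low weights to severe subsystems.
    weights = {"power": 0.2, "control": 0.3, "actuation": 0.5}
    allocation = allocate_failure_rates(system_failure_rate=1e-4, weights=weights)
    for name, lam in allocation.items():
        print(f"{name}: allocated failure rate = {lam:.2e} per hour")
```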


European Journal of Operational Research | 2009

Optimal burn-in for maximizing reliability of repairable non-series systems

Kyungmee O. Kim; Way Kuo

Burn-in is a manufacturing process applied to products to eliminate early failures in the factory before the products reach customers. Various methods have been proposed for determining an optimal burn-in time of a non-repairable system or a repairable series system, assuming that system burn-in improves all components in the system. In this paper, we establish the trade-off between the component reliabilities during system burn-in and develop an optimal burn-in time for repairable non-series systems to maximize reliability. One impediment to expressing the reliability of a non-series system is that successive failures during system burn-in cannot be described precisely, because a failed component is not detected until the whole system fails. To approximate the successive failures of a non-series system during system burn-in, we consider two types of repair: minimal repair at the time of system failure, and repair at the time of component or connection failure. The two types of repair provide bounds on the optimal system burn-in time of non-series systems.
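As a hedged sketch of the underlying optimization (not the paper's repair-based bounds), the snippet below searches a grid of burn-in times for the one that maximizes post-burn-in mission reliability; the Weibull lifetime with shape below 1 (infant mortality) and the candidate grid are illustrative assumptions.

```python
# Hedged sketch: pick a burn-in time that maximizes post-burn-in mission reliability.
# The Weibull lifetime with shape < 1 and the candidate grid are assumptions for
# illustration, not the model developed in the paper.
import math

def weibull_reliability(t, shape=0.7, scale=1000.0):
    """Survival function of a Weibull lifetime (shape < 1 gives a decreasing hazard)."""
    return math.exp(-((t / scale) ** shape))

def mission_reliability_after_burn_in(b, mission_time):
    """Conditional reliability R(b + t) / R(b) of a unit that survived burn-in of length b."""
    return weibull_reliability(b + mission_time) / weibull_reliability(b)

def optimal_burn_in(mission_time, candidates):
    """Return the candidate burn-in time with the highest conditional mission reliability."""
    return max(candidates, key=lambda b: mission_reliability_after_burn_in(b, mission_time))

if __name__ == "__main__":
    grid = [0, 10, 25, 50, 100, 200]          # candidate burn-in times (hours)
    best = optimal_burn_in(mission_time=500.0, candidates=grid)
    print("best burn-in time:", best, "hours")
    print("mission reliability:", round(mission_reliability_after_burn_in(best, 500.0), 4))
```

Note that with a strictly decreasing hazard the conditional reliability grows with the burn-in length, so this toy search simply picks the largest candidate; the trade-off between component reliabilities established in the paper is what can make an interior burn-in time optimal.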


EURASIP Journal on Advances in Signal Processing | 2012

Extending the scope of empirical mode decomposition by smoothing

Donghoh Kim; Kyungmee O. Kim; Hee-Seok Oh

This article considers extending the scope of the empirical mode decomposition (EMD) method. The extension targets noisy data and irregularly spaced data, which is necessary for the widespread applicability of EMD. The proposed algorithm, called statistical EMD (SEMD), uses a smoothing technique instead of interpolation when constructing the upper and lower envelopes. Using SEMD, we discuss how to identify non-informative fluctuations such as noise, outliers, and ultra-high-frequency components in the signal, and how to decompose irregularly spaced data into several components without distortion.
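A minimal sketch of one sifting step in this spirit is shown below: the upper and lower envelopes are built with smoothing splines through the local extrema rather than exact interpolation, which tolerates noise and irregular spacing. The smoothing level, the test signal, and the stopping of sifting after a single step are illustrative assumptions, not the SEMD algorithm as published.

```python
# Hedged sketch of a smoothing-based sifting step (SEMD-like, assumptions noted above).
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.signal import argrelextrema

def smoothed_envelope_sift(x, y, s=1.0):
    """Return (mean_envelope, candidate_imf) for one sifting step on samples (x, y)."""
    upper_idx = argrelextrema(y, np.greater)[0]   # local maxima
    lower_idx = argrelextrema(y, np.less)[0]      # local minima
    # Smoothing splines through the extrema, instead of exact cubic interpolation.
    upper = UnivariateSpline(x[upper_idx], y[upper_idx], s=s)(x)
    lower = UnivariateSpline(x[lower_idx], y[lower_idx], s=s)(x)
    mean_env = 0.5 * (upper + lower)
    return mean_env, y - mean_env

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0.0, 10.0, 400))     # irregularly spaced design points
    y = np.sin(2 * np.pi * x) + 0.5 * np.sin(0.3 * x) + 0.2 * rng.standard_normal(x.size)
    mean_env, imf_candidate = smoothed_envelope_sift(x, y, s=5.0)
    print(f"mean envelope range: {mean_env.min():.3f} to {mean_env.max():.3f}")
```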


IEEE Transactions on Reliability | 2005

Some considerations on system burn-in

Kyungmee O. Kim; W. Kuo

The questions of whether or not to perform system burn-in, and how long the burn-in period should be, can be answered by developing a probabilistic model of the system lifetime. Previously, such a model was obtained to relate component burn-in information and assembly quality to the system lifetime, assuming that the assembly defects introduced at various locations in a system can lead to connection failures represented by an exponential distribution. This paper extends the exponential-based results to a general distribution in order to study the dependence of system burn-in on the defect occurrence distribution. In particular, a method of determining an optimal burn-in period that maximizes system reliability is developed based on the system lifetime model, assuming that systems are repaired at burn-in failures.
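Whether units that fail during burn-in are screened out or minimally repaired, the probability that a delivered system survives a further mission of length t after a burn-in of length b takes the same conditional form, and a reliability-maximizing burn-in period maximizes it. A minimal statement, with the shape of the system reliability function R determined by the defect-occurrence distribution studied in the paper, is:

```latex
% Conditional (post-burn-in) reliability and the reliability-maximizing burn-in time.
% \tau is the field mission length; b_{\max} is an assumed practical upper limit.
R(t \mid b) = \frac{R(b+t)}{R(b)}, \qquad
b^{*} = \arg\max_{0 \le b \le b_{\max}} R(\tau \mid b).
```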


IEEE Transactions on Semiconductor Manufacturing | 2005

On the relationship of semiconductor yield and reliability

Kyungmee O. Kim; Ming J. Zuo; Way Kuo

Traditionally, semiconductor reliability has been estimated from life tests or accelerated stress tests at the completion of the manufacturing process. Recent research, however, has been directed at reliability estimation during the early production stage through a relation model of yield and reliability. Because the relation model depends on the assumed density distribution of manufacturing defects, we investigate the effect of the defect density distribution on the predicted reliability for a single-area device without repair and for a two-area device with repair. We show that for any device, reliability functions preserve an ordering of yield functions. It is also pointed out that repair capability improves only yield, not reliability, resulting in a large value of the factor that scales from yield to reliability. To achieve a reliable device, therefore, we suggest improving yield and performing a device test such as burn-in when the scaling factor is large.
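As a hedged illustration of how a yield-reliability relation and its scaling factor arise, suppose (a common simplification, not necessarily the model used here) that yield defects and reliability defects on a device of critical area A are Poisson distributed with densities D_Y and D_R(t); then

```latex
% Poisson-defect illustration of the factor that scales yield to reliability.
Y = e^{-A D_Y}, \qquad
R(t) = e^{-A D_R(t)} = Y^{\gamma(t)}, \qquad
\gamma(t) = \frac{D_R(t)}{D_Y}.
```

Since gamma(t) = ln R(t) / ln Y, raising the yield while leaving R(t) unchanged increases the scaling factor, which is the situation described above for a device with repair capability.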


European Journal of Operational Research | 2011

Burn-in considering yield loss and reliability gain for integrated circuits

Kyungmee O. Kim

This paper presents burn-in effects on yield loss and reliability gain for a lifetime distribution developed from a negative binomial defect density distribution and a given defect size distribution, assuming that the rate of defect growth is proportional to a power of the present defect size. While burn-in always results in yield loss, it produces a reliability gain only if defects grow fast or the field operation time is long. Otherwise, burn-in for a short time could result in a reliability loss. The optimal burn-in time for maximizing reliability is finite if defects grow linearly in time and infinite if defects grow nonlinearly in time. The optimal burn-in time for minimizing cost, expressed in terms of both yield and reliability, initially increases with the field operation time but becomes constant once the field operation time is long enough. It is shown numerically that increasing the mean defect density or the defect clustering increases the optimal burn-in time.


Reliability Engineering & System Safety | 2007

Two fault classification methods for large systems when available data are limited

Kyungmee O. Kim; Ming J. Zuo

In this paper, we consider the problem of fault diagnosis for a system with many possible fault types. Two approaches are presented that are useful for the initial diagnosis of system-wide faults, assuming that no data are available before the system is commissioned but the possibility of each symptom occurring is known for each fault. The first method uses a fault tree approach to reduce the solution space before applying the geometric classification method, under the assumption that no unwanted symptoms are possible. This method is nonparametric and thus does not require any data to estimate the underlying distribution of faults and symptoms. The second method is based on the Bayes classification approach to utilize subjective information and the limited data that may be available. The two methods are generic and applicable to a variety of industrial processes.
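A hedged sketch of a Bayes-style classifier in this setting follows: given subjective prior fault probabilities and the probability of each symptom under each fault, rank faults by their posterior probability for an observed symptom pattern. The conditional independence of symptoms (naive Bayes) and all of the fault names and numbers are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of Bayes classification for fault diagnosis (assumptions noted above).

def fault_posteriors(priors, symptom_probs, observed):
    """Return P(fault | observed symptoms), assuming symptoms are conditionally independent.

    priors        : {fault: prior probability}
    symptom_probs : {fault: {symptom: P(symptom occurs | fault)}}
    observed      : {symptom: True/False}
    """
    joint = {}
    for fault, prior in priors.items():
        likelihood = 1.0
        for symptom, present in observed.items():
            p = symptom_probs[fault][symptom]
            likelihood *= p if present else (1.0 - p)
        joint[fault] = prior * likelihood
    total = sum(joint.values())
    return {fault: j / total for fault, j in joint.items()}

if __name__ == "__main__":
    priors = {"bearing_wear": 0.5, "misalignment": 0.3, "sensor_drift": 0.2}  # subjective priors
    symptom_probs = {
        "bearing_wear": {"vibration": 0.9, "temperature_rise": 0.6},
        "misalignment": {"vibration": 0.7, "temperature_rise": 0.2},
        "sensor_drift": {"vibration": 0.1, "temperature_rise": 0.1},
    }
    observed = {"vibration": True, "temperature_rise": False}
    posteriors = fault_posteriors(priors, symptom_probs, observed)
    for fault, post in sorted(posteriors.items(), key=lambda kv: -kv[1]):
        print(f"{fault}: posterior = {post:.3f}")
```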


Reliability Engineering & System Safety | 2018

General model for the risk priority number in failure mode and effects analysis

Kyungmee O. Kim; Ming J. Zuo

Failure mode and effects analysis (FMEA) is a structured method used during a given stage of the system life cycle to understand all probable failure modes and the effects of their occurrences. The risk priority number (RPN) is calculated in FMEA to select more critical failure modes by multiplying three factors: occurrence, detection, and severity. In the literature, these three factors are defined qualitatively without any underlying model, and multiple definitions and conflicting interpretations exist for each factor. As the interrelationships between the three RPN factors are not known, previous research has treated each factor as a criterion in multiple criteria decision making, under the assumption that the three factors are independent of each other. In this paper, we present a general model to explain the functional relationship among the three factors. Using the model, we discuss the unique role of each factor for comparing the risk of different failure modes.
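For reference, the sketch below shows the conventional RPN calculation that the paper generalizes: each failure mode is scored on 1-10 scales for occurrence, severity, and detection, and the product ranks the modes. The failure modes and scores are hypothetical examples.

```python
# Minimal sketch of the conventional RPN ranking (RPN = occurrence * severity * detection).

def rank_by_rpn(failure_modes):
    """Return failure modes sorted by risk priority number, highest first."""
    scored = [(name, o * s * d, (o, s, d)) for name, (o, s, d) in failure_modes.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    # (occurrence, severity, detection) on 1-10 scales; a higher detection score
    # conventionally means the failure mode is harder to detect.
    modes = {
        "seal leakage":      (4, 7, 3),
        "connector fatigue": (6, 5, 5),
        "software hang":     (2, 9, 8),
    }
    for name, rpn, (o, s, d) in rank_by_rpn(modes):
        print(f"{name:17s} O={o} S={s} D={d}  RPN={rpn}")
```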


Microelectronics Reliability | 2008

Reliability functions estimated from commonly used yield models

Kyungmee O. Kim; Hee-Seok Oh

Reliability can be estimated from a semiconductor yield model once in-process measurements are obtained of the manufacturing defects that cause both yield and reliability losses. Such a reliability estimate is more useful than one obtained from field failure data for determining, during the early production stage, whether a reliability requirement will be met. The purpose of this paper is to investigate and compare the reliability functions estimated from previous yield models that are commonly used in the literature. The results characterize the impact of defect clustering and environmental conditions on the reliability estimated from yield.
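For concreteness, two yield models commonly used in this literature are sketched below; in this line of work a reliability function is typically read off from such a model by substituting a time-dependent reliability-defect density for the yield-defect density. This is a hedged paraphrase of the general approach, not this paper's exact derivation.

```latex
% Poisson and negative binomial yield models for a device of critical area A,
% yield-defect density D_Y, and clustering parameter \alpha (smaller \alpha
% corresponds to stronger defect clustering).
Y_{\text{Poisson}} = e^{-A D_Y}, \qquad
Y_{\text{NB}} = \left( 1 + \frac{A D_Y}{\alpha} \right)^{-\alpha}.
```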


IIE Transactions | 2015

Effects of subsystem mission time on reliability allocation

Kyungmee O. Kim; Ming J. Zuo

During the early stages of system development, various factors are considered when determining an allocation weight to apportion a system's reliability requirement to each subsystem. Previous methods have included subsystem mission time as a factor in the allocation weight in order to allocate a higher failure rate to a subsystem whose mission time is shorter than the system's mission time. This article first shows that the results obtained from previous methods are misleading, mainly because the allocated failure rate of a subsystem is expressed in the system's mission time rather than the subsystem's mission time. It is further shown that if a designer intends to allocate a lower failure rate to a subsystem that has to operate longer in the system, subsystem mission time must not be included as a factor when determining the allocation weight. If a designer wants to allocate the system failure rate equally to each subsystem regardless of the subsystem's mission time, subsystem mission time must be included as a factor.
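A minimal illustration of why the weight must treat mission time this way, assuming (for illustration only) a series structure with constant subsystem failure rates lambda_i, subsystem mission times t_i within a system mission of length T, a system failure-rate requirement lambda*, and normalized weights w_i:

```latex
% Allocating the cumulative-hazard budget \lambda^{*} T among subsystems that
% operate only t_i of the system mission T, using weights with \sum_i w_i = 1.
\sum_i \lambda_i t_i = \lambda^{*} T, \qquad
\lambda_i t_i = w_i \lambda^{*} T
\;\Longrightarrow\;
\lambda_i = \frac{w_i \lambda^{*} T}{t_i}.
```

With equal weights (mission time excluded from the weight), lambda_i is inversely proportional to t_i, so the longer-operating subsystem receives the lower failure rate; with w_i proportional to t_i (mission time included), every subsystem receives the same lambda_i = lambda* T / sum_j t_j, matching the article's conclusion.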

Collaboration


Dive into Kyungmee O. Kim's collaboration.

Top Co-Authors

Hee-Seok Oh | Seoul National University
Way Kuo | University of Tennessee
Dongik Jang | Korea Transport Institute