
Publications


Featured research published by Byoungju Choi.


Data Mining and Knowledge Discovery | 2003

A Taxonomy of Dirty Data

Won Kim; Byoungju Choi; Eui Kyeong Hong; Soo-Kyung Kim; Doheon Lee

Today large corporations are constructing enterprise data warehouses from disparate data sources in order to run enterprise-wide data analysis applications, including decision support systems, multidimensional online analytical applications, data mining, and customer relationship management systems. A major problem that is only beginning to be recognized is that the data in data sources are often “dirty”. Broadly, dirty data include missing data, wrong data, and non-standard representations of the same data. The results of analyzing a database/data warehouse of dirty data can be damaging and, at best, unreliable. In this paper, a comprehensive classification of dirty data is developed for use as a framework for understanding how dirty data arise, manifest themselves, and may be cleansed to ensure proper construction of data warehouses and accurate data analysis. The impact of dirty data on data mining is also explored.


Journal of Systems and Software | 2010

A family of code coverage-based heuristics for effective fault localization

W. Eric Wong; Vidroha Debroy; Byoungju Choi

Locating faults in a program can be very time-consuming and arduous, and therefore, there is an increased demand for automated techniques that can assist in the fault localization process. In this paper, a code coverage-based method with a family of heuristics is proposed to prioritize suspicious code according to its likelihood of containing program bugs. Highly suspicious code (i.e., code that is more likely to contain a bug) should be examined before code that is relatively less suspicious; in this manner, programmers can identify and repair faulty code more efficiently and effectively. We also address two important issues: first, how each additional failed test case can aid in locating program faults; and second, how each additional successful test case can help in locating program faults. We propose that, with respect to a piece of code, the contribution of the first failed test case that executes it in computing its likelihood of containing a bug is larger than or equal to that of the second failed test case that executes it, which in turn is larger than or equal to that of the third failed test case that executes it, and so on. The same principle applies to the contributions of the successful test cases that execute the piece of code. A tool, χDebug, was implemented to automate the computation of the suspiciousness of the code and the subsequent prioritization of suspicious code for locating program faults. To validate our method, case studies were performed on six sets of programs: the Siemens suite, the Unix suite, space, grep, gzip, and make. Data collected from the studies support the above claim and also suggest that Heuristics III(a), (b), and (c) of our method can effectively reduce the effort spent on fault localization.
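The diminishing-contribution principle described in the abstract can be sketched in a few lines. The geometric weights (alpha, beta) and the coverage data below are illustrative assumptions, not the paper's actual Heuristics III; the point is only that the k-th failed (or passed) test executing a statement contributes no more than the (k-1)-th.

```python
def suspiciousness(n_failed_execs, n_passed_execs, alpha=0.5, beta=0.5):
    """Score a statement: the k-th failed test that executes it contributes
    alpha**k (a non-increasing amount), and likewise beta**k for passed tests.
    Failed executions raise suspicion; passed executions lower it."""
    fail_score = sum(alpha ** k for k in range(n_failed_execs))
    pass_score = sum(beta ** k for k in range(n_passed_execs))
    return fail_score - pass_score

# statement -> (failed tests executing it, passed tests executing it)
coverage = {"s1": (3, 0), "s2": (1, 4), "s3": (0, 5)}

# examine statements in decreasing order of suspiciousness
ranked = sorted(coverage, key=lambda s: suspiciousness(*coverage[s]), reverse=True)
print(ranked)  # s1 first: executed by failed tests only
```

Sorting by this score yields the order in which a programmer would examine the code, which is the prioritization the paper automates.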


European Journal of Operational Research | 1999

Optimization models for quality and cost of modular software systems

Ho-Won Jung; Byoungju Choi

This study presents two optimization models for selecting the best Commercial Off-The-Shelf (COTS) software product among alternatives for each module in the development of modular software systems. The objective function of the models is to maximize quality within a budgetary constraint. The software system consists of several programs, where a specific function of each program can call upon a series of modules. Several alternative COTS products are available for each module. Weights are assigned to the modules using the Analytic Hierarchy Process (AHP), based on the access frequencies of the modules. A simplified example is given to demonstrate each optimization model.
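A toy brute-force version of the selection problem (one COTS product per module, maximizing AHP-weighted quality under a budget) can be sketched as follows. The module names, weights, and (quality, cost) figures are invented for illustration; the paper formulates the models as mathematical programs rather than by enumeration.

```python
from itertools import product

# per module: candidate COTS products as (quality, cost) pairs (invented data)
modules = {
    "parser":  [(0.9, 40), (0.7, 25)],
    "storage": [(0.8, 30), (0.6, 15)],
}
weights = {"parser": 0.6, "storage": 0.4}  # AHP weights from access frequencies
budget = 60

best = None
for choice in product(*(modules[m] for m in modules)):
    cost = sum(c for _, c in choice)
    if cost > budget:
        continue  # budgetary constraint
    # objective: maximize the weighted sum of module qualities
    quality = sum(w * q for w, (q, _) in zip(weights.values(), choice))
    if best is None or quality > best[0]:
        best = (quality, choice, cost)

print(best)
```

Here the cheaper storage product is picked so the budget can afford the higher-quality parser, which is exactly the trade-off the models capture.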


Journal of Systems and Software | 1993

High-performance mutation testing

Byoungju Choi; Aditya P. Mathur

Testing a large software program is a time-consuming operation. In addition to the time spent by the tester in identifying, locating, and correcting bugs, a significant amount of time is spent in the execution of the program under test and its instrumented or fault-induced variants, also known as mutants. When using mutation testing to achieve high reliability, there can be many such mutants. In this article, we show how a multiple instruction multiple data (MIMD) architecture can be exploited to obtain significant reductions in the total execution time of the mutants. We describe the architecture of the PMothra system, which is designed to provide the tester with a transparent interface to a parallel machine. Experimental results obtained on the Ncube/7 hypercube are presented. The near-linear speedups show the close match between the software testing application and a local-memory MIMD architecture typified by the Ncube/7 machine. The compilation bottleneck, which could have an adverse effect on the speedup, is illustrated by experimental results.
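The core idea, running independent mutants in parallel because each mutant execution needs no coordination with the others, can be sketched as follows. This is a conceptual stand-in only: PMothra distributed compiled mutants across Ncube/7 hypercube nodes, whereas here a worker pool stands in for the nodes and the mutants are toy variants of an addition routine.

```python
from concurrent.futures import ThreadPoolExecutor
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def run_mutant(mutant):
    """Execute the test case against one mutant; return (id, killed?)."""
    mutant_id, op = mutant
    # toy program under test: the original computes 2 + 3 = 5;
    # a mutant is "killed" if the test input exposes the change
    killed = OPS[op](2, 3) != 5
    return mutant_id, killed

mutants = [(0, "+"), (1, "-"), (2, "*")]  # mutant 0 is the unchanged operator

with ThreadPoolExecutor(max_workers=3) as pool:  # one worker per "node"
    results = dict(pool.map(run_mutant, mutants))

print(results)  # {0: False, 1: True, 2: True}
```

Because mutant runs are embarrassingly parallel, speedup is limited mainly by per-mutant compilation and result collection, the bottleneck the paper measures.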


Secure Software Integration and Reliability Improvement | 2009

Performance Testing of Mobile Applications at the Unit Test Level

Heejin Kim; Byoungju Choi; W. Eric Wong

With the rapid growth of the wireless market and the development of various mobile devices, innovative methods and technologies to produce high-quality mobile applications and reduce time to market have been emerging. Mobile applications are often characterized by an array of limitations, such as a short development lifecycle driven by competitive pressure and the difficulty of updating them once released. Hence, rigorous testing is required before distribution to the market, including structural white-box, functional black-box, integration, and system testing. Although performance testing at the system test level has recently become crucial given its direct connection with product quality, most such tests are confined to load, usability, and stress testing, and their implementation is often insufficient due to the limitations of the development environment. This paper proposes a method to support performance testing at the unit test level, utilizing a database established through benchmark testing in an emulator-based test environment. It also presents a tool that supports the proposed method and verifies the reliability of the performance test results through experiments.


International Journal of Software Engineering and Knowledge Engineering | 2011

A Test Case Prioritization Based on Degree of Risk Exposure and Its Empirical Study

Hoijin Yoon; Byoungju Choi

We propose a test case prioritization strategy for risk-based testing, in which risk exposure is employed as the key evaluation criterion. Existing approaches to risk-based testing typically employ risk exposure values as assessed by the tester. In contrast, we employ exposure values that have been determined by experts during the risk assessment stage of the risk management process. If a given method produces greater accuracy in fault detection, that approach is considered more valuable for software testing. We demonstrate the value of our proposed risk-based testing method in this sense through its application.
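Ordering tests by risk exposure can be sketched with the conventional definition, exposure = likelihood x impact. The component names, values, and test-to-component mapping below are invented; the paper's expert-assessed exposures come from a risk assessment stage and may be derived differently.

```python
# component -> (fault likelihood, failure impact), as if expert-assessed
risks = {
    "auth":    (0.7, 9),
    "billing": (0.4, 8),
    "ui":      (0.9, 2),
}

# each test case exercises one component (illustrative mapping)
test_cases = {"t1": "ui", "t2": "auth", "t3": "billing"}

def exposure(component):
    """Conventional risk exposure: likelihood times impact."""
    likelihood, impact = risks[component]
    return likelihood * impact

# run tests covering higher-exposure components first
prioritized = sorted(test_cases, key=lambda t: exposure(test_cases[t]), reverse=True)
print(prioritized)  # auth (6.3) before billing (3.2) before ui (1.8)
```

Note how the high-likelihood but low-impact "ui" component drops to last, which is the effect of weighting likelihood by impact rather than using either alone.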


International Conference on Quality Software | 2010

A Grouping-Based Strategy to Improve the Effectiveness of Fault Localization Techniques

Vidroha Debroy; W. Eric Wong; Xiaofeng Xu; Byoungju Choi

Fault localization is one of the most expensive activities of program debugging, which is why recent years have witnessed the development of many different fault localization techniques. This paper proposes a grouping-based strategy that can be applied to various techniques in order to boost their fault localization effectiveness. The applicability of the strategy is assessed on two techniques, Tarantula and a radial basis function neural network-based technique, across three different sets of programs (the Siemens suite, grep, and gzip). The results suggest that the grouping-based strategy can significantly improve fault localization effectiveness and is not limited to any particular fault localization technique. The proposed strategy requires no information beyond what is already collected as input to the fault localization technique, and does not require the technique to be modified in any way.
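For context, one of the two base techniques, Tarantula, scores each statement by the share of failed versus passed tests that execute it; its standard formula is sketched below with invented coverage data. The grouping step itself (partitioning statements before ranking) is not shown here.

```python
def tarantula(failed_cov, passed_cov, total_failed, total_passed):
    """Standard Tarantula suspiciousness:
    (failed share) / (failed share + passed share), in [0, 1]."""
    fail_ratio = failed_cov / total_failed if total_failed else 0.0
    pass_ratio = passed_cov / total_passed if total_passed else 0.0
    if fail_ratio + pass_ratio == 0:
        return 0.0  # statement executed by no test
    return fail_ratio / (fail_ratio + pass_ratio)

# statement -> (failed tests executing it, passed tests executing it)
coverage = {"s1": (2, 1), "s2": (2, 8), "s3": (0, 8)}
TOTAL_FAILED, TOTAL_PASSED = 2, 8

scores = {s: tarantula(f, p, TOTAL_FAILED, TOTAL_PASSED)
          for s, (f, p) in coverage.items()}
print(scores)  # s1 scores highest: all failed tests but few passed tests hit it
```

The grouping strategy takes rankings like these as-is, which is why it needs no extra data and no changes to the underlying technique.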


computer software and applications conference | 2003

A CC-based security engineering process evaluation model

Jongsook Lee; Jieun Lee; Seung-Hee Lee; Byoungju Choi

The Common Criteria (CC) provides only a standard for evaluating an information security product or system, namely the target of evaluation (TOE). On the other hand, SSE-CMM provides a standard for security engineering process evaluation. Based on the CC, the TOE's security quality may be assured, but the drawback is that the development process is neglected. SSE-CMM assures the quality of a TOE developed in an organization equipped with a security engineering process, but a TOE developed in such an environment still cannot avoid CC-based security assurance evaluation. We propose an effective method of integrating the two evaluation methods, CC and SSE-CMM, and develop a CC-based assurance evaluation model, CC_SSE-CMM. CC_SSE-CMM provides a specific and realistically operable organizational security process maturity assessment together with a CC evaluation model.


Information & Software Technology | 2016

Risk-based test case prioritization using a fuzzy expert system

Charitha Hettiarachchi; Hyunsook Do; Byoungju Choi

Context: The use of system requirements and their risks enables software testers to identify more important test cases that can reveal the faults associated with system components.

Objective: The goal of this research is to make the requirements risk estimation process more systematic and precise by reducing subjectivity using a fuzzy expert system. Further, we provide empirical results that show that our proposed approach can improve the effectiveness of test case prioritization.

Method: In this research, we used requirements modification status, complexity, security, and size of the software requirements as risk indicators and employed a fuzzy expert system to estimate the requirements risks. Further, we employed a semi-automated process to gather the required data for our approach and to make the risk estimation process less subjective.

Results: The results of our study indicated that the prioritized tests based on our new approach can detect faults early, and that the approach can be effective at finding more faults earlier in the high-risk system components compared to the control techniques.

Conclusion: We proposed an enhanced risk-based test case prioritization approach that estimates requirements risks systematically with a fuzzy expert system. With the proposed approach, testers can detect more faults earlier than with other control techniques. Further, the proposed semi-automated, systematic approach can easily be applied to industrial applications and can help improve regression testing effectiveness.
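A minimal fuzzy-style sketch of turning requirement indicators into a risk score is shown below. The membership function, the single rule, and the indicator values are all invented; the paper uses a full fuzzy expert system with rules defined by experts, not this toy.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def high(x):
    """Degree to which an indicator on a 0..10 scale is 'high'."""
    return tri(x, 4, 10, 16)  # ramps up from 4, saturates at 10

def requirement_risk(complexity, security, size, modified):
    """One invented rule: risk is high to the degree that ANY indicator is
    high (max acts as fuzzy OR); 'modified' is treated as a crisp 0/1."""
    return max(high(complexity), high(security), high(size), float(modified))

# requirement -> (complexity, security, size, modification status); invented
reqs = {
    "R1": (9, 8, 5, 1),
    "R2": (3, 2, 4, 0),
}
ranked = sorted(reqs, key=lambda r: requirement_risk(*reqs[r]), reverse=True)
print(ranked)  # R1 outranks R2 on every indicator
```

Tests tied to high-risk requirements like R1 would then be scheduled first, which is how the risk estimate feeds the prioritization step.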


Computer Standards & Interfaces | 2007

An interface test model for hardware-dependent software and embedded OS API of the embedded system

Ahyoung Sung; Byoungju Choi; Seokkyoo Shin

An embedded system has a hierarchical structure comprising a hardware-dependent software layer, an operating system layer, and an applications layer. Since the interfaces between these layers are tightly coupled and interdependent, they are the focal area to be tested. This paper proposes an Embedded Systems Interface Test Model (EmITM) to test hardware interfaces and OS interfaces. The EmITM provides a set of test items based on the interfaces when testing embedded software. We apply our model to an embedded software test and analyze the test results.

Collaboration


Top co-authors of Byoungju Choi:

Hoijin Yoon, Ewha Womans University
W. Eric Wong, University of Texas at Dallas
Heejin Kim, Ewha Womans University
Jihyun Park, Ewha Womans University
Jina Jang, Ewha Womans University
Eun Mi Ji, Ewha Womans University