
Publication


Featured research published by Jongmoon Baik.


software engineering, artificial intelligence, networking and parallel/distributed computing | 2007

Test Cases Generation from UML Activity Diagrams

Hyungchoul Kim; Sungwon Kang; Jongmoon Baik; In-Young Ko

The UML activity diagram is a notation suitable for modeling a concurrent system in which multiple objects interact with each other. This paper proposes a method to generate test cases from UML activity diagrams that minimizes the number of test cases generated while deriving all practically useful test cases. Our method first builds an I/O explicit activity diagram from an ordinary UML activity diagram and then transforms it into a directed graph, from which test cases for the initial activity diagram are derived. This conversion is based on the single stimulus principle, which helps avoid the state explosion problem in test generation for a concurrent system.
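The graph-to-test-case step can be sketched as simple path enumeration over a directed graph. A minimal sketch, assuming a hypothetical control-flow graph; the paper's I/O-explicit construction and single-stimulus principle are not reproduced here:

```python
# Sketch: enumerate start-to-end paths of a directed graph; each path is one
# test case. The graph below is a hypothetical stand-in for one derived from
# an activity diagram.

def all_paths(graph, start, end, path=None):
    """Depth-first enumeration of simple paths from start to end."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # skip revisited nodes to avoid looping forever
            paths.extend(all_paths(graph, nxt, end, path))
    return paths

# Hypothetical graph: one decision node ("login") with two branches.
graph = {
    "init": ["login"],
    "login": ["search", "browse"],
    "search": ["checkout"],
    "browse": ["checkout"],
    "checkout": ["final"],
}

test_cases = all_paths(graph, "init", "final")
```

Each enumerated path corresponds to one executable scenario through the original diagram; branch coverage falls out of covering every path at least once.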


Lecture Notes in Computer Science | 2006

Generating test cases for web services using extended finite state machine

Changsup Keum; Sungwon Kang; In-Young Ko; Jongmoon Baik; Young-Il Choi

Web services utilize a standard communication infrastructure such as XML and SOAP to communicate through the Internet. Even though Web services are becoming more and more widespread as an emerging technology, it is hard to test Web services because they are distributed applications with numerous aspects of runtime behavior that differ from typical applications. This paper presents a new approach to testing Web services based on EFSM (Extended Finite State Machine). A WSDL (Web Services Description Language) file alone does not provide dynamic behavior information; this problem can be overcome by augmenting it with a behavior specification of the service. Rather than domain partitioning or perturbation techniques, we choose EFSM because Web services have control flow as well as data flow, like communication protocols. By appending this formal model of EFSM to standard WSDL, we can generate a set of test cases with better test coverage than other methods. Moreover, a procedure for deriving an EFSM model from a WSDL specification is provided to help a service provider augment the EFSM model describing the dynamic behaviors of the Web service. To show the efficacy of our approach, we applied it to Parlay-X Web services. In this way, we can test Web services with greater confidence in potential fault detection.
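The core idea of an EFSM, states extended with context variables and guarded transitions, can be sketched as follows. The states, events, and guards below are a made-up login/query service, not the paper's Parlay-X model:

```python
# Sketch of an EFSM: states plus a context variable, with transitions of the
# form (source, event, guard, action, destination). All names hypothetical.

TRANSITIONS = [
    ("idle", "login", lambda c: True, lambda c: c.update(tries=0), "ready"),
    ("ready", "query", lambda c: c["tries"] < 3,
     lambda c: c.update(tries=c["tries"] + 1), "ready"),
    ("ready", "logout", lambda c: True, lambda c: None, "idle"),
]

class EFSM:
    def __init__(self, state, ctx):
        self.state, self.ctx = state, dict(ctx)

    def fire(self, event):
        """Fire the first enabled transition for this event, if any."""
        for src, ev, guard, action, dst in TRANSITIONS:
            if src == self.state and ev == event and guard(self.ctx):
                action(self.ctx)
                self.state = dst
                return True
        return False  # no enabled transition: the input is rejected

def run_test(sequence):
    """Replay an event sequence; the test passes if every event is accepted."""
    m = EFSM("idle", {"tries": 0})
    return all(m.fire(ev) for ev in sequence)
```

A generated test case is then just an event sequence plus the expected accept/reject outcome; the guard on `query` shows why data flow matters, since the same event can be legal or illegal depending on context.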


secure software integration and reliability improvement | 2008

Historical Value-Based Approach for Cost-Cognizant Test Case Prioritization to Improve the Effectiveness of Regression Testing

Hyuncheol Park; Hoyeon Ryu; Jongmoon Baik

Regression testing has been used to support software testing activities and assure the acquirement of appropriate quality through several versions of a software program. Regression testing, however, is expensive because it requires many test case executions, and the number of test cases increases sharply as the software evolves. In this paper, we propose the Historical Value-Based Approach, which is based on the use of historical information, to estimate the current cost and fault severity for cost-cognizant test case prioritization. We also conducted a controlled experiment to validate the proposed approach, the results of which proved the proposed approach's usefulness. As a result, software testers who perform regression testing are able to prioritize their test cases so that their effectiveness can be improved in terms of average percentage of faults detected per cost.
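The prioritization idea can be sketched as ranking test cases by historical value per unit cost. This is a simplified stand-in for the paper's historical value model, and the figures below are made up:

```python
# Sketch: order test cases by (historical fault severity detected) / cost,
# so high-value, cheap tests run first. Data is hypothetical.

def prioritize(tests):
    """tests: list of (name, historical fault severity detected, cost)."""
    return sorted(tests, key=lambda t: t[1] / t[2], reverse=True)

history = [("t1", 8.0, 4.0),   # severity/cost = 2.0
           ("t2", 9.0, 3.0),   # severity/cost = 3.0
           ("t3", 1.0, 2.0)]   # severity/cost = 0.5

order = [name for name, _, _ in prioritize(history)]
```

Running the highest-ratio tests first is what improves the average percentage of faults detected per cost when the test budget is cut short.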


Communications of The ACM | 2006

A quality-based cost estimation model for the product line life cycle

Hoh Peter In; Jongmoon Baik; Sangsoo Kim; Ye Yang; Barry W. Boehm

In reusing common organizational assets, the software product line (SPL) provides substantial business opportunities for reducing the unit cost of similar products, improving productivity, reducing time to market, and promoting customer satisfaction [4]. By adopting effective product line practices, return on investment (ROI) becomes increasingly critical in the decision-making process. The majority of SPL cost estimation and ROI models [5-9] confine themselves to software development costs and savings. However, if software quality cost is considered across the spectrum of the SPL life cycle, product lines can result in considerably larger payoffs compared to non-product lines. This article proposes a quality-based product line life cycle cost estimation model, called qCOPLIMO, and investigates the effect of software quality cost on the ROI of SPL. qCOPLIMO is derived from two COCOMO suite models, COPLIMO and COQUALMO, as presented in Figure 1. COPLIMO [2] provides a baseline cost estimation model of the product line life cycle, and COQUALMO [3] estimates the number of residual defects. These models are used to estimate software quality cost. Both models are extensions of COCOMO II [1].
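The ROI comparison at the heart of such models reduces to simple arithmetic over estimated costs. A minimal sketch with entirely hypothetical figures; the actual qCOPLIMO cost drivers and quality-cost terms are not modeled:

```python
# Sketch: product-line ROI vs. building similar products separately.
# All numbers are made up for illustration.

def roi(non_pl_cost, pl_cost, pl_investment):
    """ROI = (savings from the product line - investment) / investment."""
    savings = non_pl_cost - pl_cost
    return (savings - pl_investment) / pl_investment

# Three similar products: 100 cost units each standalone vs. 60 each on the
# product line, after a one-time 50-unit investment in reusable core assets.
value = roi(non_pl_cost=3 * 100, pl_cost=3 * 60, pl_investment=50)
```

Folding quality cost into `non_pl_cost` and `pl_cost`, as qCOPLIMO does via COQUALMO's residual-defect estimates, is what enlarges the payoff the article reports.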


Empirical Software Engineering | 2016

Value-cognitive boosting with a support vector machine for cross-project defect prediction

Duksan Ryu; Okjoo Choi; Jongmoon Baik

It is well known that software defect prediction is one of the most important tasks for software quality improvement. The use of defect predictors allows test engineers to focus on defective modules, so testing resources can be allocated effectively and quality assurance costs can be reduced. For within-project defect prediction (WPDP), there should be sufficient data within a company to train any prediction model. Without such local data, cross-project defect prediction (CPDP) is feasible since it uses data collected from similar projects in other companies. Software defect datasets have a class imbalance problem, which increases the difficulty for the learner to predict defects. In addition, the impact of imbalanced data on the real performance of models can be hidden by the performance measures chosen. We investigate whether class imbalance learning can be beneficial for CPDP. In our approach, the asymmetric misclassification cost and the similarity weights obtained from distributional characteristics are closely associated to guide the appropriate resampling mechanism. We performed the effect size A-statistics test to evaluate the magnitude of the improvement. For the statistical significance test, we used the Wilcoxon rank-sum test. The experimental results show that our approach can provide higher prediction performance than both the existing CPDP technique and the existing class imbalance technique.
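One way similarity weights can guide resampling is to oversample minority (defective) source instances in proportion to how close they sit to the target project's distribution. This is a simplified illustration of the general idea, not the paper's VCB-SVM mechanism; the weighting and data are assumptions:

```python
# Sketch: similarity-weighted oversampling of the defective (minority) class.
# Instances closer to the target project's mean get duplicated.
import math

def similarity(src, target_mean):
    """Weight in (0, 1]; higher for instances nearer the target distribution."""
    return 1.0 / (1.0 + math.dist(src, target_mean))

def oversample(minority, target_mean, factor=2):
    """Duplicate the most target-like half of the minority class."""
    weighted = sorted(minority, key=lambda x: similarity(x, target_mean),
                      reverse=True)
    half = weighted[: max(1, len(weighted) // 2)]
    return minority + half * (factor - 1)

# Hypothetical 2-feature minority instances; target project centered at origin.
minority = [(0.0, 0.0), (5.0, 5.0), (1.0, 1.0)]
resampled = oversample(minority, target_mean=(0.0, 0.0))
```

The resampled set then trains any base classifier; the asymmetric-cost idea shows up in giving defective instances extra weight rather than treating both classes equally.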


ICSP'07 Proceedings of the 2007 international conference on Software process | 2007

Jasmine: a PSP supporting tool

Hyunil Shin; Ho-Jin Choi; Jongmoon Baik

The PSP (Personal Software Process) was developed to help developers make high-quality products by improving their personal software development processes. With the consistent measurement and analysis activities that the PSP suggests, developers can identify process deficiencies and make reliable estimates of effort and quality. However, due to the high overhead and context-switching problem of manual data recording, developers have difficulty collecting reliable data, which can lead to wrong analysis results. Also, the paper-based process guide of the PSP is inconvenient for navigating its process information, and it is difficult to attach additional process-related information to it. In this paper, we describe a PSP supporting tool that we have developed to deal with these problems. The tool provides automated data collection and analysis to help acquire reliable data and identify process deficiencies. It also provides an EPG (Electronic Process Guide) for easy access and navigation of the PSP process information, integrated with an ER (Experience Repository) that allows developers to store development experiences.


software engineering research and applications | 2007

A Case Study: CRM Adoption Success Factor Analysis and Six Sigma DMAIC Application

Zhedan Pan; Hoyeon Ryu; Jongmoon Baik

In today's increasingly competitive economy, many organizations have initiated customer relationship management (CRM) projects to improve customer satisfaction, revenue growth, and employee productivity. However, only a few CRM implementations have been completed successfully. In order to improve the CRM implementation process and increase the success rate, in this paper we first present the most significant success factors for CRM implementation, identified from literature reviews and a survey we conducted. Then we propose a strategy to integrate the Six Sigma DMAIC methodology with the CRM implementation process, addressing five critical success factors (CSF). Finally, we provide a case study to show how the proposed approach can be applied in real CRM implementation projects. We conclude that, by considering the critical success factors, the proposed approach can emphasize the critical parts of the implementation process and increase the likelihood of CRM adoption success.


Journal of Computer Science and Technology | 2015

A Hybrid Instance Selection Using Nearest-Neighbor for Cross-Project Defect Prediction

Duksan Ryu; Jong-In Jang; Jongmoon Baik

Software defect prediction (SDP) is an active research field in software engineering to identify defect-prone modules. Thanks to SDP, limited testing resources can be effectively allocated to defect-prone modules. Although SDP requires sufficient local data within a company, there are cases where local data are not available, e.g., pilot projects. Companies without local data can employ cross-project defect prediction (CPDP) using external data to build classifiers. The major challenge of CPDP is the different distributions between training and test data. To tackle this, instances of source data similar to target data are selected to build classifiers. Software datasets have a class imbalance problem, meaning the ratio of the defective class to the clean class is very low, which usually lowers the performance of classifiers. We propose a Hybrid Instance Selection Using Nearest-Neighbor (HISNN) method that performs a hybrid classification, selectively learning local knowledge (via k-nearest neighbor) and global knowledge (via naïve Bayes). Instances having strong local knowledge are identified via nearest neighbors with the same class label. Previous studies showed low PD (probability of detection) or high PF (probability of false alarm), which is impractical to use. The experimental results show that HISNN produces high overall performance as well as high PD and low PF.
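The local/global hybrid idea can be sketched as: trust k-NN when the nearest neighbors agree unanimously (strong local knowledge), otherwise defer to a global model. The data and the stand-in global model below are hypothetical, and naïve Bayes is replaced by a placeholder for brevity:

```python
# Sketch of a hybrid local/global classifier in the spirit of HISNN.
import math

def knn_labels(train, x, k=3):
    """Labels of the k nearest training instances to x (Euclidean distance)."""
    ranked = sorted(train, key=lambda item: math.dist(item[0], x))
    return [label for _, label in ranked[:k]]

def hybrid_predict(train, x, global_model, k=3):
    labels = knn_labels(train, x, k)
    if len(set(labels)) == 1:   # unanimous neighbors: strong local knowledge
        return labels[0]
    return global_model(x)      # otherwise defer to the global classifier

# Hypothetical 2-feature instances: a clean cluster and a defective cluster.
train = [((0, 0), "clean"), ((0, 1), "clean"), ((1, 0), "clean"),
         ((5, 5), "defective"), ((5, 6), "defective"), ((6, 5), "defective")]

globally_clean = lambda x: "clean"  # stand-in for a trained naive Bayes model
```

Points deep inside a cluster are decided locally; only ambiguous points near the boundary fall through to the global model, which is where naïve Bayes's smoothed global view helps.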


software engineering research and applications | 2006

Six Sigma Approach in Software Quality Improvement

Cvetan Redzic; Jongmoon Baik

In this paper, we present the Six Sigma DMAIC approach used for software quality improvement. The goal was to identify and establish tactical changes that substantially increase the software quality of all software products over the next two years. We analyzed the data, and based on the analysis, expert decisions were made to determine which new technologies (tools, methods, standards, training) should be implemented and institutionalized in order to reach our goals. To measure the improvement from Six Sigma process changes, we calculated our process capability baselines based on tactical changes, and we tracked and evaluated ongoing software product quality on a regular basis against these baselines to ensure that the software product quality goals were being achieved as planned.


international conference on computational science and its applications | 2007

Software Quality Assurance in XP and Spiral - A Comparative Study

Sajid Ibrahim Hashmi; Jongmoon Baik

Agile processes have been introduced to avoid the problems most software practitioners have run into when using traditional software development methodologies. They are well known for benefits such as a focus on quality, early business value delivery, higher morale of stakeholders, and reduced cost and schedule. They also support earlier and quicker production of code by dividing the product into small segments called iterations. However, there are ongoing debates about their flexibility to accommodate changing requirements and whether the productivity and quality of agile processes are satisfactory for customers. Previously available studies have mostly focused on comparing XP (eXtreme Programming) with other agile methodologies, rather than comparing it with traditional plan-driven software development methodologies. In this paper, we identify the XP phases and practices and how they ensure product quality, and map XP phases against the Spiral model phases to show that XP has built-in QA (quality assurance) practices in its life cycle, in addition to its focus on productivity. A case study is also included to empirically investigate the quality of a product developed using XP in comparison with a product developed using the Spiral model.

Collaboration


Dive into Jongmoon Baik's collaboration.

Top Co-Authors

Ho-Jin Choi, Information and Communications University
Hoyeon Ryu, Information and Communications University
Zhedan Pan, Information and Communications University