Publication


Featured research published by Yong-Kee Jun.


Workshop on Parallel & Distributed Debugging | 1993

On-the-fly detection of access anomalies in nested parallel loops

Yong-Kee Jun; Kern Koh

One of the major classes of bugs in shared-memory parallel programs arises when instructions in a set of parallel tasks access a shared variable without coordination and at least one of the accesses is a write. Such bugs, called data races or parallel access anomalies, result in unintended nondeterministic executions of the programs and make debugging parallel programs difficult. This paper investigates an efficient on-the-fly technique to detect and locate access anomalies in parallel programs. The programs we consider may have nested parallel loop constructs and have no synchronization instructions. Our technique relies on a new labeling method, called NR Labeling, to generate information on tasks and to determine the logical concurrency between two instructions that access a shared variable. The efficiency of the technique makes on-the-fly anomaly detection more practical. The storage space for the concurrency information is O(V + NT) in the worst case, where V is the number of shared variables in a debugged program P, N is the maximum number of dynamic nestings of parallel loop constructs in P, and T is the maximum number of mutually concurrent tasks in an execution of P. The time to generate the concurrency information at the creation or termination of each task is O(N) in the worst case, and the time to detect an anomaly at each access to a shared variable is O(log₂ N) in the worst case.
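To make the flavor of such an on-the-fly check concrete, here is a minimal sketch in C. It is not the paper's actual NR Labeling: all names are hypothetical, and the concurrency test is simplified to a label-prefix check that only covers pure nested parallel loops. Each task carries a label recording its position in the loop nest, and every access to a monitored shared variable is compared against the labels already stored in that variable's access history.

```c
/*
 * Minimal sketch of an on-the-fly access check of the kind described above.
 * NOT the paper's NR Labeling: all names are hypothetical, and concurrent()
 * is simplified to a label-prefix test that only covers pure nested parallel
 * loops (siblings in a parallel loop are concurrent; a task is ordered with
 * its ancestors).
 */
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    int depth;          /* current nesting level of parallel loops        */
    int path[16];       /* iteration index chosen at each nesting level   */
} label_t;

typedef struct {
    label_t readers[64]; size_t nreaders;   /* labels of prior reads      */
    label_t writers[64]; size_t nwriters;   /* labels of prior writes     */
} access_history_t;

/* Two tasks are taken as concurrent iff their label paths diverge, i.e.
 * neither task is an ancestor of the other in the loop nest.             */
static bool concurrent(const label_t *a, const label_t *b)
{
    int d = a->depth < b->depth ? a->depth : b->depth;
    for (int i = 0; i < d; i++)
        if (a->path[i] != b->path[i])
            return true;
    return false;
}

/* Called on every monitored access; reports a race when this access and a
 * previously recorded access are concurrent and at least one is a write.  */
bool check_access(access_history_t *h, const label_t *self, bool is_write)
{
    bool race = false;
    for (size_t i = 0; i < h->nwriters; i++)      /* any access vs. old write */
        race = race || concurrent(self, &h->writers[i]);
    if (is_write)
        for (size_t i = 0; i < h->nreaders; i++)  /* new write vs. old read   */
            race = race || concurrent(self, &h->readers[i]);

    if (is_write) { if (h->nwriters < 64) h->writers[h->nwriters++] = *self; }
    else          { if (h->nreaders < 64) h->readers[h->nreaders++] = *self; }
    return race;
}
```

The paper's labeling answers this concurrency query from the labels alone in O(log₂ N) time per access; the fixed-size arrays above are only there to keep the sketch self-contained.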


International Conference on Supercomputing | 1998

Scalable on-the-fly detection of the first races in parallel programs

Jeong-Si Kim; Yong-Kee Jun

Detecting races is important for debugging shared-memory parallel programs, because races result in unintended nondeterministic executions of the programs. Most on-the-fly techniques for detecting races create a central bottleneck by serializing every access of each thread to a shared variable. This bottleneck can be reduced by detecting only the first races, which may cause the other races. This paper presents a new scalable on-the-fly technique that reduces the central bottleneck to serializing at most two accesses of each thread to a shared variable when detecting the first races in parallel programs. Detecting the first races efficiently is important, because removing the first races can make other races disappear. This technique therefore makes on-the-fly race detection more efficient and practical.
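The central idea, that each thread needs to serialize at most two of its accesses per shared variable, can be sketched as follows. The names are hypothetical and this is not the paper's actual protocol: once a thread has contributed one read and one write candidate for a variable, its later accesses skip the locked update of the shared access history.

```c
/*
 * Sketch of the "at most two serialized accesses per thread" idea, with
 * hypothetical names (not the paper's actual detection protocol). Once a
 * thread has contributed one read and one write candidate for a shared
 * variable, its later accesses skip the locked history update, so the
 * central bottleneck is bounded by two accesses per thread.
 */
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t lock;   /* serializes updates of the shared history      */
    /* ... per-variable candidates for a "first race" would be stored here   */
} var_history_t;

typedef struct {
    bool read_logged;       /* this thread already logged a read             */
    bool write_logged;      /* this thread already logged a write            */
} thread_flags_t;

void monitor_access(var_history_t *h, thread_flags_t *t, bool is_write)
{
    bool *logged = is_write ? &t->write_logged : &t->read_logged;
    if (*logged)
        return;                          /* later accesses: no serialization */

    pthread_mutex_lock(&h->lock);        /* at most two locked updates/thread */
    /* record this access as a first-race candidate for the variable ...     */
    pthread_mutex_unlock(&h->lock);
    *logged = true;
}
```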


Parallel and Distributed Systems: Testing, Analysis, and Debugging | 2012

On-the-fly detection of data races in OpenMP programs

Ok-Kyoon Ha; In-Bon Kuh; Guy Martin Tchamgoue; Yong-Kee Jun

OpenMP provides a portable way to achieve high performance, with simple compiler directives that transform a sequential program into a parallel program. It is important to detect data races in OpenMP programs, because they may lead to unpredictable results. To detect data races that occur during an execution of OpenMP programs, the representative on-the-fly technique, Helgrind+, focuses mainly on reducing false positives. Unfortunately, this technique is still imprecise and inefficient when applied to large OpenMP programs that use structured fork-join parallelism with a large number of threads. This paper presents a novel approach that efficiently detects apparent data races without false positives in large OpenMP programs. The approach combines an efficient thread labeling, which maintains the logical concurrency of thread segments, with a precise detection protocol that analyzes conflicting accesses to every shared memory location. We implemented this approach on top of the Pin binary instrumentation framework and compared it with Helgrind+. Empirical results on OpenMP benchmarks show that our technique detects apparent data races without false positives, unlike Helgrind+, while reducing the average runtime overhead to 19% of that of Helgrind+ with a similar space overhead.
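For reference, this is the kind of apparent data race in a structured fork-join OpenMP region that such a detector reports; a minimal, illustrative example, not taken from the paper's benchmarks:

```c
/*
 * Illustrative only: a structured fork-join OpenMP loop with an apparent
 * data race, since every iteration performs an unsynchronized
 * read-modify-write of the shared variable `sum`.
 * Compile with, e.g., `gcc -fopenmp race.c`.
 */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    int sum = 0;

    #pragma omp parallel for             /* fork-join, structured parallelism */
    for (int i = 0; i < 1000; i++)
        sum += i;                        /* data race: concurrent writes      */

    printf("sum = %d (nondeterministic without a reduction)\n", sum);
    return 0;
}
```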


Grid and Pervasive Computing | 2007

MPIRace-check: detection of message races in MPI programs

Mi-Young Park; Su Jeong Shim; Yong-Kee Jun; Hyuk-Ro Park

Message races, which can cause nondeterministic executions of a parallel program, should be detected for debugging, because nondeterminism makes debugging parallel programs difficult. Although there are some tools that detect message races in MPI programs, they do not provide practical information to locate and debug the races. In this paper, we present MPIRace-Check, an on-the-fly detection tool for debugging MPI programs written in C. MPIRace-Check detects and reports all race conditions in all processes by checking the concurrency of the communication between processes. It also reports the message races with practical information such as the source-code line number, the process numbers, and the channels involved in the races. By providing this information, it lets programmers distinguish unintended races among the reported ones and shows them directly where the races occur in a large source code. Our experiments with test programs show that MPIRace-Check detects such races and that the tool is efficient.
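A minimal illustration of the kind of message race MPIRace-Check targets (not taken from the paper): rank 0 posts wildcard receives that two senders can match, so the arrival order is nondeterministic.

```c
/*
 * Illustrative only: a minimal message race. Rank 0 posts wildcard receives
 * that ranks 1 and 2 both match, so the arrival order is nondeterministic.
 * Run with, e.g., `mpirun -np 3`.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, msg;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        for (int i = 0; i < 2; i++) {
            /* Wildcard receive: either sender may be matched first (race). */
            MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("received %d\n", msg);
        }
    } else if (rank <= 2) {
        msg = rank;
        MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```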


International Workshop on OpenMP | 2001

A Comparison of Scalable Labeling Schemes for Detecting Races in OpenMP Programs

So-Hee Park; Mi-Young Park; Yong-Kee Jun

Detecting races is important for debugging shared-memory parallel programs, because races result in unintended nondeterministic executions of the program. On-the-fly race detection uses a scalable labeling scheme that generates concurrency information for parallel threads without any globally shared data structure. Two efficient scalable labeling schemes, BD Labeling and NR Labeling, have similar space and time complexities, but to the best of our knowledge their actual efficiencies have not been compared empirically in the literature. In this paper, we empirically compare these two labeling schemes by monitoring a set of OpenMP kernel programs with nested parallelism. The results show that NR Labeling is more efficient than BD Labeling by at least 1.5 times in generating the thread labels, and by at least 3.5 times in using the labels to detect races in the kernel programs.


Embedded and Ubiquitous Computing | 2010

Hierarchical Real-Time Scheduling Framework for Imprecise Computations

Guy Martin Tchamgoue; Kyong Hoon Kim; Yong-Kee Jun; Wan Yeon Lee

Hierarchical scheduling frameworks provide a way to compose large and complex real-time systems from independent subsystems. In this paper, we consider the imprecise reward-based periodic task model in a compositional scheduling framework. We introduce the imprecise periodic resource model to characterize imprecise resource allocations, and an interface model to abstract the imprecise real-time requirements of a component. The schedulability of mandatory parts is analyzed to meet the minimum requirements of tasks. In addition, we provide a scheduling algorithm that guarantees a certain amount of reward, which makes it feasible to compose multiple imprecise components efficiently.
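For context, here is a compact restatement of the schedulability condition this kind of analysis builds on, using the standard periodic resource model notation (assumed here, not necessarily the paper's exact interface model): the mandatory demand of the component's tasks must never exceed the worst-case resource supply.

```latex
% Assumed notation (standard periodic resource model, Shin and Lee), not
% necessarily the paper's exact interface model: resource Gamma = (Pi, Theta)
% supplies Theta time units every Pi; task tau_i has period p_i and mandatory
% execution time m_i, scheduled by EDF with deadlines equal to periods.
\[
  \mathrm{sbf}_{\Gamma}(t) =
  \begin{cases}
    \left\lfloor \dfrac{t-(\Pi-\Theta)}{\Pi} \right\rfloor \Theta
      + \max\!\left(0,\; t - 2(\Pi-\Theta)
      - \Pi \left\lfloor \dfrac{t-(\Pi-\Theta)}{\Pi} \right\rfloor \right),
      & t \ge \Pi-\Theta,\\[6pt]
    0, & \text{otherwise,}
  \end{cases}
\]
\[
  \text{and the mandatory parts are schedulable if }\;
  \sum_i \left\lfloor \frac{t}{p_i} \right\rfloor m_i \;\le\; \mathrm{sbf}_{\Gamma}(t)
  \;\text{ for all } t > 0.
\]
```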


Document Analysis Systems | 2010

On-the-fly healing of race conditions in ARINC-653 flight software

Ok-Kyoon Ha; Guy Martin Tchamgoue; Jeong-Bae Suh; Yong-Kee Jun

The ARINC-653 standard architecture for flight software specifies an application executive (APEX), which provides an application programming interface, and defines a hierarchical health-management framework for error detection and recovery. In every partition of the architecture, however, asynchronously concurrent processes or threads may contain concurrency bugs such as unintended race conditions, which are common and difficult to remove by testing. A race condition on shared data, or data race, is a pair of unsynchronized instructions that access a shared variable with at least one write. Data races seriously and latently threaten the reliability of shared-memory programs, because they result in unintended nondeterministic executions. To heal data races during executions of ARINC-653 flight software, this paper instruments on-the-fly race detection into the target program and incorporates on-the-fly race healing into the health management of the ARINC-653 architecture. When a data race is detected, the race detector signals the health monitor using the corresponding APEX call. The health monitor then responds by invoking an aperiodic, user-defined error-handling process that is assigned the highest possible priority. This special process uses an APEX call to identify the detected race, reported as an application error, one of the seven error types defined by ARINC-653, and then heals it. The healing process assures at run time that the execution result of the healed program could also have occurred in the original program, so no new functional bug is introduced. This paper evaluates the efficiency of the on-the-fly mechanisms to argue that they are practical to configure within ARINC-653 partitions.
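A rough sketch of this healing path in C is given below. The APEX service names follow the ARINC-653 standard, but the header name, the exact platform types, and the healing routine itself are assumptions made only for illustration.

```c
/*
 * Rough sketch of the healing path. The APEX service names follow the
 * ARINC-653 standard, but the header name, the exact platform types, and
 * heal_detected_race() are assumptions made only for illustration.
 */
#include <APEX_types.h>             /* assumed platform-specific APEX header */

void heal_detected_race(ERROR_STATUS_TYPE *status);  /* hypothetical healer  */

/* Called by the instrumented race detector when a data race is found:
 * the race is reported to the health monitor as an application error.      */
void report_race(void)
{
    RETURN_CODE_TYPE rc;
    RAISE_APPLICATION_ERROR(APPLICATION_ERROR,
                            (MESSAGE_ADDR_TYPE)"data race", 9, &rc);
}

/* Aperiodic, user-defined error handler (registered via CREATE_ERROR_HANDLER
 * and run at the highest priority in the partition): it identifies the
 * reported error and invokes the healing routine.                          */
void race_error_handler(void)
{
    RETURN_CODE_TYPE rc;
    ERROR_STATUS_TYPE status;

    GET_ERROR_STATUS(&status, &rc);
    if (rc == NO_ERROR && status.ERROR_CODE == APPLICATION_ERROR)
        heal_detected_race(&status);
}
```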


International Conference on Parallel and Distributed Systems | 1998

Detecting the first races in parallel programs with ordered synchronization

Hee-Dong Park; Yong-Kee Jun

Detecting races is important for debugging shared-memory parallel programs, because races result in unintended nondeterministic executions of the programs. Previous on-the-fly techniques that detect races in programs with inter-thread coordination, such as ordered synchronization, cannot guarantee that the race detected first is not preceded by events that also participate in a race. This paper presents a novel two-pass on-the-fly algorithm to detect the first races in such parallel programs. Detecting the first races is important for debugging, because removing them may make other races disappear, including those detected first by previous techniques. This technique therefore makes on-the-fly race detection more effective and practical for debugging parallel programs.


Parallel Computing Technologies | 2009

Visualizing Potential Deadlocks in Multithreaded Programs

Byung-Chul Kim; Sang Woo Jun; Dae Joon Hwang; Yong-Kee Jun

It is important to analyze and identify potential deadlocks in multithreaded programs even from a successful, deadlock-free execution, because the nondeterministic nature of such programs may hide the errors during testing. Visualizing the runtime behavior of locking operations makes it possible to debug such errors effectively, because it provides an intuitive understanding of the different feasible executions caused by nondeterminism. With previous visualization techniques, however, it is hard to capture the alternate orders imposed by locks, because they represent only a partial order over locking operations. This paper presents a novel graph, called the lock-causality graph, which represents alternate orders over locking operations. We implemented the graph in a visualization tool and demonstrate its power using the classical dining-philosophers problem written in Java. The experimental results show that the graph provides a simple but powerful representation of potential deadlocks in an execution instance that did not deadlock.
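The classic pattern such a graph exposes looks like the following pthreads sketch (illustrative only, not the paper's Java subject program): a run may finish without deadlocking, yet the two acquisition orders form a cycle between the locks, signaling a potential deadlock.

```c
/*
 * Illustrative only: thread A takes lock1 then lock2, thread B takes lock2
 * then lock1. A given run may finish cleanly, but the two acquisition orders
 * form a cycle lock1 -> lock2 -> lock1, i.e. a potential deadlock.
 */
#include <pthread.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg)
{
    pthread_mutex_lock(&lock1);
    pthread_mutex_lock(&lock2);      /* edge lock1 -> lock2 in the lock graph */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return arg;
}

static void *thread_b(void *arg)
{
    pthread_mutex_lock(&lock2);
    pthread_mutex_lock(&lock1);      /* edge lock2 -> lock1 closes the cycle  */
    pthread_mutex_unlock(&lock1);
    pthread_mutex_unlock(&lock2);
    return arg;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```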


Computational Science and Engineering | 2012

Dynamic Voltage Scaling for Power-aware Hierarchical Real-Time Scheduling Framework

Guy Martin Tchamgoue; Kyong Hoon Kim; Yong-Kee Jun

Recent research on hierarchical real-time scheduling frameworks has made it feasible to build large and complex real-time systems. A hierarchical real-time scheduling framework decomposes a system into multiple components, which may themselves be composed of sub-components. Component schedulability is analyzed based on the periodic resource model, where each component is guaranteed a certain amount of periodic resource supply per resource period. Although most research has focused on efficient scheduling of a component's task set, little work has been done on power-aware scheduling in hierarchical real-time scheduling frameworks, which has become an important issue in many recent real-time embedded applications. In this paper, we define a new problem for power-aware scheduling in a hierarchical framework with the periodic resource model. We provide optimal task-level and component-level static DVS (Dynamic Voltage Scaling) schemes, and a component-level dynamic DVS scheme to save additional energy at run time.
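As a rough first-order illustration of why static DVS is possible here (a simplification in standard notation, not the paper's derivation): under EDF, running at normalized speed s stretches execution times by 1/s, so any safe static speed must at least keep the component's utilization within its periodic resource bandwidth; the paper's optimal schemes refine this using the exact periodic resource analysis.

```latex
% First-order sketch only (assumed notation, classical proportional-slowdown
% argument). U = sum_i C_i / T_i is the component's utilization at full
% speed; the periodic resource Gamma = (Pi, Theta) supplies bandwidth
% Theta / Pi, and f_max is the maximum processor frequency.
\[
  \frac{U}{s} \;\le\; \frac{\Theta}{\Pi}
  \quad\Longrightarrow\quad
  s \;\ge\; U \,\frac{\Pi}{\Theta},
  \qquad
  f_{\mathrm{static}} \;=\; s \, f_{\max}
\]
% This bandwidth condition is necessary for any static speed s, but not by
% itself sufficient; the exact analysis uses the supply bound function of
% the periodic resource model.
```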

Collaboration


Dive into Yong-Kee Jun's collaborations.

Top Co-Authors

Ok-Kyoon Ha (Gyeongsang National University)
Guy Martin Tchamgoue (Gyeongsang National University)
Kyong Hoon Kim (Gyeongsang National University)
Young-Joo Kim (Electronics and Telecommunications Research Institute)
Mi-Young Park (Chonnam National University)
Hee-Dong Park (Gyeongsang National University)
Mun-Hye Kang (Gyeongsang National University)
Eu-Teum Choi (Gyeongsang National University)
Se-Won Park (Gyeongsang National University)