Jonghun Yoo
Seoul National University
Publication
Featured research published by Jonghun Yoo.
Society of Instrument and Control Engineers of Japan | 2006
Jonghun Yoo; Saehwa Kim; Seongsoo Hong
The ubiquitous robot companion (URC) project has recently been launched in Korea with the aim of putting networked service robots into practical use in residential environments by overcoming the technical challenges of home service robots. Embedded middleware is one such challenge, since it must handle critical and difficult problems such as real-time guarantees and software reconfigurability on a heterogeneous, distributed mechatronics system. In this paper, we adopt the software communications architecture (SCA), a middleware standard from the software-defined radio domain, and extend it for use in URC robots. We call the end result the robot software communications architecture (RSCA). The RSCA provides a standard operating environment for robot applications together with a framework that expedites the development of such applications. The operating environment consists of a real-time operating system, communication middleware, and deployment middleware, which collectively form a hierarchical structure. Specifically, the RSCA deployment middleware supports the reconfiguration of component-based robot applications, including installation, creation, start, stop, tear-down, and uninstallation. Since the original SCA lacks real-time guarantees and QoS support, we have significantly extended it while maintaining backward compatibility so that URC robot developers can use existing SCA tools. We have fully implemented RSCA and performed measurements to quantify its run-time performance. Our implementation clearly shows the viability of RSCA.
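To make the reconfiguration lifecycle concrete, the following is a minimal C sketch of the deployment steps the abstract enumerates (installation, creation, start, stop, tear-down, uninstallation). All names, the state enumeration, and the functions are hypothetical illustrations, not the RSCA deployment middleware API.

```c
/* Hypothetical sketch of the component lifecycle managed by a deployment
 * middleware such as RSCA; names are illustrative only. */
#include <stdio.h>

typedef enum { UNINSTALLED, INSTALLED, CREATED, STARTED } comp_state_t;

typedef struct {
    const char  *name;
    comp_state_t state;
} component_t;

static void deploy(component_t *c)    { c->state = INSTALLED;   printf("%s installed\n", c->name); }
static void create(component_t *c)    { c->state = CREATED;     printf("%s created\n", c->name); }
static void start(component_t *c)     { c->state = STARTED;     printf("%s started\n", c->name); }
static void stop(component_t *c)      { c->state = CREATED;     printf("%s stopped\n", c->name); }
static void tear_down(component_t *c) { c->state = INSTALLED;   printf("%s torn down\n", c->name); }
static void uninstall(component_t *c) { c->state = UNINSTALLED; printf("%s uninstalled\n", c->name); }

int main(void) {
    component_t nav = { "navigation", UNINSTALLED };
    deploy(&nav); create(&nav); start(&nav);       /* bring the component up */
    stop(&nav); tear_down(&nav); uninstall(&nav);  /* reconfigure: take it down */
    return 0;
}
```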
International Symposium on Industrial Embedded Systems | 2009
Manish Kumar; Jonghun Yoo; Seongsoo Hong
AUTOSAR, an open standard for automotive software, is currently being adopted by the automotive industry. Although the standard mainly focuses on software architecture, it also provides a development methodology. Unfortunately, the methodology in its current form is insufficient for industrial use because it describes only an incomplete set of activities, work products, and their dependencies. Specifically, (1) the activities needed to support COTS-based development are missing even though AUTOSAR encourages the use of COTS components; (2) it does not describe the roles and their responsibilities; and (3) it does not specify the mapping of activities onto a complete process model. In this paper, we propose a new software development process for AUTOSAR by extending the existing methodology. In doing so, we add activities for COTS component selection, evaluation, and integration. Then, we define specific roles and assign responsibilities to those roles. Finally, we describe the overall timeline of the various activities in detail by mapping them onto the V-model. To present the process, we use SPEM 2.0 notation, which is backward compatible with the AUTOSAR methodology and has improved expressiveness. We have composed the proposed process model using the Eclipse Process Framework Composer, which not only performs a sanity check of the model but also provides a way to publish it.
Embedded and Real-Time Computing Systems and Applications | 2012
Sungju Huh; Jonghun Yoo; Seongsoo Hong
Android smart phones are often reported to suffer from sluggish user interactions due to poor interactivity. This is because the Linux kernel may incur perceptibly long response times for user-interactive tasks. In particular, the completely fair scheduler (CFS) of Linux cannot systematically favor a user-interactive task over background tasks since it fails to effectively distinguish between them. Even if a user-interactive task is successfully identified, it can still suffer from high scheduling latency due to the non-preemptive nature of CFS. This paper presents framework-assisted task characterization and a virtual runtime-based CFS (VT-CFS) to address these problems. The former is a cooperative mechanism between the Android application framework and the kernel: it identifies a user-interactive task at the framework level and then enables the task scheduler to selectively promote the priority of the identified task at the kernel level. VT-CFS is an extension of CFS that allows a task to be preempted at any preemption tick, so that the scheduling latency of a user-interactive task is bounded by the tick interval. We have implemented our approach in Android 2.2 running on Linux kernel 2.6.32. Experimental results show that the response time of a user-interactive task is reduced by up to 31.4% while incurring only 0.9% more run-time overhead than the legacy system.
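A simplified sketch of the tick-time check that the abstract implies follows; it is a user-space illustration under assumed structures, not the authors' kernel patch. At every preemption tick, if the runnable task with the smallest virtual runtime is not the running task, the running task is preempted, so a newly woken user-interactive task waits at most one tick interval.

```c
/* Simplified sketch of a per-tick preemption check in the spirit of VT-CFS;
 * struct layout and function names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

struct task {
    uint64_t vruntime;    /* weighted virtual runtime, as maintained by CFS */
    bool     interactive; /* set by the framework-assisted characterization */
};

/* Invoked at every preemption tick: request a reschedule whenever the
 * leftmost (smallest-vruntime) runnable task is not the running one. */
static bool should_preempt(const struct task *curr,
                           const struct task *leftmost_runnable)
{
    return leftmost_runnable->vruntime < curr->vruntime;
}
```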
International Conference on Distributed Computing Systems | 2012
Sungju Huh; Jonghun Yoo; Seongsoo Hong
While Linux is the most favored operating system for an open source-based cloud data center, it falls short of expectations when it comes to fair-share multicore scheduling. The primary task scheduler of the mainline Linux kernel, CFS, cannot provide the desired level of fairness in a multicore system. CFS uses a weight-based load balancing mechanism to evenly distribute task weights among all cores. Contrary to expectations, this mechanism cannot guarantee fair-share scheduling since balancing loads among cores has nothing to do with bounding the differences in the virtual runtimes of tasks. To make matters worse, CFS allows a persistent load imbalance among cores. This paper presents a virtual runtime-based task migration algorithm which directly bounds the maximum virtual runtime difference among tasks. For a given pair of cores, our algorithm periodically partitions runnable tasks into two groups depending on their virtual runtimes and assigns each group to a dedicated core. In doing so, it bounds the load difference between the two cores by the largest weight in the task set and makes the core with larger virtual runtimes receive a larger load and thus run more slowly. As a result, it bounds the virtual runtime difference of any pair of tasks running on these cores by a constant. We have implemented the algorithm in the Linux kernel 2.6.38.8. Experimental results show that the maximum virtual runtime difference is 50.53 time units while incurring only 0.14% more run-time overhead than CFS.
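A hedged C sketch of the partitioning step described above: tasks are ordered by virtual runtime, and the tasks with the larger virtual runtimes are packed onto one core until that core holds at least half of the total weight, so it carries the larger load and its tasks advance more slowly. This is an illustration of the idea under assumed data structures, not the authors' kernel implementation.

```c
/* Illustrative partitioning of runnable tasks across a pair of cores by
 * virtual runtime; structures and names are hypothetical. */
#include <stdint.h>
#include <stdlib.h>

struct task {
    uint64_t vruntime;
    unsigned weight;
    int      cpu;      /* 0 or 1: assigned core */
};

static int by_vruntime_desc(const void *a, const void *b)
{
    const struct task *x = a, *y = b;
    return (x->vruntime < y->vruntime) - (x->vruntime > y->vruntime);
}

static void partition(struct task *tasks, size_t n)
{
    unsigned total = 0, acc = 0;

    for (size_t i = 0; i < n; i++)
        total += tasks[i].weight;

    qsort(tasks, n, sizeof *tasks, by_vruntime_desc);

    for (size_t i = 0; i < n; i++) {
        /* Core 0 takes the largest-vruntime tasks until it owns at least
         * half the weight; the load gap is bounded by the last weight added. */
        tasks[i].cpu = (acc * 2 < total) ? 0 : 1;
        if (tasks[i].cpu == 0)
            acc += tasks[i].weight;
    }
}
```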
International Conference on Consumer Electronics | 2008
Jonghun Yoo; Jiyong Park; Seongsoo Hong; Yeong-bae Yeo; Hyunchin Kim
Bridging IEEE 1394 buses is becoming important since it can be used to provide wireless connectivity among 1394 devices. Unfortunately, existing bridge mechanisms, such as the IEEE 1394.1 bridge and the transparent bridge, have practical limitations: the former does not support interoperability with legacy 1394 devices, and the latter requires a new hardware chipset for bridge implementation. We therefore propose a new bridge mechanism, called a mirroring bridge, to overcome these limitations. It supports interoperability with legacy 1394 devices by emulating remote nodes inside the bridge and by performing packet address translation, both of which can be implemented in software. We have implemented the proposed bridge mechanism and have succeeded in interconnecting legacy 1394 devices over an experimental WiMedia UWB network. The experimental results show that the average throughput of the mirroring bridge is 188.7 Mbps, which is 94.4% of the maximum throughput of the UWB chipset used.
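The core software mechanism is the address translation between locally emulated (mirror) nodes and the real nodes on the remote bus. The C sketch below illustrates that idea; the packet field layout and the translation table are hypothetical, not the actual IEEE 1394 header handling of the mirroring bridge.

```c
/* Illustrative packet address translation for a mirroring-style bridge;
 * fields and table are hypothetical. */
#include <stdint.h>

#define MAX_NODES 63

/* mirror_to_remote[local mirror node ID] = node ID on the remote bus */
static uint16_t mirror_to_remote[MAX_NODES];

struct fw_packet {
    uint16_t dest_node;   /* destination node ID in the packet header */
    /* ... remaining asynchronous packet fields ... */
};

/* Before forwarding a packet over the wireless link, replace the ID of the
 * locally emulated mirror node with the real node ID on the remote bus. */
static void translate_dest(struct fw_packet *pkt)
{
    if (pkt->dest_node < MAX_NODES)
        pkt->dest_node = mirror_to_remote[pkt->dest_node];
}
```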
Software: Practice and Experience | 2015
Sungju Huh; Jonghun Yoo; Seongsoo Hong
Android smartphones are often reported to suffer from sluggish user interactions due to poor interactivity. This is partly because Android and its task scheduler, the completely fair scheduler (CFS), may incur perceptibly long response times for user-interactive tasks. In particular, the Android framework cannot systematically favor user-interactive tasks over other background tasks since it does not distinguish between them. Furthermore, user-interactive tasks can suffer from high dispatch latency due to the non-preemptive nature of CFS. To address these problems, this paper presents framework-assisted task characterization and virtual time-based CFS. The former is a cross-layer resource control mechanism between the Android framework and the underlying Linux kernel. It identifies user-interactive tasks at the framework level, using the notion of a user-interactive task chain, and then enables the kernel scheduler to selectively promote the priorities of worker tasks appearing in the task chain to reduce the preemption latency. The latter is a cross-layer refinement of CFS in terms of interactivity. It allows a task to be preempted at every predefined period and adjusts the virtual runtimes of the identified user-interactive tasks to ensure that they are always scheduled before the other tasks in the run-queue when they wake up. As a result, the dispatch latency of a user-interactive task is reduced to a small value. We have implemented our approach in Android 4.1.2 running on Linux kernel 3.0.31. Experimental results show that the response time of a user interaction is reduced by up to 77.35% while incurring only negligible overhead.
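The wake-up adjustment mentioned above can be pictured with the following C sketch, which is an assumption-laden illustration rather than the authors' patch: when an identified user-interactive task wakes up, its virtual runtime is pulled just below the smallest virtual runtime in the run-queue so the scheduler picks it first.

```c
/* Simplified sketch of a wake-up vruntime adjustment for interactive tasks;
 * struct layout and the adjustment rule are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

struct task {
    uint64_t vruntime;
    bool     interactive;  /* identified via the user-interactive task chain */
};

static void on_wakeup(struct task *t, uint64_t min_vruntime_in_rq)
{
    if (t->interactive && t->vruntime >= min_vruntime_in_rq)
        t->vruntime = min_vruntime_in_rq ? min_vruntime_in_rq - 1 : 0;
}
```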
IEEE Transactions on Computers | 2013
Jonghun Yoo; Jaesoo Lee; Seongsoo Hong
A flash translation layer (FTL) provides file systems with transparent access to NAND flash memory. Although many applications running on it require real-time guarantees, it is difficult to provide tight worst-case execution time (WCET) bounds with conventional static WCET analysis, since an FTL exhibits a large variance in execution time depending on its runtime state. Parametric WCET analysis could be an effective alternative, but it is also challenging to formulate a parametric WCET function for an FTL program because traditional FTL architecture does not properly model the runtime availability of flash resources in its code structure. To overcome this limitation, we propose a Petri net-based FTL architecture in which a Petri net explicitly specifies the dependencies between FTL operations and the runtime resource availability. It comes with an FTL operation sequencer that derives at runtime the shortest sequence of FTL operations for servicing an incoming FTL request under the current resource availability. The sequencer computes the WCET of the request by merely summing the WCETs of only those FTL operations in the sequence. Our experimental results show the effectiveness of our FTL architecture: it allowed for tight WCET estimation, yielding WCETs a factor of 54 shorter than statically analyzed ones.
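The WCET composition step is simple once the sequencer has produced the operation sequence: the request's WCET is the sum of the per-operation WCET bounds. The C sketch below shows this step with hypothetical operation names and placeholder timing values; it is not the paper's actual interface.

```c
/* Minimal sketch of composing a request's WCET from per-operation WCETs;
 * operation names and microsecond values are placeholders. */
#include <stddef.h>
#include <stdint.h>

enum ftl_op { OP_READ_PAGE, OP_WRITE_PAGE, OP_ERASE_BLOCK, OP_COPYBACK };

/* Per-operation WCET bounds (microseconds), assumed to come from static
 * analysis of the small, state-independent operation routines. */
static const uint32_t op_wcet_us[] = {
    [OP_READ_PAGE]   = 80,
    [OP_WRITE_PAGE]  = 250,
    [OP_ERASE_BLOCK] = 1800,
    [OP_COPYBACK]    = 300,
};

/* Sum the WCETs of the operations selected by the sequencer for a request. */
static uint32_t request_wcet(const enum ftl_op *seq, size_t len)
{
    uint32_t total = 0;
    for (size_t i = 0; i < len; i++)
        total += op_wcet_us[seq[i]];
    return total;
}
```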
High Performance Computing and Communications | 2011
Vijeta Rathore; Jonghun Yoo; Jaesoo Lee; Seongsoo Hong
In a cloud computing system, virtual machines owned by different clients are co-hosted on a single physical machine. It is vital to isolate network performance between the clients to ensure fair usage of the constrained and shared network resources of the physical machine. Unfortunately, existing network performance isolation techniques are not effective for cloud computing systems because they are difficult to adopt at a large scale and require non-trivial modification to the network stack of a guest OS. In this paper, we propose a performance isolation-enabled virtual distributed Ethernet (PIE-VDE) to overcome these difficulties. It is a network virtualization software module running on a host OS. It aims to (1) allocate a fair share of outgoing link bandwidth to the co-hosted clients and (2) divide a client's share fairly among the virtual machines it owns. Our approach supports full virtualization of a guest OS, ease of wide-scale adoption, limited modification to the existing system, low run-time overhead, and work-conserving servicing. Experimental results show the effectiveness of the proposed mechanism: every client received at least 99.5% of its bandwidth share as specified by its weight.
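The per-client allocation reduces to a weighted proportional split of the outgoing link. The following C sketch shows that split under assumed structures; the formula and the omission of work-conserving reallocation of idle shares make it an illustration only, not PIE-VDE's implementation.

```c
/* Hedged sketch of a weighted split of outgoing link bandwidth among
 * co-hosted clients; structures and names are illustrative. */
#include <stddef.h>
#include <stdint.h>

struct client {
    unsigned weight;      /* relative share assigned to the client */
    uint64_t share_bps;   /* computed slice of the outgoing link */
};

/* Divide the physical link's outgoing capacity among clients in proportion
 * to their weights (work-conserving redistribution of idle shares omitted). */
static void split_link(struct client *clients, size_t n, uint64_t link_bps)
{
    unsigned total = 0;
    for (size_t i = 0; i < n; i++)
        total += clients[i].weight;
    for (size_t i = 0; i < n; i++)
        clients[i].share_bps = total ? link_bps * clients[i].weight / total : 0;
}
```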
International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing | 2006
Michael Barth; Jonghun Yoo; Saehwa Kim; Seongsoo Hong
The software communications architecture (SCA), which has been adopted as an SDR (software defined radio) Forum standard, provides a framework that successfully exploits common design patterns of distributed, real-time, and object-oriented embedded systems software. We have fully implemented SCA v2.2 in C++. During this implementation process, we encountered the lack of a suitable design pattern for releasing SCA applications. Unfortunately, design patterns for releasing objects have been neither extensively addressed nor well investigated, in contrast to creational design patterns. This is largely because such releasing design patterns are highly dependent on programming languages. In this paper, we investigate three viable design patterns for releasing SCA applications in C++ and discuss their pros and cons. In addition, we select the most portable, and thus most reusable, pattern among these alternatives, which we name the Vulture design pattern, and detail our specific implementation.
Journal of Information Science and Engineering | 2012
Jaesoo Lee; Jonghun Yoo; Yong-Seok Park; Seongsoo Hong
In a cloud server where multiple virtual machines owned by different clients are co-hosted, excessive traffic generated by a small group of clients may well jeopardize the quality of service of the other clients. It is thus very important to provide per-client network performance isolation in a cloud computing environment. Unfortunately, the existing techniques are not effective enough for a large cloud computing system, since they are difficult to adopt at scale and often require non-trivial modification to the established network protocols. To overcome these difficulties, we propose per-client network performance isolation using VDE (Virtual Distributed Ethernet) as a base framework. Our approach begins with per-client weight specification and supports client-aware fair-share scheduling and packet dispatching for both incoming and outgoing traffic. It also provides hierarchical fairness between a client and its virtual machines. Our approach supports full virtualization of a guest OS, wide-scale adoption, limited modification to the existing system, low run-time overhead, and work-conserving servicing. Our experimental results show the effectiveness of the proposed approach: every client received at least 99.4% of its bandwidth share as specified by its weight.
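The hierarchical fairness described above extends the single-level per-client split shown earlier with a second level: each client's slice of the link is further divided among that client's own virtual machines by their weights. The C sketch below illustrates the two-level split under assumed structures; it is not the paper's implementation.

```c
/* Hedged sketch of hierarchical bandwidth fairness: link -> clients -> VMs;
 * structures and names are illustrative only. */
#include <stddef.h>
#include <stdint.h>

struct vm     { unsigned weight; uint64_t share_bps; };
struct client { unsigned weight; struct vm *vms; size_t nvms; };

static void hierarchical_split(struct client *cl, size_t ncl, uint64_t link_bps)
{
    unsigned ctot = 0;
    for (size_t i = 0; i < ncl; i++)
        ctot += cl[i].weight;

    for (size_t i = 0; i < ncl; i++) {
        /* First level: the client's weighted slice of the link. */
        uint64_t cshare = ctot ? link_bps * cl[i].weight / ctot : 0;

        /* Second level: divide the client's slice among its VMs by weight. */
        unsigned vtot = 0;
        for (size_t j = 0; j < cl[i].nvms; j++)
            vtot += cl[i].vms[j].weight;
        for (size_t j = 0; j < cl[i].nvms; j++)
            cl[i].vms[j].share_bps = vtot ? cshare * cl[i].vms[j].weight / vtot : 0;
    }
}
```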