Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Heonshik Shin is active.

Publication


Featured research published by Heonshik Shin.


Real-Time Systems Symposium | 1996

Visual assessment of a real-time system design: a case study on a CNC controller

Namyun Kim; Minsoo Ryu; Seongsoo Hong; Manas Saksena; Chong-Ho Choi; Heonshik Shin

We describe our experiments on a real-time system design, focusing on design alternatives such as scheduling jitter, sensor-to-output latency, intertask communication schemes and the system utilization. The prime objective of these experiments was to evaluate a real-time design produced using the period calibration method (Gerber et al., 1995) and thus identify the limitations of the method. We chose a computerized numerical control (CNC) machine as our target real-time system and built a realistic controller and a plant simulator. Our results were extracted from a controlled series of more than a hundred test controllers obtained by varying four test variables. This study unveils many interesting facts: average sensor-to-output latency is one of the most dominating factors in determining control quality; the effect of scheduling jitter appears only when the average sensor-to-output latency is sufficiently small; and loop processing periods are another dominating factor of performance. Based on these results, we propose a new communication scheme and a new objective function for the period calibration method.
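The study identifies average sensor-to-output latency and scheduling jitter as the dominant factors in control quality. As a minimal illustrative sketch (function and variable names are hypothetical, not from the paper), both metrics can be computed from per-iteration timestamp pairs of a control loop:

```python
# Hypothetical sketch: computing the two metrics the study finds dominant,
# average sensor-to-output latency and scheduling jitter, from
# (sensor_read_time, output_write_time) pairs of a control loop.

def latency_stats(samples):
    """samples: list of (sensor_time, output_time) pairs in ms."""
    latencies = [out - sense for sense, out in samples]
    avg_latency = sum(latencies) / len(latencies)
    # Jitter modeled here as the spread of per-iteration latency.
    jitter = max(latencies) - min(latencies)
    return avg_latency, jitter

samples = [(0.0, 2.0), (10.0, 12.5), (20.0, 21.5)]
avg, jit = latency_stats(samples)
```

Per the abstract's finding, a design would first drive `avg` down; only once it is sufficiently small does reducing `jit` further pay off.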


Real-Time Systems Symposium | 1994

An accurate worst case timing analysis technique for RISC processors

Sung-Soo Lim; Young Hyun Bae; Gyu Tae Jang; Byung-Do Rhee; Sang Lyul Min; Chang Yun Park; Heonshik Shin; Kunsoo Park; Chong Sang Kim

An accurate and safe estimation of a task's worst case execution time (WCET) is crucial for reasoning about the timing properties of real-time systems. In RISC processors, the execution time of a program construct (e.g., a statement) is affected by various factors such as cache hits/misses and pipeline hazards, and these factors impose serious problems in analyzing the WCETs of tasks. To analyze the timing effects of a RISC's pipelined execution and cache memory, this paper proposes extensions of the original timing schema (Shaw, 1989), where the timing information associated with each program construct is a simple time bound. We associate with each program construct what we call a WCTA (Worst Case Timing Abstraction), which contains detailed timing information of every execution path that might be the worst case execution path of the program construct. This extension leads to a revised timing schema that is similar to the original timing schema except that concatenation and pruning operations on WCTAs are newly defined to replace the add and max operations on time bounds in the original timing schema. Our revised timing schema accurately accounts for the timing effects of pipelined execution and cache memory not only within but also across program constructs. This paper also reports on preliminary results of WCET analyses for a pipelined processor. Our results show that up to 50% tighter WCET bounds can be obtained by using the revised timing schema.
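The concatenation and pruning operations on WCTAs can be illustrated with a deliberately simplified model in which each WCTA is just a set of candidate path times (the real WCTA carries pipeline and cache state as well, so this is a sketch of the schema's shape, not the paper's method):

```python
# Toy sketch of the revised timing schema. Each WCTA is modeled as a set
# of candidate worst-case path times; real WCTAs also carry pipeline and
# cache state, which is what makes pruning non-trivial in the paper.

def concatenate(wcta_a, wcta_b):
    """Sequential composition: every path of A followed by every path of B."""
    return {a + b for a in wcta_a for b in wcta_b}

def prune(wcta):
    """Discard candidates that can never be the worst case. With scalar
    timings only the maximum survives; with pipeline/cache state,
    mutually incomparable candidates would all be kept."""
    return {max(wcta)}

stmt1 = {10, 14}   # two feasible paths through the first construct
stmt2 = {5, 7}     # two feasible paths through the second
combined = prune(concatenate(stmt1, stmt2))
```

In the scalar case this degenerates to the original schema's add and max; the paper's contribution is precisely that the richer WCTA state lets concatenation account for cross-construct pipeline and cache effects.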


Embedded Software | 2006

Scratchpad memory management for portable systems with a memory management unit

Bernhard Egger; Jaejin Lee; Heonshik Shin

In this paper, we present a dynamic scratchpad memory allocation strategy targeting a horizontally partitioned memory subsystem for contemporary embedded processors. The memory subsystem is equipped with a memory management unit (MMU), and physically addressed scratchpad memory (SPM) is mapped into the virtual address space. A small minicache is added to further reduce energy consumption and improve performance. Using the MMU's page fault exception mechanism, we track page accesses and copy frequently executed code sections into the SPM before they are executed. Because the minimal transfer unit between the external memory and the SPM is a single memory page, good code placement is of great importance for the success of our method. Based on profiling information, our postpass optimizer divides the application binary into pageable, cacheable, and uncacheable regions. The latter two are placed at fixed locations in the external memory, and only pageable code is copied on demand to the SPM from the external memory. Pageable code is grouped into sections whose sizes are equal to the physical page size of the MMU. We discuss code grouping techniques and also analyze the effect of the minicache on execution time and energy consumption. We evaluate our SPM allocation strategy with twelve embedded applications, including MPEG-4. Compared to a fully-cached configuration, on average we achieve a 12% improvement in runtime performance and a 33% reduction in energy consumption by the memory system.
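The core runtime mechanism, copying a pageable code page into the SPM when the MMU raises a page fault for it, can be sketched as follows (class name, eviction policy, and fields are illustrative assumptions, not the paper's implementation):

```python
# Hedged sketch of a page-fault-driven SPM manager: pageable code pages
# are copied into scratchpad on first access. The FIFO eviction policy
# and all names here are illustrative, not the authors' design.

class SpmManager:
    def __init__(self, spm_pages):
        self.capacity = spm_pages
        self.resident = []          # pages currently in the SPM (FIFO order)
        self.transfers = 0          # pages copied in from external memory

    def on_page_fault(self, page):
        """Invoked via the MMU's page fault exception for pageable code."""
        if page in self.resident:
            return                  # already in SPM; nothing to do
        if len(self.resident) == self.capacity:
            self.resident.pop(0)    # evict the oldest page (simple FIFO)
        self.resident.append(page)
        self.transfers += 1         # model the external-memory-to-SPM copy

mgr = SpmManager(spm_pages=2)
for p in [1, 2, 1, 3, 2]:           # a toy sequence of faulting code pages
    mgr.on_page_fault(p)
```

Because each transfer moves a whole MMU page, the abstract's point follows directly: grouping hot code into as few pages as possible minimizes `transfers`.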


International Conference on Consumer Electronics | 2008

Reducing IPTV Channel Switching Time using H.264 Scalable Video Coding

Yonghee Lee; Jonghun Lee; In-Kwon Kim; Heonshik Shin

In this paper, we aim to reduce the channel switching time for Internet protocol TV (IPTV) without unduly affecting the picture quality. For this purpose, we propose to adopt the H.264 scalable video coding scheme, for which we allocate the base layer and enhancement layers of each channel to two separate multicast groups. In preview mode, users access the base layers of different channels already stored in a buffer, so they can switch between channels without delay. In watching mode, they use both the base and enhancement layers of the selected channel to achieve full quality. While providing fast channel switching in preview mode, our approach maintains picture quality similar to that of single-layer coding. An experimental result, for example, shows that, even with complicated scenes, a 1.5 Mbps scalable video stream for IPTV achieves good picture quality with a PSNR of 33.7 dB.
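The two-mode scheme can be sketched in a few lines: base layers of all channels are buffered continuously, so preview switching needs no multicast join, while full quality requires joining the selected channel's enhancement-layer group (function and parameter names are hypothetical, not the authors' code):

```python
# Illustrative sketch of the preview/watching distinction: buffered base
# layers give zero-delay switching; enhancement layers arrive only after
# joining that channel's multicast group. All names are illustrative.

def layers_to_decode(mode, channel, base_buffer, joined_enh):
    """Return which layers of `channel` the client can decode right now."""
    layers = []
    if channel in base_buffer:
        layers.append("base")           # already buffered: instant preview
    if mode == "watching" and channel in joined_enh:
        layers.append("enhancement")    # full quality after multicast join
    return layers

base_buffer = {1, 2, 3}                 # base layers of all channels buffered
preview = layers_to_decode("preview", 2, base_buffer, joined_enh=set())
watch = layers_to_decode("watching", 2, base_buffer, joined_enh={2})
```

The design trade-off is bandwidth for latency: the client continuously receives low-rate base layers it may never watch, in exchange for delay-free switching.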


Languages, Compilers, and Tools for Embedded Systems | 2007

Dynamic data scratchpad memory management for a memory subsystem with an MMU

Hyungmin Cho; Bernhard Egger; Jaejin Lee; Heonshik Shin

In this paper, we propose a dynamic scratchpad memory (SPM) management technique for a horizontally-partitioned memory subsystem with an MMU. The memory subsystem consists of a relatively cheap direct-mapped data cache and SPM. Our technique loads required global data and stack pages into the SPM on demand when a function is called. A scratchpad memory manager loads/unloads the data pages and maintains a page table for the MMU. Our approach is based on post-pass analysis and optimization techniques, and it handles the whole program including libraries. The data page mapping is determined by solving an integer linear programming (ILP) formulation that approximates our demand paging technique. The ILP model uses a dynamic call graph annotated with the number of memory accesses and/or cache misses obtained by profiling. We evaluate our technique on thirteen embedded applications. We compare the results to a reference system with a 4-way set associative data cache and the ideal case with the same 4-way cache and SPM, where all global and stack data is placed in the SPM. On average, our approach reduces the total system energy consumption by 8.1% with no performance degradation. This is equivalent to exploiting 60% of the room available in energy reduction between the reference case and the ideal case.
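The paper solves an ILP over profiled access counts to choose the data page mapping; as a hedged stand-in for the flavor of that optimization (not the authors' formulation), a greedy knapsack heuristic places the pages that avoid the most profiled cache misses per byte of SPM:

```python
# Greedy stand-in for the paper's ILP page mapping: rank data pages by
# profiled cache misses per byte and place them into the SPM until it
# is full. Page names and numbers below are made up for the example.

def place_pages(pages, spm_bytes):
    """pages: list of (name, size_bytes, profiled_misses) tuples."""
    ranked = sorted(pages, key=lambda p: p[2] / p[1], reverse=True)
    placed, used = [], 0
    for name, size, _ in ranked:
        if used + size <= spm_bytes:    # skip pages that no longer fit
            placed.append(name)
            used += size
    return placed

pages = [("stack_a", 256, 900), ("glob_b", 512, 1000), ("glob_c", 256, 100)]
chosen = place_pages(pages, spm_bytes=512)
```

An ILP solver would consider all placements jointly (and, per the abstract, per-call-site behavior from the dynamic call graph), whereas the greedy pass can miss combinations; it is shown here only to make the objective concrete.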


ACM Transactions on Embedded Computing Systems | 2008

Dynamic scratchpad memory management for code in portable systems with an MMU

Bernhard Egger; Jaejin Lee; Heonshik Shin

In this work, we present a dynamic memory allocation technique for a novel, horizontally partitioned memory subsystem targeting contemporary embedded processors with a memory management unit (MMU). We propose to replace the on-chip instruction cache with a scratchpad memory (SPM) and a small minicache. Serializing the address translation with the actual memory access enables the memory system to access either only the SPM or the minicache. Independent of the SPM size and based solely on profiling information, a postpass optimizer classifies the code of an application binary into a pageable and a cacheable code region. The latter is placed at a fixed location in the external memory and cached by the minicache. The former, the pageable code region, is copied on demand to the SPM before execution. Both the pageable code region and the SPM are logically divided into pages the size of an MMU memory page. Using the MMU's page-fault exception mechanism, a runtime scratchpad memory manager (SPMM) tracks page accesses and copies frequently executed code pages to the SPM before they get executed. In order to minimize the number of page transfers from the external memory to the SPM, good code placement techniques become more important with increasing sizes of the MMU pages. We discuss code-grouping techniques and provide an analysis of the effect of the MMU's page size on execution time, energy consumption, and external memory accesses. We show that by using the data cache as a victim buffer for the SPM, significant energy savings are possible. We evaluate our SPM allocation strategy with fifteen applications, including H.264, MP3, MPEG-4, and PGP. The proposed memory system requires 8% less die area compared to a fully-cached configuration. On average, we achieve a 31% improvement in runtime performance and a 35% reduction in energy consumption with an MMU page size of 256 bytes.


Real-Time Systems Symposium | 1995

Worst case timing analysis of RISC processors: R3000/R3010 case study

Yerang Hur; Young Hyun Bae; Sung-Soo Lim; Sung-Kwan Kim; Byung-Do Rhee; Sang Lyul Min; Chang Yun Park; Minsuk Lee; Heonshik Shin; Chong Sang Kim

This paper presents a case study of worst case timing analysis for a RISC processor. The target machine consists of the R3000 CPU and R3010 FPA (Floating Point Accelerator). This target machine is typical of a RISC system with pipelined execution units and cache memories. Our methodology is an extension of the existing timing schema. The extended timing schema provides means to reason about the execution time variation of a program construct by surrounding program constructs due to pipelined execution and cache memories of RISC processors. The main focus of this paper is on explaining the necessary steps for performing timing analysis of a given target machine within the extended timing schema framework. This paper also gives results from experiments using a timing tool for the target machine that is built based on the extended timing schema approach.


Journal of Networks | 2008

Low-Latency Geographic Routing for Asynchronous Energy-Harvesting WSNs

Donggeon Noh; Heonshik Shin

Research on data routing strategies for wireless sensor networks (WSNs) has largely focused on energy efficiency. However, rapid advances in WSNs require routing protocols that can accommodate new types of energy source and data requiring short end-to-end delay. In this paper, we describe a duty-cycle-based low-latency geographic routing scheme for asynchronous energy-harvesting WSNs. It uses an algorithm (D-APOLLO) that periodically and locally determines the topological knowledge range and duty-cycle of each node, based on an estimated energy budget for each period which includes the currently available energy, the predicted energy consumption, and the energy expected from the harvesting device. This facilitates a low-latency routing scheme which considers both geographic and duty-cycle information about the neighbors of a node, so that data can be routed efficiently and delivered to the sink as quickly as possible. Simulation results confirm that our routing scheme can deliver data to the sink with high reliability and low latency.
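The key routing idea, weighing a neighbor's geographic progress toward the sink against the wait for its duty-cycled wake-up, can be sketched as a next-hop selection function (this is an illustrative cost model, not D-APOLLO itself; all names and weights are assumptions):

```python
# Sketch of duty-cycle-aware geographic forwarding: among the neighbors
# that make progress toward the sink, pick the one minimizing expected
# delay, modeled as wake-up wait plus a penalty for small progress.

import math

def next_hop(pos, sink, neighbors, now):
    """neighbors: list of (id, (x, y) position, next_wake_time) tuples."""
    def progress(npos):
        return math.dist(pos, sink) - math.dist(npos, sink)
    best, best_cost = None, float("inf")
    for nid, npos, wake in neighbors:
        adv = progress(npos)
        if adv <= 0:
            continue                  # never forward away from the sink
        wait = max(0.0, wake - now)   # asynchronous duty-cycle wake-up wait
        cost = wait + 1.0 / adv       # smaller is better (toy cost model)
        if cost < best_cost:
            best, best_cost = nid, cost
    return best

neighbors = [("a", (5.0, 0.0), 3.0),   # closest to sink, but asleep longest
             ("b", (4.0, 0.0), 0.5),   # slightly less progress, wakes soon
             ("c", (-1.0, 0.0), 0.0)]  # awake, but moves away from the sink
hop = next_hop((0.0, 0.0), (10.0, 0.0), neighbors, now=0.0)
```

The example makes the trade-off visible: the geographically best neighbor loses to one that wakes sooner, which is exactly why purely geographic forwarding performs poorly under asynchronous duty cycling.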


IEEE Transactions on Parallel and Distributed Systems | 2008

Reconfigurable Service Composition and Categorization for Power-Aware Mobile Computing

Eunjeong Park; Heonshik Shin

The requirement of agile adaptation to varying resource constraints in mobile systems motivates the use of a service-oriented architecture (SOA), which can support the composition of two or more services to form a complex service. In this paper, we propose SOA-based middleware to support QoS control of mobile applications and to configure an energy-efficient service composition graph. We categorize services into two layers: functionality-centric services, which are connected to create a complex service to meet the user's intentions, and resource-centric services, which undertake distributed functionality-centric services in a way that increases the success rate of service composition while reducing contention at specific service nodes. We also present a service routing algorithm to balance the resource consumption of service providers on a service-overlay network. Through simulation of power-aware service composition using a realistic model based on ns-2 and traced data, we demonstrate that our approach can help both the mobile devices and the servers in a service-overlay network to reduce energy consumption without an increase in response time.
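The load-balancing intuition behind the service routing algorithm, steering requests away from hot providers while keeping per-request energy low, can be sketched with a simple weighted cost (the cost function, weights, and all names are illustrative assumptions, not the paper's algorithm):

```python
# Hedged sketch of service routing on a service-overlay network: for a
# requested service, choose the provider whose current load and per-request
# energy cost are jointly lowest. Weights and names are illustrative.

def route_service(service, providers, alpha=0.5):
    """providers: {name: {"services": set, "load": float, "energy": float}}."""
    candidates = {n: p for n, p in providers.items() if service in p["services"]}
    # Weighted sum of normalized load and energy; alpha tunes the balance.
    return min(candidates, key=lambda n: alpha * candidates[n]["load"]
                                         + (1 - alpha) * candidates[n]["energy"])

providers = {
    "node1": {"services": {"transcode"}, "load": 0.9, "energy": 0.2},
    "node2": {"services": {"transcode"}, "load": 0.3, "energy": 0.4},
}
chosen = route_service("transcode", providers)
```

Here the cheaper-per-request but heavily loaded `node1` is passed over, reflecting the abstract's goal of reducing contention at specific service nodes.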


Wireless and Mobile Computing, Networking and Communications | 2005

Resource allocation for scalable video broadcast in wireless cellular networks

Junu Kim; Jinsung Cho; Heonshik Shin

Video broadcast services have become increasingly popular on packet-based wireless networks, such as 1xEV-DO and HSDPA, which support high data rates. In this paper we propose a resource allocation algorithm for scalable video broadcast over such wireless networks. Our algorithm allocates time slots among the video layers of a scalable video and applies adaptive modulation and coding (AMC) to each video layer to maximize the sum of utilities for heterogeneous users with varying QoS requirements. It also considers competing video sessions and allocates time slots among them according to user preferences. Additionally, its polynomial time-complexity allows for online resource allocation, which is necessary for real-time video services. Simulation experiments show that our algorithm outperforms a single-layer video broadcast with fixed modulation and coding (FMC), used in broadcast and multicast services (BCMCS) in the CDMA2000 system, and produces a near-optimal allocation.
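The allocation problem in the abstract, distributing time slots across video layers to maximize total user utility, admits a simple greedy illustration: repeatedly assign the next slot to the layer with the highest marginal utility (the utility numbers and names below are made up for the example; the paper's algorithm is a polynomial-time allocation, not necessarily this greedy rule):

```python
# Illustrative greedy slot allocation: each slot goes to the video layer
# whose next slot yields the largest utility gain. Diminishing returns
# per layer are encoded as decreasing marginal-utility lists.

def allocate_slots(marginal_utility, total_slots):
    """marginal_utility: {layer: [gain of 1st slot, 2nd slot, ...]}."""
    alloc = {layer: 0 for layer in marginal_utility}
    for _ in range(total_slots):
        # Pick the layer whose next slot yields the largest utility gain.
        layer = max(alloc, key=lambda l: marginal_utility[l][alloc[l]]
                    if alloc[l] < len(marginal_utility[l]) else float("-inf"))
        alloc[layer] += 1
    return alloc

gains = {"base": [10, 4, 1], "enh1": [6, 3, 1], "enh2": [2, 1, 1]}
alloc = allocate_slots(gains, total_slots=4)
```

The diminishing marginal utilities capture why the base layer is served first and why a low-value enhancement layer may receive no slots at all when sessions compete.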

Collaboration


Dive into Heonshik Shin's collaborations.

Top Co-Authors

Dongeun Lee, Ulsan National Institute of Science and Technology
Yongwoo Cho, Seoul National University
Junhee Ryu, Seoul National University
Yonghee Lee, Seoul National University
Sang Lyul Min, Seoul National University
Sangsoo Park, Seoul National University