Dongeun Lee
Ulsan National Institute of Science and Technology
Publication
Featured research published by Dongeun Lee.
IEEE Transactions on Consumer Electronics | 2009
Heejung Lee; Yonghee Lee; Jonghun Lee; Dongeun Lee; Heonshik Shin
An efficient mobile video streaming system needs to cope with unstable network bandwidth and limited battery life. We propose a novel streaming system that jointly considers picture quality, bit rate, and energy consumption. Adaptively reducing the spatial resolution of a video stream proves more efficient, in terms of both picture quality and energy consumption, than conventional rate control that adjusts only the quantization parameter. We apply the same scheme to scalable coding for large-scale mobile video streaming, which extends the adaptation of an SVC stream to lower bit rates while maintaining temporal stability. Our approach improves picture quality by approximately 0.5 dB at low bit rates and reduces energy consumption by more than 50% compared to conventional video streaming.
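The trade-off the abstract describes can be sketched as a toy decision rule: at low target bit rates, lower the spatial resolution instead of only raising the quantization parameter (QP). The threshold and return values below are illustrative assumptions, not the paper's controller.

```python
# Hypothetical sketch: choose between downscaling and QP adjustment.
# The 400 kbps threshold and the (scale, qp_delta) values are assumptions.
def adapt_stream(target_kbps, low_rate_kbps=400):
    """Return (spatial_scale, qp_delta) for a hypothetical encoder knob."""
    if target_kbps < low_rate_kbps:
        return 0.5, 2    # halve resolution, raise QP only slightly
    return 1.0, 4        # keep full resolution, rely on QP adjustment

print(adapt_stream(300))  # -> (0.5, 2)
print(adapt_stream(800))  # -> (1.0, 4)
```

The point of the sketch is only the branch structure: resolution becomes the primary control variable once the rate budget drops below a threshold.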
IEICE Transactions on Communications | 2007
Donggeon Noh; Dongeun Lee; Heonshik Shin
Rapid advances in wireless sensor networks require routing protocols which can accommodate new types of power source and data of differing priorities. We describe a QoS-aware geographic routing scheme based on a solar-cell energy model. It exploits an algorithm (APOLLO) that periodically and locally determines the topological knowledge range (KR) of each node, based on an estimated energy budget for the following period which includes the current energy, the predicted energy consumption, and the energy expected from the solar cell. A second algorithm (PISA) runs on each node and uses its knowledge range to determine a route which meets the objectives of each priority level in terms of path delay, energy consumption and reliability. These algorithms maximize scalability and minimize memory requirements by employing a localized routing method which only uses geographic information about the host node and its adjacent neighbors. Simulation results confirm that APOLLO can determine an appropriate KR for each node and that PISA can meet the objectives of each priority level effectively.
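APOLLO's energy-budget idea can be illustrated with a minimal sketch: size each node's topological knowledge range (KR) to the energy expected to be available in the next period. The three budget terms come from the abstract; the per-hop cost model, parameter values, and bounds are illustrative assumptions.

```python
# Hypothetical sketch of APOLLO's budget-driven KR sizing (toy cost model).
def knowledge_range(current_energy, predicted_consumption, expected_solar,
                    cost_per_hop=2.0, max_kr=5):
    # Energy budget for the following period, per the abstract's three terms.
    budget = current_energy - predicted_consumption + expected_solar
    # One extra hop of topology knowledge per cost_per_hop units of budget,
    # clamped to at least 1 hop and at most max_kr hops.
    return max(1, min(max_kr, int(budget // cost_per_hop)))

# A node with 10 J on hand, 6 J predicted use, and 4 J expected harvest:
print(knowledge_range(10.0, 6.0, 4.0))  # budget 8 J -> KR of 4 hops
```

A node with a surplus (e.g., strong solar harvest) widens its KR and can make better-informed routing choices; an energy-poor node shrinks it to save memory and control traffic.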
international symposium on wireless pervasive computing | 2007
Donggeon Noh; Junu Kim; Joon-Ho Lee; Dongeun Lee; Hyuntaek Kwon; Heonshik Shin
Rapid advances in wireless sensor networks require routing protocols which can accommodate new types of power source and data of differing priorities. We describe a priority-based geographical routing scheme based on a solar-cell energy model. It exploits an algorithm (APOLLO) that periodically and locally determines the topological knowledge range of each node, based on an estimated energy budget for the following period which includes the current energy, the predicted energy consumption, and the energy expected from the solar cell. A second algorithm (PISA) runs on each node and uses its knowledge range to determine a route which meets the objectives of each priority level in terms of path delay and energy consumption. These algorithms maximize scalability and minimize memory requirements by employing a localized geographical routing method which only uses information about a node and its adjacent neighbors.
international conference on big data | 2014
Dongeun Lee; Jaesik Choi
Many large scale sensor networks produce tremendous data, typically as massive spatio-temporal data streams. We present a Low Complexity Sensing framework that, coupled with novel compressive sensing techniques, reduces computational and communication overheads significantly without substantially compromising the accuracy of sensor readings. More specifically, our sensing framework randomly samples time-series data in the temporal dimension first, then in the spatial dimension. Under some mild conditions, our sensing framework holds the same theoretical bound of reconstruction error, but is much simpler and easier to implement than existing compressive sensing frameworks. In experiments with real world environmental data sets, we demonstrate that the proposed framework outperforms two existing compressive sensing frameworks designed for spatio-temporal data.
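The temporal-then-spatial sampling order can be sketched as two independent random masks applied to a spatio-temporal block. This is an illustrative toy, not the paper's exact procedure; the sampling rates and array shapes are assumptions.

```python
import numpy as np

# Illustrative two-stage random sampling of a (sensors x timesteps) block:
# first along the temporal dimension, then along the spatial dimension.
rng = np.random.default_rng(0)

def two_stage_sample(data, time_rate=0.5, space_rate=0.5):
    """data: (sensors, timesteps) array -> (masked copy, boolean keep-mask)."""
    n_sensors, n_steps = data.shape
    # Stage 1: each sensor keeps a random subset of its time steps.
    t_keep = rng.random((n_sensors, n_steps)) < time_rate
    # Stage 2: only a random subset of sensors reports at all.
    s_keep = rng.random(n_sensors) < space_rate
    mask = t_keep & s_keep[:, None]
    # Unsampled entries are left for the reconstruction stage to fill in.
    return np.where(mask, data, np.nan), mask

data = rng.standard_normal((8, 100))
sampled, mask = two_stage_sample(data)
print("kept fraction:", mask.mean())
```

Because each stage is a cheap independent coin flip, a node needs no dense measurement matrix, which is where the "low complexity" relative to standard compressive sensing comes from.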
advanced information networking and applications | 2012
Dong Kun Noh; Dongeun Lee; Rony Teguh; Toshihisa Honma; Heonshik Shin
This paper proposes a reliable wildfire monitoring system based on a wireless sensor network (WSN) sparsely deployed in adverse conditions. The physical environment under consideration is characterized by asymmetric, irregular, and unreliable wireless links, inadequate Fresnel zone clearance, and routing problems, to name a few. We use reliable communication schemes on a fault-tolerant network topology, where sensory data are guaranteed to reach the base station with organized data storage and real-time visualization. Our approach has been validated experimentally for the case of peat-forest wildfire in southern Borneo where the fire breaks out frequently.
international conference on communications | 2007
Donggeon Noh; Dongeun Lee; Heonshik Shin
Most existing routing protocols for wireless sensor networks have energy efficiency as their main objective. But for WSN systems in which various types of data coexist, a different QoS (quality of service) metric is required. We describe a mission-oriented selective routing scheme that runs on each node of a network and invokes different algorithms for data of different priority, so as to determine a route which meets the objectives of each priority level in terms of path delay, energy consumption and reliability. Scalability and mobility are achieved by employing a localized routing method which only requires geographic information about the host node and its adjacent neighbors. Simulation results confirm that our scheme can meet the objectives of each priority level effectively.
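A localized, priority-aware next-hop choice of the kind described can be sketched as a weighted cost over a node's immediate neighbors. All field names, weights, and the cost form below are illustrative assumptions, not the paper's formulas.

```python
import math

# Toy sketch of priority-based localized forwarding: pick the neighbor
# minimizing a cost that mixes remaining distance to the destination (a
# proxy for path delay), transmission energy, and link unreliability,
# with weights set per priority level.
def next_hop(neighbors, dest, weights):
    w_delay, w_energy, w_rel = weights
    def cost(n):
        remaining = math.dist(n["pos"], dest)
        return (w_delay * remaining
                + w_energy * n["tx_cost"]
                + w_rel * (1.0 - n["reliability"]))
    return min(neighbors, key=cost)

neighbors = [
    {"id": "a", "pos": (1.0, 0.0), "tx_cost": 1.0, "reliability": 0.9},
    {"id": "b", "pos": (2.0, 0.0), "tx_cost": 3.0, "reliability": 0.5},
]
# Delay-critical traffic weights progress heavily and tolerates costly links:
print(next_hop(neighbors, dest=(3.0, 0.0), weights=(1.0, 0.1, 0.1))["id"])
```

Swapping the weight vector per priority level is the whole mechanism here: the same local information yields different routes for different traffic classes, and only the host node's and neighbors' positions are needed.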
statistical and scientific database management | 2016
Dongeun Lee; Alex Sim; Jaesik Choi; Kesheng Wu
Applications such as scientific simulations and power grid monitoring generate data so quickly that compression is essential to reduce storage requirements or transmission capacity. To achieve better compression, one is often willing to discard some repeated information. Existing lossy compression methods are primarily designed to minimize the Euclidean distance between the original data and the compressed data, but this measure of distance severely limits either reconstruction quality or compression performance. We propose a new class of compression method by redefining the distance measure with a statistical concept known as exchangeability. This approach captures the essential features of the data while reducing the storage requirement. In this paper, we report our design and implementation of such a compression method named IDEALEM. To demonstrate its effectiveness, we apply it to a set of power grid monitoring data and show that it can reduce the volume of data much more than the best known compression method while maintaining the quality of the compressed data. In these tests, IDEALEM captures extraordinary events in the data, while its compression ratios can far exceed 100.
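The exchangeability idea can be sketched minimally: a new block is replaced by a reference to an already-stored block whenever the two look statistically similar, judged here with a two-sample Kolmogorov-Smirnov statistic. This is only an illustration of the concept, not the actual IDEALEM encoder; the block size and threshold are assumptions.

```python
import numpy as np

def ks_stat(a, b):
    """Two-sample KS statistic: largest gap between empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

def compress(blocks, threshold=0.3):
    """Store a block only when no stored block is statistically similar;
    otherwise emit just the index of the matching stored block."""
    dictionary, encoded = [], []
    for blk in blocks:
        match = next((i for i, ref in enumerate(dictionary)
                      if ks_stat(blk, ref) < threshold), None)
        if match is None:                 # nothing similar stored yet
            dictionary.append(blk)
            encoded.append(("raw", len(dictionary) - 1))
        else:                             # statistically exchangeable
            encoded.append(("ref", match))
    return dictionary, encoded

rng = np.random.default_rng(1)
blocks = [rng.normal(0.0, 1.0, 64) for _ in range(10)]
dictionary, encoded = compress(blocks)
print(len(dictionary), "blocks stored out of", len(encoded))
```

Because similarity is judged on distributions rather than pointwise Euclidean error, blocks that differ sample-by-sample can still share one stored representative, which is how compression ratios can climb far beyond what distance-minimizing coders achieve.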
asilomar conference on signals, systems and computers | 2008
Dongeun Lee; Yonghee Lee; Heejung Lee; Jonghun Lee; Heonshik Shin
The scalable video coding (SVC) amendment of H.264/AVC offers three scalability dimensions: spatial, temporal, and quality scalabilities. Since these three dimensions can be easily combined, a single SVC bit stream has various extraction points, providing many sub-streams. When a specific bit rate is given, finding an optimal extraction point among various extraction points is a difficult problem due to the difference in the metric of each scalability dimension. We develop a method to efficiently transform three scalability dimensions into utility functions and propose an algorithm to find an efficient extraction path by using the points of inflection. Experimental results show that our approach can find a better extraction path as compared to the Joint Scalable Video Model (JSVM) basic extractor. Moreover, using the determined extraction path for an SVC bit stream is less complex owing to its monotonically increasing property.
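The monotonically increasing property of the extraction path can be illustrated with a toy filter over candidate extraction points: keep only points whose utility strictly improves as the bit rate grows. The candidate values are made up, and the paper's utility-function transform and inflection-point search are more elaborate than this sketch.

```python
# Toy sketch: reduce candidate (bit_rate, utility) extraction points to a
# path along which utility rises monotonically with rate.
def extraction_path(points):
    path, best = [], float("-inf")
    for rate, util in sorted(points):
        if util > best:          # utility must improve as rate grows
            path.append((rate, util))
            best = util
    return path

candidates = [(100, 0.50), (150, 0.45), (200, 0.70), (300, 0.65), (400, 0.80)]
print(extraction_path(candidates))  # -> [(100, 0.5), (200, 0.7), (400, 0.8)]
```

Given a target bit rate, an extractor then only needs to walk this short sorted path, which is why using a precomputed path is less complex than searching all spatial-temporal-quality combinations at extraction time.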
IEEE Sensors Journal | 2015
Dongeun Lee; Jaesik Choi; Heonshik Shin
Data generation rates of sensors are rapidly increasing, reaching a limit such that storage expansion cannot keep up with the data growth. We propose a new big data archiving scheme that handles the huge volume of sensor data with optimized lossy coding. Our scheme leverages spatial and temporal correlations inherent in typical sensor data. The spatio-temporal correlations, observed in quality adjustable sensor data, enable us to compress a massive amount of sensor data without compromising distinctive attributes in sensor signals. The fidelity of archived sensor data can also be decreased gradually. In order to maximize storage efficiency, we derive an optimal storage configuration for this data aging scenario. Experiments show outstanding compression ratios of our scheme and the optimality of a storage configuration that minimizes system-wide distortion of sensor data under a given storage space.
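One simple way to picture a storage configuration that minimizes system-wide distortion under a storage budget is a greedy allocation: repeatedly give the next unit of storage to the data block whose next fidelity layer removes the most distortion. The numbers and the greedy rule are illustrative assumptions, not the paper's derived optimum (greedy is optimal here only because the per-layer gains are assumed to diminish).

```python
import heapq

def allocate(budget, gains):
    """gains[i]: per-layer distortion reductions for block i, in
    descending order. Returns the number of layers kept per block."""
    levels = [0] * len(gains)
    # Max-heap (via negation) over each block's next available layer gain.
    heap = [(-g[0], i) for i, g in enumerate(gains) if g]
    heapq.heapify(heap)
    for _ in range(budget):
        if not heap:
            break
        _, i = heapq.heappop(heap)
        levels[i] += 1                     # keep one more layer of block i
        if levels[i] < len(gains[i]):
            heapq.heappush(heap, (-gains[i][levels[i]], i))
    return levels

# Two blocks, three layers each; a budget of 3 layer-units total.
print(allocate(3, [[5.0, 3.0, 1.0], [4.0, 2.0, 1.0]]))  # -> [2, 1]
```

The same marginal-gain view generalizes across heterogeneous sensor types: blocks whose signals tolerate coarser fidelity naturally receive fewer layers.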
Computing | 2015
Dongeun Lee; Junhee Ryu; Heonshik Shin
The quality-adjustable nature of sensor data and their gradually decreasing access frequency motivate a new data archiving scheme that can cope with rapidly increasing data generation rates by sensors. We propose scalable quality management of massive sensor data, which handles less frequent data access by discarding supplementary layers as time elapses, for efficient use of storage space. The efficacy of our scheme is shown by its capability to offer multiple fidelity levels compactly by exploiting spatio-temporal correlation, without compromising key features of sensor data. In order to store a huge amount of data from various sensor types efficiently, we also study the optimal storage configuration strategy using analytical models that capture the characteristics of our scheme. This strategy helps store sensor data blocks while minimizing total distortion under a given total rate budget.
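The aging mechanism described above can be sketched as a layered block whose supplementary layers are dropped at successive age thresholds. The thresholds, layer names, and layer counts below are illustrative assumptions.

```python
# Toy sketch of quality aging: a block is kept as a base layer plus
# supplementary layers, and one supplementary layer is dropped at each
# age threshold (7 days, 30 days, 1 year -- assumed values).
def age_block(layers, age_days, schedule=(7, 30, 365)):
    drops = sum(age_days >= t for t in schedule)
    keep = max(1, len(layers) - drops)   # the base layer is never dropped
    return layers[:keep]

layers = ["base", "enh1", "enh2", "enh3"]
print(age_block(layers, age_days=31))  # past 7 and 30 days -> drop 2 layers
```

Because each drop frees space without touching the base layer, old data remains readable at reduced fidelity, matching the less-frequent-access assumption.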