
Publication


Featured research published by Soonmin Hwang.


Computer Vision and Pattern Recognition | 2015

Multispectral pedestrian detection: Benchmark dataset and baseline

Soonmin Hwang; Jaesik Park; Namil Kim; Yukyung Choi; In So Kweon

With the increasing interest in pedestrian detection, pedestrian datasets have also been the subject of research over the past decades. However, most existing datasets focus on the color channel, whereas a thermal channel is helpful for detection even in dark environments. With this in mind, we propose a multispectral pedestrian dataset that provides well-aligned color-thermal image pairs captured by beam-splitter-based special hardware. The color-thermal dataset is as large as previous color-based datasets and provides dense annotations, including temporal correspondences. With this dataset, we introduce multispectral ACF, an extension of aggregated channel features (ACF) that handles color-thermal image pairs simultaneously. Multispectral ACF reduces the average miss rate of ACF by 15% and achieves a further advance on the pedestrian detection task.
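
A minimal sketch of the channel-aggregation idea for an aligned color-thermal pair, not the authors' implementation: LUV color channels plus gradient channels for both modalities, averaged over small blocks. The block size, orientation-bin count, and channel choices below are illustrative assumptions; the images are assumed to be the same size and pre-aligned.

    import cv2
    import numpy as np

    def gradient_channels(gray, n_bins=6):
        # Gradient magnitude plus orientation-binned magnitude channels (soft binning omitted).
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        mag = np.sqrt(gx ** 2 + gy ** 2)
        ori = np.mod(np.arctan2(gy, gx), np.pi)
        bins = np.minimum((ori / np.pi * n_bins).astype(int), n_bins - 1)
        return [mag] + [mag * (bins == b) for b in range(n_bins)]

    def aggregate(channel, block=4):
        # Average-pool each channel over block x block cells.
        h, w = channel.shape
        c = channel[:h - h % block, :w - w % block]
        return c.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

    def multispectral_acf(bgr, thermal, block=4):
        luv = cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv).astype(np.float32)
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
        t = thermal.astype(np.float32)
        channels = [luv[..., i] for i in range(3)]     # color channels
        channels += gradient_channels(gray)            # color gradient channels
        channels += [t] + gradient_channels(t)         # thermal intensity + gradients
        return np.stack([aggregate(c, block) for c in channels], axis=-1)

The resulting feature map would feed a boosted detector in an ACF-style pipeline; only the channel construction is sketched here.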


International Conference on Ubiquitous Robots and Ambient Intelligence | 2016

Fast multiple objects detection and tracking fusing color camera and 3D LIDAR for intelligent vehicles

Soonmin Hwang; Namil Kim; Yukyung Choi; Seokju Lee; In So Kweon

Detection and tracking of multiple objects (DATMO) is one of the most important components of many robotics and intelligent-vehicle applications. However, most DATMO approaches are difficult to apply in real-world settings because of their high computational complexity. In this paper, we propose an efficient DATMO framework that fully exploits the complementary information from a color camera and a 3D LIDAR. For high efficiency, we present a segmentation scheme that uses both 2D and 3D information to produce accurate segments very quickly. In our experiments, we show that our framework achieves a higher speed (~4 Hz) than the state-of-the-art methods reported on the KITTI benchmark (>1 Hz).
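
The core of any camera-LIDAR fusion step is associating 3D points with 2D detections. The sketch below illustrates that association only; the calibration matrices K, R, t are assumed to come from an offline calibration and are not part of the paper's text.

    import numpy as np

    def project_lidar_to_image(points_xyz, K, R, t):
        # points_xyz: (N, 3) LIDAR points; returns (N, 2) pixel coordinates and a validity mask.
        cam = points_xyz @ R.T + t             # LIDAR frame -> camera frame
        in_front = cam[:, 2] > 0.1             # keep only points in front of the camera
        uvw = cam @ K.T
        uv = uvw[:, :2] / uvw[:, 2:3]
        return uv, in_front

    def points_in_box(uv, valid, box):
        # Collect indices of projected points falling inside a 2D detection box (x1, y1, x2, y2).
        x1, y1, x2, y2 = box
        inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
        return np.where(inside & valid)[0]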


Intelligent Robots and Systems | 2016

Thermal Image Enhancement using Convolutional Neural Network

Yukyung Choi; Namil Kim; Soonmin Hwang; In So Kweon

With the advent of commodity autonomous mobile platforms, recognition under extreme conditions such as nighttime and erratic illumination is becoming increasingly important. This need has motivated approaches that use multi-modal sensors, which can complement each other. A thermal camera provides a rich source of temperature information that is less affected by changing illumination or background clutter. However, existing thermal cameras have a lower resolution than RGB cameras, which makes it difficult to fully exploit their information in recognition tasks. To mitigate this, we aim to enhance low-resolution thermal images, guided by an extensive analysis of existing approaches. To this end, we introduce Thermal Image Enhancement using a Convolutional Neural Network (CNN), called TEN, which directly learns an end-to-end mapping from a single low-resolution image to the desired high-resolution image. In addition, we examine various image domains to find the best representation for thermal enhancement. Overall, we propose the first CNN-based thermal image enhancement method guided by RGB data. We provide extensive experiments designed to evaluate image quality and the performance of several object recognition tasks such as pedestrian detection, visual odometry, and image registration.
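
A minimal PyTorch sketch of the general idea (an SRCNN-style network mapping an upsampled low-resolution thermal image to an enhanced one); layer widths, kernel sizes, and the optimizer settings are illustrative placeholders, not the published TEN configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ThermalEnhanceNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.feat = nn.Conv2d(1, 64, kernel_size=9, padding=4)   # feature extraction
            self.map = nn.Conv2d(64, 32, kernel_size=1)              # non-linear mapping
            self.rec = nn.Conv2d(32, 1, kernel_size=5, padding=2)    # reconstruction

        def forward(self, x):            # x: (B, 1, H, W) bicubically upsampled thermal image
            x = F.relu(self.feat(x))
            x = F.relu(self.map(x))
            return self.rec(x)

    # Training sketch: minimize pixel-wise MSE against high-resolution targets.
    net = ThermalEnhanceNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()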


IEEE Intelligent Vehicles Symposium | 2016

Thermal-infrared based drivable region detection

Jae Shin Yoon; Kibaek Park; Soonmin Hwang; Namil Kim; Yukyung Choi; Francois Rameau; In So Kweon

Drivable region detection is challenging because various road types, occlusions, and poor illumination conditions must be handled in outdoor environments, particularly at night. Over the past decade, many efforts have been made to solve these problems; however, most existing methods are designed for visible-light cameras, which are inherently inefficient under low-light conditions. In this paper, we present a drivable region detection algorithm designed for thermal-infrared cameras to overcome these problems. The novelty of the proposed method lies in its online road initialization with a highly scene-adaptive sampling mask. Furthermore, our prior road-information extraction is tailored to enforce temporal consistency across a series of images. We also report a large number of experiments in various scenarios (on-road, off-road, and cluttered road). A total of about 6000 manually annotated images are made available on our website for the research community. Using this dataset, we compare our method against multiple state-of-the-art approaches, including convolutional neural network (CNN) based methods, to emphasize the robustness of our approach in challenging situations.
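
A deliberately simplified illustration of online road-model initialization in a thermal image: sample candidate road pixels near the bottom of the frame, fit a simple intensity model, and label similar pixels as drivable. The paper's scene-adaptive sampling mask and temporal-consistency terms are not reproduced here; the seed-row count and threshold are arbitrary assumptions.

    import numpy as np

    def simple_drivable_mask(thermal, seed_rows=40, k=2.0):
        t = thermal.astype(np.float32)
        seed = t[-seed_rows:, :]                  # assume the lower image region is road
        mu, sigma = seed.mean(), seed.std() + 1e-6
        return np.abs(t - mu) < k * sigma         # pixels within k sigma of the road model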


International Conference on Ubiquitous Robots and Ambient Intelligence | 2015

Geometrical calibration of multispectral cameras

Namil Kim; Yukyung Choi; Soonmin Hwang; Kibaek Park; Jae Shin Yoon; In So Kweon

In this paper, we introduce a novel calibration pattern board for visible and thermal camera calibration. Our pattern board is easy to make, handy to move, and efficient to heat, and it preserves a uniform thermal radiance for a long time. The proposed method can be employed in single- and multi-spectral camera systems, as well as in beam-splitter or stereo camera setups. As a result, our method shows good performance compared with previous works, and we also show that the calibrated system is accurate enough for use in ADAS applications.
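
A hedged OpenCV sketch of the calibration pipeline such a board enables, assuming the heated pattern yields chessboard-like corners detectable in both modalities. The pattern dimensions, square size, and variable names (rgb_views, thermal_views) are illustrative assumptions.

    import cv2
    import numpy as np

    pattern = (9, 6)                                   # inner corners, illustrative
    square = 0.05                                      # square size in meters, illustrative
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    def collect(images):
        # Detect board corners in a list of grayscale views and gather correspondences.
        obj_pts, img_pts, size = [], [], None
        for img in images:
            ok, corners = cv2.findChessboardCorners(img, pattern)
            if ok:
                obj_pts.append(objp)
                img_pts.append(corners)
                size = img.shape[::-1]
        return obj_pts, img_pts, size

    # Intrinsics per camera, then RGB-thermal extrinsics, assuming both cameras saw
    # the same board poses (rgb_views / thermal_views are synchronized image lists):
    # obj, rgb_pts, sz = collect(rgb_views)
    # _, th_pts, _ = collect(thermal_views)
    # _, K_rgb, d_rgb, _, _ = cv2.calibrateCamera(obj, rgb_pts, sz, None, None)
    # _, K_th, d_th, _, _ = cv2.calibrateCamera(obj, th_pts, sz, None, None)
    # _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(obj, rgb_pts, th_pts,
    #                                                 K_rgb, d_rgb, K_th, d_th, sz)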


IEEE Transactions on Intelligent Transportation Systems | 2018

KAIST Multi-Spectral Day/Night Data Set for Autonomous and Assisted Driving

Yukyung Choi; Namil Kim; Soonmin Hwang; Kibaek Park; Jae Shin Yoon; Kyounghwan An; In So Kweon

We introduce the KAIST multi-spectral data set, which covers a wide range of drivable regions, from urban to residential, for autonomous systems. Our data set provides different perspectives of the world captured in coarse time slots (day and night) as well as fine time slots (sunrise, morning, afternoon, sunset, night, and dawn). For all-day perception in autonomous systems, we propose the use of a different spectral sensor, i.e., a thermal imaging camera. Toward this goal, we develop a multi-sensor platform that supports a co-aligned RGB/thermal camera, an RGB stereo pair, a 3-D LiDAR, and inertial sensors (GPS/IMU), together with a related calibration technique. We design a wide range of visual perception tasks, including object detection, drivable region detection, localization, image enhancement, depth estimation, and colorization, using single- and multi-spectral approaches. In this paper, we describe our benchmark along with the recording platform, data format, development toolkits, and lessons learned while capturing the data sets.
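
A hypothetical loader sketch for co-aligned RGB/thermal frame pairs. The directory layout and file naming below are assumptions for illustration only, not the data set's published format; the official development toolkit defines the real one.

    import os
    import cv2

    def load_pair(root, sequence, frame_id):
        # Assumed layout: <root>/<sequence>/{rgb,thermal}/<frame_id>.png
        rgb_path = os.path.join(root, sequence, "rgb", f"{frame_id:06d}.png")
        thermal_path = os.path.join(root, sequence, "thermal", f"{frame_id:06d}.png")
        rgb = cv2.imread(rgb_path, cv2.IMREAD_COLOR)
        thermal = cv2.imread(thermal_path, cv2.IMREAD_GRAYSCALE)
        return rgb, thermal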


International Conference on Ubiquitous Robots and Ambient Intelligence | 2015

Low-Cost Synchronization for Multispectral Cameras

Soonmin Hwang; Yukyung Choi; Namil Kim; Kibaek Park; Jae Shin Yoon; In So Kweon

In this paper, we introduce a low-cost multi-camera synchronization approach. Our system is inexpensive to build, easy to handle, and convenient to use. The proposed system can be employed with various single- and multi-spectral cameras, and with any device that supports an external trigger. As a result, our system shows good performance compared with hand-eye synchronization, and we also show that the synchronized images are accurate enough for use in ADAS applications.
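
The trigger hardware itself is not reproduced here; the sketch below only illustrates the software-side check one might run on such a system, pairing frames from two cameras by nearest timestamp and measuring the residual offset. Timestamps are assumed to be sorted and in seconds, and the tolerance is an arbitrary placeholder.

    import numpy as np

    def pair_by_timestamp(ts_a, ts_b, max_gap=0.005):
        # Match each frame in stream A to the nearest frame in stream B.
        ts_a, ts_b = np.asarray(ts_a), np.asarray(ts_b)
        idx = np.searchsorted(ts_b, ts_a)
        idx = np.clip(idx, 1, len(ts_b) - 1)
        left, right = ts_b[idx - 1], ts_b[idx]
        nearest = np.where(np.abs(ts_a - left) < np.abs(ts_a - right), idx - 1, idx)
        gap = np.abs(ts_a - ts_b[nearest])
        return nearest, gap, gap < max_gap        # per-frame sync error and validity flag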


International Conference on Image Processing | 2015

Artrieval: Painting retrieval without expert knowledge

Namil Kim; Yukyung Choi; Soonmin Hwang; In So Kweon

As people become more interested in paintings, various user-interactive search systems have been presented in recent years. Many systems require users to search for paintings using prior knowledge about them. We identify a limitation of existing methods, namely how well the query can be represented by the user, and propose a simple yet effective way to search for a painting by exploiting color to express human visual memory. To achieve this goal, we propose color clustering based on human color perception and hierarchical metric learning to accommodate the locality of colors. Through user-interactive drawing with the learned colors, the user completes an abstract image that resembles the visual memory. We show that our system is easy to use, fast to process, accurate in search, and fully extensible to cover deviation among users.
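
A rough sketch of color-based retrieval in a perceptual color space, where a flat k-means palette and plain L2 histogram distance stand in for the paper's perception-driven clustering and hierarchical metric learning. Palette size and the sampling of training pixels are assumptions.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def build_palette(sample_pixels, n_colors=32):
        # sample_pixels: (N, 3) Lab pixels gathered from a training collection.
        return KMeans(n_clusters=n_colors, n_init=10).fit(sample_pixels)

    def palette_histogram(bgr, palette):
        # Describe an image (or user sketch) by its palette-color distribution.
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab).reshape(-1, 3).astype(np.float32)
        labels = palette.predict(lab)
        hist = np.bincount(labels, minlength=palette.n_clusters).astype(np.float32)
        return hist / hist.sum()

    def retrieve(query_hist, db_hists, top_k=5):
        d = np.linalg.norm(db_hists - query_hist, axis=1)   # distance to each database painting
        return np.argsort(d)[:top_k]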


Asian Conference on Computer Vision | 2014

A Two Phase Approach for Pedestrian Detection

Soonmin Hwang; Tae-Hyun Oh; In So Kweon

Most current pedestrian detectors pursue a high detection rate without carefully considering the sample distribution. In this paper, we argue that the following characteristics must be considered: (1) large intra-class variation of pedestrians (multi-modality), and (2) data imbalance between positives and negatives. Pedestrian detection can be regarded as a needle-in-a-haystack problem (rare class detection). Inspired by a rare-class detection technique, we propose a two-phase classifier that integrates an existing baseline detector with a hard-negative expert, conquering recall and precision separately. The main idea behind the hard-negative expert is to reduce the sample space to be learned so that informative decision boundaries can be learned effectively. The multi-modality problem is handled by a simple variant of LDA-based random forests serving as the hard-negative expert. We optimally integrate the two models using learned integration rules. By virtue of the two-phase structure, our method achieves competitive performance with only a little additional computation. Our approach achieves a 38.44% mean miss rate on the reasonable setting of the Caltech Pedestrian Benchmark.
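
An illustrative two-stage cascade capturing the general idea of a high-recall first phase followed by an expert that rejects hard negatives. The classifiers, threshold, and integration-by-cascading below are placeholders, not the paper's detector, its LDA-based random forest variant, or its learned integration rules.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def fit_two_phase(X, y, recall_threshold=0.2):
        baseline = GradientBoostingClassifier().fit(X, y)
        scores = baseline.predict_proba(X)[:, 1]
        kept = scores >= recall_threshold                  # low threshold keeps recall high
        expert = LinearDiscriminantAnalysis().fit(X[kept], y[kept])  # learn only on survivors
        return baseline, expert

    def predict_two_phase(baseline, expert, X, recall_threshold=0.2):
        s1 = baseline.predict_proba(X)[:, 1]
        out = np.zeros(len(X), dtype=int)
        kept = s1 >= recall_threshold
        if kept.any():
            out[kept] = expert.predict(X[kept])            # second phase rejects hard negatives
        return out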


International Conference on Control, Automation and Systems | 2013

Evaluation of vocabulary trees for localization in robot applications

Soonmin Hwang; Chaehoon Park; Yukyung Choi; Donggeun Yoo; In So Kweon

Vocabulary-tree-based place recognition is widely used in topological localization, and various applications of it have been proposed during the past decade. However, the bag-of-words representations from a vocabulary tree trained on fixed training data are difficult to optimize for dynamic environments. To address this problem, an adaptive vocabulary tree has been proposed, but there has been no comparison that considers its adaptive properties relative to the conventional vocabulary tree. This paper provides a performance evaluation of the vocabulary tree and the adaptive vocabulary tree in dynamic scenes. The analysis provides guidance for choosing an appropriate vocabulary in robot applications.
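
A compact bag-of-visual-words sketch for context: a flat k-means vocabulary with tf-idf weighting stands in for the hierarchical vocabulary tree discussed above, and the vocabulary size is an arbitrary assumption.

    import numpy as np
    from sklearn.cluster import KMeans

    def train_vocabulary(descriptors, n_words=1000):
        # descriptors: (N, D) local feature descriptors pooled from training images.
        return KMeans(n_clusters=n_words, n_init=4).fit(descriptors)

    def compute_idf(word_histograms):
        # Inverse document frequency over the database images.
        df = (np.stack(word_histograms) > 0).sum(axis=0) + 1
        return np.log(len(word_histograms) / df)

    def bow_vector(descriptors, vocab, idf=None):
        words = vocab.predict(descriptors)
        hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float32)
        if idf is not None:
            hist *= idf                                    # tf-idf weighting
        n = np.linalg.norm(hist)
        return hist / n if n > 0 else hist

    # Place recognition then scores database images by cosine similarity of BoW vectors.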

Collaboration


Dive into Soonmin Hwang's collaborations.

Top Co-Authors


Kyounghwan An

Electronics and Telecommunications Research Institute
