Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sang Chul Ahn is active.

Publication


Featured research published by Sang Chul Ahn.


International Conference on Future Generation Communication and Networking | 2008

Activity Recognition Using Wearable Sensors for Elder Care

Yu-Jin Hong; Ig-Jae Kim; Sang Chul Ahn; Hyoung-Gon Kim

We propose a novel method to recognize a user's activities of daily living with accelerometers and an RFID sensor. Two wireless accelerometers are used to classify five human body states with a decision tree, and the detection of RFID-tagged objects during hand movement provides additional object-related hand motion information. To do this, we used Bluetooth-based wireless triaxial accelerometers and iGrabber, a glove-type RFID reader. Our experiments show that our method is applicable to a real environment with strong confidence.
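The body-state classification step can be sketched as follows. This is a minimal illustration only: the features (mean and variance of acceleration magnitude), the thresholds, and the tree shape are assumptions, not the paper's actual trained model.

```python
# Illustrative sketch: classifying body states from windows of tri-axial
# accelerometer samples with a hand-written decision tree. The features
# and thresholds here are hypothetical placeholders.
import math
import statistics

def features(window):
    """Mean and variance of acceleration magnitude over a sample window."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    return statistics.mean(mags), statistics.pvariance(mags)

def classify(window):
    """Toy decision tree over the two features; the labels mimic the
    paper's five body states."""
    mean_mag, var_mag = features(window)
    if var_mag < 0.05:                 # nearly static posture
        if mean_mag < 0.5:
            return "lying"
        return "standing" if mean_mag > 0.9 else "sitting"
    if var_mag < 1.0:                  # moderate, periodic motion
        return "walking"
    return "running"                   # high-energy motion

# A quiet window with gravity along z (about 1 g) reads as standing.
still = [(0.0, 0.0, 1.0)] * 20
print(classify(still))  # -> standing
```

In the paper the tree is learned from labeled data; hard-coding the splits here just keeps the example self-contained.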


Simulation Modelling Practice and Theory | 2010

Mobile health monitoring system based on activity recognition using accelerometer

Yu-Jin Hong; Ig-Jae Kim; Sang Chul Ahn; Hyoung-Gon Kim

We propose a new method to recognize a user's activities of daily living with accelerometers and an RFID sensor. Two wireless accelerometers are used to classify five human body states with a decision tree, and the detection of RFID-tagged objects during hand movements provides additional instrumental activity information. In addition, we apply our activity recognition module to a health monitoring system. We derive a linear regression for each activity by finding the correlation between the attached accelerometers and the expended calories calculated from a gas exchange analyzer under different activities. As a result, we can predict expended calories more efficiently with only the accelerometer sensor, depending on the recognized activity. We implement the proposed health monitoring module on smartphones for better practical use.
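The per-activity calorie model amounts to fitting one least-squares line per recognized activity. A minimal sketch, with made-up calibration points (the paper's actual data come from a gas exchange analyzer):

```python
# Ordinary least-squares fit of kcal against accelerometer counts for one
# activity. The (counts, kcal) pairs below are invented for illustration.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration data for one activity, e.g. "walking".
counts = [100, 200, 300, 400]
kcal   = [1.2, 2.1, 3.2, 4.1]
slope, intercept = fit_line(counts, kcal)

# Predict expended calories from accelerometer counts alone.
print(round(slope * 250 + intercept, 2))
```

At prediction time, the recognized activity selects which fitted (slope, intercept) pair to apply.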


IEEE International Conference on Automatic Face and Gesture Recognition | 1998

Object oriented face detection using range and color information

Sanghoon Kim; Nam-Kyu Kim; Sang Chul Ahn; Hyoung-Gon Kim

This paper proposes an object-oriented face detection method using range and color information. Objects are segmented from the background using a stereo disparity histogram that represents the range information of the objects. A matching pixel count (MPC) disparity measure is introduced to enhance matching accuracy and remove the effect of unexpected noise in boundary regions. For a high-performance implementation of the MPC disparity histogram, redundant operations inherent in the area-based search are removed. To detect facial regions among the segmented objects, a skin-color transform technique is used with a generalized face color distribution (GFCD) modeled by a 2D Gaussian function in a normalized color space. Using the GFCD, the input color image can be transformed into a gray-level image that enhances only facial color components. To detect facial information only in the defined range, the results from range segmentation and the color transform are combined effectively. Experimental results show that the proposed algorithm works well in various environments with multiple human subjects. Moreover, the processing time for a test image does not exceed 2 seconds on general-purpose workstations. The range information of the objects can also be useful in MPEG-4, where natural and synthetic images can be mixed and synthesized.
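The GFCD-style skin-color transform can be sketched per pixel: map RGB to normalized (r, g) chromaticity and score it with a 2D Gaussian. The mean and variances below are placeholder values, not the paper's fitted model, and a diagonal covariance is assumed for simplicity.

```python
# Sketch of a skin-color transform in normalized (r, g) space. The
# Gaussian parameters are illustrative assumptions only.
import math

MEAN_R, MEAN_G = 0.45, 0.31    # assumed skin chromaticity centre
VAR_R, VAR_G = 0.0025, 0.0015  # assumed (diagonal) variances

def skin_likelihood(rgb):
    """Map an (R, G, B) pixel to a 0..1 skin-likeness score."""
    r8, g8, b8 = rgb
    s = r8 + g8 + b8
    if s == 0:
        return 0.0
    r, g = r8 / s, g8 / s      # chromaticity: intensity factored out
    d = (r - MEAN_R) ** 2 / VAR_R + (g - MEAN_G) ** 2 / VAR_G
    return math.exp(-0.5 * d)

# A skin-like pixel scores near 1; a saturated blue pixel near 0.
print(skin_likelihood((200, 140, 105)) > skin_likelihood((0, 0, 255)))
```

Applying this score to every pixel yields the gray-level image described in the abstract, which is then intersected with the range segmentation.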


International Conference on Robotics and Automation | 2005

UPnP Approach for Robot Middleware

Sang Chul Ahn; Jin Hak Kim; Kiwoong Lim; Heedong Ko; Yong-Moo Kwon; Hyoung-Gon Kim

This paper presents an approach that utilizes UPnP as middleware for robots. It describes the advantages of UPnP by comparing it with TAO CORBA, which was used in a few robot development projects. To reflect a realistic situation, we select a sample robot architecture and examine the suitability of UPnP as robot middleware within that architecture. This paper shows how the UPnP architecture can be applied to building a robot with respect to software architecture, message mapping, real-time behavior, network selection, performance, memory footprint, and deployment issues.
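One UPnP advantage cited in this line of work is automatic service discovery, which is carried out with SSDP. The sketch below only builds the standard SSDP M-SEARCH request (the multicast UDP send is omitted); the search target and MX wait time are standard SSDP fields, not anything specific to the paper.

```python
# UPnP discovery starts with an SSDP M-SEARCH request multicast to
# 239.255.255.250:1900. This sketch builds and prints the request text.

def build_msearch(search_target="ssdp:all", mx=2):
    """Build the SSDP M-SEARCH request used for UPnP device discovery."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        "HOST: 239.255.255.250:1900\r\n"
        'MAN: "ssdp:discover"\r\n'
        f"MX: {mx}\r\n"               # max seconds devices may wait
        f"ST: {search_target}\r\n"    # search target, e.g. ssdp:all
        "\r\n"
    )

print(build_msearch())
```

A robot component exposing itself as a UPnP device would answer such a request with its description URL, which is what makes plug-and-play composition of robot services possible.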


Intelligent Robots and Systems | 2006

Requirements to UPnP for Robot Middleware

Sang Chul Ahn; Jung-Woo Lee; Kiwoong Lim; Heedong Ko; Yong-Moo Kwon; Hyoung-Gon Kim

UPnP (Universal Plug and Play) defines an architecture for pervasive peer-to-peer network connectivity of intelligent appliances. It shares a service-oriented architecture with emerging Web service technology and has many advantages for future robot middleware, such as automatic discovery of services and accommodation of dynamic distributed computing environments. However, UPnP needs some additional features to serve as robot middleware. This paper discusses these features and presents requirements for developing a UPnP SDK for robot middleware as well. It also presents experimental results of applying UPnP to robot components.


Computer Vision and Pattern Recognition | 2015

Generalized Deformable Spatial Pyramid: Geometry-preserving dense correspondence estimation

Junhwa Hur; Hwasup Lim; Changsoo Park; Sang Chul Ahn

We present a Generalized Deformable Spatial Pyramid (GDSP) matching algorithm for calculating dense correspondence between a pair of images with large appearance variations. The main challenges of the problem generally originate in appearance dissimilarities and geometric variations between the images. To address these challenges, we improve the existing Deformable Spatial Pyramid (DSP) [10] model by generalizing the search space and redesigning the spatial smoothness term. The former is extended with rotations and scales, and the latter simultaneously considers dependencies between high-dimensional labels through the pyramid structure. Our spatial regularization in the high-dimensional space enables the model to effectively preserve the meaningful geometry of objects in the input images while allowing a wide range of geometric variations such as perspective transforms and non-rigid deformations. Experimental results on public datasets and challenging scenarios show that our method outperforms state-of-the-art methods both qualitatively and quantitatively.


Virtual Reality Continuum and Its Applications in Industry | 2004

Real time 3D avatar for interactive mixed reality

Sang Yup Lee; Ig-Jae Kim; Sang Chul Ahn; Heedong Ko; Myo-Taeg Lim; Hyoung-Gon Kim

This paper presents real-time reconstruction of a dynamic 3D avatar for interactive mixed reality. In computer graphics, one of the main goals is the combination of virtual scenes with real-world scenes. However, views of real-world objects are often restricted to the views from the cameras. True navigation through such mixed reality scenes becomes impossible unless the components from real objects can be rendered from arbitrary viewpoints. Additionally, adding a real-world object to a virtual scene requires some depth information in order to handle interaction. The proposed algorithm generates 3D video avatars and augments them naturally into a 3D virtual environment using calibrated camera parameters and silhouette information. As a result, we can create photo-realistic live avatars from natural scenes, and the resulting 3D live avatar can guide and interact with participants in the VR space.


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2008

Automatic Lifelog media annotation based on heterogeneous sensor fusion

Ig-Jae Kim; Sang Chul Ahn; Heedong Ko; Hyoung-Gon Kim

A personal lifelog media system involves capturing a great amount of personal experience in the form of digital multimedia over an entire lifespan. However, the usefulness of these data is limited by the lack of adequate methods for accessing and indexing such a large database. It is important to manage the data systematically so that users can efficiently retrieve useful experiences whenever they need them. In this paper, we focus on how to create metadata, the core of a systematic approach, by fusing data from a set of heterogeneous sensors. With this metadata, our system helps users find their life history efficiently.
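At its simplest, heterogeneous sensor fusion for lifelog annotation groups timestamped readings into per-interval metadata records that index the captured media. The sketch below is a toy illustration under that assumption; the sensor names, fields, and interval size are invented, not the paper's schema.

```python
# Toy timestamp-based fusion: group (timestamp, sensor, value) events
# from heterogeneous sensors into per-interval metadata records.
from collections import defaultdict

def fuse(events, interval=60):
    """Bucket events into metadata records keyed by interval start time."""
    records = defaultdict(dict)
    for ts, sensor, value in events:
        records[ts // interval * interval][sensor] = value
    return dict(records)

events = [
    (12, "gps", (37.57, 126.98)),        # hypothetical location fix
    (30, "accelerometer", "walking"),    # recognized activity label
    (75, "accelerometer", "sitting"),
]
meta = fuse(events)
print(meta[0]["accelerometer"])  # -> walking
```

Each record can then be attached to the media captured in the same interval, which is what makes activity- or location-based retrieval possible.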


Proceedings of the 3rd ACM Workshop on Continuous Archival and Retrieval of Personal Experiences | 2006

PERSONE: personalized experience recoding and searching on networked environment

Ig-Jae Kim; Sang Chul Ahn; Heedong Ko; Hyoung-Gon Kim

In this paper we present a new system for the creation and efficient retrieval of personal lifelog media (P-LLM) in a networked environment. Personal lifelog media data include audiovisual recordings of the user's experiences and additional data from intelligent gadgets incorporating multimodal sensors, such as GPS, 3D accelerometers, physiological reaction sensors, and environmental sensors. Our system is web-based and provides a spatiotemporal graphical user interface and a tree-based activity search environment, so that users can access it easily and query it intuitively. Our learning-based activity classification technique makes it easier to classify the user's activity from multimodal sensor data. Finally, the proposed system provides a user-centered service with individual activity registration and classification for each user.


Society of Instrument and Control Engineers of Japan | 2006

Indoor Modeling for Interactive Robot Service

Sangwoo Jo; Qonita M. Shahab; Yong-moo Kwon; Sang Chul Ahn

This paper presents a simple, easy-to-use method for obtaining a 3D textured model. To convey a sense of reality, the 3D models must be integrated with real scenes. Most other 3D modeling methods rely on two data-acquisition devices: a 2D laser range-finder and a conventional camera. Our algorithm builds a measurement-based 2D metric map acquired by the laser range-finder, performs texture acquisition and stitching, and maps the textures onto the corresponding 3D model. Our geometric 3D model consists of planes that model the floor and walls, with the geometry of the planes extracted from the 2D metric map data. Textures for the floor and walls are generated from images captured by two IEEE 1394 cameras with wide fields of view. Image stitching and cutting are used to generate textured images corresponding to the 3D model. The generated 3D map of the indoor environment is exported in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system over the Web. The target terminals for the environment model in interactive robot service are PCs, PDAs, and mobile phones; because of device resource limitations, the 3D model currently targets the PC environment.
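The geometry step described above, turning 2D wall segments from the metric map into vertical 3D planes, can be sketched as a simple extrusion. The segment endpoints and wall height below are made-up values for illustration.

```python
# Sketch: extrude wall line segments from a 2D metric map into vertical
# 3D wall quads; the floor is a plane at z = 0. Values are illustrative.

WALL_HEIGHT = 2.5  # assumed ceiling height in metres

def extrude_wall(p0, p1, height=WALL_HEIGHT):
    """Turn a 2D segment (x, y) -> (x, y) into the four corners of a
    vertical wall rectangle, ordered as a single mesh face."""
    (x0, y0), (x1, y1) = p0, p1
    return [(x0, y0, 0.0), (x1, y1, 0.0),
            (x1, y1, height), (x0, y0, height)]

# One wall of a 4 m corridor segment taken from the 2D map.
quad = extrude_wall((0.0, 0.0), (4.0, 0.0))
print(quad)
```

Each quad, paired with its stitched texture image, would correspond to one textured face in the exported VRML model.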

Collaboration


Dive into Sang Chul Ahn's collaborations.

Top Co-Authors

Hyoung-Gon Kim (Korea Institute of Science and Technology)
Hwasup Lim (Korea Institute of Science and Technology)
Hyoung Gon Kim (Korea Institute of Science and Technology)
Ig Jae Kim (Korea Institute of Science and Technology)
Jaewon Kim (Korea Institute of Science and Technology)
Seong-Oh Lee (Korea Institute of Science and Technology)
Jae-In Hwang (Korea Institute of Science and Technology)