Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Bumhwi Kim is active.

Publication


Featured research published by Bumhwi Kim.


Intelligent Data Engineering and Automated Learning | 2008

Improving AdaBoost Based Face Detection Using Face-Color Preferable Selective Attention

Bumhwi Kim; Sang-Woo Ban; Minho Lee

In this paper, we propose a new face detection model that combines the conventional AdaBoost algorithm for human face detection with biologically motivated face-color preferable selective attention. The selective attention model localizes face candidate regions in a natural scene, and the AdaBoost-based face detection process then works only on those localized candidate areas to check whether they contain a human face. The proposed model not only improves face detection performance by avoiding mis-localization of faces induced by complex backgrounds, such as face-like non-face areas, but also enhances face detection speed by reducing the regions of interest through the face-color preferable selective attention model. The experimental results show that the proposed model achieves plausible performance for localizing faces in real time.
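The two-stage idea above can be sketched in a few lines, assuming a NumPy YCbCr image and a caller-supplied `detector` standing in for the AdaBoost cascade; the Cb/Cr skin bounds below are widely used defaults, not values taken from the paper:

```python
import numpy as np

def face_color_candidates(ycbcr, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary mask of skin-colored pixels in an H x W x 3 YCbCr image.

    The Cb/Cr bounds are common skin-color ranges, not the paper's values.
    """
    cb, cr = ycbcr[..., 1].astype(int), ycbcr[..., 2].astype(int)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def detect_in_candidates(ycbcr, detector):
    """Run `detector` (a stand-in for the AdaBoost cascade) only inside
    the bounding box of skin-colored pixels; skip detection entirely
    when no candidate region exists."""
    mask = face_color_candidates(ycbcr)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                      # no skin-colored region at all
    x0, x1 = xs.min(), xs.max() + 1      # one box around all candidates keeps
    y0, y1 = ys.min(), ys.max() + 1      # the sketch short; the paper localizes
    return detector(ycbcr[y0:y1, x0:x1]), (x0, y0, x1, y1)
```

The speed-up comes from the early return and the reduced crop: the (expensive) cascade never scans background pixels that the color model has already rejected.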


Neurocomputing | 2011

Growing fuzzy topology adaptive resonance theory models with a push-pull learning algorithm

Bumhwi Kim; Sang-Woo Ban; Minho Lee

A new incrementally growing neural network model, called the growing fuzzy topology ART (GFTART) model, is proposed by integrating the conventional fuzzy ART model with the incremental topology-preserving mechanism of the growing cell structure (GCS) model, together with a new training algorithm called the push-pull learning algorithm. The proposed GFTART model has two purposes: first, to reduce the proliferation of incrementally generated nodes in the F2 layer of the conventional fuzzy ART model by replacing each F2 node with a GCS; and second, to enhance the class-dependent clustering representation ability of the GCS model by including the categorization property of the conventional fuzzy ART model. In addition, the proposed push-pull training algorithm enhances the cluster-discriminating property and partially alleviates the forgetting problem of the training algorithm in the GCS model.
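A minimal sketch of the push-pull idea on cluster prototypes follows; the exact GFTART-specific rule differs, and the function name, distance measure, and learning rate here are illustrative:

```python
import numpy as np

def push_pull_update(weights, labels, x, x_label, lr=0.1):
    """One push-pull step on a set of prototype vectors.

    weights: (n_nodes, dim) prototypes; labels: class label of each node.
    The nearest node is pulled toward x when its class matches (pull),
    and pushed away from x otherwise (push), sharpening class boundaries.
    """
    d = np.linalg.norm(weights - x, axis=1)
    w = int(np.argmin(d))                          # winning node
    sign = 1.0 if labels[w] == x_label else -1.0   # pull (+) or push (-)
    weights[w] += sign * lr * (x - weights[w])
    return w
```

The push step is what discriminates this from plain competitive learning: a winner of the wrong class retreats from the input, which is also how the rule partially counteracts forgetting of earlier class boundaries.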


International Symposium on Neural Networks | 2010

Top-down visual selective attention model combined with bottom-up saliency map for incremental object perception

Sang-Woo Ban; Bumhwi Kim; Minho Lee

Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism and a selective attention function. In this paper, we propose a new top-down attention model based on the human visual attention mechanism, which considers both relative-feature-based bottom-up saliency and goal-oriented top-down attention. The proposed model can generate top-down bias signals for the form and color features of a specific object, which draw attention toward a desired object through an incremental learning mechanism together with an object feature representation scheme. A growing fuzzy topology adaptive resonance theory (GFTART) model is proposed by adapting a growing cell structure (GCS) unit into a conventional fuzzy ART, by which the proliferation problem of the conventional fuzzy ART can be alleviated. The proposed GFTART plays two important roles in object color and form biased attention: one is to incrementally learn and memorize the color and form features of arbitrary objects, and the other is to generate a top-down bias signal for selectively attending to a target object. Experimental results show that the proposed model successfully focuses on given target objects, as well as incrementally perceiving arbitrary objects in natural scenes.
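Combining bottom-up saliency with a top-down feature bias might be sketched as multiplicative modulation, assuming precomputed per-feature conspicuity maps; this is an illustration of the general scheme, not the paper's exact formulation:

```python
import numpy as np

def biased_saliency(bottom_up, feature_maps, bias):
    """Modulate a bottom-up saliency map with top-down feature biases.

    bottom_up: (H, W) bottom-up saliency map.
    feature_maps: dict name -> (H, W) feature conspicuity map.
    bias: dict name -> weight learned for the target object
          (hypothetical API, standing in for the GFTART bias signal).
    """
    top_down = sum(bias[k] * feature_maps[k] for k in bias)
    s = bottom_up * (1.0 + top_down)     # modulate, don't replace, bottom-up
    return s / (s.max() + 1e-12)         # peak-normalize to [0, 1]
```

Multiplying rather than replacing keeps bottom-up conspicuity in the loop, so unexpected but highly salient stimuli can still win attention even during a goal-directed search.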


International Conference on Neural Information Processing | 2011

Implementation of Visual Attention System Using Artificial Retina Chip and Bottom-Up Saliency Map Model

Bumhwi Kim; Hirotsugu Okuno; Tetsuya Yagi; Minho Lee

This paper proposes a new hardware system for visual selective attention, in which a neuromorphic silicon retina chip is used as an input camera and a bottom-up saliency map model is implemented on a field-programmable gate array (FPGA) device. The proposed system mimics the roles of retina cells, V1 cells, and parts of the lateral inferior parietal lobe (LIP), namely edge extraction, orientation, and the selective attention response, respectively. The center-surround difference and normalization, which mimic the on-center and off-surround function of the lateral geniculate nucleus (LGN), are implemented on the FPGA. The artificial retina chip integrated with the FPGA successfully produces a human-like visual attention function with small computational overhead. To make the system suitable for mobile robotic vision, it aims at low power dissipation and compactness. The experimental results show that the proposed system successfully generates saliency information from natural scenes.
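A software stand-in for the center-surround difference and normalization stage might look as follows; box filters approximate the fine (center) and coarse (surround) scales, and the kernel sizes are illustrative rather than taken from the FPGA design:

```python
import numpy as np

def box_blur(img, k):
    """Mean filter with a k x k box (k odd), edge-padded."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def center_surround(img, center_k=3, surround_k=7):
    """On-center/off-surround response: fine scale minus coarse scale,
    half-wave rectified, then peak-normalized."""
    diff = np.maximum(box_blur(img, center_k) - box_blur(img, surround_k), 0.0)
    return diff / (diff.max() + 1e-12)
```

Box filters rather than Gaussians are a common hardware-friendly choice, since a k x k mean reduces to adds and a shift when k*k is a power of two.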


Neural Networks | 2013

Top-down attention based on object representation and incremental memory for knowledge building and inference

Bumhwi Kim; Sang-Woo Ban; Minho Lee

Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task-specific top-down attention model to locate a target object based on its form and color representation, along with a bottom-up saliency based on the relativity of primitive visual features and some memory modules. In the proposed model, top-down bias signals corresponding to the target form and color features are generated, which draw preferential attention to the desired object by the proposed selective attention model in concomitance with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color and form biased attention: one is to incrementally learn and memorize the color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference, which enables the perception of new unknown objects on the basis of the object form and color features stored in memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects.


Human-Agent Interaction | 2015

Smart Cane: Face Recognition System for Blind

Yongsik Jin; Jonghong Kim; Bumhwi Kim; Rammohan Mallipeddi; Minho Lee

We propose a smart cane with a face recognition system to help blind people recognize human faces. The system detects and recognizes faces around the user, and the result of the detection is conveyed to the blind person through a vibration pattern. The proposed system was designed for real-time use and is equipped with a camera mounted on glasses, a vibration motor attached to the cane, and a mobile computer. The camera attached to the glasses sends images to the mobile computer, which extracts features from the image and then detects the face using AdaBoost. We use the modified census transform (MCT) descriptor for feature extraction. After face detection, the information regarding the detected face image is gathered, and we use compressed sensing with the L2-norm as a classifier. The cane is equipped with a Bluetooth module and receives a person's information from the mobile computer. It then generates vibration patterns unique to each person to inform the blind person of the identity of the person detected by the camera. Hence, blind people can know who is standing in front of them.
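The MCT descriptor itself is simple to sketch: each pixel is encoded by which pixels in its 3 x 3 neighborhood exceed the neighborhood mean, giving a 9-bit code that is robust to monotonic illumination changes. A minimal version (border pixels skipped for brevity; a real implementation pads):

```python
import numpy as np

def mct(img):
    """Modified census transform of a 2-D grayscale image.

    Each interior pixel becomes a 9-bit code: bit i is set when the i-th
    pixel of its 3 x 3 neighborhood (row-major) exceeds the neighborhood
    mean. Output is (H-2) x (W-2).
    """
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3].astype(float)
            bits = (patch > patch.mean()).flatten()
            out[y, x] = sum(int(b) << i for i, b in enumerate(bits))
    return out
```

Because the comparison is against the local mean, uniformly scaling or shifting the brightness of a patch leaves its code unchanged, which is what makes MCT attractive for face detection under varying lighting.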


International Conference on Consumer Electronics | 2014

Embedded face recognition system considers human eye gaze using glass-type platform

Bumhwi Kim; Yonghwa Choi; Minho Lee

In this paper, we propose an embedded system that can detect multiple faces in a scene and select one of them using the eye gaze computed from an eye image in a real-world environment. In the proposed system, the scene and eye images are obtained by a glass-type platform, faces are detected and the eye gaze is calculated by embedded modules, and finally an Android platform receives the face image from the embedded modules and performs the recognition task.


International Conference on Neural Information Processing | 2012

Implementation of face selective attention model on an embedded system

Bumhwi Kim; Hyung-Min Son; Yun-Jung Lee; Minho Lee

This paper proposes a new embedded system that can selectively detect human faces at high speed. The embedded system is developed using the OMAP 3530 application processor, which has a DSP and an ARM core. Since the embedded system has limited CPU performance and memory, we propose a hybrid system that combines YCbCr-based bottom-up selective attention with the conventional AdaBoost algorithm. The proposed method using the bottom-up selective attention model reduces not only the false positive ratio of the AdaBoost-based face detection algorithm but also its time complexity, by finding candidate foreground regions and reducing the regions of interest (ROI) in the image. The experimental results show that the implemented embedded system successfully localizes human faces in real time.


Human-Agent Interaction | 2015

A Glass-type Agent for Human Memory Assistance for Face Recognition

Bumhwi Kim; Jonghong Kim; Rammohan Mallipeddi; Minho Lee

This paper proposes an agent to assist human cognition in memorizing multiple human faces by analyzing the user's eye gaze points. The gaze point, which indicates the direction of sight, is obtained by an infrared camera on a glass-type agent with the help of an embedded module. The gaze information is then combined with the image captured by the frontal camera to identify the location of the face that the user is looking at among several faces. Gaze detection and face selection with tracking are performed on embedded modules attached to the glass-type agent, and recognition of the selected facial images is performed and shown on a mobile computer connected via a wireless network. The major contributions of the proposed work are the use of eye gaze direction to select faces of interest and the provision of information regarding those faces to improve the human ability to recall them.
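Once the face boxes and the gaze point share frontal-camera coordinates, face selection reduces to a containment test. The sketch below assumes that the eye-camera-to-frontal-camera mapping has already been done; the function name and box convention are illustrative:

```python
def select_face(face_boxes, gaze_point):
    """Pick the detected face whose bounding box contains the gaze point.

    face_boxes: list of (x0, y0, x1, y1) boxes in frontal-camera pixels.
    gaze_point: (gx, gy) in the same coordinate frame.
    Returns the index of the selected box, or None if gaze hits no face.
    """
    gx, gy = gaze_point
    for i, (x0, y0, x1, y1) in enumerate(face_boxes):
        if x0 <= gx < x1 and y0 <= gy < y1:
            return i
    return None
```

Returning None when the gaze misses every box lets the pipeline skip recognition entirely on frames where the user is not looking at any face.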


International Conference on Neural Information Processing | 2013

Embedded System for Human Augmented Cognition Based on Face Selective Attention Using Eye Gaze Tracking

Bumhwi Kim; Rammohan Mallipeddi; Minho Lee

This paper proposes a new embedded system that can selectively detect a human face based on eye gaze information and can also incrementally recognize the human subject. The proposed embedded system comprises one glass-type platform, two embedded modules, and one Android platform. The glass-type platform detects the user's eye gaze through an eye camera and human faces through a frontal camera, and selects the face of the preferred human subject based on the gaze information. The Android platform performs face recognition and displays the result. All modules in the system are connected and communicate wirelessly. The experimental results indicate that the proposed system performs reasonably.

Collaboration


Dive into Bumhwi Kim's collaboration.

Top Co-Authors

Minho Lee (Kyungpook National University)
Rammohan Mallipeddi (Kyungpook National University)
Jonghong Kim (Kyungpook National University)
Young-Min Jang (Kyungpook National University)
Sungmoon Jeong (Japan Advanced Institute of Science and Technology)
Amitash Ojha (Kyungpook National University)
Hyung-Min Son (Kyungpook National University)
Minook Kim (Kyungpook National University)