Publication


Featured research published by Seungju Han.


Pattern Recognition | 2009

The correntropy MACE filter

Kyu-Hwa Jeong; Weifeng Liu; Seungju Han; Erion Hasanbelliu; Jose C. Principe

The minimum average correlation energy (MACE) filter is well known for object recognition. This paper proposes a nonlinear extension to the MACE filter using the recently introduced correntropy function. Correntropy is a positive definite function that generalizes the concept of correlation by utilizing second and higher order moments of the signal statistics. Because of its positive definite nature, correntropy induces a new reproducing kernel Hilbert space (RKHS). Taking advantage of the linear structure of this RKHS, it is possible to formulate the MACE filter equations in the RKHS induced by correntropy and obtain an approximate solution. Due to the nonlinear relation between the feature space and the input space, the correntropy MACE (CMACE) can potentially improve upon the MACE performance while preserving the shift-invariant property (additional computation for all shifts will be required in the CMACE). To alleviate the computational complexity of the solution, this paper also presents the fast CMACE using the fast Gauss transform (FGT). We apply the CMACE filter to the MSTAR public release synthetic aperture radar (SAR) data set as well as the PIE database of human faces and show that the proposed method exhibits better distortion tolerance and outperforms the linear MACE in both generalization and rejection abilities.
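
Conceptually, the CMACE filter replaces correlations with sample correntropy values. The sketch below is not the authors' code; the Gaussian kernel width sigma and all names are illustrative. It shows the basic estimator, V_sigma(X, Y) ~ (1/N) * sum_i G_sigma(x_i - y_i), that underlies the construction.

    import numpy as np

    def gaussian_kernel(t, sigma):
        # G_sigma(t) = exp(-t^2 / (2 sigma^2)) / (sqrt(2 pi) sigma)
        return np.exp(-t**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

    def correntropy(x, y, sigma=1.0):
        # Sample estimator: V_sigma(X, Y) ~ (1/N) * sum_i G_sigma(x_i - y_i)
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        return gaussian_kernel(x - y, sigma).mean()

    # Toy example: a template scores higher against a noisy copy of itself
    # than against an unrelated signal.
    rng = np.random.default_rng(0)
    template = rng.standard_normal(256)
    print(correntropy(template, template + 0.1 * rng.standard_normal(256)),
          correntropy(template, rng.standard_normal(256)))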


Human Factors in Computing Systems | 2012

Evaluation of human tangential force input performance

Bhoram Lee; Hyun-Jeong Lee; Soo Chul Lim; Hyung-Kew Lee; Seungju Han; Joonah Park

While interacting with mobile devices, users may press against touch screens and also exert tangential force on the display in a sliding manner. We seek to guide UI design based on the tangential force applied by a user to the surface of a hand-held device. A prototype of an interface using tangential force input was implemented with a force-sensitive layer and an elastic layer and used in the user experiment. We investigated users' ability to reach and maintain target force levels and considered the effects of hand pose and direction of force input. Our results indicate no significant difference in performance between applying force while holding the device in one hand and in two hands. We also observed that users experience greater physical and perceived load when applying tangential force in the left-right direction than in the up-down direction. Based on the experimental results, we discuss considerations for user interface applications of tangential-force-based interfaces.


Human Factors in Computing Systems | 2012

A study on touch & hover based interaction for zooming

Seungju Han; Joonah Park

Proximity is a useful medium for interaction with highly interactive digital content. It can be used in different contexts, such as navigating through depth in 3D space in zoomable interfaces. In this paper, we propose hover-based zoom interaction as an alternative to multi-touch-based zoom interaction, such as expanding/pinching to zoom. It allows users to work rapidly and intuitively at multiple levels of zooming views as their fingertip hovers over the surface. We evaluated our technique in the context of target search and found that hover-based zoom interaction significantly outperforms the conventional touch-based zoom interaction and touch/hover-based zoom interaction in both objective and subjective measurements: users found targets more than twice as fast as with the conventional touch-based zoom interaction in our experiment.


International Conference on Independent Component Analysis and Signal Separation | 2006

Estimating the information potential with the fast Gauss transform

Seungju Han; Sudhir Rao; Jose C. Principe

In this paper, we propose a fast and accurate approximation to the information potential of Information Theoretic Learning (ITL) using the Fast Gauss Transform (FGT). We exemplify here the case of the Minimum Error Entropy criterion to train adaptive systems. The FGT reduces the complexity of the estimation from O(N^2) to O(pkN), where p is the order of the Hermite approximation and k is the number of clusters utilized in the FGT. Further, we show that the FGT converges to the actual entropy value rapidly with increasing order p, unlike the Stochastic Information Gradient, the present O(pN) approximation used to reduce the computational complexity in ITL. We test the performance of these FGT methods on system identification with encouraging results.
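
For reference, the quantity being approximated is the quadratic information potential, V(E) = (1/N^2) * sum_i sum_j G_{sigma*sqrt(2)}(e_i - e_j). A minimal direct O(N^2) estimator is sketched below (illustrative names and kernel width, not the paper's code); the FGT replaces this double sum with a truncated Hermite expansion evaluated per cluster.

    import numpy as np

    def information_potential_direct(e, sigma=1.0):
        # Direct double-sum estimator of V(E); the FGT approximates the same
        # quantity cluster-by-cluster in O(p*k*N) instead of O(N^2).
        e = np.asarray(e, dtype=float)
        s = sigma * np.sqrt(2.0)                 # kernel width for pairwise differences
        diffs = e[:, None] - e[None, :]          # all pairwise differences e_i - e_j
        g = np.exp(-diffs**2 / (2.0 * s**2)) / (np.sqrt(2.0 * np.pi) * s)
        return g.mean()

    # Example: a tightly concentrated error signal has a larger information
    # potential (smaller Renyi quadratic entropy) than a spread-out one.
    rng = np.random.default_rng(0)
    print(information_potential_direct(0.1 * rng.standard_normal(500), sigma=0.5),
          information_potential_direct(1.0 * rng.standard_normal(500), sigma=0.5))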


International Conference on Acoustics, Speech, and Signal Processing | 2006

A Normalized Minimum Error Entropy Stochastic Algorithm

Seungju Han; Sudhir Rao; Kyu-Hwa Jeong; Jose C. Principe

We propose in this paper the normalized minimum error entropy (NMEE) algorithm. Following the same rationale that led to the normalized LMS, the weight update of minimum error entropy (MEE) is constrained by the principle of minimum disturbance. Unexpectedly, we obtained an algorithm that is not only insensitive to the power of the input, but is also faster than the MEE for the same misadjustment and less sensitive to the kernel size. We explain these results analytically and through system identification simulations.
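
As a rough illustration, the sketch below performs one windowed stochastic MEE update for a linear filter and adds an NLMS-style normalizer. The normalizer shown (kernel-weighted input-difference power) is an assumption made for illustration; the paper derives its normalization from the minimum-disturbance constraint and may use a different form. All names, the window length, and the step size are likewise illustrative.

    import numpy as np

    def nmee_step(w, U, d, sigma=1.0, mu=0.5, eps=1e-8):
        # One windowed stochastic MEE update for a linear filter w, with an
        # NLMS-style normalizer. The normalizer below is an assumed form, not
        # necessarily the one derived in the paper.
        L = len(d)
        e = d - U @ w                                # errors over the window
        de = e[:, None] - e[None, :]                 # pairwise error differences
        du = U[:, None, :] - U[None, :, :]           # pairwise input differences
        gk = np.exp(-de**2 / (2.0 * sigma**2))       # Gaussian kernel on error differences
        # Ascent direction on the error information potential.
        grad = ((gk * de)[:, :, None] * du).sum(axis=(0, 1)) / (sigma**2 * L**2)
        # Assumed NLMS-style normalizer: kernel-weighted input-difference power.
        norm = (gk * (du**2).sum(axis=2)).sum() / (sigma**2 * L**2)
        return w + mu * grad / (norm + eps)

    # Toy system identification: the estimate should approach w_true.
    rng = np.random.default_rng(0)
    w_true, w = np.array([0.5, -0.3, 0.1]), np.zeros(3)
    for _ in range(200):
        U = rng.standard_normal((10, 3))
        w = nmee_step(w, U, U @ w_true, sigma=1.0, mu=0.5)
    print(w)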


human factors in computing systems | 2010

Remote interaction for 3D manipulation

Seungju Han; Hyun-Jeong Lee; Joonah Park; Wook Chang; Changyeong Kim

In this paper, we present a two-handed 3D interaction approach for immersive virtual reality applications on a large vertical display. The proposed interaction scheme is based on hybrid motion-sensing technology that tracks the 3D position and orientation of multiple handheld devices. More specifically, the devices have embedded ultrasonic and inertial sensors to accurately identify their position and attitude in the air. The interaction architecture is designed for pointing and object manipulation tasks. Since the sensor system provides only 3D spatial information, we develop an algorithm to precisely track the point of interest produced by the pointing task. For object manipulation, we have carefully assigned one-handed and two-handed interaction schemes to each task: one-handed interaction covers selection and translation, while rotation and scaling are assigned to two-handed interaction. By combining one-handed and two-handed interactions, we believe that the presented system provides users with more intuitive and natural interaction for 3D object manipulation. The feasibility and validity of the proposed method are verified through user tests.


International Conference on Acoustics, Speech, and Signal Processing | 2006

Kernel Based Synthetic Discriminant Function for Object Recognition

Kyu-Hwa Jeong; Puskal P. Pokharel; Jian-Wu Xu; Seungju Han; Jose C. Principe

In this paper, a nonlinear extension to the synthetic discriminant function (SDF) is proposed. The SDF is a well-known 2-D correlation filter for object recognition. The proposed nonlinear version of the SDF is derived from kernel-based learning. The kernel SDF is implemented in a nonlinear high-dimensional space by using the kernel trick, and it can improve the performance of the linear SDF by incorporating the image class's higher-order moments. We show that this kernelized composite correlation filter has an intrinsic connection with the recently proposed correntropy function. We apply this kernel SDF to face recognition, and simulations show that the kernel SDF significantly outperforms the traditional SDF and is robust in noisy data environments.
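
One standard way to kernelize the SDF zero-shift response is sketched below under assumed notation (columns of X are vectorized training images, c holds the desired correlation-peak values, and a Gaussian kernel stands in for whatever kernel the paper uses): the inner products in h = X (X^T X)^{-1} c are replaced with kernel evaluations, giving y(z) = k(z)^T K^{-1} c. This is a sketch of the general technique, not the paper's implementation.

    import numpy as np

    def gaussian_kernel_matrix(A, B, sigma):
        # K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2)) for columns a_i of A, b_j of B.
        sq = (A**2).sum(axis=0)[:, None] + (B**2).sum(axis=0)[None, :] - 2.0 * A.T @ B
        return np.exp(-sq / (2.0 * sigma**2))

    def linear_sdf_response(X, c, z):
        # Zero-shift output of the linear SDF filter h = X (X^T X)^{-1} c.
        h = X @ np.linalg.solve(X.T @ X, c)
        return h @ z

    def kernel_sdf_response(X, c, z, sigma=1.0):
        # Kernelized zero-shift output: y(z) = k(z)^T K^{-1} c, K[i, j] = kappa(x_i, x_j).
        K = gaussian_kernel_matrix(X, X, sigma)
        kz = gaussian_kernel_matrix(X, z[:, None], sigma)[:, 0]
        return kz @ np.linalg.solve(K, c)

    # Example: three vectorized training "images" and a test vector close to the first one.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((64, 3))            # columns are vectorized training images
    c = np.ones(3)                              # desired correlation-peak values
    z = X[:, 0] + 0.05 * rng.standard_normal(64)
    print(linear_sdf_response(X, c, z), kernel_sdf_response(X, c, z, sigma=4.0))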


Signal Processing: Image Communication | 2013

Connecting users to virtual worlds within MPEG-V standardization

Seungju Han; Jae-Joon Han; James D. K. Kim; Chang-Yeong Kim

Virtual worlds such as Second Life and 3D Internet/broadcasting services have become increasingly popular. A life-scale virtual world presentation and intuitive interaction between users and virtual worlds would provide a more natural and immersive experience for users. The emergence of novel interaction technologies, such as facial-expression/body-motion tracking and remote interaction for virtual object manipulation, could be used to provide a strong connection between users in the real world and avatars in the virtual world. For the wide acceptance and use of virtual worlds, the various types of novel interaction devices need a unified interaction format between the real world and the virtual world. Thus, MPEG-V Media Context and Control (ISO/IEC 23005) standardizes such connecting information. This paper provides an overview of MPEG-V and a usage example from the real world to the virtual world (R2V), covering interfaces for controlling avatars and virtual objects in the virtual world by real-world devices. In particular, we investigate how the MPEG-V framework can be applied to facial animation and hand-based 3D manipulation using an intelligent camera. In addition, in order to intuitively manipulate objects in a 3D virtual environment, we present two interaction techniques using motion sensors: a two-handed spatial 3D interaction approach and a gesture-based interaction approach.


International Conference on Acoustics, Speech, and Signal Processing | 2007

The Fast Correntropy MACE Filter

Kyu-Hwa Jeong; Seungju Han; Jose C. Principe

In this paper, we implement the newly introduced correntropy MACE filter using the fast Gauss transform (FGT). The correntropy MACE filter is a nonlinear extension to the MACE filter using the correntropy function in a feature space nonlinearly related to the input. The correntropy MACE outperforms the traditional linear MACE in both generalization and rejection abilities. However, in practice, the drawback of the correntropy MACE filter is its computational complexity. This paper presents a fast version of the correntropy MACE using the FGT and validates the approximation with results on synthetic aperture radar (SAR) image recognition.
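
The core of the FGT is a truncated Hermite expansion of the Gaussian about a cluster center. The sketch below is a hedged, one-dimensional, single-cluster version with illustrative names and parameters; the full FGT used in the paper additionally partitions sources and targets into k clusters to reach the O(pkN) cost mentioned above.

    import numpy as np
    from math import factorial

    def fgt_single_cluster(x, q, y, delta, p=8):
        # Approximates G(y_j) = sum_i q_i * exp(-(y_j - x_i)^2 / delta) with a
        # truncated Hermite expansion of order p about the source centroid.
        x, q, y = np.asarray(x), np.asarray(q), np.asarray(y)
        c = x.mean()                              # expansion center
        s = (x - c) / np.sqrt(delta)              # scaled source offsets
        t = (y - c) / np.sqrt(delta)              # scaled target offsets
        # Moments A_n = (1/n!) * sum_i q_i * s_i^n, n = 0..p-1.
        A = np.array([(q * s**n).sum() / factorial(n) for n in range(p)])
        # Hermite functions h_n(t) = (-1)^n d^n/dt^n exp(-t^2), via the recurrence
        # h_{n+1} = 2 t h_n - 2 n h_{n-1}, with h_0 = exp(-t^2), h_1 = 2 t exp(-t^2).
        H = np.empty((p, len(y)))
        H[0] = np.exp(-t**2)
        if p > 1:
            H[1] = 2.0 * t * H[0]
        for n in range(1, p - 1):
            H[n + 1] = 2.0 * t * H[n] - 2.0 * n * H[n - 1]
        return A @ H

    # Compare against the direct O(N*M) evaluation on a small example.
    rng = np.random.default_rng(0)
    x, y, q = rng.normal(0.0, 0.2, 200), rng.normal(0.0, 0.2, 50), np.ones(200)
    direct = (q[None, :] * np.exp(-(y[:, None] - x[None, :])**2 / 0.5)).sum(axis=1)
    print(np.max(np.abs(direct - fgt_single_cluster(x, q, y, delta=0.5, p=8))))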


Multimedia Signal Processing | 2010

Controlling virtual world by the real world devices with an MPEG-V framework

Seungju Han; Jae-Joon Han; Youngkyoo Hwang; Jung-Bae Kim; Won-chul Bang; James D. K. Kim; Chang-Yeong Kim

Recent online networked virtual worlds such as Second Life, World of Warcraft, and Lineage have become increasingly popular. A life-scale virtual world presentation and intuitive interaction between users and virtual worlds would provide a more natural and immersive experience for users. The emergence of novel interaction technologies, such as sensing the facial expression and motion of users and the real-world environment, could be used to provide a strong connection between them. For the wide acceptance and use of virtual worlds, the various types of novel interaction devices need a unified interaction format between the real world and the virtual world, as well as interoperability among virtual worlds. Thus, MPEG-V Media Context and Control (ISO/IEC 23005) standardizes such connecting information. This paper provides an overview of MPEG-V and a usage example from the real world to the virtual world (R2V), covering interfaces for controlling avatars and virtual objects in the virtual world by real-world devices. In particular, we investigate how the MPEG-V framework can be applied to the facial animation of an avatar in various types of virtual worlds.
