
Publications


Featured research published by Kuo-Chin Lien.


Applied Soft Computing | 2016

A multilevel image thresholding segmentation algorithm based on two-dimensional K-L divergence and modified particle swarm optimization

Xiaoli Zhao; Matthew Turk; Wei Li; Kuo-Chin Lien; Guozhong Wang

Highlights:
- We propose 2D K-L divergence for multilevel image segmentation and derive its formulation as an objective function for multilevel thresholding.
- We propose MPSO, which modifies the location update formula and the global best position of particles to overcome the premature convergence of PSO.
- We propose a scheme that uses 2D K-L divergence as the fitness function of MPSO, improving segmentation effectiveness and reducing time complexity.

Multilevel image segmentation is a technique that divides an image into multiple homogeneous regions. To improve the effectiveness and efficiency of multilevel image thresholding segmentation, we propose a segmentation algorithm based on two-dimensional (2D) Kullback-Leibler (K-L) divergence and modified Particle Swarm Optimization (MPSO). The approach calculates the 2D K-L divergence between an image and its segmented result, adopting the 2D histogram as the distribution function, and then uses the sum of the divergences of the different regions as the fitness function of MPSO to seek the optimal thresholds. The proposed 2D K-L divergence improves the accuracy of image segmentation; MPSO overcomes the premature convergence of PSO by improving the location update formulation and the global best position of particles, and drastically reduces the time complexity of multilevel thresholding segmentation. Extensive experiments were conducted on the Berkeley Segmentation Dataset and Benchmark (BSDS300), evaluating four image segmentation performance indices: BDE, PRI, GCE and VOI. The results show the robustness and effectiveness of the proposed algorithm.
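Below is a minimal Python sketch of the fitness evaluation described in the abstract: it builds a 2D histogram over gray level and local average, then scores a candidate set of thresholds by the sum of K-L divergences between the observed distribution and a piecewise-uniform model of each region. The histogram construction, region model, and function names are illustrative assumptions, not the authors' implementation, and the modified PSO update rules are not reproduced here.

```python
# Illustrative sketch (not the authors' code): 2D-histogram K-L divergence
# fitness that a (modified) PSO optimizer could minimize over threshold vectors.
import numpy as np

def histogram_2d(img, levels=256):
    """Joint histogram over (pixel gray level, 3x3 local mean gray level)."""
    img = img.astype(np.float64)
    pad = np.pad(img, 1, mode='edge')
    local = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    hist, _, _ = np.histogram2d(img.ravel(), local.ravel(),
                                bins=levels, range=[[0, levels], [0, levels]])
    return hist / hist.sum()  # joint probability p(gray, local mean)

def kl_fitness(prob2d, thresholds):
    """Sum of K-L divergences between the observed 2D distribution and a
    piecewise-uniform model of each thresholded region (illustrative)."""
    levels = prob2d.shape[0]
    cuts = [0] + sorted(int(t) for t in thresholds) + [levels]
    eps, total = 1e-12, 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        block = prob2d[lo:hi, lo:hi]      # region around the histogram diagonal
        w = block.sum()
        if w <= eps:
            continue
        q = w / block.size                # uniform model inside the region
        total += float(np.sum(block * np.log((block + eps) / q)))
    return total  # lower is better: the region model explains the image more closely
```

A PSO-style optimizer would then search over threshold vectors that minimize kl_fitness(histogram_2d(img), thresholds).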


Symposium on 3D User Interfaces | 2016

Interpreting 2D gesture annotations in 3D augmented reality

Benjamin Nuernberger; Kuo-Chin Lien; Tobias Höllerer; Matthew Turk

A 2D gesture annotation provides a simple way to annotate the physical world in augmented reality for a range of applications such as remote collaboration. When rendered from novel viewpoints, these annotations have previously only worked with statically positioned cameras or planar scenes. However, if the camera moves and is observing an arbitrary environment, 2D gesture annotations can easily lose their meaning when shown from novel viewpoints due to perspective effects. In this paper, we present a new approach to this problem based on gesture-enhanced annotation interpretation. By first classifying which type of gesture the user drew, we show that it is possible to render the 2D annotations in 3D in a way that conforms more closely to the user's original intention than traditional methods do. We first determined a generic vocabulary of important 2D gestures for an augmented reality-enhanced remote collaboration scenario by running an Amazon Mechanical Turk study with 88 participants. Next, we designed a novel real-time method to automatically handle the two most common 2D gesture annotations - arrows and circles - and give a detailed analysis of the ambiguities that must be handled in each case. Arrow gestures are interpreted by identifying their anchor points and using scene surface normals for better perspective rendering. For circle gestures, we designed a novel energy function to help infer the object of interest using both 2D image cues and 3D geometric cues. Results indicate that our method outperforms previous approaches by better conveying the meaning of the original drawing from different viewpoints.
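As a rough illustration of the circle-gesture step, the sketch below scores candidate objects with a weighted sum of a 2D cue (overlap between the object's image projection and the drawn circle) and a 3D cue (compactness of the object's 3D points). The specific cues, weights, and function names are assumptions for illustration only; the paper's actual energy function is more involved.

```python
# Illustrative scoring of circle-gesture candidates (assumed cue definitions,
# not the paper's exact energy): lower energy = likelier object of interest.
import numpy as np

def circle_energy(mask_2d, circle_mask, points_3d, w2d=1.0, w3d=0.5):
    # 2D cue: overlap (IoU) between the object's projection and the circle interior
    inter = np.logical_and(mask_2d, circle_mask).sum()
    union = np.logical_or(mask_2d, circle_mask).sum() + 1e-9
    e_2d = 1.0 - inter / union
    # 3D cue: spatial spread of the candidate's 3D points (more compact is better)
    e_3d = float(np.mean(np.var(points_3d, axis=0))) if len(points_3d) else 1.0
    return w2d * e_2d + w3d * e_3d

def pick_object(candidates, circle_mask):
    """candidates: list of (mask_2d, points_3d); returns index of the best candidate."""
    energies = [circle_energy(m, circle_mask, p) for m, p in candidates]
    return int(np.argmin(energies))
```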


International Symposium on Visual Computing | 2015

Eye Gaze Correction with a Single Webcam Based on Eye-Replacement

Yalun Qin; Kuo-Chin Lien; Matthew Turk; Tobias Höllerer

In traditional video conferencing systems, it is impossible for users to have eye contact when looking at the conversation partner's face displayed on the screen, due to the disparity between the locations of the camera and the screen. In this work, we implemented a gaze correction system that can automatically maintain eye contact by replacing the user's eyes with direct-looking eyes (looking directly into the camera) captured in an initialization stage. Our real-time system is robust against different lighting conditions and head poses, and it provides visually convincing and natural results while relying only on a single webcam that can be positioned almost anywhere around the screen.
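A minimal sketch of the eye-replacement idea, assuming an external eye detector and previously captured direct-gaze patches, could blend the stored patches over the current eye regions with OpenCV's Poisson (seamless) cloning; this is not the authors' implementation, just one way to realize the described step.

```python
# Illustrative eye-replacement step (assumed helpers, not the authors' system).
import cv2
import numpy as np

def correct_gaze(frame, eye_boxes, direct_gaze_patches):
    """frame: BGR image; eye_boxes: [(x, y, w, h)] from any eye/landmark detector;
    direct_gaze_patches: eye images captured while the user looked into the camera."""
    out = frame.copy()
    for (x, y, w, h), patch in zip(eye_boxes, direct_gaze_patches):
        patch = cv2.resize(patch, (w, h))
        mask = np.full(patch.shape[:2], 255, dtype=np.uint8)
        center = (x + w // 2, y + h // 2)
        # Poisson blending hides seams and adapts the patch to the current lighting
        out = cv2.seamlessClone(patch, out, mask, center, cv2.NORMAL_CLONE)
    return out
```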


International Symposium on Mixed and Augmented Reality | 2015

[POSTER] 2D-3D Co-segmentation for AR-based Remote Collaboration

Kuo-Chin Lien; Benjamin Nuernberger; Matthew Turk; Tobias Höllerer

In Augmented Reality (AR) based remote collaboration, a remote user can draw a 2D annotation that emphasizes an object of interest to guide a local user in accomplishing a task. This annotation is typically performed only once and then sticks to the selected object in the local user's view, independent of his or her camera movement. In this paper, we present an algorithm to segment the selected object, including its occluded surfaces, such that the 2D selection can be appropriately interpreted in 3D and rendered as a useful AR annotation even when the local user moves and significantly changes the viewpoint.


International Conference on 3D Vision | 2015

On Preserving Structure in Stereo Seam Carving

Kuo-Chin Lien; Matthew Turk

The major objective of image retargeting algorithms is to preserve the viewer's perception while adjusting the aspect ratio of an image. This means that an ideal retargeting algorithm has to be able to preserve high-level semantics and avoid generating low-level image distortion. Stereoscopic image retargeting poses an even more challenging problem in that the 3D perception has to be preserved as well. In this paper, we propose an algorithm based on high-order two-view co-labeling to simultaneously retarget a given stereo pair and preserve its 2D as well as 3D quality. Our experimental results qualitatively demonstrate the improved ability to preserve 2D image structures in both views. In addition, we show quantitatively that our algorithm improves upon the state of the art by up to 85% in terms of a measurement based on depth distortion.
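For context, the sketch below shows the standard single-view seam-carving step (a gradient-magnitude energy plus a dynamic program over rows). The paper's contribution, high-order two-view co-labeling that keeps the stereo pair consistent, sits on top of a step like this and is not shown.

```python
# Baseline single-view seam carving (illustrative; the stereo co-labeling of
# the paper is not reproduced here).
import numpy as np

def find_vertical_seam(gray):
    """gray: 2D float array; returns one column index per row (minimum-energy seam)."""
    gy, gx = np.gradient(gray)
    energy = np.abs(gx) + np.abs(gy)             # simple gradient-magnitude energy
    cost = energy.copy()
    for r in range(1, cost.shape[0]):            # dynamic program over rows
        left = np.roll(cost[r - 1], 1);  left[0] = np.inf
        right = np.roll(cost[r - 1], -1); right[-1] = np.inf
        cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
    seam = np.zeros(cost.shape[0], dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(cost.shape[0] - 2, -1, -1):   # backtrack the cheapest path
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, cost.shape[1])
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam
```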


Virtual Reality Software and Technology | 2016

Multi-view gesture annotations in image-based 3D reconstructed scenes

Benjamin Nuernberger; Kuo-Chin Lien; Lennon Grinta; Chris Sweeney; Matthew Turk; Tobias Höllerer

We present a novel 2D gesture annotation method for use in image-based 3D reconstructed scenes with applications in collaborative virtual and augmented reality. Image-based reconstructions allow users to virtually explore a remote environment using image-based rendering techniques. To collaborate with other users, either synchronously or asynchronously, simple 2D gesture annotations can be used to convey spatial information to another user. Unfortunately, prior methods are either unable to disambiguate such 2D annotations in 3D from novel viewpoints or require relatively dense reconstructions of the environment. In this paper, we propose a simple multi-view annotation method that is useful in a variety of scenarios and applicable to both very sparse and dense 3D reconstructions. Specifically, we employ interactive disambiguation of the 2D gestures via a second annotation drawn from another viewpoint, triangulating two drawings to achieve a 3D result. Our method automatically chooses an appropriate second viewpoint and uses image-based rendering transitions to keep the user oriented while moving to the second viewpoint. User experiments in an asynchronous collaboration scenario demonstrate the usability of the method and its superiority over a baseline method. In addition, we showcase our method running on a variety of image-based reconstruction datasets and highlight its use in a synchronous local-remote user collaboration system.
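The triangulation step can be illustrated with standard linear (DLT) triangulation of a point annotated in two views, assuming the projection matrices are known from the image-based reconstruction; the paper's automatic viewpoint selection and image-based rendering transitions are not shown.

```python
# Illustrative DLT triangulation of one annotation point drawn in two views
# (P1, P2 are 3x4 projection matrices assumed known from the reconstruction).
import numpy as np

def triangulate(P1, P2, x1, x2):
    """x1, x2: (u, v) pixel coordinates of the same annotated point in the two
    views; returns the triangulated 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenize
```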


International Symposium on Mixed and Augmented Reality | 2016

PPV: Pixel-Point-Volume Segmentation for Object Referencing in Collaborative Augmented Reality

Kuo-Chin Lien; Benjamin Nuernberger; Tobias Höllerer; Matthew Turk

We present a method for collaborative augmented reality (AR) that enables users from different viewpoints to interpret object references specified via 2D on-screen circling gestures. Based on a user's 2D drawing annotation, the method segments out the user-selected object using an incomplete or imperfect scene model and the color image from the drawing viewpoint. Specifically, we propose a novel segmentation algorithm that utilizes both 2D and 3D scene cues, structured into a three-layer graph of pixels, 3D points, and volumes (supervoxels), solved via standard graph cut algorithms. This segmentation enables an appropriate rendering of the user's 2D annotation from other viewpoints in 3D augmented reality. Results demonstrate the superiority of the proposed method over existing methods.
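The final labeling can be obtained with a standard s-t minimum cut, as the abstract notes. The sketch below uses networkx with placeholder unary and pairwise weights over generic node ids; the paper's actual three-layer pixel/point/volume construction and its edge terms are not reproduced.

```python
# Minimal s-t min-cut sketch (placeholder weights; nodes may stand for pixels,
# 3D points, or supervoxels, linked within and across layers).
import networkx as nx

def segment_min_cut(nodes, unary, pairwise):
    """nodes: hashable ids; unary: {node: (cost_fg, cost_bg)};
    pairwise: {(a, b): weight} smoothness links."""
    G = nx.DiGraph()
    for n in nodes:
        fg, bg = unary[n]
        G.add_edge('SRC', n, capacity=bg)   # cut iff n is assigned to background
        G.add_edge(n, 'SNK', capacity=fg)   # cut iff n is assigned to foreground
    for (a, b), w in pairwise.items():
        G.add_edge(a, b, capacity=w)
        G.add_edge(b, a, capacity=w)
    _, (src_side, _) = nx.minimum_cut(G, 'SRC', 'SNK')
    return {n: ('foreground' if n in src_side else 'background') for n in nodes}
```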


International Conference on Multimedia and Expo | 2013

Stereo random field for bi-layer image segmentation

Kuo-Chin Lien; Jerry D. Gibson

Stereo image segmentation usually incorporates depth cues to achieve high quality. However, previous methods that pointwise propagate information within stereo pairs can suffer from a poorly estimated depth map. In this paper, we introduce a novel graphical model in which a greater number of reliable messages can be conveyed during two-view joint segmentation. This model leads to a strongly coupled stereo pair, improving the robustness, accuracy, and consistency of stereo segmentation. Additionally, we augment the depth map into a novel correspondence matrix that is suitable for the proposed stereo segmentation model. Our experiments on a public stereo dataset show that the proposed correspondence method and stereo model outperform state-of-the-art stereo segmentation algorithms.
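As one possible reading of the correspondence matrix, the sketch below converts a left-view disparity map into a sparse matrix linking each left pixel to its matching right-view pixel, which could then provide inter-view links in a joint two-view segmentation model; the binary weighting is a placeholder, not the paper's formulation.

```python
# Illustrative construction of a sparse left-to-right pixel correspondence
# matrix from a disparity map (placeholder weighting, not the paper's model).
import numpy as np
from scipy.sparse import coo_matrix

def disparity_to_correspondence(disp):
    """disp: HxW disparity of the left view; returns an (H*W) x (H*W) sparse
    matrix C where C[i, j] = 1 links left pixel i to right-view pixel j."""
    height, width = disp.shape
    rows, cols, vals = [], [], []
    for y in range(height):
        for x in range(width):
            xr = int(round(x - disp[y, x]))     # matching column in the right view
            if 0 <= xr < width:
                rows.append(y * width + x)
                cols.append(y * width + xr)
                vals.append(1.0)
    return coo_matrix((vals, (rows, cols)), shape=(height * width, height * width))
```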


International Conference on Multimedia and Expo | 2013

Bi-layer disparity remapping for handheld 3D video communications

Stephen Mangiat; Kuo-Chin Lien; Jerry D. Gibson

Handheld devices with “glasses-free” autostereoscopic displays present a new opportunity for 3D video communications. 3D can enhance realism and enrich the user experience, yet it must be employed without causing visual discomfort. A simple shift-convergence disparity remapping technique can align a user's face throughout a 3D video call, eliminating uncomfortable crossed disparities. However, this can produce large disparities in the background that the viewer is unable to fuse. Furthermore, reducing the camera separation, and thus all disparities, may lead to a flat appearance that does not aid realism. Using foreground/background segmentation, we propose a novel bi-layer disparity remapping algorithm to limit uncomfortable background disparities during handheld 3D video communications. A user study with the HTC Evo 3D handheld device shows that this method improves visual comfort while preserving the critical depths within the face.
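A rough sketch of the bi-layer idea, with an assumed comfort limit and compression curve rather than the paper's exact mapping: shift disparities so the face converges at the screen plane, then compress only background disparities that exceed the comfortable range.

```python
# Illustrative bi-layer disparity remapping (comfort limit and log compression
# are assumptions, not the paper's mapping).
import numpy as np

def remap_disparity(disp, fg_mask, face_disp, comfort=20.0):
    """disp: per-pixel disparity; fg_mask: True for foreground (face/body);
    face_disp: mean disparity of the face region; returns remapped disparity."""
    shifted = disp - face_disp                   # shift-convergence: zero disparity at the face
    out = shifted.copy()
    bg_vals = shifted[~fg_mask]
    excess = np.abs(bg_vals) > comfort           # only background beyond the comfort limit
    bg_vals[excess] = np.sign(bg_vals[excess]) * (
        comfort + np.log1p(np.abs(bg_vals[excess]) - comfort))
    out[~fg_mask] = bg_vals
    return out
```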


IEEE Virtual Reality Conference | 2016

Anchoring 2D gesture annotations in augmented reality

Benjamin Nuernberger; Kuo-Chin Lien; Tobias Höllerer; Matthew Turk

Augmented reality-enhanced collaboration systems often allow users to draw 2D gesture annotations onto video feeds to help collaborators complete physical tasks. This works well for static cameras, but for movable cameras, perspective effects cause problems when trying to render 2D annotations from a new viewpoint in 3D. In this paper, we present a new approach to this problem based on gesture-enhanced annotations. By first classifying which type of gesture the user drew, we show that it is possible to render annotations in 3D in a way that conforms more closely to the user's original intention than traditional methods do. We first determined a generic vocabulary of important 2D gestures for remote collaboration by running an Amazon Mechanical Turk study with 88 participants. Next, we designed a novel system to automatically handle the two most common 2D gesture annotations - arrows and circles. Arrows are handled by identifying their anchor points and using surface normals for better perspective rendering. For circles, we designed a novel energy function to help infer the object of interest using both 2D image cues and 3D geometric cues. Results indicate that our approach outperforms previous methods by better conveying the original drawing's meaning from different viewpoints.
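A minimal sketch of the arrow-handling step under assumed camera conventions (shared intrinsics, 4x4 world-to-camera poses): back-project the 2D anchor to a 3D surface point, offset the arrow tail along the surface normal, and re-project both endpoints into the new viewpoint. Helper names and conventions are illustrative, not the authors' code.

```python
# Illustrative arrow re-projection (assumed camera conventions).
import numpy as np

def reproject_arrow(anchor_2d, depth, K, T_draw, T_new, normal_world, length=0.1):
    """anchor_2d: (u, v) pixel in the drawing view; depth: depth at that pixel;
    K: 3x3 intrinsics; T_draw, T_new: 4x4 world-to-camera poses;
    normal_world: unit surface normal at the anchor point."""
    uv1 = np.array([anchor_2d[0], anchor_2d[1], 1.0])
    p_cam = depth * (np.linalg.inv(K) @ uv1)               # back-project in the drawing camera
    p_world = (np.linalg.inv(T_draw) @ np.append(p_cam, 1.0))[:3]
    tail_world = p_world + length * normal_world           # lift the tail along the normal

    def project(p_w):                                       # world point -> pixel in the new view
        pc = (T_new @ np.append(p_w, 1.0))[:3]
        uv = K @ pc
        return uv[:2] / uv[2]

    return project(p_world), project(tail_world)            # (tip_2d, tail_2d)
```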

Collaboration


Dive into Kuo-Chin Lien's collaborations.

Top Co-Authors

Matthew Turk (University of California)
Chris Sweeney (University of California)
Lennon Grinta (University of California)
Yalun Qin (University of California)
Wei Li (Zhejiang University)