Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tyng-Luh Liu is active.

Publication


Featured research published by Tyng-Luh Liu.


Computer Vision and Pattern Recognition | 2005

Local discriminant embedding and its variants

Hwann-Tzong Chen; Huang-Wei Chang; Tyng-Luh Liu

We present a new approach, called local discriminant embedding (LDE), to manifold learning and pattern classification. In our framework, the neighbor and class relations of data are used to construct the embedding for classification problems. The proposed algorithm learns the embedding for the submanifold of each class by solving an optimization problem. After being embedded into a low-dimensional subspace, data points of the same class maintain their intrinsic neighbor relations, whereas neighboring points of different classes no longer stick to one another. Via embedding, new test data are thus more reliably classified by the nearest neighbor rule, owing to the locally discriminating nature. We also describe two useful variants: two-dimensional LDE and kernel LDE. Comprehensive comparisons and extensive experiments on face recognition are included to demonstrate the effectiveness of our method.
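
The embedding step above boils down to a generalized eigenvalue problem built from two neighborhood graphs. Below is a minimal NumPy/SciPy sketch of that form, assuming binary k-NN affinities and hypothetical helper names; it illustrates the idea and is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def lde(X, y, k=5, dim=2):
    """Minimal local discriminant embedding sketch (illustrative only).

    X: (n, d) data matrix, y: (n,) class labels.
    Builds two binary k-NN graphs -- W links neighbors of the SAME class,
    Wp links neighbors of DIFFERENT classes -- and solves the generalized
    eigenproblem  X^T (Dp - Wp) X v = lambda X^T (D - W) X v.
    """
    n, d = X.shape
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]          # k nearest neighbors of each point

    W = np.zeros((n, n))
    Wp = np.zeros((n, n))
    for i in range(n):
        for j in nn[i]:
            if y[i] == y[j]:
                W[i, j] = W[j, i] = 1.0              # same-class neighbor pair
            else:
                Wp[i, j] = Wp[j, i] = 1.0            # different-class neighbor pair

    L = np.diag(W.sum(1)) - W                        # same-class graph Laplacian
    Lp = np.diag(Wp.sum(1)) - Wp                     # different-class graph Laplacian
    A = X.T @ Lp @ X
    B = X.T @ L @ X + 1e-6 * np.eye(d)               # small ridge for numerical stability

    vals, vecs = eigh(A, B)                          # ascending generalized eigenvalues
    V = vecs[:, ::-1][:, :dim]                       # keep the top `dim` directions
    return V, X @ V                                  # projection and training embedding
```

A new sample x would then be embedded as V.T @ x and labeled by its nearest embedded neighbor, matching the classification rule described above.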


International Conference on Computer Vision | 2011

Fusing generic objectness and visual saliency for salient object detection

Kai-Yueh Chang; Tyng-Luh Liu; Hwann-Tzong Chen; Shang-Hong Lai

We present a novel computational model to explore the relatedness of objectness and saliency, each of which plays an important role in the study of visual attention. The proposed framework conceptually integrates these two concepts by constructing a graphical model to account for their relationships, and concurrently improves their estimation by iteratively optimizing a novel energy function realizing the model. Specifically, the energy function comprises the objectness, the saliency, and the interaction energy, which respectively account for their individual regularities and their mutual effects. Minimizing the energy with one or the other fixed elegantly transforms the model into solving the problem of objectness or saliency estimation, while the useful information from the other concept is still utilized through the interaction term. Experimental results on two benchmark datasets demonstrate that the proposed model can simultaneously yield a saliency map of better quality and a more meaningful objectness output for salient object detection.
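
The alternating minimization described above can be pictured with a toy quadratic energy in which the interaction term simply encourages the two maps to agree. The forms and names below (o0, s0, lam) are hypothetical stand-ins, not the paper's actual energy.

```python
import numpy as np

def fuse_objectness_saliency(o0, s0, lam=0.5, iters=20):
    """Toy alternating minimization of
        E(o, s) = ||o - o0||^2 + ||s - s0||^2 + lam * ||o - s||^2,
    where o0 and s0 are initial objectness and saliency maps (flattened).
    Fixing s, the optimal o has a closed form, and vice versa, mirroring
    the fix-one-optimize-the-other scheme sketched in the abstract.
    """
    o, s = o0.astype(float).copy(), s0.astype(float).copy()
    for _ in range(iters):
        o = (o0 + lam * s) / (1.0 + lam)   # minimize E over o with s fixed
        s = (s0 + lam * o) / (1.0 + lam)   # minimize E over s with o fixed
    return o, s
```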


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Multiple Kernel Learning for Dimensionality Reduction

Yen-Yu Lin; Tyng-Luh Liu; Chiou-Shann Fuh

In solving complex visual learning tasks, adopting multiple descriptors to more precisely characterize the data has been a feasible way to improve performance. The resulting data representations are typically high-dimensional and assume diverse forms. Hence, finding a way to transform them into a unified space of lower dimension generally facilitates the underlying tasks, such as object recognition or clustering. To this end, the proposed approach (termed MKL-DR) generalizes the framework of multiple kernel learning for dimensionality reduction, and distinguishes itself with three main contributions. First, our method provides the convenience of using diverse image descriptors to describe various aspects of the underlying data. Second, it extends a broad set of existing dimensionality reduction techniques to incorporate multiple kernel learning, and consequently improves their effectiveness. Third, by focusing on the techniques pertaining to dimensionality reduction, the formulation introduces a new class of applications for the multiple kernel learning framework, addressing not only supervised learning problems but also unsupervised and semi-supervised ones.
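
Read in graph-embedding terms, MKL-DR ties a nonnegative weight vector over the base kernels to the learned projection. The following is a hedged sketch of that coupled objective, with notation of my own choosing rather than the paper's:

```latex
% M base kernels K_1,...,K_M; beta >= 0 are the kernel weights, A the
% projection coefficients, and W, W' the affinity and penalty graphs of the
% dimensionality reduction method being generalized (notation is mine).
\[
\min_{A,\;\beta \ge 0}\;
\sum_{i,j} w_{ij}
\bigl\| A^{\top}\mathbb{K}^{(i)}\beta - A^{\top}\mathbb{K}^{(j)}\beta \bigr\|^{2}
\quad \text{s.t.} \quad
\sum_{i,j} w'_{ij}
\bigl\| A^{\top}\mathbb{K}^{(i)}\beta - A^{\top}\mathbb{K}^{(j)}\beta \bigr\|^{2} = 1,
\qquad
\bigl[\mathbb{K}^{(i)}\bigr]_{nm} = K_m(x_n, x_i).
\]
```

With beta fixed, the problem in A is a generalized eigenvalue problem; with A fixed, a smaller problem in beta remains, so the two sets of variables can be optimized alternately.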


International Conference on Computer Vision | 1999

Approximate tree matching and shape similarity

Tyng-Luh Liu; Davi Geiger

We present a framework for 2D shape contour (silhouette) comparison that can account for stretchings, occlusions and region information. Topological changes due to the original 3D scenarios and articulations are also addressed. To compare the degree of similarity between any two shapes, our approach is to represent each shape contour with a free tree structure derived from a shape axis (SA) model, which we have recently proposed. We then use a tree matching scheme to find the best approximate match and the matching cost. To deal with articulations, stretchings and occlusions, three local tree matching operations, merge, cut, and merge-and-cut, are introduced to yield optimally approximate matches, which can accommodate not only one-to-one but also many-to-many mappings. The optimization process efficiently yields a guaranteed globally optimal match. Experimental results on a variety of shape contours are provided.
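
The flavor of the matching recursion can be conveyed by a heavily simplified sketch: rooted trees, only the cut operation, and children paired by an optimal assignment. The merge and merge-and-cut operations, the free-tree (unrooted) handling, and the actual SA-tree costs are omitted, and the tree encoding and costs below are hypothetical.

```python
import numpy as np
from functools import lru_cache
from scipy.optimize import linear_sum_assignment

def tree_match_cost(tree_a, tree_b, attr_a, attr_b, cut_cost=1.0):
    """Approximate matching cost between two rooted trees (simplified sketch).

    tree_a, tree_b: dicts mapping node id -> list of child ids (root is 0).
    attr_a, attr_b: dicts mapping node id -> scalar attribute (toy feature).
    """

    def cut(tree, r):
        # cost of cutting away the entire subtree rooted at r
        return cut_cost + sum(cut(tree, c) for c in tree[r])

    @lru_cache(maxsize=None)
    def cost(ra, rb):
        node_cost = abs(attr_a[ra] - attr_b[rb])       # toy node dissimilarity
        ca, cb = tree_a[ra], tree_b[rb]
        if not ca and not cb:
            return node_cost
        # square cost matrix: real children plus "cut" dummies on each side
        n = len(ca) + len(cb)
        C = np.zeros((n, n))
        for i, a_child in enumerate(ca):
            for j, b_child in enumerate(cb):
                C[i, j] = cost(a_child, b_child)        # match the two subtrees
            C[i, len(cb):] = cut(tree_a, a_child)       # or cut the A-subtree
        for j, b_child in enumerate(cb):
            C[len(ca):, j] = cut(tree_b, b_child)       # or cut the B-subtree
        rows, cols = linear_sum_assignment(C)           # optimal child pairing
        return node_cost + C[rows, cols].sum()

    return cost(0, 0)
```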


Computer Vision and Pattern Recognition | 2011

From co-saliency to co-segmentation: An efficient and fully unsupervised energy minimization model

Kai-Yueh Chang; Tyng-Luh Liu; Shang-Hong Lai

We address two key issues of co-segmentation over multiple images. The first is whether a pure unsupervised algorithm can satisfactorily solve this problem. Without the user's guidance, segmenting the foregrounds implied by the common object is quite a challenging task, especially when substantial variations in the objects' appearance, shape, and scale are allowed. The second issue concerns efficiency, which determines whether the technique can lead to practical use. With these in mind, we establish an MRF optimization model that has an energy function with nice properties and can be shown to effectively resolve the two difficulties. Specifically, instead of relying on user inputs, our approach introduces a co-saliency prior as the hint about possible foreground locations, and uses it to construct the MRF data terms. To complete the optimization framework, we include a novel global term that is more appropriate to co-segmentation, and results in a submodular energy function. The proposed model can thus be optimally solved by graph cuts. We demonstrate these advantages by testing our method on several benchmark datasets.
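
For a single image, the data-term idea (co-saliency as a soft foreground prior) combined with a standard smoothness term already yields a submodular energy that graph cuts minimize exactly. The sketch below assumes the PyMaxflow package and omits the paper's global term and the coupling across images.

```python
import numpy as np
import maxflow  # PyMaxflow (assumed available): pip install PyMaxflow

def segment_from_cosaliency(cosal, lam=2.0):
    """Binary segmentation of one image from a co-saliency map in [0, 1].

    Unary terms are negative log-likelihoods derived from the co-saliency
    prior; the pairwise term is a 4-connected Potts smoothness penalty.
    Since the energy is submodular, the min-cut gives the exact optimum.
    """
    eps = 1e-6
    fg_cost = -np.log(cosal + eps)          # cost of labeling a pixel foreground
    bg_cost = -np.log(1.0 - cosal + eps)    # cost of labeling it background

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(cosal.shape)
    g.add_grid_edges(nodes, lam)            # Potts smoothness on the pixel grid
    g.add_grid_tedges(nodes, fg_cost, bg_cost)
    g.maxflow()
    return g.get_grid_segments(nodes)       # boolean mask: True = foreground
```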


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004

Real-time tracking using trust-region methods

Tyng-Luh Liu; Hwann-Tzong Chen

Optimization methods based on iterative schemes can be divided into two classes: line-search methods and trust-region methods. While line-search techniques are commonly found in various vision applications, not much attention is paid to trust-region ones. Motivated by the fact that line-search methods can be considered as special cases of trust-region methods, we propose to establish a trust-region framework for real-time tracking. Our approach is characterized by three key contributions. First, since a trust-region tracking system is more effective, it often yields better performance than other trackers that rely on iterative optimization to perform tracking, e.g., a line-search-based mean-shift tracker. Second, we have formulated a representation model that uses two coupled weighting schemes derived from the covariance ellipse to integrate an object's color probability distribution and edge density information. As a result, the system can address rotation and nonuniform scaling in a continuous space, rather than working on some presumably possible discrete values of rotation angle and scale. Third, the framework is very flexible in that a variety of distance functions can be adapted easily. Experimental results and comparative studies are provided to demonstrate the efficiency of the proposed method.
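
For readers unfamiliar with the trust-region idea the abstract builds on, the sketch below shows the generic textbook step (Cauchy-point version): minimize a local quadratic model inside a radius, then grow or shrink that radius according to how well the model predicted the actual decrease. It is not the tracker itself; f, grad, and hess are user-supplied callables.

```python
import numpy as np

def trust_region_minimize(f, grad, hess, x0, radius=1.0, max_radius=10.0,
                          eta=0.15, tol=1e-6, max_iter=100):
    """Generic trust-region descent using the Cauchy-point step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Cauchy point: minimizer of the quadratic model along -g within the radius
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(1.0, gnorm ** 3 / (radius * gBg))
        p = -(tau * radius / gnorm) * g

        predicted = -(g @ p + 0.5 * p @ B @ p)   # decrease promised by the model
        actual = f(x) - f(x + p)                 # decrease actually obtained
        rho = actual / predicted if predicted > 0 else 0.0

        if rho < 0.25:
            radius *= 0.25                                   # model was poor: shrink
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), radius):
            radius = min(2.0 * radius, max_radius)           # model was good: expand
        if rho > eta:
            x = x + p                                        # accept the step
    return x
```

In a tracking context, f would measure the appearance distance between the template and the candidate region parameterized by x (e.g. position, scale, rotation), which is how continuous handling of rotation and scaling enters.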


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Representation and self-similarity of shapes

Davi Geiger; Tyng-Luh Liu; Robert V. Kohn

Representing shapes in a compact and informative form is a significant problem for vision systems that must recognize or classify objects. We describe a compact representation model for two-dimensional (2D) shapes by investigating their self-similarities and constructing their shape axis trees (SA-trees). Our approach can be formulated as a variational one (or, equivalently, as MAP estimation of a Markov random field). We start with a 2D shape, its boundary contour, and two different parameterizations for the contour (one parameterization is oriented counterclockwise and the other clockwise). To measure its self-similarity, the two parameterizations are matched to derive the best set of one-to-one point-to-point correspondences along the contour. The cost functional used in the matching may vary and is determined by the adopted self-similarity criteria, e.g., cocircularity, distance variation, parallelism, and region homogeneity. The loci of middle points of the pairing contour points yield the shape axis and they can be grouped into a unique free tree structure, the SA-tree. By implicitly encoding the (local and global) shape information into an SA-tree, a variety of vision tasks, e.g., shape recognition, comparison, and retrieval, can be performed in a more robust and efficient way via various tree-based algorithms. A dynamic programming algorithm gives the optimal solution in O(N), where N is the size of the contour.
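
The matching of the two parameterizations can be pictured as a dynamic program between the contour and its reversed copy, with a local cost drawn from one of the self-similarity criteria. The sketch below uses a crude distance-variation cost and fixed endpoints, whereas the paper's matching is cyclic and supports several criteria; it is illustrative only.

```python
import numpy as np

def shape_axis_sketch(contour):
    """Match a closed contour (N x 2 points, ordered CCW) against its
    clockwise traversal with a DTW-like dynamic program, then return the
    midpoints of the matched pairs as a rough 'shape axis' (toy version)."""
    a = np.asarray(contour, dtype=float)
    b = a[::-1]                           # the clockwise parameterization
    n = len(a)

    def local_cost(i, j, pi, pj):
        # toy self-similarity cost: variation of the pairing distance
        return abs(np.linalg.norm(a[i] - b[j]) - np.linalg.norm(a[pi] - b[pj]))

    INF = np.inf
    D = np.full((n, n), INF)
    D[0, 0] = 0.0
    back = np.zeros((n, n, 2), dtype=int)
    for i in range(n):
        for j in range(n):
            if i == 0 and j == 0:
                continue
            for pi, pj in ((i - 1, j - 1), (i - 1, j), (i, j - 1)):
                if pi < 0 or pj < 0 or D[pi, pj] == INF:
                    continue
                c = D[pi, pj] + local_cost(i, j, pi, pj)
                if c < D[i, j]:
                    D[i, j] = c
                    back[i, j] = (pi, pj)

    # trace back the optimal pairing and collect midpoints
    path, ij = [], (n - 1, n - 1)
    while ij != (0, 0):
        path.append(ij)
        ij = tuple(back[ij])
    path.append((0, 0))
    return np.array([(a[i] + b[j]) / 2.0 for i, j in path[::-1]])
```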


ACM Multimedia | 2005

Semantic manifold learning for image retrieval

Yen-Yu Lin; Tyng-Luh Liu; Hwann-Tzong Chen

Learning the user's semantics for CBIR involves two different sources of information: the similarity relations entailed by the content-based features, and the relevance relations specified in the feedback. Given that, we propose an augmented relation embedding (ARE) to map the image space into a semantic manifold that faithfully grasps the user's preferences. Besides ARE, we also look into the issue of selecting a good feature set for improving the retrieval performance. With these two aspects of effort, we have established a system that yields far better results than those previously reported. Overall, our approach can be characterized by three key properties: 1) The framework uses one relational graph to describe the similarity relations, and the other two to encode the relevant/irrelevant relations indicated in the feedback. 2) With the relational graphs so defined, learning a semantic manifold can be transformed into solving a constrained optimization problem, and is reduced to the ARE algorithm, accounting for both the representation and the classification points of view. 3) An image representation based on augmented features is introduced to couple with the ARE learning. The use of these features is significant in capturing the semantics concerning different scales of image regions. We conclude with experimental results and comparisons to demonstrate the effectiveness of our method.
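
One plausible graph-embedding reading of the three relational graphs is sketched below, with notation of my own; the exact weighting and constraints in ARE differ.

```latex
% W^S: content-similarity graph; W^R: graph linking images marked relevant to
% each other; W^I: graph linking relevant images to irrelevant ones.
% y_i denotes the embedding of image i (notation is mine, not the paper's).
\[
\max_{Y}\; \sum_{i,j} W^{I}_{ij}\,\lVert y_i - y_j \rVert^{2}
\quad \text{s.t.} \quad
\sum_{i,j} \bigl(W^{S}_{ij} + W^{R}_{ij}\bigr)\,\lVert y_i - y_j \rVert^{2} = 1 .
\]
```

In words: push apart the relevant/irrelevant pairs from the feedback while keeping content-similar and co-relevant images close; written with graph Laplacians, such a constrained problem reduces to a generalized eigenvalue problem.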


Computer Vision and Pattern Recognition | 2007

Local Ensemble Kernel Learning for Object Category Recognition

Yen-Yu Lin; Tyng-Luh Liu; Chiou-Shann Fuh

This paper describes a local ensemble kernel learning technique to recognize/classify objects from a large number of diverse categories. Due to the possibly large intraclass feature variations, using only a single unified kernel-based classifier may not satisfactorily solve the problem. Our approach is to carry out the recognition task with adaptive ensemble kernel machines, each of which is derived from proper localization and regularization. Specifically, for each training sample, we learn a distinct ensemble kernel constructed in a way to give good classification performance for data falling within the corresponding neighborhood. We achieve this effect by aligning each ensemble kernel with a locally adapted target kernel, followed by smoothing out the discrepancies among kernels of nearby data. Our experimental results on various image databases manifest that the technique to optimize local ensemble kernels is effective and consistent for object recognition.
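
A much-reduced sketch of the per-sample idea follows: weight each base kernel by its alignment with a locally focused target kernel, then smooth the weights over neighboring samples. Alignment here is ordinary kernel-target alignment, the locality matrix and all names are hypothetical, and the paper's actual learning and regularization are richer.

```python
import numpy as np

def local_ensemble_weights(kernels, y, locality, smooth=0.5):
    """Per-sample ensemble kernel weights (illustrative simplification).

    kernels:  list of M base kernel matrices, each of shape (n, n).
    y:        (n,) class labels.
    locality: (n, n) matrix; locality[i, j] says how much sample j matters in
              the neighborhood of sample i (e.g. a row-normalized k-NN mask).
    Returns beta of shape (n, M), one weight vector per training sample.
    """
    n = len(y)
    Y = np.where(y[:, None] == y[None, :], 1.0, -1.0)    # ideal target kernel
    beta = np.zeros((n, len(kernels)))
    for i in range(n):
        focus = np.outer(locality[i], locality[i])        # emphasize i's neighborhood
        T = Y * focus                                     # locally adapted target
        for m, K in enumerate(kernels):
            num = (K * T).sum()                           # local kernel-target alignment
            den = np.sqrt((K * K * focus).sum() * (Y * Y * focus).sum()) + 1e-12
            beta[i, m] = max(num / den, 0.0)
        if beta[i].sum() > 0:
            beta[i] /= beta[i].sum()                      # convex combination weights
    return (1 - smooth) * beta + smooth * (locality @ beta)  # smooth across neighbors
```

Sample i's ensemble kernel would then be sum_m beta[i, m] * kernels[m].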


IEEE Transactions on Image Processing | 2011

Regularized Background Adaptation: A Novel Learning Rate Control Scheme for Gaussian Mixture Modeling

Horng-Horng Lin; Jen-Hui Chuang; Tyng-Luh Liu

To model a scene for background subtraction, Gaussian mixture modeling (GMM) is a popular choice for its capability of adapting to background variations. However, GMM often suffers from a tradeoff between robustness to background changes and sensitivity to foreground abnormalities, and is inefficient in managing the tradeoff for various surveillance scenarios. By reviewing the formulations of GMM, we identify that such a tradeoff can be easily controlled by adaptive adjustments of the GMM's learning rates for image pixels at different locations and of distinct properties. A new rate control scheme based on high-level feedback is then developed to provide better regularization of background adaptation for GMM and to help resolve the tradeoff. Additionally, to handle lighting variations that change too fast to be caught by GMM, a heuristic rooted in frame difference is proposed to assist the proposed rate control scheme in reducing false foreground alarms. Experiments show that the proposed learning rate control scheme, together with the heuristic for adapting to over-quick lighting changes, gives better performance than conventional GMM approaches.
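
The core of the rate-control idea, a per-pixel learning rate raised or lowered by feedback instead of being a global constant, can be sketched on top of a conventional per-pixel Gaussian mixture update. The feedback rule below (damp alpha where a pixel keeps flipping to foreground, boost it when a scene-wide change is suspected) is a hypothetical stand-in for the paper's regularized scheme.

```python
import numpy as np

def update_pixel_gmm(x, w, mu, var, alpha, k_sigma=2.5, var0=15.0 ** 2, w0=0.05):
    """One Stauffer-Grimson style GMM update for a single grayscale pixel.

    w, mu, var: length-K arrays (component weights, means, variances).
    alpha:      per-pixel learning rate -- the quantity being regulated.
    Returns the updated mixture and a foreground flag.
    """
    w, mu, var = w.copy(), mu.copy(), var.copy()
    matched = np.where(np.abs(x - mu) <= k_sigma * np.sqrt(var))[0]
    if matched.size:
        m = matched[np.argmax(w[matched])]       # best matching component
        w = (1 - alpha) * w
        w[m] += alpha
        rho = alpha / max(w[m], 1e-6)            # simplified component learning rate
        mu[m] = (1 - rho) * mu[m] + rho * x
        var[m] = (1 - rho) * var[m] + rho * (x - mu[m]) ** 2
    else:
        m = int(np.argmin(w))                    # replace the weakest component
        w[m], mu[m], var[m] = w0, x, var0
        w = w / w.sum()
    order = np.argsort(-w / np.sqrt(var))        # rank components by w / sigma
    bg = set(order[: int(np.searchsorted(np.cumsum(w[order]), 0.7)) + 1])
    is_foreground = (matched.size == 0) or (m not in bg)
    return w, mu, var, is_foreground

def feedback_alpha(alpha, is_foreground, fg_age, global_change, lo=1e-3, hi=0.05):
    """Hypothetical feedback rule on the per-pixel learning rate: slow the
    adaptation where foreground persists (so real objects are not absorbed
    into the background), speed it up when a scene-wide change is detected."""
    if global_change:
        return hi
    if is_foreground and fg_age > 30:
        return max(alpha * 0.5, lo)
    return min(alpha * 1.05, hi)
```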

Collaboration


Dive into Tyng-Luh Liu's collaborations.

Top Co-Authors

Hwann-Tzong Chen, National Tsing Hua University
Chiou-Shann Fuh, National Taiwan University
Jen-Hui Chuang, National Chiao Tung University
Horng-Horng Lin, National Chiao Tung University
Shang-Hong Lai, National Tsing Hua University
Hsiao-Rong Tyan, Chung Yuan Christian University