
Publication


Featured research published by Chin-Hung Teng.


Computer Vision and Image Understanding | 2005

Accurate optical flow computation under non-uniform brightness variations

Chin-Hung Teng; Shang-Hong Lai; Yung-Sheng Chen; Wen-Hsing Hsu

In this paper, we present a very accurate algorithm for computing optical flow under non-uniform brightness variations. The proposed algorithm is based on a generalized dynamic image model (GDIM) in conjunction with a regularization framework to cope with the problem of non-uniform brightness variations. To alleviate flow constraint errors due to image aliasing and noise, we employ a reweighted least-squares method to suppress unreliable flow constraints, thus leading to robust estimation of optical flow. In addition, a dynamic smoothness adjustment scheme is proposed to efficiently suppress the smoothness constraint in the vicinity of motion and brightness variation discontinuities, thereby preserving motion boundaries. We also employ a constraint refinement scheme, which aims at reducing the approximation errors in the first-order differential flow equation, to refine the optical flow estimation, especially for large image motions. To efficiently minimize the resulting energy function for optical flow computation, we utilize an incomplete Cholesky preconditioned conjugate gradient algorithm to solve the large linear system. Experimental results on synthetic and real image sequences show that the proposed algorithm compares favorably with most existing techniques reported in the literature in terms of accuracy of optical flow computation with 100% density.
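
As a point of reference, the generalized dynamic image model referred to here is usually stated as a relaxed brightness-constancy constraint with spatially varying multiplier and offset fields; the notation below is an illustrative sketch rather than the paper's exact formulation.

I(x+u, y+v, t+1) = m(x,y)\, I(x,y,t) + c(x,y)

Linearizing this constraint and adding smoothness terms gives an energy of the general form

E(u,v,m,c) = \iint \big( I_x u + I_y v + I_t - (m-1) I - c \big)^2 \, dx\, dy
  + \lambda \iint \big( \|\nabla u\|^2 + \|\nabla v\|^2 \big) \, dx\, dy
  + \mu \iint \big( \|\nabla m\|^2 + \|\nabla c\|^2 \big) \, dx\, dy,

whose discretization yields the large sparse linear system that the abstract's incomplete Cholesky preconditioned conjugate gradient solver addresses.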


Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing | 2007

Constructing a 3D trunk model from two images

Chin-Hung Teng; Yung-Sheng Chen; Wen-Hsing Hsu

Trees are a key component of the natural environment, so modeling realistic trees has received much attention from researchers in computer graphics. However, most trees in computer graphics are generated according to procedural rules combined with random perturbations, so they generally differ from real trees in the natural environment. In this paper, we propose a systematic approach to create a 3D trunk graphical model from two images so that the created trunk has a 3D structure similar to that of the real one. In the proposed system, the trunk is first segmented from the image via an interactive segmentation tool and its skeleton is then extracted. Points on the skeleton are selected and their context relations are established to represent the 2D trunk structure. A camera self-calibration algorithm appropriate for the two-view case is developed, and a minimum curvature constraint is employed to recover the 3D trunk skeleton from the established 2D trunk structure and the calibrated camera. The trunk is then modeled by a set of generalized cylinders around the recovered 3D trunk skeleton. A polygonal mesh representing the trunk is finally generated, and a textured 3D trunk model is produced by mapping the image onto the surface of the mesh. We have conducted several experiments, and the results demonstrate that the proposed system yields a visually plausible 3D trunk model similar to the real trunk in the image.
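
To illustrate the final modeling step, the sketch below builds a generalized-cylinder mesh around a polyline skeleton: a ring of vertices is placed in the plane perpendicular to the local skeleton direction at each point, and consecutive rings are stitched with quads. The skeleton points, radii, and ring resolution are hypothetical inputs, not values from the paper.

import numpy as np

def generalized_cylinder(skeleton, radii, sides=12):
    """Build ring vertices and quad faces around a 3D polyline skeleton.

    skeleton: (N, 3) array of 3D skeleton points (hypothetical input).
    radii:    (N,) trunk radius at each skeleton point.
    sides:    number of vertices per cross-section ring.
    """
    skeleton = np.asarray(skeleton, dtype=float)
    radii = np.asarray(radii, dtype=float)
    vertices, faces = [], []
    for i, (p, r) in enumerate(zip(skeleton, radii)):
        # Local axis: direction toward the next (or from the previous) point.
        d = skeleton[min(i + 1, len(skeleton) - 1)] - skeleton[max(i - 1, 0)]
        d /= np.linalg.norm(d)
        # Two unit vectors spanning the plane perpendicular to the axis.
        a = np.cross(d, [0.0, 0.0, 1.0])
        if np.linalg.norm(a) < 1e-8:          # axis nearly parallel to z
            a = np.cross(d, [0.0, 1.0, 0.0])
        a /= np.linalg.norm(a)
        b = np.cross(d, a)
        for k in range(sides):
            t = 2.0 * np.pi * k / sides
            vertices.append(p + r * (np.cos(t) * a + np.sin(t) * b))
    for i in range(len(skeleton) - 1):        # connect consecutive rings with quads
        for k in range(sides):
            v0 = i * sides + k
            v1 = i * sides + (k + 1) % sides
            faces.append((v0, v1, v1 + sides, v0 + sides))
    return np.array(vertices), faces

# Example: a straight, slightly tapering trunk segment.
verts, quads = generalized_cylinder([[0, 0, 0], [0, 0, 1], [0, 0, 2]], [0.3, 0.25, 0.2])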


International Conference on Image Analysis and Recognition | 2009

Leaf Segmentation, Its 3D Position Estimation and Leaf Classification from a Few Images with Very Close Viewpoints

Chin-Hung Teng; Yi-Ting Kuo; Yung-Sheng Chen

In this paper, we present a complete system to extract leaves, recover their 3D positions, and classify them based on leaf shape. We use only a few images with slightly different viewpoints to achieve this task. The images are captured by an ordinary hand-held digital camera, and no camera pre-calibration is required. Because only a few images with close viewpoints are sufficient to segment the leaves and recover their 3D positions, our system is flexible and easy to use in image acquisition. For leaf classification, we use the normalized centroid-contour distance as our classification feature and employ a circular-shift comparing scheme to measure similarity, so our system is invariant to leaf translation, rotation, and scaling. We have conducted several experiments and the results are encouraging: the leaves are nearly perfectly extracted, and the classification results are also acceptable.
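
A minimal sketch of the classification feature described above: the contour is resampled, distances from the centroid are normalized, and two leaves are compared under every circular shift so that the score does not depend on where the contour starts. Function names and the number of samples are illustrative, not taken from the paper.

import numpy as np

def centroid_contour_distance(contour, samples=64):
    """Normalized centroid-contour distance feature for a closed 2D contour.

    contour: (N, 2) array of boundary points of a segmented leaf.
    """
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    dist = np.linalg.norm(contour - centroid, axis=1)
    # Resample to a fixed length and normalize scale so the feature is
    # invariant to translation (centroid) and scaling (division by max).
    idx = np.linspace(0, len(dist) - 1, samples).astype(int)
    feat = dist[idx]
    return feat / feat.max()

def circular_shift_similarity(f1, f2):
    """Best match over all circular shifts (rotation invariance)."""
    errors = [np.linalg.norm(f1 - np.roll(f2, s)) for s in range(len(f2))]
    return -min(errors)   # higher is more similar

# Toy example: a circle matches a rotated copy of itself.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
f = centroid_contour_distance(circle)
print(circular_shift_similarity(f, np.roll(f, 10)))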


Optical Engineering | 2011

Leaf segmentation, classification, and three-dimensional recovery from a few images with close viewpoints

Chin-Hung Teng; Yi-Ting Kuo; Yung-Sheng Chen

In this paper, we incorporate a set of sophisticated algorithms to implement a leaf segmentation and classification system. This system inherits the advantages of these algorithms while eliminating the difficulties each algorithm faces. Our system can segment leaves from images of live plants under arbitrary imaging conditions and classify them against sketched leaf shapes or real leaves. The system can also estimate the three-dimensional (3-D) information of leaves, which is not only useful for leaf segmentation but also beneficial for further 3-D shape recovery. Although our system requires more than one image to reconstruct the 3-D structure of the scene, it has been designed so that only a few images with close viewpoints are sufficient to achieve the task; thus the system remains flexible and easy to use in image acquisition. For leaf classification, we adopt the normalized centroid-contour distance as our classification feature and employ a circular-shift comparing scheme to measure leaf similarity, so the system is invariant to leaf translation, rotation, and scaling. We have conducted a series of experiments on many leaf images and the results are encouraging: the leaves are well segmented, and the classification results are also acceptable.
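
The 3-D estimation step ultimately rests on triangulating matched points between views. The linear (DLT) triangulation sketch below is a generic illustration of that step, with hypothetical camera matrices rather than the calibration actually produced by the system.

import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices (assumed known here).
    x1, x2: corresponding image points (u, v) in the two views.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]      # homogeneous -> Euclidean

# Toy check: a known 3D point reprojected into two hypothetical cameras.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # baseline along x
X_true = np.array([0.2, -0.1, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_dlt(P1, P2, x1, x2))   # ~ [0.2, -0.1, 4.0]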


The Visual Computer | 2009

Image-based tree modeling from a few images with very narrow viewing range

Chin-Hung Teng; Yung-Sheng Chen

Creating 3D tree models from actual trees is a task receiving increasing attention. Several approaches reconstruct a tree from a number of photographs taken around it, typically spanning a wide viewing range. However, due to environmental restrictions, it is sometimes quite difficult to capture so many acceptable images from so many different viewpoints. In this paper, we propose a tree modeling system capable of reconstructing the 3D model of a tree from a few images with a very narrow viewing range. Because only a few images are required to generate the model, our system has the distinct advantage of imposing fewer environmental restrictions, resulting in extended usability and flexibility in real applications.


Applied Optics | 2006

Camera self-calibration method suitable for variant camera constraints

Chin-Hung Teng; Yung-Sheng Chen; Wen-Hsing Hsu

This paper presents a self-calibration algorithm that seeks the camera intrinsic parameters to minimize the sum of squared distances between the measured and reprojected image points. By exploiting the constraints provided by the fundamental matrices, the function to be minimized can be directly reduced to a function of the camera intrinsic parameters; thus variant camera constraints such as fixed or varying focal lengths can be easily imposed by controlling the parameters of the resulting function. We employed the simplex method to minimize the resulting function and tested the proposed algorithm on some simulated and real data. The experimental results demonstrate that our algorithm performs well for variant camera constraints and for two-view and multiple-view cases.
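
In outline, the objective minimized is the total squared reprojection error expressed as a function of the intrinsic parameters alone; the notation below is a schematic restatement, not the paper's exact derivation.

\hat{K} = \arg\min_{K} \sum_{j} \sum_{i} \left\| \mathbf{x}_{ij} - \hat{\mathbf{x}}_{ij}\!\left(K, \{F_{jk}\}\right) \right\|^2

Here \mathbf{x}_{ij} is the measured image point i in view j, and \hat{\mathbf{x}}_{ij} is its reprojection under the camera geometry recovered from the fundamental matrices F_{jk} and a candidate intrinsic matrix K. Constraints such as fixed or varying focal lengths are imposed simply by fixing or freeing the corresponding entries of K before running the simplex search mentioned in the abstract.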


Computers & Graphics | 2008

Technical Section: Image-based three-dimensional model reconstruction for the Chinese treasure, Jadeite Cabbage with Insects

Chia-Ming Cheng; Shu-Fan Wang; Chin-Hung Teng; Shang-Hong Lai

This paper presents a novel 3D reconstruction system for the famous Chinese treasure, Jadeite Cabbage with Insects, built from uncalibrated image sequences. There are two major challenges in this 3D model reconstruction problem. The first is the difficult image registration problem caused by the semi-diaphaneity and highly specular nature of jadeite. The second is that the camera information, including the intrinsic (calibration) and extrinsic (position and orientation) parameters, is unknown and must be recovered from the uncalibrated image sequences, which makes the 3D reconstruction problem very challenging. The proposed 3D modeling process first recovers the camera information as well as a sparse 3D structure by using a robust structure-from-motion algorithm. An approximate 3D object model is then recovered from the silhouettes at the corresponding views by using the visual hull technique. The final step refines the 3D model by integrating the 3D information extracted from dense correspondences between image patches of different views. In the proposed 3D reconstruction system, we successfully combine structure-from-motion and visual hull techniques to accomplish the challenging task of reconstructing an accurate 3D model of the jadeite object from uncalibrated multi-view images. Finally, we assess the 3D reconstruction results obtained with the proposed system for the Chinese jadeite treasure and for simulated data.
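
To make the visual-hull step concrete, here is a generic voxel-carving sketch: voxel centres are projected into each calibrated view and kept only if they fall inside every silhouette. The camera matrices and silhouette masks are hypothetical inputs; this is not the paper's implementation.

import numpy as np

def visual_hull(voxels, cameras, silhouettes):
    """Keep voxels whose projections lie inside every silhouette.

    voxels:      (N, 3) array of candidate voxel centres.
    cameras:     list of 3x4 projection matrices, one per view (assumed known).
    silhouettes: list of boolean (H, W) masks, True on the object.
    """
    voxels_h = np.hstack([voxels, np.ones((len(voxels), 1))])   # homogeneous coords
    keep = np.ones(len(voxels), dtype=bool)
    for P, mask in zip(cameras, silhouettes):
        proj = voxels_h @ P.T                       # (N, 3) projected points
        u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
        v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]    # silhouette test per voxel
        keep &= hit
    return voxels[keep]

In the paper's pipeline, the approximate model obtained this way is subsequently refined with the dense patch correspondences described above.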


International Journal of Pattern Recognition and Artificial Intelligence | 2013

An optical music recognition system for skew or inverted musical scores

Yung-Sheng Chen; Feng-Sheng Chen; Chin-Hung Teng

Optical Music Recognition (OMR) is a technique for converting printed musical documents into computer-readable formats. In this paper, we present a simple OMR system that performs well for ordinary musical documents such as ballads and pop music. The system is built on fundamental image processing and pattern recognition techniques, so it is easy to implement. Moreover, it has a strong capability in skew restoration and inverted musical score detection. In a series of experiments, the error of our skew restoration is below 0.2° for any possible document rotation, and the accuracy of inverted musical score detection is up to 98.89%. The overall recognition accuracy of our OMR system reaches nearly 97%, a figure comparable with current commercial OMR software. However, when image skew is taken into consideration, our system is superior to commercial software in terms of recognition accuracy.
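
Skew of a score page is commonly estimated by rotating the binarized image over a range of candidate angles and picking the one that maximizes the variance of the horizontal projection profile, since the staff lines then align with image rows. The sketch below illustrates that generic idea; it is not the detection method or the thresholds used in the paper.

import numpy as np
from scipy.ndimage import rotate

def estimate_skew(binary_image, max_angle=10.0, step=0.1):
    """Estimate page skew (degrees) of a binarized score image (True = ink).

    Tries candidate angles and keeps the one whose horizontal projection
    profile has the largest variance, i.e. where staff lines are horizontal.
    """
    best_angle, best_score = 0.0, -np.inf
    for angle in np.arange(-max_angle, max_angle + step, step):
        rotated = rotate(binary_image.astype(float), angle, reshape=False, order=1)
        profile = rotated.sum(axis=1)          # ink per row
        score = profile.var()
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle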


International Conference on Pattern Recognition | 2002

Robust computation of optical flow under non-uniform illumination variations

Chin-Hung Teng; Shang-Hong Lai; Yung-Sheng Chen; Wen-Hsing Hsu

In this paper, an energy minimization method is proposed to estimate the optical flow of an image sequence in the presence of non-uniform illumination variations. The energy function combines a data constraint energy that accounts for the illumination variations with a smoothness constraint that minimizes the pixel-to-pixel variation of the velocity and illumination fields. Minimizing this energy function is equivalent to solving a linear system, which is accomplished using an incomplete Cholesky preconditioned conjugate gradient algorithm. A dynamic weighting scheme, which considers the statistical properties of the estimated optical flow, is also combined with this algorithm to improve its robustness. The algorithm has been successfully applied to synthetic and real image sequences, and the experimental results demonstrate that it can accurately estimate optical flow under non-uniform illumination variations.
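
The linear system arising from the energy minimization is large and sparse, so it is solved iteratively with a preconditioned conjugate gradient routine. The sketch below shows such a solve in SciPy; since SciPy has no built-in incomplete Cholesky factorization, an incomplete LU factorization stands in for the paper's incomplete Cholesky preconditioner, and the matrix here is a small synthetic stand-in for the real system that couples the velocity and illumination fields over all pixels.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, spilu, LinearOperator

# Synthetic symmetric positive-definite system standing in for the
# discretized optical-flow energy.
n = 1000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU as a stand-in preconditioner (the paper uses incomplete Cholesky).
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = cg(A, b, M=M)
print("converged" if info == 0 else "not converged")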


The Visual Computer | 2018

Reconstructing three-dimensional models of objects using a Kinect sensor

Chin-Hung Teng; Kai-Yuan Chuo; Chen-Yuan Hsieh

Advanced sensor technology has allowed us to acquire three-dimensional (3D) information from a scene using a low-cost RGB-D sensor such as the Kinect. Although this sensor can recover the 3D structure of a scene, it cannot distinguish a target object from the background. In view of this, we incorporate an interactive 3D segmentation algorithm into KinectFusion, a well-known Kinect scene reconstruction system, to effectively extract an object from the scene and hence obtain a 3D point cloud of the object. With this system, a user can freely move the Kinect sensor to reconstruct the scene and then select foreground/background seeds from the reconstructed point cloud. The system then takes over the remaining tasks to complete the 3D reconstruction of the selected object. The advantage of this system is that users need not select the foreground/background seeds very carefully, which greatly reduces the operational complexity. Moreover, previous segmentation results are carried over to the next phase as new foreground/background seeds, which minimizes the required user intervention. With a simple seed selection, the point cloud of the selected object is gradually recovered as the user moves the sensor to different viewpoints. Several experiments were conducted, and the results confirmed the effectiveness of the proposed system: the 3D structures of objects with complex shapes are well reconstructed.
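
One way to picture how earlier segmentation results can seed the next phase: labels assigned to the previous point cloud are carried to the newly reconstructed points through a nearest-neighbour lookup, and only those inherited seeds (plus any new user strokes) drive the next segmentation. This is a generic, hypothetical illustration, not the system's actual algorithm.

import numpy as np
from scipy.spatial import cKDTree

def inherit_seeds(prev_points, prev_labels, new_points, max_dist=0.01):
    """Carry foreground/background labels from a previous point cloud
    to a newly reconstructed one via nearest-neighbour lookup.

    prev_labels: array of 0 (background) or 1 (foreground) for prev_points.
    Returns labels for new_points, with -1 where no nearby labeled point exists.
    """
    tree = cKDTree(prev_points)
    dist, idx = tree.query(new_points, k=1)
    labels = np.full(len(new_points), -1, dtype=int)
    near = dist <= max_dist
    labels[near] = prev_labels[idx[near]]
    return labels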

Collaboration


Dive into Chin-Hung Teng's collaborations.

Top Co-Authors

Wen-Hsing Hsu (National Tsing Hua University)

Shang-Hong Lai (National Tsing Hua University)

Chia-Ming Cheng (National Tsing Hua University)

Shu-Fan Wang (National Tsing Hua University)

Po-Hao Huang (National Tsing Hua University)