Publications


Featured research published by Tung-Ying Lee.


IEEE Transactions on Biomedical Engineering | 2013

Automatic Distortion Correction of Endoscopic Images Captured With Wide-Angle Zoom Lens

Tung-Ying Lee; Tzu-Shan Chang; Chen-Hao Wei; Shang-Hong Lai; Kai-Che Liu; Hurng-Sheng Wu

Operating in minimally invasive surgery is more difficult because surgeons work without haptic feedback or depth perception. Moreover, the field of view perceived by the surgeons through endoscopy is usually quite limited. The goal of this paper is to allow surgeons to see wide-angle images from endoscopy without the drawback of lens distortion. The proposed distortion correction process consists of lens calibration and real-time image warping. The calibration step estimates the parameters of the lens distortion model. We propose a fully automatic Hough-entropy-based calibration algorithm, which provides calibration results comparable to the previous manual calibration method. To achieve real-time correction, we use a graphics processing unit to warp the image in parallel. In addition, surgeons may adjust the focal length of the lens during the operation. Real-time distortion correction of a zoomable lens is impossible with traditional calibration methods because the tedious calibration process has to be repeated whenever the focal length changes. We derive a formula that describes the relationship between the distortion parameter, the focal length, and the image boundary. Hence, we can estimate the focal length of a zoomable lens from endoscopic images online and achieve real-time lens distortion correction.
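
As a rough illustration of the warping step, the sketch below undistorts a grayscale frame under a single-parameter division model for radial distortion. The model choice, the parameter names, and the NumPy/SciPy implementation are assumptions made for illustration; they are not the paper's GPU code.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def undistort_division_model(image, k, cx, cy):
        # image: 2-D grayscale array; k: non-zero division-model coefficient
        # (assumed model: r_undistorted = r_distorted / (1 + k * r_distorted**2));
        # (cx, cy): distortion centre in pixels.
        h, w = image.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
        dx, dy = xs - cx, ys - cy
        r_u = np.hypot(dx, dy)                                  # radius in the corrected image
        # Invert the model per radius: solve k*r_u*r_d**2 - r_d + r_u = 0 for r_d.
        disc = np.sqrt(np.maximum(1.0 - 4.0 * k * r_u**2, 0.0))
        r_safe = np.maximum(r_u, 1e-8)
        r_d = np.where(r_u > 1e-8, (1.0 - disc) / (2.0 * k * r_safe), 0.0)
        scale = np.where(r_u > 1e-8, r_d / r_safe, 1.0)
        src_x, src_y = cx + dx * scale, cy + dy * scale         # where to sample the distorted frame
        return map_coordinates(image, [src_y, src_x], order=1, mode='nearest')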


Conference on Multimedia Modeling | 2011

People localization in a camera network combining background subtraction and scene-aware human detection

Tung-Ying Lee; Tsung-Yu Lin; Szu-Hao Huang; Shang-Hong Lai; Shang-Chih Hung

In a network of cameras, people localization is an important issue. Traditional methods use camera calibration and combine background subtraction results from different views to locate people in three-dimensional space. Previous methods usually solve the localization problem iteratively from the background subtraction results, and high-level image information is neglected. In order to fully exploit the image information, we suggest incorporating human detection into multi-camera video surveillance. We develop a novel method that combines human detection and background subtraction for multi-camera human localization by using convex optimization. This convex optimization problem is independent of the image size; in fact, the problem size depends only on the number of locations of interest on the ground plane. Experimental results show that this combination performs better than background-subtraction-based methods and demonstrate the advantage of combining these two types of complementary information.
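
A toy sketch of the kind of convex, ground-plane-sized problem the abstract describes is given below: a box-constrained least-squares objective that trades agreement with the background-subtraction mask against per-cell detection scores, solved by projected gradient descent. The matrix A, the weighting lam, and the solver are illustrative assumptions, not the paper's formulation.

    import numpy as np

    def localize(A, b, d, lam=0.5, iters=500):
        # Minimise 0.5 * ||A @ x - b||**2 - lam * d @ x  subject to 0 <= x <= 1,
        # where x[i] is the occupancy of ground-plane cell i, A maps occupancies to
        # expected foreground pixels, b is the flattened foreground mask, and d holds
        # human-detection scores projected onto the same cells.
        x = np.zeros(A.shape[1])
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-9)   # 1 / Lipschitz constant of the smooth term
        for _ in range(iters):
            grad = A.T @ (A @ x - b) - lam * d
            x = np.clip(x - step * grad, 0.0, 1.0)        # project back onto the box [0, 1]
        return x

Note that x has one entry per candidate ground-plane location, so the optimisation cost grows with the number of locations rather than with the image resolution, matching the claim in the abstract.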


Computer Graphics Forum | 2011

Bipartite Polar Classification for Surface Reconstruction

Yi-Ling Chen; Tung-Ying Lee; Bing-Yu Chen; Shang-Hong Lai

In this paper, we propose bipartite polar classification to augment an input unorganized point set ℘ with two disjoint groups of points distributed around the ambient space of ℘ to assist the task of surface reconstruction. The goal of bipartite polar classification is to obtain a space partitioning of ℘ by assigning pairs of Voronoi poles into two mutually invisible sets lying on opposite sides of ℘ through direct point set visibility examination. Based on the observation that a pair of Voronoi poles are mutually invisible, spatial classification is accomplished by carving away visible exterior poles, with their counterparts simultaneously determined as interior ones. By examining conflicts of mutual invisibility, holes or boundaries can also be effectively detected, resulting in a hole-aware space carving technique. With the classified poles, the task of surface reconstruction is facilitated by more robust surface normal estimation with globally consistent orientation and by off-surface point specification for variational implicit surface reconstruction. We demonstrate the ability of bipartite polar classification to achieve robust and efficient space carving on unorganized point clouds with holes and complex topology, and show its application to surface reconstruction.
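
The Voronoi poles that the classification operates on can be computed with a standard construction: the farthest Voronoi vertex of each sample's cell, and the farthest cell vertex on the opposite side of the sample. The SciPy-based sketch below shows only that construction; it is not the authors' code and omits the visibility-based carving itself.

    import numpy as np
    from scipy.spatial import Voronoi

    def voronoi_poles(points):
        # points: (n, 3) array of samples. Returns a (positive, negative) pole pair
        # per sample; samples whose Voronoi cell is unbounded are skipped for brevity.
        vor = Voronoi(points)
        poles = []
        for i, p in enumerate(points):
            region = vor.regions[vor.point_region[i]]
            if not region or -1 in region:              # unbounded cell: no finite poles
                poles.append((None, None))
                continue
            verts = vor.vertices[region]
            dists = np.linalg.norm(verts - p, axis=1)
            pos = verts[np.argmax(dists)]               # positive pole
            u = (pos - p) / (np.linalg.norm(pos - p) + 1e-12)
            behind = (verts - p) @ u < 0                # vertices on the opposite side of p
            neg = verts[behind][np.argmax(dists[behind])] if behind.any() else None
            poles.append((pos, neg))
        return poles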


Visual Communications and Image Processing | 2011

Wide-angle distortion correction by Hough transform and gradient estimation

Tung-Ying Lee; Tzu-Shan Chang; Shang-Hong Lai; Kai-Che Liu; Hurng-Sheng Wu

Wide-angle cameras have been widely used in surveillance and endoscopic imaging, and an automatic distortion correction method is very useful for these applications. Traditional methods extract corners or curved straight lines to estimate the distortion parameters, and the Hough transform is a powerful tool for assessing straightness. However, previous methods usually require some human intervention or focus on a single curve. In this paper, we propose a new Hough-transform-based method that incorporates all curves into the estimation of the distortion parameters. By exploiting the relationship between the distortion parameters and the curves, our method is fully automatic and does not require manual selection of curves in an image. Experiments on synthetic and real datasets have been conducted, and the results of our method are compared quantitatively with other Hough-transform-based methods. The experimental results show that the accuracy of the proposed automatic method is comparable to that of other manual line-based methods.
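
One way to read the "all curves at once" idea is as a one-dimensional search over a radial-distortion coefficient that scores how line-like the undistorted edge map looks in Hough space. The sketch below uses accumulator entropy as that score; the division model, the binning, and the scoring details are assumptions for illustration rather than the paper's algorithm.

    import numpy as np

    def hough_entropy(points, n_theta=180, n_rho=200):
        # Entropy of a line-Hough accumulator built from 2-D edge points
        # (coordinates centred on the distortion centre). Straighter structures
        # give sharper peaks and therefore lower entropy.
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        rho = points[:, :1] * np.cos(thetas) + points[:, 1:2] * np.sin(thetas)
        rho_max = np.abs(rho).max() + 1e-9
        bins = ((rho / rho_max + 1.0) * 0.5 * (n_rho - 1)).astype(int)
        acc = np.zeros((n_theta, n_rho))
        for t in range(n_theta):
            np.add.at(acc[t], bins[:, t], 1)
        p = acc.ravel() / acc.sum()
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    def estimate_k(edge_points, candidate_ks):
        # Pick the division-model coefficient whose undistorted edge points
        # look most like straight lines (lowest Hough entropy).
        r2 = (edge_points ** 2).sum(axis=1, keepdims=True)
        scores = [hough_entropy(edge_points / (1.0 + k * r2)) for k in candidate_ks]
        return candidate_ks[int(np.argmin(scores))]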


Asia Pacific Conference on Circuits and Systems | 2012

Real-time correction of wide-angle lens distortion for images with GPU computing

Tung-Ying Lee; Chen-Hao Wei; Shang-Hong Lai; Ruen-Rone Lee

A wide-angle lens provides a broad field of view, which benefits applications such as video surveillance and endoscopic imaging. However, it also induces lens distortion, especially radial distortion, which may impede further video analysis or perceptual interpretation. For some applications, such as minimally invasive surgery and visual surveillance, real-time correction of image distortion is required. Traditional CPU-centric machines have difficulty meeting this real-time requirement because of the large amount of computation involved. In this paper, we achieve real-time correction of wide-angle lens distortion on several target platforms. On the GPGPU platform, we achieve real-time correction at full-HD resolution by using CUDA. For mid-range devices, an error-controllable mesh is used and the system is implemented with the industry-standard OpenGL. We also implement it with OpenGL ES on embedded GPUs for mobile devices. Experiments show that our error-controllable mesh greatly outperforms regularly downsampled meshes.
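
The GPU pipelines themselves are not reproduced here, but the idea behind an error-controllable mesh can be sketched on the CPU: subdivide warp-mesh quads only where bilinear interpolation of the warp from the quad corners is not accurate enough, instead of using a regular grid. The criterion below (midpoint error against a tolerance) is a plausible stand-in, not necessarily the paper's exact rule.

    import numpy as np

    def subdivide(x0, y0, x1, y1, warp, tol, quads):
        # warp(x, y) -> (src_x, src_y) is the exact undistortion mapping.
        # Accept a quad if interpolating the warp from its four corners is within
        # tol pixels at the quad centre; otherwise split it into four children.
        corners = np.array([warp(x0, y0), warp(x1, y0), warp(x0, y1), warp(x1, y1)])
        centre_true = np.asarray(warp((x0 + x1) / 2.0, (y0 + y1) / 2.0))
        centre_interp = corners.mean(axis=0)          # bilinear value at the quad centre
        if np.linalg.norm(centre_true - centre_interp) <= tol or (x1 - x0) <= 1.0:
            quads.append((x0, y0, x1, y1))
            return
        xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        for ax, ay, bx, by in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                               (x0, ym, xm, y1), (xm, ym, x1, y1)]:
            subdivide(ax, ay, bx, by, warp, tol, quads)

    def build_error_controlled_mesh(w, h, warp, tol=0.5):
        quads = []
        subdivide(0.0, 0.0, float(w - 1), float(h - 1), warp, tol, quads)
        return quads    # corner warp values of each quad become GPU vertex attributes

A regular grid spends the same vertex budget everywhere, whereas this subdivision concentrates vertices where the warp bends most, which is one reason an error-controlled mesh can beat a regularly downsampled one of similar size.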


Computer Vision and Pattern Recognition | 2008

3D non-rigid registration for MPU implicit surfaces

Tung-Ying Lee; Shang-Hong Lai

Implicit surface representation is well suited for surface reconstruction from a large number of noisy 3D data points with non-uniform sampling density. Previous 3D non-rigid model registration methods can only be applied to mesh or volume representations, not directly to implicit surfaces. To the best of our knowledge, previous 3D registration methods for implicit surfaces can only handle rigid transformations and must keep the data points on the surface. In this paper, we propose a new 3D non-rigid registration algorithm that registers two multi-level partition of unity (MPU) implicit surfaces with a variational formulation. The 3D non-rigid transformation between two implicit surfaces is a continuous deformation function, which is determined via an energy minimization procedure. Under the octree structure of the MPU surface, each leaf cell is transformed by an individual affine transformation associated with an energy related to the distance between two general quadrics. The proposed algorithm registers two 3D implicit surfaces directly, without sampling the two signed distance functions or polygonalizing the implicit surfaces, which makes it efficient in both computation and memory. Experimental results on 3D human organ and sculpture models demonstrate the effectiveness of the proposed algorithm.
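
The per-cell energies operate on local quadric shape functions that get carried along by each cell's affine transform. The short sketch below shows only that coefficient bookkeeping (how a quadric's coefficients change under y = M x + t); the octree handling and the quadric-distance energy itself are not reproduced, and the notation is an assumption for illustration.

    import numpy as np

    def transform_quadric(A, b, c, M, t):
        # f(x) = x @ A @ x + b @ x + c with A symmetric; return the coefficients of
        # g(y) = f(Minv @ (y - t)), i.e. the local shape function expressed in the
        # coordinates produced by the affine cell transform y = M @ x + t.
        Minv = np.linalg.inv(M)
        A2 = Minv.T @ A @ Minv                       # quadratic part
        b2 = -2.0 * A2 @ t + Minv.T @ b              # linear part (uses symmetry of A2)
        c2 = t @ A2 @ t - (Minv.T @ b) @ t + c       # constant part
        return A2, b2, c2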


Digital Identity Management | 2007

Generalized MPU Implicits Using Belief Propagation

Yi-Ling Chen; Shang-Hong Lai; Tung-Ying Lee

In this paper, we present a new algorithm to reconstruct 3D surfaces from an unorganized point cloud by generalizing the MPU implicits algorithm with a powerful orientation inference scheme based on belief propagation. Instead of using orientation information such as surface normals, local data distribution analysis is performed to identify the local surface property and guide the selection of local fitting models. We formulate the determination of a globally consistent orientation as a graph optimization problem. Local belief networks are constructed by treating the local shape functions as their nodes. The consistency of adjacent nodes linked by an edge is checked by evaluating the functions, and an energy is thus defined. By minimizing the total energy over the graph, we obtain an optimal assignment of labels indicating the orientation of each local shape function. The local inference result is propagated over the model in a front-propagation fashion to obtain the global solution. We demonstrate the performance of the proposed algorithm with experimental results on several real-world 3D data sets.
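
The sketch below illustrates the flavour of the orientation problem: a binary keep/flip decision propagated over a k-nearest-neighbour graph so that neighbouring orientations agree. It is a simpler greedy stand-in for the belief-propagation inference in the paper, and it works on point normals rather than on local shape functions, purely to keep the example self-contained.

    import numpy as np
    from scipy.spatial import cKDTree

    def orient_normals(points, normals, k=8):
        # Flip normals so that neighbours in a kNN graph agree in sign,
        # spreading the decision outward from a seed point.
        tree = cKDTree(points)
        _, nbrs = tree.query(points, k=k + 1)        # column 0 is the point itself
        out = normals.copy()
        oriented = np.zeros(len(points), dtype=bool)
        seed = int(np.argmax(points[:, 2]))          # start from the topmost sample
        if out[seed, 2] < 0:
            out[seed] = -out[seed]                   # make the seed normal point "up"
        stack = [seed]
        oriented[seed] = True
        while stack:
            i = stack.pop()
            for j in nbrs[i, 1:]:
                if oriented[j]:
                    continue
                if np.dot(out[i], out[j]) < 0:       # disagrees with an oriented neighbour
                    out[j] = -out[j]
                oriented[j] = True
                stack.append(j)
        return out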


International Conference on Image Processing | 2013

Rolling shutter correction for video with large depth of field

Yen-Hao Chiao; Tung-Ying Lee; Shang-Hong Lai

Rolling shutter correction has attracted considerable attention in recent years, and several algorithms have been proposed to correct the distortion. However, previous methods for rolling shutter correction did not consider depth variations in the scene. In this work, we overcome the limitation of previous works, which assume that the depth of field in the scene is small. We present a correction model for rectifying rolling shutter video based on depth maps estimated from the rolling shutter video itself. In addition, we propose a two-stage optimization algorithm to estimate the temporal camera motion and the associated depth maps. Experimental results show the improvement of the proposed rolling shutter correction algorithm, which takes the depth information into account.
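
Once per-row camera poses and a depth map are available, rectifying a pixel is a depth-dependent remapping. The function below shows only that remapping step, assuming known intrinsics K and a known pose (R_row, t_row) of the row's exposure time relative to the reference (first-row) pose; the two-stage motion/depth estimation is not reproduced.

    import numpy as np

    def rectify_pixel(u, v, depth, K, R_row, t_row):
        # Back-project the rolling-shutter pixel with its depth, move the 3-D point
        # into the reference-pose camera frame, and re-project it. The translational
        # part of the row motion can only be compensated when the depth is known.
        x_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))   # ray scaled by depth
        x_ref = R_row @ x_cam + t_row                                # relative pose of this row
        uvw = K @ x_ref
        return uvw[:2] / uvw[2]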


Visual Communications and Image Processing | 2011

Robust 3D object pose estimation from a single 2D image

Chia-Ming Cheng; Hsiao-Wei Chen; Tung-Ying Lee; Shang-Hong Lai; Ya-Hui Tsai

In this paper, we propose a robust algorithm for 3D object pose estimation from a single 2D image. The proposed algorithm modifies the traditional image projection error function into a sum of squared image projection errors weighted by their associated distances. By using an Euler angle representation, we formulate the pose estimation problem as a search for the global minimum of this energy. Within this framework, the proposed algorithm employs robust techniques to detect outliers in a coarse-to-fine fashion, thus providing very robust pose estimation. Our experiments show that the algorithm outperforms previous methods under noisy conditions.
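
A hedged sketch of the optimisation described above: Euler angles and translation are refined by a nonlinear least-squares solver, with each image-projection residual scaled by the point's camera distance and a robust loss to suppress outliers. The exact weighting, the robust loss, and the coarse-to-fine outlier handling of the paper are not reproduced; this is only an illustrative approximation.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def estimate_pose(pts3d, pts2d, K, x0=(0.0, 0.0, 0.0, 0.0, 0.0, 5.0)):
        # x = (three Euler angles, translation); the initial guess assumes the
        # object sits in front of the camera.
        def residuals(x):
            R = Rotation.from_euler('xyz', x[:3]).as_matrix()
            cam = pts3d @ R.T + x[3:]                       # model points in the camera frame
            proj = cam @ K.T
            proj = proj[:, :2] / proj[:, 2:3]               # perspective projection
            return (cam[:, 2:3] * (proj - pts2d)).ravel()   # distance-weighted image error
        return least_squares(residuals, np.asarray(x0, dtype=float), loss='huber')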


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

Correcting radial and perspective distortion by using face shape information

Tung-Ying Lee; Tzu-Shan Chang; Shang-Hong Lai

In this paper, we propose a new technique for compensating radial and perspective distortion in photos acquired with a wide-angle lens by using facial features detected in the images, without requiring predefined calibration patterns. The proposed algorithm utilizes a statistical facial feature model to recover radial distortion, and the facial features are further used for an adaptive cylindrical projection that reduces perspective distortion near the image boundary. Our algorithm has several advantages over traditional methods. First, traditional calibration patterns, such as man-made straight buildings, chessboards, or calibration cubes, are not required. Moreover, although radial distortion can be corrected by several conventional methods, most of them produce photos with larger perspective distortion for faces than our method does. The system is composed of four components: offline training of the statistical facial feature model, feature point extraction from distorted faces, estimation of radial distortion parameters and compensation of radial distortion, and adaptive cylindrical projection. To estimate the distortion parameters, we propose an energy function that measures the fit between the undistorted coordinates of the facial feature points extracted from the input distorted image and the learned statistical facial feature model. Given the distortion parameters, the fit is computed by solving a linear least-squares system. The distortion parameters that minimize this cost function are searched in a hierarchical manner. Experimental results demonstrate the distortion reduction in the corrected images produced by the proposed method.
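
The cylindrical projection step boils down to resampling the radially corrected image through a cylindrical mapping around an assumed focal length. The sketch below computes the plain (non-adaptive) cylindrical source coordinates; how the facial features adapt the projection is not reproduced, and f, cx, cy are illustrative parameters.

    import numpy as np

    def cylindrical_source_coords(w, h, f, cx=None, cy=None):
        # For every pixel of the cylindrical output image, return the coordinates
        # of the planar source pixel it should sample (feed these to any
        # interpolated-gather routine to produce the warped image). f is the focal
        # length in pixels.
        cx = w / 2.0 if cx is None else cx
        cy = h / 2.0 if cy is None else cy
        xs, ys = np.meshgrid(np.arange(w), np.arange(h))
        theta = (xs - cx) / f                # angle around the cylinder axis
        height = (ys - cy) / f               # height on the unit-focal cylinder
        x_src = f * np.tan(theta) + cx       # project the cylinder point back to the plane
        y_src = f * height / np.cos(theta) + cy
        return x_src, y_src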

Collaboration


Dive into Tung-Ying Lee's collaborations.

Top Co-Authors

Shang-Hong Lai, National Tsing Hua University
Tzu-Shan Chang, National Tsing Hua University
Chen-Hao Wei, National Tsing Hua University
Hong-Ren Su, National Tsing Hua University
Yi-Ling Chen, National Tsing Hua University
Hurng-Sheng Wu, Memorial Hospital of South Bend
Kai-Che Liu, Memorial Hospital of South Bend
Bing-Yu Chen, National Taiwan University
Chia-Ming Cheng, National Tsing Hua University