Publication


Featured research published by Yuping Lin.


Computer Vision and Pattern Recognition | 2007

Map-Enhanced UAV Image Sequence Registration and Synchronization of Multiple Image Sequences

Yuping Lin; Gérard G. Medioni

Registering consecutive images from an airborne sensor into a mosaic is an essential tool for image analysts. Strictly local methods tend to accumulate errors, resulting in distortion. We propose here to use a reference image (such as a high-resolution map image) to overcome this limitation. In our approach, we register a frame in an image sequence to the map using both frame-to-frame registration and frame-to-map registration iteratively. In frame-to-frame registration, a frame is registered to its previous frame. With its previous frame having been registered to the map in the previous iteration, we can derive an estimated transformation from the frame to the map. In frame-to-map registration, we warp the frame to the map by this transformation to compensate for scale and rotation differences, and then perform an area-based matching using mutual information to find correspondences between this warped frame and the map. These correspondences, together with the correspondences in previous frames, can be regarded as correspondences between the partial local mosaic and the map. By registering the partial local mosaic to the map, we derive a transformation from the frame to the map. With this two-step registration, errors between consecutive frames do not accumulate. We then extend our approach to synchronize multiple image sequences by tracking moving objects in each image sequence and aligning the frames based on the objects' coordinates in the reference image.
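The iterative two-step scheme amounts to chaining homographies: the previous frame's map registration composed with the frame-to-frame homography seeds the frame-to-map estimate, which the map-based matching then corrects. A minimal NumPy sketch of that loop structure; the `refine` callback is a placeholder standing in for the paper's mutual-information matching against the map:

```python
import numpy as np

def compose(H_prev_to_map, H_frame_to_prev):
    """Seed the frame-to-map transform by chaining homographies."""
    H = H_prev_to_map @ H_frame_to_prev
    return H / H[2, 2]                 # fix the projective scale

def register_sequence(frame_to_prev, H_first_to_map, refine):
    """Register every frame of a sequence to the map: seed each
    frame-to-map homography from the chain, then let `refine`
    (e.g. map-based area matching) correct it so that
    frame-to-frame errors do not accumulate."""
    maps = [H_first_to_map]
    for H_f2p in frame_to_prev:
        seed = compose(maps[-1], H_f2p)
        maps.append(refine(seed))
    return maps
```

With an identity `refine`, the chain degenerates to plain frame-to-frame composition, which is exactly the error-accumulating baseline the paper improves on.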


International Conference on Pattern Recognition | 2010

3D Face Reconstruction Using a Single or Multiple Views

Jongmoo Choi; Gérard G. Medioni; Yuping Lin; Luciano Silva; Olga Regina; Mauricio Pamplona; Timothy C. Faltemier

We present a 3D face reconstruction system that takes as input either a single view or several different views. Given a facial image, we first classify the facial pose into one of five predefined poses, then detect two anchor points that are in turn used to detect a set of predefined facial landmarks. Based on these initial steps, for a single view we apply a warping process using a generic 3D face model to build a 3D face. For multiple views, we apply sparse bundle adjustment to reconstruct 3D landmarks, which are used to deform the generic 3D face model. Experimental results on the Color FERET and CMU Multi-PIE databases confirm that our framework is effective in creating realistic 3D face models that can be used in many computer vision applications, such as 3D face recognition at a distance.
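The multi-view path deforms a generic model toward reconstructed 3D landmarks; the rigid part of such an alignment is typically a least-squares similarity transform (Umeyama's method). A sketch of that alignment step only, not the paper's full deformation:

```python
import numpy as np

def similarity_align(src, dst):
    """Least-squares similarity transform (scale s, rotation R,
    translation t) with dst ≈ s * R @ src + t, via Umeyama's method."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)           # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflection
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t
```

A non-rigid step (e.g. warping toward the remaining landmark residuals) would follow this rigid alignment.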


Workshop on Applications of Computer Vision | 2007

Map-Enhanced UAV Image Sequence Registration

Yuping Lin; Qian Yu; Gérard G. Medioni

Registering consecutive images from an airborne sensor into a mosaic is an essential tool for image analysts. Strictly local methods tend to accumulate errors, resulting in distortion. We propose here to use a reference image (such as a high-resolution map image) to overcome this limitation. In our approach, we register a frame in an image sequence to the map using both frame-to-frame registration and frame-to-map registration iteratively. In frame-to-frame registration, a frame is registered to its previous frame. With its previous frame having been registered to the map in the previous iteration, we can derive an estimated transformation from the frame to the map. In frame-to-map registration, we warp the frame to the map by this transformation to compensate for scale and rotation differences, and then perform an area-based matching using mutual information to find correspondences between this warped frame and the map. From these correspondences, we derive a transformation that further registers the warped frame to the map. With this two-step registration, errors between consecutive frames do not accumulate. We present results on real image sequences from a hot air balloon.


Computer Vision and Pattern Recognition | 2010

Accurate 3D face reconstruction from weakly calibrated wide baseline images with profile contours

Yuping Lin; Gérard G. Medioni; Jongmoo Choi

We propose a method to generate a highly accurate 3D face model from a set of wide-baseline images in a weakly calibrated setup. Our approach is purely data-driven and produces faithful 3D models without any pre-defined model, unlike statistical model-based approaches. Our results rely neither on a critical initialization step nor on parameter tuning in the optimization steps. We process five images (including profile views), infer the accurate poses of the cameras in all views, and then infer a dense 3D face model. The quality of the 3D face model depends on the accuracy of the estimated head-camera motion. First, we propose an iterative bundle adjustment approach to remove outliers from the corresponding points. Contours in the profile views are matched to provide reliable correspondences that link the two opposite side views together. For dense reconstruction, we propose a face-specific cylindrical representation that allows us to solve a global optimization problem for N-view dense aggregation. Profile contours are used once again to provide constraints in the optimization step. Experimental results on synthetic and real images show that our method provides accurate and stable reconstructions on wide-baseline images. We compare our method with state-of-the-art methods and show that it provides significantly better results in terms of both accuracy and efficiency.
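The iterative outlier-removal idea can be illustrated independently of bundle adjustment: fit a transform, discard correspondences with large residuals against the fit, and refit until the inlier set stabilizes. A toy sketch using a 2D affine model as the stand-in transform (the paper operates on full camera geometry; only the loop structure is shown):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform, dst ≈ [src | 1] @ P."""
    X = np.hstack([src, np.ones((len(src), 1))])
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return P                              # shape (3, 2)

def iterative_rejection(src, dst, thresh, max_iter=10):
    """Alternate fitting with high-residual rejection, mimicking the
    iterative outlier removal applied to corresponding points."""
    X = np.hstack([src, np.ones((len(src), 1))])
    keep = np.ones(len(src), dtype=bool)
    for _ in range(max_iter):
        P = fit_affine(src[keep], dst[keep])
        resid = np.linalg.norm(X @ P - dst, axis=1)
        new_keep = resid < thresh
        if np.array_equal(new_keep, keep):
            break                         # inlier set has stabilized
        keep = new_keep
    return keep, P
```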


Computer Vision and Pattern Recognition | 2008

Retinal image registration from 2D to 3D

Yuping Lin; Gérard G. Medioni

We propose a 2D registration method for multi-modal image sequences of the retinal fundus, and a 3D metric reconstruction of a near-planar surface from multiple views. Our paper makes two major contributions. For 2D registration, our method achieves a high registration rate while accounting for large modality differences. Compared with the state-of-the-art method, our approach achieves a higher registration rate (97.2% vs. 82.31%) at a much lower computation time. This is achieved by extracting features from the edge maps of contrast-enhanced images and performing pairwise registration by matching the features iteratively, maximizing the number of matches and estimating homographies accurately. The pairwise registration result is further globally optimized by an indirect registration process. For the 3D registration part, images are registered to the reference frame by transforming points via a reconstructed 3D surface. The challenge is the reconstruction of a near-planar surface, whose shallow depth makes it a quasi-degenerate case for estimating the geometry from images. Our contribution is a 4-pass bundle adjustment method that gives an optimal estimate of all camera poses. With accurate camera poses, the 3D surface can be reconstructed using the images associated with the cameras with the largest baseline. Compared with state-of-the-art 3D retinal image registration methods, our approach produces better results on all image sets.
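The pairwise step hinges on estimating homographies from feature matches. The classical Direct Linear Transform (DLT) shows how a homography is recovered from four or more correspondences; this is a generic sketch, not the paper's full iterative matching pipeline, and in practice Hartley-style coordinate normalization should precede it:

```python
import numpy as np

def homography_dlt(src, dst):
    """DLT: estimate H with dst ~ H @ src (homogeneous coordinates)
    from >= 4 point correspondences, via the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)              # smallest singular vector
    return H / H[2, 2]

def project(H, pts):
    """Apply a homography to 2D points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```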


Machine Vision Applications | 2011

Efficient detection and tracking of moving objects in geo-coordinates

Yuping Lin; Qian Yu; Gérard G. Medioni

We present a system to detect and track moving objects from an airborne platform. Given a global map, such as a satellite image, our approach can locate and track the targets in geo-coordinates, namely longitude and latitude obtained from geo-registration. A motion model in geo-coordinates is more physically meaningful than one in image coordinates. We propose a two-step geo-registration approach to stitch images acquired by satellite and UAV cameras. Mutual information is used to find correspondences between these two very different modalities. After motion segmentation and geo-registration, tracking is performed in a hierarchical manner: at the temporally local level, moving image blobs extracted by motion segmentation are associated into tracklets; at the global level, tracklets are linked by their appearance and spatio-temporal consistency on the global map. To achieve efficient time performance, graphics processing unit (GPU) techniques are applied in the geo-registration and motion detection modules, which are the bottlenecks of the whole system. Experiments show that our method can efficiently deal with long-term occlusion and fragmented tracks, even when targets fall out of the field of view.
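The global linking stage can be pictured as matching tracklet ends to later tracklet starts on the map. A pure-NumPy sketch under simplifying assumptions: greedy assignment instead of the paper's full association, and a user-supplied appearance cost function:

```python
import numpy as np

def link_tracklets(ends, starts, appearance_cost,
                   max_gap=30.0, max_cost=1.0):
    """Link tracklet ends to later tracklet starts in geo-coordinates:
    temporal gating first, then greedy matching on a combined
    motion + appearance cost (cheapest pairs claimed first).
    ends/starts: sequences of (t, x, y)."""
    pairs = []
    for i, (te, xe, ye) in enumerate(ends):
        for j, (ts, xs, ys) in enumerate(starts):
            dt = ts - te
            if not (0.0 < dt <= max_gap):
                continue                          # temporal gating
            speed = np.hypot(xs - xe, ys - ye) / dt
            cost = speed + appearance_cost(i, j)
            if cost <= max_cost:
                pairs.append((cost, i, j))
    links, used_i, used_j = [], set(), set()
    for cost, i, j in sorted(pairs):              # greedy, cheapest first
        if i in used_i or j in used_j:
            continue
        used_i.add(i)
        used_j.add(j)
        links.append((i, j))
    return links
```

Because positions live on the global map, a physically meaningful gate (here: implied speed) survives long gaps and departures from the field of view.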


Computer Vision and Pattern Recognition | 2008

Mutual information computation and maximization using GPU

Yuping Lin; Gérard G. Medioni

We present a GPU implementation that computes both mutual information and its derivatives. Mutual information computation is a highly demanding process due to the enormous number of exponential computations, and is therefore the bottleneck in many image registration applications. However, we show that these computations are fully parallelizable and can be efficiently ported to the GPU architecture. Compared with the same implementation running on a workstation-level CPU, we achieve a speedup factor of 170 in computing mutual information, and a factor of 400 in computing its derivatives.
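For reference, mutual information between two images reduces to an entropy computation over their joint intensity histogram. A compact CPU version in NumPy; a GPU port parallelizes exactly these histogram accumulations and per-bin log terms:

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information I(A; B) between two images, computed from
    the joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # skip log(0) bins
    return float((pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum())
```

An image has maximal mutual information with itself (its entropy), and near-zero mutual information with an unrelated image, which is what makes it a usable multi-modal matching score.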


International Conference on Computer Vision | 2011

Aerial 3D reconstruction with line-constrained dynamic programming

Huei-Hung Liao; Yuping Lin; Gérard G. Medioni

Aerial imagery of an urban environment is often characterized by significant occlusions, sharp edges, and textureless regions, leading to poor 3D reconstruction with conventional multi-view stereo methods. In this paper, we propose a novel approach to 3D reconstruction of urban areas from a set of uncalibrated aerial images. A very general structural prior is assumed: urban scenes consist mostly of planar surfaces oriented either horizontally or in an arbitrary vertical orientation. In addition, most structural edges associated with such surfaces are also horizontal or vertical. These two assumptions provide powerful constraints on the underlying 3D geometry. The main contribution of this paper is to translate these two constraints on 3D structure into intra-image-column and inter-image-column constraints, respectively, and to formulate the dense reconstruction as a 2-pass dynamic programming problem, which is solved fully in parallel on a GPU. The result is an accurate, dense 3D point cloud of the underlying urban scene. Our algorithm completes the reconstruction of 1M points with 160 available discrete height levels in under a hundred seconds. Results on multiple datasets show that we preserve a high level of structural detail and visual quality.
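Per image column, the intra-column constraint reduces to a classic 1D label DP: each pixel picks one of the discrete height levels, paying a data cost plus a penalty on label jumps between neighboring pixels. A small NumPy sketch with a linear jump penalty; the paper's actual costs and 2-pass inter-column scheme are richer:

```python
import numpy as np

def column_dp(unary, smooth=1.0):
    """1D dynamic programming along one image column: choose a height
    label per pixel minimizing data cost + smooth * |label jump|.
    unary: (n_pixels, n_levels) data costs. Returns the label path."""
    n, L = unary.shape
    labels = np.arange(L)
    jump = np.abs(labels[:, None] - labels[None, :]) * smooth  # (L, L)
    cost = unary[0].copy()
    back = np.zeros((n, L), dtype=int)
    for i in range(1, n):
        total = cost[None, :] + jump          # total[l, l_prev]
        back[i] = np.argmin(total, axis=1)    # best predecessor per label
        cost = unary[i] + total[np.arange(L), back[i]]
    path = np.empty(n, dtype=int)
    path[-1] = int(np.argmin(cost))
    for i in range(n - 1, 0, -1):             # backtrack
        path[i - 1] = back[i][path[i]]
    return path
```

Each column's DP is independent, which is what makes the formulation embarrassingly parallel across columns on a GPU.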


Computer Vision and Pattern Recognition | 2007

Moving Object Detection on a Runway Prior to Landing Using an Onboard Infrared Camera

Cheng-Hua Pai; Yuping Lin; Gérard G. Medioni; Ray Rida Hamza

Determining the status of a runway prior to landing is essential for any aircraft, whether manned or unmanned. In this paper, we present a method that can detect moving objects on the runway from an onboard infrared camera prior to the landing phase. Since the runway is a planar surface, we first locally stabilize the sequence to automatically selected reference frames using feature points in the neighborhood of the runway. Next, we normalize the stabilized sequence to compensate for the global intensity variation caused by the gain control of the infrared camera. We then create a background model to learn an appearance model of the runway. Finally, we identify moving objects by comparing the image sequence with the background model. We have tested our system with both synthetic and real world data and show that it can detect distant moving objects on the runway. We also provide a quantitative analysis of the performance with respect to variations in size, direction and speed of the target.
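The normalization and background-model steps can be sketched with per-pixel statistics over a stabilized sequence. This is a simplified illustration, not the paper's learned appearance model: gain is compensated by matching a robust global statistic (the median, an assumption of this sketch), and the last frame is compared against a mean/std model built from the earlier frames:

```python
import numpy as np

def normalize_gain(frame, ref_median):
    """Compensate the camera's global gain change by matching the
    frame's median intensity to a reference median (the median
    resists the influence of small moving objects)."""
    return frame * (ref_median / np.median(frame))

def detect_moving(frames, thresh=3.0):
    """Build a per-pixel (mean, std) background model from all but
    the last frame of a stabilized, gain-normalized sequence, then
    flag pixels of the last frame that deviate from the model."""
    ref = np.median(frames[0])
    stack = np.stack([normalize_gain(f, ref) for f in frames])
    mu = stack[:-1].mean(axis=0)
    sigma = stack[:-1].std(axis=0) + 1e-6     # avoid divide-by-zero
    return np.abs(stack[-1] - mu) > thresh * sigma
```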


International Conference on Computer Vision | 2009

Untangling fibers by quotient appearance manifold mapping for grayscale shape classification

Yoshihisa Shinagawa; Yuping Lin

Appearance manifolds are among the most powerful methods for object recognition. However, they cannot be used for grayscale shape classification, particularly in three dimensions, such as classifying medical lesion volumes or galaxy images. The main cause of the difficulty is that the appearance manifolds of shape classes have entangled fibers in their embedded Euclidean space. This paper proposes a novel appearance-based method, called quotient appearance manifold mapping, to untangle the fibers of the appearance manifolds. First, the quotient manifold is constructed to untangle the fiber bundles of the appearance manifolds. A mapping from each point of the manifold to the quotient submanifold is then proposed to classify grayscale shapes. We demonstrate its effectiveness in grayscale 3D shape recognition using medical images.

Collaboration

Top Co-Authors

Gérard G. Medioni, University of Southern California
Cheng-Hua Pai, University of Southern California
Jongmoo Choi, University of Southern California
Huei-Hung Liao, University of Southern California
Luciano Silva, Federal University of Paraná
Mauricio Pamplona, Federal University of Paraná