
Publication


Featured research published by Shang-Hong Lai.


International Conference on Computer Vision | 2011

Fusing generic objectness and visual saliency for salient object detection

Kai-Yueh Chang; Tyng-Luh Liu; Hwann-Tzong Chen; Shang-Hong Lai

We present a novel computational model to explore the relatedness of objectness and saliency, each of which plays an important role in the study of visual attention. The proposed framework conceptually integrates these two concepts by constructing a graphical model to account for their relationships, and concurrently improves their estimation by iteratively optimizing a novel energy function realizing the model. Specifically, the energy function comprises the objectness, the saliency, and the interaction energy, corresponding respectively to their individual regularities and their mutual effects. Minimizing the energy with one concept fixed elegantly transforms the model into solving the problem of objectness or saliency estimation, while useful information from the other concept is exploited through the interaction term. Experimental results on two benchmark datasets demonstrate that the proposed model can simultaneously yield a saliency map of better quality and a more meaningful objectness output for salient object detection.
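The alternating scheme described above can be illustrated with a toy quadratic energy: fix one variable, minimize over the other, and let an interaction term couple the two. A minimal sketch, assuming a made-up scalar energy; the values `a`, `b`, and `lam` are illustrative and not the paper's actual model:

```python
# Toy illustration of alternating minimization on a coupled energy
#   E(o, s) = (o - a)^2 + (s - b)^2 + lam * (o - s)^2,
# where o plays the role of "objectness", s of "saliency", and the
# lam-term is the interaction energy. All quantities are illustrative.

def alternate_minimize(a, b, lam, iters=100):
    o, s = 0.0, 0.0
    for _ in range(iters):
        # Fix s, minimize over o (closed form for this quadratic)
        o = (a + lam * s) / (1.0 + lam)
        # Fix o, minimize over s
        s = (b + lam * o) / (1.0 + lam)
    return o, s

o, s = alternate_minimize(a=1.0, b=0.0, lam=1.0)
```

Because each half-step solves its subproblem exactly, the iteration contracts toward the joint minimizer of the coupled energy.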


Computer Vision and Pattern Recognition | 2011

From co-saliency to co-segmentation: An efficient and fully unsupervised energy minimization model

Kai-Yueh Chang; Tyng-Luh Liu; Shang-Hong Lai

We address two key issues of co-segmentation over multiple images. The first is whether a purely unsupervised algorithm can satisfactorily solve this problem. Without the user's guidance, segmenting the foregrounds implied by the common object is quite a challenging task, especially when substantial variations in the object's appearance, shape, and scale are allowed. The second issue concerns efficiency, which determines whether the technique can lead to practical use. With these in mind, we establish an MRF optimization model whose energy function has nice properties and can be shown to effectively resolve the two difficulties. Specifically, instead of relying on user inputs, our approach introduces a co-saliency prior as a hint about possible foreground locations, and uses it to construct the MRF data terms. To complete the optimization framework, we include a novel global term that is more appropriate for co-segmentation and results in a submodular energy function. The proposed model can thus be optimally solved by graph cuts. We demonstrate these advantages by testing our method on several benchmark datasets.
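As a rough illustration of minimizing an energy with per-node data terms plus a submodular pairwise term, here is a sketch on a 1D chain of binary labels, where dynamic programming gives the exact minimum. The paper works on 2D image MRFs solved by graph cuts; the chain structure and the costs below are invented stand-ins:

```python
# Exact minimization of a chain-structured binary MRF
#   E(x) = sum_i D_i(x_i) + sum_i beta * [x_i != x_{i+1}]
# by dynamic programming (the 1D analogue of graph-cut minimization).

def chain_mrf_min(data_costs, beta):
    # data_costs[i][l]: cost of assigning label l (0 or 1) to node i
    n = len(data_costs)
    cost = list(data_costs[0])  # best cost of each label at node 0
    back = []                   # backpointers for recovering labels
    for i in range(1, n):
        prev, cost, ptrs = cost, [0.0, 0.0], [0, 0]
        for l in (0, 1):
            cands = [prev[p] + (beta if p != l else 0.0) for p in (0, 1)]
            p = 0 if cands[0] <= cands[1] else 1
            cost[l] = cands[p] + data_costs[i][l]
            ptrs[l] = p
        back.append(ptrs)
    # Backtrack the optimal labeling
    l = 0 if cost[0] <= cost[1] else 1
    labels = [l]
    for ptrs in reversed(back):
        l = ptrs[l]
        labels.append(l)
    return labels[::-1]

# Data terms prefer label 0 for the first two nodes, label 1 for the rest;
# the pairwise term charges beta for each label change.
labels = chain_mrf_min([[0, 5], [0, 5], [5, 0], [5, 0]], beta=1.0)
```

With these costs the single label change at the midpoint (total pairwise cost 1) is cheaper than fighting any data term, so the minimizer is `[0, 0, 1, 1]`.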


IEEE Transactions on Medical Imaging | 2009

Learning-Based Vertebra Detection and Iterative Normalized-Cut Segmentation for Spinal MRI

Szu-Hao Huang; Yi-Hong Chu; Shang-Hong Lai; Carol L. Novak

Automatic extraction of vertebra regions from a spinal magnetic resonance (MR) image is normally required as the first step to an intelligent spinal MR image diagnosis system. In this work, we develop a fully automatic vertebra detection and segmentation system, which consists of three stages, namely AdaBoost-based vertebra detection, detection refinement via robust curve fitting, and vertebra segmentation by an iterative normalized-cut algorithm. In order to produce an efficient and effective vertebra detector, a statistical learning approach based on an improved AdaBoost algorithm is proposed. A robust estimation procedure is applied to the detected vertebra locations to fit a spine curve, thus refining the above vertebra detection results. This refinement process involves removing false detections and recovering missed vertebrae. Finally, an iterative normalized-cut segmentation algorithm is proposed to segment the precise vertebra regions from the detected vertebra locations. In our implementation, the proposed AdaBoost-based detector is trained from 22 spinal MR volume images. The experimental results show that the proposed vertebra detection and segmentation system achieves nearly a 98% vertebra detection rate and 96% segmentation accuracy on a variety of testing spinal MR images. Our experiments also show that the vertebra detection and segmentation accuracies of the proposed algorithm are superior to those of previous representative methods. The proposed vertebra detection and segmentation system is shown to be robust and accurate, so that it can be used for advanced research and applications on spinal MR images.
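The refinement idea (fit a curve to the detected positions, then discard the worst-fitting detections as false positives) can be sketched as follows. A straight line stands in for the spine curve, and the drop-the-largest-residual rule is a simplified stand-in for the paper's robust estimation procedure:

```python
# Trimmed least-squares line fit: fit, drop the worst outlier, refit.
# The line model and single-trim rule are illustrative simplifications.

def fit_line(pts):
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def trimmed_line_fit(pts, n_trim=1):
    pts = list(pts)
    for _ in range(n_trim):
        m, c = fit_line(pts)
        # Remove the point with the largest vertical residual
        pts.remove(max(pts, key=lambda p: abs(p[1] - (m * p[0] + c))))
    return fit_line(pts)

# Five collinear detections plus one gross false detection at (2, 20)
detections = [(0, 0.0), (1, 1.0), (2, 2.0), (3, 3.0), (4, 4.0), (2, 20.0)]
m, c = trimmed_line_fit(detections)
```

After the outlier is trimmed, the refit recovers the underlying line exactly, which is the behavior the refinement stage relies on to reject false vertebra detections.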


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1992

A generalized depth estimation algorithm with a single image

Shang-Hong Lai; Chang-Wu Fu; Shyang Chang

A depth estimation algorithm proposed by A.P. Pentland (1987) is generalized. In the proposed algorithm, the raw image data in the vicinity of an edge is used to estimate the depth from defocus. Since no differentiation operation on the image data is required before the optimization process, the method is less sensitive to noise in the measurements. Furthermore, the edge orientation that was critical in Pentland's approach is not required in this case. The algorithm is then applied to synthetic images containing various amounts of noise to test its performance. Experimental results indicate that the depth estimation errors are kept within 5% of the true values on average when it is applied to real images.


IEEE Transactions on Image Processing | 2008

Fast Template Matching Based on Normalized Cross Correlation With Adaptive Multilevel Winner Update

Shou-Der Wei; Shang-Hong Lai

In this paper, we propose a fast pattern matching algorithm based on the normalized cross correlation (NCC) criterion by combining adaptive multilevel partition with the winner update scheme to achieve a very efficient search. This winner update scheme is applied in conjunction with an upper bound for the cross correlation derived from the Cauchy-Schwarz inequality. To apply the winner update scheme in an efficient way, we partition the summation of the cross correlation into different levels, with the partition order determined by the gradient energies of the partitioned regions in the template. Thus, the winner update scheme in conjunction with the upper bound for NCC can be employed to skip unnecessary calculation. Experimental results show that the proposed algorithm is very efficient for image matching under different lighting conditions.
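The bounding idea can be sketched in 1D: accumulate the cross correlation level by level and skip a candidate as soon as its Cauchy-Schwarz upper bound cannot beat the best score found so far. The two-level split and 1D signals below are simplifications of the paper's adaptive multilevel partition on 2D images:

```python
# NCC search with Cauchy-Schwarz pruning: for each window, compute a
# partial correlation (level 1), bound the remaining terms, and finish
# the computation (level 2) only if the bound can still win.
import math

def ncc_search(signal, template):
    n = len(template)
    t_norm = math.sqrt(sum(v * v for v in template))
    half = n // 2
    t_tail_energy = sum(v * v for v in template[half:])
    best_pos, best_score = -1, -1.0
    for pos in range(len(signal) - n + 1):
        w = signal[pos:pos + n]
        w_norm = math.sqrt(sum(v * v for v in w))
        if w_norm == 0:
            continue
        # Level 1: partial cross correlation over the first half
        partial = sum(t * v for t, v in zip(template[:half], w[:half]))
        # Cauchy-Schwarz bound on the remaining (second-half) terms
        w_tail_energy = sum(v * v for v in w[half:])
        bound = (partial + math.sqrt(t_tail_energy * w_tail_energy)) / (t_norm * w_norm)
        if bound <= best_score:
            continue  # cannot beat the current winner: skip level 2
        # Level 2: complete the correlation
        full = partial + sum(t * v for t, v in zip(template[half:], w[half:]))
        score = full / (t_norm * w_norm)
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos, best_score

signal = [0.0, 1.0, 0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 1.0]
template = [1.0, 2.0, 3.0, 2.0, 1.0]
best_pos, best_score = ncc_search(signal, template)
```

Since the true score never exceeds the bound, pruning is safe: the result always matches an exhaustive NCC scan, only faster.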


International Journal of Computer Vision | 1998

Reliable and Efficient Computation of Optical Flow

Shang-Hong Lai; Baba C. Vemuri

In this paper, we present two very efficient and accurate algorithms for computing optical flow. The first is a modified gradient-based regularization method, and the other is an SSD-based regularization method. For the gradient-based method, to amend the errors in the discrete image flow equation caused by numerical differentiation as well as temporal and spatial aliasing in the brightness function, we propose to selectively combine the image flow constraint and a contour-based flow constraint into the data constraint by using a reliability measure. Each data constraint is appropriately normalized to obtain an approximate minimum distance (of the data point to the linear flow equation) constraint instead of the conventional linear flow constraint. These modifications lead to robust and accurate optical flow estimation. We propose an incomplete Cholesky preconditioned conjugate gradient algorithm to solve the resulting large and sparse linear system efficiently. Our SSD-based regularization method uses a normalized SSD measure (based on a similar reasoning as in the gradient-based scheme) as the data constraint in a regularization framework. The nonlinear conjugate gradient algorithm in conjunction with an incomplete Cholesky preconditioning is developed to solve the resulting nonlinear minimization problem. Experimental results on synthetic and real image sequences for these two algorithms are given to demonstrate their performance in comparison with competing methods reported in the literature.
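A preconditioned conjugate gradient iteration of the kind used here can be sketched as follows; a simple Jacobi (diagonal) preconditioner stands in for the incomplete Cholesky preconditioner, and the small SPD system is illustrative only:

```python
# Preconditioned conjugate gradient for a symmetric positive definite
# system Ax = b. The Jacobi preconditioner (divide by the diagonal) is
# a simplified stand-in for incomplete Cholesky.

def pcg(A, b, iters=50, tol=1e-10):
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = list(b)                              # residual b - A*0
    z = [r[i] / A[i][i] for i in range(n)]   # apply M^{-1} (Jacobi)
    p = list(z)
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(iters):
        Ap = matvec(A, p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(v * v for v in r) < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
```

A better preconditioner (such as incomplete Cholesky) clusters the eigenvalues of the preconditioned system and cuts the iteration count, which matters when A is a large sparse matrix arising from a regularization framework.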


International Conference on Pattern Recognition | 2008

Improved novel view synthesis from depth image with large baseline

Chia-Ming Cheng; Shu-Jyuan Lin; Shang-Hong Lai; Jinn-Cherng Yang

In this paper, a new algorithm is developed for recovering large disocclusion regions in depth image based rendering (DIBR) systems for 3DTV. In DIBR systems, undesirable artifacts occur in the disocclusion regions when conventional view synthesis techniques are used, especially with a large baseline. Three techniques are proposed to improve the view synthesis results. The first is preprocessing of the depth image with a bilateral filter, which helps to sharpen discontinuous depth changes as well as to smooth neighboring depths of similar color, thus preventing noise from appearing on the warped images. Secondly, on the warped image of a new viewpoint, we fill the disocclusion regions in the depth image with background depth levels to preserve the depth structure. For the color image, we propose a depth-guided exemplar-based image inpainting that combines the structural strengths of the color gradient to preserve the image structure in the restored regions. Finally, a trilateral filter, which simultaneously combines spatial location, color intensity, and depth information to determine the weighting, is applied to enhance the image synthesis results. Experimental results demonstrate the superior performance of the proposed novel view synthesis algorithm compared to traditional methods.
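The depth-preprocessing step relies on the bilateral filter's key property: it averages over neighbors that are close both spatially and in value, so flat regions are smoothed while sharp depth discontinuities survive. A minimal 1D sketch with hand-picked parameters (the paper applies the filter to 2D depth images):

```python
# 1D bilateral filter: each output sample is a weighted average of its
# neighbors, with weights that fall off both with spatial distance
# (sigma_s) and with difference in value (sigma_r).
import math

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=10.0):
    out = []
    for i, center in enumerate(signal):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) \
              * math.exp(-((signal[j] - center) ** 2) / (2 * sigma_r ** 2))
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out

# A step edge between two flat depth regions stays sharp because the
# range kernel gives near-zero weight to samples from across the edge.
depth = [10.0] * 5 + [200.0] * 5
smoothed = bilateral_1d(depth)
```

With a range sigma far smaller than the edge height, the filter leaves the step essentially untouched, which is exactly the property used to keep depth discontinuities crisp before warping.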


International Conference on Multimedia and Expo | 2004

A robust and efficient video stabilization algorithm

Hung-Chang Chang; Shang-Hong Lai; Kuang-Rong Lu

The acquisition of digital video usually suffers from undesirable camera jitter due to unstable random camera motion, which is produced by a hand-held camera or a camera in a vehicle moving on a non-smooth road or terrain. We propose a real-time robust video stabilization algorithm to remove undesirable motion jitter and produce a stabilized video. We first compute the optical flow between successive frames, followed by estimating the camera motion by fitting the computed optical flow field to a simplified affine motion model with a trimmed least squares method. Then, the computed camera motions are smoothed temporally to reduce the motion vibrations by using a regularization method. Finally, we transform all frames of the video based on the original and smoothed motions to obtain a stabilized video. Experimental results are given to demonstrate the stabilization performance and the efficiency of the proposed algorithm.
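The robust motion-estimation step can be sketched with a simplified model: fit a global motion to the flow vectors, discard the vectors with the largest residuals, and refit. A pure-translation model and a fixed 50% trim fraction below are simplifications of the paper's affine model and trimmed least squares scheme:

```python
# Trimmed estimate of a global translation from optical flow vectors:
# an initial mean fit, then a refit on the best-agreeing half.

def trimmed_translation(flow, trim_frac=0.5):
    # Initial estimate: plain mean of the flow vectors
    n = len(flow)
    tx = sum(u for u, _ in flow) / n
    ty = sum(v for _, v in flow) / n
    # Keep the (1 - trim_frac) fraction with the smallest residuals
    keep = sorted(flow, key=lambda f: (f[0] - tx) ** 2 + (f[1] - ty) ** 2)
    keep = keep[: max(1, int(n * (1 - trim_frac)))]
    return (sum(u for u, _ in keep) / len(keep),
            sum(v for _, v in keep) / len(keep))

# Most vectors agree on a (2, -1) camera jitter; two belong to an
# independently moving object and should be rejected as outliers.
flow = [(2.0, -1.0)] * 8 + [(15.0, 9.0), (14.0, 10.0)]
tx, ty = trimmed_translation(flow)
```

Trimming makes the estimate follow the dominant (camera) motion rather than being dragged toward moving foreground objects, which is why a plain least squares fit is not enough here.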


Medical Imaging 1998: Image Perception | 1998

Robust and automatic adjustment of display window width and center for MR images

Shang-Hong Lai; Ming Fang

The display of a 12-bit MR image on a common 8-bit computer monitor is usually achieved by linearly mapping the image values through a display window, which is determined by its width and center values. Adjusting the display window for a variety of MR images involves considerable user interaction. In this paper, we present an advanced algorithm with a hierarchical neural network structure for robust and automatic adjustment of the display window width and center for a wide range of MR images. The algorithm consists of a feature generator utilizing both histogram and spatial information computed from an MR image, a wavelet transform for compressing the feature vector, a competitive-layer neural network for clustering MR images into different subclasses, a bi-modal linear estimator and an RBF (radial basis function) network based estimator for each subclass, as well as a data fusion process that integrates estimates from both estimators of different subclasses to compute the final display parameters. Both estimators can adapt to new types of MR images simply by being trained on those images, making the algorithm adaptive and extendable. This trainability also makes possible advanced future developments, such as adapting the display parameters to a user's personal preference. While the RBF neural network based estimators perform very well for images similar to those in the training data set, the bi-modal linear estimators provide reasonable estimates for a wide range of images that may not be included in the training data set. The data fusion step makes the final estimation of the display parameters accurate for trained images and robust for unknown images. The algorithm has been tested on a wide range of MR images and has shown satisfactory results. Although the proposed algorithm is very comprehensive, its execution time is kept within a reasonable range.
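The display mapping itself is straightforward once width and center are known: clip a 12-bit value to the window and scale it linearly to 8 bits. A minimal sketch (the window values here are illustrative; estimating them automatically is what the paper's system does):

```python
# Window/level mapping from a 12-bit intensity to an 8-bit display value:
# values below the window map to 0, above it to 255, and values inside
# it are scaled linearly.

def apply_window(value, center, width):
    lo = center - width / 2.0
    hi = center + width / 2.0
    if value <= lo:
        return 0
    if value >= hi:
        return 255
    return int(round((value - lo) / (hi - lo) * 255))

# Map a few 12-bit intensities through a window centered at 1000
pixels = [0, 500, 1000, 1500, 4095]
display = [apply_window(v, center=1000, width=1000) for v in pixels]
```

Everything below 500 is crushed to black and everything above 1500 saturates to white, which is why a poorly chosen width/center hides diagnostically relevant contrast.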


Journal of Visual Communication and Image Representation | 2006

A robust real-time video stabilization algorithm

Hung-Chang Chang; Shang-Hong Lai; Kuang-Rong Lu

The acquisition of digital video usually suffers from undesirable camera jitter due to unstable camera motion. In this paper, we propose a robust real-time video stabilization algorithm that removes the undesirable jitter motion from an unstable video to produce a stabilized video. In the proposed algorithm, we first compute the sparse optical flow vectors between successive frames, followed by estimating the camera motion by fitting the computed optical flow vectors to a simplified affine motion model with a robust trimmed least squares method. Then the computed camera motion parameters are smoothed temporally to reduce motion fluctuations by using a regularization method. Finally, we transform all frames in the video sequence based on the original and smoothed camera motions to obtain a stabilized video. Experimental results are given to demonstrate the stabilization performance and the efficiency of the proposed algorithm.
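The temporal smoothing step can be sketched as a small regularization problem: given raw per-frame motion values x, find smoothed values y minimizing sum_i (y_i - x_i)^2 + lam * sum_i (y_{i+1} - y_i)^2. The Gauss-Seidel solver, scalar signal, and lam value below are illustrative simplifications; the paper smooths affine motion parameters:

```python
# Regularized temporal smoothing of a motion signal: the data term keeps
# y close to the measured motion x, the lam-weighted term penalizes
# frame-to-frame fluctuations. Solved by Gauss-Seidel sweeps.

def smooth_motion(x, lam=5.0, sweeps=200):
    y = list(x)
    n = len(y)
    for _ in range(sweeps):
        for i in range(n):
            # Setting dE/dy_i = 0 with the neighbors held fixed gives a
            # weighted average of the data value and neighboring y's.
            num, den = x[i], 1.0
            if i > 0:
                num += lam * y[i - 1]; den += lam
            if i < n - 1:
                num += lam * y[i + 1]; den += lam
            y[i] = num / den
    return y

# Jittery per-frame motion: the smoothed version damps the oscillation
# while preserving the overall (mean) camera displacement.
raw = [0.0, 4.0, 0.0, 4.0, 0.0, 4.0]
smooth = smooth_motion(raw)
```

At the minimum the smoothness gradients telescope to zero, so the total motion is preserved; only the high-frequency jitter, the part the viewer perceives as shake, is removed.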

Collaboration


Dive into Shang-Hong Lai's collaborations.

Top Co-Authors

Chen-Kuo Chiang, National Tsing Hua University
Te-Feng Su, National Tsing Hua University
Hong-Ren Su, National Tsing Hua University
Chia-Ming Cheng, National Tsing Hua University
Chia-Te Liao, National Tsing Hua University
Shou-Der Wei, National Tsing Hua University
Shu-Fan Wang, National Tsing Hua University
Szu-Hao Huang, National Tsing Hua University
Po-Hao Huang, National Tsing Hua University