Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mei Han is active.

Publication


Featured research published by Mei Han.


Computer Vision and Pattern Recognition | 2010

Efficient hierarchical graph-based video segmentation

Matthias Grundmann; Vivek Kwatra; Mei Han; Irfan A. Essa

We present an efficient and scalable technique for spatiotemporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a “region graph” over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high quality segmentations, which are temporally coherent with stable region boundaries, and allows subsequent applications to choose from varying levels of granularity. We further improve segmentation quality by using dense optical flow to guide temporal connections in the initial graph. We also propose two novel approaches to improve the scalability of our technique: (a) a parallel out-of-core algorithm that can process volumes much larger than an in-core algorithm, and (b) a clip-based processing algorithm that divides the video into overlapping clips in time, and segments them successively while enforcing consistency. We demonstrate hierarchical segmentations on video shots as long as 40 seconds, and even support a streaming mode for arbitrarily long videos, albeit without the ability to process them hierarchically.
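As a loose illustration of the merge loop this abstract describes (not the authors' implementation), the sketch below merges regions greedily by appearance difference and repeats the step with growing thresholds to build the hierarchy. The volumetric over-segmentation, optical-flow guidance, and out-of-core machinery are assumed to exist and are omitted; all function and parameter names are hypothetical.

# Hypothetical sketch of the hierarchical region-merge loop described above.
# Regions carry a mean color; edges are merged greedily by appearance difference,
# and the surviving regions become the nodes of the next, coarser level.

import numpy as np

def merge_level(colors, edges, tau):
    """colors: dict region_id -> mean RGB (np.array); edges: set of (a, b) pairs;
    tau: appearance threshold for this level. Returns a coarser (colors, edges)."""
    parent = {r: r for r in colors}

    def find(r):                          # union-find root with path halving
        while parent[r] != r:
            parent[r] = parent[parent[r]]
            r = parent[r]
        return r

    # Visit edges from most to least similar, merging when below the threshold.
    for a, b in sorted(edges, key=lambda e: np.linalg.norm(colors[e[0]] - colors[e[1]])):
        ra, rb = find(a), find(b)
        if ra != rb and np.linalg.norm(colors[ra] - colors[rb]) < tau:
            parent[rb] = ra
            colors[ra] = (colors[ra] + colors[rb]) / 2.0   # unweighted average, for brevity

    new_colors = {find(r): colors[find(r)] for r in colors}
    new_edges = {tuple(sorted((find(a), find(b)))) for a, b in edges if find(a) != find(b)}
    return new_colors, new_edges

def hierarchical_segmentation(colors, edges, taus=(10.0, 20.0, 40.0)):
    """Repeat the merge step with growing thresholds to build the tree of
    spatio-temporal segmentations (one level per threshold)."""
    levels = []
    for tau in taus:
        colors, edges = merge_level(dict(colors), set(edges), tau)
        levels.append(set(colors))        # region ids surviving at this level
    return levels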


Computer Vision and Pattern Recognition | 2010

Discontinuous seam-carving for video retargeting

Matthias Grundmann; Vivek Kwatra; Mei Han; Irfan A. Essa

We introduce a new algorithm for video retargeting that uses discontinuous seam-carving in both space and time for resizing videos. Our algorithm relies on a novel appearance-based temporal coherence formulation that allows for frame-by-frame processing and results in temporally discontinuous seams, as opposed to geometrically smooth and continuous seams. This formulation optimizes the difference in appearance of the resultant retargeted frame to the optimal temporally coherent one, and allows for carving around fast moving salient regions. Additionally, we generalize the idea of appearance-based coherence to the spatial domain by introducing piece-wise spatial seams. Our spatial coherence measure minimizes the change in gradients during retargeting, which preserves spatial detail better than minimization of color difference alone. We also show that per-frame saliency (gradient-based or feature-based) does not always produce desirable retargeting results and propose a novel automatically computed measure of spatio-temporal saliency. As needed, a user may also augment the saliency by interactive region-brushing. Our retargeting algorithm processes the video sequentially, making it conducive for streaming applications.
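A rough, hypothetical sketch of the per-frame processing described above: the seam cost mixes a spatial gradient term with a temporal appearance term measured against the previously retargeted frame, and a standard dynamic program extracts one vertical seam per frame. The paper's exact energy, saliency measures, and piece-wise spatial seams are not reproduced.

# Illustrative only: per-frame seam cost combining spatial saliency with an
# appearance-coherence term against the previous retargeted frame, followed by
# the classic dynamic-programming seam search.

import numpy as np

def seam_cost(frame, prev_retargeted, alpha=0.5):
    """frame: HxW grayscale; prev_retargeted: Hx(W-1) result from the previous frame."""
    frame = frame.astype(np.float64)
    prev = prev_retargeted.astype(np.float64)
    gy, gx = np.gradient(frame)
    spatial = np.abs(gx) + np.abs(gy)
    # Temporal appearance term: penalize pixels whose retention would make the
    # retargeted frame differ from the previous retargeted one (left-aligned proxy).
    temporal = np.abs(frame[:, :-1] - prev)
    temporal = np.pad(temporal, ((0, 0), (0, 1)), mode='edge')
    return spatial + alpha * temporal

def find_vertical_seam(cost):
    """Dynamic programming: one column index per row with minimal summed cost."""
    h, w = cost.shape
    acc = cost.copy()
    for i in range(1, h):
        left = np.r_[np.inf, acc[i - 1, :-1]]
        right = np.r_[acc[i - 1, 1:], np.inf]
        acc[i] += np.minimum(np.minimum(left, acc[i - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam[i] = lo + int(np.argmin(acc[i, lo:hi]))
    return seam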


Computer Vision and Pattern Recognition | 2011

A hierarchical conditional random field model for labeling and segmenting images of street scenes

Qixing Huang; Mei Han; Bo Wu; Sergey Ioffe

Simultaneously segmenting and labeling images is a fundamental problem in computer vision. In this paper, we introduce a hierarchical CRF model to deal with the problem of labeling images of street scenes by several distinctive object classes. In addition to learning a CRF model from all the labeled images, we group images into clusters of similar images and learn a CRF model from each cluster separately. When labeling a new image, we pick the closest cluster and use the associated CRF model to label this image. Experimental results show that this hierarchical image labeling method is comparable to, and in many cases superior to, previous methods on benchmark data sets. In addition to segmentation and labeling results, we also show how to apply the image labeling result to rerank Google similar images.
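The cluster-then-model strategy can be sketched as follows, with a generic per-pixel classifier standing in for the per-cluster CRF and scikit-learn assumed available; feature extraction and CRF inference are outside the scope of this illustration, and all names are hypothetical.

# Sketch of the clustering strategy: cluster training images by a global
# descriptor, fit one labeling model per cluster, and label a new image with
# the model of its nearest cluster.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def train_cluster_models(global_feats, pixel_feats, pixel_labels, n_clusters=4):
    """global_feats: (n_images, d) descriptors used only for clustering.
    pixel_feats / pixel_labels: per-image arrays of pixel features and labels."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(global_feats)
    models = {}
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        X = np.vstack([pixel_feats[i] for i in idx])
        y = np.concatenate([pixel_labels[i] for i in idx])
        # One labeling model per cluster of visually similar images.
        models[c] = LogisticRegression(max_iter=1000).fit(X, y)
    return km, models

def label_image(km, models, global_feat, pixel_feat):
    """Pick the closest cluster for the new image, then label its pixels with
    that cluster's model (the CRF smoothing step is omitted here)."""
    c = int(km.predict(global_feat.reshape(1, -1))[0])
    return models[c].predict(pixel_feat)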


International Conference on Computational Photography | 2012

Shadow removal for aerial imagery by information theoretic intrinsic image analysis

Vivek Kwatra; Mei Han; Shengyang Dai

We present a novel technique for shadow removal based on an information theoretic approach to intrinsic image analysis. Our key observation is that any illumination change in the scene tends to increase the entropy of observed texture intensities. Similarly, the presence of texture in the scene increases the entropy of the illumination function. Consequently, we formulate the separation of an image into texture and illumination components as minimization of entropies of each component. We employ a non-parametric kernel-based quadratic entropy formulation, and present an efficient multi-scale iterative optimization algorithm for minimization of the resulting energy functional. Our technique may be employed either fully automatically, using a proposed learning based method for automatic initialization, or alternatively with a small amount of user interaction. As we demonstrate, our method is particularly suitable for aerial images, which consist of either distinctive texture patterns, e.g. building facades, or soft shadows with large diffuse regions, e.g. cloud shadows.
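A minimal sketch of the non-parametric quadratic (Renyi) entropy this kind of energy is built from, assuming intensities are handled in the log domain so the image splits additively into texture and illumination; the multi-scale optimizer and the learning-based initialization are not shown, and the names below are illustrative.

# Kernel-based quadratic entropy estimate:
#   H2(X) = -log( (1/N^2) * sum_{i,j} G(x_i - x_j; 2*sigma^2) )
# The pairwise computation is O(N^2), which is fine for small sample sets.

import numpy as np

def quadratic_entropy(samples, sigma=0.05):
    """samples: 1-D array of intensities (e.g. log-reflectance values)."""
    x = np.asarray(samples, dtype=np.float64).ravel()
    diff = x[:, None] - x[None, :]                      # pairwise differences
    kernel = np.exp(-diff ** 2 / (4.0 * sigma ** 2))    # Gaussian with variance 2*sigma^2
    kernel /= np.sqrt(4.0 * np.pi * sigma ** 2)
    return -np.log(kernel.mean())

def separation_energy(log_image, log_illumination, sigma=0.05):
    """Energy whose minimization separates texture from illumination: the entropy
    of the texture component (log-image minus log-illumination) plus the entropy
    of the illumination component."""
    log_texture = log_image - log_illumination
    return quadratic_entropy(log_texture, sigma) + quadratic_entropy(log_illumination, sigma)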


European Conference on Computer Vision | 2010

Fast covariance computation and dimensionality reduction for sub-window features in images

Vivek Kwatra; Mei Han

This paper presents algorithms for efficiently computing the covariance matrix for features that form sub-windows in a large multidimensional image. For example, several image processing applications, e.g. texture analysis/synthesis, image retrieval, and compression, operate upon patches within an image. These patches are usually projected onto a low-dimensional feature space using dimensionality reduction techniques such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which in turn requires computation of the covariance matrix from a set of features. Covariance computation is usually the bottleneck during PCA or LDA (O(nd^2), where n is the number of pixels in the image and d is the dimensionality of the feature vector). Our approach reduces the complexity of covariance computation by exploiting the redundancy between feature vectors corresponding to overlapping patches. Specifically, we show that the covariance between two feature components can be reduced to a function of the relative displacement between those components in patch space. One can then employ a lookup table to store covariance values by relative displacement. By operating in the frequency domain, this lookup table can be computed in O(n log n) time. We allow the patches to sub-sample the image, which is useful for hierarchical processing and also enables working with filtered responses over these patches, such as local gist features. We also propose a method for fast projection of sub-window patches onto the low-dimensional space.
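The displacement-lookup idea can be illustrated roughly as follows: treating patch statistics as approximately shift-invariant, the covariance between two patch pixels reduces to the image autocorrelation at their relative displacement, which a single FFT provides in O(n log n). Boundary handling, sub-sampling, and filtered features are simplified away here; the names are illustrative, not the paper's code.

# Covariance lookup table via circular autocorrelation (FFT), then assembly of
# the small patch covariance matrix by relative displacement.

import numpy as np

def covariance_lookup(image):
    """Circular autocorrelation of the zero-mean image, indexed by displacement."""
    x = image.astype(np.float64) - image.mean()
    f = np.fft.fft2(x)
    auto = np.fft.ifft2(f * np.conj(f)).real / x.size
    return auto                      # auto[dy % H, dx % W] ~ covariance at displacement (dy, dx)

def patch_covariance(image, patch=8):
    """Build the (patch*patch) x (patch*patch) covariance matrix of vectorized
    patches from the lookup table instead of enumerating all patches."""
    lut = covariance_lookup(image)
    coords = [(i, j) for i in range(patch) for j in range(patch)]
    d = len(coords)
    cov = np.empty((d, d))
    for a, (ia, ja) in enumerate(coords):
        for b, (ib, jb) in enumerate(coords):
            cov[a, b] = lut[(ia - ib) % lut.shape[0], (ja - jb) % lut.shape[1]]
    return cov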


International Conference on Image Processing | 2010

Example-based image compression

Jingyu Cui; Saurabh Mathur; Michele Covell; Vivek Kwatra; Mei Han

The current standard image-compression approaches rely on fairly simple predictions, using either block- or wavelet-based methods. While many more sophisticated texture-modeling approaches have been proposed, most do not provide a significant improvement in compression rate over the current standards at a workable encoding complexity level. We re-examine this area, using example-based texture prediction. We find that we can provide consistent and significant improvements over JPEG, reducing the bit rate by more than 20% for many PSNR levels. These improvements require consideration of the differences between residual energy and prediction/residual compressibility when selecting a texture prediction, as well as careful control of the computational complexity in encoding.
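To illustrate the selection criterion mentioned above (choosing a texture prediction by how cheaply its residual codes, rather than by residual energy alone), here is a toy sketch that uses zlib as a stand-in for the actual residual coder; it is not the paper's codec, and the helper names are hypothetical.

# Toy comparison of two selection criteria for a candidate texture prediction:
# minimal residual energy vs. minimal coded size of the residual.

import zlib
import numpy as np

def residual_energy(block, prediction):
    return float(np.sum((block.astype(np.int32) - prediction.astype(np.int32)) ** 2))

def residual_coded_size(block, prediction):
    residual = (block.astype(np.int32) - prediction.astype(np.int32)).astype(np.int16)
    return len(zlib.compress(residual.tobytes()))

def pick_prediction(block, candidates):
    """Choose the candidate whose residual costs the fewest bytes to code,
    not the one that merely minimizes residual energy."""
    return min(candidates, key=lambda p: residual_coded_size(block, p))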


Archive | 2013

Spatio-temporal segmentation for video

Matthias Grundmann; Vivek Kwatra; Mei Han


Archive | 2013

Principal component analysis based seed generation for clustering analysis

Sangho Yoon; Jay Yagnik; Mei Han; Vivek Kwatra


Archive | 2009

Content-aware video resizing using discontinuous seam carving

Matthias Grundmann; Vivek Kwatra; Mei Han


Archive | 2011

Removing illumination variation from images

Vivek Kwatra; Mei Han

Collaboration


Dive into Mei Han's collaborations.

Top Co-Authors

Qixing Huang

University of Texas at Austin

Irfan A. Essa

Georgia Institute of Technology
