
Publications


Featured research published by Brian L. Price.


International Conference on Computer Vision | 2015

Minimum Barrier Salient Object Detection at 80 FPS

Jianming Zhang; Stan Sclaroff; Zhe L. Lin; Xiaohui Shen; Brian L. Price; Radomir Mech

We propose a highly efficient, yet powerful, salient object detection method based on the Minimum Barrier Distance (MBD) Transform. The MBD transform is robust to pixel-value fluctuation, and thus can be effectively applied on raw pixels without region abstraction. We present an approximate MBD transform algorithm with 100X speedup over the exact algorithm. An error bound analysis is also provided. Powered by this fast MBD transform algorithm, the proposed salient object detection method runs at 80 FPS, and significantly outperforms previous methods with similar speed on four large benchmark datasets, and achieves comparable or better performance than state-of-the-art methods. Furthermore, a technique based on color whitening is proposed to extend our method to leverage the appearance-based backgroundness cue. This extended version further improves the performance, while still being one order of magnitude faster than all the other leading methods.
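
To make the raster-scan approximation concrete, here is a minimal NumPy sketch of a fast-MBD-style pass, assuming a grayscale image and a seed mask (for example, the image border); the pass count and helper names are illustrative, and this is not the authors' released code.

```python
import numpy as np

def fast_mbd(image, seed_mask, n_passes=3):
    """Approximate Minimum Barrier Distance transform via raster scans.

    A minimal sketch of the raster-scan idea described in the abstract,
    not the authors' implementation. `image` is a 2-D float array and
    `seed_mask` marks the seed pixels (e.g. the image border).
    """
    h, w = image.shape
    dist = np.where(seed_mask, 0.0, np.inf)  # current barrier distance
    hi = image.copy()                        # highest value on the current path
    lo = image.copy()                        # lowest value on the current path

    def relax(y, x, ny, nx):
        # Try extending the neighbour's path by pixel (y, x).
        if not np.isfinite(dist[ny, nx]):
            return  # neighbour not yet reached from any seed
        new_hi = max(hi[ny, nx], image[y, x])
        new_lo = min(lo[ny, nx], image[y, x])
        if new_hi - new_lo < dist[y, x]:
            dist[y, x], hi[y, x], lo[y, x] = new_hi - new_lo, new_hi, new_lo

    for p in range(n_passes):
        if p % 2 == 0:  # forward pass: upper/left neighbours
            for y in range(h):
                for x in range(w):
                    if y > 0: relax(y, x, y - 1, x)
                    if x > 0: relax(y, x, y, x - 1)
        else:           # backward pass: lower/right neighbours
            for y in range(h - 1, -1, -1):
                for x in range(w - 1, -1, -1):
                    if y < h - 1: relax(y, x, y + 1, x)
                    if x < w - 1: relax(y, x, y, x + 1)
    return dist
```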


Computer Vision and Pattern Recognition | 2010

Geodesic graph cut for interactive image segmentation

Brian L. Price; Bryan S. Morse; Scott D. Cohen

Interactive segmentation is useful for selecting objects of interest in images and continues to be a topic of much study. Methods that grow regions from foreground/background seeds, such as the recent geodesic segmentation approach, avoid the boundary-length bias of graph-cut methods but have their own bias towards minimizing paths to the seeds, resulting in increased sensitivity to seed placement. The lack of edge modeling in geodesic or similar approaches limits their ability to precisely localize object boundaries, something at which graph-cut methods generally excel. This paper presents a method for combining geodesic-distance information with edge information in a graph-cut optimization framework, leveraging the complementary strengths of each. Rather than a fixed combination, we use the distinctiveness of the foreground/background color models to predict the effectiveness of the geodesic distance term and adjust the weighting accordingly. We also introduce a spatially varying weighting that decreases the potential for shortcutting in object interiors while transferring greater control to the edge term for better localization near object boundaries. Results show our method is less prone to shortcutting than typical graph-cut methods while being less sensitive to seed placement and better at edge localization than geodesic methods. This leads to increased segmentation accuracy and reduced effort on the part of the user.
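
The following sketch illustrates, under stated assumptions, how geodesic and color-model evidence might be mixed into per-pixel unary costs before graph cut; the inputs and the scalar `model_separation` weight are placeholders standing in for the paper's learned weighting, not its actual formulation.

```python
import numpy as np

def unary_terms(geo_fg, geo_bg, p_fg, p_bg, model_separation):
    """Schematic unary costs mixing geodesic and colour-model evidence.

    geo_fg / geo_bg : per-pixel geodesic distances to the FG / BG seeds.
    p_fg / p_bg     : per-pixel likelihoods under the FG / BG colour models.
    model_separation: scalar in [0, 1] estimating how distinct the two
                      colour models are; the geodesic term is trusted more
                      when the models are well separated (an assumption
                      about the weighting described in the abstract).
    """
    # Normalised geodesic evidence: small distance to FG seeds suggests FG.
    geo_cost_fg = geo_fg / (geo_fg + geo_bg + 1e-8)
    geo_cost_bg = geo_bg / (geo_fg + geo_bg + 1e-8)

    # Colour-model evidence as negative log likelihoods.
    col_cost_fg = -np.log(p_fg + 1e-8)
    col_cost_bg = -np.log(p_bg + 1e-8)

    w = model_separation  # weight of the geodesic term
    cost_fg = w * geo_cost_fg + (1.0 - w) * col_cost_fg
    cost_bg = w * geo_cost_bg + (1.0 - w) * col_cost_bg
    return cost_fg, cost_bg  # fed to a graph-cut solver with pairwise edge terms
```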


International Conference on Computer Vision | 2009

LIVEcut: Learning-based interactive video segmentation by evaluation of multiple propagated cues

Brian L. Price; Bryan S. Morse; Scott D. Cohen

Video sequences contain many cues that may be used to segment objects in them, such as color, gradient, color adjacency, shape, temporal coherence, camera and object motion, and easily-trackable points. This paper introduces LIVEcut, a novel method for interactively selecting objects in video sequences by extracting and leveraging as much of this information as possible. Using a graph-cut optimization framework, LIVEcut propagates the selection forward frame by frame, allowing the user to correct any mistakes along the way if needed. Enhanced methods of extracting many of the features are provided. In order to use the most accurate information from the various potentially-conflicting features, each feature is automatically weighted locally based on its estimated accuracy using the previous implicitly-validated frame. Feature weights are further updated by learning from the user corrections required in the previous frame. The effectiveness of LIVEcut is shown through timing comparisons to other interactive methods, accuracy comparisons to unsupervised methods, and qualitatively through selections on various video sequences.
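
As a toy illustration of weighting cues by their accuracy on the previous, implicitly validated frame, the sketch below nudges each cue's weight toward its empirical agreement with the corrected mask; the cue dictionary, the learning rate, and the global (rather than local) weighting are simplifications, not the paper's formulation.

```python
import numpy as np

def update_cue_weights(cue_predictions, corrected_mask, prev_weights, lr=0.5):
    """Toy per-cue reliability weighting between frames.

    cue_predictions: dict of {cue_name: HxW soft foreground map} produced
                     by propagating each cue from the previous frame.
    corrected_mask : the previous frame's final (user-corrected) selection.
    prev_weights   : dict of {cue_name: weight}.
    """
    new_weights = {}
    for name, pred in cue_predictions.items():
        # How often did this cue agree with the validated segmentation?
        accuracy = np.mean((pred > 0.5) == corrected_mask)
        new_weights[name] = (1 - lr) * prev_weights[name] + lr * accuracy
    total = sum(new_weights.values())
    return {name: w / total for name, w in new_weights.items()}
```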


Computer Vision and Pattern Recognition | 2014

Context Driven Scene Parsing with Attention to Rare Classes

Jimei Yang; Brian L. Price; Scott D. Cohen; Ming-Hsuan Yang

This paper presents a scalable scene parsing algorithm based on image retrieval and superpixel matching. We focus on rare object classes, which play an important role in achieving richer semantic understanding of visual scenes, compared to common background classes. Towards this end, we make two novel contributions: rare class expansion and semantic context description. First, considering the long-tailed nature of the label distribution, we expand the retrieval set by rare class exemplars and thus achieve more balanced superpixel classification results. Second, we incorporate both global and local semantic context information through a feedback based mechanism to refine image retrieval and superpixel matching. Results on the SIFTflow and LMSun datasets show the superior performance of our algorithm, especially on the rare classes, without sacrificing overall labeling accuracy.
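
A hypothetical reading of the rare-class expansion step is sketched below: classes in the tail of the label distribution get a few extra exemplars appended to the retrieval set so that superpixel matching can see them. The quantile threshold and exemplar count are invented parameters, not the paper's.

```python
def expand_retrieval_set(retrieved, exemplars_by_class, label_counts,
                         rare_quantile=0.2, per_class=3):
    """Illustrative rare-class expansion of an image retrieval set.

    retrieved         : list of image ids returned by global image retrieval.
    exemplars_by_class: dict mapping each class to a list of exemplar image ids.
    label_counts      : dict of class frequencies over the whole dataset.
    """
    freqs = sorted(label_counts.values())
    cutoff = freqs[int(rare_quantile * len(freqs))]  # frequency of the rare tail
    expanded = list(retrieved)
    for cls, count in label_counts.items():
        if count <= cutoff:
            # Append a few exemplars of each rare class to balance matching.
            expanded.extend(exemplars_by_class.get(cls, [])[:per_class])
    return expanded
```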


IEEE Transactions on Image Processing | 2015

Inner and Inter Label Propagation: Salient Object Detection in the Wild

Hongyang Li; Huchuan Lu; Zhe L. Lin; Xiaohui Shen; Brian L. Price

In this paper, we propose a novel label propagation-based method for saliency detection. A key observation is that saliency in an image can be estimated by propagating the labels extracted from the most certain background and object regions. For most natural images, some boundary superpixels serve as the background labels, and the saliency of the other superpixels is determined by ranking their similarities to the boundary labels based on an inner propagation scheme. For images of complex scenes, we further deploy a three-cue-center-biased objectness measure to pick out and propagate foreground labels. A co-transduction algorithm is devised to fuse both boundary and objectness labels based on an inter propagation scheme. A compactness criterion decides whether the incorporation of objectness labels is necessary, thus greatly enhancing computational efficiency. Results on five benchmark data sets with pixelwise accurate annotations show that the proposed method achieves superior performance compared with the most recent state-of-the-art methods in terms of different evaluation metrics.
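
The snippet below shows a generic graph-based label propagation over superpixels, which is the flavor of inner propagation the abstract describes: seed labels (for example, boundary superpixels) are diffused over an affinity graph and the diffused scores rank the remaining superpixels. The closed-form manifold-ranking-style solver used here is a standard choice for illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

def propagate_labels(affinity, seed_indices, alpha=0.99):
    """Schematic label propagation over a superpixel graph.

    affinity     : NxN symmetric superpixel affinity matrix W.
    seed_indices : indices of the most certain seed superpixels
                   (e.g. boundary superpixels used as background labels).
    """
    n = affinity.shape[0]
    d = affinity.sum(axis=1)
    S = affinity / np.sqrt(np.outer(d, d) + 1e-12)  # symmetrically normalised W
    y = np.zeros(n)
    y[seed_indices] = 1.0                            # seed label vector
    f = np.linalg.solve(np.eye(n) - alpha * S, y)    # closed-form diffusion
    return f / (f.max() + 1e-12)                     # normalised ranking scores
```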


Computer Vision and Pattern Recognition | 2015

Towards unified depth and semantic prediction from a single image

Peng Wang; Xiaohui Shen; Zhe Lin; Scott D. Cohen; Brian L. Price; Alan L. Yuille

Depth estimation and semantic segmentation are two fundamental problems in image understanding. While the two tasks are strongly correlated and mutually beneficial, they are usually solved separately or sequentially. Motivated by the complementary properties of the two tasks, we propose a unified framework for joint depth and semantic prediction. Given an image, we first use a trained Convolutional Neural Network (CNN) to jointly predict a global layout composed of pixel-wise depth values and semantic labels. By allowing for interactions between the depth and semantic information, the joint network provides more accurate depth prediction than a state-of-the-art CNN trained solely for depth prediction [6]. To further obtain fine-level details, the image is decomposed into local segments for region-level depth and semantic prediction under the guidance of the global layout. Utilizing the pixel-wise global prediction and region-wise local prediction, we formulate the inference problem in a two-layer Hierarchical Conditional Random Field (HCRF) to produce the final depth and semantic map. As demonstrated in the experiments, our approach effectively leverages the advantages of both tasks and provides state-of-the-art results.
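
A toy PyTorch module below illustrates the general idea of a shared trunk feeding separate depth and semantic heads, so both losses shape the same features; the layer sizes are illustrative, and the region-level stage and HCRF inference described in the abstract are omitted.

```python
import torch
import torch.nn as nn

class JointDepthSemanticNet(nn.Module):
    """Toy shared-trunk network with depth and semantic heads.

    Illustrates joint pixel-wise prediction only; this is not the
    paper's architecture.
    """
    def __init__(self, num_classes=40):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Conv2d(64, 1, 1)               # per-pixel depth
        self.semantic_head = nn.Conv2d(64, num_classes, 1)  # per-pixel class scores

    def forward(self, x):
        feats = self.trunk(x)
        return self.depth_head(feats), self.semantic_head(feats)

# Joint training lets both losses shape the shared features, e.g.
# loss = l1(depth_pred, depth_gt) + cross_entropy(sem_pred, sem_gt)
```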


Computer Vision and Pattern Recognition | 2016

Object Contour Detection with a Fully Convolutional Encoder-Decoder Network

Jimei Yang; Brian L. Price; Scott D. Cohen; Honglak Lee; Ming-Hsuan Yang

We develop a deep learning algorithm for contour detection with a fully convolutional encoder-decoder network. Different from previous low-level edge detection, our algorithm focuses on detecting higher-level object contours. Our network is trained end-to-end on PASCAL VOC with refined ground truth from inaccurate polygon annotations, yielding much higher precision in object contour detection than previous methods. We find that the learned model generalizes well to unseen object classes from the same supercategories on MS COCO and can match state-of-the-art edge detection on BSDS500 with fine-tuning. By combining with the multiscale combinatorial grouping algorithm, our method can generate high-quality segmented object proposals, which significantly advance the state of the art on PASCAL VOC (improving average recall from 0.62 to 0.67) with a relatively small number of candidates (~1660 per image).
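
For orientation, here is a minimal fully convolutional encoder-decoder in PyTorch that maps an image to a per-pixel contour probability; the channel widths and depth are illustrative and far smaller than the network described in the paper.

```python
import torch
import torch.nn as nn

class ContourEncoderDecoder(nn.Module):
    """Minimal encoder-decoder producing a per-pixel contour map.

    A sketch of the overall shape of such a model, not the paper's network.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        # Returns an HxW map of contour probabilities in [0, 1].
        return torch.sigmoid(self.decoder(self.encoder(x)))
```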


Computer Vision and Pattern Recognition | 2013

Improving Image Matting Using Comprehensive Sampling Sets

Ehsan Shahrian; Deepu Rajan; Brian L. Price; Scott D. Cohen

In this paper, we present a new image matting algorithm that achieves state-of-the-art performance on a benchmark dataset of images. This is achieved by solving two major problems encountered by current sampling-based algorithms. The first is that the range in which the foreground and background are sampled is often limited to such an extent that the true foreground and background colors are not present. Here, we describe a method by which a more comprehensive and representative set of samples is collected so as not to miss out on the true samples. This is accomplished by expanding the sampling range for pixels farther from the foreground or background boundary and ensuring that samples from each color distribution are included. The second problem is the overlap in color distributions of foreground and background regions. This causes sampling-based methods to fail to pick the correct samples for foreground and background. Our design of an objective function forces those foreground and background samples to be picked that are generated from well-separated distributions. Comparison and evaluation on the dataset at www.alphamatting.com show that the proposed method ranks first in terms of the error measures used on the website.
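
The sketch below shows one way such a pair-selection objective could be written: for each unknown pixel, candidate foreground/background sample pairs are scored by how well they reconstruct the pixel and by how well separated their colors are, penalizing overlapping distributions. The weighting and names are assumptions for illustration, not the paper's objective function.

```python
import numpy as np

def best_sample_pair(pixel, fg_samples, bg_samples, sep_weight=1.0):
    """Pick a foreground/background sample pair for one unknown pixel.

    pixel      : RGB colour of the unknown pixel (length-3 array).
    fg_samples : candidate foreground colours (list of length-3 arrays).
    bg_samples : candidate background colours (list of length-3 arrays).
    """
    best, best_cost = None, np.inf
    for f in fg_samples:
        for b in bg_samples:
            fb = f - b
            denom = np.dot(fb, fb) + 1e-8
            # Alpha that best explains the pixel as a mix of f and b.
            alpha = np.clip(np.dot(pixel - b, fb) / denom, 0.0, 1.0)
            recon_err = np.linalg.norm(pixel - (alpha * f + (1 - alpha) * b))
            separation = np.linalg.norm(fb)
            # Penalise pairs drawn from overlapping colour distributions.
            cost = recon_err + sep_weight / (separation + 1e-8)
            if cost < best_cost:
                best, best_cost = (f, b, alpha), cost
    return best  # (foreground colour, background colour, estimated alpha)
```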


Computers & Graphics | 2007

Interactive segmentation of image volumes with Live Surface

Christopher J. Armstrong; Brian L. Price; William A. Barrett

Live Surface allows users to segment and render complex surfaces from 3D image volumes at interactive (sub-second) rates using a novel cascading graph cut (CGC). Live Surface consists of two phases: (1) preprocessing for generation of a complete 3D hierarchy of tobogganed regions, followed by tracking of all region surfaces; (2) user interaction in which, with each mouse movement, the volume is segmented and the 3D object is rendered at interactive rates. Interactive segmentation is accomplished by cascading through the 3D hierarchy from the top, applying graph cut successively, at each level, only to regions bordering the segmented surface from the previous level. CGC allows the entire image volume to be segmented an order of magnitude faster than existing techniques that make use of graph cut. OpenGL rendering provides for display and update of the segmented surface at interactive rates. The user selects objects by tagging voxels with either foreground (object) or background seeds. Seeds can be placed on image cross-sections or directly on the 3D rendered surface. Interaction with the rendered surface improves the user's ability to steer the segmentation, augmenting or subtracting from the current selection. Segmentation and rendering, combined, are accomplished in about 0.35 s, allowing 3D surfaces to be displayed and updated dynamically as each additional seed is deposited. The immediate feedback of Live Surface allows the segmentation of 3D image volumes using an interaction paradigm similar to the Live Wire (Intelligent Scissors) tool used in 2D images.
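
A structural sketch of the cascading idea follows: graph cut is re-run at each finer hierarchy level only on regions that border the previous level's segmentation boundary, while interior regions inherit their parent's label. The data-structure attributes and the caller-supplied `solve_cut` routine are assumptions for illustration, not the authors' implementation.

```python
def cascading_graph_cut(levels, seeds, solve_cut):
    """Structural sketch of cascading graph cut over a region hierarchy.

    levels    : hierarchy levels, coarsest first; each region exposes
                .children (regions at the next finer level) and .neighbours.
    seeds     : the user's foreground/background constraints.
    solve_cut : caller-supplied graph-cut routine; given regions and seeds,
                it returns the set of regions labelled foreground.
    """
    foreground = solve_cut(set(levels[0]), seeds)  # cut the coarsest level fully
    for coarse in levels[:-1]:
        # Regions (on either side) touching the current segmentation boundary.
        on_boundary = {r for r in coarse
                       if any((n in foreground) != (r in foreground)
                              for n in r.neighbours)}
        recut = [child for r in on_boundary for child in r.children]
        # Children of interior foreground regions keep their parent's label.
        inherited = {child for r in (foreground - on_boundary)
                     for child in r.children}
        foreground = inherited | solve_cut(recut, seeds)
    return foreground  # foreground regions at the finest level
```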


The Visual Computer | 2006

Object-based vectorization for interactive image editing

Brian L. Price; William A. Barrett

We present a technique for creating an editable vector graphic from an object in a raster image. Object selection is performed interactively in sub-second time by calling graph cut with each mouse movement. A renderable mesh is then computed automatically for the selected object and each of its subobjects by (1) generating a coarse object mesh; (2) performing recursive graph-cut segmentation and hierarchical ordering of subobjects; (3) applying error-driven mesh refinement to each (sub)object. The fully layered object hierarchy compares favorably with current approaches and is computed in a few tens of seconds, facilitating object-level editing without leaving holes.
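
As a small illustration of the error-driven refinement step, the loop below subdivides any mesh cell whose rendered approximation deviates too much from the raster image; all function and parameter names here are hypothetical placeholders, not the paper's API.

```python
def refine_mesh(cells, render_error, max_error, subdivide):
    """Toy error-driven mesh refinement loop.

    cells        : initial coarse mesh cells for one (sub)object.
    render_error : function giving the colour error between a cell's
                   rendered approximation and the raster image.
    max_error    : error tolerance below which a cell is kept as-is.
    subdivide    : function splitting one cell into smaller cells.
    """
    refined = []
    stack = list(cells)
    while stack:
        cell = stack.pop()
        if render_error(cell) <= max_error:
            refined.append(cell)        # good enough: keep the cell
        else:
            stack.extend(subdivide(cell))  # too coarse: refine further
    return refined
```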

Collaboration


Dive into Brian L. Price's collaborations.

Top Co-Authors

Bryan S. Morse (Brigham Young University)

Peng Wang (University of California)