Eric N. Mortensen
Oregon State University
Publications
Featured research published by Eric N. Mortensen.
International Conference on Computer Graphics and Interactive Techniques | 1995
Eric N. Mortensen; William A. Barrett
We present a new, interactive tool called Intelligent Scissors which we use for image segmentation and composition. Fully automated segmentation is an unsolved problem, while manual tracing is inaccurate and unacceptably laborious. However, Intelligent Scissors allow objects within digital images to be extracted quickly and accurately using simple gesture motions with a mouse. When the gestured mouse position comes in proximity to an object edge, a live-wire boundary “snaps” to, and wraps around the object of interest. Live-wire boundary detection formulates discrete dynamic programming (DP) as a two-dimensional graph searching problem. DP provides mathematically optimal boundaries while greatly reducing sensitivity to local noise or other intervening structures. Robustness is further enhanced with on-the-fly training which causes the boundary to adhere to the specific type of edge currently being followed, rather than simply the strongest edge in the neighborhood. Boundary cooling automatically freezes unchanging segments and automates input of additional seed points. Cooling also allows the user to be much more free with the gesture path, thereby increasing the efficiency and finesse with which boundaries can be extracted. Extracted objects can be scaled, rotated, and composited using live-wire masks and spatial frequency equivalencing. Frequency equivalencing is performed by applying a Butterworth filter which matches the lowest frequency spectra to all other image components. Intelligent Scissors allow creation of convincing compositions from existing images while dramatically increasing the speed and precision with which objects can be extracted.
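The optimal-path formulation above can be illustrated with Dijkstra's algorithm on a pixel grid. This is a minimal sketch, not the authors' implementation: the paper's multi-term local cost function is reduced to a single per-pixel step cost, and the names `live_wire`, `cost`, `seed`, and `target` are hypothetical.

```python
import heapq

def live_wire(cost, seed, target):
    """Toy live-wire search: the boundary from `seed` to `target` is
    the minimum-cost path over an 8-connected pixel grid, where
    cost[r][c] is the cost of stepping onto pixel (r, c)."""
    rows, cols = len(cost), len(cost[0])
    dist = {seed: 0}
    prev = {}
    pq = [(0, seed)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == target:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr][nc]
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(pq, (nd, (nr, nc)))
    # walk back from the free point to the seed
    path, node = [target], target
    while node != seed:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

As the user moves the free point, only the back-trace changes; the expensive wavefront expansion from the seed is shared across mouse positions, which is what makes the interaction feel immediate.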
Graphical Models and Image Processing | 1998
Eric N. Mortensen; William A. Barrett
We present a new, interactive tool called Intelligent Scissors which we use for image segmentation. Fully automated segmentation is an unsolved problem, while manual tracing is inaccurate and unacceptably laborious. However, Intelligent Scissors allow objects within digital images to be extracted quickly and accurately using simple gesture motions with a mouse. When the gestured mouse position comes in proximity to an object edge, a live-wire boundary “snaps” to, and wraps around the object of interest. Live-wire boundary detection formulates boundary detection as an optimal path search in a weighted graph. Optimal graph searching provides mathematically piece-wise optimal boundaries while greatly reducing sensitivity to local noise or other intervening structures. Robustness is further enhanced with on-the-fly training which causes the boundary to adhere to the specific type of edge currently being followed, rather than simply the strongest edge in the neighborhood. Boundary cooling automatically freezes unchanging segments and automates input of additional seed points. Cooling also allows the user to be much more free with the gesture path, thereby increasing the efficiency and finesse with which boundaries can be extracted.
Computer Vision and Pattern Recognition | 2005
Eric N. Mortensen; Hongli Deng; Linda G. Shapiro
Matching points between multiple images of a scene is a vital component of many computer vision tasks. Point matching involves creating a succinct and discriminative descriptor for each point. While current descriptors such as SIFT can find matches between features with unique local neighborhoods, these descriptors typically fail to consider global context to resolve ambiguities that can occur locally when an image has multiple similar regions. This paper presents a feature descriptor that augments SIFT with a global context vector that adds curvilinear shape information from a much larger neighborhood, thus reducing mismatches when multiple local descriptors are similar. It also provides a more robust method for handling 2D nonrigid transformations since points are more effectively matched individually at a global scale rather than constraining multiple matched points to be mapped via a planar homography. We have tested our technique on various images and compare matching accuracy between the SIFT descriptor with global context to that without.
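The core idea of augmenting a local descriptor with global context can be sketched as nearest-neighbour matching under a combined distance. This is only an illustration of the matching step, not the paper's descriptor construction (the curvilinear shape context is not reproduced); `match_features` and the mixing weight `w` are assumed names.

```python
def match_features(feats_a, feats_b, w=0.5):
    """Match each feature in image A to its nearest neighbour in
    image B. A feature is a pair (local_desc, context_desc); the
    distance blends the local (SIFT-like) distance with the global
    context distance, so points with identical local neighbourhoods
    can still be disambiguated by their surroundings."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    matches = []
    for i, (loc_a, ctx_a) in enumerate(feats_a):
        best_j, best_d = None, float("inf")
        for j, (loc_b, ctx_b) in enumerate(feats_b):
            d = w * dist(loc_a, loc_b) + (1 - w) * dist(ctx_a, ctx_b)
            if d < best_d:
                best_j, best_d = j, d
        matches.append((i, best_j))
    return matches
```

With `w = 1` this degenerates to plain local matching; lowering `w` lets the context term break ties between repeated local structures.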
Computing in Cardiology Conference | 1992
Eric N. Mortensen; Bryan S. Morse; William A. Barrett; Jayaram K. Udupa
An adaptive boundary detection algorithm that uses two-dimensional dynamic programming (DP) is presented. The algorithm is less constrained than previous one-dimensional dynamic programming algorithms and allows the user to interactively determine the mathematically optimal boundary between a user-selected seed point and any other dynamically selected free point in the image. Interactive movement of the free point by the cursor causes the boundary to behave like a live wire as it adapts to the new minimum cost path between the seed point and the currently selected free point. The algorithm can also be adapted or customized to learn boundary-defining features for a particular class of images. Adaptive 2-D DP performs well on a variety of images. It accurately detects the boundaries of low contrast objects, which occur with intravenous injections, as well as those found in noisy, low SNR images.
Computer Vision and Pattern Recognition | 1999
Eric N. Mortensen; William A. Barrett
Intelligent Scissors is an interactive image segmentation tool that allows a user to select piece-wise globally optimal contour segments that correspond to a desired object boundary. We present a new and faster method of computing the optimal path by over-segmenting the image using tobogganing and then imposing a weighted planar graph on top of the resulting region boundaries. The resulting region-based graph is many times smaller than the previous pixel-based graph, thus providing faster graph searches and immediate user interaction. Further, tobogganing provides a new, systematic, and predictable framework for computing edge model parameters, allowing subpixel localization as well as a measure of edge blur.
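Tobogganing itself can be shown with a toy version: each pixel slides downhill to its lowest-valued 4-neighbour until it reaches a local minimum, and pixels that reach the same minimum form one region. This sketch omits the paper's edge-model parameter estimation and region-graph construction; `toboggan` and `grad` are hypothetical names.

```python
def toboggan(grad):
    """Label each pixel of a gradient-magnitude grid with the local
    minimum it slides to. Ties are broken by (value, coordinate), so
    every slide strictly decreases that pair and must terminate."""
    rows, cols = len(grad), len(grad[0])

    def slide(r, c):
        while True:
            best = (grad[r][c], (r, c))
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    best = min(best, (grad[nr][nc], (nr, nc)))
            if best[1] == (r, c):
                return (r, c)  # reached a local minimum: the region label
            r, c = best[1]

    return [[slide(r, c) for c in range(cols)] for r in range(rows)]
```

Region boundaries then fall between pixels with different labels, and the optimal-path search runs on that much smaller boundary graph instead of the full pixel grid.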
Machine Vision and Applications | 2008
Natalia Larios; Hongli Deng; Wei Zhang; Matt Sarpola; Jenny Yuen; Robert Paasch; Andrew R. Moldenke; David A. Lytle; Salvador Ruiz Correa; Eric N. Mortensen; Linda G. Shapiro; Thomas G. Dietterich
This paper describes a computer vision approach to automated rapid-throughput taxonomic identification of stonefly larvae. The long-term objective of this research is to develop a cost-effective method for environmental monitoring based on automated identification of indicator species. Recognition of stonefly larvae is challenging because they are highly articulated, they exhibit a high degree of intraspecies variation in size and color, and some species are difficult to distinguish visually, despite prominent dorsal patterning. The stoneflies are imaged via an apparatus that manipulates the specimens into the field of view of a microscope so that images are obtained under highly repeatable conditions. The images are then classified through a process that involves (a) identification of regions of interest, (b) representation of those regions as SIFT vectors (Lowe, in Int J Comput Vis 60(2):91–110, 2004), (c) classification of the SIFT vectors into learned “features” to form a histogram of detected features, and (d) classification of the feature histogram via state-of-the-art ensemble classification algorithms. The steps (a) to (c) compose the concatenated feature histogram (CFH) method. We apply three region detectors for part (a) above, including a newly developed principal curvature-based region (PCBR) detector. This detector finds stable regions of high curvature via a watershed segmentation algorithm. We compute a separate dictionary of learned features for each region detector, and then concatenate the histograms prior to the final classification step. We evaluate this classification methodology on a task of discriminating among four stonefly taxa, two of which, Calineuria and Doroneuria, are difficult even for experts to discriminate. The results show that the combination of all three detectors gives four-class accuracy of 82% and three-class accuracy (pooling Calineuria and Doroneuria) of 95%. Each region detector makes a valuable contribution.
In particular, our new PCBR detector is able to discriminate Calineuria and Doroneuria much better than the other detectors.
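The CFH construction in steps (b)–(c) can be sketched as vector quantization followed by histogram concatenation. This is a minimal illustration under the assumption that each detector already has a learned codeword dictionary; `cfh` and its argument names are hypothetical.

```python
def cfh(descriptor_sets, dictionaries):
    """Concatenated feature histogram: for each detector, assign its
    descriptors to the nearest codeword in that detector's dictionary,
    count assignments into a histogram, then concatenate the
    per-detector histograms into one feature vector."""
    def nearest(desc, words):
        # index of the codeword with smallest squared distance
        return min(range(len(words)),
                   key=lambda k: sum((a - b) ** 2
                                     for a, b in zip(desc, words[k])))

    hist = []
    for descs, words in zip(descriptor_sets, dictionaries):
        counts = [0] * len(words)
        for d in descs:
            counts[nearest(d, words)] += 1
        hist.extend(counts)
    return hist
```

The concatenated vector is what feeds the ensemble classifier in step (d); keeping the detectors' histograms separate (rather than pooling one dictionary) is the paper's way of letting each detector contribute its own vocabulary.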
Computer Vision and Pattern Recognition | 2009
Gonzalo Martínez-Muñoz; Natalia Larios; Eric N. Mortensen; Wei Zhang; Asako Yamamuro; Robert Paasch; Nadia Payet; David A. Lytle; Linda G. Shapiro; Sinisa Todorovic; Andrew R. Moldenke; Thomas G. Dietterich
Current work in object categorization discriminates among objects that typically possess gross differences which are readily apparent. However, many applications require making much finer distinctions. We address an insect categorization problem that is so challenging that even trained human experts cannot readily categorize images of insects considered in this paper. The state of the art that uses visual dictionaries, when applied to this problem, yields mediocre results (16.1% error). Three possible explanations for this are (a) the dictionaries are unsupervised, (b) the dictionaries lose the detailed information contained in each keypoint, and (c) these methods rely on hand-engineered decisions about dictionary size. This paper presents a novel, dictionary-free methodology. A random forest of trees is first trained to predict the class of an image based on individual keypoint descriptors. A unique aspect of these trees is that they do not make decisions but instead merely record evidence, i.e., the number of descriptors from training examples of each category that reached each leaf of the tree. We provide a mathematical model showing that voting evidence is better than voting decisions. To categorize a new image, descriptors for all detected keypoints are “dropped” through the trees, and the evidence at each leaf is summed to obtain an overall evidence vector. This is then sent to a second-level classifier to make the categorization decision. We achieve excellent performance (6.4% error) on the 9-class STONEFLY9 data set. Also, our method achieves an average AUC of 0.921 on the PASCAL06 VOC, which places it fifth out of 21 methods reported in the literature and demonstrates that the method also works well for generic object categorization.
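The evidence-voting step can be sketched as follows. Each tree is modelled here simply as a function from a descriptor to the per-class training counts stored at the leaf it reaches; the tree-growing procedure and second-level classifier are not shown, and `evidence_vote` is a hypothetical name.

```python
def evidence_vote(keypoint_descs, trees, n_classes):
    """Drop every keypoint descriptor through every tree and sum the
    per-class training counts at the reached leaves. Unlike a
    conventional forest, no tree casts a hard vote; the soft evidence
    vector is what gets passed to the second-level classifier."""
    evidence = [0] * n_classes
    for desc in keypoint_descs:
        for tree in trees:
            leaf_counts = tree(desc)  # counts of training descriptors per class
            for c, n in enumerate(leaf_counts):
                evidence[c] += n
    return evidence
```

Summing counts instead of decisions preserves how confident each leaf is: a leaf reached by 30 training descriptors of one class contributes far more than a leaf split 2-to-1.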
VBC '96 Proceedings of the 4th International Conference on Visualization in Biomedical Computing | 1996
William A. Barrett; Eric N. Mortensen
We present an interactive tool for efficient, accurate, and reproducible boundary extraction which requires minimal user input with a mouse. Optimal boundaries are computed and selected at interactive rates as the user moves the mouse starting from a user-selected seed point. When the mouse position comes in proximity to an object edge, a “live-wire” boundary snaps to, and wraps around the object of interest. Input of a new seed point “freezes” the selected boundary segment, and the process is repeated until the boundary is complete. Data-driven boundary cooling generates seed points automatically and further reduces user input. On-the-fly training adapts the dynamic boundary to edges of current interest.
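Data-driven boundary cooling can be illustrated with a toy bookkeeping scheme: a boundary pixel "freezes" once it has survived unchanged across several consecutive live-wire updates. This is only a sketch of the idea, not the paper's cooling criterion; `cool_boundary` and `stable_for` are assumed names.

```python
def cool_boundary(histories, stable_for=3):
    """Given a sequence of live-wire paths (one per mouse update),
    return pixels that stayed on the path for `stable_for`
    consecutive updates. Frozen pixels act as automatic seed points,
    reducing the input the user must provide."""
    frozen = []
    streak = {}
    for path in histories:
        on_path = set(path)
        for p in list(streak):      # pixel fell off the path: reset
            if p not in on_path:
                del streak[p]
        for p in path:
            streak[p] = streak.get(p, 0) + 1
            if streak[p] == stable_for and p not in frozen:
                frozen.append(p)    # segment has "cooled"
    return frozen
```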
Journal of the North American Benthological Society | 2010
David A. Lytle; Gonzalo Martínez-Muñoz; Wei Zhang; Natalia Larios; Linda G. Shapiro; Robert Paasch; Andrew R. Moldenke; Eric N. Mortensen; Sinisa Todorovic; Thomas G. Dietterich
We present a visually based method for the taxonomic identification of benthic invertebrates that automates image capture, image processing, and specimen classification. The BugID system automatically positions and images specimens with minimal user input. Images are then processed with interest operators (machine-learning algorithms for locating informative visual regions) to identify informative pattern features, and this information is used to train a classifier algorithm. Naïve Bayes modeling of stacked decision trees is used to determine whether a specimen is an unknown distractor (taxon not in the training data set) or one of the species in the training set. When tested on images from 9 larval stonefly taxa, BugID correctly identified 94.5% of images, even though small or damaged specimens were included in testing. When distractor taxa (10 common invertebrates not present in the training set) were included to make classification more challenging, overall accuracy decreased but generally was close to 90%. At the equal error rate (EER), 89.5% of stonefly images were correctly classified and the accuracy of nonrejected stoneflies increased to 96.4%, a result suggesting that many difficult-to-identify or poorly imaged stonefly specimens had been rejected prior to classification. BugID is the first system of its kind that allows users to select thresholds for rejection depending on the required use. Rejected images of distractor taxa or difficult specimens can be identified later by a taxonomic expert, and new taxa ultimately can be incorporated into the training set of known taxa. BugID has several advantages over other automated insect classification systems, including automated handling of specimens, the ability to isolate nontarget and novel species, and the ability to identify specimens across different stages of larval development.
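The user-selectable rejection described above amounts to thresholding the classifier's confidence: a specimen is labelled only when its top class score clears the threshold, and otherwise flagged for expert review. A minimal sketch, with `classify_with_reject` as an assumed name and a generic probability vector standing in for the system's actual score.

```python
def classify_with_reject(probs, threshold):
    """Return the arg-max class index if its probability reaches
    `threshold`; otherwise return None to mark the specimen as a
    possible distractor / hard case for later expert identification."""
    best = max(range(len(probs)), key=probs.__getitem__)
    return best if probs[best] >= threshold else None
```

Raising the threshold trades throughput for accuracy: more specimens are deferred to the expert, but the accuracy on the non-rejected specimens rises, which is the effect reported at the equal error rate above.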
International Conference on Computer Vision | 2007
Robin Hess; Alan Fern; Eric N. Mortensen
For many multi-part object classes, the set of parts can vary not only in location but also in type. For example, player formations in American football involve various subsets of player types, and the spatial constraints among players depend largely upon which subset of player types constitutes the formation. In this work, we study the problem of localizing and classifying the parts of such objects. Pictorial structures provide an efficient and robust mechanism for localizing object parts. Unfortunately, these models assume that each object instance involves the same set of parts, making it difficult to apply them directly in our setting. With this motivation, we introduce the mixture-of-parts pictorial structure (MoPPS) model, which is characterized by three components: a set of available parts, a set of constraints that specify legal part subsets, and a function that returns a pictorial structure for any legal part subset. MoPPS inference corresponds to jointly computing the most likely subset of parts and their positions. We propose a restricted, but useful, representation for MoPPS models that facilitates inference via branch-and-bound optimization, which we show is efficient in practice. Experiments in the challenging domain of American football show the effectiveness of the model and inference procedure.
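The joint inference over part subsets can be caricatured with an exhaustive search: score every legal subset of parts and keep the best. This is only a baseline illustrating the search space; the paper prunes it with branch-and-bound rather than enumerating, and `best_part_subset`, `legal`, and `score` are hypothetical names.

```python
from itertools import combinations

def best_part_subset(parts, legal, score):
    """Exhaustive stand-in for MoPPS inference: `legal` encodes the
    constraints on which part subsets may co-occur, and `score`
    evaluates a subset (in the real model, jointly with the parts'
    best positions under the subset's pictorial structure)."""
    best_s, best_sub = float("-inf"), None
    for r in range(1, len(parts) + 1):
        for sub in combinations(parts, r):
            if legal(sub) and score(sub) > best_s:
                best_s, best_sub = score(sub), sub
    return best_sub, best_s
```

Branch-and-bound replaces the inner loops with a search tree over part inclusion/exclusion, discarding any branch whose upper bound on `score` falls below the best complete subset found so far.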