Publications


Featured research published by Oscar Beijbom.


Computer Vision and Pattern Recognition | 2016

Compact Bilinear Pooling

Yang Gao; Oscar Beijbom; Ning Zhang; Trevor Darrell

Bilinear models have been shown to achieve impressive performance on a wide range of visual tasks, such as semantic segmentation, fine-grained recognition and face recognition. However, bilinear features are high dimensional, typically on the order of hundreds of thousands to a few million, which makes them impractical for subsequent analysis. We propose two compact bilinear representations with the same discriminative power as the full bilinear representation but with only a few thousand dimensions. Our compact representations allow back-propagation of classification errors, enabling an end-to-end optimization of the visual recognition system. The compact bilinear representations are derived through a novel kernelized analysis of bilinear pooling, which provides insights into the discriminative power of bilinear pooling and a platform for further research in compact pooling methods. Experiments illustrate the utility of the proposed representations for image classification and few-shot learning across several datasets.
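
Compact representations in this line of work are typically built from randomized feature maps. A minimal NumPy sketch of one such projection, a count-sketch (Tensor Sketch) style approximation of the flattened outer product, is shown below; the projection dimension, seed, and function names are illustrative, not the authors' code.

```python
import numpy as np

def count_sketch(x, h, s, d):
    """Project x (length c) to length d using hash indices h and signs s."""
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y

def compact_bilinear(x, d=4096, seed=0):
    """Tensor Sketch approximation of the flattened outer product x x^T,
    projected to d dimensions.  In practice the hash functions are sampled
    once and fixed; here they are fixed via the seed."""
    c = x.shape[0]
    rng = np.random.default_rng(seed)
    h1, h2 = rng.integers(0, d, c), rng.integers(0, d, c)
    s1, s2 = rng.choice([-1.0, 1.0], c), rng.choice([-1.0, 1.0], c)
    # circular convolution of the two count sketches, computed via FFT
    f1 = np.fft.rfft(count_sketch(x, h1, s1, d))
    f2 = np.fft.rfft(count_sketch(x, h2, s2, d))
    return np.fft.irfft(f1 * f2, n=d)
```

For an image, the same fixed hashes would be applied to the local descriptor at every spatial location and the resulting sketches sum-pooled, keeping the pooled representation at a few thousand dimensions rather than the square of the descriptor dimension.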


Computer Vision and Pattern Recognition | 2012

Automated annotation of coral reef survey images

Oscar Beijbom; Peter J. Edmunds; David I. Kline; B. Greg Mitchell; David J. Kriegman

With the proliferation of digital cameras and automatic acquisition systems, scientists can acquire vast numbers of images for quantitative analysis. However, much image analysis is conducted manually, which is both time-consuming and prone to error. As a result, valuable scientific data from many domains sit dormant in image libraries awaiting annotation. This work addresses one such domain: coral reef coverage estimation. In this setting, the goal, as defined by coral reef ecologists, is to determine the percentage of the reef surface covered by rock, sand, algae, and corals; it is often desirable to resolve these taxa at the genus level or below. This is challenging since the data exhibit significant within-class variation, the borders between classes are complex, and the viewpoints and image quality vary. We introduce Moorea Labeled Corals, a large multi-year dataset with 400,000 expert annotations, to the computer vision community, and argue that this type of ecological data provides an excellent opportunity for performance benchmarking. We also propose a novel algorithm using texture and color descriptors over multiple scales that outperforms commonly used techniques from the texture classification literature. We show that the proposed algorithm accurately estimates coral coverage across locations and years, thereby taking a significant step towards reliable automated coral reef image annotation.
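
As a rough illustration of point-wise features computed over multiple scales, the sketch below concatenates simple colour and texture statistics from patches of several sizes centred on an annotation point. The patch sizes, colour histograms, and gradient-based texture proxy are assumptions for illustration, not the descriptors used in the paper.

```python
import numpy as np

def point_descriptor(img, row, col, scales=(30, 60, 120), bins=8):
    """Multi-scale colour + texture feature for one annotation point.
    Assumes an 8-bit RGB image; all parameter choices are illustrative."""
    feats = []
    for s in scales:
        r0, r1 = max(row - s, 0), min(row + s, img.shape[0])
        c0, c1 = max(col - s, 0), min(col + s, img.shape[1])
        patch = img[r0:r1, c0:c1].astype(float)
        # colour: normalised per-channel intensity histograms
        for ch in range(patch.shape[2]):
            hist, _ = np.histogram(patch[..., ch], bins=bins, range=(0, 255))
            feats.append(hist / max(hist.sum(), 1))
        # texture proxy: gradient-magnitude statistics on the grey patch
        grey = patch.mean(axis=2)
        gy, gx = np.gradient(grey)
        mag = np.hypot(gx, gy)
        feats.append([mag.mean(), mag.std()])
    return np.concatenate(feats)
```

The resulting feature vector would then be fed to any standard classifier trained on the expert-annotated points.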


PLOS ONE | 2015

Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation

Oscar Beijbom; Peter J. Edmunds; Chris Roelfsema; Jennifer E. Smith; David I. Kline; Benjamin P. Neal; Matthew J. Dunlap; Vincent W. Moriarty; Tung-Yung Fan; Chih-Jui Tan; Stephen Chan; Tali Treibitz; Anthony Gamst; B. Greg Mitchell; David J. Kriegman

Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time-consuming, manual task. We investigated the feasibility of using automated point annotation to expedite cover estimation of the 17 dominant benthic categories from survey images captured at four Pacific coral reefs. Inter- and intra-annotator variability among six human experts was quantified and compared to semi- and fully-automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates, but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys.
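
A simple way to realize the semi-automated operational mode described above is to let the classifier handle the points it is most confident about and route the remainder to a human annotator. The sketch below splits points 50/50 by maximum predicted probability; the confidence measure and the split rule are assumptions, not necessarily those used in the paper.

```python
import numpy as np

def split_by_confidence(probs, automate_fraction=0.5):
    """Given per-point class probabilities (one row per annotation point),
    accept the most confident fraction automatically and defer the rest
    to a human annotator."""
    confidence = probs.max(axis=1)
    cutoff = np.quantile(confidence, 1.0 - automate_fraction)
    auto_idx = np.where(confidence >= cutoff)[0]
    human_idx = np.where(confidence < cutoff)[0]
    return auto_idx, human_idx
```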


Computer Vision and Pattern Recognition | 2013

Efficient Large-Scale Structured Learning

Steve Branson; Oscar Beijbom; Serge J. Belongie

We introduce an algorithm, SVM-IS, for structured SVM learning that is computationally scalable to very large datasets and complex structural representations. We show that structured learning is at least as fast as, and often much faster than, methods based on binary classification for problems such as deformable part models, object detection, and multiclass classification, while achieving accuracies that are at least as good. Our method allows problem-specific structural knowledge to be exploited for faster optimization by integrating with a user-defined importance sampling function. We demonstrate fast training times on two challenging large-scale datasets for two very different problems: ImageNet for multiclass classification and CUB-200-2011 for deformable part model training. Our method is shown to be 10-50 times faster than SVMstruct for cost-sensitive multiclass classification, while being about as fast as the fastest 1-vs-all methods for multiclass classification. For deformable part model training, it is shown to be 50-1000 times faster than methods based on SVMstruct, mining hard negatives, and Pegasos-style stochastic gradient descent. Source code of our method is publicly available.
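
As a hedged sketch of the general idea of replacing exact loss-augmented inference with a user-supplied sampling function, the snippet below takes one stochastic-subgradient step of a margin-rescaled structured SVM over sampled candidate outputs. This is not the SVM-IS algorithm itself, and all function names are placeholders supplied by the caller.

```python
import numpy as np

def sgd_step(w, x, y_true, sample_candidates, joint_feature, loss,
             lr=0.01, reg=1e-4, n_samples=50):
    """One subgradient step of a structured hinge loss where the
    loss-augmented argmax is approximated over sampled candidates."""
    candidates = sample_candidates(x, y_true, n_samples)
    # most violating sampled candidate: max loss(y_true, y) + w . psi(x, y)
    scores = [loss(y_true, y) + w @ joint_feature(x, y) for y in candidates]
    y_hat = candidates[int(np.argmax(scores))]
    grad = reg * w
    if loss(y_true, y_hat) + w @ joint_feature(x, y_hat) > w @ joint_feature(x, y_true):
        grad = grad + joint_feature(x, y_hat) - joint_feature(x, y_true)
    return w - lr * grad
```

The quality of the sampling function determines how often a truly violating output is found, which is where problem-specific structural knowledge enters.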


Scientific Reports | 2015

Wide Field-of-View Fluorescence Imaging of Coral Reefs

Tali Treibitz; Benjamin P. Neal; David I. Kline; Oscar Beijbom; Paul L. D. Roberts; B. Greg Mitchell; David J. Kriegman

Coral reefs globally are declining rapidly because of both local and global stressors. Improved monitoring tools are urgently needed to understand the changes that are occurring at appropriate temporal and spatial scales. Coral fluorescence imaging tools have the potential to improve both ecological and physiological assessments. Although fluorescence imaging is regularly used for laboratory studies of corals, it has not yet been used for large-scale in situ assessments. Current obstacles to effective underwater fluorescence surveying include limited field-of-view due to low camera sensitivity, the need for nighttime deployment because of ambient light contamination, and the need for custom multispectral narrow band imaging systems to separate the signal into meaningful fluorescence bands. Here we describe the Fluorescence Imaging System (FluorIS), based on a consumer camera modified for greatly increased sensitivity to chlorophyll-a fluorescence, and we show high spectral correlation between acquired images and in situ spectrometer measurements. This system greatly facilitates underwater wide field-of-view fluorophore surveying during both night and day, and potentially enables improvements in semi-automated segmentation of live corals in coral reef photographs and juvenile coral surveys.


Remote Sensing | 2016

Scaling up Ecological Measurements of Coral Reefs Using Semi-Automated Field Image Collection and Analysis

Manuel González-Rivero; Oscar Beijbom; Alberto Rodriguez-Ramirez; Tadzio Holtrop; Yeray González-Marrero; Anjani E. Ganase; Chris Roelfsema; Stuart R. Phinn; Ove Hoegh-Guldberg

Ecological measurements in marine settings are often constrained in space and time, with spatial heterogeneity obscuring broader generalisations. While advances in remote sensing, integrative modelling and meta-analysis enable generalisations from field observations, there is an underlying need for high-resolution, standardised and geo-referenced field data. Here, we evaluate a new approach aimed at optimising data collection and analysis to assess broad-scale patterns of coral reef community composition using automatically annotated underwater imagery, captured along 2 km transects. We validate this approach by investigating its ability to detect spatial (e.g., across regions) and temporal (e.g., over years) change, and by comparing automated annotation errors to those of multiple human annotators. Our results indicate that change of coral reef benthos can be captured at high resolution both spatially and temporally, with an average error below 5% among key benthic groups. Cover estimation errors using automated annotation varied between 2% and 12%, slightly larger than human errors (which varied between 1% and 7%), but small enough to detect significant changes among dominant groups. Overall, this approach allows rapid collection of in-situ observations at larger spatial scales (km) than previously possible, and provides a pathway to link, calibrate, and validate broader analyses across even larger spatial scales (10–10,000 km²).
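
Percent cover and per-group errors of the kind quoted above can be tallied directly from point labels. A minimal sketch follows, assuming cover error is measured as the absolute difference in estimated cover per benthic group between automated and reference annotations; the exact error metric used in the study is an assumption here.

```python
import numpy as np

def percent_cover(labels, groups):
    """Fraction of annotated points assigned to each benthic group."""
    labels = np.asarray(labels)
    return {g: float(np.mean(labels == g)) for g in groups}

def cover_error(auto_labels, reference_labels, groups):
    """Absolute per-group difference in percent cover between automated
    and reference annotations."""
    auto = percent_cover(auto_labels, groups)
    ref = percent_cover(reference_labels, groups)
    return {g: abs(auto[g] - ref[g]) for g in groups}
```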


Scientific Reports | 2016

Improving Automated Annotation of Benthic Survey Images Using Wide-band Fluorescence

Oscar Beijbom; Tali Treibitz; David I. Kline; Gal Eyal; Adi Khen; Benjamin P. Neal; Yossi Loya; B. Greg Mitchell; David J. Kriegman

Large-scale imaging techniques are used increasingly for ecological surveys. However, manual analysis can be prohibitively expensive, creating a bottleneck between collected images and desired data products. This bottleneck is particularly severe for benthic surveys, where millions of images are obtained each year. Recent automated annotation methods may provide a solution, but reflectance images do not always contain sufficient information for adequate classification accuracy. In this work, the FluorIS, a low-cost modified consumer camera, was used to capture wide-band, wide-field-of-view fluorescence images during a field deployment in Eilat, Israel. The fluorescence images were registered with standard reflectance images, and an automated annotation method based on convolutional neural networks was developed. Our results demonstrate a 22% reduction in classification error rate when using both image types compared to using reflectance images alone. The improvements were particularly large for the coral genera Platygyra, Acropora and Millepora, where classification recall improved by 38%, 33%, and 41%, respectively. We conclude that convolutional neural networks can be used to combine reflectance and fluorescence imagery in order to significantly improve automated annotation accuracy and reduce the manual annotation bottleneck.
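
One straightforward way for a convolutional network to consume registered reflectance and fluorescence imagery is to stack the two images channel-wise. The small PyTorch patch classifier below illustrates this idea; the layer choices are purely illustrative, as the abstract does not specify the architecture.

```python
import torch
import torch.nn as nn

class ReflFluorNet(nn.Module):
    """Patch classifier over registered reflectance and fluorescence
    images stacked channel-wise (3 + 3 = 6 input channels).
    Architecture is an illustrative assumption."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, reflectance, fluorescence):
        # N x 3 x H x W  +  N x 3 x H x W  ->  N x 6 x H x W
        x = torch.cat([reflectance, fluorescence], dim=1)
        return self.classifier(self.features(x).flatten(1))
```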


Workshop on Applications of Computer Vision | 2015

Menu-Match: Restaurant-Specific Food Logging from Images

Oscar Beijbom; Neel Joshi; Dan Morris; T. Scott Saponas; Siddharth Khullar


Aquatic Conservation: Marine and Freshwater Ecosystems | 2014

The Catlin Seaview Survey - kilometre-scale seascape assessment, and monitoring of coral reef ecosystems

Manuel González-Rivero; Pim Bongaerts; Oscar Beijbom; Oscar Pizarro; Ariell Friedman; Alberto Rodriguez-Ramirez; Ben Upcroft; Dan Laffoley; David I. Kline; Christophe Bailhache; Richard Vevers; Ove Hoegh-Guldberg


International Conference on Machine Learning | 2014

Guess-Averse Loss Functions For Cost-Sensitive Multiclass Boosting

Oscar Beijbom; Mohammad J. Saberian; David J. Kriegman; Nuno Vasconcelos

Collaboration


Dive into Oscar Beijbom's collaborations.

Top Co-Authors


David I. Kline

University of California
