Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Vicente Ordonez is active.

Publication


Featured research published by Vicente Ordonez.


Computer Vision and Pattern Recognition | 2011

High level describable attributes for predicting aesthetics and interestingness

Sagnik Dhar; Vicente Ordonez; Tamara L. Berg

With the rise in popularity of digital cameras, the amount of visual data available on the web is growing exponentially. Some of these pictures are extremely beautiful and aesthetically pleasing, but the vast majority are uninteresting or of low quality. This paper demonstrates a simple yet powerful method to automatically select high aesthetic quality images from large image collections. Our aesthetic quality estimation method explicitly predicts some of the possible image cues that a human might use to evaluate an image and then uses them in a discriminative approach. These cues, or high-level describable image attributes, fall into three broad types: 1) compositional attributes related to image layout or configuration, 2) content attributes related to the objects or scene types depicted, and 3) sky-illumination attributes related to the natural lighting conditions. We demonstrate that an aesthetics classifier trained on these describable attributes can provide a significant improvement over baseline methods for predicting human quality judgments. We also demonstrate our method for predicting the “interestingness” of Flickr photos, and introduce a novel problem of estimating query-specific “interestingness”.
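As a rough illustration of the discriminative setup described above, the sketch below trains an off-the-shelf classifier on per-image attribute scores. The attribute count, the random features, and the labels are placeholders, not the paper's actual predictors or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical attribute feature matrix: each row holds predicted scores for
# compositional, content, and sky-illumination attributes of one image.
# In the paper these come from individual attribute predictors; here we
# use random placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 26))       # 26 describable attributes (illustrative)
y = rng.integers(0, 2, size=1000)     # 1 = high aesthetic quality, 0 = low

# Discriminative aesthetics classifier on top of the attribute predictions.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```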


European Conference on Computer Vision | 2016

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks

Mohammad Rastegari; Vicente Ordonez; Joseph Redmon; Ali Farhadi

We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values, resulting in a 32× memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58× faster convolutional operations (in terms of the number of high-precision operations) and 32× memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet: more than 16% in top-1 accuracy. Our code is available at: http://allenai.org/plato/xnornet.
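The Binary-Weight-Network approximation has a closed form: each filter W is approximated as αB with B = sign(W) and scaling factor α = mean(|W|). A minimal numpy sketch (the paper computes one α per filter; a single scalar is used here for brevity):

```python
import numpy as np

def binarize_weights(W):
    """Binary-Weight-Network approximation: W ≈ alpha * B, with
    B = sign(W) and alpha = mean(|W|), the closed-form optimum for
    minimizing ||W - alpha * B|| over binary B."""
    B = np.sign(W)
    B[B == 0] = 1                     # break ties so every weight is ±1
    alpha = np.mean(np.abs(W))
    return alpha, B

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 3, 3, 3))    # a conv filter bank (illustrative shape)
alpha, B = binarize_weights(W)
err = np.linalg.norm(W - alpha * B) / np.linalg.norm(W)
print(f"alpha={alpha:.4f}, relative approximation error={err:.3f}")
```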


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

BabyTalk: Understanding and Generating Simple Image Descriptions

Girish Kulkarni; Visruth Premraj; Vicente Ordonez; Sagnik Dhar; Siming Li; Yejin Choi; Alexander C. Berg; Tamara L. Berg

We present a system to automatically generate natural language descriptions from images. This system consists of two parts. The first part, content planning, smooths the output of computer vision-based detection and recognition algorithms with statistics mined from large pools of visually descriptive text to determine the best content words to use to describe an image. The second step, surface realization, chooses words to construct natural language sentences based on the predicted content and general statistics from natural language. We present multiple approaches for the surface realization step and evaluate each using automatic measures of similarity to human generated reference descriptions. We also collect forced choice human evaluations between descriptions from the proposed generation system and descriptions from competing approaches. The proposed system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.
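The two-stage pipeline lends itself to a toy illustration: given (attribute, noun) pairs from content planning and prepositional relations between them, a simple template renders a description. This is only one of several realization strategies the paper evaluates, and the vocabulary below is invented.

```python
def realize(objects, relations):
    """Toy template realization: 'objects' are (attribute, noun) pairs from
    content planning; 'relations' are (i, preposition, j) index triples."""
    noun_phrases = [f"the {attr} {noun}" for attr, noun in objects]
    sentences = [f"This picture shows {', '.join(noun_phrases)}."]
    for i, prep, j in relations:
        sentences.append(f"{noun_phrases[i].capitalize()} is {prep} {noun_phrases[j]}.")
    return " ".join(sentences)

print(realize(
    objects=[("brown", "dog"), ("wooden", "table")],
    relations=[(0, "under", 1)],
))
# This picture shows the brown dog, the wooden table.
# The brown dog is under the wooden table.
```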


Empirical Methods in Natural Language Processing | 2014

ReferItGame: Referring to Objects in Photographs of Natural Scenes

Sahar Kazemzadeh; Vicente Ordonez; Mark Matten; Tamara L. Berg

In this paper we introduce a new game to crowd-source natural language referring expressions. By designing a two-player game, we can both collect and verify referring expressions directly within the game. To date, the game has produced a dataset containing 130,525 expressions, referring to 96,654 distinct objects, in 19,894 photographs of natural scenes. This dataset is larger and more varied than previous referring expression generation (REG) datasets and allows us to study referring expressions in real-world scenes. We provide an in-depth analysis of the resulting dataset. Based on our findings, we design a new optimization-based model for generating referring expressions and perform experimental evaluations on 3 test sets.
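The game's verification step can be sketched as a simple geometric check: the expression written by the first player counts as verified when the second player's click lands on the referred object. The bounding-box test below is a simplification; the actual game uses segmented image regions.

```python
def verify_click(click_xy, target_box):
    """ReferItGame-style verification sketch: an expression is accepted when
    the second player's click falls inside the referred object's box."""
    x, y = click_xy
    x0, y0, x1, y1 = target_box
    return x0 <= x <= x1 and y0 <= y <= y1

# Player 1 referred to the object in box (120, 80, 240, 200); player 2 clicked:
print(verify_click((150, 120), (120, 80, 240, 200)))  # True  -> verified
print(verify_click((300, 50), (120, 80, 240, 200)))   # False -> rejected
```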


European Conference on Computer Vision | 2014

Learning High-Level Judgments of Urban Perception

Vicente Ordonez; Tamara L. Berg

Human observers make a variety of perceptual inferences about pictures of places based on prior knowledge and experience. In this paper we apply computational vision techniques to the task of predicting the perceptual characteristics of places by leveraging recent work on visual features along with a geo-tagged dataset of images associated with crowd-sourced urban perception judgments for wealth, uniqueness, and safety. We perform extensive evaluations of our models, training and testing on images of the same city as well as training and testing on images of different cities to demonstrate generalizability. In addition, we collect a new densely sampled dataset of streetview images for 4 cities and explore joint models to collectively predict perceptual judgments at city scale. Finally, we show that our predictions correlate well with ground truth statistics of wealth and crime.
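A minimal sketch of the prediction task: regress a crowd-sourced perception score onto generic visual features with a regularized linear model. The features and scores below are synthetic placeholders, not the paper's descriptors or dataset.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Placeholder features standing in for visual descriptors of streetview
# images, plus a synthetic stand-in for a crowd-sourced perception score
# (wealth / uniqueness / safety).
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 512))
y = X[:, :8].sum(axis=1) + rng.normal(scale=0.5, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```

Cross-city generalization, as evaluated in the paper, amounts to drawing the train and test splits from different cities rather than at random.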


International Journal of Computer Vision | 2015

Predicting Entry-Level Categories

Vicente Ordonez; Wei Liu; Jia Deng; Yejin Choi; Alexander C. Berg; Tamara L. Berg

Entry-level categories—the labels people use to name an object—were originally defined and studied by psychologists in the 1970s and 1980s. In this paper we extend these ideas to study entry-level categories at a larger scale and to learn models that can automatically predict entry-level categories for images. Our models combine visual recognition predictions with linguistic resources like WordNet and proxies for word “naturalness” mined from the enormous amount of text on the web. We demonstrate the usefulness of our models for predicting nouns (entry-level words) associated with images by people, and for learning mappings between concepts predicted by existing visual recognition systems and entry-level concepts. In this work we make use of recent successful efforts on convolutional network models for visual recognition by training classifiers for 7404 object categories on ConvNet activation features. Results for category mapping and entry-level category prediction for images show promise for producing more natural human-like labels. We also demonstrate the potential applicability of our results to the task of image description generation.
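The core trade-off, semantic closeness to the recognized leaf category versus the naturalness of a word, can be sketched with a toy hypernym chain and made-up frequency counts. The scoring rule and the penalty weight below are assumptions for illustration, not the paper's learned model.

```python
# Toy sketch of entry-level prediction: walk up a WordNet-style hypernym
# chain from a leaf category and pick the word that best trades semantic
# closeness against "naturalness" (a made-up web-frequency proxy).
hypernym_chain = ["grampus griseus", "dolphin", "toothed whale", "whale",
                  "cetacean", "mammal", "animal"]          # illustrative chain
log_freq = {"grampus griseus": 2.0, "dolphin": 12.5, "toothed whale": 4.0,
            "whale": 11.8, "cetacean": 5.1, "mammal": 9.3, "animal": 13.0}

LAMBDA = 1.5  # penalty per hypernym step away from the leaf (assumed)

def entry_level(chain, freq, lam=LAMBDA):
    # Score = naturalness minus a distance penalty up the hypernym chain.
    return max(chain, key=lambda w: freq[w] - lam * chain.index(w))

print(entry_level(hypernym_chain, log_freq))  # -> "dolphin"
```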


International Journal of Computer Vision | 2016

Large Scale Retrieval and Generation of Image Descriptions

Vicente Ordonez; Xufeng Han; Polina Kuznetsova; Girish Kulkarni; Margaret Mitchell; Kota Yamaguchi; Karl Stratos; Amit Goyal; Jesse Dodge; Alyssa Mensch; Hal Daumé; Alexander C. Berg; Yejin Choi; Tamara L. Berg

What is the story of an image? What is the relationship between pictures, language, and information we can extract using state of the art computational recognition systems? In an attempt to address both of these questions, we explore methods for retrieving and generating natural language descriptions for images. Ideally, we would like our generated textual descriptions (captions) to both sound like a person wrote them, and also remain true to the image content. To do this we develop data-driven approaches for image description generation, using retrieval-based techniques to gather either: (a) whole captions associated with a visually similar image, or (b) relevant bits of text (phrases) from a large collection of image + description pairs. In the case of (b), we develop optimization algorithms to merge the retrieved phrases into valid natural language sentences. The end result is two simple, but effective, methods for harnessing the power of big data to produce image captions that are altogether more general, relevant, and human-like than previous attempts.
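Approach (a) reduces, at its core, to nearest-neighbour caption transfer: find the most visually similar captioned image and reuse its caption. A minimal sketch, with random placeholder features standing in for real global image descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)
db_features = rng.normal(size=(10000, 256))            # captioned collection
db_captions = [f"caption #{i}" for i in range(10000)]  # placeholder captions

def retrieve_caption(query_feat):
    # L2 nearest neighbour over the collection; real systems add stronger
    # descriptors and re-ranking, this is just the retrieval core.
    dists = np.linalg.norm(db_features - query_feat, axis=1)
    return db_captions[int(np.argmin(dists))]

print(retrieve_caption(rng.normal(size=256)))
```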


North American Chapter of the Association for Computational Linguistics | 2016

Stating the Obvious: Extracting Visual Common Sense Knowledge

Mark Yatskar; Vicente Ordonez; Ali Farhadi

Obtaining common sense knowledge using current information extraction techniques is extremely challenging. In this work, we instead propose to derive simple common sense statements from fully annotated object detection corpora such as the Microsoft Common Objects in Context dataset. We show that many thousands of common sense facts can be extracted from such corpora at high quality. Furthermore, using WordNet and a novel submodular k-coverage formulation, we are able to generalize our initial set of common sense assertions to unseen objects and uncover over 400k potentially useful facts.
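The k-coverage selection can be illustrated with the standard greedy algorithm for submodular maximization: repeatedly add the fact with the largest marginal coverage. The facts and the coverage objective below are invented for illustration and are not the paper's exact formulation.

```python
# Greedy sketch of submodular k-coverage: pick, at each step, the
# common-sense fact covering the most not-yet-covered objects.
facts = {
    "dogs can be on couches": {"dog", "couch"},
    "people ride bicycles":   {"person", "bicycle"},
    "cats sit on couches":    {"cat", "couch"},
    "people walk dogs":       {"person", "dog"},
}

def greedy_k_coverage(facts, k):
    covered, chosen = set(), []
    for _ in range(k):
        best = max(facts, key=lambda f: len(facts[f] - covered))
        if not facts[best] - covered:
            break                      # no marginal gain left
        chosen.append(best)
        covered |= facts[best]
    return chosen

print(greedy_k_coverage(facts, k=2))
# ['dogs can be on couches', 'people ride bicycles']
```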


Workshop on Applications of Computer Vision | 2014

Furniture-geek: Understanding fine-grained furniture attributes from freely associated text and tags

Vicente Ordonez; Vignesh Jagadeesh; Wei Di; Anurag Bhardwaj; Robinson Piramuthu

As the amount of user-generated content on the internet grows, it becomes ever more important to come up with vision systems that learn directly from weakly annotated and noisy data. We leverage a large-scale collection of user-generated content, comprising images, tags, and titles/captions of furniture inventory from an e-commerce website, to discover and categorize learnable visual attributes. Furniture categories have long been the quintessential example of why computer vision is hard, and we make one of the first attempts to understand them through a large-scale weakly annotated dataset. We focus on a handful of furniture categories that are associated with a large number of fine-grained attributes. We propose a set of localized feature representations built on top of state-of-the-art computer vision representations originally designed for fine-grained object categorization. We report a thorough empirical characterization of the visual identifiability of various fine-grained attributes using these representations and show encouraging results on finding iconic images and on multi-attribute prediction.
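A minimal sketch of the multi-attribute prediction setup: one binary one-vs-rest classifier per fine-grained attribute. The features stand in for the paper's localized representations, and the attribute labels are random placeholders.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

# One binary classifier per fine-grained furniture attribute
# ("tufted", "reclining", ...); labels form a multilabel indicator matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 128))            # placeholder localized features
Y = rng.integers(0, 2, size=(5000, 6))      # 6 illustrative attributes per item

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(clf.predict(X[:2]))                   # per-attribute 0/1 predictions
```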


Computer Vision and Pattern Recognition | 2017

Commonly Uncommon: Semantic Sparsity in Situation Recognition

Mark Yatskar; Vicente Ordonez; Luke Zettlemoyer; Ali Farhadi

Semantic sparsity is a common challenge in structured visual classification problems: when the output space is complex, the vast majority of the possible predictions are rarely, if ever, seen in the training set. This paper studies semantic sparsity in situation recognition, the task of producing structured summaries of what is happening in images, including activities, objects, and the roles objects play within the activity. For this problem, we find empirically that most substructures required for prediction are rare, and current state-of-the-art model performance dramatically decreases if even one such rare substructure exists in the target output. We avoid many such errors by (1) introducing a novel tensor composition function that learns to share examples across substructures more effectively and (2) semantically augmenting our training data with automatically gathered examples of rarely observed outputs using web data. When integrated within a complete CRF-based structured prediction model, the tensor-based approach outperforms the existing state of the art by relative improvements of 2.11% and 4.40% on top-5 verb and noun-role accuracy, respectively. Adding 5 million images with our semantic augmentation techniques gives further relative improvements of 6.23% and 9.57% on top-5 verb and noun-role accuracy.
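The idea of sharing parameters across rare substructures can be sketched with a generic low-rank (CP-style) tensor score over (verb, role, noun) triples. The rank, vocabulary sizes, and factorization form below are assumptions for illustration, not the paper's exact composition function.

```python
import numpy as np

# Each symbol gets an embedding; the score of a (verb, role, noun) triple is
# a CP-style trilinear product, so rarely seen triples still share
# parameters through the per-symbol factors.
rng = np.random.default_rng(0)
R = 32                                 # factorization rank (assumed)
verbs = rng.normal(size=(500, R))      # verb embeddings (illustrative sizes)
roles = rng.normal(size=(190, R))      # semantic-role embeddings
nouns = rng.normal(size=(2000, R))     # noun embeddings

def score(verb_id, role_id, noun_id):
    return float(np.sum(verbs[verb_id] * roles[role_id] * nouns[noun_id]))

print(score(verb_id=3, role_id=10, noun_id=42))
```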

Collaboration


Dive into Vicente Ordonez's collaborations.

Top Co-Authors

Tamara L. Berg
University of North Carolina at Chapel Hill

Yejin Choi
University of Washington

Alexander C. Berg
University of North Carolina at Chapel Hill

Mark Yatskar
University of Washington

Jianru Xue
Xi'an Jiaotong University

Jihua Zhu
Xi'an Jiaotong University

Shanmin Pang
Xi'an Jiaotong University

Ali Farhadi
University of Washington