
Publications


Featured research published by Ronghang Hu.


Computer Vision and Pattern Recognition | 2016

Natural Language Object Retrieval

Ronghang Hu; Huazhe Xu; Marcus Rohrbach; Jiashi Feng; Kate Saenko; Trevor Darrell

In this paper, we address the task of natural language object retrieval: localizing a target object within a given image based on a natural language query about the object. Natural language object retrieval differs from the text-based image retrieval task, as it involves spatial information about objects within the scene and global scene context. To address this, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as a scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations, and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for that box, and can transfer visual-linguistic knowledge from the image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large-scale vision and language datasets for knowledge transfer.
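As a rough illustration of the scoring idea, the sketch below (PyTorch; all dimensions, layer names, and the simple multiplicative scoring head are illustrative assumptions, not the paper's exact formulation, which scores each box by the caption-generation likelihood of the query) encodes the query with an LSTM and scores each candidate box from its local descriptor, an 8-dimensional spatial configuration, and a global scene feature:

```python
import torch
import torch.nn as nn

class SCRCScorer(nn.Module):
    """Minimal sketch of a spatial-context scoring model: encode the query
    with an LSTM, then score each candidate box from its local feature,
    an 8-d spatial configuration, and a global scene feature."""
    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=256, feat_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # local descriptor + spatial configuration + global context -> joint space
        self.box_proj = nn.Linear(feat_dim + 8 + feat_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, query_tokens, local_feats, spatial_feats, global_feat):
        # query_tokens: (T,); local_feats: (N, feat_dim);
        # spatial_feats: (N, 8); global_feat: (feat_dim,)
        _, (h, _) = self.lstm(self.embed(query_tokens).unsqueeze(0))
        q = h[-1]                                   # (1, hidden_dim) query encoding
        g = global_feat.unsqueeze(0).expand(local_feats.size(0), -1)
        boxes = torch.tanh(self.box_proj(torch.cat([local_feats, spatial_feats, g], dim=1)))
        return self.score(boxes * q).squeeze(1)     # one score per candidate box

scorer = SCRCScorer()
scores = scorer(torch.randint(0, 1000, (6,)),       # tokenized query
                torch.randn(10, 512), torch.randn(10, 8), torch.randn(512))
print(scores.argmax().item())                        # index of the best-scoring box
```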


European Conference on Computer Vision | 2016

Grounding of Textual Phrases in Images by Reconstruction

Anna Rohrbach; Marcus Rohrbach; Ronghang Hu; Trevor Darrell; Bernt Schiele

Grounding (i.e., localizing) arbitrary, free-form textual phrases in visual content is a challenging problem with many applications in human-computer interaction and image-text reference resolution. Few datasets provide ground-truth spatial localization of phrases, so it is desirable to learn from data with little or no grounding supervision. We propose a novel approach which learns grounding by reconstructing a given phrase using an attention mechanism, which can be either latent or optimized directly. During training, our approach encodes the phrase using a recurrent network language model and then learns to attend to the relevant image region in order to reconstruct the input phrase. At test time, the correct attention, i.e., the grounding, is evaluated. If grounding supervision is available, it can be applied directly via a loss over the attention mechanism. We demonstrate the effectiveness of our approach on the Flickr30k Entities and ReferItGame datasets with different levels of supervision, ranging from no supervision through partial supervision to full supervision. Our supervised variant improves by a large margin over the state of the art on both datasets.
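The attend-and-reconstruct training loop can be sketched as follows (PyTorch; a minimal unsupervised variant with assumed dimensions and module names, not the authors' released implementation). The attention weights over region proposals are the latent grounding, and the only training signal is the cross-entropy loss of reconstructing the phrase from the attended visual feature:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroundingByReconstruction(nn.Module):
    """Sketch of attend-and-reconstruct grounding: encode the phrase,
    attend over region features, then reconstruct the phrase from the
    attended feature; the attention weights serve as the grounding."""
    def __init__(self, vocab=1000, emb=128, hid=256, vdim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.att = nn.Linear(hid + vdim, 1)      # scores phrase vs. each region
        self.decoder = nn.LSTM(emb, hid, batch_first=True)
        self.init_h = nn.Linear(vdim, hid)       # attended feature seeds the decoder
        self.out = nn.Linear(hid, vocab)

    def forward(self, phrase, regions):
        # phrase: (T,) token ids; regions: (N, vdim) proposal features
        _, (h, _) = self.encoder(self.embed(phrase).unsqueeze(0))
        q = h[-1].expand(regions.size(0), -1)               # (N, hid)
        alpha = F.softmax(self.att(torch.cat([q, regions], 1)).squeeze(1), 0)
        attended = alpha @ regions                          # weighted region feature
        h0 = torch.tanh(self.init_h(attended)).view(1, 1, -1)
        dec, _ = self.decoder(self.embed(phrase[:-1]).unsqueeze(0),
                              (h0, torch.zeros_like(h0)))
        logits = self.out(dec.squeeze(0))                   # predict next tokens
        loss = F.cross_entropy(logits, phrase[1:])          # reconstruction loss
        return loss, alpha                                  # alpha = grounding weights

model = GroundingByReconstruction()
loss, alpha = model(torch.randint(0, 1000, (7,)), torch.randn(12, 512))
print(loss.item(), alpha.argmax().item())                   # region the phrase grounds to
```

If box-level supervision is available, a loss on `alpha` itself can be added, matching the paper's semi- and fully-supervised variants.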


Computer Vision and Pattern Recognition | 2017

Modeling Relationships in Referential Expressions with Compositional Modular Networks

Ronghang Hu; Marcus Rohrbach; Jacob Andreas; Trevor Darrell; Kate Saenko

People often refer to entities in an image in terms of their relationships with other entities. For example, “the black cat sitting under the table” refers to both a black cat entity and its relationship with another table entity. Understanding these relationships is essential for interpreting and grounding such natural language expressions. Most prior work focuses either on grounding entire referential expressions holistically to one region, or on localizing relationships based on a fixed set of categories. In this paper we instead present a modular deep architecture capable of analyzing referential expressions into their component parts, identifying the entities and relationships mentioned in the input expression and grounding them all in the scene. We call this approach Compositional Modular Networks (CMNs): a novel architecture that learns linguistic analysis and visual inference end-to-end. Our approach is built around two types of neural modules that inspect local regions and pairwise interactions between regions. We evaluate CMNs on multiple referential expression datasets, outperforming state-of-the-art approaches on all tasks.
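A minimal sketch of the two module types (PyTorch; the dimensions and the max-pooling combination over supporting pairs are illustrative assumptions, and the expression parser that produces the subject/relation/object text vectors is omitted):

```python
import torch
import torch.nn as nn

class CompositionalModules(nn.Module):
    """Sketch of the two module types in a compositional modular network:
    a unary 'localization' module scoring each region against a text
    vector, and a pairwise 'relationship' module scoring region pairs.
    The full model also learns to parse the expression into (subject,
    relation, object) vectors; here they are given as inputs."""
    def __init__(self, tdim=256, vdim=512, hid=256):
        super().__init__()
        self.loc = nn.Sequential(nn.Linear(tdim + vdim, hid), nn.ReLU(), nn.Linear(hid, 1))
        self.rel = nn.Sequential(nn.Linear(tdim + 2 * vdim, hid), nn.ReLU(), nn.Linear(hid, 1))

    def forward(self, t_subj, t_rel, t_obj, regions):
        N = regions.size(0)
        unary = lambda t: self.loc(torch.cat([t.expand(N, -1), regions], 1)).squeeze(1)
        s_subj, s_obj = unary(t_subj), unary(t_obj)          # (N,) each
        pairs = torch.cat([t_rel.expand(N * N, -1),
                           regions.repeat_interleave(N, 0),  # subject region i
                           regions.repeat(N, 1)], 1)         # object region j
        s_rel = self.rel(pairs).view(N, N)                   # (N, N) pair scores
        # subject score = own evidence + best supporting (relation, object) pair
        return s_subj + (s_rel + s_obj.unsqueeze(0)).max(dim=1).values

m = CompositionalModules()
scores = m(torch.randn(256), torch.randn(256), torch.randn(256), torch.randn(8, 512))
print(scores.argmax().item())   # region best matching the full expression
```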


European Conference on Computer Vision | 2016

Segmentation from Natural Language Expressions

Ronghang Hu; Marcus Rohrbach; Trevor Darrell

In this paper we approach the novel problem of segmenting an image based on a natural language expression. This differs from traditional semantic segmentation over a predefined set of semantic classes: for example, the phrase “two men sitting on the right bench” requires segmenting only the two people on the right bench and no one standing or sitting on another bench. Previous approaches suitable for this task were limited to a fixed set of categories and/or rectangular regions. To produce pixelwise segmentation for the language expression, we propose an end-to-end trainable recurrent and convolutional network model that jointly learns to process visual and linguistic information. In our model, a recurrent neural network encodes the referential expression into a vector representation, and a fully convolutional network extracts a spatial feature map from the image and outputs a spatial response map for the target object. We demonstrate on a benchmark dataset that our model can produce high-quality segmentation output from the natural language expression, outperforming baseline methods by a large margin.
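The fusion of the two networks can be sketched as follows (PyTorch; the tiny two-layer "backbone" and all dimensions are stand-ins for the paper's deep FCN): the expression vector is tiled over every spatial location of the feature map, and a 1x1 convolution classifies each location into a response map that is upsampled back to image resolution:

```python
import torch
import torch.nn as nn

class LangSegModel(nn.Module):
    """Sketch of the recurrent + fully convolutional idea: an LSTM encodes
    the expression into a vector, which is tiled over the conv feature map
    and classified per pixel into a spatial response map."""
    def __init__(self, vocab=1000, emb=128, hid=256, channels=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        # stand-in fully convolutional backbone (a real model would use a deep FCN)
        self.fcn = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU())
        self.classify = nn.Conv2d(channels + hid, 1, 1)  # 1x1 conv over fused features

    def forward(self, image, expression):
        # image: (1, 3, H, W); expression: (T,) token ids
        _, (h, _) = self.lstm(self.embed(expression).unsqueeze(0))
        fmap = self.fcn(image)                            # (1, C, H', W')
        lang = h[-1][:, :, None, None].expand(-1, -1, fmap.size(2), fmap.size(3))
        response = self.classify(torch.cat([fmap, lang], 1))  # (1, 1, H', W') logits
        # upsample back to image resolution for pixelwise segmentation
        return nn.functional.interpolate(response, size=image.shape[2:],
                                         mode='bilinear', align_corners=False)

model = LangSegModel()
mask_logits = model(torch.randn(1, 3, 64, 64), torch.randint(0, 1000, (5,)))
print(mask_logits.shape)   # torch.Size([1, 1, 64, 64])
```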


International Conference on Computer Vision | 2015

Spatial Semantic Regularisation for Large Scale Object Detection

Damian Mrowca; Marcus Rohrbach; Judy Hoffman; Ronghang Hu; Kate Saenko; Trevor Darrell

Large-scale object detection with thousands of classes introduces the problem of many contradicting false positive detections, which have to be suppressed. Class-independent non-maximum suppression has traditionally been used for this step, but it does not scale well as the number of classes grows. Traditional non-maximum suppression does not consider label- and instance-level relationships, nor does it allow exploitation of the spatial layout of detection proposals. We propose a new multi-class spatial semantic regularisation method based on affinity propagation clustering, which simultaneously optimises across all categories and all proposed locations in the image, to improve both the localisation and categorisation of selected detection proposals. Constraints are shared across labels through the semantic WordNet hierarchy. Our approach proves especially useful in large-scale settings with thousands of classes, where spatial and semantic interactions are very frequent and only weakly supervised detectors can be built due to a lack of bounding box annotations. Detection experiments are conducted on the ImageNet and COCO datasets, in settings with thousands of detected categories. Our method provides a significant precision improvement by reducing false positives, while simultaneously improving recall.
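A minimal sketch of the clustering idea, assuming scikit-learn's AffinityPropagation and a toy hand-written label-similarity matrix in place of the paper's WordNet-derived one: all detections are clustered jointly under a mixed spatial-semantic similarity, and the cluster exemplars are kept instead of running per-class NMS:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def regularise_detections(boxes, labels, semantic_sim, w=0.5):
    """Sketch of multi-class spatial-semantic suppression: cluster all
    detections (across classes) by a mix of spatial overlap and label
    similarity, keeping one exemplar per cluster. semantic_sim[i][j]
    stands in for a WordNet-derived label similarity."""
    n = len(boxes)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            S[i, j] = (w * iou(boxes[i], boxes[j])
                       + (1 - w) * semantic_sim[labels[i]][labels[j]])
    ap = AffinityPropagation(affinity='precomputed', random_state=0).fit(S)
    return ap.cluster_centers_indices_   # indices of the kept exemplar detections

boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [30, 30, 40, 40]]
labels = [0, 1, 2]                       # e.g. "cat", "kitten", "car"
sim = [[1.0, 0.9, 0.1], [0.9, 1.0, 0.1], [0.1, 0.1, 1.0]]
print(regularise_detections(boxes, labels, sim))
```

Because the similarity couples overlap with label semantics, the two overlapping "cat"/"kitten" boxes collapse into one exemplar while the distant "car" box survives, which per-class NMS could not achieve.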


European Conference on Computer Vision | 2018

Explainable Neural Computation via Stack Neural Module Networks

Ronghang Hu; Jacob Andreas; Trevor Darrell; Kate Saenko

In complex inferential tasks like question answering, machine learning models must confront two challenges: the need to implement a compositional reasoning process, and, in many applications, the need for this reasoning process to be interpretable in order to assist users in both development and prediction. Existing models designed to produce interpretable traces of their decision-making process typically require these traces to be supervised at training time. In this paper, we present a novel neural modular approach that performs compositional reasoning by automatically inducing a desired sub-task decomposition without relying on strong supervision. Our model allows linking different reasoning tasks through shared modules that handle common routines across tasks. Experiments show that the model is more interpretable to human evaluators than other state-of-the-art models: users can better understand the model's underlying reasoning procedure and predict when it will succeed or fail based on observing its intermediate outputs.
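The soft module selection that makes the induced layout inspectable can be sketched as follows (PyTorch; a single reasoning step with assumed dimensions, omitting the paper's differentiable memory stack and attention-map-valued modules):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftModuleStep(nn.Module):
    """Sketch of soft module selection in a stack-style modular network:
    at each reasoning step a controller emits a distribution over modules,
    every module runs, and outputs are mixed by those weights, so the
    layout is induced without layout supervision."""
    def __init__(self, dim=256, n_modules=3):
        super().__init__()
        self.controller = nn.Linear(dim, n_modules)
        self.modules_list = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_modules)])

    def forward(self, question_vec, state):
        w = F.softmax(self.controller(question_vec), dim=-1)       # soft module weights
        outs = torch.stack([m(state) for m in self.modules_list])  # (n_modules, dim)
        return w @ outs, w   # weighted mix of module outputs; w is the inspectable trace

step = SoftModuleStep()
state, q = torch.randn(256), torch.randn(256)
for t in range(3):                       # a few reasoning steps
    state, weights = step(q, state)
print(weights)                           # which modules fired at the last step
```

The per-step weight vector `w` is what a human evaluator can inspect to follow the model's reasoning, which is the interpretability claim the abstract makes.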


Neural Information Processing Systems | 2014

LSDA: Large Scale Detection through Adaptation

Judy Hoffman; Sergio Guadarrama; Eric Tzeng; Ronghang Hu; Jeff Donahue; Ross B. Girshick; Trevor Darrell; Kate Saenko


International Conference on Computer Vision | 2017

Learning to Reason: End-to-End Module Networks for Visual Question Answering

Ronghang Hu; Jacob Andreas; Marcus Rohrbach; Trevor Darrell; Kate Saenko


Computer Vision and Pattern Recognition | 2018

Learning to Segment Every Thing

Ronghang Hu; Piotr Dollár; Kaiming He; Trevor Darrell; Ross B. Girshick


arXiv: Computer Vision and Pattern Recognition | 2016

Utilizing Large Scale Vision and Text Datasets for Image Segmentation from Referring Expressions

Ronghang Hu; Marcus Rohrbach; Subhashini Venugopalan; Trevor Darrell

Collaboration


Dive into Ronghang Hu's collaborations.

Top Co-Authors

Trevor Darrell
University of California

Jacob Andreas
University of California

Judy Hoffman
University of California

Damian Mrowca
University of California