Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Brandon Rothrock is active.

Publication


Featured research published by Brandon Rothrock.


Computer Vision and Pattern Recognition | 2015

Pooled motion features for first-person videos

Michael S. Ryoo; Brandon Rothrock; Larry H. Matthies

In this paper, we present a new feature representation for first-person videos. In first-person video understanding (e.g., activity recognition), it is very important to capture both entire-scene dynamics (i.e., egomotion) and salient local motion observed in videos. We describe a representation framework based on time series pooling, which is designed to abstract short-term/long-term changes in feature descriptor elements. The idea is to keep track of how descriptor values change over time and summarize them to represent motion in the activity video. The framework is general, handling any type of per-frame feature descriptor, including conventional motion descriptors like histograms of optical flow (HOF) as well as appearance descriptors from more recent convolutional neural networks (CNNs). We experimentally confirm that our approach clearly outperforms previous feature representations, including bag-of-visual-words and the improved Fisher vector (IFV), when using identical underlying feature descriptors. We also confirm that our feature representation outperforms existing state-of-the-art features such as local spatio-temporal features and Improved Trajectory Features (originally developed for third-person videos) when handling first-person videos. Multiple first-person activity datasets were tested under various settings to confirm these findings.
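As a rough illustration of the pooling framework, the sketch below pools a sequence of per-frame descriptors element-wise. The specific pooling operators here (max, sum, and counts of positive/negative changes) are assumptions chosen to mirror the short-term/long-term change tracking described above, not the paper's exact feature:

```python
def pooled_features(frames):
    """Pool a time series of per-frame descriptors (illustrative sketch).

    frames: list of equal-length per-frame feature vectors.
    Returns the concatenation of element-wise max pooling, sum pooling,
    and counts of positive/negative changes between consecutive frames.
    """
    d = len(frames[0])
    mx = [max(f[i] for f in frames) for i in range(d)]
    sm = [sum(f[i] for f in frames) for i in range(d)]
    pos = [sum(1 for a, b in zip(frames, frames[1:]) if b[i] - a[i] > 0)
           for i in range(d)]
    neg = [sum(1 for a, b in zip(frames, frames[1:]) if b[i] - a[i] < 0)
           for i in range(d)]
    return mx + sm + pos + neg
```

The change-count terms are what distinguish this style of pooling from plain max/sum pooling: they summarize how each descriptor element evolves over time rather than only its extreme values.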


Computer Vision and Pattern Recognition | 2015

Joint inference of groups, events and human roles in aerial videos

Tianmin Shu; Dan Xie; Brandon Rothrock; Sinisa Todorovic; Song-Chun Zhu

With the advent of drones, aerial video analysis becomes increasingly important; yet, it has received scant attention in the literature. This paper addresses a new problem of parsing low-resolution aerial videos of large spatial areas, in terms of 1) grouping, 2) recognizing events, and 3) assigning roles to people engaged in events. We propose a novel framework aimed at conducting joint inference of the above tasks, as reasoning about each in isolation typically fails in our setting. Given noisy tracklets of people and detections of large objects and scene surfaces (e.g., building, grass), we use a spatiotemporal AND-OR graph to drive our joint inference, using Markov Chain Monte Carlo and dynamic programming. We also introduce a new formalism of spatiotemporal templates characterizing latent sub-events. For evaluation, we have collected and released a new aerial videos dataset using a hex-rotor flying over picnic areas rich with group events. Our results demonstrate that we successfully address the above inference tasks under challenging conditions.
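The joint-inference step is described as Markov Chain Monte Carlo over a spatiotemporal AND-OR graph. As a much-simplified illustration of MCMC label inference, the sketch below assigns people to groups with a Metropolis-Hastings sampler; the energy term (spatial compactness of groups) is a hypothetical stand-in for the paper's model, not its actual formulation:

```python
import math
import random

def mcmc_grouping(positions, n_groups, iters=3000, seed=1):
    """Toy Metropolis-Hastings sampler for grouping people by position.

    positions: list of (x, y) locations; n_groups: number of groups.
    The energy sums pairwise distances within each group, so compact
    groups have low energy. Single-site label flips are proposed and
    accepted with the standard Metropolis rule.
    """
    rng = random.Random(seed)
    labels = [rng.randrange(n_groups) for _ in positions]

    def energy(lab):
        e = 0.0
        for i, (xi, yi) in enumerate(positions):
            for j, (xj, yj) in enumerate(positions):
                if i < j and lab[i] == lab[j]:
                    e += math.hypot(xi - xj, yi - yj)
        return e

    e = energy(labels)
    for _ in range(iters):
        i = rng.randrange(len(positions))
        old = labels[i]
        labels[i] = rng.randrange(n_groups)
        e_new = energy(labels)
        if e_new <= e or rng.random() < math.exp(e - e_new):
            e = e_new          # accept the proposal
        else:
            labels[i] = old    # reject: restore the old label
    return labels
```

The real system alternates moves over groups, events, and roles jointly; this sketch only shows the sampling mechanic on the grouping variable.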


Computer Vision and Pattern Recognition | 2013

Integrating Grammar and Segmentation for Human Pose Estimation

Brandon Rothrock; Seyoung Park; Song-Chun Zhu

In this paper we present a compositional and-or graph grammar model for human pose estimation. Our model has three distinguishing features: (i) large appearance differences between people are handled compositionally by allowing parts or collections of parts to be substituted with alternative variants, (ii) each variant is a sub-model that can define its own articulated geometry and context-sensitive compatibility with neighboring part variants, and (iii) background region segmentation is incorporated into the part appearance models to better estimate the contrast of a part region from its surroundings, and improve resilience to background clutter. The resulting integrated framework is trained discriminatively in a max-margin framework using an efficient and exact inference algorithm. We present experimental evaluation of our model on two popular datasets, and show performance improvements over the state-of-the-art on both benchmarks.


International Conference on Computer Vision | 2011

Human parsing using stochastic and-or grammars and rich appearances

Brandon Rothrock; Song-Chun Zhu

One of the key challenges to human parsing and pose recovery is handling the variability in geometry and appearance of humans in natural scenes. This variability is due to the large number of distinct articulated configurations, clothing, and self-occlusion, as well as unknown lighting and viewpoint. In this paper, we present a stochastic grammar model that represents the body as an articulated assembly of compositional and reconfigurable parts. The reconfigurable aspect allows a compatible part to be substituted with an alternative part with different attributes, such as for clothing appearance or viewpoint foreshortening. Relations within the grammar enforce consistency between part attributes as well as geometry, allowing a richer set of appearance and geometry constraints over conventional articulated models. Part appearances are modeled by a sparse deformable image template that can still richly describe salient part structures. We describe a dynamic programming parsing algorithm for our model, and show competitive pose recovery results against the state-of-the-art on a challenging dataset.
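The dynamic-programming parsing idea can be illustrated on a simple chain of parts. The sketch below computes the best joint assignment of part states from unary part scores and pairwise compatibilities between consecutive parts; it is a toy stand-in for the grammar's much richer relations, with all structure assumed for illustration:

```python
def chain_pose_dp(unary, pairwise):
    """Exact DP (Viterbi-style) inference over a chain of parts.

    unary[p][s]: score of part p taking state s.
    pairwise[p][t][s]: compatibility of part p in state t with
    part p+1 in state s. Returns (best total score, best state path).
    """
    n_states = len(unary[0])
    best = list(unary[0])   # best score ending at part 0, per state
    back = []               # backpointers for path recovery
    for p in range(1, len(unary)):
        cur, bp = [], []
        for s in range(n_states):
            scores = [best[t] + pairwise[p - 1][t][s] for t in range(n_states)]
            t_best = max(range(n_states), key=lambda t: scores[t])
            cur.append(scores[t_best] + unary[p][s])
            bp.append(t_best)
        best = cur
        back.append(bp)
    score = max(best)
    s = best.index(score)
    path = [s]
    for bp in reversed(back):
        s = bp[s]
        path.append(s)
    return score, path[::-1]
```

For tree-structured part models the same recursion runs from the leaves to the root; the grammar in the paper additionally handles substitutable part variants, which this chain sketch omits.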


AIAA SPACE 2016 | 2016

SPOC: Deep Learning-based Terrain Classification for Mars Rover Missions

Brandon Rothrock; Ryan Kennedy; Christopher Cunningham; Jeremie Papon; Matthew Heverly; Masahiro Ono

This paper presents Soil Property and Object Classification (SPOC), a novel software capability that can visually identify terrain types (e.g., sand, bedrock) as well as terrain features (e.g., scarps, ridges) on a planetary surface. SPOC works on both orbital and ground-based images. Built upon a deep convolutional neural network (CNN), SPOC employs a machine learning approach, where it learns from a small volume of examples provided by human experts, and applies the learned model to a significant volume of data very efficiently. SPOC is important since terrain type is essential information for evaluating traversability for rovers, yet manual terrain classification is very labor intensive. This paper presents the technology behind SPOC, as well as two successful applications to Mars rover missions. The first is the landing site traversability analysis for the Mars 2020 Rover (M2020) mission. SPOC identifies 17 terrain classes on full-resolution (25 cm/pixel) HiRISE (High Resolution Imaging Science Experiment) images for all eight candidate landing sites, each of which spans over ∼100 km. The other application is slip prediction for the Mars Science Laboratory (MSL) mission. SPOC processed several thousand NAVCAM (navigation camera) images taken by the Curiosity rover. Predicted terrain classes were then correlated with observed wheel slip and slope angles to build a slip prediction model. In addition, SPOC was integrated into the MSL downlink pipeline to automatically process all NAVCAM images. These tasks were impractical, if not impossible, to perform manually. SPOC opens the door for big data analysis in planetary exploration. It has promising potential for a wider range of future applications, such as the automated discovery of scientifically important terrain features in existing Mars orbital imagery, as well as traversability analysis for future surface missions to small bodies and icy worlds.
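As an illustration of the slip-prediction application, the sketch below fits a per-terrain-class linear model of slip versus slope by least squares. The linear form, the function name, and the parameters are assumptions for illustration, not the mission's actual model:

```python
from collections import defaultdict

def fit_slip_model(samples):
    """Fit slip = a * slope + b separately for each terrain class.

    samples: list of (terrain_class, slope_deg, observed_slip) tuples.
    Returns {terrain_class: (a, b)} from an ordinary least-squares fit.
    """
    by_class = defaultdict(list)
    for cls, slope, slip in samples:
        by_class[cls].append((slope, slip))
    model = {}
    for cls, pts in by_class.items():
        n = len(pts)
        sx = sum(s for s, _ in pts)
        sy = sum(y for _, y in pts)
        sxx = sum(s * s for s, _ in pts)
        sxy = sum(s * y for s, y in pts)
        denom = n * sxx - sx * sx
        a = (n * sxy - sx * sy) / denom if denom else 0.0
        b = (sy - a * sx) / n
        model[cls] = (a, b)
    return model
```

Grouping by predicted terrain class before fitting is the key step: the slip-slope relationship differs sharply between, say, sand and bedrock, so a single pooled fit would blur the classes together.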


International Symposium on Experimental Robotics | 2016

Vision-Based Obstacle Avoidance for Micro Air Vehicles Using an Egocylindrical Depth Map

Roland Brockers; Anthony T. Fragoso; Brandon Rothrock; Connor Lee; Larry H. Matthies

Obstacle avoidance is an essential capability for micro air vehicles. Prior approaches have mainly been either purely reactive, mapping low-level visual features directly to headings, or deliberative methods that use onboard 3-D sensors to create a 3-D, voxel-based world model, then generate 3-D trajectories and check them for potential collisions with the world model. Onboard 3-D sensor suites have had limited fields of view. We use forward-looking stereo vision and lateral structure from motion to give a very wide horizontal and vertical field of regard. We fuse depth maps from these sources in a novel robot-centered, cylindrical, inverse range map we call an egocylinder. Configuration space expansion directly on the egocylinder gives a very compact representation of visible freespace. This supports very efficient motion planning and collision-checking with better performance guarantees than standard reactive methods. We show the feasibility of this approach experimentally in a challenging outdoor environment.
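The egocylinder can be pictured as a robot-centered inverse-range image indexed by viewing direction. The following sketch bins robot-frame 3-D points by azimuth and elevation and keeps the largest inverse range (closest obstacle) per cell; this binning scheme is an assumption for illustration, not the authors' implementation:

```python
import math

def egocylinder(points, n_az=360, n_el=90):
    """Build a sparse egocylinder-style inverse-range map (sketch).

    points: iterable of (x, y, z) in the robot frame.
    Returns {(azimuth_bin, elevation_bin): max inverse range}, so a
    larger cell value means a closer obstacle in that direction.
    """
    cells = {}
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0:
            continue
        az = int((math.atan2(y, x) + math.pi) / (2 * math.pi) * n_az) % n_az
        el = min(int((math.asin(z / r) + math.pi / 2) / math.pi * n_el),
                 n_el - 1)
        inv = 1.0 / r
        if inv > cells.get((az, el), 0.0):
            cells[(az, el)] = inv
    return cells
```

Storing inverse range rather than range keeps the map compact and makes nearby obstacles (the ones that matter for collision-checking) the numerically dominant entries; configuration-space expansion can then be done directly on this 2-D grid.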


IEEE Aerospace Conference | 2016

Data-driven surface traversability analysis for Mars 2020 landing site selection

Masahiro Ono; Brandon Rothrock; Eduardo Almeida; Adnan Ansar; Richard Otero; Andres Huertas; Matthew Heverly

The objective of this paper is three-fold: 1) to describe the engineering challenges in the surface mobility of the Mars 2020 Rover mission that are considered in the landing site selection process, 2) to introduce new automated traversability analysis capabilities, and 3) to present the preliminary analysis results for top candidate landing sites. The analysis capabilities presented in this paper include automated terrain classification, automated rock detection, digital elevation model (DEM) generation, and multi-ROI (region of interest) route planning. These capabilities make it possible to fully utilize the vast volume of high-resolution orbiter imagery, quantitatively evaluate surface mobility requirements for each candidate site, and remove subjectivity from the comparison between sites in terms of engineering considerations. The analysis results supported the discussion in the Second Landing Site Workshop held in August 2015, which resulted in selecting eight candidate sites that will be considered in the third workshop.
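Route planning over orbital imagery of this kind typically searches a per-cell traversability cost map. As a generic illustration (the grid representation and cost semantics are assumptions, not the mission's planner), the sketch below finds a minimum-cost route with Dijkstra's algorithm:

```python
import heapq

def min_cost_route(cost, start, goal):
    """Minimum-cost route on a 2-D traversability cost grid (sketch).

    cost: 2-D list of non-negative per-cell traversal costs.
    start, goal: (row, col) cells. Moves are 4-connected; the returned
    value includes the cost of every cell on the path, start included.
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

With multiple ROIs, a planner of this kind would be invoked between each pair of regions of interest and the legs combined, e.g. by solving a small traveling-salesman instance over the pairwise route costs.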


Unmanned Systems Technology XX | 2018

Modeling and traversal of pliable materials for tracked robot navigation

Camilo Ordonez; Ryan Alicea; Brandon Rothrock; Kyle Ladyko; Mario Harper; Sisir Karumanchi; Larry H. Matthies; Emmanuel G. Collins

In order to fully exploit robot motion capabilities in complex environments, robots need to reason about obstacles in a non-binary fashion. In this paper, we focus on the modeling and characterization of pliable materials such as tall vegetation. These materials are of interest because they are pervasive in the real world, requiring the robotic vehicle to determine when to traverse or avoid them. This paper develops and experimentally verifies a template model for vegetation stems. In addition, it presents a methodology to generate predictions of the associated energetic cost incurred by a tracked mobile robot when traversing a vegetation patch of variable density.
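As a toy version of such an energetic-cost prediction, the sketch below treats each stem as a spring (work = ½kx²) and scales the per-stem work by patch density and the area swept by the vehicle. The spring model and every parameter name are illustrative assumptions, not the template model developed in the paper:

```python
def traversal_energy(stem_stiffness, deflection, density, area):
    """Predicted energy to push through a vegetation patch (toy sketch).

    stem_stiffness: effective spring constant per stem (N/m).
    deflection: deflection needed to pass a stem (m).
    density: stems per unit area (1/m^2).
    area: ground area swept by the vehicle (m^2).
    """
    per_stem = 0.5 * stem_stiffness * deflection ** 2
    return per_stem * density * area
```

The point of such a model is exactly the non-binary reasoning described above: a predicted traversal energy can be compared against the cost of a detour, rather than treating vegetation as a hard obstacle.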


National Conference on Artificial Intelligence | 2017

Privacy-Preserving Human Activity Recognition from Extreme Low Resolution

Michael S. Ryoo; Brandon Rothrock; Charles Fleming; Hyun Jong Yang


Intelligent Robots and Systems | 2017

Feeling the force: Integrating force and pose for fluent discovery through imitation learning to open medicine bottles

Mark Edmonds; Feng Gao; Xu Xie; Hangxin Liu; Siyuan Qi; Yixin Zhu; Brandon Rothrock; Song-Chun Zhu

Collaboration


Dive into Brandon Rothrock's collaborations.

Top Co-Authors

Song-Chun Zhu (University of California)
Hangxin Liu (University of California)
Larry H. Matthies (California Institute of Technology)
Mark Edmonds (University of California)
Masahiro Ono (California Institute of Technology)
Matthew Heverly (California Institute of Technology)
Michael S. Ryoo (California Institute of Technology)
Xu Xie (University of California)
Yixin Zhu (University of California)
Eduardo Almeida (California Institute of Technology)