Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Trevor Campbell is active.

Publication


Featured research published by Trevor Campbell.


Computer Vision and Pattern Recognition | 2015

Small-variance nonparametric clustering on the hypersphere

Julian Straub; Trevor Campbell; Jonathan P. How; John W. Fisher

Structural regularities in man-made environments are reflected in the distribution of their surface normals. Describing these surface normal distributions is important in many computer vision applications, such as scene understanding, plane segmentation, and regularization of 3D reconstructions. Based on the small-variance limit of Bayesian nonparametric von-Mises-Fisher (vMF) mixture distributions, we propose two new flexible and efficient k-means-like clustering algorithms for directional data such as surface normals. The first, DP-vMF-means, is a batch clustering algorithm derived from the Dirichlet process (DP) vMF mixture. Recognizing the sequential nature of data collection in many applications, we extend this algorithm to DDP-vMF-means, which infers temporally evolving cluster structure from streaming data. Both algorithms naturally respect the geometry of directional data, which lies on the unit sphere. We demonstrate their performance on synthetic directional data and real 3D surface normals from RGB-D sensors. While our experiments focus on 3D data, both algorithms generalize to high dimensional directional data such as protein backbone configurations and semantic word vectors.
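The core mechanic of the batch algorithm can be conveyed with a DP-means-style sketch: each point on the unit sphere joins its most similar cluster mean unless that best cosine similarity falls below a threshold, in which case it seeds a new cluster, and means are re-estimated as normalized averages. The snippet below is a minimal illustration under those assumptions; the function name `dp_vmf_means`, the threshold parameter `lam`, and the toy data are invented here, and the paper's actual derivation and the streaming DDP variant are not reproduced.

```python
import numpy as np

def dp_vmf_means(X, lam, n_iters=20, seed=0):
    """Toy DP-means-style clustering of unit vectors X (n, d).

    lam is a cosine-similarity threshold: a point whose best similarity
    to every current mean falls below lam seeds a new cluster.
    """
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # project data onto the unit sphere
    means = np.array([X[rng.integers(len(X))]])        # start with a single cluster
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iters):
        for i, x in enumerate(X):                      # sequential assignment pass
            sims = means @ x                           # cosine similarity to each mean
            k = int(sims.argmax())
            if sims[k] < lam:                          # too far from every mean: new cluster
                means = np.vstack([means, x])
                k = len(means) - 1
            labels[i] = k
        for k in range(len(means)):                    # re-estimate means as normalized averages
            members = X[labels == k]
            if len(members):
                m = members.sum(axis=0)
                means[k] = m / np.linalg.norm(m)
    return labels, means

# toy usage: two noisy bundles of directions on the 2-sphere
rng = np.random.default_rng(1)
A = rng.normal([0.0, 0.0, 1.0], 0.1, size=(50, 3))
B = rng.normal([1.0, 0.0, 0.0], 0.1, size=(50, 3))
labels, means = dp_vmf_means(np.vstack([A, B]), lam=0.8)
print(len(means), "clusters found")
```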


Advances in Computing and Communications | 2015

Bayesian nonparametric set construction for robust optimization

Trevor Campbell; Jonathan P. How

This paper presents a Bayesian nonparametric, data-driven, nonconvex uncertainty set construction for robust optimization. First, a basic uncertainty set is constructed from a union of posterior predictive ellipsoids for the Dirichlet process Gaussian mixture. The robustification of linear optimization problems using this set is proven to be a tractable second order cone problem with probabilistic feasibility guarantees. Noting that this basic set is typically overly conservative, a scaled version of the set is obtained via stochastic bisection search, and convergence guarantees to the least conservative scaled set for a particular probabilistic guarantee are provided. Experiments on synthetic linear programs and a Mobility on Demand system design problem demonstrate that the proposed set improves upon the robust optimal objective over simpler uncertainty sets, more accurately achieves the desired level of conservatism, and requires little design input from the user.
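The tractability claim rests on a standard fact about ellipsoidal sets: for an ellipsoid {mu + L u : ||u||_2 <= 1}, the worst case of a^T x over a in the set is mu^T x + ||L^T x||_2, so the robustified linear constraint "a^T x <= b for all a in the set" becomes the second order cone constraint mu^T x + ||L^T x||_2 <= b, one per ellipsoid in the union. The snippet below only verifies that identity numerically on made-up data; it is not the paper's posterior predictive construction or its bisection search.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
mu = rng.normal(size=d)            # ellipsoid center
L = rng.normal(size=(d, d))        # ellipsoid shape: a = mu + L @ u with ||u|| <= 1
x = rng.normal(size=d)             # a fixed candidate decision

# closed-form worst case of a^T x over the ellipsoid
worst_closed_form = mu @ x + np.linalg.norm(L.T @ x)

# brute-force check: sample unit directions u (the maximum lies on the boundary)
U = rng.normal(size=(100_000, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)
worst_sampled = ((mu + U @ L.T) @ x).max()

print(worst_closed_form, worst_sampled)   # the sampled value approaches the closed form from below
```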


American Control Conference | 2013

Multiagent allocation of Markov decision process tasks

Trevor Campbell; Luke B. Johnson; Jonathan P. How

Producing task assignments for multiagent teams often leads to an exponential growth in the decision space as the number of agents and objectives increases. One approach to finding a task assignment is to model the agents and the environment as a single Markov decision process, and solve the planning problem using standard MDP techniques. However, both exact and approximate MDP solvers in this environment struggle to produce assignments even for problems involving few agents and objectives. Conversely, problem formulations based upon mathematical programming typically scale well with the problem size at the expense of requiring comparatively simple agent and task models. This paper combines these two formulations by modeling task and agent dynamics using MDPs, and then using optimization techniques to solve the combinatorial problem of assigning tasks to agents. The computational complexity of the resulting algorithm is polynomial in the number of tasks and is constant in the number of agents. Simulation results are provided which highlight the performance of the algorithm in a grid world mobile target surveillance scenario, while demonstrating that these techniques can be extended to even larger tasking domains.
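One way to picture the decomposition is to solve each task's (small) MDP once, score every agent-task pairing by the resulting value, and hand the score matrix to a combinatorial assignment solver. The sketch below does exactly that with plain value iteration and SciPy's Hungarian method; the random MDPs, the start-state stand-in for agent dynamics, and the one-task-per-agent restriction are simplifications invented for illustration, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Optimal value of a small MDP. P: (A, S, S) transition tensor, R: (A, S) rewards."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * (P @ V)          # (A, S): return of each action in each state
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# hypothetical setup: 3 agents, 3 tasks, each task modeled by a tiny random MDP
rng = np.random.default_rng(0)
n_agents, n_tasks, S, A = 3, 3, 4, 2
scores = np.zeros((n_agents, n_tasks))
for j in range(n_tasks):
    P = rng.dirichlet(np.ones(S), size=(A, S))   # random transition kernel
    R = rng.uniform(size=(A, S))
    V = value_iteration(P, R)
    for i in range(n_agents):
        start = rng.integers(S)                  # agent-dependent start state (stand-in for agent dynamics)
        scores[i, j] = V[start]

# combinatorial step: maximize total expected value with a one-task-per-agent assignment
rows, cols = linear_sum_assignment(-scores)      # negate: linear_sum_assignment minimizes
print(list(zip(rows, cols)), scores[rows, cols].sum())
```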


Computer Vision and Pattern Recognition | 2017

Efficient Global Point Cloud Alignment Using Bayesian Nonparametric Mixtures

Julian Straub; Trevor Campbell; Jonathan P. How; John W. Fisher

Point cloud alignment is a common problem in computer vision and robotics, with applications ranging from 3D object recognition to reconstruction. We propose a novel approach to the alignment problem that utilizes Bayesian nonparametrics to describe the point cloud and surface normal densities, and branch and bound (BB) optimization to recover the relative transformation. BB uses a novel, refinable, near-uniform tessellation of rotation space using 4D tetrahedra, leading to more efficient optimization compared to the common axis-angle tessellation. We provide objective function bounds for pruning given the proposed tessellation, and prove that BB converges to the optimum of the cost function along with providing its computational complexity. Finally, we empirically demonstrate the efficiency of the proposed approach as well as its robustness to real-world conditions such as missing data and partial overlap.
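The optimization layer is a best-first branch and bound loop: keep a priority queue of cells of the search space, pop the cell with the largest upper bound, stop once that bound cannot beat the incumbent by more than a tolerance, and otherwise refine the cell into children. The skeleton below shows only that control flow on a 1-D interval with a toy objective; the paper's refinable 4D tetrahedral tessellation of rotation space and its specific objective bounds are not reproduced.

```python
import heapq

def branch_and_bound(lower, upper, split, root, tol=1e-6):
    """Maximize an objective via best-first branch and bound.

    lower(cell): value achievable inside the cell (e.g. objective at its center).
    upper(cell): bound that no point in the cell can exceed.
    split(cell): refine the cell into smaller children.
    """
    best_val, best_cell = lower(root), root
    heap = [(-upper(root), root)]                  # max-heap on upper bounds
    while heap:
        neg_ub, cell = heapq.heappop(heap)
        if -neg_ub <= best_val + tol:              # top bound cannot improve, so nothing left can
            break
        for child in split(cell):
            lb = lower(child)
            if lb > best_val:
                best_val, best_cell = lb, child    # new incumbent
            if upper(child) > best_val + tol:
                heapq.heappush(heap, (-upper(child), child))
    return best_val, best_cell

# toy usage on an interval: maximize f(t) = -(t - 0.3)**2 over [0, 1]
f = lambda t: -(t - 0.3) ** 2
mid = lambda c: 0.5 * (c[0] + c[1])
val, cell = branch_and_bound(
    lower=lambda c: f(mid(c)),                         # feasible value at the cell center
    upper=lambda c: max(0.0, f(mid(c)) + (c[1] - c[0])),  # crude Lipschitz-style bound (|f'| <= 2 on [0, 1])
    split=lambda c: [(c[0], mid(c)), (mid(c), c[1])],
    root=(0.0, 1.0),
)
print(val, cell)
```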


AIAA Guidance, Navigation, and Control Conference | 2012

Planning under Uncertainty using Bayesian Nonparametric Models

Trevor Campbell; Sameera S. Ponda; Girish Chowdhary; Jonathan P. How

objectives in the presence of environmental uncertainties is critical to the success of many Unmanned Aerial Vehicle missions. One way to plan in the presence of such uncertainties is by learning a model of the environment through Bayesian inference, and using this model to improve the predictive capability of the planning algorithm. Traditional parametric models of the environment, however, can be ineffective if the data cannot


Neural Information Processing Systems | 2013

Dynamic Clustering via Asymptotics of the Dependent Dirichlet Process Mixture

Trevor Campbell; Miao Liu; Brian Kulis; Jonathan P. How; Lawrence Carin


Neural Information Processing Systems | 2015

Streaming, distributed variational inference for Bayesian nonparametrics

Trevor Campbell; Julian Straub; John W. Fisher; Jonathan P. How


Neural Information Processing Systems | 2016

Edge-exchangeable graphs and sparsity

Diana Cai; Trevor Campbell; Tamara Broderick


Neural Information Processing Systems | 2016

Coresets for Scalable Bayesian Logistic Regression

Jonathan H. Huggins; Trevor Campbell; Tamara Broderick


Uncertainty in Artificial Intelligence | 2014

Approximate decentralized Bayesian inference

Trevor Campbell; Jonathan P. How

Collaboration


Dive into Trevor Campbell's collaborations.

Top Co-Authors

Jonathan P. How (Massachusetts Institute of Technology)
Jonathan H. Huggins (Massachusetts Institute of Technology)
John W. Fisher (Massachusetts Institute of Technology)
Julian Straub (Massachusetts Institute of Technology)
Diana Cai (University of Chicago)
Sameera S. Ponda (Massachusetts Institute of Technology)
Luke B. Johnson (Massachusetts Institute of Technology)