Publication


Featured research published by Kevin G. Jamieson.


Allerton Conference on Communication, Control, and Computing | 2011

Low-dimensional embedding using adaptively selected ordinal data

Kevin G. Jamieson; Robert D. Nowak

Low-dimensional embedding based on non-metric data (e.g., non-metric multidimensional scaling) is a problem that arises in many applications, especially those involving human subjects. This paper investigates the problem of learning an embedding of n objects into d-dimensional Euclidean space that is consistent with pairwise comparisons of the type “object a is closer to object b than to object c.” While there are O(n³) such comparisons, experimental studies suggest that relatively few are necessary to uniquely determine the embedding up to the constraints imposed by all possible pairwise comparisons (i.e., the problem is typically over-constrained). This paper is concerned with quantifying the minimum number of pairwise comparisons necessary to uniquely determine an embedding up to all possible comparisons. The comparison constraints stipulate that, with respect to each object, the other objects are ranked relative to their proximity. We prove that Ω(dn log n) pairwise comparisons are needed to determine the embedding of all n objects. This lower bound cannot be achieved using randomly chosen pairwise comparisons. We propose an algorithm that exploits the low-dimensional geometry in order to accurately embed objects based on a relatively small number of sequentially selected pairwise comparisons, and demonstrate its performance with experiments.
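As an illustrative sketch of why adaptively selected comparisons can be so much cheaper than exhaustive ones (this is not the paper's algorithm; the oracle, 1-D point set, and function names below are hypothetical): binary-insertion ranking of objects by distance to a reference object asks only O(n log n) comparison queries, versus the roughly n²/2 exhaustive pairs.

```python
import random

def closer(x, a, b, c):
    # Oracle answering "is object b closer to object a than object c is?"
    return abs(x[a] - x[b]) < abs(x[a] - x[c])

def adaptive_rank(x, head, others):
    """Rank `others` by distance to `head` via binary insertion,
    asking only O(n log n) adaptively selected comparisons."""
    ranked, queries = [], 0
    for obj in others:
        lo, hi = 0, len(ranked)
        while lo < hi:                  # binary search for the insertion point
            mid = (lo + hi) // 2
            queries += 1
            if closer(x, head, obj, ranked[mid]):
                hi = mid
            else:
                lo = mid + 1
        ranked.insert(lo, obj)
    return ranked, queries

random.seed(0)
n = 64
x = [random.random() for _ in range(n)]      # latent 1-D embedding
ranked, queries = adaptive_rank(x, 0, list(range(1, n)))
# `queries` grows like n log n, versus n*(n-1)/2 = 2016 exhaustive comparisons
```

Each comparison here is chosen based on the answers to earlier ones, which is the sense in which the data is "adaptively selected"; random (non-adaptive) comparisons would mostly be redundant.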


International Conference on Robotics and Automation | 2017

Comparing human-centric and robot-centric sampling for robot deep learning from demonstrations

Michael Laskey; Caleb Chuck; Jonathan Lee; Jeffrey Mahler; Sanjay Krishnan; Kevin G. Jamieson; Anca D. Dragan; Ken Goldberg

Motivated by recent advances in Deep Learning for robot control, this paper considers two learning algorithms in terms of how they acquire demonstrations from fallible human supervisors. Human-Centric (HC) sampling is a standard supervised learning algorithm, where a human supervisor demonstrates the task by teleoperating the robot to provide trajectories consisting of state-control pairs. Robot-Centric (RC) sampling is an increasingly popular alternative used in algorithms such as DAgger, where a human supervisor observes the robot execute a learned policy and provides corrective control labels for each state visited. We suggest RC sampling can be challenging for human supervisors and prone to mislabeling. RC sampling can also induce error in policy performance because it repeatedly visits areas of the state space that are harder to learn. Although policies learned with RC sampling can be superior to HC sampling for standard learning models such as linear SVMs, policies learned with HC sampling may be comparable to RC when applied to expressive learning models such as deep learning and hyper-parametric decision trees, which can achieve very low training error provided there is enough data. We compare HC and RC using a grid world environment and a physical robot singulation task. In the latter the input is a binary image of objects on a planar worksurface and the policy generates a motion in the gripper to separate one object from the rest. We observe in simulation that for linear SVMs, policies learned with RC outperformed those learned with HC but that using deep models this advantage disappears. We also find that with RC, the corrective control labels provided by humans can be highly inconsistent. We prove there exists a class of examples in which at the limit, HC is guaranteed to converge to an optimal policy while RC may fail to converge. These results suggest a form of HC sampling may be preferable for highly-expressive learning models and human supervisors.
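The two acquisition schemes can be contrasted in a toy sketch (the corridor environment, policies, and names below are illustrative, not the paper's setup): HC labels states along the supervisor's own trajectory, while a DAgger-style RC loop collects corrective labels only at states the robot's current policy actually visits.

```python
# Toy 1-D corridor: states 0..N; the "expert" (human supervisor) moves right.
N = 10

def expert_action(s):
    return +1

def rollout(policy, start=0, horizon=N):
    s, traj = start, []
    for _ in range(horizon):
        traj.append(s)
        s = max(0, min(N, s + policy(s)))
    return traj

# Human-Centric sampling: label state-control pairs on the expert's trajectory.
hc_data = {s: expert_action(s) for s in rollout(expert_action)}

# Robot-Centric (DAgger-style): roll out the learned policy, and record the
# expert's corrective label at every state the robot actually visits.
learned = {}                        # state -> action learned so far

def robot_policy(s):
    return learned.get(s, -1)       # untrained default: drift left

rc_data = {}
for _ in range(3):                  # a few imitation iterations
    for s in rollout(robot_policy):
        rc_data[s] = expert_action(s)
    learned.update(rc_data)         # "retrain" (a table lookup here)
```

After three iterations the RC dataset concentrates on the handful of states the weak policy can reach (states 0 through 2 here), while HC covers the full expert trajectory; the paper's question is which label distribution better serves expressive learners.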


Conference on Information Sciences and Systems | 2014

Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting

Kevin G. Jamieson; Robert D. Nowak

This paper is concerned with identifying the arm with the highest mean in a multi-armed bandit problem using as few independent samples from the arms as possible. While the so-called “best arm problem” dates back to the 1950s, only recently were two qualitatively different algorithms proposed that achieve the optimal sample complexity for the problem. This paper reviews these recent advances and shows that most best-arm algorithms can be described as variants of the two recent optimal algorithms. For each algorithm type we consider a specific instance to analyze both theoretically and empirically, thereby exposing the core components of the theoretical analysis of these algorithms and intuition about how the algorithms work in practice. The derived sample complexity bounds are novel, and in certain cases improve upon previous bounds. In addition, we compare a variety of state-of-the-art algorithms empirically through simulations for the best-arm problem.
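One of the algorithm families in this literature is action elimination; a minimal successive-elimination sketch in that spirit (the confidence radius, arm means, and delta below are illustrative choices, not the paper's exact constants or analysis):

```python
import math
import random

def successive_elimination(means, delta=0.1, seed=0):
    """Action-elimination sketch for fixed-confidence best-arm identification:
    sample every surviving arm once per round, then drop any arm whose upper
    confidence bound falls below the empirical leader's lower bound."""
    rng = random.Random(seed)
    k = len(means)
    alive = set(range(k))
    counts, sums = [0] * k, [0.0] * k
    t = 0
    while len(alive) > 1:
        t += 1
        for a in alive:
            sums[a] += means[a] + rng.gauss(0, 1)   # unit-variance Gaussian reward
            counts[a] += 1
        # conservative anytime radius (union bound over arms and rounds)
        rad = math.sqrt(2 * math.log(4 * k * t * t / delta) / t)
        mu = {a: sums[a] / counts[a] for a in alive}
        leader = max(mu, key=mu.get)
        alive = {a for a in alive if mu[a] + rad >= mu[leader] - rad}
    return alive.pop(), sum(counts)

best_arm, total_samples = successive_elimination([0.2, 0.4, 1.5, 0.5])
```

Arms with small gaps to the best survive longer and so consume more samples, which is exactly the gap-dependent behavior the paper's sample complexity bounds quantify; UCB-style strategies such as lil' UCB instead sample arms non-uniformly.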


International Conference on Acoustics, Speech, and Signal Processing | 2010

Training a support vector machine to classify signals in a real environment given clean training data

Kevin G. Jamieson; Maya R. Gupta; Eric Swanson; Hyrum S. Anderson

When building a classifier from clean training data for a particular test environment, knowledge about the environmental noise and channel should be taken into account. We propose training a support vector machine (SVM) classifier using a modified kernel that is the expected kernel with respect to a probability distribution over channels and noise that might affect the test signal. We compare the proposed expected SVM to an SVM that ignores the environment, to an SVM that trains with multiple random samples of the environment, and to a quadratic discriminant analysis classifier that takes advantage of environment statistics (Joint QDA). Simulations classifying narrowband signals in a noisy acoustic reverberation environment indicate that the expected SVM can improve performance over a range of noise levels.
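A sketch of the expected-kernel idea, for the special case of additive Gaussian noise on one input, which admits a closed form (the bandwidth sigma, noise level s, and dimension below are illustrative, and this is not the paper's full channel-plus-noise model): averaging the RBF kernel over sampled noise realizations matches the analytic expectation.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, s, d = 1.0, 0.5, 3       # RBF bandwidth, noise std, signal dimension
x, y = rng.standard_normal(d), rng.standard_normal(d)

# Monte Carlo estimate of the expected kernel E_n[k(x, y + n)], n ~ N(0, s^2 I)
noise = s * rng.standard_normal((200_000, d))
diffs = x - (y + noise)
mc = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * sigma ** 2)).mean()

# Closed form: convolving the RBF with the Gaussian noise widens the kernel
ratio = sigma ** 2 / (sigma ** 2 + s ** 2)
closed = ratio ** (d / 2) * np.exp(
    -np.sum((x - y) ** 2) / (2 * (sigma ** 2 + s ** 2))
)
```

Plugging such an expected kernel into a standard SVM lets it train on clean data while accounting for anticipated test-time corruption; the expectation in the paper is additionally taken over a distribution of channels, not just additive noise.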


IEEE Transactions on Signal Processing | 2011

Channel-Robust Classifiers

Hyrum S. Anderson; Maya R. Gupta; Eric Swanson; Kevin G. Jamieson

A key assumption underlying traditional supervised learning algorithms is that labeled examples used to train a classifier are drawn i.i.d. from the same distribution as test samples. This assumption is violated when classifying a test sample whose statistics differ from the training samples because the test signal is the output of a noisy linear time-invariant system, e.g., from channel propagation or filtering. We assume that the channel impulse response is unknown, but can be modeled as a random channel with finite first and second-order statistics that can be estimated from sample impulse responses. We present two kernels, the expected and projected RBF kernels, that account for the stochastic channel. Compared to the strategy of virtual examples, an SVM trained with the proposed kernels requires dramatically less training time, and may perform better in practice. We also extend the joint quadratic discriminant analysis (joint QDA) classifier, which also accounts for a stochastic channel, to a local version that reduces model bias. Results show the proposed methods achieve state-of-the-art performance and significantly faster training times.


Neural Information Processing Systems | 2011

Active Ranking using Pairwise Comparisons

Kevin G. Jamieson; Robert D. Nowak


Conference on Learning Theory | 2014

lil' UCB: An Optimal Exploration Algorithm for Multi-Armed Bandits

Kevin G. Jamieson; Matthew L. Malloy; Robert D. Nowak; Sébastien Bubeck


Neural Information Processing Systems | 2012

Query Complexity of Derivative-Free Optimization

Kevin G. Jamieson; Robert D. Nowak; Benjamin Recht


arXiv: Learning | 2017

Efficient Hyperparameter Optimization and Infinitely Many Armed Bandits

Afshin Rostamizadeh; Ameet Talwalkar; Giulia DeSalvo; Kevin G. Jamieson; Lisha Li


Neural Information Processing Systems | 2015

NEXT: a system for real-world development, evaluation, and application of active learning

Kevin G. Jamieson; Lalit Jain; Chris Fernandez; Nicholas J. Glattard; Robert D. Nowak

Collaboration


Kevin G. Jamieson's frequent collaborators.

Top Co-Authors

Robert D. Nowak (University of Wisconsin-Madison)
Lalit Jain (University of Wisconsin-Madison)
Benjamin Recht (University of California)
Lisha Li (University of California)
Giulia DeSalvo (Courant Institute of Mathematical Sciences)
Max Simchowitz (University of California)
Maya R. Gupta (University of Washington)
Eric Swanson (University of Washington)