Publication


Featured research published by Jacob R. Gardner.


Computer Vision and Pattern Recognition | 2017

Deep Feature Interpolation for Image Content Changes

Paul Upchurch; Jacob R. Gardner; Geoff Pleiss; Robert Pless; Noah Snavely; Kavita Bala; Kilian Q. Weinberger

We propose Deep Feature Interpolation (DFI), a new data-driven baseline for automatic high-resolution image transformation. As the name suggests, DFI relies only on simple linear interpolation of deep convolutional features from pre-trained convnets. We show that despite its simplicity, DFI can perform high-level semantic transformations like make older/younger, make bespectacled, add smile, among others, surprisingly well–sometimes even matching or outperforming the state-of-the-art. This is particularly unexpected as DFI requires no specialized network architecture or even any deep network to be trained for these tasks. DFI therefore can be used as a new baseline to evaluate more complex algorithms and provides a practical answer to the question of which image transformation tasks are still challenging after the advent of deep learning.
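The simple linear interpolation at the heart of DFI can be illustrated with a short sketch. Everything below is a hypothetical stand-in (the `dfi_edit` helper and the toy feature vectors are invented for illustration); in the paper the features come from a pre-trained convnet, and the edited feature vector is mapped back to pixels in a separate reconstruction step not shown here.

```python
import numpy as np

def dfi_edit(phi_x, source_feats, target_feats, alpha=1.0):
    """Shift an image's deep features along the mean attribute direction.

    phi_x:        feature vector of the input image
    source_feats: features of images WITHOUT the attribute (e.g. no smile)
    target_feats: features of images WITH the attribute (e.g. smile)
    alpha:        step size along the attribute direction
    """
    # Attribute vector: difference of the two class means in feature space.
    w = target_feats.mean(axis=0) - source_feats.mean(axis=0)
    w /= np.linalg.norm(w)  # normalize so alpha controls the step size
    # Linear interpolation in feature space; inverting this back to an
    # image is a separate optimization step in the actual method.
    return phi_x + alpha * w

# Toy demo with random "features": the attribute lives along dimension 0.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(50, 8))
tgt = src + np.array([2.0] + [0.0] * 7)
x = rng.normal(size=8)
edited = dfi_edit(x, src, tgt, alpha=3.0)
print(edited[0] - x[0])  # moved along the attribute dimension
```

The point of the sketch is that no learning happens at edit time: the "model" is just a mean difference of feature vectors, which is why DFI works as a baseline.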


Ear and Hearing | 2015

Fast, Continuous Audiogram Estimation Using Machine Learning

Xinyu D. Song; Brittany M. Wallace; Jacob R. Gardner; Noah M. Ledbetter; Kilian Q. Weinberger; Dennis L. Barbour

Objectives: Pure-tone audiometry has been a staple of hearing assessments for decades. Many different procedures have been proposed for measuring thresholds with pure tones by systematically manipulating intensity one frequency at a time until a discrete threshold function is determined. The authors have developed a novel nonparametric approach for estimating a continuous threshold audiogram using Bayesian estimation and machine learning classification. The objective of this study was to assess the accuracy and reliability of this new method relative to a commonly used threshold measurement technique.

Design: The authors performed air conduction pure-tone audiometry on 21 participants between the ages of 18 and 90 years with varying degrees of hearing ability. Two repetitions of automated machine learning audiogram estimation and one repetition of conventional modified Hughson-Westlake ascending–descending audiogram estimation were acquired by an audiologist. The estimated hearing thresholds of these two techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, 8 kHz).

Results: The two threshold estimate methods delivered very similar estimates at standard audiogram frequencies. Specifically, the mean absolute difference between estimates was 4.16 ± 3.76 dB HL. The mean absolute difference between repeated measurements of the new machine learning procedure was 4.51 ± 4.45 dB HL. These values compare favorably with those of other threshold audiogram estimation procedures. Furthermore, the machine learning method generated threshold estimates from significantly fewer samples than the modified Hughson-Westlake procedure while returning a continuous threshold estimate as a function of frequency.

Conclusions: The new machine learning audiogram estimation technique produces continuous threshold audiogram estimates accurately, reliably, and efficiently, making it a strong candidate for widespread application in clinical and research audiometry.
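A toy, single-frequency version of the Bayesian active estimation idea can be sketched as follows. This is a deliberate simplification under assumed details: a logistic psychometric model, a grid of candidate thresholds, and the next probe placed at the posterior mean. The actual method uses Gaussian process classification over the full frequency-intensity plane, and all function names here are hypothetical.

```python
import numpy as np

def psychometric(intensity, threshold, slope=1.0):
    """P(listener reports hearing a tone) under a logistic response model."""
    return 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))

def update_posterior(prior, grid, intensity, heard):
    """Bayesian update of the threshold posterior after one trial."""
    like = psychometric(intensity, grid)
    like = like if heard else 1.0 - like
    post = prior * like
    return post / post.sum()

def next_probe(posterior, grid):
    """Probe where detection is most uncertain: the posterior mean threshold."""
    return float(np.sum(posterior * grid))

# Toy run at a single frequency; simulated true threshold of 40 dB HL.
grid = np.linspace(0, 80, 161)          # candidate thresholds (dB HL)
post = np.ones_like(grid) / grid.size   # flat prior
rng = np.random.default_rng(1)
true_thr = 40.0
for _ in range(30):
    probe = next_probe(post, grid)
    heard = rng.random() < psychometric(probe, true_thr, slope=2.0)
    post = update_posterior(post, grid, probe, heard)
estimate = float(np.sum(post * grid))
print(round(estimate, 1))  # posterior-mean threshold estimate
```

The sample-efficiency claim in the abstract corresponds to the active choice of probe intensity: each trial is placed where the current posterior is least certain, rather than sweeping intensities on a fixed schedule.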


International Symposium on Parallel and Distributed Computing | 2014

WOODSTOCC: Extracting Latent Parallelism from a DNA Sequence Aligner on a GPU

Stephen V. Cole; Jacob R. Gardner; Jeremy Buhler

An exponential increase in the speed of DNA sequencing over the past decade has driven demand for fast, space-efficient algorithms to process the resultant data. The first step in processing is alignment of many short DNA sequences, or reads, against a large reference sequence. This work presents WOODSTOCC, an implementation of short-read alignment designed for Graphics Processing Unit (GPU) architectures. WOODSTOCC translates a novel CPU implementation of gapped short-read alignment, which has guaranteed optimal and complete results, to the GPU. Our implementation combines an irregular trie search with dynamic programming to expose regularly structured parallelism. We first describe this implementation, then discuss its port to the GPU. WOODSTOCC's GPU port exploits three generally useful techniques for extracting regular parallelism from irregular computations: dynamic thread mapping with a work list, kernel stage decoupling, and kernel slicing. We discuss the performance impact of these techniques and suggest further opportunities for improvement.
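The first of those techniques, dynamic thread mapping with a work list, has a simple CPU analogue: idle workers pull the next unit of work from a shared queue rather than being assigned a fixed static slice up front, which load-balances irregular per-task costs. The Python sketch below is only that analogue (the `run_worklist` helper is invented here, and CPU threads stand in for GPU threads); it shows the pattern, not WOODSTOCC's actual GPU implementation.

```python
import queue
import threading

def run_worklist(tasks, worker_fn, n_threads=4):
    """Dynamic thread mapping: idle workers pull the next task from a
    shared work list, so threads that finish cheap tasks immediately
    pick up more work instead of idling."""
    work = queue.Queue()
    for t in tasks:
        work.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                item = work.get_nowait()  # grab the next pending task
            except queue.Empty:
                return                    # work list drained: exit
            out = worker_fn(item)
            with lock:
                results.append(out)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

# Tasks with irregular costs all complete; completion order is nondeterministic.
out = run_worklist(range(10), lambda i: i * i)
print(sorted(out))
```

On a GPU the same idea is typically implemented with an atomically incremented index into a task array rather than a locked queue, since per-thread divergence, not lock contention, is the cost being avoided.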


International Conference on Machine Learning | 2014

Bayesian Optimization with Inequality Constraints

Jacob R. Gardner; Matt J. Kusner; Zhixiang Xu; Kilian Q. Weinberger; John P. Cunningham


National Conference on Artificial Intelligence | 2015

A reduction of the elastic net to support vector machines with an application to GPU computing

Quan Zhou; Wenlin Chen; Shiji Song; Jacob R. Gardner; Kilian Q. Weinberger; Yixin Chen


arXiv: Learning | 2015

Deep Manifold Traversal: Changing Labels with Convolutional Features

Jacob R. Gardner; Matt J. Kusner; Yixuan Li; Paul Upchurch; Kilian Q. Weinberger; John E. Hopcroft


arXiv: Learning | 2014

Parallel Support Vector Machines in Practice

Stephen Tyree; Jacob R. Gardner; Kilian Q. Weinberger; Kunal Agrawal; John Tran


International Conference on Machine Learning | 2015

Differentially Private Bayesian Optimization

Matt J. Kusner; Jacob R. Gardner; Roman Garnett; Kilian Q. Weinberger


Neural Information Processing Systems | 2015

Bayesian active model selection with an application to automated audiometry

Jacob R. Gardner; Gustavo Malkomes; Roman Garnett; Kilian Q. Weinberger; Dennis L. Barbour; John P. Cunningham


International Conference on Artificial Intelligence and Statistics | 2017

Discovering and Exploiting Additive Structure for Bayesian Optimization

Jacob R. Gardner; Chuan Guo; Kilian Q. Weinberger; Roman Garnett; Roger B. Grosse

Collaboration


Dive into Jacob R. Gardner's collaborations.

Top Co-Authors

Dennis L. Barbour (Washington University in St. Louis)
Matt J. Kusner (Washington University in St. Louis)
Roman Garnett (Washington University in St. Louis)
Noah M. Ledbetter (Washington University in St. Louis)