Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zachary C. Lipton is active.

Publication


Featured research published by Zachary C. Lipton.


ACM Queue | 2018

The Mythos of Model Interpretability

Zachary C. Lipton

Supervised machine-learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world?


European Conference on Machine Learning | 2014

Optimal Thresholding of Classifiers to Maximize F1 Measure

Zachary C. Lipton; Charles Elkan; Balakrishnan Narayanaswamy

This paper provides new insight into maximizing F1 measures in the contexts of both binary and multilabel classification. The F1 measure, the harmonic mean of precision and recall, is widely used to evaluate the success of a binary classifier when one class is rare. Micro-averaged, macro-averaged, and per-instance-averaged F1 measures are used in multilabel classification. For any classifier that produces a real-valued output, we derive the relationship between the best achievable F1 value and the decision-making threshold that achieves this optimum. As a special case, if the classifier outputs are well-calibrated conditional probabilities, then the optimal threshold is half the optimal F1 value. As another special case, if the classifier is completely uninformative, then the optimal behavior is to classify all examples as positive. When the actual prevalence of positive examples is low, this behavior can be undesirable. As a case study, we discuss the sometimes surprising results of maximizing F1 when predicting 26,853 labels for Medline documents.
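
To make the thresholding setup concrete, here is a minimal brute-force sketch in Python (not the paper's closed-form analysis): given real-valued scores and binary labels, it sweeps every candidate threshold and returns the one maximizing F1. The function name and interface are illustrative.

```python
import numpy as np

def best_f1_threshold(scores, labels):
    """Brute-force search for the score threshold that maximizes F1.
    scores: real-valued classifier outputs, shape (n,)
    labels: binary ground truth in {0, 1}, shape (n,)
    Returns (threshold, best_f1)."""
    order = np.argsort(-scores)
    scores, labels = scores[order], labels[order]
    total_pos = labels.sum()
    best_f1, best_t = 0.0, np.inf   # predicting nothing positive yields F1 = 0
    tp = 0
    for k in range(len(scores)):    # classify the top-(k + 1) scores as positive
        tp += labels[k]
        precision = tp / (k + 1)
        recall = tp / total_pos if total_pos > 0 else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall > 0 else 0.0
        if f1 > best_f1:
            best_f1, best_t = f1, scores[k]
    return best_t, best_f1
```

For well-calibrated probability outputs, the paper's special-case result predicts that the returned threshold will lie near best_f1 / 2, which is easy to check empirically with this helper.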


British Machine Vision Conference | 2016

Context Matters: Refining Object Detection in Video with Recurrent Neural Networks

Subarna Tripathi; Zachary C. Lipton; Serge J. Belongie; Truong Q. Nguyen

Given the vast amounts of video available online, and recent breakthroughs in object detection with static images, object detection in video offers a promising new frontier. However, motion blur and compression artifacts cause substantial frame-level variability, even in videos that appear smooth to the eye. Additionally, video datasets tend to have sparsely annotated frames. We present a new framework for improving object detection in videos that captures temporal context and encourages consistency of predictions. First, we train a pseudo-labeler, that is, a domain-adapted convolutional neural network for object detection, on the subset of labeled frames and then apply it to all frames. Then we train a recurrent neural network that takes as input sequences of pseudo-labeled frames and optimizes an objective encouraging both accuracy on the target frame and consistency across consecutive frames. The approach incorporates strong supervision of target frames, weak supervision on context frames, and regularization via a smoothness penalty. Our approach achieves a mean Average Precision (mAP) of 68.73, an improvement of 7.1 over the strongest image-based baselines, on the YouTube-Video Objects dataset. Our experiments demonstrate that neighboring frames can provide valuable information, even in the absence of labels.
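
As a rough illustration of the kind of objective described above, the following PyTorch sketch combines strong supervision on the annotated target frame, weak supervision from pseudo-labels on context frames, and a smoothness penalty across consecutive predictions. The box parameterization, loss choices, and weighting are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def sequence_loss(preds, target_box, pseudo_boxes, target_idx, smooth_weight=0.1):
    """Illustrative combined objective (not the paper's exact loss).
    preds:        (T, 4) predicted box per frame from the recurrent model
    target_box:   (4,)   ground-truth box for the single annotated frame
    pseudo_boxes: (T, 4) pseudo-labeler outputs for every frame
    target_idx:   index of the annotated frame within the sequence"""
    strong = F.smooth_l1_loss(preds[target_idx], target_box)   # strong supervision on the target frame
    weak = F.smooth_l1_loss(preds, pseudo_boxes)                # weak supervision from pseudo-labels
    smooth = (preds[1:] - preds[:-1]).abs().mean()              # penalize jitter between consecutive frames
    return strong + weak + smooth_weight * smooth
```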


Computer Vision and Pattern Recognition | 2017

Tensor Contraction Layers for Parsimonious Deep Nets

Jean Kossaifi; Aran Khanna; Zachary C. Lipton; Tommaso Furlanello; Anima Anandkumar

Tensors offer a natural representation for many kinds of data frequently encountered in machine learning. Images, for example, are naturally represented as third-order tensors, whose modes correspond to height, width, and channels. In particular, tensor decompositions are noted for their ability to discover multi-dimensional dependencies and produce compact low-rank approximations of data. In this paper, we explore the use of tensor contractions as neural network layers and investigate several ways to apply them to activation tensors. Specifically, we propose the Tensor Contraction Layer (TCL), the first attempt to incorporate tensor contractions as end-to-end trainable neural network layers. Applied to existing networks, TCLs reduce the dimensionality of the activation tensors and thus the number of model parameters. We evaluate the TCL on the task of image recognition, augmenting popular networks (AlexNet, VGG); the resulting models remain trainable end-to-end. Using the CIFAR100 and ImageNet datasets, we study the effect of parameter reduction via tensor contraction on performance. We demonstrate significant model compression without significant impact on accuracy and, in some cases, improved performance.
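
The sketch below shows one way a tensor contraction layer might look in PyTorch for a (batch, channels, height, width) activation tensor: each non-batch mode is contracted with a learned factor matrix that shrinks it, and the factors train end-to-end with the rest of the network. Shapes and initialization are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class TensorContractionLayer(nn.Module):
    """Minimal tensor contraction layer sketch: contracts the channel,
    height, and width modes of a 4-way activation tensor with learned
    factor matrices, reducing each mode's size."""
    def __init__(self, in_shape, out_shape):
        super().__init__()
        # one factor matrix per non-batch mode, mapping in_dim -> out_dim
        self.factors = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(o, i)) for i, o in zip(in_shape, out_shape)]
        )

    def forward(self, x):
        # x: (batch, C, H, W); contract channels, then height, then width
        x = torch.einsum('bchw,oc->bohw', x, self.factors[0])
        x = torch.einsum('bchw,oh->bcow', x, self.factors[1])
        x = torch.einsum('bchw,ow->bcho', x, self.factors[2])
        return x

# Example: shrink a (8, 256, 6, 6) activation to (8, 64, 4, 4)
tcl = TensorContractionLayer(in_shape=(256, 6, 6), out_shape=(64, 4, 4))
out = tcl(torch.randn(8, 256, 6, 6))   # out.shape == (8, 64, 4, 4)
```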


Meeting of the Association for Computational Linguistics | 2017

Deep Active Learning for Named Entity Recognition

Yanyao Shen; Hyokun Yun; Zachary C. Lipton; Yakov Kronrod; Animashree Anandkumar

Deep learning has yielded state-of-the-art performance on many natural language processing tasks, including named entity recognition (NER). However, this typically requires large amounts of labeled data. In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning is combined with active learning. While active learning is sample-efficient, it can be computationally expensive because it requires iterative retraining. To speed this up, we introduce a lightweight architecture for NER, viz., the CNN-CNN-LSTM model, consisting of convolutional character and word encoders and a long short-term memory (LSTM) tag decoder. The model achieves nearly state-of-the-art performance on standard datasets for the task while being computationally much more efficient than the best-performing models. We carry out incremental active learning during the training process and nearly match state-of-the-art performance with just 25% of the original training data.
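
As a schematic of the incremental active-learning loop the abstract refers to, here is a Python sketch assuming a least-confidence acquisition function (the paper compares several criteria) and hypothetical model and oracle interfaces.

```python
import numpy as np

def active_learning_round(model, oracle, labeled, unlabeled, budget):
    """One round of uncertainty-based active learning (illustrative sketch).
    model:     tagger exposing fit(data) and sequence_confidence(sentence);
               both method names are assumptions, not a real library API
    oracle:    function returning a labeled example for a sentence (the annotator)
    labeled:   current pool of labeled examples
    unlabeled: remaining unlabeled sentences
    budget:    number of sentences to query this round"""
    model.fit(labeled)                                             # incremental retraining on the current pool
    confidences = np.array([model.sequence_confidence(s) for s in unlabeled])
    query = np.argsort(confidences)[:budget]                       # least-confident sentences first
    labeled = labeled + [oracle(unlabeled[i]) for i in query]      # obtain gold labels for the queried sentences
    unlabeled = [s for i, s in enumerate(unlabeled) if i not in set(query)]
    return labeled, unlabeled
```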


arXiv: Learning | 2015

A Critical Review of Recurrent Neural Networks for Sequence Learning

Zachary C. Lipton


International Conference on Learning Representations | 2016

Learning to Diagnose with LSTM Recurrent Neural Networks

Zachary C. Lipton; David C. Kale; Charles Elkan; Randall C. Wetzel


arXiv: Learning | 2014

Differential Privacy and Machine Learning: A Survey and Review

Zhanglong Ji; Zachary C. Lipton; Charles Elkan


arXiv: Learning | 2016

Modeling Missing Data in Clinical Time Series with RNNs

Zachary C. Lipton; David C. Kale; Randall C. Wetzel


arXiv: Learning | 2016

A User Simulator for Task-Completion Dialogues

Xiujun Li; Zachary C. Lipton; Bhuwan Dhingra; Lihong Li; Jianfeng Gao; Yun-Nung Chen

Collaboration


Dive into Zachary C. Lipton's collaborations.

Top Co-Authors

Yu-Xiang Wang (Carnegie Mellon University)
Julian McAuley (University of California)
Charles Elkan (University of California)
David C. Kale (University of Southern California)
Xiujun Li (University of Wisconsin-Madison)