Jennifer Paykin
University of Pennsylvania
Publication
Featured research published by Jennifer Paykin.
International World Wide Web Conference | 2011
Stephen Tyree; Kilian Q. Weinberger; Kunal Agrawal; Jennifer Paykin
Gradient Boosted Regression Trees (GBRT) are the current state-of-the-art learning paradigm for machine-learned web-search ranking - a domain notorious for very large data sets. In this paper, we propose a novel method for parallelizing the training of GBRT. Our technique parallelizes the construction of the individual regression trees and operates using the master-worker paradigm as follows. The data are partitioned among the workers. At each iteration, each worker summarizes its data partition using histograms. The master processor uses these histograms to build one layer of a regression tree, and then sends this layer to the workers, allowing them to build histograms for the next layer. Our algorithm carefully orchestrates overlap between communication and computation to achieve good performance. Since this approach is based on data partitioning and requires only a small amount of communication, it generalizes to distributed and shared-memory machines, as well as clouds. We present experimental results on both shared-memory machines and clusters for two large-scale web-search ranking data sets. We demonstrate that the loss in accuracy induced by the histogram approximation in regression-tree creation can be compensated for through slightly deeper trees. As a result, we see no significant loss in accuracy on the Yahoo data sets and a very small reduction in accuracy for the Microsoft LETOR data. In addition, on shared-memory machines we obtain almost perfect linear speedup with up to about 48 cores on the large data sets. On distributed-memory machines, we get a speedup of 25 with 32 processors. Due to data partitioning, our approach can scale to even larger data sets, on which one can reasonably expect even higher speedups.
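The histogram step described above can be sketched as follows. This is a hedged toy illustration, not the paper's implementation: all names (`buildHistogram`, `mergeHistograms`, `bestSplit`) are hypothetical, features are assumed pre-scaled into [0, 1), and the gain is a simplified squared-gradient criterion standing in for the paper's split objective.

```haskell
import Data.List (foldl', maximumBy)
import Data.Ord (comparing)
import qualified Data.Map.Strict as Map

-- A histogram maps a feature bucket to (count, sum of gradients).
type Histogram = Map.Map Int (Int, Double)

-- Pointwise merge of two histogram cells.
add :: (Int, Double) -> (Int, Double) -> (Int, Double)
add (c1, s1) (c2, s2) = (c1 + c2, s1 + s2)

-- Each worker summarizes its partition of (featureValue, gradient)
-- pairs into a fixed number of buckets; features assumed in [0, 1).
buildHistogram :: Int -> [(Double, Double)] -> Histogram
buildHistogram nBuckets = foldl' step Map.empty
  where
    step h (x, g) = Map.insertWith add (bucket x) (1, g) h
    bucket x = min (nBuckets - 1) (floor (x * fromIntegral nBuckets))

-- The master merges the workers' histograms...
mergeHistograms :: [Histogram] -> Histogram
mergeHistograms = Map.unionsWith add

-- ...and picks the bucket boundary b (left side = buckets < b) that
-- maximizes a variance-style gain over the summed gradients.
bestSplit :: Histogram -> Int
bestSplit h = fst $ maximumBy (comparing snd)
    [ (b, gain (leftOf b)) | b <- Map.keys h ]
  where
    (tc, ts) = foldl' add (0, 0) (Map.elems h)
    leftOf b = foldl' add (0, 0) [ v | (k, v) <- Map.toList h, k < b ]
    gain (lc, ls)
      | lc == 0 || lc == tc = 0
      | otherwise = ls * ls / fromIntegral lc
                  + (ts - ls) * (ts - ls) / fromIntegral (tc - lc)
```

Because each worker ships only its small histogram rather than its data partition, the master can grow the tree one layer at a time with little communication, which is what makes the scheme portable across shared-memory and distributed machines.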
LINEARITY | 2014
Jennifer Paykin; Steve Zdancewic
This paper defines a new proof- and category-theoretic framework for classical linear logic that separates reasoning into one linear regime and two persistent regimes corresponding to ! and ?. The resulting linear/producer/consumer (LPC) logic puts the three classes of propositions on the same semantic footing, following Benton's linear/non-linear formulation of intuitionistic linear logic. Semantically, LPC corresponds to a system of three categories connected by adjunctions reflecting the linear/producer/consumer structure. The paper's metatheoretic results include admissibility theorems for the cut and duality rules, and a translation of the LPC logic into category theory. The work also presents several concrete instances of the LPC model.
International Symposium on Haskell | 2017
Jennifer Paykin; Steve Zdancewic
We introduce a technique for programming with domain-specific linear languages using the monad that arises from the theory of linear/non-linear logic. In this work we interpret the linear/non-linear model as a simple, effectful linear language embedded inside an existing non-linear host language. We implement a modular framework for defining these linear EDSLs in Haskell, allowing both shallow and deep embeddings. To demonstrate the effectiveness of the framework and the linearity monad, we implement languages for file handles, mutable arrays, session types, and quantum computing.
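The shape of such an embedded linear language can be sketched with a toy indexed type whose indices track resource state, in the spirit (though not the letter) of the paper's framework. All names below are hypothetical, and file output is modeled as a pure log so the example is self-contained; the actual library derives its monad from the linear/non-linear adjunction rather than this simplified state-indexing.

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- Handle state tracked at the type level.
data HState = Open | Closed

-- A computation from handle-state i to handle-state j, logging actions.
newtype File (i :: HState) (j :: HState) a =
  File { runFile :: (a, [String]) }

openF :: FilePath -> File 'Closed 'Open ()
openF p = File ((), ["open " ++ p])

writeF :: String -> File 'Open 'Open ()
writeF s = File ((), ["write " ++ s])

closeF :: File 'Open 'Closed ()
closeF = File ((), ["close"])

-- Indexed bind: the state indices must line up end to end, so writing
-- after a close, or closing twice, is rejected by the type checker.
(>>>=) :: File i j a -> (a -> File j k b) -> File i k b
File (a, w1) >>>= f = let File (b, w2) = f a in File (b, w1 ++ w2)

-- A well-typed session: open, write, close, ending back at 'Closed.
session :: File 'Closed 'Closed ()
session = openF "out.txt" >>>= \_ ->
          writeF "hello"  >>>= \_ ->
          closeF
```

The payoff of this style is that resource-usage protocols become type errors rather than runtime failures, while the host language (here Haskell) supplies all the non-linear plumbing.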
Proceedings of the 1st International Workshop on Type-Driven Development | 2016
Jennifer Paykin; Antal Spector-Zabusky; Kenneth Foner
We discuss a generalization of the synchronization mechanism selective choice. We argue that selective choice can be extended to synchronize arbitrary data structures of events, based on a typing paradigm introduced by McBride: the derivatives of recursive data types. We discuss our work in progress implementing generalized selective choice as a Haskell library based on generic programming.
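The derivative idea can be illustrated on the simplest data structure, a list: selecting from a list of events should return the chosen value together with the list's one-hole context at that position, i.e. the remaining events. This is a hedged toy model, not the library: `Event`, `readyAt`, and `select` are illustrative names, and "synchronization" is simulated by picking the event with the earliest ready time.

```haskell
-- An Event is modeled as a payload with a ready time.
data Event a = Event { readyAt :: Int, payload :: a }

-- select returns the payload of the earliest event plus the list's
-- derivative at that position: the structure with one element removed.
select :: [Event a] -> (a, [Event a])
select evs = (payload e, before ++ after)
  where
    -- Index of the event with the minimal ready time.
    i = snd (minimum [ (readyAt ev, k) | (k, ev) <- zip [0 ..] evs ])
    (before, e : after) = splitAt i evs
```

Generalizing from lists to arbitrary regular data types is exactly where McBride's derivative construction comes in: the derivative of a type is the type of its one-hole contexts, which is what a generic `select` must return alongside the chosen value.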
Archive | 2016
Jennifer Paykin; Steve Zdancewic
In this paper we compare Wadler’s CP calculus for classical linear processes to a linear version of Parigot’s λμ calculus for classical logic. We conclude that linear λμ is “more or less” CP, in that it equationally corresponds to a polarized version of CP. The comparison is made by extending a technique from Melliès and Tabareau’s tensor logic that correlates negation with polarization. The polarized CP, which is written CP± and pronounced “CP more or less,” is an interesting bridge in the landscape of Curry-Howard interpretations of logic.
Programming Languages meets Program Verification | 2013
Norman Danner; Jennifer Paykin; James S. Royer
Symposium on Principles of Programming Languages | 2017
Jennifer Paykin; Robert Rand; Steve Zdancewic
Electronic Proceedings in Theoretical Computer Science | 2018
Robert Rand; Jennifer Paykin; Steve Zdancewic
Mathematical Structures in Computer Science | 2016
Jennifer Paykin; Steve Zdancewic
A List of Successes That Can Change the World | 2016
Jennifer Paykin; Steve Zdancewic