Publications
Featured research published by Ewout van den Berg.
The Annals of Applied Statistics | 2015
Małgorzata Bogdan; Ewout van den Berg; Chiara Sabatti; Weijie Su; Emmanuel J. Candès
We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to min over b of ½‖y − Xb‖²_ℓ2 + λ1|b|(1) + λ2|b|(2) + … + λp|b|(p), where λ1 ≥ λ2 ≥ … ≥ λp ≥ 0 and |b|(1) ≥ |b|(2) ≥ … ≥ |b|(p) are the decreasing absolute values of the entries of b. This is a convex program and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical ℓ1 procedures such as the Lasso. Here, the regularizer is a sorted ℓ1 norm, which penalizes the regression coefficients according to their rank: the higher the rank (that is, the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg procedure (BH) [J. Roy. Statist. Soc. Ser. B 57 (1995) 289–300], which compares more significant p-values with more stringent thresholds. One notable choice of the sequence {λi} is given by the BH critical values λBH(i) = z(1 − i·q/(2p)), where q ∈ (0, 1) and z(α) is the α-th quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with λBH provably controls FDR at level q. Moreover, it also appears to have appreciable inferential properties under more general designs X while having substantial power, as demonstrated in a series of experiments running on both simulated and real data.
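As a small, hedged illustration of the quantities named in this abstract (a sketch, not the authors' released software), the following Python computes the BH-style sequence λBH(i) = z(1 − i·q/(2p)) and evaluates the sorted-ℓ1 penalty and the SLOPE objective for a candidate coefficient vector; the helper names lambda_bh, sorted_l1 and slope_objective are ours.

```python
import numpy as np
from scipy.stats import norm

def lambda_bh(p, q):
    """BH-style critical values lambda_i = z(1 - i*q/(2p)),
    where z(alpha) is the alpha-th quantile of N(0, 1)."""
    i = np.arange(1, p + 1)
    return norm.ppf(1.0 - i * q / (2.0 * p))

def sorted_l1(b, lam):
    """Sorted-ell-1 norm: sum_i lam_i * |b|_(i),
    with |b|_(1) >= ... >= |b|_(p) the sorted absolute values of b."""
    return float(np.sum(lam * np.sort(np.abs(b))[::-1]))

def slope_objective(b, X, y, lam):
    """SLOPE objective: 0.5 * ||y - X b||_2^2 + sorted-ell-1 penalty."""
    r = y - X @ b
    return 0.5 * float(r @ r) + sorted_l1(b, lam)
```

Because the λ sequence is decreasing, the largest coefficient in absolute value receives the largest penalty, which is the rank-dependent behaviour described above.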
Proceedings of the National Academy of Sciences of the United States of America | 2013
Ewout van den Berg; Emmanuel J. Candès; Garry Chinn; Craig S. Levin; Peter D. Olcott; Carlos Sing-Long
Significance: We propose a highly compressed readout architecture for arrays of imaging sensors capable of detecting individual photons. By exploiting sparseness properties of the input signal, our architecture can provide the same information content as conventional readout designs while using orders of magnitude fewer output channels. This is achieved using a unique interconnection topology based on group-testing theoretical considerations. Unlike existing designs, this promises a low-cost sensor with high fill factor and high photon sensitivity, potentially enabling increased spatial and temporal resolution in a number of imaging applications, including positron-emission tomography and light detection and ranging.

Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as light detection and ranging and positron-emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatiotemporal resolution, causing many contemporary designs to severely underuse the technology’s full potential. Concentrating on the low photon flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs. We provide optimized design instances for various sensor parameters and compute explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we describe a typical design that digitizes a 60 × 60 photodiode sensor using only 142 TDCs. The design guarantees registration and unique recovery of up to four simultaneous photon arrivals using a fast decoding algorithm. By contrast, a cross-strip design requires 120 TDCs and cannot uniquely decode any simultaneous photon arrivals. Among other realistic simulations of scintillation events in clinical positron-emission tomography, the above design is shown to recover the spatiotemporal location of 99.98% of all detected photons.
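To make the group-testing notion of "unique decoding" concrete, here is a hedged sketch (not the paper's optimized interconnect or fast decoder): it brute-forces whether a given binary pixel-to-TDC connection matrix produces a distinct TDC signature for every set of at most d simultaneous photon arrivals. The toy matrix A and helper names are illustrative assumptions only.

```python
import numpy as np
from itertools import combinations

def tdc_signature(A, pixels):
    """TDC pattern triggered by a set of simultaneously firing pixels:
    the union (logical OR) of the corresponding columns of A."""
    sig = np.zeros(A.shape[0], dtype=bool)
    for j in pixels:
        sig |= A[:, j].astype(bool)
    return tuple(sig.tolist())

def uniquely_decodable(A, d):
    """Brute-force check (tiny arrays only): every set of at most d
    simultaneous arrivals must produce a distinct TDC signature."""
    seen = set()
    for k in range(1, d + 1):
        for subset in combinations(range(A.shape[1]), k):
            sig = tdc_signature(A, subset)
            if sig in seen:
                return False
            seen.add(sig)
    return True

# Toy 6-pixel, 5-TDC interconnect; the designs in the paper are far larger
# and constructed systematically rather than sampled at random.
rng = np.random.default_rng(0)
A = (rng.random((5, 6)) < 0.5).astype(int)
print(uniquely_decodable(A, d=2))
```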
international conference on acoustics, speech, and signal processing | 2017
Ewout van den Berg; Bhuvana Ramabhadran; Michael Picheny
In this work we study variance in the results of neural network training on a wide variety of configurations in automatic speech recognition. Although this variance itself is well known, this is, to the best of our knowledge, the first paper that performs an extensive empirical study on its effects in speech recognition. We view training as sampling from a distribution and show that these distributions can have a substantial variance. These results show the urgent need to rethink the way in which results in the literature are reported and interpreted.
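As a minimal, hedged sketch of what "training as sampling from a distribution" implies for reporting results, one can retrain the same configuration under several random seeds and report the spread rather than a single number. The word-error rates below are invented placeholders for illustration, not results from the paper.

```python
import numpy as np

# Hypothetical word-error rates (%) from retraining one ASR configuration
# with different random seeds; the values are illustrative only.
wer = np.array([10.4, 10.9, 10.1, 11.2, 10.6, 10.8, 10.3, 11.0])

mean = wer.mean()
std = wer.std(ddof=1)              # sample standard deviation across runs
stderr = std / np.sqrt(len(wer))   # uncertainty of the reported mean
print(f"WER = {mean:.2f} +/- {std:.2f} (std over {len(wer)} seeds), "
      f"standard error of the mean {stderr:.2f}")
```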
european conference on parallel processing | 2015
Stephen Moore; Devi Sudheer Chunduri; Sergiy Zhuk; Tigran T. Tchrakian; Ewout van den Berg; Albert Akhriev; Alberto Costa Nogueira; Andrew Rawlinson; Lior Horesh
Full waveform inversion (FWI) is an emerging subsurface imaging technique, used to locate oil and gas reservoirs. The key challenges that hinder its adoption by industry are both algorithmic and computational in nature, including storage, communication, and processing of large-scale data structures, which impose cardinal impediments upon computational scalability. In this work we present a complete matrix-free algorithmic formulation of a 3D elastic time domain spectral element solver for both the forward and adjoint wave-fields as part of a larger cloud-based FWI framework. We discuss computational optimisation (SIMD vectorisation, use of Many Integrated Core architectures, etc.) and present scaling results for two HPC systems, namely an IBM Blue Gene/Q and an Intel based system equipped with Xeon Phi coprocessors.
international symposium on information theory | 2013
Mary Wootters; Yaniv Plan; Mark A. Davenport; Ewout van den Berg
In this paper we consider the problem of 1-bit matrix completion, where instead of observing a subset of the real-valued entries of a matrix M, we obtain a small number of binary (1-bit) measurements generated according to a probability distribution determined by the real-valued entries of M. The central question we ask is whether it is possible to obtain an accurate estimate of M from this data. In general this would seem impossible; however, it has recently been shown in [1] that under certain assumptions it is possible to recover M by optimizing a simple convex program. In this paper we provide lower bounds showing that these estimates are near-optimal.
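For concreteness, here is a hedged sketch of the observation model described above, using a logistic link as one common choice (the underlying theory covers more general link functions); the function name one_bit_observations and its parameters are our own illustrative choices.

```python
import numpy as np

def one_bit_observations(M, obs_fraction, seed=None):
    """1-bit measurement model: each entry of M is observed independently with
    probability obs_fraction; an observed entry (i, j) is reported as +1 with
    probability f(M_ij), here the logistic link f(x) = 1 / (1 + exp(-x))."""
    rng = np.random.default_rng(seed)
    observed = rng.random(M.shape) < obs_fraction
    prob_plus = 1.0 / (1.0 + np.exp(-M))
    Y = np.where(rng.random(M.shape) < prob_plus, 1, -1)
    return observed, Y

# Toy example: a small rank-1 matrix with roughly half of its entries observed.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(5), rng.standard_normal(4))
observed, Y = one_bit_observations(M, obs_fraction=0.5, seed=1)
print(Y * observed)   # the data actually available to the recovery algorithm
```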
arXiv: Statistics Theory | 2014
Mark A. Davenport; Yaniv Plan; Ewout van den Berg; Mary Wootters
arXiv: Methodology | 2013
Małgorzata Bogdan; Ewout van den Berg; Weijie Su; Emmanuel J. Candès
Archive | 2014
Garry Chinn; Peter D. Olcott; Craig S. Levin; Ewout van den Berg; Carlos Alberto Sing-Long Collao; Emmanuel J. Candès
arXiv: Learning | 2016
Ewout van den Berg
arXiv: Learning | 2018
Ziv Goldfeld; Ewout van den Berg; Kristjan H. Greenewald; Igor Melnyk; Nam P. Nguyen; Brian Kingsbury; Yury Polyanskiy