Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Johannes Lederer is active.

Publication


Featured research published by Johannes Lederer.


Bernoulli | 2017

On the Prediction Performance of the Lasso

Arnak S. Dalalyan; Mohamed Hebiri; Johannes Lederer

Although the Lasso has been extensively studied, the relationship between its prediction performance and the correlations of the covariates is not fully understood. In this paper, we give new insights into this relationship in the context of multiple linear regression. We show, in particular, that the incorporation of a simple correlation measure into the tuning parameter can lead to a nearly optimal prediction performance of the Lasso even for highly correlated covariates. However, we also reveal that for moderately correlated covariates, the prediction performance of the Lasso can be mediocre irrespective of the choice of the tuning parameter. We finally show that our results also lead to near-optimal rates for the least-squares estimator with total variation penalty.
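
For reference, the Lasso estimator that these papers analyze is, up to scaling conventions, the \ell_1-penalized least-squares estimator

\hat{\beta}_\lambda \in \arg\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{n} \| y - X \beta \|_2^2 + \lambda \| \beta \|_1 \right\},

where y is the response vector, X is the design matrix of covariates, and \lambda > 0 is the tuning parameter whose correlation-adapted choice is the subject of the paper.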


IEEE Transactions on Information Theory | 2014

The Group Square-Root Lasso: Theoretical Properties and Fast Algorithms

Florentina Bunea; Johannes Lederer; Yiyuan She

We introduce and study the group square-root lasso (GSRL) method for estimation in high-dimensional sparse regression models with group structure. The new estimator minimizes the square root of the residual sum of squares plus a penalty term proportional to the sum of the Euclidean norms of groups of the regression parameter vector. The net advantage of the method over existing group lasso-type procedures lies in the form of the proportionality factor used in the penalty term, which for GSRL is independent of the variance of the error terms. This is of crucial importance in models with more parameters than the sample size, when estimating the variance of the noise becomes as difficult as the original problem. We show that the GSRL estimator adapts to the unknown sparsity of the regression vector and has the same optimal estimation and prediction accuracy as group lasso (GL) estimators, under the same minimal conditions on the model. This extends the results recently established for the square-root lasso for sparse regression without group structure. Moreover, as a new type of result for square-root lasso methods, with or without groups, we study correct pattern recovery and show that it can be achieved under conditions similar to those needed by lasso or group lasso-type methods, but with a simplified tuning strategy. We implement our method via a new algorithm with proven convergence properties which, unlike existing methods, scales well with the dimension of the problem. Our simulation studies strongly support our theoretical findings.
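
Schematically, and with group weights and scaling conventions possibly differing from the paper's exact formulation, the GSRL criterion described above is

\hat{\beta}^{\mathrm{GSRL}} \in \arg\min_{\beta} \left\{ \| y - X \beta \|_2 + \lambda \sum_{g=1}^{G} \| \beta_g \|_2 \right\},

where \beta_g denotes the subvector of \beta corresponding to group g. Taking the square root of the residual sum of squares in the loss is what makes a suitable \lambda independent of the noise variance, the pivotal property emphasized in the abstract.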


IEEE Transactions on Information Theory | 2013

How Correlations Influence Lasso Prediction

Mohamed Hebiri; Johannes Lederer

We study how correlations in the design matrix influence Lasso prediction. First, we argue that the higher the correlations, the smaller the optimal tuning parameter. This implies, in particular, that the standard tuning parameters, which do not depend on the design matrix, are not favorable. Furthermore, we argue that Lasso prediction works well for any degree of correlation if suitable tuning parameters are chosen. We study these two subjects theoretically as well as with simulations.
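
The first claim, that stronger correlations call for smaller tuning parameters, can be illustrated with a short simulation. The sketch below is not the paper's setup: the equicorrelated Gaussian design and the use of cross-validation as a stand-in for prediction-optimal tuning are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p, s = 100, 50, 5
beta = np.zeros(p)
beta[:s] = 1.0  # s-sparse regression vector

for rho in (0.0, 0.5, 0.9):
    # Equicorrelated Gaussian design: Sigma = (1 - rho) * I + rho * 11^T
    cov = (1.0 - rho) * np.eye(p) + rho * np.ones((p, p))
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    y = X @ beta + rng.normal(size=n)
    # Cross-validated tuning; the selected level tends to shrink as rho grows
    fit = LassoCV(cv=5).fit(X, y)
    print(f"rho = {rho:.1f}: selected regularization = {fit.alpha_:.4f}")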


arXiv: Methodology | 2013

The Lasso, correlated design, and improved oracle inequalities

Sara van de Geer; Johannes Lederer

We study high-dimensional linear models and the \ell_1-penalized least squares estimator, also known as the Lasso estimator. In the literature, oracle inequalities have been derived under restricted eigenvalue or compatibility conditions. In this paper, we complement these with entropy conditions that allow one to improve the dual norm bound and demonstrate how this leads to new oracle inequalities. The new oracle inequalities show that a smaller choice for the tuning parameter and a trade-off between \ell_1-norms and small compatibility constants are possible. This implies, in particular for correlated design, improved bounds for the prediction error of the Lasso estimator as compared to methods based on restricted eigenvalue or compatibility conditions only.
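
For orientation, compatibility-based oracle inequalities of the kind this paper refines are typically of the schematic form (constants and normalizations vary across references)

\frac{1}{n} \| X (\hat{\beta} - \beta^0) \|_2^2 \lesssim \frac{\lambda^2 s_0}{\phi^2},

where s_0 is the sparsity of the true parameter \beta^0 and \phi is a compatibility constant; the entropy-based analysis above permits a smaller \lambda, and hence a smaller bound, for correlated designs.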


Bernoulli | 2014

New concentration inequalities for suprema of empirical processes

Johannes Lederer; Sara van de Geer

While effective concentration inequalities for suprema of empirical processes exist under boundedness or strict tail assumptions, no comparable results have been available under considerably weaker assumptions. In this paper, we derive concentration inequalities assuming only low moments for an envelope of the empirical process. These concentration inequalities are beneficial even when the envelope is much larger than the single functions under consideration.
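
Schematically, the object of study is the supremum

Z = \sup_{f \in \mathcal{F}} \left| \frac{1}{n} \sum_{i=1}^{n} \big( f(X_i) - \mathbb{E} f(X_i) \big) \right|,

for a class of functions \mathcal{F}. Classical results such as Talagrand's concentration inequality require the envelope F(x) = \sup_{f \in \mathcal{F}} |f(x)| to be bounded, whereas the result above requires only a few finite moments of F.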


Probability Theory and Related Fields | 2013

The Bernstein–Orlicz norm and deviation inequalities

Sara van de Geer; Johannes Lederer


arXiv: Methodology | 2014

Tuning Lasso for sup-norm optimality

Michaël Chichignoud; Johannes Lederer; Martin J. Wainwright


Bernoulli | 2014

A robust, adaptive M-estimator for pointwise estimation in heteroscedastic regression

Michaël Chichignoud; Johannes Lederer


arXiv: Methodology | 2013

Trust, but verify: benefits and pitfalls of least-squares refitting in high dimensions

Johannes Lederer


arXiv: Statistics Theory | 2016

Oracle Inequalities for High-dimensional Prediction

Johannes Lederer; Lu Yu; Irina Gaynanova

Collaboration


Dive into Johannes Lederer's collaborations.

Top Co-Authors

Yiyuan She

Florida State University
