
Publications


Featured research published by Dimitris Tsipras.


Foundations of Computer Science | 2017

Matrix Scaling and Balancing via Box Constrained Newton's Method and Interior Point Methods

Michael B. Cohen; Aleksander Madry; Dimitris Tsipras; Adrian Vladu

In this paper, we study matrix scaling and balancing, which are fundamental problems in scientific computing, with a long line of work on them that dates back to the 1960s. We provide algorithms for both these problems that, ignoring logarithmic factors involving the dimension of the input matrix and the size of its entries, both run in time \widetilde{O}(m \log \kappa \log^2(1/\epsilon)), where \epsilon is the amount of error we are willing to tolerate. Here, \kappa represents the ratio between the largest and the smallest entries of the optimal scalings. This implies that our algorithms run in nearly-linear time whenever \kappa is quasi-polynomial, which includes, in particular, the case of strictly positive matrices. We complement our results by providing a separate algorithm that uses an interior-point method and runs in time \widetilde{O}(m^{3/2} \log(1/\epsilon)). In order to establish these results, we develop a new second-order optimization framework that enables us to treat both problems in a unified and principled manner. This framework identifies a certain generalization of linear system solving that we can use to efficiently minimize a broad class of functions, which we call second-order robust. We then show that, in the context of the specific functions capturing matrix scaling and balancing, we can leverage and generalize the work on Laplacian system solving to make the algorithms obtained via this framework very efficient.
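For context, the matrix scaling problem the abstract refers to asks for positive diagonal matrices D1, D2 such that D1 A D2 has prescribed row and column sums. The classical first-order baseline (not the paper's box-constrained Newton method) is Sinkhorn-Knopp iteration, sketched below for the doubly stochastic case; the function name and tolerances here are illustrative.

```python
import numpy as np

def sinkhorn_scale(A, tol=1e-9, max_iter=10_000):
    """Classical Sinkhorn-Knopp matrix scaling: find positive vectors r, c
    so that diag(r) @ A @ diag(c) is (approximately) doubly stochastic.
    Converges for strictly positive A, matching the easy case the
    abstract highlights."""
    n = A.shape[0]
    c = np.ones(n)
    for _ in range(max_iter):
        r = 1.0 / (A @ c)      # rescale rows so row sums become 1
        c = 1.0 / (A.T @ r)    # rescale columns so column sums become 1
        S = np.diag(r) @ A @ np.diag(c)
        if (np.abs(S.sum(axis=0) - 1).max() < tol and
                np.abs(S.sum(axis=1) - 1).max() < tol):
            break
    return r, c

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
r, c = sinkhorn_scale(A)
S = np.diag(r) @ A @ np.diag(c)  # row and column sums are ~1
```

Each Sinkhorn step is a first-order update, which is why its convergence degrades on badly conditioned inputs; the paper's contribution is a second-order method whose running time depends only logarithmically on the conditioning parameter \kappa.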


Theory of Computing Systems / Mathematical Systems Theory | 2016

Efficient Money Burning in General Domains

Dimitris Fotakis; Dimitris Tsipras; Christos Tzamos; Emmanouil Zampetakis

We study mechanism design where the objective is to maximize the residual surplus, i.e., the total value of the outcome minus the payments charged to the agents, by truthful mechanisms. The motivation comes from applications where the payments charged are not in the form of actual monetary transfers, but take the form of wasted resources. We consider a general mechanism design setting with m discrete outcomes and n multidimensional agents. We present two randomized truthful mechanisms that extract an O(log m) fraction of the maximum social surplus as residual surplus. The first mechanism achieves an O(log m)-approximation to the social surplus, which is improved to an O(1)-approximation by the second mechanism. An interesting feature of the second mechanism is that it optimizes over an appropriately restricted space of probability distributions, thus achieving an efficient tradeoff between social surplus and the total amount of payments charged to the agents.
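The residual-surplus objective can be made concrete with a toy single-item example (this illustrates the objective only, not the paper's mechanisms): a Vickrey auction maximizes social surplus but burns the second-highest value as payment, while a truthful uniform lottery charges nothing yet allocates at random. The function names and values below are hypothetical.

```python
def vickrey_residual_surplus(values):
    """Second-price (Vickrey) auction: the highest-value agent wins and
    pays the second-highest value. When payments are wasted resources,
    residual surplus = winner's value minus that burned payment."""
    v = sorted(values, reverse=True)
    return v[0] - v[1]

def lottery_residual_surplus(values):
    """Uniform lottery: truthful with zero payments, so the expected
    residual surplus equals the average agent value."""
    return sum(values) / len(values)

values = [10.0, 9.0, 1.0]
vickrey = vickrey_residual_surplus(values)   # 10 - 9 = 1.0
lottery = lottery_residual_surplus(values)   # 20/3, about 6.67
```

Here the lottery beats Vickrey on residual surplus despite its lower social surplus, which is exactly the tradeoff the abstract's second mechanism balances.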


International Conference on Learning Representations | 2018

Towards Deep Learning Models Resistant to Adversarial Attacks

Aleksander Madry; Aleksandar Makelov; Ludwig Schmidt; Dimitris Tsipras; Adrian Vladu


Neural Information Processing Systems | 2018

Adversarially Robust Generalization Requires More Data

Ludwig Schmidt; Shibani Santurkar; Dimitris Tsipras; Kunal Talwar; Aleksander Madry


arXiv: Learning | 2017

A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations

Logan Engstrom; Dimitris Tsipras; Ludwig Schmidt; Aleksander Madry


Neural Information Processing Systems | 2018

How Does Batch Normalization Help Optimization? (No, It Is Not About Internal Covariate Shift)

Shibani Santurkar; Dimitris Tsipras; Andrew Ilyas; Aleksander Madry


arXiv: Machine Learning | 2018

How Does Batch Normalization Help Optimization?

Shibani Santurkar; Dimitris Tsipras; Andrew Ilyas; Aleksander Madry


arXiv: Machine Learning | 2018

There Is No Free Lunch In Adversarial Robustness (But There Are Unexpected Benefits)

Dimitris Tsipras; Shibani Santurkar; Logan Engstrom; Alexander Turner; Aleksander Madry


arXiv: Machine Learning | 2018

Robustness May Be at Odds with Accuracy

Dimitris Tsipras; Shibani Santurkar; Logan Engstrom; Alexander Turner; Aleksander Madry

Collaboration


Dive into Dimitris Tsipras's collaborations.

Top Co-Authors

Aleksander Madry (Massachusetts Institute of Technology)
Shibani Santurkar (Massachusetts Institute of Technology)
Ludwig Schmidt (Massachusetts Institute of Technology)
Adrian Vladu (Massachusetts Institute of Technology)
Andrew Ilyas (Massachusetts Institute of Technology)
Aleksandar Makelov (Massachusetts Institute of Technology)
Christos Tzamos (Massachusetts Institute of Technology)
Emmanouil Zampetakis (Massachusetts Institute of Technology)
Logan Engstrom (Massachusetts Institute of Technology)