Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Madhu Advani is active.

Publication


Featured research published by Madhu Advani.


Journal of Statistical Mechanics: Theory and Experiment | 2013

Statistical mechanics of complex neural systems and high dimensional data

Madhu Advani; Subhaneil Lahiri; Surya Ganguli

Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.
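
As a concrete taste of the compressed sensing ideas surveyed above, here is a minimal sketch (ours, not the review's; every parameter choice is illustrative) that recovers a sparse signal from undersampled noisy measurements via iterative soft thresholding (ISTA), a simple relative of the message-passing algorithms the review analyzes. It needs only numpy.

    import numpy as np

    # Illustrative sketch: recover a k-sparse signal x from N < P noisy
    # measurements y = A x + noise by solving the lasso problem
    #   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
    # with iterative soft thresholding (ISTA). All constants are arbitrary.
    rng = np.random.default_rng(0)
    P, N, k = 200, 100, 10
    x_true = np.zeros(P)
    x_true[rng.choice(P, k, replace=False)] = rng.normal(size=k)

    A = rng.normal(size=(N, P)) / np.sqrt(N)      # random Gaussian measurement matrix
    y = A @ x_true + 0.01 * rng.normal(size=N)

    lam = 0.02
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(P)
    for _ in range(500):
        z = x - step * (A.T @ (A @ x - y))        # gradient step on the quadratic loss
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))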


Physical Review X | 2016

Statistical Mechanics of Optimal Convex Inference in High Dimensions

Madhu Advani; Surya Ganguli

To model modern large-scale datasets, we need efficient algorithms to infer a set of P unknown model parameters from N noisy measurements. What are fundamental limits on the accuracy of parameter inference, given finite signal-to-noise ratios, limited measurements, prior information, and computational tractability requirements? How can we combine prior information with measurements to achieve these limits? Classical statistics gives incisive answers to these questions as the measurement density \alpha = \frac{N}{P} \rightarrow \infty. However, these classical results are not relevant to modern high-dimensional inference problems, which instead occur at finite \alpha. We formulate and analyze high-dimensional inference as a problem in the statistical physics of quenched disorder. Our analysis uncovers fundamental limits on the accuracy of inference in high dimensions, and reveals that widely cherished inference algorithms like maximum likelihood (ML) and maximum a posteriori (MAP) inference cannot achieve these limits. We further find optimal, computationally tractable algorithms that can achieve these limits. Intriguingly, in high dimensions, these optimal algorithms become computationally simpler than MAP and ML, while still outperforming them. For example, such optimal algorithms can lead to as much as a 20% reduction in the amount of data needed to achieve the same performance as MAP. Moreover, our analysis reveals simple relations between optimal high-dimensional inference and low-dimensional scalar Bayesian inference, insights into the nature of generalization and predictive power in high dimensions, information-theoretic limits on compressed sensing, phase transitions in quadratic inference, and connections to central mathematical objects in convex optimization theory and random matrix theory.
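
To make the finite-\alpha setting concrete, here is a small numerical sketch (ours, not the paper's algorithm) comparing maximum likelihood (least squares) with MAP (ridge) inference in a Gaussian linear model at \alpha = N/P = 2. In this conjugate Gaussian case MAP coincides with the Bayes-optimal posterior mean, so it outperforms ML at finite \alpha; the paper's point is that for non-Gaussian priors and noise, neither ML nor MAP is optimal.

    import numpy as np

    # Illustrative sketch: Gaussian linear model y = A x + noise at finite
    # measurement density alpha = N / P. Compare maximum likelihood (ordinary
    # least squares) with MAP inference (ridge). For a standard Gaussian prior
    # and Gaussian noise, ridge with penalty sigma^2 is the Bayes-optimal
    # posterior mean; all constants below are arbitrary.
    rng = np.random.default_rng(0)
    P, alpha, sigma = 500, 2.0, 1.0
    N = int(alpha * P)

    x_true = rng.normal(size=P)                    # draw from the unit Gaussian prior
    A = rng.normal(size=(N, P)) / np.sqrt(P)
    y = A @ x_true + sigma * rng.normal(size=N)

    x_ml = np.linalg.lstsq(A, y, rcond=None)[0]    # maximum likelihood estimate
    x_map = np.linalg.solve(A.T @ A + sigma**2 * np.eye(P), A.T @ y)  # MAP / ridge

    def mse(x):
        return np.mean((x - x_true) ** 2)

    print(f"ML  MSE per parameter: {mse(x_ml):.3f}")
    print(f"MAP MSE per parameter: {mse(x_map):.3f}")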


bioRxiv | 2017

Environmental engineering is an emergent feature of diverse ecosystems and drives community structure

Madhu Advani; Guy Bunin; Pankaj Mehta


Journal of Statistical Mechanics: Theory and Experiment | 2018

Statistical physics of community ecology: a cavity solution to MacArthur’s consumer resource model

Madhu Advani; Guy Bunin; Pankaj Mehta


Molecular Physics | 2018

Energy–entropy competition and the effectiveness of stochastic gradient descent in machine learning

Yao Zhang; Andrew M. Saxe; Madhu Advani; Alpha A. Lee


PLOS ONE | 2017

Position and orientation inference via on-board triangulation

Madhu Advani; Daniel S. Weile


arXiv: Machine Learning | 2017

High-dimensional dynamics of generalization error in neural networks

Madhu Advani; Andrew M. Saxe


International Conference on Learning Representations | 2018

On the Information Bottleneck Theory of Deep Learning

Andrew M. Saxe; Yamini Bansal; Joel Dapello; Madhu Advani; Artemy Kolchinsky; Brendan D. Tracey; David Cox


Neural Information Processing Systems | 2016

An equivalence between high dimensional Bayes optimal inference and M-estimation

Madhu Advani; Surya Ganguli


arXiv: Machine Learning | 2018

Minnorm training: an algorithm for training overcomplete deep neural networks

Yamini Bansal; Madhu Advani; David Cox; Andrew M. Saxe

Collaboration


Dive into Madhu Advani's collaborations.

Top Co-Authors

Guy Bunin

Technion – Israel Institute of Technology
