Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Risi Kondor is active.

Publication


Featured research published by Risi Kondor.


Conference on Learning Theory | 2003

Kernels and Regularization on Graphs

Alexander J. Smola; Risi Kondor

We introduce a family of kernels on graphs based on the notion of regularization operators. This generalizes in a natural way the notion of regularization and Green's functions, as commonly used for real valued functions, to graphs. It turns out that diffusion kernels can be found as a special case of our reasoning. We show that the class of positive, monotonically decreasing functions on the unit interval leads to kernels and corresponding regularization operators.
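As a concrete illustration of the regularization-operator view, the sketch below builds two members of this kernel family on a toy graph: the diffusion kernel exp(-beta*L) and the regularized-Laplacian kernel obtained from the decreasing spectral function 1/(1 + a*lambda). The example graph and the parameter values are illustrative assumptions, not taken from the paper.

    # Minimal sketch (assumed toy graph and parameters): two graph kernels
    # from the regularization-operator family, built from the Laplacian L.
    import numpy as np
    from scipy.linalg import expm

    # Adjacency matrix of a 4-node path graph
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian

    # Diffusion kernel: spectral function exp(-beta * lambda)
    beta = 0.5
    K_diffusion = expm(-beta * L)

    # Regularized-Laplacian kernel: spectral function 1 / (1 + a * lambda)
    lam, V = np.linalg.eigh(L)
    a = 1.0
    K_reglap = (V * (1.0 / (1.0 + a * lam))) @ V.T

    # Both kernels are symmetric and built from a decreasing function of lam
    print(np.allclose(K_diffusion, K_diffusion.T), np.all(lam >= -1e-12))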


Physical Review Letters | 2010

Gaussian Approximation Potentials: The Accuracy of Quantum Mechanics, without the Electrons

Albert P. Bartók; M. C. Payne; Risi Kondor; Gábor Csányi

We introduce a class of interatomic potential models that can be automatically generated from data consisting of the energies and forces experienced by atoms, as derived from quantum mechanical calculations. The models do not have a fixed functional form and hence are capable of modeling complex potential energy landscapes. They are systematically improvable with more data. We apply the method to bulk crystals, and test it by calculating properties at high temperatures. Using the interatomic potential to generate the long molecular dynamics trajectories required for such calculations saves orders of magnitude in computational cost.
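The regression backbone behind such data-driven potentials can be sketched as Gaussian process (kernel ridge) regression from descriptor vectors to energies. Everything below is a placeholder to show only the fitting step: the synthetic descriptors, the squared-exponential kernel and the noise level are assumptions, and the actual GAP models also incorporate forces and local-environment descriptors.

    # Minimal sketch of Gaussian process regression from (assumed) descriptor
    # vectors to energies, the statistical core of data-driven potentials.
    import numpy as np

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(50, 8))        # placeholder descriptor vectors
    y_train = np.sin(X_train).sum(axis=1)     # surrogate "energies"

    def rbf(A, B, sigma=1.0):
        # Squared-exponential kernel between rows of A and rows of B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / sigma ** 2)

    noise = 1e-3
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)       # GP posterior mean weights

    X_test = rng.normal(size=(5, 8))
    y_pred = rbf(X_test, X_train) @ alpha     # predicted energies
    print(y_pred)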


Physical Review B | 2013

On representing chemical environments

Albert P. Bartók; Risi Kondor; Gábor Csányi

We review some recently published methods to represent atomic neighbourhood environments, and analyse their relative merits in terms of their faithfulness and suitability for fitting potential energy surfaces. The crucial properties that such representations (sometimes called descriptors) must have are differentiability with respect to moving the atoms, and invariance to the basic symmetries of physics: rotation, reflection, translation, and permutation of atoms of the same species. We demonstrate that certain widely used descriptors that initially look quite different are specific cases of a general approach, in which a finite set of basis functions with increasing angular wave numbers are used to expand the atomic neighbourhood density function. Using the example system of small clusters, we quantitatively show that this expansion needs to be carried to higher and higher wave numbers as the number of neighbours increases in order to obtain a faithful representation, and that variants of the descriptors converge at very different rates. We also propose an altogether new approach, called Smooth Overlap of Atomic Positions (SOAP), that sidesteps these difficulties by directly defining the similarity between any two neighbourhood environments, and show that it is still closely connected to the invariant descriptors. We test the performance of the various representations by fitting models to the potential energy surface of small silicon clusters and the bulk crystal.
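A minimal sketch of the kind of rotation-invariant expansion discussed above: the angular power spectrum of a neighbourhood of point atoms, computed through the spherical-harmonic addition theorem so that only Legendre polynomials of pair angles are needed. Radial basis functions, Gaussian smearing of the density and multiple species, all part of the descriptors reviewed in the paper, are omitted; the example environment is random.

    # Sketch: rotation-invariant angular power spectrum of a neighbourhood.
    # By the addition theorem, p_l = (2l+1)/(4*pi) * sum_{i,j} P_l(cos th_ij),
    # so no explicit spherical harmonics are required.
    import numpy as np
    from numpy.polynomial.legendre import legval

    def angular_power_spectrum(positions, l_max=6):
        """positions: (n_neighbours, 3) relative neighbour coordinates."""
        unit = positions / np.linalg.norm(positions, axis=1, keepdims=True)
        cosines = np.clip(unit @ unit.T, -1.0, 1.0)   # cos(theta_ij)
        p = []
        for l in range(l_max + 1):
            coeffs = np.zeros(l + 1)
            coeffs[l] = 1.0                           # select P_l
            p.append((2 * l + 1) / (4 * np.pi) * legval(cosines, coeffs).sum())
        return np.array(p)

    env = np.random.default_rng(1).normal(size=(5, 3))
    Q, _ = np.linalg.qr(np.random.default_rng(2).normal(size=(3, 3)))
    # Rotating (or reflecting) the environment leaves the spectrum unchanged
    print(np.allclose(angular_power_spectrum(env),
                      angular_power_spectrum(env @ Q.T)))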


Conference on Learning Theory | 2003

Bhattacharyya and expected likelihood kernels

Tony Jebara; Risi Kondor

We introduce a new class of kernels between distributions. These induce a kernel on the input space between data points by associating to each datum a generative model fit to the data point individually. The kernel is then computed by integrating the product of the two generative models corresponding to two data points. This kernel permits discriminative estimation via, for instance, support vector machines, while exploiting the properties, assumptions, and invariances inherent in the choice of generative model. It satisfies Mercer’s condition and can be computed in closed form for a large class of models, including exponential family models, mixtures, hidden Markov models and Bayesian networks. For other models the kernel can be approximated by sampling methods. Experiments are shown for multinomial models in text classification and for hidden Markov models for protein sequence classification.
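For multinomial models the closed form mentioned above is simply the Bhattacharyya affinity between the two fitted distributions, sum_i sqrt(theta_i * theta'_i). The toy word-count documents and the smoothing constant in the sketch below are illustrative assumptions.

    # Sketch: Bhattacharyya kernel between multinomial models fitted to two
    # individual data points (toy word-count vectors over a small vocabulary).
    import numpy as np

    def fit_multinomial(counts, smoothing=1.0):
        counts = np.asarray(counts, dtype=float) + smoothing  # Laplace smoothing
        return counts / counts.sum()

    def bhattacharyya_kernel(theta_a, theta_b):
        # Closed form of the integral of sqrt(p * p') for multinomials
        return np.sqrt(theta_a * theta_b).sum()

    doc1 = [3, 0, 1, 2]          # word counts, 4-word vocabulary (made up)
    doc2 = [1, 1, 0, 4]
    k12 = bhattacharyya_kernel(fit_multinomial(doc1), fit_multinomial(doc2))
    k11 = bhattacharyya_kernel(fit_multinomial(doc1), fit_multinomial(doc1))
    print(k12, k11)              # k11 == 1: a distribution matches itself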


International Conference on Machine Learning | 2009

The graphlet spectrum

Risi Kondor; Nino Shervashidze; Karsten M. Borgwardt

Current graph kernels suffer from two limitations: graph kernels based on counting particular types of subgraphs ignore the relative position of these subgraphs to each other, while graph kernels based on algebraic methods are limited to graphs without node labels. In this paper we present the graphlet spectrum, a system of graph invariants derived by means of group representation theory that capture information about the number as well as the position of labeled subgraphs in a given graph. In our experimental evaluation the graphlet spectrum outperforms state-of-the-art graph kernels.
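The representation-theoretic construction itself is not reproduced here, but the count-only baseline it improves on is easy to sketch: counting labelled triangles in a vertex-labelled graph records how many subgraphs of each labelled type occur, while discarding where they sit relative to one another. The example graph and labels below are made up.

    # Sketch of the count-only baseline the graphlet spectrum goes beyond:
    # labelled triangle counts, which ignore the relative position of the
    # subgraphs within the graph.
    import numpy as np
    from itertools import combinations
    from collections import Counter

    def labelled_triangle_counts(A, labels):
        """A: symmetric 0/1 adjacency matrix; labels: vertex label list."""
        counts = Counter()
        n = len(labels)
        for i, j, k in combinations(range(n), 3):
            if A[i, j] and A[j, k] and A[i, k]:
                counts[tuple(sorted((labels[i], labels[j], labels[k])))] += 1
        return counts

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]])
    print(labelled_triangle_counts(A, ["C", "N", "C", "O"]))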


International Conference on Machine Learning | 2008

The skew spectrum of graphs

Risi Kondor; Karsten M. Borgwardt

The central issue in representing graph-structured data instances in learning algorithms is designing features which are invariant to permuting the numbering of the vertices. We present a new system of invariant graph features which we call the skew spectrum of graphs. The skew spectrum is based on mapping the adjacency matrix of any (weighted, directed, unlabeled) graph to a function on the symmetric group and computing bispectral invariants. The reduced form of the skew spectrum is computable in O(n^3) time, and experiments show that on several benchmark datasets it can outperform state-of-the-art graph kernels.
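The bispectral construction is beyond a short sketch, but the invariance requirement it is built to satisfy is easy to demonstrate: a usable graph feature must not change when the vertices are renumbered. The check below uses sorted adjacency eigenvalues as a simple stand-in invariant; it is not the skew spectrum, which is strictly more discriminative.

    # Sketch of the permutation-invariance requirement: renumbering the
    # vertices of a graph must leave the feature vector unchanged.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 6
    A = rng.integers(0, 2, size=(n, n))
    A = np.triu(A, 1)
    A = A + A.T                               # random undirected graph

    perm = rng.permutation(n)
    P = np.eye(n)[perm]                       # permutation matrix
    A_perm = P @ A @ P.T                      # same graph, vertices renumbered

    feat = lambda M: np.sort(np.linalg.eigvalsh(M))   # stand-in invariant
    print(np.allclose(feat(A), feat(A_perm)))         # True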


Journal of Chemical Physics | 2018

Predicting molecular properties with covariant compositional networks

Truong Son Hy; Shubhendu Trivedi; Horace Pan; Brandon M. Anderson; Risi Kondor

Density functional theory (DFT) is the most successful and widely used approach for computing the electronic structure of matter. However, for tasks involving large sets of candidate molecules, running DFT separately for every possible compound of interest is prohibitively expensive. In this paper, we propose a neural network based machine learning algorithm which, assuming a sufficiently large training sample of actual DFT results, can instead learn to predict certain properties of molecules purely from their molecular graphs. Our algorithm is based on the recently proposed covariant compositional networks framework and involves tensor reduction operations that are covariant with respect to permutations of the atoms. This new approach avoids some of the representational limitations of other neural networks that are popular in learning from molecular graphs and yields promising results in numerical experiments on the Harvard Clean Energy Project and QM9 molecular datasets.
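The key structural property, covariance of the tensor operations with respect to permutations of the atoms, can be illustrated with a single message-passing layer: permuting the atoms permutes the output node features in the same way. The dimensions, weights and ReLU nonlinearity below are placeholders, not the architecture of covariant compositional networks, which use higher-order tensor activations.

    # Sketch of a permutation-covariant message-passing layer on a molecular
    # graph: relabelling the atoms relabels the output features identically.
    import numpy as np

    rng = np.random.default_rng(4)
    n, d_in, d_out = 5, 3, 4
    A = rng.integers(0, 2, size=(n, n))
    A = np.triu(A, 1)
    A = A + A.T                                   # toy molecular graph
    H = rng.normal(size=(n, d_in))                # per-atom input features
    W_self = rng.normal(size=(d_in, d_out))
    W_nbr = rng.normal(size=(d_in, d_out))

    def layer(A, H):
        # Aggregate neighbour features, mix with self features, apply ReLU
        return np.maximum(0.0, H @ W_self + A @ H @ W_nbr)

    perm = rng.permutation(n)
    P = np.eye(n)[perm]
    out = layer(A, H)
    out_perm = layer(P @ A @ P.T, P @ H)
    print(np.allclose(P @ out, out_perm))         # True: covariant layer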


Computer Vision and Pattern Recognition | 2017

The Incremental Multiresolution Matrix Factorization Algorithm

Vamsi K. Ithapu; Risi Kondor; Sterling C. Johnson; Vikas Singh

Multiresolution analysis and matrix factorization are foundational tools in computer vision. In this work, we study the interface between these two distinct topics and obtain techniques to uncover hierarchical block structure in symmetric matrices – an important aspect in the success of many vision problems. Our new algorithm, the incremental multiresolution matrix factorization, uncovers such structure one feature at a time, and hence scales well to large matrices. We describe how this multiscale analysis goes much farther than what a direct global factorization of the data can identify. We evaluate the efficacy of the resulting factorizations for relative leveraging within regression tasks using medical imaging data. We also use the factorization on representations learned by popular deep networks, providing evidence of their ability to infer semantic relationships even when they are not explicitly trained to do so. We show that this algorithm can be used as an exploratory tool to improve the network architecture, and within numerous other settings in vision.
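The elementary operation in multiresolution-style factorizations of a symmetric matrix is a sparse (Givens) rotation applied to one pair of rows and columns. The sketch below zeroes a single off-diagonal entry using the classical Jacobi angle; that angle choice, and the absence of any incremental ordering of rotations, are assumptions standing in for the algorithm described in the paper.

    # Sketch of the elementary Givens-rotation step used by multiresolution-
    # style factorizations: rotate rows/columns (i, j) of a symmetric matrix
    # so that the off-diagonal entry A[i, j] becomes zero.
    import numpy as np

    def givens_step(A, i, j):
        A = A.copy()
        # Classical Jacobi angle that annihilates A[i, j] (an assumption here)
        theta = 0.5 * np.arctan2(2 * A[i, j], A[i, i] - A[j, j])
        G = np.eye(len(A))
        G[i, i] = G[j, j] = np.cos(theta)
        G[i, j], G[j, i] = np.sin(theta), -np.sin(theta)
        return G @ A @ G.T

    S = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.2],
                  [0.5, 0.2, 2.0]])
    S1 = givens_step(S, 0, 1)
    print(round(S1[0, 1], 12))    # ~0: the targeted entry is rotated away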


arXiv: Social and Information Networks | 2016

Data Mining When Each Data Point is a Network

Karthikeyan Rajendran; Assimakis A. Kattis; Alexander Holiday; Risi Kondor; Ioannis G. Kevrekidis

We discuss the problem of extending data mining approaches to cases in which data points arise in the form of individual graphs. Being able to find the intrinsic low-dimensionality in ensembles of graphs can be useful in a variety of modeling contexts, especially when coarse-graining the detailed graph information is of interest. One of the main challenges in mining graph data is the definition of a suitable pairwise similarity metric in the space of graphs. We explore two practical approaches to this problem: one based on finding subgraph densities, and one using spectral information. The approach is illustrated on three test data sets (ensembles of graphs); two of these are obtained from standard graph-generating algorithms from the literature, while the graphs in the third example are sampled as dynamic snapshots from an evolving network simulation. We further combine these approaches with equation-free techniques, demonstrating how such data mining can enhance scientific computation of network evolution dynamics.
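The overall pipeline, summarising each graph in the ensemble by a small feature vector and then embedding the ensemble in a low-dimensional space, can be sketched as follows. Edge density plus a few Laplacian eigenvalues stand in for the subgraph-density and spectral similarities discussed above, PCA stands in for the embedding step, and the random-graph ensemble is synthetic.

    # Sketch: each data point is a graph; summarise every graph by a small
    # feature vector and embed the whole ensemble in two dimensions.
    import numpy as np

    rng = np.random.default_rng(5)

    def random_graph(n, p):
        A = (rng.random((n, n)) < p).astype(float)
        A = np.triu(A, 1)
        return A + A.T

    def graph_features(A, k=3):
        n = len(A)
        density = A.sum() / (n * (n - 1))                     # edge density
        lap = np.diag(A.sum(1)) - A
        lam = np.sort(np.linalg.eigvalsh(lap))[:k]            # spectral info
        return np.concatenate(([density], lam))

    graphs = [random_graph(30, p) for p in [0.1] * 20 + [0.4] * 20]
    X = np.array([graph_features(A) for A in graphs])
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    embedding = X @ Vt[:2].T        # 2-D PCA embedding of the ensemble
    print(embedding.shape)          # (40, 2): one point per graph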


Archive | 2011

Non-commutative harmonic analysis in multi-object tracking

Risi Kondor

Simultaneously tracking n targets in space involves two closely coupled tasks: estimating the current positions x_1, x_2, ..., x_n of their tracks, and estimating the assignment σ: {1, 2, ..., n} → {1, 2, ..., n} of targets to tracks. While the former is often a relatively straightforward extension of the single-target case, the latter, called identity management or data association, is a fundamentally combinatorial problem, which is harder to fit into a computationally efficient probabilistic framework. Identity management is difficult because the number of possible assignments grows as n!. This means that for n greater than about 10 or 12, representing the distribution p(σ) explicitly as an array of n! numbers is generally not possible. In this chapter we discuss a solution to this problem based on the generalisation of harmonic analysis to non-commutative groups, specifically, in our case, the group of permutations. According to this theory, the Fourier transform of p takes the form p̂(λ) = Σ_{σ ∈ S_n} p(σ) ρ_λ(σ), where S_n denotes the group of permutations of n objects, λ is a combinatorial object called an integer partition, and ρ_λ is a special matrix-valued function called a representation. These terms are defined in our short primer on representation theory in Section 13.2. What is important to note is that, since ρ_λ is matrix-valued, each Fourier component p̂(λ) is a matrix, not just a scalar. Apart from this surprising feature, non-commutative Fourier transforms are very similar to their familiar commutative counterparts. In particular, we argue that there is a well-defined sense in which some of the p̂(λ) matrices are the ‘low-frequency’ components of p, and approximating p with this subset of components is optimal. A large part of this chapter is focused on how to define such a notion of ‘frequency’, and how to find the corresponding Fourier components. We describe two seemingly very different approaches to answering this question, and find, reassuringly, that they give exactly the same answer. Of course, in addition to a compact way of representing p, efficient inference also demands fast algorithms for updating p with observations. Section 13.6 gives an overview of the fast Fourier methods that are employed for this purpose.
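For very small n the lowest non-trivial Fourier information can be computed by brute force: the matrix of first-order marginals M[i, j] = P(σ(j) = i) is the Fourier transform of p at the defining (permutation) representation, which carries exactly the components indexed by the partitions (n) and (n-1, 1). The enumeration below is only feasible for tiny n and uses a synthetic distribution; practical identity management relies on the irreducible representations and the fast transforms discussed in the chapter.

    # Sketch: brute-force 'low-frequency' summary of a distribution over
    # permutations for small n: the first-order marginal matrix
    # M[i, j] = P(sigma(j) = i), i.e. the Fourier transform at the defining
    # (permutation) representation.
    import numpy as np
    from itertools import permutations

    n = 4
    perms = list(permutations(range(n)))
    rng = np.random.default_rng(6)
    p = rng.random(len(perms))
    p /= p.sum()                                  # synthetic p(sigma)

    M = np.zeros((n, n))
    for prob, sigma in zip(p, perms):
        for j in range(n):
            M[sigma[j], j] += prob                # accumulate P(sigma(j) = i)

    print(M.round(3))
    print(M.sum(axis=0), M.sum(axis=1))           # doubly stochastic marginals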

Collaboration


Dive into Risi Kondor's collaborations.

Top Co-Authors

Shubhendu Trivedi, Worcester Polytechnic Institute
Vikas Singh, University of Wisconsin-Madison
Deepti Pachauri, University of Wisconsin-Madison
Pramod Kaushik Mudrakarta, Memorial Sloan Kettering Cancer Center