Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rati Gelashvili is active.

Publication


Featured research published by Rati Gelashvili.


Principles of Distributed Computing | 2015

Fast and Exact Majority in Population Protocols

Dan Alistarh; Rati Gelashvili; Milan Vojnovic

Population protocols, roughly defined as systems consisting of large numbers of simple identical agents, interacting at random and updating their state following simple rules, are an important research topic at the intersection of distributed computing and biology. One of the fundamental tasks that a population protocol may solve is majority: each node starts in one of two states; the goal is for all nodes to reach a correct consensus on which of the two states was initially the majority. Despite considerable research effort, known protocols for this problem are either exact but slow (taking linear parallel time to converge), or fast but approximate (with non-zero probability of error). In this paper, we show that this trade-off between precision and speed is not inherent. We present a new protocol called Average and Conquer (AVC) that solves majority exactly in expected parallel convergence time O(log n/(sε) + log n log s), where n is the number of nodes, εn is the initial node advantage of the majority state, and s = Ω(log n log log n) is the number of states the protocol employs. This shows that the majority problem can be solved exactly in time poly-logarithmic in n, provided that the memory per node is s = Ω(1/ε + log n log(1/ε)). On the negative side, we establish a lower bound of Ω(1/ε) on the expected parallel convergence time for the case of four memory states per node, and a lower bound of Ω(log n) parallel time for protocols using any number of memory states per node.
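
As a concrete illustration of the population-protocol model (and deliberately not of the AVC protocol itself, whose state space and update rules are more involved), the Python sketch below simulates the classical four-state exact-majority protocol, in which opposite strong opinions cancel and any surviving strong agent recruits the weak ones. The population size and initial split are made-up parameters, and ties are not handled.

    import random

    # Toy simulation of the classical four-state exact-majority population protocol.
    # This is NOT the paper's Average-and-Conquer (AVC) protocol; it only illustrates
    # the model: anonymous agents, random pairwise interactions, simple update rules.
    # States: 'A', 'B' are strong opinions; 'a', 'b' are weak opinions.
    # Rules: A+B -> a+b (strong opposites cancel; #A - #B stays invariant),
    #        A+b -> A+a and B+a -> B+b (a strong agent recruits weak agents).

    OUTPUT = {'A': 'A', 'a': 'A', 'B': 'B', 'b': 'B'}   # the value each state outputs

    def interact(pop, i, j):
        x, y = pop[i], pop[j]
        if {x, y} == {'A', 'B'}:
            pop[i], pop[j] = 'a', 'b'
        elif {x, y} == {'A', 'b'}:
            pop[i], pop[j] = ('A', 'a') if x == 'A' else ('a', 'A')
        elif {x, y} == {'B', 'a'}:
            pop[i], pop[j] = ('B', 'b') if x == 'B' else ('b', 'B')

    def run(n_majority=60, n_minority=40, seed=0):
        random.seed(seed)
        pop = ['A'] * n_majority + ['B'] * n_minority   # initial strong opinions
        interactions = 0
        while len({OUTPUT[s] for s in pop}) > 1:        # until all agents agree on an output
            i, j = random.sample(range(len(pop)), 2)    # scheduler picks a random pair
            interact(pop, i, j)
            interactions += 1
        return OUTPUT[pop[0]], interactions / len(pop)  # decision, approx. parallel time

    if __name__ == "__main__":
        decision, parallel_time = run()
        print(f"all agents output {decision} after ~{parallel_time:.0f} parallel time units")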


Robust and Online Large-Scale Optimization | 2009

Mining Railway Delay Dependencies in Large-Scale Real-World Delay Data

Holger Flier; Rati Gelashvili; Thomas Graffagnino; Marc Nunkesser

The propagation of delays between trains has a considerable impact on railway operations. Ideally, planners would like to create timetables that avoid such propagation as much as possible. To improve existing timetables, tools for automatic detection of systematic dependencies of delays among trains would be of great aid. We present efficient algorithms to detect two of the most important types of dependencies, namely dependencies due to resource conflicts and due to maintained connections. We give experimental results on real-world data that demonstrate the practical applicability of our algorithms.
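
The paper's algorithms are not reproduced here, but the following Python sketch conveys, in heavily simplified form, what mining delay dependencies from historical data can look like: it merely counts how often one train is delayed at a station on the same day as an earlier delayed train, as a crude candidate signal. The record schema, thresholds, and function names are invented for illustration and are not from the paper.

    from collections import defaultdict

    # Crude toy for spotting candidate delay dependencies in historical records.
    # NOT the paper's algorithms, which use timetable and resource information to
    # distinguish dependencies caused by resource conflicts from those caused by
    # maintained connections.  Assumed (made-up) record schema:
    #   (day, train_id, station, scheduled_minute, delay_minutes)

    def candidate_dependencies(records, min_delay=3, window=30, min_support=5):
        by_station_day = defaultdict(list)
        for day, train, station, sched, delay in records:
            by_station_day[(station, day)].append((sched, train, delay))

        support = defaultdict(int)
        for events in by_station_day.values():
            events.sort()                                  # order by scheduled time
            for k, (sched_a, a, delay_a) in enumerate(events):
                if delay_a < min_delay:
                    continue
                for sched_b, b, delay_b in events[k + 1:]:
                    if sched_b - sched_a > window:         # only nearby followers
                        break
                    if b != a and delay_b >= min_delay:
                        support[(a, b)] += 1               # a's delay co-occurs with b's

        # pairs that are co-delayed on many days are candidates for closer inspection
        return {pair: count for pair, count in support.items() if count >= min_support}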


Proceedings of the National Academy of Sciences of the United States of America | 2014

Sparse sign-consistent Johnson-Lindenstrauss matrices: compression with neuroscience-based constraints.

Zeyuan Allen-Zhu; Rati Gelashvili; Silvio Micali; Nir Shavit

Significance: Significant biological evidence indicates that the brain may perform some form of compression. To be meaningful, such compression should preserve pairwise correlation of the input data. It is mathematically well known that multiplying the input vectors by a sparse and fixed random matrix A achieves the desired compression. But, to implement such an approach in the brain via a synaptic-connectivity matrix, A should also be sign consistent: that is, all entries in a single column must be either all nonnegative or all nonpositive. This is so because most neurons are either excitatory or inhibitory. We prove that sparse sign-consistent matrices can deliver the desired compression, lending credibility to the hypothesis that correlation-preserving compression occurs in the brain via synaptic-connectivity matrices. Johnson–Lindenstrauss (JL) matrices implemented by sparse random synaptic connections are thought to be a prime candidate for how convergent pathways in the brain compress information. However, to date, there is no complete mathematical support for such implementations given the constraints of real neural tissue. The fact that neurons are either excitatory or inhibitory implies that every so implementable JL matrix must be sign consistent (i.e., all entries in a single column must be either all nonnegative or all nonpositive), and the fact that any given neuron connects to a relatively small subset of other neurons implies that the JL matrix should be sparse. We construct sparse JL matrices that are sign consistent and prove that our construction is essentially optimal. Our work answers a mathematical question that was triggered by earlier work and is necessary to justify the existence of JL compression in the brain, and emphasizes that inhibition is crucial if neurons are to perform efficient, correlation-preserving compression.
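
To make "correlation-preserving compression" concrete, here is a minimal numerical sketch of ordinary Johnson-Lindenstrauss compression with a dense Gaussian random matrix. It is deliberately not the sparse, sign-consistent construction analyzed in the paper, which requires a far more careful design; all dimensions below are arbitrary illustrative choices.

    import numpy as np

    # Toy Johnson-Lindenstrauss demo: compress unit vectors with a dense Gaussian
    # random matrix and check that pairwise inner products (correlations) survive.
    # Deliberately NOT the paper's sparse, sign-consistent construction.
    rng = np.random.default_rng(0)

    n, m, num_vecs = 10_000, 400, 20                 # ambient dim, compressed dim, #vectors
    A = rng.standard_normal((m, n)) / np.sqrt(m)     # so that E[<Ax, Ay>] = <x, y>

    X = rng.standard_normal((num_vecs, n))
    X /= np.linalg.norm(X, axis=1, keepdims=True)    # unit vectors: inner product = correlation

    before = X @ X.T                                 # pairwise correlations in R^n
    Y = X @ A.T                                      # compressed vectors in R^m
    after = Y @ Y.T                                  # pairwise inner products after compression

    print("max |correlation distortion|:", np.max(np.abs(before - after)))
    # typically on the order of 0.1 for this compression ratio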


Principles of Distributed Computing | 2016

A Complexity-Based Hierarchy for Multiprocessor Synchronization: [Extended Abstract]

Faith Ellen; Rati Gelashvili; Nir Shavit; Leqi Zhu

For many years, Herlihy's elegant computability-based Consensus Hierarchy has been our best explanation of the relative power of various types of multiprocessor synchronization objects when used in deterministic algorithms. However, key to this hierarchy is treating these instructions as distinct objects, an approach that is far from the real world, where multiprocessor programs apply synchronization instructions to collections of arbitrary memory locations. We were surprised to realize that, when considering instructions applied to memory locations, the computability-based hierarchy collapses. This leaves open the question of how to better capture the power of various synchronization instructions. In this paper, we provide an approach to answering this question. We present a hierarchy of synchronization instructions, classified by their space complexity in solving obstruction-free consensus. Our hierarchy provides a classification of combinations of known instructions that seems to fit with our intuition of how useful some are in practice, while questioning the effectiveness of others. We prove an essentially tight characterization of the power of buffered read and write instructions. Interestingly, we show a similar result for multi-location atomic assignments.
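
As a small illustration of why classifying synchronization instructions by power is natural, the sketch below shows the folklore construction in which a single memory location supporting compare-and-swap solves consensus for any number of processes. It is written in Python with a lock standing in for hardware atomicity, and it is not a protocol from the paper.

    import threading

    # Folklore construction: one memory location with compare-and-swap (CAS) solves
    # consensus for any number of processes, in constant space.  The lock below only
    # simulates the atomicity that a hardware CAS instruction would provide.

    class CASLocation:
        def __init__(self):
            self._value = None
            self._lock = threading.Lock()

        def compare_and_swap(self, expected, new):
            with self._lock:
                if self._value == expected:
                    self._value = new
                    return True
                return False

        def read(self):
            with self._lock:
                return self._value

    def propose(location, my_value):
        location.compare_and_swap(None, my_value)    # only the first CAS can succeed
        return location.read()                       # everyone decides the winner's value

    if __name__ == "__main__":
        loc = CASLocation()
        decisions = []
        threads = [threading.Thread(target=lambda v=v: decisions.append(propose(loc, v)))
                   for v in range(8)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(set(decisions))                        # a single decided value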


IEEE Transactions on Information Theory | 2016

Restricted Isometry Property for General p-Norms

Zeyuan Allen-Zhu; Rati Gelashvili; Ilya P. Razenshteyn

The restricted isometry property (RIP) is a fundamental property of a matrix, which enables sparse recovery. Informally, an m × n matrix satisfies RIP of order k for the ℓp norm if it approximately preserves the ℓp norm of every vector with at most k nonzero coordinates.
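
For reference, one standard quantitative way to state the property (a paraphrase; the paper's exact parameterization of the distortion may differ) is that an m × n matrix A has the RIP of order k with distortion D ≥ 1 for the ℓp norm when

    \|x\|_p \;\le\; \|Ax\|_p \;\le\; D \, \|x\|_p \quad \text{for every } x \in \mathbb{R}^n \text{ with at most } k \text{ nonzero coordinates.}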


Principles of Distributed Computing | 2018

Revisionist Simulations: A New Approach to Proving Space Lower Bounds

Faith Ellen; Rati Gelashvili; Leqi Zhu



SIGACT News | 2018

Recent Algorithmic Advances in Population Protocols

Dan Alistarh; Rati Gelashvili



Distributed Computing | 2018

On the optimal space complexity of consensus for anonymous processes

Rati Gelashvili



Symposium on Computational Geometry | 2015

Restricted Isometry Property for General p-Norms.

Zeyuan Allen-Zhu; Rati Gelashvili; Ilya P. Razenshteyn


Collaboration


Dive into Rati Gelashvili's collaborations.

Top Co-Authors

Dan Alistarh
Institute of Science and Technology Austria

Nir Shavit
Massachusetts Institute of Technology

Ilya P. Razenshteyn
Massachusetts Institute of Technology

Silvio Micali
Massachusetts Institute of Technology

Leqi Zhu
University of Toronto

Ronald L. Rivest
Massachusetts Institute of Technology