Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Amin Karbasi is active.

Publication


Featured research published by Amin Karbasi.


IEEE Transactions on Information Theory | 2012

Graph-Constrained Group Testing

Mahdi Cheraghchi; Amin Karbasi; Soheil Mohajer; Venkatesh Saligrama

Nonadaptive group testing involves grouping arbitrary subsets of n items into different pools. Each pool is then tested and defective items are identified. A fundamental question involves minimizing the number of pools required to identify at most d defective items. Motivated by applications in network tomography, sensor networks, and infection propagation, a variation of group testing problems on graphs is formulated. Unlike conventional group testing problems, each group here must conform to the constraints imposed by a graph. For instance, items can be associated with vertices and each pool is any set of nodes that must be path connected. In this paper, a test is associated with a random walk. In this context, conventional group testing corresponds to the special case of a complete graph on n vertices. For interesting classes of graphs a rather surprising result is obtained, namely, that the number of tests required to identify d defective items is substantially similar to what is required in conventional group testing problems, where no such constraints on pooling are imposed. Specifically, if T(n) corresponds to the mixing time of the graph G, it is shown that with m = O(d²T²(n) log(n/d)) nonadaptive tests, one can identify the defective items. Consequently, for the Erdős–Rényi random graph G(n, p), as well as expander graphs with constant spectral gap, it follows that m = O(d² log³ n) nonadaptive tests are sufficient to identify d defective items. Next, a specific scenario is considered that arises in network tomography, for which it is shown that m = O(d³ log³ n) nonadaptive tests are sufficient to identify d defective items. Noisy counterparts of the graph-constrained group testing problem are considered, for which parallel results are developed. We also briefly discuss extensions to compressive sensing on graphs.
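As an illustration of the random-walk test design (a hypothetical sketch, not the authors' code; the graph, walk length, and the simple decoder below are all illustrative choices), each test pools the vertices visited by one random walk, and a pool is positive exactly when it contains a defective vertex:

```python
import random
import networkx as nx  # assumed dependency for graph handling

def random_walk_tests(G, num_tests, walk_len, rng=random.Random(0)):
    """Build a pooling design: each test is the set of nodes hit by one random walk."""
    tests = []
    nodes = list(G.nodes)
    for _ in range(num_tests):
        v = rng.choice(nodes)
        pool = {v}
        for _ in range(walk_len):
            nbrs = list(G.neighbors(v))
            if not nbrs:
                break
            v = rng.choice(nbrs)
            pool.add(v)
        tests.append(pool)
    return tests

def decode(tests, outcomes):
    """Simple noiseless decoder: items appearing in any negative pool are cleared;
    whatever remains in the positive pools is declared defective."""
    cleared = set()
    for pool, positive in zip(tests, outcomes):
        if not positive:
            cleared |= pool
    candidates = set()
    for pool, positive in zip(tests, outcomes):
        if positive:
            candidates |= pool - cleared
    return candidates

# Toy usage on an Erdős–Rényi graph with two defective nodes.
G = nx.erdos_renyi_graph(200, 0.05, seed=1)
defectives = {3, 42}
tests = random_walk_tests(G, num_tests=150, walk_len=30)
outcomes = [bool(pool & defectives) for pool in tests]
print(decode(tests, outcomes))  # ideally recovers {3, 42}
```

The walk length here stands in for the mixing-time-dependent factor in the m = O(d²T²(n) log(n/d)) bound; with too few or too short walks the simple decoder may return extra candidates.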


IEEE Transactions on Information Theory | 2011

Group Testing With Probabilistic Tests: Theory, Design and Application

Mahdi Cheraghchi; Ali Hormati; Amin Karbasi; Martin Vetterli

Identification of defective members of large populations has been widely studied in the statistics community under the name of group testing. It involves grouping subsets of items into different pools and detecting defective members based on the set of test results obtained for each pool. In a classical noiseless group testing setup, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in the sense that the existence of a defective member in a pool results in a positive test outcome for that pool. However, this may not always be a valid assumption in some cases of interest. In particular, we consider the case where the defective items in a pool can become independently inactive with a certain probability. Hence, one may obtain a negative test result for a pool despite it containing some defective items. As a result, any sampling and reconstruction method should be able to cope with two different types of uncertainty, i.e., the unknown set of defective items and the partially unknown, probabilistic testing procedure. In this work, motivated by the application of detecting infected people in viral epidemics, we design nonadaptive sampling procedures that allow successful identification of the defective items through a set of probabilistic tests. Our design requires only a small number of tests to single out the defective items. In particular, for a population of size N and at most K defective items with activation probability p, our results show that M = O(K² log(N/K)/p³) tests are sufficient if the sampling procedure should work for all possible sets of defective items, while M = O(K log(N)/p³) tests are enough to be successful for any single set of defective items. Moreover, we show that the defective members can be recovered using a simple reconstruction algorithm with complexity O(MN).
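A minimal simulation of the probabilistic-test model together with a simple O(MN) scoring decoder (a toy sketch under assumed parameters, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(N=1000, K=5, M=400, p=0.6, q=0.05):
    """Toy model: q is the pooling probability; p is the probability that a
    defective item 'activates' in a pool it belongs to."""
    A = rng.random((M, N)) < q                       # random pooling matrix
    defectives = rng.choice(N, size=K, replace=False)
    activations = (rng.random((M, N)) < p) & A       # which defectives fire in which pools
    y = activations[:, defectives].any(axis=1)       # a pool is positive if any defective fires
    return A, y, set(int(i) for i in defectives)

def threshold_decoder(A, y, K):
    """O(MN) decoder sketch: score each item by the fraction of its pools that
    came back positive and return the K highest-scoring items."""
    counts = A.sum(axis=0)                           # pools containing each item
    hits = (A & y[:, None]).sum(axis=0)              # positive pools containing each item
    scores = np.where(counts > 0, hits / np.maximum(counts, 1), 0.0)
    return set(int(i) for i in np.argsort(-scores)[:K])

A, y, truth = simulate()
print(threshold_decoder(A, y, K=5) == truth)         # True on most random draws
```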


Information Theory Workshop | 2010

Sensor network localization from local connectivity: Performance analysis for the MDS-MAP algorithm

Sewoong Oh; Andrea Montanari; Amin Karbasi

Sensor localization from only connectivity information is a highly challenging problem. To this end, our result establishes, for the first time, an analytic bound on the performance of the popular MDS-MAP algorithm based on multidimensional scaling. For a network consisting of n sensors positioned randomly on a unit square and a given radio range r = o(1), we show that the resulting error is bounded, decreasing at a rate that is inversely proportional to r, when only connectivity information is given. The same bound holds for the range-based model, when we have approximate measurements of the distances, and the same algorithm can be applied without any modification.
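The MDS-MAP pipeline that the bound applies to can be sketched as follows (an illustrative reimplementation, not the authors' code): approximate pairwise distances by shortest-path hop counts scaled by the radio range r, then recover coordinates with classical multidimensional scaling.

```python
import numpy as np
import networkx as nx  # assumed dependency for graph handling

def mds_map(G, r, d=2):
    """MDS-MAP sketch: hop-count distances scaled by the radio range, followed
    by classical MDS (double-centering plus a top-d eigendecomposition)."""
    n = G.number_of_nodes()
    hops = dict(nx.all_pairs_shortest_path_length(G))
    D = np.array([[hops[i].get(j, n) * r for j in range(n)] for i in range(n)])
    J = np.eye(n) - np.ones((n, n)) / n              # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                      # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:d]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))

# Toy usage on a random geometric graph over the unit square (connectivity only).
G = nx.random_geometric_graph(300, 0.15, seed=2)
X_hat = mds_map(G, r=0.15)   # estimated coordinates, recovered up to a rigid transformation
```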


International Symposium on Information Theory | 2010

Graph-constrained group testing

Mahdi Cheraghchi; Amin Karbasi; Soheil Mohajer; Venkatesh Saligrama

Non-adaptive group testing involves grouping arbitrary subsets of n items into different pools and identifying defective items based on tests obtained for each pool. Motivated by applications in network tomography, sensor networks, and infection propagation, we formulate non-adaptive group testing problems on graphs. Unlike conventional group testing problems, each group here must conform to the constraints imposed by a graph. For instance, items can be associated with vertices and each pool is any set of nodes that must be path connected. In this paper we associate a test with a random walk. In this context, conventional group testing corresponds to the special case of a complete graph on n vertices. For interesting classes of graphs we arrive at a rather surprising result, namely, that the number of tests required to identify d defective items is substantially similar to that required in conventional group testing problems, where no such constraints on pooling are imposed. Specifically, if T(n) corresponds to the mixing time of the graph G, we show that with m = O(d²T²(n) log(n/d)) non-adaptive tests, one can identify the defective items. Consequently, for the Erdős–Rényi random graph G(n, p), as well as expander graphs with constant spectral gap, it follows that m = O(d² log³ n) non-adaptive tests are sufficient to identify d defective items. We next consider a specific scenario that arises in network tomography and show that m = O(d³ log³ n) non-adaptive tests are sufficient to identify d defective items. We also consider noisy counterparts of the graph-constrained group testing problem and develop parallel results for these cases.


International Symposium on Information Theory | 2009

Support recovery in compressed sensing: An estimation theoretic approach

Amin Karbasi; Ali Hormati; Soheil Mohajer; Martin Vetterli

Compressed sensing (CS) deals with the reconstruction of sparse signals from a small number of linear measurements. One of the main challenges in CS is to find the support of a sparse signal from a set of noisy observations. In the CS literature, several information-theoretic bounds on the scaling law of the required number of measurements for exact support recovery have been derived, where the focus is mainly on random measurement matrices. In this paper, we investigate the support recovery problem from an estimation theory point of view, where no specific assumption is made on the underlying measurement matrix. By using the Hammersley-Chapman-Robbins (HCR) bound, we derive a fundamental lower bound on the performance of any unbiased estimator which provides necessary conditions for reliable l2-norm support recovery. We then analyze the optimal decoder to provide conditions under which the HCR bound is achievable. This leads to a set of sufficient conditions for reliable l2-norm support recovery.
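For intuition, the optimal decoder analyzed in such settings can be sketched as an exhaustive search over supports: fit the observation against every size-k column subset of the measurement matrix and keep the support with the smallest residual (toy code, only practical for very small problems; the matrix and noise level below are illustrative).

```python
import itertools
import numpy as np

def ml_support_decoder(y, A, k):
    """Exhaustive support decoder: least-squares fit y on every size-k column
    subset of A and return the support with the smallest residual."""
    best_support, best_residual = None, np.inf
    for support in itertools.combinations(range(A.shape[1]), k):
        As = A[:, support]
        x_hat, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = np.linalg.norm(y - As @ x_hat)
        if residual < best_residual:
            best_support, best_residual = support, residual
    return best_support

# Toy usage: n = 12, k = 2 keeps the enumeration tiny.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 12))
x = np.zeros(12)
x[[2, 7]] = [1.0, -1.5]
y = A @ x + 0.05 * rng.standard_normal(8)
print(ml_support_decoder(y, A, 2))   # typically (2, 7)
```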


Allerton Conference on Communication, Control, and Computing | 2009

Compressed sensing with probabilistic measurements: A group testing solution

Mahdi Cheraghchi; Ali Hormati; Amin Karbasi; Martin Vetterli

Detection of defective members of large populations has been widely studied in the statistics community under the name “group testing”, a problem which dates back to World War II when it was suggested for syphilis screening. There, the main interest is to identify a small number of infected people among a large population using collective samples. In viral epidemics, one way to acquire collective samples is by sending agents inside the population. While in classical group testing it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in this work we assume that the decoder possesses only partial knowledge about the sampling process. This assumption is justified by observing that in a viral sickness, there is a chance that an agent remains healthy despite having contact with an infected person. Therefore, the reconstruction method has to cope with two different types of uncertainty; namely, identification of the infected people and the partially unknown sampling procedure. In this work, by using a natural probabilistic model for “viral infections”, we design non-adaptive sampling procedures that allow successful identification of the infected population with overwhelming probability 1 − o(1). We propose both probabilistic and explicit design procedures that require a “small” number of agents to single out the infected individuals. More precisely, for a contamination probability p, the number of agents required by the probabilistic and explicit designs for identification of up to k infected members is bounded by m = O(k²(log n)/p²) and m = O(k²(log² n)/p²), respectively. In both cases, a simple decoder is able to successfully identify the infected population in time O(mn).


International Symposium on Information Theory | 2012

Multi-level error-resilient neural networks

Amir Hesam Salavati; Amin Karbasi

The problem of neural network association is to retrieve a previously memorized pattern from its noisy version using a network of neurons. An ideal neural network should combine three components simultaneously: a learning algorithm, a large pattern retrieval capacity, and resilience against noise. Prior work in this area usually improves one or two aspects at the cost of the third. Our work takes a step forward in closing this gap. More specifically, we show that by forcing natural constraints on the set of learned patterns, we can drastically improve the retrieval capacity of our neural network. Moreover, we devise a learning algorithm whose role is to learn those patterns satisfying the above-mentioned constraints. Finally, we show that our neural network can cope with a fair amount of noise.
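A heavily simplified illustration of the constraint idea (not the authors' multi-level architecture; the constraint matrix and update rule are hypothetical): if every memorized pattern lies in the null space of a constraint matrix W, a noisy pattern can be cleaned by greedily adjusting entries so that the constraint violation W x shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)

def recall(W, x_noisy, max_iters=50):
    """Toy constraint-based recall: memorized patterns satisfy W @ x ≈ 0, so we
    greedily apply ±1 changes to single entries to shrink the violation W @ x."""
    x = x_noisy.astype(float).copy()
    for _ in range(max_iters):
        r = W @ x
        if np.linalg.norm(r) < 1e-9:
            break
        best_j, best_s, best_norm = None, 0, np.linalg.norm(r)
        for j in range(len(x)):
            for s in (-1, 1):
                norm = np.linalg.norm(r + s * W[:, j])
                if norm < best_norm:
                    best_j, best_s, best_norm = j, s, norm
        if best_j is None:
            break
        x[best_j] += best_s
    return x

# Patterns live (approximately) in the null space of a random constraint matrix.
W = rng.standard_normal((20, 40))
_, _, Vt = np.linalg.svd(W)
x_true = Vt[-1] * 10                 # a vector with W @ x_true ≈ 0
x_noisy = x_true.copy()
x_noisy[3] += 1.0                    # additive noise on one entry
x_rec = recall(W, x_noisy)
print(np.linalg.norm(x_rec - x_true) < np.linalg.norm(x_noisy - x_true))  # True
```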


IEEE/ACM Transactions on Networking | 2013

Robust localization from incomplete local information

Amin Karbasi; Sewoong Oh

We consider the problem of localizing wireless devices in an ad hoc network embedded in a d-dimensional Euclidean space. Obtaining a good estimate of where wireless devices are located is crucial in wireless network applications including environment monitoring, geographic routing, and topology control. When the positions of the devices are unknown and only local distance information is given, we need to infer the positions from these local distance measurements. This problem is particularly challenging when we only have access to measurements that have limited accuracy and are incomplete. We consider the extreme case of this limitation on the available information, namely that only the connectivity information is available, i.e., we only know whether a pair of nodes is within a fixed detection range of each other or not, and no information is known about how far apart they are. Furthermore, to account for detection failures, we assume that even if a pair of devices is within the detection range, they fail to detect the presence of one another with some probability, and this probability of failure depends on how far apart those devices are. Given this limited information, we investigate the performance of a centralized positioning algorithm, MDS-MAP, introduced by Shang et al., and a distributed positioning algorithm, HOP-TERRAIN, introduced by Savarese et al. In particular, for a network consisting of n devices positioned randomly, we provide a bound on the resulting error for both algorithms. We show that the error is bounded, decreasing at a rate that is proportional to Rcritical/R, where Rcritical is the critical detection range at which the resulting random network starts to be connected, and R is the detection range of each device.


Measurement and Modeling of Computer Systems | 2010

Distributed sensor network localization from local connectivity: performance analysis for the HOP-TERRAIN algorithm

Amin Karbasi; Sewoong Oh

This paper addresses the problem of determining the node locations in ad-hoc sensor networks when only connectivity information is available. In previous work, we showed that the localization algorithm MDS-MAP proposed by Y. Shang et al. is able to localize sensors up to a bounded error decreasing at a rate inversely proportional to the radio range r. The main limitation of MDS-MAP is the assumption that the available connectivity information is processed in a centralized way. In this work we investigate the practically important question of whether similar performance guarantees can be obtained in a distributed setting. In particular, we analyze the performance of the HOP-TERRAIN algorithm proposed by C. Savarese et al. This algorithm can be seen as a distributed version of the MDS-MAP algorithm. More precisely, assume that the radio range r = o(1) and that the network consists of n sensors positioned randomly on a d-dimensional unit cube and d+1 anchors in general position. We show that when only connectivity information is available, for every unknown node i, the Euclidean distance between the estimate x̂i and the correct position xi is bounded by ||x̂i − xi|| < r0/r + o(1), where r0 = Cd (log n / n)^(1/d) for some constant Cd that depends only on d. Furthermore, we illustrate that a similar bound holds for the range-based model, when approximate measurements of the distances are provided.
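A HOP-TERRAIN-style sketch (illustrative only; anchors is assumed to be a dict mapping anchor node ids to their known coordinates): each node estimates its distance to every anchor as hop count times the radio range, then solves a linearized lateration least-squares problem against the anchor positions.

```python
import numpy as np
import networkx as nx  # assumed dependency for graph handling

def hop_terrain(G, anchors, r):
    """Estimate node positions from hop counts to anchors (distance ≈ hops * r),
    then linearize the lateration equations and solve them by least squares."""
    anchor_ids = list(anchors)
    anchor_pos = np.array([anchors[a] for a in anchor_ids])
    hops = {a: nx.single_source_shortest_path_length(G, a) for a in anchor_ids}
    est = {}
    for v in G.nodes:
        d = np.array([hops[a].get(v, G.number_of_nodes()) * r for a in anchor_ids])
        # Subtract the first anchor's equation to linearize ||x - a_i||^2 = d_i^2.
        A = 2 * (anchor_pos[1:] - anchor_pos[0])
        b = (d[0] ** 2 - d[1:] ** 2
             + (anchor_pos[1:] ** 2).sum(axis=1) - (anchor_pos[0] ** 2).sum())
        est[v], *_ = np.linalg.lstsq(A, b, rcond=None)
    return est

# Toy usage in d = 2 with three anchors whose positions are known.
G = nx.random_geometric_graph(300, 0.15, seed=3)
pos = nx.get_node_attributes(G, "pos")
anchors = {i: np.array(pos[i]) for i in [0, 1, 2]}
estimates = hop_terrain(G, anchors, r=0.15)
```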


International Colloquium on Automata, Languages and Programming | 2011

Content search through comparisons

Amin Karbasi; Stratis Ioannidis; Laurent Massoulié

We study the problem of navigating through a database of similar objects using comparisons under heterogeneous demand, a problem closely related to small-world network design. We show that, under heterogeneous demand, the small-world network design problem is NP-hard. Given the above negative result, we propose a novel mechanism for small-world network design and provide an upper bound on its performance under heterogeneous demand. The above mechanism has a natural equivalent in the context of content search through comparisons, again under heterogeneous demand; we use this to establish both upper and lower bounds on content search through comparisons.
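A generic illustration of comparison-based search (not the paper's design; the oracle, the neighbor graph, and all parameters are illustrative): a comparison oracle reports which of two objects is closer to the hidden target, and the searcher greedily moves to whichever neighbor the oracle prefers.

```python
import numpy as np

rng = np.random.default_rng(0)

def closer(target, a, b, points):
    """Comparison oracle: which of a, b is closer to the (hidden) target?"""
    da = np.linalg.norm(points[a] - points[target])
    db = np.linalg.norm(points[b] - points[target])
    return a if da <= db else b

def greedy_search(points, neighbors, start, target, max_steps=100):
    """Greedy navigation: move to the neighbor the oracle prefers until no
    neighbor improves on the current object."""
    current = start
    for _ in range(max_steps):
        best = current
        for nb in neighbors[current]:
            best = closer(target, best, nb, points)
        if best == current:
            return current
        current = best
    return current

# Toy usage: objects are random points, each linked to its 5 nearest neighbors.
points = rng.random((200, 2))
dists = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
neighbors = {i: list(np.argsort(dists[i])[1:6]) for i in range(200)}
print(greedy_search(points, neighbors, start=0, target=123))
```

On a plain nearest-neighbor graph this greedy walk can stall in a local minimum far from the target, which is exactly the failure mode that motivates adding small-world links, the design problem studied above.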

Collaboration


Dive into Amin Karbasi's collaboration.

Top Co-Authors

Amir Hesam Salavati
École Polytechnique Fédérale de Lausanne

Martin Vetterli
École Polytechnique Fédérale de Lausanne

Amin Shokrollahi
École Polytechnique Fédérale de Lausanne

Moran Feldman
École Polytechnique Fédérale de Lausanne