Tom Gur
Weizmann Institute of Science
Publications
Featured research published by Tom Gur.
American Journal of Human Genetics | 2010
Noah Zaitlen; Bogdan Pasaniuc; Tom Gur; Elad Ziv; Eran Halperin
Genome-wide association studies have been performed extensively in the last few years, resulting in many new discoveries of genomic regions that are associated with complex traits. It is often the case that a SNP found to be associated with the condition is not the causal SNP, but a proxy to it as a result of linkage disequilibrium. For the identification of the actual causal SNP, fine-mapping follow-up is performed, either with the use of dense genotyping or by sequencing of the region. In either case, if the causal SNP is in high linkage disequilibrium with other SNPs, the fine-mapping procedure will require a very large sample size for the identification of the causal SNP. Here, we show that by leveraging genetic variability across populations, we significantly increase the localization success rate (LSR) for a causal SNP in a follow-up study that involves multiple populations, as compared to a study that involves only one population. Thus, the average power for detection of the causal variant will be higher in a joint analysis than in studies in which only one population is analyzed at a time. On the basis of this observation, we developed a framework to efficiently search for a follow-up study design: our framework searches for the best combination of populations from a pool of available populations to maximize the LSR for detection of a causal variant. This framework and its accompanying software can be used to considerably enhance the power of fine-mapping studies.
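The design search described above can be sketched as a brute-force maximization over population subsets. The `estimate_lsr` scoring below is a made-up placeholder (it rewards cross-population allele-frequency differences, which tend to break linkage disequilibrium), not the paper's statistical model; the population labels and frequencies are illustrative.

```python
from itertools import combinations

def estimate_lsr(freqs_by_pop, selected):
    """Toy proxy for the localization success rate (LSR): population pairs
    whose allele frequencies at the candidate SNPs differ more are assumed
    to disentangle the causal SNP from its LD proxies better.
    Illustrative scoring only -- not the paper's LSR computation."""
    if len(selected) < 2:
        return 0.0
    score = 0.0
    for a, b in combinations(selected, 2):
        # Sum of absolute allele-frequency differences across candidate SNPs.
        score += sum(abs(x - y) for x, y in zip(freqs_by_pop[a], freqs_by_pop[b]))
    return score / len(selected)

def best_design(freqs_by_pop, k):
    """Exhaustively pick the k-population combination maximizing the toy LSR."""
    pops = list(freqs_by_pop)
    return max(combinations(pops, k), key=lambda s: estimate_lsr(freqs_by_pop, s))

# Hypothetical alternate-allele frequencies at three candidate SNPs.
freqs = {
    "CEU": [0.10, 0.40, 0.35],
    "YRI": [0.60, 0.15, 0.50],
    "CHB": [0.12, 0.42, 0.33],
}
print(best_design(freqs, 2))  # → ('YRI', 'CHB')
```

The exhaustive search is only feasible for a small pool of candidate populations; the point is the design question (which subset to genotype), not the scoring.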
Genetic Epidemiology | 2010
Bogdan Pasaniuc; Ram Avinery; Tom Gur; Christine F. Skibola; Paige M. Bracci; Eran Halperin
An important component in the analysis of genome-wide association studies involves the imputation of genotypes that have not been measured directly in the studied samples. The imputation procedure uses the linkage disequilibrium (LD) structure in the population to infer the genotype of an unobserved single nucleotide polymorphism. The LD structure is normally learned from a dense genotype map of a reference population that matches the studied population. In many instances there is no reference population that exactly matches the studied population, and a natural question arises as to how to choose the reference population for the imputation. Here we present a coalescent-based method that addresses this issue. In contrast to the current paradigm of imputation methods, our method assigns a different reference dataset for each sample in the studied population, and for each region in the genome. This allows the flexibility to account for the diversity within populations, as well as across populations. Furthermore, because our approach treats each region in the genome separately, our method is suitable for the imputation of recently admixed populations. We evaluated our method across a large set of populations and found that our choice of reference dataset considerably improves the accuracy of imputation, especially for regions with low LD, for populations without an available reference population, and for admixed populations such as the Hispanic population. Our method is generic and can potentially be incorporated into any of the available imputation methods as an add-on. Genet. Epidemiol. 34:773-782, 2010.
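The per-sample, per-region assignment described above can be sketched as follows. The similarity score here (L1 distance between genotype dosages and panel allele frequencies) and all panel names and numbers are illustrative assumptions; the paper's method uses a coalescent model, not this toy distance.

```python
def pick_reference(sample_window, panel_freqs_by_name):
    """For one genomic window, choose the reference panel whose alternate-allele
    frequencies best match the sample's genotypes (toy L1 distance on dosage/2;
    the paper's method uses a coalescent model instead)."""
    def dist(freqs):
        return sum(abs(g / 2 - f) for g, f in zip(sample_window, freqs))
    return min(panel_freqs_by_name, key=lambda name: dist(panel_freqs_by_name[name]))

def assign_references(sample_genotypes, panels, window=2):
    """Assign a (possibly different) reference panel to each genomic window,
    which accommodates admixed samples whose ancestry changes along the
    chromosome."""
    choices = []
    for start in range(0, len(sample_genotypes), window):
        win = slice(start, start + window)
        local = {name: freqs[win] for name, freqs in panels.items()}
        choices.append(pick_reference(sample_genotypes[win], local))
    return choices

# Hypothetical alternate-allele frequencies at four SNPs for two panels.
panels = {"EUR": [0.1, 0.2, 0.8, 0.9], "AFR": [0.7, 0.6, 0.2, 0.1]}
sample = [2, 2, 2, 2]   # genotypes: copies of the alternate allele (0/1/2)
print(assign_references(sample, panels))  # → ['AFR', 'EUR']
```

Note how the same sample is matched to different panels in different windows, which is the behavior that makes region-by-region assignment suitable for admixed genomes.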
Conference on Innovations in Theoretical Computer Science | 2015
Tom Gur; Ron D. Rothblum
We initiate a study of non-interactive proofs of proximity. These proof-systems consist of a verifier that wishes to ascertain the validity of a given statement, using a short (sublinear length) explicitly given proof, and a sublinear number of queries to its input. Since the verifier cannot even read the entire input, we only require it to reject inputs that are far from being valid. Thus, the verifier is only assured of the proximity of the statement to a correct one. Such proof-systems can be viewed as the NP (or more accurately MA) analogue of property testing. We explore both the power and limitations of non-interactive proofs of proximity. We show that such proof-systems can be exponentially stronger than property testers, but are exponentially weaker than the interactive proofs of proximity studied by Rothblum, Vadhan and Wigderson (STOC 2013). In addition, we show a natural problem that has a full and (almost) tight multiplicative trade-off between the length of the proof and the verifier's query complexity. On the negative side, we also show that there exist properties for which even a linearly-long (non-interactive) proof of proximity cannot significantly reduce the query complexity.
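The notion sketched in the abstract can be written out as follows. The notation and the 2/3 thresholds are the standard property-testing convention, not quoted from the paper:

```latex
A property $\Pi \subseteq \{0,1\}^n$ admits a non-interactive (MA) proof of
proximity with proof length $p$ and query complexity $q$ if there is a
probabilistic verifier $V$ that, given an explicit proof $\pi \in \{0,1\}^p$
and oracle access to the input $x \in \{0,1\}^n$, makes at most $q$ queries
to $x$ and satisfies:
\begin{itemize}
  \item \textbf{Completeness:} if $x \in \Pi$, then there exists $\pi$ such
        that $\Pr[V^{x}(\pi) \text{ accepts}] \ge 2/3$;
  \item \textbf{Soundness:} if $x$ is $\varepsilon$-far from $\Pi$ (i.e., $x$
        disagrees with every $y \in \Pi$ on more than $\varepsilon n$
        coordinates), then for every $\pi$,
        $\Pr[V^{x}(\pi) \text{ rejects}] \ge 2/3$.
\end{itemize}
```

Setting $p = 0$ recovers an ordinary property tester, which is why such proof-systems can be viewed as the MA analogue of property testing.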
Information & Computation | 2015
Tom Gur; Ran Raz
We study the power of Arthur-Merlin probabilistic proof systems in the data stream model. We show a canonical AM streaming algorithm for a class of data stream problems. The algorithm offers a tradeoff between the length of the proof and the space complexity that is needed to verify it. As an application, we give an AM streaming algorithm for the Distinct Elements problem. Given a data stream of length m over an alphabet of size n, the algorithm uses Õ(s) space and a proof of size Õ(w), for every s, w such that s·w ≥ n (where Õ hides a polylog(m, n) factor). We also prove a lower bound, showing that every MA streaming algorithm for the Distinct Elements problem that uses s bits of space and a proof of size w satisfies s·w = Ω(n). Furthermore, the lower bound also holds for approximating the number of distinct elements within a multiplicative factor of 1 ± 1/√n. As part of the proof of the lower bound for the Distinct Elements problem, we show a new lower bound of Ω(√n) on the MA communication complexity of the Gap Hamming Distance problem, and prove its tightness.
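To see why a proof of length w can shrink the verifier's space to about n/w, consider the following toy bucket scheme. This is emphatically not the paper's protocol (which is sound against every cheating prover); here a lie in an unchecked bucket goes undetected, so this only illustrates the s·w resource split. All names and numbers are illustrative.

```python
import random

def honest_proof(stream, n, w):
    """Prover: partition the universe [0, n) into w buckets and report the
    number of distinct elements in each bucket (proof of ~ w counters)."""
    buckets = [set() for _ in range(w)]
    for x in stream:
        buckets[x * w // n].add(x)
    return [len(b) for b in buckets]

def verify(stream, n, w, proof, rng=random):
    """Verifier: spot-check one random bucket exactly, using ~ n/w bits of
    space (a bitmap over that bucket), then trust the remaining counters.
    Toy soundness only: a cheating prover escapes with probability 1 - 1/w,
    unlike the paper's canonical AM streaming algorithm."""
    j = rng.randrange(w)
    seen = set()                    # stands in for an (n // w)-bit bitmap
    for x in stream:
        if x * w // n == j:
            seen.add(x)
    if len(seen) != proof[j]:
        return None                 # reject: the prover lied about bucket j
    return sum(proof)               # accept the claimed distinct count

n, w = 16, 4
stream = [1, 5, 1, 9, 14, 5, 3]
proof = honest_proof(stream, n, w)
print(verify(stream, n, w, proof))  # → 5 (the true number of distinct elements)
```

The verifier's working memory scales with the bucket size n/w while the proof scales with w, matching the shape of the s·w ≥ n tradeoff in the abstract.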
Computational Complexity | 2018
Clément L. Canonne; Tom Gur
Adaptivity is known to play a crucial role in property testing. In particular, there exist properties for which there is an exponential gap between the power of adaptive testing algorithms, wherein each query may be determined by the answers received to prior queries, and their non-adaptive counterparts, in which all queries are independent of answers obtained from previous queries. In this work, we investigate the role of adaptivity in property testing at a finer level. We first quantify the degree of adaptivity of a testing algorithm by considering the number of "rounds of adaptivity" it uses. More accurately, we say that a tester is k-(round) adaptive if it makes queries in k+1 rounds, where the queries in each round may depend on the answers received in the previous rounds.
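The round structure of a k-(round) adaptive tester can be made concrete with a small harness: queries issued in a round may depend on all answers gathered in earlier rounds, but not on answers from the same round. The oracle, query schedule, and all names below are illustrative, not from the paper.

```python
def run_rounds(oracle, round_fns):
    """Drive a k-(round) adaptive tester: round_fns holds k+1 round
    strategies, and each strategy sees the answers from all earlier
    rounds (but not from its own round) when choosing its queries."""
    answers = []
    for make_queries in round_fns:
        queries = make_queries(answers)            # adaptivity happens here
        answers.extend(oracle(q) for q in queries)  # answered in one batch
    return answers

# Example: a 1-adaptive (i.e., two-round) schedule over a bit-string oracle.
x = [1, 0, 1, 1, 0, 1]
oracle = lambda i: x[i]
rounds = [
    lambda ans: [0, 5],                            # round 1: fixed probes
    lambda ans: [2] if ans[0] == ans[1] else [3],  # round 2 depends on round 1
]
print(run_rounds(oracle, rounds))  # → [1, 1, 1]
```

A 0-adaptive tester is exactly a non-adaptive one (a single round, fixed in advance), which is the baseline the hierarchy in this work refines.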
LIPIcs - Leibniz International Proceedings in Informatics | 2017
Eric Blais; Clément L. Canonne; Tom Gur
Conference on Innovations in Theoretical Computer Science | 2018
Tom Gur; Govind Ramnarayan; Ron D. Rothblum
International Colloquium on Automata, Languages and Programming | 2015
Oded Goldreich; Tom Gur; Ron D. Rothblum
Conference on Computational Complexity | 2015
Oded Goldreich; Tom Gur; Ilan Komargodski
Information & Computation | 2018
Oded Goldreich; Tom Gur; Ron D. Rothblum