Publications


Featured research published by Tzvika Hartman.


IEEE/ACM Transactions on Computational Biology and Bioinformatics | 2006

A 1.375-Approximation Algorithm for Sorting by Transpositions

Isaac Elias; Tzvika Hartman

Sorting permutations by transpositions is an important problem in genome rearrangements. A transposition is a rearrangement operation in which a segment is cut out of the permutation and pasted in a different location. The complexity of this problem is still open and it has been a 10-year-old open problem to improve the best known 1.5-approximation algorithm. In this paper, we provide a 1.375-approximation algorithm for sorting by transpositions. The algorithm is based on a new upper bound on the diameter of 3-permutations. In addition, we present some new results regarding the transposition diameter: We improve the lower bound for the transposition diameter of the symmetric group and determine the exact transposition diameter of simple permutations.
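To make the operation concrete (this illustration is not taken from the paper), the sketch below applies a single transposition to a 0-indexed list, modelling it as the exchange of two adjacent blocks, which is the same as cutting a segment out and pasting it elsewhere:

```python
def transposition(pi, i, j, k):
    """Exchange the adjacent blocks pi[i:j] and pi[j:k] (0-indexed, i < j < k).
    Equivalently: cut the segment pi[i:j] out and paste it back just before index k."""
    return pi[:i] + pi[j:k] + pi[i:j] + pi[k:]

# Example: one transposition sorts [3, 4, 1, 2].
print(transposition([3, 4, 1, 2], 0, 2, 4))  # [1, 2, 3, 4]
```

Sorting by transpositions then asks for the minimum number of such operations needed to transform a given permutation into the identity.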


Combinatorial Pattern Matching | 2003

A simpler 1.5-approximation algorithm for sorting by transpositions

Tzvika Hartman

An important problem in genome rearrangements is sorting permutations by transpositions. Its complexity is still open, and two rather complicated 1.5-approximation algorithms for sorting linear permutations are known (Bafna and Pevzner, 96 and Christie, 98). In this paper, we observe that the problem of sorting circular permutations by transpositions is equivalent to the problem of sorting linear permutations by transpositions. Hence, all algorithms for sorting linear permutations by transpositions can be used to sort circular permutations. Our main result is a new 1.5-approximation algorithm, which is considerably simpler than the previous ones, and achieves running time which is equal to the best known. Moreover, the analysis of the algorithm is significantly less involved, and provides a good starting point for studying related open problems.
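A rough sketch of one direction of that equivalence (an informal illustration, not the paper's construction): a circular permutation can be anchored at a fixed element and read off as a linear permutation, and transpositions that never move the anchor correspond to transpositions on the linear permutation.

```python
def circular_to_linear(circ, anchor):
    """Rotate the circular permutation so that `anchor` comes first, then drop it.
    Sorting the remaining linear permutation by transpositions corresponds to
    sorting the circular one, since the anchor never needs to move."""
    i = circ.index(anchor)
    rotated = circ[i:] + circ[:i]
    return rotated[1:]

print(circular_to_linear([3, 0, 2, 1, 4], anchor=0))  # [2, 1, 4, 3]
```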


Information & Computation | 2006

A simpler and faster 1.5-approximation algorithm for sorting by transpositions

Tzvika Hartman; Ron Shamir

An important problem in genome rearrangements is sorting permutations by transpositions. The complexity of the problem is still open, and two rather complicated 1.5-approximation algorithms for sorting linear permutations are known (Bafna and Pevzner, 98 and Christie, 99). The fastest known algorithm is the quadratic algorithm of Bafna and Pevzner. In this paper, we observe that the problem of sorting circular permutations by transpositions is equivalent to the problem of sorting linear permutations by transpositions. Hence, all algorithms for sorting linear permutations by transpositions can be used to sort circular permutations. Our main result is a new \(O(n^{3/2} \sqrt{\log{n}})\) 1.5-approximation algorithm, which is considerably simpler than the previous ones.


Journal of Computer and System Sciences | 2005

A 1.5-approximation algorithm for sorting by transpositions and transreversals

Tzvika Hartman; Roded Sharan

One of the most promising ways to determine evolutionary distance between two organisms is to compare the order of appearance of orthologous genes in their genomes. The resulting genome rearrangement problem calls for finding a shortest sequence of rearrangement operations that sorts one genome into the other. In this paper we provide a 1.5-approximation algorithm for the problem of sorting by transpositions and transreversals, improving on a five-year-old 1.75 ratio for this problem. Our algorithm is also faster than current approaches and requires \(O(n^{3/2} \sqrt{\log{n}})\) time for n genes.
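The abstract does not spell out the operations; informally, a transposition moves a segment, and a transreversal moves a segment and reverses it in the process. A minimal sketch under that reading (not taken from the paper):

```python
def transreversal(pi, i, j, k):
    """Cut the segment pi[i:j] (0-indexed, i < j < k), reverse it, and paste it
    back just before index k: a transposition combined with a reversal of the
    moved block."""
    return pi[:i] + pi[j:k] + pi[i:j][::-1] + pi[k:]

print(transreversal([1, 4, 3, 2, 5], 1, 3, 4))  # [1, 2, 3, 4, 5]
```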


Workshop on Algorithms in Bioinformatics | 2005

A 1.375-approximation algorithm for sorting by transpositions

Isaac Elias; Tzvika Hartman

Sorting permutations by transpositions is an important problem in genome rearrangements. A transposition is a rearrangement operation in which a segment is cut out of the permutation and pasted in a different location. The complexity of this problem is still open and it has been a ten-year-old open problem to improve the best known 1.5-approximation algorithm. In this paper we provide a 1.375-approximation algorithm for sorting by transpositions. The algorithm is based on a new upper bound on the diameter of 3-permutations. In addition, we present some new results regarding the transposition diameter: We improve the lower bound for the transposition diameter of the symmetric group, and determine the exact transposition diameter of 2-permutations and simple permutations.


SIAM Journal on Computing | 2009

On the Cost of Interchange Rearrangement in Strings

Amihood Amir; Tzvika Hartman; Oren Kapah; Avivit Levy; Ely Porat

Consider the following optimization problem: given two strings over the same alphabet, transform one into another by a succession of interchanges of two elements. In each interchange the two participating elements exchange positions. An interchange is given a weight that depends on the distance in the string between the two exchanged elements. The object is to minimize the total weight of the interchanges. This problem is a generalization of a classical problem on permutations (where every element appears once). The generalization considers general strings with possibly repeating elements, and a function assigning weights to the interchanges. The generalization to general strings (with unit weights) was mentioned by Cayley in the 19th century, and its complexity has been an open question since. We solve this open problem and consider various weight functions as well.
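For the classical unit-weight permutation case mentioned above, the minimum number of interchanges has a well-known closed form: n minus the number of cycles in the permutation's cycle decomposition. A small sketch (not from the paper, which treats general strings and weighted interchanges):

```python
def min_interchanges(perm):
    """Minimum number of unit-cost interchanges (swaps of two elements) needed to
    sort a permutation of 0..n-1: n minus the number of cycles in its cycle
    decomposition."""
    n = len(perm)
    seen = [False] * n
    cycles = 0
    for start in range(n):
        if not seen[start]:
            cycles += 1
            i = start
            while not seen[i]:
                seen[i] = True
                i = perm[i]
    return n - cycles

print(min_interchanges([2, 0, 1, 4, 3]))  # 3: a 3-cycle needs 2 swaps, a 2-cycle needs 1
```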


Workshop on Algorithms in Bioinformatics | 2004

A 1.5-Approximation Algorithm for Sorting by Transpositions and Transreversals

Tzvika Hartman; Roded Sharan

One of the most promising ways to determine evolutionary distance between two organisms is to compare the order of appearance of orthologous genes in their genomes. The resulting genome rearrangement problem calls for finding a shortest sequence of rearrangement operations that sorts one genome into the other. In this paper we provide a 1.5-approximation algorithm for the problem of sorting by transpositions and transreversals, improving on a five-year-old 1.75 ratio for this problem. Our algorithm is also faster than current approaches and requires \(O(n^{3/2} \sqrt{\log{n}})\) time for n genes.


Theoretical Computer Science | 2008

Generalized LCS

Amihood Amir; Tzvika Hartman; Oren Kapah; B. Riva Shalom; Dekel Tsur

The Longest Common Subsequence (LCS) is a well-studied problem with a wide range of applications, motivated by the comparison of strings. It has long been of interest to devise a similar measure for comparing higher-dimensional objects and more complex structures. In this paper we study the Longest Common Substructure of two matrices and show that this problem is NP-hard. We also study the Longest Common Subforest problem for multiple trees, including a constrained version. We show NP-hardness for k > 2 unordered trees in the constrained LCS. We also give polynomial-time algorithms for ordered trees and prove a lower bound for any decomposition strategy for k trees.
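For reference, the one-dimensional string case that these generalizations start from is the textbook dynamic program; a minimal sketch (not from the paper, which deals with matrices and trees):

```python
def lcs_length(a, b):
    """Length of the Longest Common Subsequence of two strings via the classical
    O(len(a) * len(b)) dynamic program."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("AGCAT", "GAC"))  # 2 (e.g. "GA" or "GC")
```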


Research in Computational Molecular Biology | 2002

Handling long targets and errors in sequencing by hybridization

Eran Halperin; Shay Halperin; Tzvika Hartman; Ron Shamir

Sequencing by hybridization (SBH) is a DNA sequencing technique in which the sequence is reconstructed using its k-mer content. This content, called the spectrum of the sequence, is obtained by hybridization to a universal DNA array. Standard universal arrays contain all k-mers for some fixed k, typically 8 to 10. Currently, in spite of its promise and elegance, SBH is not competitive with standard gel-based sequencing methods. This is due to two main reasons: lack of tools to handle realistic levels of hybridization errors, and an inherent limitation on the length of sequence uniquely reconstructible by standard universal arrays. In this paper we deal with both problems. We introduce a simple polynomial reconstruction algorithm which can be applied to spectra from standard arrays and has provable performance in the presence of both false negative and false positive errors. We also propose a novel design of chips containing universal bases that differs from the one proposed by Preparata et al. We give a simple algorithm that uses spectra from such chips to reconstruct, with high probability, random sequences of length lower only by a squared log factor compared to the information-theoretic bound. Our algorithm is very robust to errors and has provable performance even if there are both false negative and false positive errors. Simulations indicate that its sensitivity to errors is also very small in practice.
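As a small illustration of the k-mer content the abstract refers to (an idealized, error-free sketch, not the paper's model with hybridization errors or universal bases):

```python
def spectrum(seq, k):
    """The (idealized) spectrum of a sequence: the set of all of its length-k
    substrings, i.e. what a universal array of all k-mers would report under
    perfect hybridization."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

print(sorted(spectrum("ACGTACG", 3)))  # ['ACG', 'CGT', 'GTA', 'TAC']
```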


Random Structures and Algorithms | 2003

On the distribution of the number of roots of polynomials and explicit weak designs

Tzvika Hartman; Ran Raz

Weak designs were defined in R. Raz, O. Reingold, and S. Vadhan [Extracting all the randomness and reducing the error in Trevisan's extractors, Proc 31st ACM Symp Theory of Computing, Atlanta, GA, May 1999, to appear in J Comput System Sci Special Issue on STOC 99] and are used there in constructions of extractors. Roughly speaking, a weak design is a collection of subsets satisfying some near-disjointness properties. Constructions of weak designs with certain parameters are given in Raz et al. These constructions are explicit in the sense that they require time and space polynomial in the number of subsets. However, they require time and space polynomial in the number of subsets even when needed to output only one specific subset out of the collection. Hence, the constructions are not explicit in a stronger sense. In this work we provide constructions of weak designs (with parameters similar to those of Raz et al.) that can be carried out in space logarithmic in the number of subsets. Moreover, our constructions are explicit even in a stronger sense: given an index to a subset, we output the specified subset in time and space polynomial in the size of the index. Using our constructions, we obtain extractors similar in parameters to some of those given in Raz et al., and that can be evaluated in logarithmic space. Our main construction is algebraic. In order to prove the properties of weak designs, we prove some algebro-combinatorial lemmas that may be interesting in their own right. These lemmas concern the number of roots of polynomials over finite fields. In particular, we prove that the number of polynomials (over any finite field) with k roots vanishes exponentially in k. In other words, the number of roots of a random polynomial is not only bounded by its degree (a well-known fact), but is furthermore concentrated exponentially around its expectation (which is 1). Our lemmas are proved by algebro-combinatorial arguments. The main lemma is also proved by a probabilistic argument.
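A tiny brute-force illustration of the root-count statement (not from the paper): for a small prime p and degree, enumerate all monic polynomials over GF(p) and tally how many have exactly k roots; the counts fall off sharply as k grows.

```python
from collections import Counter
from itertools import product

def root_count_distribution(p, degree):
    """For each k, count how many monic degree-`degree` polynomials over GF(p)
    have exactly k distinct roots in GF(p) (brute force, small p only)."""
    dist = Counter()
    for coeffs in product(range(p), repeat=degree):  # lower-order coefficients; leading coefficient is 1
        roots = sum(
            1 for x in range(p)
            if (pow(x, degree, p) + sum(c * pow(x, e, p) for e, c in enumerate(coeffs))) % p == 0
        )
        dist[roots] += 1
    return dist

print(root_count_distribution(p=7, degree=4))  # counts of polynomials with 0, 1, 2, ... roots
```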

Collaboration


Dive into Tzvika Hartman's collaborations.

Top Co-Authors

Ran Raz
Weizmann Institute of Science

Amihood Amir
Johns Hopkins University

Anne Bergeron
Université du Québec à Montréal

Karine St-Onge
Weizmann Institute of Science