Vladimir Makarenkov
Université du Québec à Montréal
Publications
Featured research published by Vladimir Makarenkov.
Nucleic Acids Research | 2012
Alix Boc; Alpha Boubacar Diallo; Vladimir Makarenkov
T-REX (Tree and reticulogram REConstruction) is a web server dedicated to the reconstruction of phylogenetic trees and reticulation networks, and to the inference of horizontal gene transfer (HGT) events. T-REX includes several popular bioinformatics applications such as MUSCLE, MAFFT, Neighbor Joining, NINJA, BioNJ, PhyML, RAxML, a random phylogenetic tree generator and some well-known sequence-to-distance transformation models. It also comprises fast and effective methods for inferring phylogenetic trees from complete and incomplete distance matrices as well as for reconstructing reticulograms and HGT networks, including the detection and validation of complete and partial gene transfers, inference of consensus HGT scenarios and interactive HGT identification, developed by the authors. The included methods allow for validating and visualizing phylogenetic trees and networks, which can be built from distance or sequence data. The web server is available at: www.trex.uqam.ca.
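The sequence-to-distance transformation models mentioned above convert observed sequence dissimilarity into an evolutionary distance that the tree-building methods can use. As a minimal illustration of one such well-known model (not T-REX's own code; the function name and gap handling are illustrative choices), here is the classical Jukes-Cantor correction:

```python
import math

def jukes_cantor_distance(seq1: str, seq2: str) -> float:
    """Jukes-Cantor (JC69) distance between two aligned DNA sequences.

    p is the observed proportion of differing sites; the JC69 correction is
    d = -(3/4) * ln(1 - (4/3) * p). Columns containing non-ACGT symbols are
    skipped here for simplicity (an illustrative choice, not a T-REX convention).
    """
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    compared = differing = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a in "ACGT" and b in "ACGT":
            compared += 1
            if a != b:
                differing += 1
    if compared == 0:
        raise ValueError("no comparable sites")
    p = differing / compared
    if p >= 0.75:  # correction undefined when substitutions are saturated
        return float("inf")
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# Example: two short aligned sequences differing at one of ten sites
print(jukes_cantor_distance("ACGTACGTAC", "ACGTACGTTC"))
```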
Bioinformatics | 2001
Vladimir Makarenkov
T-REX (tree and reticulogram reconstruction) is an application to reconstruct phylogenetic trees and reticulation networks from distance matrices. The application includes a number of tree-fitting methods, such as NJ, UNJ and ADDTREE, which have been very popular in phylogenetic analysis. At the same time, the software comprises several new methods of phylogenetic analysis, such as tree reconstruction using weights, tree inference from incomplete distance matrices, and modeling of a reticulation network for a collection of objects or species. T-REX also allows the user to visualize the obtained tree or network structures using Hierarchical, Radial or Axial types of tree drawing and to manipulate them interactively. AVAILABILITY T-REX is a freeware package available online at: http://www.fas.umontreal.ca/biol/casgrain/en/labo/t-rex
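Among the distance-based methods listed, Neighbor Joining works by repeatedly joining the pair of taxa that minimizes a Q-criterion computed from the current distance matrix. The sketch below shows only that pair-selection step on an illustrative matrix; it is not T-REX code.

```python
import numpy as np

def nj_pair_to_join(d: np.ndarray) -> tuple[int, int]:
    """Return the pair (i, j) minimizing the Neighbor Joining Q-criterion.

    Q(i, j) = (n - 2) * d(i, j) - sum_k d(i, k) - sum_k d(j, k)
    """
    n = d.shape[0]
    row_sums = d.sum(axis=1)
    best, best_q = None, np.inf
    for i in range(n):
        for j in range(i + 1, n):
            q = (n - 2) * d[i, j] - row_sums[i] - row_sums[j]
            if q < best_q:
                best, best_q = (i, j), q
    return best

# Illustrative 4-taxon distance matrix (symmetric, zero diagonal)
d = np.array([[0, 5, 9, 9],
              [5, 0, 10, 10],
              [9, 10, 0, 8],
              [9, 10, 8, 0]], dtype=float)
print(nj_pair_to_join(d))  # (0, 1) for this matrix (ties resolved by scan order)
```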
Ecology | 2002
Vladimir Makarenkov; Pierre Legendre
Among the various forms of canonical analysis available in the statistical literature, RDA (redundancy analysis) and CCA (canonical correspondence analysis) have become instruments of choice for ecological research because they recognize different roles for the explanatory and response data tables. Data table Y contains the response variables (e.g., species data) while data table X contains the explanatory variables. RDA is an extension of multiple linear regression; it uses a linear model of relationship between the variables in X and Y. In CCA, the response variables are chi-square transformed as the initial step, but the relationship between the transformed response data and the explanatory variables in X is still assumed to be linear. There is no special reason why nature should linearly relate changes in species assemblages to changes in environmental variables. When modeling ecological processes, to assume linearity is unrealistic in most instances and is only done because more appropriate methods of analysis are not available. We propose two empirical methods of canonical analysis based on polynomial regression to do away with the assumption of linearity in modeling the relationships between the variables in X and Y. They are called polynomial RDA and polynomial CCA, respectively, and may be viewed as alternatives to classical linear RDA and CCA. Because the analysis uses nonlinear functions of the explanatory variables, new ways of representing these variables in biplot diagrams have been developed. The use of these methods is demonstrated on real data sets and using simulations. In the examples, the new techniques produced a noticeable increase in the amount of variation of Y accounted for by the model, compared to standard linear RDA and CCA. Freeware to carry out the new analyses is available in ESAs Electronic Data Archive, Ecological Archives.
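Classical RDA amounts to a multivariate linear regression of Y on X followed by a PCA of the fitted values; the polynomial variant replaces the linear predictors with polynomial terms of X. The toy sketch below illustrates that general idea with NumPy; the degree-2 terms (without interaction products), the function name and the simulated data are illustrative assumptions, and this is not the authors' freeware.

```python
import numpy as np

def polynomial_rda(Y: np.ndarray, X: np.ndarray, degree: int = 2):
    """Toy polynomial RDA: regress Y on polynomial terms of X, then PCA of fitted Y.

    Returns (site_scores, explained_variance_ratio). Variables are centred only;
    interaction terms are omitted for brevity.
    """
    Yc = Y - Y.mean(axis=0)
    # Polynomial design matrix: intercept plus x, x^2, ..., x^degree for each column of X.
    cols = [np.ones(len(X))]
    for j in range(X.shape[1]):
        for p in range(1, degree + 1):
            cols.append(X[:, j] ** p)
    Xp = np.column_stack(cols)
    # Multivariate least-squares regression of Y on the polynomial predictors.
    B, *_ = np.linalg.lstsq(Xp, Yc, rcond=None)
    Y_fit = Xp @ B
    # PCA of the fitted values yields the constrained (canonical) axes.
    U, s, Vt = np.linalg.svd(Y_fit - Y_fit.mean(axis=0), full_matrices=False)
    site_scores = U * s
    explained = s**2 / np.sum(s**2)
    return site_scores, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))   # two "environmental" variables
Y = np.column_stack([X[:, 0] ** 2, X[:, 1], X.sum(axis=1)]) + 0.1 * rng.normal(size=(30, 3))
scores, ratio = polynomial_rda(Y, X)
print(ratio)   # share of constrained variance carried by each canonical axis
```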
Systematic Biology | 2002
Pierre Legendre; Vladimir Makarenkov
A reticulogram is a general network capable of representing a reticulate evolutionary structure. It is particularly useful for portraying relationships among organisms that may be related in a nonunique way to their common ancestor, relationships that cannot be represented by a dendrogram or a phylogenetic tree. We propose a new method for constructing reticulograms that represent a given distance matrix. Reticulate evolution applies first to phylogenetic problems; it has been found in nature, for example, in the within-species microevolution of eukaryotes and in lateral gene transfer in bacteria. In this paper, we propose a new method for reconstructing reticulation networks and we develop applications of the reticulate evolution model to ecological biogeographic, population microevolutionary, and hybridization problems. The first example considers a spatially constrained reticulogram representing the postglacial dispersal of freshwater fishes in the Québec peninsula; the reticulogram provides a better model of postglacial dispersal than does a tree model. The second example depicts the morphological similarities among local populations of muskrats in a river valley in Belgium; adding supplementary branches to a tree depicting the river network leads to a better representation of the morphological distances among local populations of muskrats than does a tree structure. A third example involves hybrids between plants of the genus Aphelandra.
Bioinformatics | 2007
Vladimir Makarenkov; Pablo Zentilli; Dmytro Kevorkov; Andrei Gagarin; Nathalie Malo; Robert Nadon
MOTIVATION High-throughput screening (HTS) is an early-stage process in drug discovery which allows thousands of chemical compounds to be tested in a single study. We report a method for correcting HTS data prior to the hit selection process (i.e. selection of active compounds). The proposed correction minimizes the impact of systematic errors which may affect the hit selection in HTS. The introduced method, called well correction, proceeds by correcting the distribution of measurements within the wells of a given HTS assay. We use simulated and experimental data to illustrate the advantages of the new method compared to other widely used methods of data correction and hit selection in HTS. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
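Well correction looks at each well position separately, taking that well's measurements across all plates of the assay and normalizing their distribution so that position-specific trends do not bias hit selection. A much-simplified sketch of this idea follows (per-well detrending and z-scoring across plates); it is an assumption-laden simplification, not the authors' exact published procedure.

```python
import numpy as np

def well_correction(assay: np.ndarray) -> np.ndarray:
    """Simplified well correction for an HTS assay.

    assay has shape (n_plates, n_rows, n_cols). For every well position we
    (1) remove a linear trend of that well's values along the plate sequence and
    (2) z-score the residuals, so each well's distribution becomes comparable.
    Illustrative simplification of the general idea only.
    """
    n_plates, n_rows, n_cols = assay.shape
    plate_index = np.arange(n_plates)
    corrected = np.empty_like(assay, dtype=float)
    for r in range(n_rows):
        for c in range(n_cols):
            values = assay[:, r, c].astype(float)
            slope, intercept = np.polyfit(plate_index, values, 1)
            residuals = values - (slope * plate_index + intercept)
            std = residuals.std()
            corrected[:, r, c] = residuals / std if std > 0 else residuals
    return corrected

# Example: 50 plates of an 8x12 (96-well) layout with an artificial column effect
rng = np.random.default_rng(1)
raw = rng.normal(size=(50, 8, 12))
raw[:, :, 0] += 2.0            # systematic error affecting the first column
print(well_correction(raw)[:, :, 0].mean().round(3))   # near 0 after correction
```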
Journal of Classification | 2001
Vladimir Makarenkov; Pierre Legendre
This paper examines optimal variable weighting for ultrametric and additive tree clustering and for K-means partitioning. We also describe some new features and improvements to the algorithm proposed by De Soete. Monte Carlo simulations have been conducted using different error conditions. In all cases (i.e., ultrametric or additive trees, or K-means partitioning), the simulation results indicate that the optimal weighting procedure should be used for analyzing data containing noisy variables that do not contribute relevant information to the classification structure. However, if the data involve outliers or error-perturbed variables that are relevant to the classification, it seems better to cluster or partition the entities using variables with equal weights. A new computer program, OVW, which is available to researchers as freeware, implements improved algorithms for optimal variable weighting for ultrametric and additive tree clustering, and includes a new algorithm for optimal variable weighting for K-means partitioning.
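De Soete-style variable weighting searches for nonnegative weights, summing to one, under which the weighted inter-object distances conform as closely as possible to the target classification structure; for ultrametric clustering one common formalization penalizes departures from the ultrametric inequality. The sketch below only evaluates such a stress for a given weight vector (the constrained optimization itself is omitted); it is an illustration of the general idea, not the OVW program.

```python
import numpy as np
from itertools import combinations

def weighted_distances(data: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Pairwise weighted Euclidean distances d_ij = sqrt(sum_k w_k (x_ik - x_jk)^2)."""
    diff = data[:, None, :] - data[None, :, :]
    return np.sqrt((w * diff**2).sum(axis=2))

def ultrametric_stress(data: np.ndarray, w: np.ndarray) -> float:
    """Departure of the weighted distances from the ultrametric inequality.

    For an ultrametric, the two largest distances in every triple (i, j, k) are
    equal; the stress sums the squared gaps between them. Illustrative criterion
    only; the published algorithms refine and normalize this kind of measure.
    """
    d = weighted_distances(data, w)
    stress = 0.0
    for i, j, k in combinations(range(len(data)), 3):
        two_largest = sorted((d[i, j], d[i, k], d[j, k]))[1:]
        stress += (two_largest[1] - two_largest[0]) ** 2
    return stress

rng = np.random.default_rng(2)
data = rng.normal(size=(10, 4))
w_equal = np.full(4, 0.25)       # weights are nonnegative and sum to 1
print(ultrametric_stress(data, w_equal))
```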
Journal of Computational Biology | 2004
Vladimir Makarenkov; Pierre Legendre
In many phylogenetic problems, assuming that species have evolved from a common ancestor by a simple branching process is unrealistic. Reticulate phylogenetic models, however, have been largely neglected because the concept of reticulate evolution has not been supported by appropriate analytical tools and software. The reticulate model can adequately describe such complicated mechanisms as hybridization between species or lateral gene transfer in bacteria. In this paper, we describe a new algorithm for inferring reticulate phylogenies from evolutionary distances among species. The algorithm is capable of detecting contradictory signals encompassed in a phylogenetic tree and identifying possible reticulate events that may have occurred during evolution. The algorithm produces a reticulate phylogeny by gradually improving upon the initial solution provided by a phylogenetic tree model. The new algorithm is compared to the popular SplitsGraph method in a reanalysis of the evolution of photosynthetic organisms. A computer program to construct and visualize reticulate phylogenies, called T-Rex (Tree and Reticulogram Reconstruction), is available to researchers at the following URL: www.fas.umontreal.ca/biol/casgrain/en/labo/t-rex.
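The improvement step can be read as a greedy loop: starting from the tree, candidate reticulation branches are tried and the one that most reduces the discrepancy between the network's minimum-path distances and the observed evolutionary distances is kept. The sketch below shows only the evaluation of that least-squares fit for a given weighted network, using Floyd-Warshall shortest paths; the graph representation and names are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def least_squares_fit(adjacency: np.ndarray, observed: np.ndarray, leaves: list[int]) -> float:
    """Least-squares gap between network path-length distances and observed distances.

    adjacency[u, v] holds the branch length of edge (u, v), or np.inf if absent.
    In a reticulogram the distance between two leaves is taken as the length of
    the shortest path, so Floyd-Warshall gives the network distances directly.
    """
    n = adjacency.shape[0]
    dist = adjacency.copy()
    np.fill_diagonal(dist, 0.0)
    for k in range(n):                      # Floyd-Warshall relaxation
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    q = 0.0
    for a in range(len(leaves)):
        for b in range(a + 1, len(leaves)):
            q += (dist[leaves[a], leaves[b]] - observed[a, b]) ** 2
    return q

# Tiny example: a 4-leaf tree (leaves 0-3, internal nodes 4 and 5).
INF = np.inf
adj = np.full((6, 6), INF)
for u, v, w in [(0, 4, 1.0), (1, 4, 1.0), (2, 5, 1.0), (3, 5, 1.0), (4, 5, 2.0)]:
    adj[u, v] = adj[v, u] = w
observed = np.array([[0, 2, 4, 4], [2, 0, 4, 4], [4, 4, 0, 2], [4, 4, 2, 0]], dtype=float)
print(least_squares_fit(adj, observed, leaves=[0, 1, 2, 3]))   # 0.0: the tree fits exactly
# A greedy reticulogram procedure would now test extra edges and keep any that lower this value.
```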
Systematic Biology | 2010
Alix Boc; Hervé Philippe; Vladimir Makarenkov
Horizontal gene transfer (HGT) is one of the main mechanisms driving the evolution of microorganisms. Its accurate identification is one of the major challenges posed by reticulate evolution. In this article, we describe a new polynomial-time algorithm for inferring HGT events and compare 3 existing and 1 new tree comparison indices in the context of HGT identification. The proposed algorithm can rely on different optimization criteria, including least squares (LS), Robinson and Foulds (RF) distance, quartet distance (QD), and bipartition dissimilarity (BD), when searching for an optimal scenario of subtree prune and regraft (SPR) moves needed to transform the given species tree into the given gene tree. As the simulation results suggest, the algorithmic strategy based on BD, introduced in this article, generally provides better results than those based on LS, RF, and QD. The BD-based algorithm also proved to be more accurate and faster than the well-known polynomial-time heuristic RIATA-HGT. Moreover, the HGT recovery results yielded by BD were generally equivalent to those provided by the exponential-time algorithm LatTrans, but a clear gain in running time was obtained using the new algorithm. Finally, a statistical framework for assessing the reliability of the obtained HGTs by bootstrap analysis is also presented.
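The RF distance counts the bipartitions (splits of the leaf set induced by internal edges) that the two trees do not share; bipartition dissimilarity, introduced in the article, refines this by also scoring how close non-identical bipartitions are rather than counting only exact matches. A minimal RF sketch over precomputed splits is shown below; it is an illustration, not the authors' implementation, and the split-list input format is an assumption.

```python
def canonical_splits(splits, taxa):
    """Normalize each bipartition so the side not containing a fixed anchor taxon is kept."""
    anchor = min(taxa)
    out = set()
    for side in splits:
        side = frozenset(side)
        out.add(frozenset(taxa) - side if anchor in side else side)
    return out

def robinson_foulds(splits1, splits2, taxa):
    """Robinson-Foulds distance: number of bipartitions present in exactly one tree."""
    s1 = canonical_splits(splits1, taxa)
    s2 = canonical_splits(splits2, taxa)
    return len(s1 ^ s2)

# Two unrooted 5-taxon trees given by their non-trivial splits (one side of each).
taxa = {"a", "b", "c", "d", "e"}
tree1 = [{"a", "b"}, {"d", "e"}]          # ((a,b),c,(d,e))
tree2 = [{"a", "b"}, {"c", "d"}]          # ((a,b),e,(c,d))
print(robinson_foulds(tree1, tree2, taxa))  # 2: each tree has one split the other lacks
```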
Journal of Biomolecular Screening | 2005
Dmytro Kevorkov; Vladimir Makarenkov
High-throughput screening (HTS) is an efficient technology for drug discovery. It allows for screening of more than 100,000 compounds a day per screen and requires effective procedures for quality control. The authors have developed a method for evaluating the background surface of an HTS assay; it can be used to correct raw HTS data. This correction is necessary to take into account systematic errors that may affect the procedure of hit selection. The described method allows one to analyze experimental HTS data and determine trends and local fluctuations of the corresponding background surfaces. For an assay with a large number of plates, the deviations of the background surface from a plane are caused by systematic errors. Their influence can be minimized by the subtraction of the systematic background from the raw data. Two experimental HTS assays from the ChemBank database are examined in this article. The systematic error present in these data was estimated and removed, which enabled the authors to correct the hit selection procedure for both assays.
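The background surface is estimated from the whole assay: plates are normalized, the normalized values are averaged well by well to reveal systematic row, column or edge effects, and the resulting surface is subtracted from every plate. A simplified sketch of that idea follows (plate-wise z-scores and a per-well mean); it is an illustration under those assumptions, not the published procedure verbatim.

```python
import numpy as np

def estimate_background_surface(assay: np.ndarray) -> np.ndarray:
    """Estimate the systematic background surface of an HTS assay.

    assay has shape (n_plates, n_rows, n_cols). Each plate is z-score normalized
    and the normalized plates are averaged well by well; for error-free data the
    result is approximately flat, while systematic errors appear as a non-flat
    surface that can be subtracted from every normalized plate.
    """
    means = assay.mean(axis=(1, 2), keepdims=True)
    stds = assay.std(axis=(1, 2), keepdims=True)
    normalized = (assay - means) / stds
    return normalized.mean(axis=0)

rng = np.random.default_rng(3)
raw = rng.normal(loc=100.0, scale=10.0, size=(40, 8, 12))
raw[:, 0, :] += 15.0                         # artificial systematic error in the first row
surface = estimate_background_surface(raw)
normalized = (raw - raw.mean(axis=(1, 2), keepdims=True)) / raw.std(axis=(1, 2), keepdims=True)
corrected = normalized - surface             # background-subtracted data
print(surface[0].mean().round(2), corrected[:, 0, :].mean().round(2))
```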
Bioinformatics | 2006
Vladimir Makarenkov; Dmytro Kevorkov; Pablo Zentilli; Andrei Gagarin; Nathalie Malo; Robert Nadon
MOTIVATION High-throughput screening (HTS) plays a central role in modern drug discovery, allowing for testing of >100,000 compounds per screen. The aim of our work was to develop and implement methods for minimizing the impact of systematic error in the analysis of HTS data. To the best of our knowledge, the two new data correction methods included in HTS-Corrector are not available in any existing commercial software or freeware. RESULTS This paper describes HTS-Corrector, a software application for the analysis of HTS data, detection and visualization of systematic error, and corresponding correction of HTS signals. Three new methods for the statistical analysis and correction of raw HTS data are included in HTS-Corrector: background evaluation, well correction and hit-sigma distribution procedures, intended to minimize the impact of systematic errors. We discuss the main features of HTS-Corrector and demonstrate the benefits of the algorithms.
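After correction, hits are commonly selected with a mean-minus-c-sigma rule: a compound is retained when its corrected value lies beyond a chosen number of standard deviations from the assay mean (below it in an inhibition assay). A small illustration of this standard rule is given below; the threshold of 3 SD, the function name and the simulated data are illustrative assumptions, not HTS-Corrector's defaults.

```python
import numpy as np

def select_hits(values: np.ndarray, c: float = 3.0) -> np.ndarray:
    """Indices of compounds whose value falls below mean - c * SD (inhibition assay)."""
    mu, sigma = values.mean(), values.std()
    return np.flatnonzero(values < mu - c * sigma)

rng = np.random.default_rng(4)
measurements = rng.normal(loc=100.0, scale=10.0, size=10_000)
measurements[[10, 20, 30]] -= 60.0          # three strongly inhibiting compounds
print(select_hits(measurements))             # includes 10, 20, 30 (plus any chance outliers)
```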