Publication


Featured research published by Simon de Givry.


Bioinformatics | 2005

CarthaGene: multipopulation integrated genetic and radiation hybrid mapping

Simon de Givry; Martin Bouchez; Patrick Chabrier; Denis Milan; Thomas Schiex

CarthaGene is an integrated genetic and radiation hybrid (RH) mapping tool which can deal with multiple populations, including mixtures of genetic and RH data. CarthaGene performs multipoint maximum likelihood estimations with accelerated expectation-maximization algorithms for some pedigrees and has sophisticated algorithms for marker ordering. Dedicated heuristics for framework mapping are also included. CarthaGene can be used as a C++ library, through a shell command and through a graphical interface. XML output for companion tools is integrated. Availability: The program is available free of charge from www.inra.fr/bia/T/CarthaGene for Linux, Windows and Solaris machines (with open source). Contact: [email protected].


Principles and Practice of Constraint Programming | 2003

Solving Max-SAT as weighted CSP

Simon de Givry; Javier Larrosa; Pedro Meseguer; Thomas Schiex

For the last ten years, a significant amount of work in the constraint community has been devoted to the improvement of complete methods for solving soft constraint networks. We wanted to see how recent progress in the weighted CSP (WCSP) field could compete with other approaches in related fields. One of these fields is propositional logic and the well-known Max-SAT problem. In this paper, we show how Max-SAT can be encoded as a weighted constraint network, either directly or using a dual encoding. We then solve Max-SAT instances using state-of-the-art algorithms for weighted Max-CSP, dedicated Max-SAT solvers and the state-of-the-art MIP solver CPLEX. The results show that, despite a limited adaptation to CNF structure, WCSP-solver-based methods are competitive with existing methods and can even outperform them, especially on the hardest, most over-constrained problems. This research is partially supported by the French-Spanish collaboration PICASSO 05158SM - Integrated Action HF02-69. The second and third authors are also supported by the REPLI project TIC-2002-04470-C03.
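
To make the direct encoding concrete, here is a minimal sketch (not the paper's solver, and with made-up variables, clauses and weights): each weighted clause becomes a cost function that charges its weight exactly when the clause is falsified, and minimising the total cost of the resulting weighted constraint network solves the Max-SAT instance.

```python
# Minimal sketch (not the paper's solver): the direct encoding of weighted
# Max-SAT as a weighted CSP, where every clause becomes a cost function that
# pays the clause weight exactly when the clause is falsified.
from itertools import product

# Illustrative weighted CNF over Boolean variables 1..3: (literal list, weight).
# Positive integers are positive literals, negative integers are negated ones.
clauses = [([1, 2], 3), ([-1, 3], 2), ([-2, -3], 1), ([2], 4)]
variables = [1, 2, 3]

def clause_cost(clause, weight, assignment):
    """Cost function of one clause: 0 if satisfied, its weight otherwise."""
    satisfied = any((assignment[abs(lit)] == 1) == (lit > 0) for lit in clause)
    return 0 if satisfied else weight

def network_cost(assignment):
    """Total cost of the weighted constraint network (sum of clause costs)."""
    return sum(clause_cost(c, w, assignment) for c, w in clauses)

# Brute-force minimisation over the 2^n assignments (fine for a toy instance):
# the minimum equals the minimum total weight of falsified clauses in Max-SAT.
best = min((dict(zip(variables, values))
            for values in product([0, 1], repeat=len(variables))),
           key=network_cost)
print("optimal assignment:", best, "cost:", network_cost(best))
```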


Artificial Intelligence | 2008

A logical approach to efficient Max-SAT solving

Javier Larrosa; Federico Heras; Simon de Givry

Weighted Max-SAT is the optimization version of SAT and many important problems can be naturally encoded as such. Solving weighted Max-SAT is an important problem from both a theoretical and a practical point of view. In recent years, there has been considerable interest in finding efficient solving techniques. Most of this work focuses on the computation of good quality lower bounds to be used within a branch and bound DPLL-like algorithm. Most often, these lower bounds are described in a procedural way. Because of that, it is difficult to realize the logic that is behind. In this paper we introduce an original framework for Max-SAT that stresses the parallelism with classical SAT. Then, we extend the two basic SAT solving techniques: search and inference. We show that many algorithmic tricks used in state-of-the-art Max-SAT solvers are easily expressible in logical terms in a unified manner, using our framework. We also introduce an original search algorithm that performs a restricted amount of weighted resolution at each visited node. We empirically compare our algorithm with a variety of solving alternatives on several benchmarks. Our experiments, which constitute to the best of our knowledge the most comprehensive Max-SAT evaluation ever reported, demonstrate the practical usability of our approach.
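
As a small, hedged illustration of the kind of inference involved (a sketch with invented weights, not the paper's algorithm): the simplest special case of weighted resolution acts on two opposite unit clauses (x, u) and (not x, w). At most one of them can be satisfied, so min(u, w) can be moved to a guaranteed lower bound while the residual weights remain. The general rule also resolves longer clauses and introduces compensation clauses, which this sketch omits.

```python
def resolve_opposite_units(units, lower_bound):
    """units maps a literal to its weight, e.g. {1: 3, -1: 2, 2: 5}.
    Resolves every x / -x pair: min(u, w) is proved unsatisfiable and moved
    to the lower bound, while the residual weights stay as unit clauses."""
    units = dict(units)
    for lit in list(units):
        if lit > 0 and -lit in units:
            m = min(units[lit], units[-lit])
            if m > 0:
                lower_bound += m      # this much weight is falsified by any assignment
                units[lit] -= m
                units[-lit] -= m
    return {l: w for l, w in units.items() if w > 0}, lower_bound

units, lb = resolve_opposite_units({1: 3, -1: 2, 2: 5}, lower_bound=0)
print(units, lb)   # {1: 1, 2: 5} and lb == 2
```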


PLOS ONE | 2011

Gene Regulatory Network Reconstruction Using Bayesian Networks, the Dantzig Selector, the Lasso and Their Meta-Analysis

Matthieu Vignes; Jimmy Vandel; David Allouche; Nidal Ramadan-Alban; Christine Cierco-Ayrolles; Thomas Schiex; Brigitte Mangin; Simon de Givry

Modern technologies, and especially next-generation sequencing facilities, are giving cheaper access to genotype and genomic data measured on the same sample at once. This creates an ideal situation for multifactorial experiments designed to infer gene regulatory networks. The fifth “Dialogue for Reverse Engineering Assessments and Methods” (DREAM5) challenges are aimed at assessing methods and associated algorithms devoted to the inference of biological networks. Challenge 3, on “Systems Genetics”, proposed to infer causal gene regulatory networks from different genetical genomics data sets. We investigated a wide panel of methods, ranging from Bayesian networks to penalised linear regressions, to analyse such data, and proposed a simple yet very powerful meta-analysis that combines these inference methods. We present the results of the Challenge as well as a more in-depth analysis of the predicted networks in terms of structure and reliability. The developed meta-analysis was ranked first among the teams participating in Challenge 3A. It paves the way for future extensions of our inference method and more accurate gene network estimates in the context of genetical genomics.
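
As a rough, hedged sketch of one ingredient in such a panel (toy simulated data, not the paper's DREAM5 pipeline): each gene is regressed on all the others with the Lasso, nonzero coefficients are read as candidate regulatory edges, and a naive rank-averaging helper stands in for the meta-analysis step that combines several methods' edge scores. All names and parameters below (alpha, the planted effect, the number of genes) are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_genes = 100, 5
X = rng.normal(size=(n_samples, n_genes))
X[:, 1] += 0.8 * X[:, 0]          # plant one dependency between gene 0 and gene 1

def lasso_edge_scores(X, alpha=0.1):
    """Score candidate edge j -> i by |Lasso coefficient| of gene j when predicting gene i."""
    n = X.shape[1]
    scores = np.zeros((n, n))
    for i in range(n):
        others = [j for j in range(n) if j != i]
        model = Lasso(alpha=alpha).fit(X[:, others], X[:, i])
        for j, coef in zip(others, model.coef_):
            scores[j, i] = abs(coef)
    return scores

def rank_average(score_matrices):
    """Naive meta-analysis: average each candidate edge's rank across several methods' scores."""
    ranks = [(-s).flatten().argsort().argsort() for s in score_matrices]
    return np.mean(ranks, axis=0).reshape(score_matrices[0].shape)

scores = lasso_edge_scores(X)
print("strongest candidate edge (j, i):", np.unravel_index(scores.argmax(), scores.shape))
```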


Bioinformatics | 2013

A new framework for computational protein design through cost function network optimization

Seydou Traoré; David Allouche; Isabelle André; Simon de Givry; George Katsirelos; Thomas Schiex; Sophie Barbe

Motivation: The main challenge for structure-based computational protein design (CPD) remains the combinatorial nature of the search space. Even in its simplest fixed-backbone formulation, CPD encompasses a computationally difficult, NP-hard problem that prevents the exact exploration of complex systems defining large sequence-conformation spaces. Results: We present here a CPD framework, based on cost function network (CFN) solving, a recent exact combinatorial optimization technique, to efficiently handle the highly complex combinatorial spaces encountered in various protein design problems. We show that the CFN-based approach is able to solve to optimality a variety of complex designs that often could not be solved using a usual CPD-dedicated tool or state-of-the-art exact operations research tools. Beyond the identification of the optimal solution, the global minimum-energy conformation, the CFN-based method is also able to quickly enumerate large ensembles of suboptimal solutions of interest for rationally building experimental enzyme mutant libraries. Availability: The combined pipeline used to generate energetic models (based on a patched version of the open source solver Osprey 2.0), the conversion to CFN models (based on Perl scripts) and CFN solving (based on the open source solver toulbar2) are all available at http://genoweb.toulouse.inra.fr/~tschiex/CPD
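
To make the underlying objective concrete, here is a minimal sketch (with invented energy tables, not the paper's pipeline): under a rigid backbone, the energy decomposes into per-position terms E_i(r_i) and pairwise terms E_ij(r_i, r_j), and the global minimum-energy conformation (GMEC) minimises their sum. A CFN solver such as toulbar2 searches this space exactly with bounds and propagation; the toy below simply brute-forces a tiny instance.

```python
# Minimal sketch of the optimisation problem behind CPD (toy numbers, not the
# paper's pipeline): unary rotamer energies plus pairwise interaction energies,
# minimised over all sequence-conformation choices.
from itertools import product

n_positions = 3
rotamers = range(2)                       # two candidate rotamers per position

# Hypothetical energy tables, indexed by the rotamer chosen at each position.
E_unary = [[0.0, 1.2], [0.5, 0.0], [0.3, 0.9]]
E_binary = {
    (0, 1): [[0.0, 2.0], [1.0, 0.0]],
    (0, 2): [[0.4, 0.0], [0.0, 0.6]],
    (1, 2): [[0.0, 0.3], [0.7, 0.0]],
}

def energy(assignment):
    """Total decomposed energy of one sequence-conformation choice."""
    total = sum(E_unary[i][r] for i, r in enumerate(assignment))
    total += sum(table[assignment[i]][assignment[j]]
                 for (i, j), table in E_binary.items())
    return total

gmec = min(product(rotamers, repeat=n_positions), key=energy)
print("GMEC:", gmec, "energy:", round(energy(gmec), 3))
```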


Artificial Intelligence | 2014

Computational protein design as an optimization problem

David Allouche; Isabelle André; Sophie Barbe; Jessica Davies; Simon de Givry; George Katsirelos; Barry O'Sullivan; Steven David Prestwich; Thomas Schiex; Seydou Traoré

Proteins are chains of simple molecules called amino acids. The three-dimensional shape of a protein and its amino acid composition define its biological function. Over millions of years, living organisms have evolved a large catalog of proteins. By exploring the space of possible amino acid sequences, protein engineering aims at similarly designing tailored proteins with specific desirable properties. In Computational Protein Design (CPD), the challenge of identifying a protein that performs a given task is defined as the combinatorial optimization of a complex energy function over amino acid sequences. In this paper, we introduce the CPD problem and some of the main approaches that have been used by structural biologists to solve it, with an emphasis on the exact method embodied in the dead-end elimination/A* algorithm (DEE/A*). The CPD problem is a specific form of binary Cost Function Network (CFN, also known as weighted CSP). We show how DEE algorithms can be incorporated and suitably modified to be maintained during search, at reasonable computational cost. We then evaluate the efficiency of CFN algorithms as implemented in our solver toulbar2 on a set of real CPD instances built in collaboration with structural biologists. The CPD problem can be easily reduced to 0/1 Linear Programming, 0/1 Quadratic Programming, 0/1 Quadratic Optimization, Weighted Partial MaxSAT and Graphical Model optimization problems. We compare toulbar2 with these different approaches using a variety of solvers. We observe tremendous differences in the difficulty that each approach has on these instances. Overall, the CFN approach shows the best efficiency on these problems, improving by several orders of magnitude over the exact DEE/A* approach. Introducing dead-end elimination before or during search allows these results to be improved further.
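
As a hedged illustration of the dead-end elimination idea referred to above (toy energy tables, not the authors' code), the Goldstein criterion prunes a rotamer r at position i whenever some alternative t is at least as good against every possible choice at the other positions, i.e. E_i(r) - E_i(t) + sum over j of min_s [E_ij(r, s) - E_ij(t, s)] > 0. The positions and numbers below are invented for the example.

```python
# Minimal sketch of the Goldstein dead-end elimination (DEE) criterion on
# hypothetical energy tables: rotamer r at position i is prunable if some
# alternative t dominates it against every choice at the other positions.
E_unary = {0: [0.0, 3.0], 1: [0.5, 0.0]}        # position -> per-rotamer energy
E_binary = {(0, 1): [[0.0, 0.2], [1.0, 1.5]]}   # pair -> matrix [rotamer_at_0][rotamer_at_1]

def pair_energy(i, ri, j, rj):
    """Pairwise energy, looking the pair up in whichever orientation is stored."""
    if (i, j) in E_binary:
        return E_binary[(i, j)][ri][rj]
    return E_binary[(j, i)][rj][ri]

def goldstein_dominated(i, r, t, positions):
    """True if rotamer r at position i is dominated by rotamer t (so r is a dead end)."""
    delta = E_unary[i][r] - E_unary[i][t]
    for j in positions:
        if j == i:
            continue
        delta += min(pair_energy(i, r, j, s) - pair_energy(i, t, j, s)
                     for s in range(len(E_unary[j])))
    return delta > 0

positions = list(E_unary)
print(goldstein_dominated(0, 1, 0, positions))   # True: rotamer 1 at position 0 can be pruned
```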


BMC Genomics | 2007

A high resolution radiation hybrid map of bovine chromosome 14 identifies scaffold rearrangement in the latest bovine assembly

E. Marques; Simon de Givry; Paul Stothard; B. Murdoch; Z. Wang; James E. Womack; Stephen S. Moore

Background: Radiation hybrid (RH) maps are considered to be a tool of choice for fine mapping closely linked loci, considering that the resolution of linkage maps is determined by the number of informative meioses and recombination events, which may require very large mapping populations. Accurately defining the marker order on chromosomes is crucial for correct identification of quantitative trait loci (QTL), haplotype map construction and refinement of candidate gene searches. Results: A 12 k radiation hybrid map of bovine chromosome 14 was constructed using 843 single nucleotide polymorphism markers. The resulting map was aligned with the latest version of the bovine assembly (Btau_3.1) as well as other previously published RH maps. The resulting map identified distinct regions on bovine chromosome 14 where discrepancies between this RH map and the bovine assembly occur. A major region of discrepancy was found near the centromere, involving the arrangement and order of the scaffolds from the assembly. The map further confirms previously published conserved synteny blocks with human chromosome 8. It also identifies an extra breakpoint and conserved synteny block previously undetected due to lower marker density. This conserved synteny block is in a region where markers between the RH map presented here and the latest sequence assembly are in very good agreement. Conclusion: The increase of publicly available markers shifts the rate-limiting step from marker discovery to the correct identification of their order for further use by the research community. This high resolution map of bovine chromosome 14 will facilitate identification of regions in the sequence assembly where additional information is required to resolve marker ordering.


Journal of Chemical Theory and Computation | 2015

Guaranteed Discrete Energy Optimization on Large Protein Design Problems

David Simoncini; David Allouche; Simon de Givry; Céline Delmas; Sophie Barbe; Thomas Schiex

In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum-energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB of RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full-redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.
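
The kind of comparison described above can be mimicked on a toy problem. The sketch below (invented pairwise energies and a naive annealer, not Rosetta's simulated annealing or the paper's protocol) brute-forces the optimum of a small decomposable energy and counts how often repeated annealing runs actually reach it; all sizes, temperatures and step counts are arbitrary.

```python
import math
import random
from itertools import product

random.seed(0)
n, k = 6, 4                                   # 6 design positions, 4 rotamers each
E1 = [[random.uniform(0, 2) for _ in range(k)] for _ in range(n)]
E2 = {(i, j): [[random.uniform(0, 2) for _ in range(k)] for _ in range(k)]
      for i in range(n) for j in range(i + 1, n)}

def energy(a):
    """Decomposable energy: unary terms plus all pairwise terms."""
    return (sum(E1[i][a[i]] for i in range(n))
            + sum(E2[i, j][a[i]][a[j]] for (i, j) in E2))

exact = min(product(range(k), repeat=n), key=energy)   # proven optimum by exhaustion

def anneal(steps=2000, t0=2.0):
    """Naive single-mutation simulated annealing with a linear cooling schedule."""
    a = [random.randrange(k) for _ in range(n)]
    for s in range(steps):
        t = t0 * (1 - s / steps) + 1e-3
        i, r = random.randrange(n), random.randrange(k)
        b = list(a)
        b[i] = r
        delta = energy(b) - energy(a)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            a = b
    return tuple(a)

hits = sum(anneal() == exact for _ in range(20))
print("annealing reached the proven optimum in", hits, "of 20 runs")
```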


Bioinformatics | 2010

Statistical confidence measures for genome maps

Bertrand Servin; Simon de Givry; Thomas Faraut

Motivation: Genome maps are imperative to address the genetic basis of the biology of an organism. While a growing number of genomes are being sequenced, providing the ultimate genome maps (now at an even faster pace thanks to new-generation sequencers), the process of constructing intermediate maps to build and validate a genome assembly remains an important component of producing complete genome sequences. However, current mapping approaches lack the statistical confidence measures necessary to precisely identify relevant inconsistencies between a genome map and an assembly. Results: We propose new methods to derive statistical measures of confidence on genome maps using a comparative model for radiation hybrid data. We describe algorithms that allow us to (i) sample from a distribution of maps and (ii) exploit this distribution to construct robust maps. We provide an example application of these methods on a dog dataset that demonstrates the value of our approach. Availability: The methods are implemented in two freely available software packages: CarthaGene (http://www.inra.fr/mia/T/CarthaGene/) and a companion tool (Metamap, available at http://snp.toulouse.inra.fr/~servin/index.cgi/Metamap).
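
To give a rough feel for what sampling from a distribution of maps can buy (a heavily simplified sketch: a made-up one-dimensional marker signal and a toy score, not CarthaGene's comparative RH likelihood), the code below runs Metropolis swap moves over marker orders and then measures how often each pair of markers appears adjacent across the sampled maps; adjacencies with support near 1 are the robust parts of the map.

```python
import math
import random
from collections import Counter

random.seed(1)
markers = list(range(6))
signal = {m: m + random.uniform(-0.2, 0.2) for m in markers}   # hypothetical 1-D marker signal

def log_score(order):
    """Toy log-likelihood: orders whose neighbours are close in the signal score higher."""
    return -sum(abs(signal[a] - signal[b]) for a, b in zip(order, order[1:]))

def sample_orders(n_samples=5000, temperature=0.2):
    """Metropolis sampling over marker orders using random pairwise swaps."""
    order, samples = list(markers), []
    for _ in range(n_samples):
        i, j = random.sample(range(len(order)), 2)
        proposal = list(order)
        proposal[i], proposal[j] = proposal[j], proposal[i]
        delta = (log_score(proposal) - log_score(order)) / temperature
        if delta >= 0 or random.random() < math.exp(delta):
            order = proposal
        samples.append(tuple(order))
    return samples

samples = sample_orders()
adjacent = Counter(frozenset(p) for s in samples for p in zip(s, s[1:]))
support = {tuple(sorted(pair)): count / len(samples) for pair, count in adjacent.most_common(3)}
print(support)   # adjacencies with support near 1 are robust across the sampled maps
```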


Principles and Practice of Constraint Programming | 2015

Anytime Hybrid Best-First Search with tree decomposition for weighted CSP

David Allouche; Simon de Givry; Georgios Katsirelos; Thomas Schiex; Matthias Zytnicki

We propose Hybrid Best-First Search (HBFS), a search strategy for optimization problems that combines Best-First Search (BFS) and Depth-First Search (DFS). Like BFS, HBFS provides an anytime global lower bound on the optimum, while also providing anytime upper bounds, like DFS. Hence, it provides feedback on the progress of search and solution quality in the form of an optimality gap. In addition, it exhibits highly dynamic behavior that allows it to perform on par with methods like limited discrepancy search and frequent restarting in terms of quickly finding good solutions. We also use the lower bounds reported by HBFS in problems with small treewidth, by integrating it into Backtracking with Tree Decomposition (BTD). BTD-HBFS exploits the lower bounds reported by HBFS in individual clusters to improve the anytime behavior and global pruning lower bound of BTD. In an extensive empirical evaluation on optimization problems from a variety of application domains, we show that both HBFS and BTD-HBFS improve both anytime and overall performance compared to their counterparts.
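
A minimal sketch of the idea (not the toulbar2 implementation; the toy cost network, the bound and the backtrack budget below are all illustrative): partial assignments are kept in a best-first open list ordered by lower bound, each popped node is extended by a depth-first probe with a small backtrack budget, and whatever the probe leaves unexplored goes back onto the open list. The smallest lower bound in the open list and the best solution found so far give the anytime optimality gap.

```python
import heapq

# Hypothetical toy weighted CSP: unary costs plus one binary cost table on (0, 1).
domains = [2, 2, 2]
unary = [[0, 2], [1, 0], [0, 3]]
binary = {(0, 1): [[0, 4], [1, 0]]}

def assigned_cost(partial):
    """Cost of the constraints fully assigned by a partial assignment (a tuple prefix)."""
    cost = sum(unary[i][v] for i, v in enumerate(partial))
    for (i, j), table in binary.items():
        if i < len(partial) and j < len(partial):
            cost += table[partial[i]][partial[j]]
    return cost

def lower_bound(partial):
    """Admissible bound: assigned cost plus the best unary cost of each unassigned variable."""
    return assigned_cost(partial) + sum(min(unary[i]) for i in range(len(partial), len(domains)))

def hbfs(backtrack_budget=2):
    best_cost, best_sol = float("inf"), None
    open_list = [(lower_bound(()), ())]               # best-first frontier of partial assignments
    while open_list:
        lb, node = heapq.heappop(open_list)
        if lb >= best_cost:
            continue                                  # node can no longer improve the incumbent
        stack, backtracks = [node], 0                 # depth-first probe from the popped node
        while stack and backtracks <= backtrack_budget:
            partial = stack.pop()
            if len(partial) == len(domains):
                if assigned_cost(partial) < best_cost:
                    best_cost, best_sol = assigned_cost(partial), partial
                backtracks += 1
                continue
            children = [partial + (v,) for v in range(domains[len(partial)])]
            children = [c for c in children if lower_bound(c) < best_cost]
            children.sort(key=lower_bound, reverse=True)   # lowest bound popped first
            stack.extend(children)
        for partial in stack:                         # unexplored siblings go back to the open list
            heapq.heappush(open_list, (lower_bound(partial), partial))
        gap_lb = min([b for b, _ in open_list], default=best_cost)
        print("anytime gap:", gap_lb, "..", best_cost)
    return best_sol, best_cost

print(hbfs())
```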

Collaboration


Dive into Simon de Givry's collaborations.

Top Co-Authors

David Allouche (Institut national de la recherche agronomique)
Gauthier Quesnel (Institut national de la recherche agronomique)
Matthias Zytnicki (Institut national de la recherche agronomique)
Alexandre Joannon (Institut national de la recherche agronomique)
Aurélie Favier (Institut national de la recherche agronomique)
Brigitte Mangin (Institut national de la recherche agronomique)