Jean-Paul Delahaye
Laboratoire d'Informatique Fondamentale de Lille
Publications
Featured research published by Jean-Paul Delahaye.
Applied Mathematics and Computation | 2012
Jean-Paul Delahaye; Hector Zenil
We describe an alternative method (to compression) that combines several theoretical and experimental results to numerically approximate the algorithmic Kolmogorov-Chaitin complexity of all $\sum_{n=1}^{8} 2^n$ bit strings up to 8 bits long, and for some between 9 and 16 bits long. This is done by an exhaustive execution of all deterministic 2-symbol Turing machines with up to four states for which the halting times are known thanks to the Busy Beaver problem, that is, 11,019,960,576 machines. An output frequency distribution is then computed, from which the algorithmic probability is calculated and the algorithmic complexity evaluated by way of the Levin-Chaitin coding theorem.
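A minimal sketch of the coding-theorem step described above: from an output frequency distribution over the halting machines' outputs, the algorithmic probability of a string is estimated as its relative frequency, and the Levin-Chaitin coding theorem gives a complexity estimate as the negative base-2 logarithm of that probability. The frequency counts below are made-up toy values, not the paper's data.

```python
from math import log2

# Hypothetical counts of strings produced by halting Turing machines (toy values).
output_counts = {
    "0": 5000, "1": 5000,
    "00": 1200, "01": 900, "10": 900, "11": 1200,
    "010": 150, "0101": 40,
}
total = sum(output_counts.values())

def algorithmic_probability(s: str) -> float:
    """Estimate m(s) as the output frequency of s among halting machines."""
    return output_counts.get(s, 0) / total

def coding_theorem_complexity(s: str) -> float:
    """Estimate K(s) via the coding theorem: K(s) ~ -log2 m(s)."""
    m = algorithmic_probability(s)
    if m == 0:
        raise ValueError(f"string {s!r} was never produced; no estimate available")
    return -log2(m)

for s in ("0", "01", "0101"):
    print(s, round(coding_theorem_complexity(s), 2))
```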
Bioinformatics | 1997
Eric Rivals; Olivier Delgrange; Jean-Paul Delahaye; Max Dauchet; Marie-Odile Delorme; Alain Hénaut; Emmanuelle Ollivier
MOTIVATION: Compression algorithms can be used to analyse genetic sequences. A compression algorithm tests a given property on the sequence and uses it to encode the sequence: if the property holds, it reveals some structure of the sequence that can be described briefly, which yields a description of the sequence shorter than the sequence of nucleotides given in extenso. The more a sequence is compressed by the algorithm, the more significant the property is for that sequence. RESULTS: We present a compression algorithm that tests for the presence of a particular type of dosDNA (defined ordered sequence-DNA): approximate tandem repeats of small motifs (i.e. of length < 4). This algorithm has been tested on four yeast chromosomes. The presence of approximate tandem repeats seems to be a uniform structural property of yeast chromosomes.
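A toy illustration of the "more compressible, therefore more significant structure" idea above. zlib is only a stand-in here; the paper uses a dedicated encoder for approximate tandem repeats of short motifs, not a general-purpose compressor, and the sequences are invented for the example.

```python
import random
import zlib

random.seed(0)

repeat_rich = "ACG" * 200 + "T"  # approximate tandem repeat of a 3-letter motif
# Same nucleotide composition, but the repeat structure is destroyed by shuffling.
shuffled = "".join(random.sample(repeat_rich, len(repeat_rich)))

def compressed_len(seq: str) -> int:
    """Length in bytes of the zlib-compressed sequence."""
    return len(zlib.compress(seq.encode()))

print("repeat-rich:", compressed_len(repeat_rich), "bytes")
print("shuffled   :", compressed_len(shuffled), "bytes")
# The repeat-rich sequence compresses far better, signalling the tandem-repeat property.
```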
PLOS ONE | 2014
Fernando Soler-Toscano; Hector Zenil; Jean-Paul Delahaye; Nicolas Gauvrit
Drawing on various notions from theoretical computer science, we present a novel numerical approach, motivated by the notion of algorithmic probability, to the problem of approximating the Kolmogorov-Chaitin complexity of short strings. The method is an alternative to the traditional lossless compression algorithms, which it may complement, the two being serviceable for different string lengths. We provide a thorough analysis for all binary strings of length and for most strings of length by running all Turing machines with 5 states and 2 symbols (with reduction techniques) using the most standard formalism of Turing machines, used, for example, in the Busy Beaver problem. We address the question of stability and error estimation, the sensitivity of the continued application of the method for wider coverage and better accuracy, and provide statistical evidence suggesting robustness. As with compression algorithms, this work promises to deliver a range of applications, and to provide insight into the question of complexity calculation of finite (and short) strings. Additional material can be found at the Algorithmic Nature Group website at http://www.algorithmicnature.org. An Online Algorithmic Complexity Calculator implementing this technique and making the data available to the research community is accessible at http://www.complexitycalculator.com.
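To see why compression is of little use in the short-string regime the method above targets, note that a general-purpose compressor adds fixed header overhead, so for inputs of a few bytes the "compressed" form is longer than the original. The snippet below is an illustration of that observation only, not an experiment from the paper.

```python
import zlib

# Compressed length versus original length for inputs of increasing size.
for s in (b"0101", b"01010101", b"0" * 64, b"0" * 4096):
    print(len(s), "->", len(zlib.compress(s)))
# Only for the longer inputs does the compressed length drop below the original,
# which is why the coding-theorem approach complements compression for short strings.
```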
Biochimie | 1996
É. Rivals; M. Dauchet; Jean-Paul Delahaye; Olivier Delgrange
A novel approach to genetic sequence analysis is presented. This approach, based on compression algorithms, was launched simultaneously by Grumbach and Tahi, Milosavljevic, and Rivals. To reduce the description of an object, a compression algorithm replaces some regularities in the description by special codes. A compression algorithm can thus be applied to a sequence in order to study the presence of such regularities throughout the sequence. This paper explains this capability, gives examples of compression algorithms already developed, and mentions their applications. Finally, the theoretical foundations of the approach are presented in an overview of the algorithmic theory of information.
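For context, the standard inequality from algorithmic information theory that underpins the compression approach is stated below; it is a general fact, not a formula taken from this paper.

```latex
% Any lossless compressor C yields a computable upper bound on the Kolmogorov
% complexity K(s) of a sequence s: the compressed length plus a constant c_C
% accounting for the fixed decompressor.
\[
  K(s) \;\le\; |C(s)| + c_C
\]
% The more C shrinks s, the tighter the bound, and the stronger the evidence
% that the regularity encoded by C is genuinely present in s.
```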
Numerische Mathematik | 1980
Jean-Paul Delahaye; B. Germain-Bonne
Summary: It is well known that some information is needed in order to accelerate the convergence of a sequence efficiently. We show in this article that, for several families of sequences, there is no algorithm that accelerates the convergence of every sequence in the family.
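To make "accelerating the convergence of a sequence" concrete, here is Aitken's delta-squared transform applied to a slowly converging fixed-point iteration. This is a textbook illustration, not an algorithm from the paper, whose point is precisely that no single algorithm can accelerate every sequence of certain families.

```python
import math

def aitken(seq):
    """Return the Aitken delta-squared transform of a list of sequence terms."""
    out = []
    for n in range(len(seq) - 2):
        x0, x1, x2 = seq[n], seq[n + 1], seq[n + 2]
        denom = x2 - 2 * x1 + x0
        out.append(x2 if denom == 0 else x2 - (x2 - x1) ** 2 / denom)
    return out

# A linearly converging sequence: x_{n+1} = cos(x_n), limit ~ 0.7390851...
xs = [1.0]
for _ in range(8):
    xs.append(math.cos(xs[-1]))

print(xs[-1])           # plain term, still some distance from the limit
print(aitken(xs)[-1])   # accelerated term, much closer to the limit from the same data
```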
Evolutionary Programming | 1998
Bruno Beaufils; Jean-Paul Delahaye; Philippe Mathieu
The Classical Iterated Prisoner's Dilemma (CIPD) is used to study the evolution of cooperation. We show, with a genetic approach, how basic ideas can be used to automatically generate a great number of strategies. We then show some results of ecological evolution on those strategies, together with a description of the experiments we carried out. Our main purpose is to find an objective method for evaluating strategies for the CIPD. Finally, we use the earlier results to add a new argument confirming that there is, in order to be good, an infinite gradient in the level of structural complexity of strategies.
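A minimal sketch of the CIPD setting, with the usual payoff values (T=5, R=3, P=1, S=0) and two textbook strategies. The strategies and round count are illustrative only; they are not the genetically generated strategies studied in the paper.

```python
# Payoffs for (player, opponent) moves: C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    return "C" if not opp_hist else opp_hist[-1]

def always_defect(my_hist, opp_hist):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (300, 300)
print(play(tit_for_tat, always_defect))  # (99, 104): defection wins the match but scores poorly overall
```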
Behavior Research Methods | 2014
Nicolas Gauvrit; Hector Zenil; Jean-Paul Delahaye; Fernando Soler-Toscano
As human randomness production has come to be more closely studied and used to assess executive functions (especially inhibition), many normative measures have been suggested for assessing the degree to which a sequence is random-like. However, each of these measures focuses on one feature of randomness, forcing researchers to use multiple measures. Although algorithmic complexity has been suggested as a means of overcoming this inconvenience, it has never been used, because standard Kolmogorov complexity is inapplicable to short strings (e.g., of length l ≤ 50), due to both computational and theoretical limitations. Here, we describe a novel technique (the coding theorem method) based on the calculation of a universal distribution, which yields an objective and universal measure of algorithmic complexity for short strings that approximates Kolmogorov-Chaitin complexity.
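A small illustration of why a single frequency-based measure is not enough: the two sequences below have identical symbol frequencies, hence identical Shannon entropy, yet one is a trivial alternation and the other looks far less regular. An algorithmic-complexity measure is meant to capture that difference. The sequences are toy examples, not data from the paper.

```python
from collections import Counter
from math import log2

def shannon_entropy(s: str) -> float:
    """Shannon entropy per symbol, based only on symbol frequencies."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * log2(c / n) for c in counts.values())

regular   = "01" * 8              # 0101010101010101
irregular = "0110100110010110"    # same number of 0s and 1s, no simple period

print(shannon_entropy(regular), shannon_entropy(irregular))  # both exactly 1.0 bit/symbol
```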
arXiv: Information Theory | 2013
Fernando Soler-Toscano; Hector Zenil; Jean-Paul Delahaye; Nicolas Gauvrit
We show that real-value approximations of Kolmogorov-Chaitin complexity (K_m), obtained by applying the algorithmic coding theorem to the output frequency of a large set of small deterministic Turing machines with up to 5 states (and 2 symbols), are in agreement with the number of instructions used by the Turing machines producing a string s, which is consistent with strict integer-valued program-size complexity. Nevertheless, K_m proves to be a finer-grained measure and a potential alternative approach to lossless compression algorithms for small entities, where compression fails. We also show that neither K_m nor the number of instructions used shows any correlation with Bennett's Logical Depth LD(s) other than what is predicted by the theory. The agreement between theory and numerical calculations shows that, despite the undecidability of these theoretical measures, approximations are stable and meaningful, even for small programs and for short strings. We also announce a first beta version of an Online Algorithmic Complexity Calculator (OACC), based on a combination of theoretical concepts, as a numerical implementation of the Coding Theorem Method.
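A minimal simulator for the kind of small deterministic 2-symbol Turing machine discussed above, reporting the output string, the halting time, and the number of distinct instructions (transition-table entries) actually used in the run, the quantity the abstract compares against K_m. The transition table below is a made-up toy machine, not one from the paper's enumeration.

```python
def run(transitions, max_steps=1000):
    """transitions: {(state, symbol): (write, move, next_state)}; state 'H' halts."""
    tape, head, state = {}, 0, "A"
    used = set()  # transition-table entries actually exercised
    for step in range(1, max_steps + 1):
        symbol = tape.get(head, 0)
        write, move, nxt = transitions[(state, symbol)]
        used.add((state, symbol))
        tape[head] = write
        head += 1 if move == "R" else -1
        state = nxt
        if state == "H":
            output = "".join(str(tape[i]) for i in sorted(tape))
            return output, step, len(used)
    return None  # did not halt within the step budget

toy = {("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "H"),
       ("B", 0): (1, "L", "C"), ("B", 1): (0, "R", "A"),
       ("C", 0): (0, "R", "H"), ("C", 1): (1, "R", "H")}
print(run(toy))  # ('11', 3, 3): output string, halting time, instructions used
```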
PeerJ | 2015
Hector Zenil; Fernando Soler-Toscano; Jean-Paul Delahaye; Nicolas Gauvrit
We propose a measure based upon the fundamental theoretical concept in algorithmic information theory that provides a natural approach to the problem of evaluating n-dimensional complexity by using an n-dimensional deterministic Turing machine. The technique is interesting because it provides a natural algorithmic process for symmetry breaking, generating complex n-dimensional structures from perfectly symmetric and fully deterministic computational rules, producing a distribution of patterns as described by algorithmic probability. Algorithmic probability also elegantly connects the frequency of occurrence of a pattern with its algorithmic complexity, hence effectively providing estimates of the complexity of the generated patterns. Experiments to validate estimations of algorithmic complexity based on these concepts are presented, showing that the measure is stable in the face of some changes in computational formalism and that the results are in agreement with those obtained using lossless compression algorithms when both methods overlap in their range of applicability. We then use the output frequency of the set of 2-dimensional Turing machines to classify the algorithmic complexity of the space-time evolutions of Elementary Cellular Automata.
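A well-known 2-dimensional Turing machine, Langton's ant, is sketched below purely to illustrate how a fully deterministic rule on a symmetric, all-white grid breaks symmetry and generates a complex pattern; it is not one of the machines enumerated in the paper.

```python
def langtons_ant(steps=11000, size=81):
    """Run Langton's ant on a wrap-around grid and return the final grid."""
    grid = [[0] * size for _ in range(size)]  # 0 = white, 1 = black
    x = y = size // 2                         # start at the centre of the grid
    dx, dy = 0, -1                            # facing "up"
    for _ in range(steps):
        if grid[y][x] == 0:                   # on white: turn right
            dx, dy = -dy, dx
        else:                                 # on black: turn left
            dx, dy = dy, -dx
        grid[y][x] ^= 1                       # flip the cell's colour
        x, y = (x + dx) % size, (y + dy) % size
    return grid

grid = langtons_ant()
print(sum(map(sum, grid)), "black cells")     # a non-trivial, asymmetric pattern
```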
Theoretical Computer Science | 1991
Jean-Paul Delahaye; V. Thibau
Abstract: The aim of this paper is to propose a logical and algebraic theory which seems well-suited to logic programs with negation and deductive databases. This theory has properties similar to those of the Prolog theory restricted to programs with Horn clauses, and can thus be considered an extension of the usual theory. This parallel with logic programming without negation rests on the introduction of a third truth value (Indefinite) and of a new non-monotonic implication connective. Our proposal differs from the other ways of introducing a third truth value already used in logic programming and databases, but it is related to some of them, especially to Fitting's theory. We introduce a "consequence" operator associated with a logic program with negation which extends the operator of Apt and Van Emden. In the case of a consistent program, the post-fixpoints of this operator are the models of the program, as is usually the case. This operator is related to Fitting's, the relation being obtained by completing the program. Finally, we give an operational semantics for a program with negation by deriving a three-valued interpreter from a two-valued one.
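To make the three-valued setting concrete, here is a small sketch of strong Kleene connectives with the third value standing for "Indefinite". The paper's specific non-monotonic implication connective is not reproduced here; this only shows how conjunction, disjunction and negation behave once a third truth value is admitted.

```python
T, F, I = "True", "False", "Indefinite"

def neg(a):
    """Three-valued negation: Indefinite stays Indefinite."""
    return {T: F, F: T, I: I}[a]

def conj(a, b):
    """Strong Kleene conjunction: False dominates, then Indefinite."""
    if F in (a, b):
        return F
    if I in (a, b):
        return I
    return T

def disj(a, b):
    """Strong Kleene disjunction: True dominates, then Indefinite."""
    if T in (a, b):
        return T
    if I in (a, b):
        return I
    return F

print(conj(T, I), disj(F, I), neg(I))   # Indefinite Indefinite Indefinite
```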