Danny Dubé
Laval University
Publications
Featured research published by Danny Dubé.
Applied Intelligence | 1996
Jean-Yves Potvin; Danny Dubé; Christian Robillard
A competitive neural network model and a genetic algorithm are used to improve the initialization and construction phase of a parallel insertion heuristic for the vehicle routing problem with time windows. The neural network identifies seed customers that are distributed over the entire geographic area during the initialization phase, while the genetic algorithm finds good parameter settings in the route construction phase that follows. Computational results on a standard set of problems are also reported.
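The entry above pairs a neural initializer with a genetic search over the parameters of an insertion heuristic. As a rough illustration of the genetic part only (the objective `heuristic_quality` below is a hypothetical stand-in with made-up parameters, not the authors' routing heuristic), a parameter-tuning GA might be sketched as:

```python
import random

random.seed(1)

def heuristic_quality(alpha, beta):
    # Stand-in objective: in the real setting this would run the
    # insertion heuristic with weights (alpha, beta) on a VRPTW
    # instance and return the negated total route cost.
    return -((alpha - 0.3) ** 2 + (beta - 0.7) ** 2)

def genetic_search(pop_size=20, generations=30):
    # Each individual is a candidate parameter setting (alpha, beta).
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: heuristic_quality(*p), reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the better half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # crossover: average
            child = tuple(min(1.0, max(0.0, g + random.gauss(0, 0.05)))
                          for g in child)                     # mutation, clamped
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: heuristic_quality(*p))

alpha, beta = genetic_search()
```

In the paper's setting, each fitness evaluation would run the full insertion heuristic on a benchmark instance, which is why finding good settings by search rather than by hand pays off.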
data compression conference | 2010
Danny Dubé; Vincent Beaudoin
We present a technique that compresses a string w by enumerating all the substrings of w. The substrings are enumerated from the shortest to the longest and, within a length, in lexicographic order. Compression is obtained from the fact that the set of substrings of a particular length gives a lot of information about the substrings that are one bit longer. A linear-time, linear-space algorithm is presented. Experimental results show that the compression efficiency comes close to that of the best PPM variants. Other compression techniques are compared to ours.
Acta Informatica | 2000
Danny Dubé; Marc Feeley
We show in this paper that parsing with regular expressions instead of context-free grammars, when it is possible, is desirable. We present efficient algorithms for the different parsing tasks: producing the external and the internal representation of parse trees, and producing all possible parse trees or a single one. Each of our algorithms for producing a parse tree from an input string has optimal time complexity, linear in the length of the string. Moreover, ambiguous regular expressions can be used.
international symposium on information theory | 2011
Danny Dubé; Hidetoshi Yokoo
A new lossless data compression technique called compression by substring enumeration (CSE) has recently been introduced. Two conjectures were stated in the original paper and have been proved neither there nor in subsequent papers on CSE. The first conjecture says that CSE is universal for Markovian sources, provided an appropriate predictor is devised. The second says that CSE has linear complexity in both time and space. In this paper, we present an appropriate predictor and demonstrate that CSE indeed becomes universal for any order-k Markovian source. Finally, we prove that the compacted substring tree on which CSE's linear complexity depends effectively has linear size.
world congress on computational intelligence | 1994
Jean-Yves Potvin; Danny Dubé
A genetic algorithm is applied to the search for good parameter settings for a vehicle routing heuristic. The parameter settings identified by the genetic search allow the insertion heuristic to generate solutions that are much better than those previously reported on a standard set of routing problems.
international symposium on information theory and its applications | 2010
Danny Dubé
A new lossless data compression technique called compression via substring enumeration (CSE) has recently been introduced. It has been observed that CSE achieves lower performance on binary data. A hypothesis has been formulated suggesting that CSE loses track of the position of the bits relative to the byte boundaries more easily in binary data, and that this confusion incurs a penalty for CSE. This paper questions the validity of the hypothesis and proposes a simple technique to reduce the penalty, should the hypothesis be correct. The technique consists in adding a preprocessing step that inserts synchronization bits in the data in order to boost the performance of CSE. Experiments provide strong evidence that the hypothesis is true and demonstrate the effectiveness of the synchronization bits.
international symposium on information theory | 2008
Danny Dubé; Vincent Beaudoin
trans. computational science | 2010
Heidar Pirzadeh; Danny Dubé; Abdelwahab Hamou-Lhadj
data compression conference | 2011
Danny Dubé
information theory workshop | 2009
Danny Dubé; Vincent Beaudoin
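Several of the entries above concern compression by substring enumeration (CSE), which enumerates all distinct substrings of a string w from shortest to longest and, within a length, in lexicographic order. A minimal sketch of that enumeration order only (not of the coder itself; the function name is chosen here for illustration):

```python
def enumerate_substrings(w):
    """Yield the distinct substrings of w, shortest first,
    and in lexicographic order within each length."""
    n = len(w)
    for k in range(1, n + 1):
        # Every substring of length k extends, by one symbol, some
        # substring of length k - 1: this is the structural fact CSE
        # exploits to code each length's set relative to the previous one.
        yield from sorted({w[i:i + k] for i in range(n - k + 1)})

# Example over a small binary string:
print(list(enumerate_substrings("0110")))
# → ['0', '1', '01', '10', '11', '011', '110', '0110']
```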