Data compression and learning in time sequences analysis
A. Puglisi, D. Benedetto, E. Caglioti, V. Loreto, A. Vulpiani
Abstract
Motivated by the problem of defining a distance between two sequences of characters, we investigate the so-called learning process of typical sequential data compression schemes. We focus on how a compression algorithm optimizes its features at the interface between two different sequences A and B while zipping the sequence A+B obtained by simply appending B after A. We show the existence of a universal scaling function (the so-called learning function) which governs the way in which the compression algorithm learns a sequence B after having compressed a sequence A. In particular, it turns out that there exists a crossover length for the sequence B, depending on the relative entropy between A and B, below which the compression algorithm does not learn the sequence B (thereby measuring the relative entropy between A and B) and above which it starts learning B, i.e. optimizing the compression using the specific features of B. We check the scaling function on three main classes of systems: Bernoulli schemes, Markovian sequences and the symbolic dynamics generated by a nontrivial chaotic system (the Lozi map). As a final application of the method we present the results of a recognition experiment, namely recognizing which dynamical system produced a given time sequence. We finally point out the potential of these results for segmentation purposes, i.e. the identification of homogeneous sub-sequences in heterogeneous sequences (with applications in various fields, from genetics to time-series analysis).
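The central idea of the abstract can be sketched with an off-the-shelf compressor: zip A alone, zip the concatenation A+B, and attribute the extra compressed length to the cost of coding B with a compressor "trained" on A. The following is a minimal illustrative sketch, not the authors' actual procedure; it uses Python's zlib (a gzip-style LZ77 scheme) and toy sources of our own choosing (a periodic sequence versus a random one), and all function names are ours.

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length in bytes of the zlib-compressed representation of data."""
    return len(zlib.compress(data, 9))

def coding_cost(a: bytes, b: bytes) -> float:
    """Rough estimate, in bits per character, of the cost of coding b
    with a compressor that has already 'learned' a: compress the
    concatenation a+b, subtract the cost of a alone, and divide by
    the length of b."""
    return 8.0 * (compressed_size(a + b) - compressed_size(a)) / len(b)

random.seed(0)
# Toy sources (illustrative only): a is periodic, b_same comes from the
# same source, b_diff is an incompressible random byte sequence.
a = b"AB" * 10000
b_same = b"AB" * 1000
b_diff = bytes(random.getrandbits(8) for _ in range(2000))

# A piece drawn from the same source as A should cost far fewer bits per
# character than a piece drawn from a very different source; the gap is a
# crude proxy for the relative entropy between the two sources.
print(coding_cost(a, b_same), coding_cost(a, b_diff))
```

Note that for short B the estimate is dominated by the crossover effect described above: the compressor keeps using the statistics of A and does not yet "learn" B, which is precisely what makes the difference in coding cost informative about the relative entropy.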