
Publication


Featured research published by Andrea Sgarro.


IEEE Transactions on Information Theory | 1980

Universally attainable error exponents for broadcast channels with degraded message sets

János Körner; Andrea Sgarro

Universally attainable error exponents for broadcast channels with degraded message sets are obtained using a technique which generalizes that introduced by Csiszár, Körner, and Marton for the ordinary channel. Lower and upper bounds to the error probabilities over a single broadcast channel are also given.


IEEE Transactions on Information Theory | 1983

Error probabilities for simple substitution ciphers

Andrea Sgarro

Unlike recent works by Blom and Dunham on simple substitution ciphers, we do not consider equivocations (conditional entropies given the cryptogram) but rather the probability that the enemy makes an error when he tries to decipher the cryptogram or to identify the key by means of optimal identification procedures. This approach is suggested by the usual approach to coding problems taken in Shannon theory, where one evaluates error probabilities with respect to optimal encoding-decoding procedures. The main results are asymptotic; the same relevant parameters are obtained as in Blom or Dunham.


IEEE Transactions on Information Theory | 1979

The source coding theorem revisited: A combinatorial approach

Giuseppe O. Longo; Andrea Sgarro

A combinatorial approach is proposed for proving the classical source coding theorems for a finite memoryless stationary source (giving achievable rates and the error probability exponent). This approach provides a sound heuristic justification for the widespread appearance of entropy and divergence (Kullback's discrimination) in source coding. The results are based on the notion of composition class -- a set made up of all the distinct source sequences of a given length which are permutations of one another. The asymptotic growth rate of any composition class is precisely an entropy. For a finite memoryless constant source all members of a composition class have equal probability; the probability of any given class therefore is equal to the number of sequences in the class times the probability of an individual sequence in the class. The number of different composition classes is algebraic in block length, whereas the probability of a composition class is exponential, and the probability exponent is a divergence. Thus if a codeword is assigned to all sequences whose composition classes have rate less than some rate R, the probability of error is asymptotically the probability of the most probable composition class of rate greater than R. This is expressed in terms of a divergence. No use is made either of the law of large numbers or of Chebyshev's inequality.
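The counting argument behind composition classes (later known as type classes) is easy to check numerically. A minimal Python sketch for a binary memoryless source, with block length and composition chosen purely for illustration, comparing the growth rate of a composition class with the entropy of its empirical distribution:

```python
from math import comb, log2

n = 200            # block length (assumed for illustration)
k = 60             # number of ones defining the composition class
p = k / n          # empirical frequency of ones in the class

# Size of the composition class: all length-n binary strings with exactly k ones.
size = comb(n, k)

# Binary entropy of the empirical distribution of the class.
h = -p * log2(p) - (1 - p) * log2(1 - p)

# The growth rate (1/n) * log2 |class| approaches the entropy h as n grows.
rate = log2(size) / n
print(f"rate = {rate:.4f}, entropy = {h:.4f}")
```

The gap between the two numbers shrinks like (log n)/n, which is why the exponent of a composition class is "precisely an entropy" asymptotically.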


Theory and Application of Cryptographic Techniques | 1990

Informational divergence bounds for authentication codes

Andrea Sgarro

We give an easy derivation of Simmons’ lower bound for impersonation games which is based on the non-negativity of the informational divergence. We show that substitution games can be reduced to ancillary impersonation games. We use this fact to extend Simmons’ bound to substitution games: the lower bound we obtain performs quite well against those available in the literature.


Soft Methods in Probability and Statistics | 2010

Possibilistic Coding: Error Detection vs. Error Correction

Luca Bortolussi; Andrea Sgarro

Possibilistic information theory is a flexible approach to old and new forms of coding; it is based on possibilities and patterns, rather than pointwise probabilities and traditional statistics. Here we fill a gap in the possibilistic approach and extend it to the case of error detection, while so far only error correction had been considered.


Fuzzy Sets and Systems | 2004

An axiomatic derivation of the coding-theoretic possibilistic entropy

Andrea Sgarro

We re-take the possibilistic (strictly non-probabilistic) model for information sources and information coding put forward in (Fuzzy Sets and Systems 132–1 (2002) 11–32); the coding-theoretic possibilistic entropy is defined there as the asymptotic rate of compression codes, which are optimal with respect to a possibilistic (not probabilistic) criterion. By proving a uniqueness theorem, in this paper we provide also an axiomatic derivation for such a possibilistic entropy, and so are able to support its use as an adequate measure of non-specificity, or rather of “possibilistic ignorance”, as we shall prefer to say. We compare our possibilistic entropy with two well-known measures of non-specificity: Hartley measure as found in set theory and U-uncertainty as found in possibility theory. The comparison allows us to show that the latter possesses also a coding-theoretic meaning.
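For a concrete sense of the two measures of non-specificity being compared, a small sketch assuming the standard formula U(r) = sum over i >= 2 of r_i * log2(i/(i-1)) for a possibility distribution sorted in non-increasing order (this formula is an assumption of the sketch, not taken from the paper); when all n alternatives are fully possible, U reduces to the Hartley measure log2 n:

```python
from math import log2

def u_uncertainty(r):
    """U-uncertainty of a possibility distribution r, assumed sorted in
    non-increasing order with r[0] == 1 (normalisation)."""
    return sum(r[i] * log2((i + 1) / i) for i in range(1, len(r)))

# A crisp set of 8 equally possible alternatives: U equals Hartley's log2 8.
crisp = [1.0] * 8
print(u_uncertainty(crisp))        # ~3.0, the Hartley measure

# A graded possibility distribution is strictly less non-specific.
graded = [1.0, 0.8, 0.5, 0.2]
print(u_uncertainty(graded))
```

The telescoping sum makes the reduction to Hartley measure immediate: with all r_i = 1 the terms log2(i/(i-1)) collapse to log2 n.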


Fundamenta Informaticae | 2012

Spearman Permutation Distances and Shannon's Distinguishability

Luca Bortolussi; Liviu P. Dinu; Andrea Sgarro

Spearman distance is a permutation distance which might be used for codes in permutations besides Kendall distance. However, Spearman distance gives rise to a geometry of strings which is rather unruly from the point of view of error correction and error detection. Special care has to be taken to discriminate between the two notions of codeword distance and codeword distinguishability. This stresses the importance of rejuvenating the latter notion, extending it from Shannon's zero-error information theory to the more general setting of metric string distances.
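To illustrate how the two permutation distances differ, a small Python sketch computing Spearman's footrule (one common variant of Spearman distance; the paper may use the squared, rho-type version) and the Kendall distance on the same pair of permutations:

```python
from itertools import combinations

def spearman_footrule(p, q):
    """Spearman's footrule: total absolute displacement between two
    permutations given as equal-length rank sequences."""
    return sum(abs(a - b) for a, b in zip(p, q))

def kendall_tau(p, q):
    """Kendall distance: number of pairs of positions ordered
    differently by p and q (discordant pairs)."""
    n = len(p)
    return sum(1 for i, j in combinations(range(n), 2)
               if (p[i] - p[j]) * (q[i] - q[j]) < 0)

p = (0, 1, 2, 3)
q = (3, 1, 2, 0)                 # swap the outer elements
print(spearman_footrule(p, q))   # 6
print(kendall_tau(p, q))         # 5
```

A single transposition of distant elements already gives different values under the two distances, which is one source of the "unruly" geometry the abstract mentions.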


Eurasip Journal on Bioinformatics and Systems Biology | 2007

Splitting the BLOSUM score into numbers of biological significance

Francesco Fabris; Andrea Sgarro; Alessandro Tossi

Mathematical tools developed in the context of Shannon information theory were used to analyze the meaning of the BLOSUM score, which was split into three components termed the BLOSUM spectrum (or BLOSpectrum). These relate respectively to the sequence convergence (the stochastic similarity of the two protein sequences), to the background frequency divergence (typicality of the amino acid probability distribution in each sequence), and to the target frequency divergence (compliance of the amino acid variations between the two sequences to the protein model implicit in the BLOCKS database). This treatment sharpens the protein sequence comparison, providing a rationale for the biological significance of the obtained score, and helps to identify weakly related sequences. Moreover, the BLOSpectrum can guide the choice of the most appropriate scoring matrix, tailoring it to the evolutionary divergence associated with the two sequences, or indicate if a compositionally adjusted matrix could perform better.


Soft Methods in Probability and Statistics | 2006

Possibilistic Channels for DNA Word Design

Luca Bortolussi; Andrea Sgarro

We deal with DNA combinatorial code constructions, as found in the literature, taking the point of view of possibilistic information theory and possibilistic coding theory. The possibilistic framework allows one to tackle an intriguing information-theoretic question: what is channel noise in molecular computation? We examine in detail two representative DNA string distances used for DNA code constructions and point out the merits of the first and the demerits of the second. The two string distances are based on the reverse Hamming distance as required to account for hybridisation of DNA strings.
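The paper's two string distances are not reproduced here, but the reverse-Hamming idea they build on is easy to sketch. A minimal illustration assuming the plain-reversal variant (some DNA code constructions use the reverse complement instead), showing why reversal matters for hybridisation:

```python
def hamming(x, y):
    """Hamming distance between equal-length strings."""
    return sum(a != b for a, b in zip(x, y))

def reverse_hamming(x, y):
    """Reverse Hamming distance: Hamming distance between x and the
    reversal of y (plain-reversal variant, assumed for illustration)."""
    return hamming(x, y[::-1])

x = "ACGGT"
y = "TGGCA"
print(hamming(x, y))          # 4: far apart as ordinary strings
print(reverse_hamming(x, y))  # 0: y reversed is identical to x
```

Two strands that look very different position by position can coincide after reversal, so a DNA code keeping only the ordinary Hamming distance large may still admit unwanted hybridisations.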


Journal of Discrete Mathematical Sciences and Cryptography | 2006

Codeword distinguishability in minimum diversity decoding

Andrea Sgarro; Luca Bortolussi

We re-take a coding-theoretic notion which goes back to Cl. Shannon: codeword distinguishability. This notion is standard in zero-error information theory, but its bearing is definitely wider and it may help to better understand new forms of coding, e.g., DNA word design. In our approach, the underlying decoding principle is very simple and very general: one decodes by trying to minimise the diversity (in the simplest case the Hamming distance) between a codeword and the output sequence observed at the end of the noisy transmission channel. Symmetrically and equivalently, one may use maximum-similarity decoders and codeword confusabilities. The operational meaning of codeword distinguishability is made clear by a reliability criterion, which generalises the well-known criterion on minimum Hamming distances for error-correction codes. We investigate the formal properties of distinguishabilities versus diversities; these two notions are deeply related, and yet essentially different. An encoding theorem is put forward; as a case study, we examine a channel of cryptographic interest.
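The decoding principle described in the abstract is simple enough to sketch directly. A minimal Python illustration with the Hamming distance as the diversity and a toy two-word codebook (the codebook and received words are assumptions for illustration, not taken from the paper):

```python
def hamming(x, y):
    """Hamming distance between equal-length strings."""
    return sum(a != b for a, b in zip(x, y))

def min_diversity_decode(received, codebook, diversity=hamming):
    """Minimum-diversity decoding: return the codeword that minimises
    the diversity to the received sequence."""
    return min(codebook, key=lambda c: diversity(c, received))

# Toy repetition-style codebook (assumed for illustration).
codebook = ["00000", "11111"]
print(min_diversity_decode("01010", codebook))  # 00000
print(min_diversity_decode("11011", codebook))  # 11111
```

Any other metric string distance can be passed as the `diversity` argument, which is the generality the abstract emphasises.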

Collaboration


Dive into Andrea Sgarro's collaboration.

Top Co-Authors

János Körner

Sapienza University of Rome
