Giovanni Cancellieri
Marche Polytechnic University
Publication
Featured research published by Giovanni Cancellieri.
IEEE Transactions on Communications | 2001
Roberto Garello; Guido Montorsi; Sergio Benedetto; Giovanni Cancellieri
In this paper, the basic theory of interleavers is revisited in a semi-tutorial manner, and extended to encompass noncausal interleavers. The parameters that characterize the interleaver behavior (like delay, latency, and period) are clearly defined. The input-output interleaver code is introduced and its complexity studied. Connections among various interleaver parameters are explored. The classes of convolutional and block interleavers are considered, and their practical implementation discussed. The trellis complexity of turbo codes is tied to the complexity of the constituent interleaver. A procedure of complexity reduction by coordinate permutation is also presented, together with some examples of its application.
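As an illustration of the block interleaver class discussed above, the following Python sketch (an illustrative toy, not the paper's construction) writes symbols row-by-row into a rows × cols array and reads them out column-by-column; the deinterleaver applies the inverse permutation:

```python
def block_interleave(symbols, rows, cols):
    """Write symbols row-by-row into a rows x cols array,
    read them out column-by-column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(symbols, rows, cols):
    """Apply the inverse permutation: swap the roles of rows and columns."""
    assert len(symbols) == rows * cols
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))
mixed = block_interleave(data, 3, 4)      # [0, 4, 8, 1, 5, 9, ...]
assert block_deinterleave(mixed, 3, 4) == data
```

For a rows × cols interleaver, two symbols adjacent at the input end up at least rows positions apart at the output, which is the property that spreads burst errors across the constituent decoder's codewords.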
Journal of The Optical Society of America B-optical Physics | 1995
Giovanni Cancellieri; Franco Chiaraluce; Ennio Gambi; Paola Pierleoni
The feasibility of AND, OR, and EX-OR functions based on the interaction of optical solitons is proved by use of a five-layer dielectric structure with a nonlinear core. With the exception of the OR logic gate, the design of these devices is rather flexible, offering a wide variety of choices with respect to both the geometrical parameters and the input power levels.
IEEE Communications Letters | 2009
Marco Baldi; Giovanni Cancellieri; Andrea Carassai; Franco Chiaraluce
This letter proposes a new class of serially concatenated codes that can be viewed as low-density parity-check (LDPC) codes. They are derived from multiple serially concatenated single parity-check (M-SC-SPC) codes, but they use different components, which we call multiple parity-check (MPC) codes. In comparison with M-SC-SPC codes, the new scheme achieves better performance with similar complexity. The proposed codes can represent an alternative to the well-known family of repeat accumulate (RA) codes, being based on the same principles.
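The single parity-check (SPC) building block underlying both schemes is easy to sketch. The code below is illustrative only (it omits the interleaving between stages that the actual concatenated constructions use): an even-parity encoder, and serial concatenation adding one parity constraint per stage:

```python
def spc_encode(bits):
    """Append a single even-parity bit to the message."""
    return bits + [sum(bits) % 2]

def spc_check(codeword):
    """A valid even-parity codeword sums to 0 modulo 2."""
    return sum(codeword) % 2 == 0

msg = [1, 0, 1, 1]
cw = spc_encode(msg)          # [1, 0, 1, 1, 1]
assert spc_check(cw)

# Serial concatenation: re-encode the output of the first stage,
# so each stage adds one more parity constraint.
cw2 = spc_encode(cw)
assert spc_check(cw2) and spc_check(cw2[:-1])
```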
IEEE Transactions on Broadcasting | 2009
Marco Baldi; Franco Chiaraluce; Giovanni Cancellieri
LDPC codes are state-of-the-art error correcting codes, included in several standards for broadcast transmissions. Iterative soft-decision decoding algorithms for LDPC codes achieve excellent error correction capability; their performance, however, is strongly affected by finite-precision issues in the representation of inner variables. Great attention has been paid in recent literature to the topic of quantization for LDPC decoders, but mostly focusing on binary modulations and analysing finite-precision effects in a disaggregated manner, i.e., considering each block of the receiver separately. Modern telecommunication standards, instead, often adopt high-order modulation schemes, e.g., M-QAM, with the aim of achieving large spectral efficiency. This poses additional quantization problems that have received little attention in previous literature. This paper discusses the choice of suitable quantization characteristics for both the decoder messages and the received samples in LDPC-coded systems using M-QAM schemes. The analysis also involves the demapper block, which provides the initial likelihood values for the decoder, by relating its quantization strategy to that of the decoder. A new demapper version, based on approximate expressions, is also presented; it introduces a slight deviation from the ideal case but yields a low-complexity hardware implementation.
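The finite-precision representation of decoder inputs discussed above can be illustrated with a simple uniform log-likelihood-ratio (LLR) quantizer with saturation; the bit width and clipping threshold below are illustrative choices, not the values studied in the paper:

```python
def quantize_llr(llr, n_bits, clip):
    """Uniformly quantize an LLR to a signed n_bits integer,
    saturating at +/- clip (finite-precision decoder input)."""
    levels = 2 ** (n_bits - 1) - 1          # e.g. 6 bits -> levels in [-31, 31]
    step = clip / levels
    q = round(llr / step)
    return max(-levels, min(levels, q))

# A 6-bit quantizer saturating at |LLR| = 15.5 (step = 0.5):
assert quantize_llr(3.2, 6, 15.5) == 6      # 3.2 / 0.5 = 6.4 -> 6
assert quantize_llr(-40.0, 6, 15.5) == -31  # saturated
```

The trade-off the paper analyses lives in these two parameters: a small clip saturates reliable LLRs from high-order constellations, while a large clip wastes resolution near zero, where most decoding decisions are made.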
IEEE Communications Letters | 2014
Marco Baldi; Giovanni Cancellieri; Franco Chiaraluce
This paper presents a design technique for obtaining regular time-invariant low-density parity-check convolutional (RTI-LDPCC) codes with low complexity and good performance. We start from previous approaches which unwrap a low-density parity-check (LDPC) block code into an RTI-LDPCC code, and we obtain a new method to design RTI-LDPCC codes with better performance and shorter constraint length. Unlike previous techniques, we start the design from an array LDPC block code. We show that, for high-rate codes, a performance gain and a reduction in the constraint length are achieved with respect to previous proposals. Additionally, an increase in the minimum distance is observed.
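The unwrapping step mentioned above can be sketched as follows, assuming the classic construction in which the block code's parity-check matrix is split into lower- and upper-triangular parts that are then tiled diagonally in time (a simplification of the techniques the paper builds on, not its specific array-code design):

```python
import numpy as np

def unwrap(H):
    """Split a block-code parity-check matrix H into its lower-triangular
    part Hl (diagonal included) and strictly upper-triangular part Hu.
    Tiling the pair diagonally in time yields the time-invariant
    convolutional parity-check matrix."""
    Hl = np.tril(H)
    Hu = np.triu(H, k=1)
    return Hl, Hu

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 0],
              [1, 0, 1, 1]])
Hl, Hu = unwrap(H)
assert (Hl + Hu == H).all()   # the split loses nothing
```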
IEEE Transactions on Communications | 2012
Marco Baldi; Giovanni Cancellieri; Franco Chiaraluce
Product LDPC codes take advantage of LDPC decoding algorithms and the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low-weight codewords, which gives a further advantage in the error floor region.
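A minimal sketch of the product-code idea, using toy single parity-check (SPC) component codes rather than the LDPC components considered in the paper: the message is arranged in a square and a parity bit is appended to every row and every column, so the minimum distance is the product of the component distances (here 2 × 2 = 4):

```python
def spc_product_encode(msg, k):
    """Arrange a k*k message into a square, then append an even-parity
    bit to every row and every column (toy 2-D SPC product code)."""
    assert len(msg) == k * k
    rows = [msg[i * k:(i + 1) * k] for i in range(k)]
    rows = [r + [sum(r) % 2] for r in rows]               # row parities
    col_par = [sum(r[j] for r in rows) % 2 for j in range(k + 1)]
    return rows + [col_par]                               # (k+1) x (k+1) array

code = spc_product_encode([1, 0, 0, 1], 2)
# Every row and every column of the encoded array has even parity.
assert all(sum(r) % 2 == 0 for r in code)
assert all(sum(r[j] for r in code) % 2 == 0 for j in range(3))
```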
IEEE Communications Letters | 2012
Marco Baldi; Marco Bianchi; Giovanni Cancellieri; Franco Chiaraluce
We present a new family of low-density parity-check (LDPC) convolutional codes that can be designed using ordered sets of progressive differences. We study their properties and define a subset of codes in this class that have some desirable features, such as fixed minimum distance and Tanner graphs without short cycles. The design approach we propose ensures that these properties are guaranteed independently of the code rate. This makes these codes of interest in many practical applications, particularly when high rate codes are needed for saving bandwidth. We provide some examples of coded transmission schemes exploiting this new class of codes.
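One reason ordered sets of differences are useful in such designs is that a repeated pairwise difference among the positions of ones in overlapping rows produces a length-4 cycle in the Tanner graph. The following check (an illustrative sketch, not the paper's exact design rule) verifies that all pairwise differences in a set of positions are distinct:

```python
from itertools import combinations

def has_repeated_difference(positions):
    """Return True if two pairs of positions share the same difference,
    which would create a length-4 cycle in the Tanner graph."""
    diffs = [b - a for a, b in combinations(sorted(positions), 2)]
    return len(diffs) != len(set(diffs))

assert not has_repeated_difference([0, 1, 3, 7])   # all differences distinct
assert has_repeated_difference([0, 2, 4])          # difference 2 repeats
```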
IEEE Transactions on Communications | 1989
Giovanni Cancellieri
An investigation is presented of the maximum transmission efficiency that can be reached over an ideal photon counting channel, for a fixed bandwidth expansion factor. First, the ideal situation, represented by C. E. Shannon's (1959) theorem for discrete channels, is analyzed. A low average number of photons per pulse is shown to be preferable. A binary transmission with different a priori probabilities of the two transmitted symbols exhibits a higher efficiency than an orthogonal PPM (pulse-position modulation) transmission, whose M-ary symbols are equiprobable, for an equal bandwidth expansion. Practical transmissions are then considered. The PPM technique can be coded very efficiently and, in some situations, is characterized by a bit error probability lower than that of the uncoded binary technique. However, uncoded binary transmission remains extremely attractive for achieving ultrahigh transmission efficiencies.
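The comparison between sparse binary signalling and orthogonal PPM can be illustrated numerically (a simplified information-rate calculation that ignores photon noise; the specific numbers are illustrative):

```python
from math import log2

def binary_entropy(p):
    """Information per binary slot when a pulse is sent with probability p."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def ppm_rate(M):
    """Bits per slot of orthogonal M-ary PPM: log2(M) bits spread over M slots."""
    return log2(M) / M

# With pulse probability p = 1/16, sparse on-off signalling carries more
# bits per slot than 16-PPM (about 0.34 vs 0.25), at the same average
# pulse rate of one pulse per 16 slots.
assert binary_entropy(1 / 16) > ppm_rate(16)
```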
2009 First International Conference on Advances in Satellite and Space Communications | 2009
Marco Baldi; Giovanni Cancellieri; Franco Chiaraluce
We propose a novel family of bi-dimensional product codes that can be interpreted as Low-Density Parity-Check (LDPC) codes. They are based on the adoption, as components, of the so-called Multiple Serially Concatenated Multiple Parity-Check (M-SC-MPC) codes, which we have recently introduced. The distinctive feature of these codes is their encoding and decoding simplicity, which makes them a valid alternative to more conventional multi-dimensional Single Parity-Check (SPC) product codes. Since they also ensure good error rate performance, these codes are of interest in satellite and space communications, where limited complexity is often a fundamental requirement.
Journal of Lightwave Technology | 1996
Giovanni Cancellieri; Franco Chiaraluce; Ennio Gambi; Paola Pierleoni
The possibility of realizing an all-optical polarization modulator is theoretically demonstrated. The basic device exploits the cross-phase modulation effect involving spatial solitons in a Kerr-like nonlinear material. It is shown that a weak wave (modulating signal) can be used to control a stronger wave (pump signal), so as to obtain a polarization switch, if their input phase difference is exactly equal to 90° and the two waves exhibit mutually orthogonal polarizations. Since the required relative phase may be difficult and sometimes impossible to maintain, an improved solution is then proposed which eliminates any effect due to random phase difference between the colliding solitons. It is based on the adoption of a suitable control device employing two additional nonlinear blocks excited by solitons with equal amplitude and linear polarization. The resulting field is then subjected to self-phase modulation. The structure is completed by a cascade of linear and nonlinear waveguides acting as proper soliton filters. In this way, the behavior as an all-optical polarization modulator is ensured even in the most critical input conditions, corresponding to a phase difference of 12.5° for our choice of the physical and geometrical parameters.