Denis Teixeira Franco
Télécom ParisTech
Publications
Featured research published by Denis Teixeira Franco.
Microelectronics Reliability | 2008
Denis Teixeira Franco; Maí Correia Vasconcelos; Lirida A. B. Naviner; Jean-François Naviner
As integrated circuits scale down to nanometer dimensions, a significant reduction in the reliability of combinational blocks is expected. As a result, the susceptibility of circuits to intermittent and transient faults is becoming a key parameter in the evaluation of logic circuits, and fast and accurate reliability analysis methods must be developed. This paper presents a reliability analysis methodology based on signal probability, which is straightforward to apply and can be easily integrated into the design flow. The proposed methodology computes a circuit's signal reliability as a function of its logical masking capabilities, taking into account the occurrence of multiple simultaneous faults.
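For intuition only, the toy sketch below computes signal reliability by exhaustive enumeration for a small three-gate circuit, assuming each gate flips its output independently with probability q. The circuit topology, the fault model and the value of q are illustrative assumptions; this is not the paper's signal-probability algorithm, but it shows the logical masking effect the methodology quantifies.

```python
# Illustrative sketch only: exhaustive computation of signal reliability for a
# tiny circuit (two NAND gates feeding a NAND), assuming each gate flips its
# output independently with probability Q (von Neumann error model).
from itertools import product

Q = 0.01  # assumed per-gate error probability

def nand(a, b):
    return 1 - (a & b)

def circuit(x, faults):
    """Evaluate the 3-gate circuit; faults[i] == 1 flips gate i's output."""
    g0 = nand(x[0], x[1]) ^ faults[0]
    g1 = nand(x[2], x[3]) ^ faults[1]
    return nand(g0, g1) ^ faults[2]

def signal_reliability(q=Q):
    """P(output is correct), averaged over uniform inputs and all fault patterns."""
    total = 0.0
    for x in product((0, 1), repeat=4):
        golden = circuit(x, (0, 0, 0))
        for f in product((0, 1), repeat=3):
            p = 1.0
            for bit in f:
                p *= q if bit else (1 - q)
            if circuit(x, f) == golden:  # fault absent or logically masked
                total += p / 16          # uniform input distribution
    return total

print(f"signal reliability ~ {signal_reliability():.6f}")  # > (1-Q)**3 thanks to masking
```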
2008 Joint 6th International IEEE Northeast Workshop on Circuits and Systems and TAISA Conference | 2008
M.C.R. de Vasconcelos; Denis Teixeira Franco; L.A. de B. Naviner; Jean-François Naviner
Reliability analysis of digital circuits is becoming an important step in the design process of nanoscale systems. Understanding the relations between a circuit's structure and its reliability allows the designer to make tradeoffs that can improve the resulting design. This work presents a probabilistic model that computes the reliability of combinational logic circuits under single and multiple faults. The methodology is targeted (but not limited) to circuits generated by synthesis tools and standard-cell-based implementations. To validate the proposed methodology, we studied the reliability of several adder structures. The complexity and scalability of the model are discussed and some optimizations are presented.
Midwest Symposium on Circuits and Systems | 2008
Denis Teixeira Franco; Maí Correia Vasconcelos; Lirida A. B. Naviner; Jean-François Naviner
The reliability of integrated circuits has become an unavoidable subject in the nanoscale era. The susceptibility of combinational logic circuits to faults is of increasing interest, and fast and accurate methods are necessary to take reliability into account earlier in the design process. As circuits scale to nanometer dimensions, the probability of multiple simultaneous faults becomes higher and can no longer be neglected. In this work, a signal probability reliability analysis (SPRA) algorithm is presented, allowing the reliability of logic circuits to be evaluated with respect to multiple simultaneous faults.
Latin American Symposium on Circuits and Systems | 2013
Vladimir Afonso; Henrique Maich; Luciano Volcan Agostini; Denis Teixeira Franco
The new demands of high-resolution digital video applications are pushing the development of new techniques in the video coding area. This paper presents the hardware design of the sub-pixel interpolator for the Fractional Motion Estimation algorithm defined by the emerging HEVC standard. Based on evaluations using the HEVC reference software, a strategy was defined for the architectural design. The designed architecture was described in VHDL and synthesized for Altera FPGAs. The designed hardware presents good performance results, being able to process QFHD videos (3840×2160 pixels) in real time.
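For reference, the sketch below is a simplified software model of HEVC half-sample luma interpolation along one row of pixels, using the standard's 8-tap half-pel filter taps. Bit depth, padding and rounding details are simplified assumptions, and the paper's VHDL architecture is not reproduced here.

```python
# Simplified software model of HEVC half-sample luma interpolation along one
# row of pixels. The 8-tap coefficients are the HEVC half-pel luma filter taps;
# 8-bit samples and simple clipping are assumptions for the sake of the sketch.
HALF_PEL_TAPS = (-1, 4, -11, 40, 40, -11, 4, -1)

def interpolate_half_pel(row):
    """Return the half-sample positions between integer pixels of `row`."""
    half = []
    for i in range(3, len(row) - 4):          # needs 3 left / 4 right neighbours
        acc = sum(c * row[i - 3 + k] for k, c in enumerate(HALF_PEL_TAPS))
        half.append(min(255, max(0, (acc + 32) >> 6)))  # normalize by 64, clip to 8 bits
    return half

pixels = [10, 12, 20, 50, 90, 120, 122, 121, 119, 80, 30]
print(interpolate_half_pel(pixels))
```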
International Conference on Electronics, Circuits, and Systems | 2008
Denis Teixeira Franco; Maí Correia Vasconcelos; Lirida A. B. Naviner; Jean-François Naviner
This paper presents a reliability analysis algorithm that can be integrated into the design flow of logic circuits. Based on a four-state representation of signal probabilities and the propagation of these probabilities through the cells of a circuit, the signal reliability of the circuit can be obtained directly. The use of signal probabilities raises the well-known problem of signal correlation, and we present relaxation conditions that allow tradeoffs between the accuracy and the execution time of the algorithm. The main advantages of the proposed methodology are its simplicity and straightforward application, allowing easy integration with design tools.
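A minimal sketch of the core idea follows, assuming a simple gate error model (output flipped with probability q) and independent inputs. The four-state bookkeeping below is one plausible rendition, not necessarily the exact matrix layout used in the paper.

```python
# Minimal sketch of propagating a four-state signal probability through one
# gate. Each signal is a dict mapping (value, is_correct) -> probability,
# where is_correct says whether `value` matches the fault-free value.
from itertools import product

def propagate(gate_fn, inputs, q):
    """Return the four-state distribution at the output of `gate_fn`."""
    out = {(v, ok): 0.0 for v in (0, 1) for ok in (True, False)}
    for states in product(*[inp.items() for inp in inputs]):
        joint = 1.0
        for _, p in states:
            joint *= p
        actual_in = [v for (v, _), _ in states]
        golden_in = [v if ok else 1 - v for (v, ok), _ in states]
        golden_out = gate_fn(*golden_in)
        for flipped, p_flip in ((0, 1 - q), (1, q)):   # gate flips output with prob q
            val = gate_fn(*actual_in) ^ flipped
            out[(val, val == golden_out)] += joint * p_flip
    return out

nand = lambda a, b: 1 - (a & b)
# Example input: fault-free value uniform, already erroneous with probability 0.02.
sig = {(0, True): 0.49, (1, True): 0.49, (1, False): 0.01, (0, False): 0.01}
y = propagate(nand, [sig, sig], q=0.01)
print(y, "reliability =", y[(0, True)] + y[(1, True)])
```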
Annales des Télécommunications | 2006
Denis Teixeira Franco; Jean-François Naviner; Lirida A. B. Naviner
Integrated circuits have undergone constant evolution over the last decades, with increases in density and speed that follow the rates predicted by Moore's law. The tradeoffs in area, speed and power allowed by CMOS technology, and its capacity to integrate analog, digital and mixed components, are key reasons for its dissemination in the telecommunications field. In fact, the progress of CMOS technology is an important driver of telecommunications evolution, enabling the continuous integration of the complex functions needed by demanding applications. As integrated circuits evolve, they approach limits that make further improvements more difficult and even unpredictable. With deep-submicron structures, the yield of manufacturing processes is one of the main challenges of the semiconductor industry, with negative impacts on time-to-market and profitability. With reduced voltages and increased speed and density, the reliability of deep-submicron circuits is another concern for designers, since noise immunity is reduced and thermal noise effects show up. In this paper we present an overview of the issues related to the scaling of integrated circuits into nanometer technologies, detailing the yield and reliability problems. We present the state of the art in proposed solutions and alternatives that can improve the chances of broad utilization of these nanotechnologies.
International Conference on Design and Technology of Integrated Systems in Nanoscale Era | 2008
Lirida A. B. Naviner; M.C.R. de Vasconcelos; Denis Teixeira Franco; Jean-François Naviner
The development of fault tolerance techniques to enhance system dependability is becoming an unavoidable task as the IC industry enters the nanoscale era. However, before considering fault-tolerant design, an accurate method of reliability evaluation is necessary. Knowledge of the natural error-masking capabilities of a given circuit is essential for handling design tradeoffs at the conception stage. This paper deals with improvements to the reliability computation of logic circuits using the probabilistic transfer matrix (PTM) approach. We propose an algorithm that reduces memory requirements and can improve run-time performance in the most significant stage of the PTM algorithm, while keeping the same accuracy in the reliability analysis.
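For context, the sketch below applies the basic PTM approach to a toy two-gate circuit; the per-gate error probability and the uniform input distribution are assumptions, and the paper's memory optimizations are not shown.

```python
# Minimal sketch of the probabilistic transfer matrix (PTM) approach for a toy
# circuit: a 2-input NAND followed by an inverter (fault-free behaviour = AND).
import numpy as np

q = 0.01  # assumed per-gate error probability
# Rows index input combinations (00, 01, 10, 11), columns index output values (0, 1).
NAND_PTM = np.array([[q, 1 - q],
                     [q, 1 - q],
                     [q, 1 - q],
                     [1 - q, q]])
INV_PTM = np.array([[q, 1 - q],
                    [1 - q, q]])

# Serial composition is a matrix product; parallel gates would use np.kron,
# which is where the memory requirements blow up for wide circuits.
circuit_ptm = NAND_PTM @ INV_PTM            # 4x2: P(output | primary inputs)

golden = [0, 0, 0, 1]                       # fault-free AND outputs per input combination
input_prob = np.full(4, 0.25)               # uniform primary inputs (assumption)
reliability = sum(input_prob[i] * circuit_ptm[i, golden[i]] for i in range(4))
print(f"circuit reliability ~ {reliability:.6f}")
```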
Microelectronics Reliability | 2008
Maí Correia Vasconcelos; Denis Teixeira Franco; Lirida A. B. Naviner; Jean-François Naviner
Concurrent error detection (CED) schemes are becoming essential features of the design process as IC technologies progress into the nanoscale era. Soft error rate reduction has emerged as an important challenge, and several works are dedicated to quantifying the effective enhancement that CED brings to system reliability. However, none of them provides a comprehensive description of the output events that can occur in such schemes. In this paper we propose a methodology to evaluate circuits with CED, including the time penalty as a relevant metric even in hardware redundancy techniques. Our experiments have shown that systems can lose half of their throughput in multiple-fault environments, making the choice of the CED scheme strongly dependent on this analysis.
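As a rough illustration of the time penalty, the sketch below assumes a detect-and-retry CED policy with an independent per-operation error probability p_err. This model is a simplification, not the paper's methodology, but it shows how throughput drops to half when one attempt in two is flagged as erroneous.

```python
# Back-of-the-envelope model of the time penalty of a detect-and-retry CED
# scheme (assumption, not the paper's model). With independent retries, each
# committed result needs 1 / (1 - p_err) attempts on average, so relative
# throughput is (1 - p_err) and halves when half of the attempts are flagged.
def relative_throughput(p_err):
    """Expected throughput relative to an error-free run (detect + re-execute)."""
    return 1.0 - p_err  # = 1 / expected attempts per committed result

for p in (0.01, 0.1, 0.5):
    print(f"p_err = {p:.2f} -> throughput = {relative_throughput(p):.2f}x")
```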
International Conference on Electronics, Circuits, and Systems | 2013
Henrique Maich; Vladimir Afonso; Denis Teixeira Franco; Bruno Zatt; Marcelo Schiavon Porto; Luciano Volcan Agostini
This paper presents a hardware design for the Fractional Motion Estimation (FME) Interpolation Unit compatible with the High Efficiency Video Coding (HEVC) standard. The proposed architecture was designed considering a fixed 16×16 Prediction Unit (PU) size in order to drastically reduce the computational effort. This decision was based on several evaluations, using the HEVC Reference Software, of the number of occurrences of each PU size and their impact on coding efficiency. The designed architecture was described in VHDL and synthesized for an Altera Stratix III FPGA. The results show that the designed architecture is able to process QFHD videos at 60 frames per second with a 353.8 MHz clock frequency.
Data Compression Conference | 2013
Vladimir Afonso; Henrique Maich; Luciano Volcan Agostini; Denis Teixeira Franco
The new demands of high-resolution digital video applications are pushing the development of new techniques in the video coding area. This paper presents a simplified version of the original Fractional Motion Estimation (FME) algorithm defined by the emerging HEVC video coding standard, targeting a low-cost and high-throughput hardware design. Based on evaluations using the HEVC Model (HM), the HEVC reference software, a simplification strategy was defined for the hardware design, drastically reducing the HEVC complexity at the cost of some losses in compression rate and quality. The strategy considers only the most frequently used PU size in the Motion Estimation process, avoiding the evaluation of the 24 PU sizes defined in the HEVC standard and also avoiding the RDO decision process. This significantly reduces the ME complexity and causes a bit-rate loss lower than 13.18% and a quality loss lower than 0.45 dB. Even with these simplifications, the proposed solution remains fully compliant with the current version of the HEVC standard. The FME interpolation was also simplified for the hardware design through algebraic manipulations, converting multiplications into shift-and-add operations and sharing sub-expressions. The simplified FME interpolator was designed in hardware, and the results show low hardware resource usage and a processing rate high enough to process QFHD videos (3840×2160 pixels) in real time.
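As an illustration of the shift-and-add conversion mentioned above, the sketch below rewrites the constant multiplications of the 8-tap half-pel filter taps ({-1, 4, -11, 40, 40, -11, 4, -1}) using only shifts and additions. The particular decompositions are illustrative and may not match the factorizations and shared sub-expressions chosen for the paper's hardware.

```python
# Constant multiplications of the half-pel filter taps rewritten as shifts and
# adds. These decompositions are illustrative; the paper's hardware may factor
# and share sub-expressions differently.
def mul4(x):   return x << 2                      #  4x = 4x
def mul11(x):  return (x << 3) + (x << 1) + x     # 11x = 8x + 2x + x
def mul40(x):  return (x << 5) + (x << 3)         # 40x = 32x + 8x

def half_pel_tap_sum(p):
    """Shift-add version of the 8-tap half-pel filter for samples p[0..7]."""
    pos = mul40(p[3]) + mul40(p[4]) + mul4(p[1]) + mul4(p[6])
    neg = mul11(p[2]) + mul11(p[5]) + p[0] + p[7]
    return pos - neg

# Sanity check against the direct multiply-accumulate form.
taps = (-1, 4, -11, 40, 40, -11, 4, -1)
samples = [10, 12, 20, 50, 90, 120, 122, 121]
assert half_pel_tap_sum(samples) == sum(c * s for c, s in zip(taps, samples))
print(half_pel_tap_sum(samples))
```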