Sankalita Saha
Ames Research Center
Publications
Featured research published by Sankalita Saha.
reliability and maintainability symposium | 2012
José R. Celaya; Abhinav Saxena; Chetan S. Kulkarni; Sankalita Saha; Kai Goebel
The prognostic technique for a power MOSFET presented in this paper is based on accelerated aging of MOSFET IRF520Npbf devices in a TO-220 package. The methodology uses thermal and power cycling to accelerate the life of the devices. The major failure mechanism under these stress conditions is die-attach degradation, typical of discrete devices with lead-free solder die attachment. Die-attach degradation has been determined to result in an increase in ON-state resistance due to its dependence on junction temperature. Increasing resistance can therefore be used as a precursor of failure for the die-attach failure mechanism under thermal stress. A feature based on normalized ON-resistance is computed from in-situ measurements of the electro-thermal response. An extended Kalman filter is used as a model-based prognostics technique within the Bayesian tracking framework. The proposed technique reports on preliminary work that serves as a case study on the prediction of remaining life of power MOSFETs and builds upon the work presented in [1]. The algorithm considered in this study has been used as a prognostics algorithm in different applications and is regarded as a suitable candidate for component-level prognostics. This work furthers the validation of the algorithm by presenting it with real degradation data, including measurements from real sensors that exhibit the complications (noise, bias, etc.) usually not captured in simulated degradation data. The algorithm is developed and tested on the accelerated-aging timescale. In real-world operation, the timescale of the degradation process, and therefore of the RUL predictions, will be considerably larger. It is hypothesized that although the timescale will be larger, it remains constant through the degradation process, so the algorithm and model would still apply under the slower degradation process.
By using accelerated aging data with actual device measurements and real sensors (no simulated behavior), we assess how the algorithm behaves under realistic conditions.
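As an illustration of the Bayesian tracking approach described above, the following sketch tracks a normalized ON-resistance feature with an extended Kalman filter and extrapolates to a failure threshold for an RUL estimate. The exponential degradation model, the threshold `r_fail`, and all noise values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def ekf_rul(measurements, dt=1.0, r_fail=2.0, q=1e-6, r_noise=1e-4):
    """Track normalized ON-resistance with an EKF and extrapolate RUL.

    State x = [r, k]: resistance grows as r * exp(k * dt), with k a slowly
    varying degradation rate. r_fail (doubling of normalized resistance)
    is an assumed failure threshold.
    """
    x = np.array([measurements[0], 0.02])      # initial state guess
    P = np.diag([1e-2, 1e-2])                  # state covariance
    Q = np.diag([q, q])                        # process noise
    H = np.array([[1.0, 0.0]])                 # we observe r only
    for z in measurements[1:]:
        # predict: r' = r * exp(k * dt), k' = k
        r, k = x
        x = np.array([r * np.exp(k * dt), k])
        F = np.array([[np.exp(k * dt), r * dt * np.exp(k * dt)],
                      [0.0, 1.0]])             # Jacobian of the transition
        P = F @ P @ F.T + Q
        # update with the new resistance measurement
        y = z - x[0]
        S = (H @ P @ H.T)[0, 0] + r_noise
        K = (P @ H.T / S).ravel()
        x = x + K * y
        P = (np.eye(2) - np.outer(K, H.ravel())) @ P
    # extrapolate the fitted model until the failure threshold is crossed
    r, k = x
    steps = 0
    while r < r_fail and steps < 10000:
        r *= np.exp(k * dt)
        steps += 1
    return x, steps * dt
```

Fed a noiseless synthetic trace `exp(0.01 * t)`, the filter pulls the rate estimate toward the true 0.01 and the extrapolation returns the time remaining until the (assumed) doubling threshold.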
autotestcon | 2010
José R. Celaya; Philip F. Wysocki; Vladislav Vashchenko; Sankalita Saha; Kai Goebel
Prognostics is an engineering discipline that focuses on estimation of the health state of a component and the prediction of its remaining useful life (RUL) before failure. Health state estimation is based on actual conditions and is fundamental for the prediction of RUL under anticipated future usage. Failure of electronic devices is of great concern as future aircraft will see an increase in electronics used to drive and control safety-critical equipment throughout the aircraft. Therefore, the development of prognostics solutions for electronics is of key importance. This paper presents an accelerated aging system for gate-controlled power transistors. This system allows for the understanding of the effects of failure mechanisms and the identification of leading indicators of failure, which are essential in the development of physics-based degradation models and RUL prediction. In particular, this system isolates electrical overstress from thermal overstress. It also allows for precise control of internal temperatures, enabling the exploration of intrinsic failure mechanisms not related to the device packaging. By controlling the temperature within the safe operation levels of the device, accelerated aging is induced by electrical overstress only, avoiding the generation of thermal cycles. The temperature is controlled by active thermo-electric units. Several electrical and thermal signals are measured in-situ and recorded for further analysis in the identification of leading indicators of failure. This system therefore provides a unique capability in the exploration of different failure mechanisms and the identification of precursors of failure that can be used to provide a health management solution for electronic devices.
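The closed-loop temperature regulation described above can be illustrated with a toy loop: a PI controller driving a thermoelectric stage against a first-order thermal plant with constant self-heating from the electrical overstress. All gains and plant constants below are invented for illustration; the paper does not describe its rig at this level of detail.

```python
def regulate_temperature(t_ambient=25.0, t_setpoint=80.0, steps=200,
                         kp=0.8, ki=0.05, dt=1.0):
    """Toy PI loop holding a simulated junction temperature at a setpoint.

    The plant is a crude first-order thermal model: constant self-heating
    from the overstress current, a thermoelectric (TEC) control term, and
    convective losses to ambient. kp/ki and the model constants are
    illustrative, not measured values.
    """
    temp = t_ambient
    integral = 0.0
    history = []
    for _ in range(steps):
        error = t_setpoint - temp
        integral += error * dt
        tec_drive = kp * error + ki * integral   # signed TEC power command
        # first-order plant: self-heating + TEC action - losses to ambient
        temp += dt * (2.0 + 0.1 * tec_drive - 0.02 * (temp - t_ambient))
        history.append(temp)
    return history
```

In steady state the integral term settles at a negative TEC drive, i.e. the stage actively removes the excess self-heating so the junction holds the setpoint without thermal cycling.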
ieee aerospace conference | 2010
Sankalita Saha; Bhaskar Saha; Abhinav Saxena; Kai Goebel
Distributed prognostics architecture design is an enabling step for efficient implementation of health management systems. A major challenge encountered in such design is the formulation of optimal distributed prognostics algorithms. In this paper, we present a distributed Gaussian process regression (GPR) based prognostics algorithm whose target platform is a wireless sensor network. In addition to the challenges encountered in any distributed implementation, a wireless network poses constraints on communication patterns, making the problem more challenging. Battery prognostics is the application used to demonstrate the new algorithm. In order to present trade-offs among different prognostic approaches, we compare against a distributed implementation of particle filter based prognostics on the same battery data.
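A centralized (non-distributed) version of the GPR at the core of such an algorithm can be sketched in a few lines. The RBF kernel, its hyperparameters, and the synthetic linear capacity-fade data in the test are assumptions for illustration, not the paper's models or data.

```python
import numpy as np

def gpr_predict(x_train, y_train, x_test, length=5.0, sigma_f=1.0, noise=1e-4):
    """Minimal Gaussian process regression with an RBF kernel.

    Returns the posterior mean and standard deviation at x_test.
    Hyperparameters (length scale, signal variance, noise) are assumed,
    not learned.
    """
    def rbf(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f ** 2 * np.exp(-0.5 * (d / length) ** 2)

    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)          # K^{-1} y
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

Trained on a hypothetical linear capacity fade (0.5% per cycle, sampled every 5 cycles), the posterior mean interpolates unseen cycles closely; a distributed variant would partition the training data and communication across sensor nodes.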
ieee aerospace conference | 2010
Abhinav Saxena; José R. Celaya; Bhaskar Saha; Sankalita Saha; Kai Goebel
Uncertainty Representation and Management (URM) is an integral part of prognostic system development. As the capabilities of prediction algorithms evolve, research in developing newer and more competent methods for URM is gaining momentum. Beyond initial concepts, more sophisticated prediction distributions are obtained that are not limited to assumptions of normality and unimodality. Most prediction algorithms yield non-parametric distributions that are then approximated as known ones for analytical simplicity, especially in performance assessment methods. Although applying the prognostic metrics introduced earlier with their simple definitions has proven useful, much of the information in the distributions is discarded. In this paper, several techniques are suggested for incorporating the information available from Remaining Useful Life (RUL) distributions when applying the prognostic performance metrics. These approaches offer a convenient and intuitive visualization of algorithm performance with respect to metrics like prediction horizon and α-λ performance, and also quantify the corresponding performance while incorporating the uncertainty information. A variety of options have been shortlisted that could be employed depending on whether the distributions can be approximated by some known form or cannot be parameterized. This paper presents a qualitative analysis of how and when these techniques should be used, along with a quantitative comparison on a real application scenario. A particle filter based prognostic framework has been chosen as the candidate algorithm on which to evaluate the performance metrics, due to its unique advantages in uncertainty management and its flexibility in accommodating non-linear models and non-Gaussian noise. We investigate how performance estimates are affected by different options for integrating the uncertainty estimates.
This allows us to identify the advantages and limitations of these techniques and their applicability toward a standardized performance evaluation method.
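One of the metrics named above, α-λ performance, reduces in its simplest point-prediction form to an interval check: at a chosen fraction λ of the degradation trajectory, the predicted RUL must lie within ±α of the true RUL. The sketch below uses that simplified form and deliberately ignores the distribution-integration options the paper actually studies.

```python
def alpha_lambda(t, rul_pred, t_start, t_eof, alpha=0.2, lam=0.5):
    """Point-prediction alpha-lambda check.

    t        : prediction epochs
    rul_pred : RUL predicted at each epoch
    t_start  : first prediction time; t_eof: true end-of-life time
    At t_lambda = t_start + lam * (t_eof - t_start), the prediction made
    at the nearest epoch must fall within +/- alpha of the true RUL.
    """
    t_lambda = t_start + lam * (t_eof - t_start)
    idx = min(range(len(t)), key=lambda i: abs(t[i] - t_lambda))
    true_rul = t_eof - t[idx]
    lo, hi = (1 - alpha) * true_rul, (1 + alpha) * true_rul
    return lo <= rul_pred[idx] <= hi
```

Extending this to full RUL distributions (e.g. requiring a given probability mass inside the α-bounds rather than the point estimate) is exactly the kind of option the paper compares.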
computer vision and pattern recognition | 2005
Mainak Sen; Ivan Corretjer; Fiorella Haim; Sankalita Saha; Shuvra S. Bhattacharyya; Jason Schlessman; Wayne H. Wolf
In this paper we develop a design methodology for generating efficient, target-specific Hardware Description Language (HDL) code from an algorithm through the use of coarse-grain reconfigurable dataflow graphs as a representation to guide the designer. We demonstrate this methodology through an algorithm for gesture recognition that has been developed previously in software [9]. Using the recently introduced modeling technique of homogeneous parameterized dataflow (HPDF) [3], which effectively captures the structure of an important class of computer vision applications, we systematically transform the gesture recognition application into a streamlined HDL implementation, which is based on Verilog and VHDL. To demonstrate the utility and efficiency of our approach we synthesize the HDL implementation on the Xilinx Virtex II FPGA. This paper describes our design methodology based on the HPDF representation, which offers useful properties in terms of verifying correctness and exposing performance-enhancing transformations; discusses various challenges that we addressed in efficiently linking the HPDF-based application representation to target-specific HDL code; and provides experimental results pertaining to the mapping of the gesture recognition application onto the Virtex II using our methodology.
international symposium on power semiconductor devices and ic's | 2011
José R. Celaya; Abhinav Saxena; Sankalita Saha; Vladislav Vashchenko; Kai Goebel
This paper demonstrates how to apply prognostics to power MOSFETs (metal-oxide-semiconductor field-effect transistors). The methodology uses thermal cycling to age devices and Gaussian process regression to perform prognostics. The approach is validated with experiments on 100V power MOSFETs. The failure mechanism under the stress conditions is determined to be die-attach degradation. Change in ON-state resistance is used as a precursor of failure due to its dependence on junction temperature. The experimental data is augmented with a finite element analysis simulation based on a two-transistor model. The simulation assists in the interpretation of the degradation phenomena and the change in SOA (safe operation area).
reliability and maintainability symposium | 2012
José R. Celaya; Chetan S. Kulkarni; Sankalita Saha; Gautam Biswas; Kai Goebel
The focus of this work is the analysis of different degradation phenomena based on thermal overstress and electrical overstress accelerated aging systems, and the use of accelerated aging techniques for prognostics algorithm development. Results from thermal overstress and electrical overstress experiments are presented. In addition, preliminary results toward the development of physics-based degradation models are presented, focusing on the electrolyte evaporation failure mechanism. An empirical degradation model based on percentage capacitance loss under electrical overstress is presented and used in: (i) a Bayesian implementation of model-based prognostics using a discrete Kalman filter for health state estimation, and (ii) a dynamic system representation of the degradation model for forecasting and remaining useful life (RUL) estimation. A leave-one-out validation methodology is used to assess the validity of the approach under the small-sample-size constraint. The RUL estimation results are consistent across the validation tests when comparing relative accuracy and prediction error. The inaccuracy of the model in representing the change in degradation behavior observed at the end of the test data is also consistent throughout the validation tests, indicating the need for a more detailed degradation model or for an algorithm that could estimate model parameters online. Based on the observed degradation process under different stress intensities with rest periods, the need for more sophisticated degradation models is further supported: the current degradation model does not represent the capacitance recovery over rest periods following an accelerated aging stress period.
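The discrete Kalman filter health-state estimation in item (i) can be sketched for a linear stand-in degradation model: percentage capacitance loss growing at a constant rate per aging step. The rate, noise values, and failure threshold below are illustrative assumptions, not the paper's fitted empirical model.

```python
def kf_capacitance(measurements, loss_rate=0.05, q=1e-4, r=1e-2):
    """1-D discrete Kalman filter tracking percentage capacitance loss.

    Assumes loss_{k+1} = loss_k + loss_rate (a linear stand-in for an
    empirical degradation model); q and r are assumed process and
    measurement noise variances.
    """
    x, p = measurements[0], 1.0
    estimates = []
    for z in measurements[1:]:
        x, p = x + loss_rate, p + q      # predict one aging step
        k = p / (p + r)                  # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p
        estimates.append(x)
    return estimates

def rul_steps(loss_now, loss_fail=20.0, loss_rate=0.05):
    """Forecast aging steps until an assumed failure threshold (20% loss)."""
    return max(0.0, (loss_fail - loss_now) / loss_rate)
```

Chaining the filtered health state into the forecast mirrors the paper's two-part use of the degradation model: state estimation first, then RUL extrapolation.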
Eurasip Journal on Embedded Systems | 2007
Mainak Sen; Ivan Corretjer; Fiorella Haim; Sankalita Saha; Jason Schlessman; Tiehan Lv; Shuvra S. Bhattacharyya; Wayne H. Wolf
We develop a design methodology for mapping computer vision algorithms onto an FPGA through the use of coarse-grain reconfigurable dataflow graphs as a representation to guide the designer. We first describe a new dataflow modeling technique called homogeneous parameterized dataflow (HPDF), which effectively captures the structure of an important class of computer vision applications. This form of dynamic dataflow takes advantage of the property that in a large number of image processing applications, data production and consumption rates can vary, but are equal across dataflow graph edges for any particular application iteration. After motivating and defining the HPDF model of computation, we develop an HPDF-based design methodology that offers useful properties in terms of verifying correctness and exposing performance-enhancing transformations; we discuss and address various challenges in efficiently mapping an HPDF-based application representation into target-specific HDL code; and we present experimental results pertaining to the mapping of a gesture recognition application onto the Xilinx Virtex II FPGA.
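The HPDF balance property described above (production and consumption rates may vary between iterations but must agree across every edge within one iteration) can be expressed as a small consistency check. The actor names and token rates in the test are hypothetical.

```python
def check_hpdf(edges, rates):
    """Check the HPDF balance property for one application iteration.

    edges : list of (producer, consumer) actor-name pairs.
    rates : dict actor -> tokens produced/consumed per firing on each of
            its edges this iteration (the homogeneity assumption of HPDF:
            one parameterized rate per actor).
    Returns True iff every edge is balanced, i.e. the producer writes
    exactly as many tokens as the consumer reads this iteration.
    """
    return all(rates[p] == rates[c] for p, c in edges)
```

In a vision pipeline the rate might be the frame size, which can change between iterations; the check must hold for each iteration's rates, which is what makes single-appearance scheduling and HDL buffer sizing tractable.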
Computer Vision and Image Understanding | 2010
Sankalita Saha; Neal K. Bambha; Shuvra S. Bhattacharyya
Particle filtering methods are gradually attaining significant importance in a variety of embedded computer vision applications. For example, in smart camera systems, object tracking is a very important application, and particle filter based tracking algorithms have shown promising results with robust tracking performance. However, most particle filters are computationally intensive, intensifying the challenges faced in their real-time, embedded implementation. Many of these applications share common characteristics, so the same system design can be reused by identifying key system parameters and varying them appropriately. In this paper, we present a System-on-Chip (SoC) architecture involving both hardware and software components for a class of particle filters. The framework uses parameterization to enable fast and efficient reuse of the architecture, with minimal re-design effort, across a wide range of particle filtering applications and implementation platforms. Using this framework, we explore different design options for implementing three particle filtering applications on field-programmable gate arrays (FPGAs). The first two applications involve particle filters with one-dimensional state transition models and are used to demonstrate the key features of the framework. The main focus of this paper is the design methodology for hardware/software implementation of multi-dimensional particle filtering applications, which we explore in the third application, a 3D facial pose tracking system for video. In this multi-dimensional application, we extend the proposed architecture with models for hardware/software co-design so that limited hardware resources can be utilized most effectively. Our experiments demonstrate that the framework is easy and intuitive to use, while providing for efficient design and implementation.
We present different memory management schemes, along with results on trade-offs between area (FPGA resource requirement) and execution speed.
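A minimal bootstrap particle filter for a one-dimensional state transition model, of the kind used by the first two applications, might look like the following. The random-walk model, noise levels, and systematic resampling are generic textbook choices, not the paper's hardware mapping.

```python
import numpy as np

def particle_filter(observations, n=500, proc_std=0.5, obs_std=1.0, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state model.

    Each step: propagate particles, weight by a Gaussian observation
    likelihood, report the weighted mean, then systematically resample.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n)
    estimates = []
    for z in observations:
        # propagate through the state transition model
        particles = particles + rng.normal(0.0, proc_std, n)
        # weight by the observation likelihood
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(float(np.dot(w, particles)))
        # systematic resampling (typically the hardest step to parallelize,
        # which is why hardware/software partitioning matters here)
        positions = (rng.random() + np.arange(n)) / n
        cs = np.cumsum(w)
        cs[-1] = 1.0                     # guard against floating-point undershoot
        particles = particles[np.searchsorted(cs, positions)]
    return estimates
```

The per-step structure (propagate, weight, resample) is what the SoC framework parameterizes; a multi-dimensional state simply replaces the scalar particles with vectors.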
international conference on parallel processing | 2006
Sankalita Saha; Chung-Ching Shen; Chia-Jui Hsu; Gaurav Aggarwal; Ashok Veeraraghavan; Alan Sussman; Shuvra S. Bhattacharyya
Most image processing applications are characterized by computation-intensive operations and high memory and performance requirements. Parallelized implementation on shared-memory systems offers an attractive solution for this class of applications. However, the advantages of such architectures cannot be thoroughly exploited without proper modeling and analysis of the application. In this paper, we describe our implementation of a 3D facial pose tracking system using the OpenMP platform. Our implementation is based on a design methodology that uses coarse-grain dataflow graphs to model and schedule the application. We present our modeling approach, details of the implementation derived from it, and associated performance results. The parallelized implementation achieves significant speedup, and meets or exceeds the target frame rate under various configurations.
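The shared-memory, coarse-grain parallelization strategy can be mimicked in miniature with a thread pool mapping a per-block kernel over frame regions, a stand-in for an OpenMP parallel-for; the block kernel here is a hypothetical placeholder, not the paper's tracking computation.

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame_blocks(frame, n_workers=4, block_size=16):
    """Coarse-grain, data-parallel sketch on shared memory.

    Splits a 1-D "frame" into blocks and maps a per-block kernel across
    a thread pool, analogous to an OpenMP parallel-for over image regions
    in a coarse-grain dataflow actor.
    """
    blocks = [frame[i:i + block_size] for i in range(0, len(frame), block_size)]

    def kernel(block):               # placeholder for per-block vision work
        return sum(x * x for x in block)

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(kernel, blocks))
```

Because the blocks are independent, results can be gathered in order and combined, which is the same property the dataflow-graph scheduling exploits to extract parallelism.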