Monalisa Sarma
Indian Institute of Technology Kharagpur
Publications
Featured research published by Monalisa Sarma.
International Conference on Advanced Computing | 2007
Monalisa Sarma; Debasish Kundu; Rajib Mall
This paper presents a novel approach to generating test cases from UML design diagrams. Our approach transforms a UML sequence diagram into a graph called the sequence diagram graph (SDG) and augments the SDG nodes with the information necessary to compose test vectors. This information is mined from use case templates, class diagrams, and the data dictionary. The SDG is then traversed to generate test cases. The test cases thus generated are suitable for system testing and for detecting interaction and scenario faults.
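The SDG construction itself cannot be reproduced from the abstract; the sketch below is only a minimal illustration of the idea, assuming a hypothetical SequenceDiagramGraph class in which messages become nodes annotated with test data and every root-to-leaf traversal yields one test scenario.

```python
# Illustrative sketch of the SDG idea: nodes represent sequence-diagram
# messages (augmented with I/O data), edges represent possible successions,
# and every root-to-leaf path becomes a candidate system test case.
from collections import defaultdict

class SequenceDiagramGraph:
    def __init__(self):
        self.edges = defaultdict(list)   # node -> successor nodes
        self.info = {}                   # node -> augmented test data

    def add_message(self, src, dst, data=None):
        """Add an edge src -> dst; 'data' stands in for inputs/expected
        outputs mined from use case templates, class diagrams, etc."""
        self.edges[src].append(dst)
        self.info.setdefault(src, data or {})

    def test_paths(self, start):
        """Depth-first traversal collecting message paths (test scenarios)."""
        paths, stack = [], [(start, [start])]
        while stack:
            node, path = stack.pop()
            succ = self.edges.get(node, [])
            if not succ:
                paths.append([(n, self.info.get(n, {})) for n in path])
            for nxt in succ:
                if nxt not in path:          # guard against cycles
                    stack.append((nxt, path + [nxt]))
        return paths

# Example: a toy "withdraw cash" scenario
sdg = SequenceDiagramGraph()
sdg.add_message("insertCard", "enterPIN", {"input": "card#1234"})
sdg.add_message("enterPIN", "dispenseCash", {"input": "PIN ok"})
sdg.add_message("enterPIN", "rejectCard", {"input": "PIN wrong"})
for p in sdg.test_paths("insertCard"):
    print(" -> ".join(n for n, _ in p))
```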
Information & Software Technology | 2009
Monalisa Sarma; Rajib Mall
Adequate system testing of present-day application programs requires satisfactory coverage of system states and transitions. This can be achieved by using a system state model. However, system state models are rarely constructed by developers, as they are large and complex; the only state models developers typically construct are those of individual objects. Generating test cases for state-based system testing by traversing the statecharts of individual objects appears infeasible, since system test cases must be specified as scenario sequences rather than as transitions on individual object statecharts. In this paper, we propose a novel approach for covering the elementary transition paths of an automatically synthesized system state model. Covering the elementary transition paths also ensures coverage of all states and transitions of the system model.
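As a rough illustration of elementary transition paths (read here as loop-free paths, which is an assumption), the sketch below enumerates all maximal paths of a toy state model without revisiting a state; the model, states, and event names are hypothetical.

```python
# Hedged sketch: enumerate elementary (no repeated state) transition paths
# of a state model given as {state: [(event, next_state), ...]}.
def elementary_paths(model, start):
    results = []
    def dfs(state, visited, trail):
        extended = False
        for event, nxt in model.get(state, []):
            if nxt not in visited:
                extended = True
                dfs(nxt, visited | {nxt}, trail + [(state, event, nxt)])
        if not extended and trail:           # maximal loop-free path
            results.append(trail)
    dfs(start, {start}, [])
    return results

# Toy synthesized system state model
model = {
    "Idle":           [("insertCard", "Authenticating")],
    "Authenticating": [("pinOk", "Ready"), ("pinBad", "Rejected")],
    "Ready":          [("withdraw", "Dispensing")],
}
for path in elementary_paths(model, "Idle"):
    print(path)
```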
Asian Test Symposium | 2007
Monalisa Sarma; Rajib Mall
Coverage of system states during system testing is a nontrivial problem: the number of system states is usually very large, and developers often do not construct a system state model. In this paper, we propose a method for designing system test cases that achieve coverage of system states based on the UML models constructed during the normal development process. We use UML use case, sequence, and class-level statechart models to generate a set of scenario sequences that achieves adequate coverage of system states.
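The paper's actual scenario-selection procedure is not detailed in the abstract; the following sketch only conveys the flavour of the problem as a greedy set-cover over hypothetical scenarios, each annotated with the system states it exercises.

```python
# Hedged sketch: greedily pick scenario sequences until every system state
# is covered. Scenario names and states are purely illustrative.
def pick_scenarios(scenarios, all_states):
    """scenarios: {name: set of system states it exercises}."""
    uncovered, chosen = set(all_states), []
    while uncovered:
        best = max(scenarios, key=lambda s: len(scenarios[s] & uncovered))
        if not scenarios[best] & uncovered:
            break                      # remaining states unreachable
        chosen.append(best)
        uncovered -= scenarios[best]
    return chosen, uncovered

scenarios = {
    "login-ok":   {"Idle", "Authenticating", "Ready"},
    "login-fail": {"Idle", "Authenticating", "Rejected"},
    "withdraw":   {"Ready", "Dispensing", "Idle"},
}
chosen, missed = pick_scenarios(scenarios, {"Idle", "Authenticating",
                                            "Ready", "Rejected", "Dispensing"})
print(chosen, missed)
```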
India Software Engineering Conference | 2008
Debasish Kundu; Monalisa Sarma; Debasis Samanta
In this paper, we propose an integrated approach to generating system-level test cases and assessing the reliability of a system. The input to our approach is a use case model of the system under test. We generate test cases after converting the use case model into a testable model, and for each generated test case we analytically determine its importance for assessing the reliability of the system. Existing approaches to reliability assessment are based mainly on the usage probabilities of the different user operations invoked by different users. However, such methods have limitations: their performance depends entirely on the expertise of the system analysts, and the usage probabilities of user operations cannot be known for a completely new or not-yet-existing system. To address these limitations, we propose an analytical technique along with a reliability metric to assess the reliability of a system under development.
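The abstract does not give the proposed reliability metric; purely as a hypothetical illustration, one could weight each generated test case by its analytically determined importance and report an importance-weighted pass fraction.

```python
# Hypothetical illustration (not the paper's metric): an importance-weighted
# pass fraction over system-level test cases.
def weighted_reliability(results):
    """results: list of (importance, passed) pairs, importance > 0."""
    total = sum(w for w, _ in results)
    passed = sum(w for w, ok in results if ok)
    return passed / total if total else 0.0

# e.g. three test cases derived from the use case model
print(weighted_reliability([(0.5, True), (0.3, True), (0.2, False)]))  # 0.8
```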
SIGPLAN Notices | 2007
Monalisa Sarma; Rajib Mall
Many modern systems are state-based. For such systems, a system state model is important not only for understanding the behavior of the system, but also for test case design, test coverage analysis, maintenance, etc. However, developers rarely construct the system state model for practical systems because it is usually too complex and cumbersome to construct. On the other hand, developers normally construct the state models of individual classes. We propose a novel method to automatically synthesize the state model of a system by analyzing the different sequences of scenarios and determining whether these lead to any state changes of the individual objects.
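One plausible (and simplified) reading of the synthesis step is sketched below: a system state is modelled as the tuple of the current states of the participating objects, and replaying scenarios that change individual object states induces transitions of the synthesized system model. The object names and scenarios are illustrative.

```python
# Hedged sketch: a system state as a tuple of object states; replaying
# scenarios that change individual object states yields system transitions.
def synthesize(initial, scenarios):
    """initial: {object: state}; scenarios: list of sequences of
    (event, object, new_state). Returns the synthesized transition set."""
    transitions = set()
    for scenario in scenarios:
        current = dict(initial)
        for event, obj, new_state in scenario:
            before = tuple(sorted(current.items()))
            current[obj] = new_state
            after = tuple(sorted(current.items()))
            transitions.add((before, event, after))
    return transitions

initial = {"Account": "Active", "ATM": "Idle"}
scenarios = [
    [("insertCard", "ATM", "Busy"), ("withdraw", "Account", "Debited")],
    [("insertCard", "ATM", "Busy"), ("cancel", "ATM", "Idle")],
]
for t in sorted(synthesize(initial, scenarios)):
    print(t)
```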
International Conference of the IEEE Engineering in Medicine and Biology Society | 2017
Shabnam Samima; Monalisa Sarma; Debasis Samanta
Vigilance, or sustained attention, is defined as the ability to maintain concentrated attention over prolonged periods of time. It is important in industries such as aerospace and nuclear power, which involve extensive man-machine interaction and where the safety of any component, system, or the environment as a whole is crucial. Many methods for vigilance detection, based on biological and behavioral characteristics, have been proposed in the literature. Nevertheless, the existing methods suffer from high time complexity, cumbersome devices, and large equipment overhead. This paper aims to provide an alternative to the existing techniques using a brain-computer interface (BCI). EEG, being a non-invasive BCI technique, is popular in many applications. In this work, we utilize the P300 component of the event-related potentials (ERPs) of the EEG signal for the vigilance detection task, as it can be detected quickly and accurately. Through this work, we aim to establish the correlation between the P300 ERP and vigilance. We have performed a number of experiments to substantiate the correctness of our proposal and have also proposed an approach to measure the vigilance level.
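The paper's measurement procedure is not reproduced here; the sketch below only illustrates, with an assumed sampling rate, window, and threshold, how a P300 peak extracted from stimulus-locked EEG epochs could serve as a crude vigilance indicator.

```python
# Hedged sketch (numpy): average stimulus-locked EEG epochs and use the
# P300 peak amplitude in a 250-500 ms window as a crude vigilance indicator.
# Sampling rate, window and threshold are assumptions, not the paper's values.
import numpy as np

def p300_amplitude(epochs, fs=256):
    """epochs: array (n_trials, n_samples), each time-locked to a stimulus."""
    erp = epochs.mean(axis=0)                      # grand-average ERP
    lo, hi = int(0.25 * fs), int(0.50 * fs)        # 250-500 ms window
    return erp[lo:hi].max()

def vigilance_level(epochs, threshold=2.0):
    """Toy rule: a strong P300 peak suggests the subject is still vigilant."""
    return "vigilant" if p300_amplitude(epochs) >= threshold else "low"

# Synthetic demo: 20 one-second epochs with a bump near 300 ms plus noise
fs, t = 256, np.arange(256) / 256
bump = 3.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.03 ** 2))
epochs = bump + 0.5 * np.random.randn(20, 256)
print(vigilance_level(epochs))
```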
IEEE Region 10 Conference | 2004
Monalisa Sarma; Debasis Samanta; Anindya Sundar Dhar
Among block-matching motion estimation techniques, full search block matching (FSBM) gives the best video quality. However, the FSBM algorithm is computationally expensive and hence not suitable for real-time video. The three-step search (TSS) algorithm, on the other hand, requires far fewer computations and is suitable for low-bit-rate video applications, although it requires more complex hardware than its FSBM counterpart. In this paper, a new motion estimation technique based on the concept of multi-resolution is proposed. The proposed technique follows the three steps of the TSS algorithm, but each step operates at a different image resolution; the different resolutions of an image are obtained using the Haar wavelet transform. Experiments show that the proposed technique requires even fewer computations than the TSS algorithm while delivering better video quality. Further, the algorithm maps to a simple VLSI architecture with a lower gate count and lower power dissipation.
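The VLSI architecture is beyond a short sketch, but the coarse-to-fine search idea can be illustrated as follows: each of the three search steps runs on a different Haar-approximation level of the frames, and the motion vector found at a coarse level seeds the search at the next finer level. Block size, search ranges, and the demo frames are assumptions, not the paper's parameters.

```python
# Hedged sketch (numpy): a coarse-to-fine variant of three-step search where
# each step runs on a different Haar (2x2 average) resolution of the frames.
import numpy as np

def haar_approx(img):
    """One Haar level: 2x2 block averages (the LL approximation)."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def sad(a, b):
    return np.abs(a - b).sum()

def best_offset(ref, cur, y, x, bs, center, radius):
    """Search the 3x3 neighbourhood of 'center' at the given radius."""
    best, best_cost = center, np.inf
    block = cur[y:y + bs, x:x + bs]
    for dy in (-radius, 0, radius):
        for dx in (-radius, 0, radius):
            yy, xx = y + center[0] + dy, x + center[1] + dx
            if 0 <= yy <= ref.shape[0] - bs and 0 <= xx <= ref.shape[1] - bs:
                cost = sad(ref[yy:yy + bs, xx:xx + bs], block)
                if cost < best_cost:
                    best, best_cost = (center[0] + dy, center[1] + dx), cost
    return best

def multires_tss(ref, cur, y, x, bs=16):
    """Estimate the motion vector of the bs x bs block at (y, x) in 'cur'."""
    pyr = [(ref, cur)]
    for _ in range(2):                              # two Haar levels
        r, c = pyr[-1]
        pyr.append((haar_approx(r), haar_approx(c)))
    mv = (0, 0)
    for level in (2, 1, 0):                         # coarse to fine
        scale = 2 ** level
        r, c = pyr[level]
        mv = best_offset(r, c, y // scale, x // scale, bs // scale, mv, 1)
        if level:
            mv = (mv[0] * 2, mv[1] * 2)             # carry vector down a level
    return mv

# Synthetic demo: the current frame is the reference shifted down 4, right 2
ref = np.random.rand(64, 64)
cur = np.roll(np.roll(ref, 4, axis=0), 2, axis=1)
print(multires_tss(ref, cur, 16, 16))               # expected near (-4, -2)
```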
Journal of Systems and Software | 2015
Debasish Kundu; Monalisa Sarma; Debasis Samanta
Highlights: This research investigates a model-based approach to detecting infeasible paths. Two patterns and their effects on path infeasibility are investigated, and an elegant model-based approach to detecting all infeasible paths for these two patterns is proposed. The approach is more effective than the traditional code-based approach and can significantly reduce testing effort.
UML model-based analysis is gaining wide acceptance for its cost effectiveness and lower processing overhead compared to code-based analysis. A possible way to enhance the precision of the results of UML-based analysis is to detect infeasible paths in UML models. Our investigation reveals that two interaction patterns, called Null Reference Check (NLC) and Mutually Exclusive (MUX), can cause a large number of infeasible paths in UML sequence diagrams. To detect such infeasible paths, we construct a graph model (called SIG) and generate MM paths from it, where an MM path is an execution sequence of model elements from the start to the end of a method scope. We then determine the infeasibility of the MM paths with respect to the MUX and NLC patterns. Our model-based approach helps exclude the generation of test cases and test data for infeasible paths detected in advance, refine test effort estimation, and facilitate better test planning in the early stages of the software development life cycle.
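The SIG and MM-path machinery is not reproduced here; the toy sketch below only shows how a path, once encoded as a sequence of guard outcomes and calls, can be flagged infeasible when it takes contradictory outcomes of the same guard (a mutually-exclusive-style contradiction) or invokes a reference on the branch where its null check failed (a null-check-style contradiction). The encoding is entirely illustrative.

```python
# Hedged, toy encoding of the two patterns: a path is a list of steps, each
# step being either ("guard", condition, outcome) or ("call", receiver).
def is_infeasible(path):
    taken = {}              # condition -> outcome taken on this path
    null_refs = set()       # receivers known to be null on this path
    for step in path:
        if step[0] == "guard":
            _, cond, outcome = step
            if cond in taken and taken[cond] != outcome:
                return True  # contradictory outcomes of the same guard
            taken[cond] = outcome
            if cond.endswith("!= null") and outcome is False:
                null_refs.add(cond.split()[0])
        elif step[0] == "call" and step[1] in null_refs:
            return True      # call on a reference whose null check failed
    return False

path = [("guard", "acct != null", False), ("call", "acct")]
print(is_infeasible(path))   # True: dereference after a failed null check
```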
Advances in Software Engineering | 2014
Monalisa Sarma
Modern programs are large and complex, and it is essential that they be highly reliable. To help developers build reliable software, the Java programming language provides a rich set of exceptions and exception-handling mechanisms. Given a program with exception-handling constructs, effective testing requires determining whether all possible exceptions are raised and caught. However, complex exception-handling constructs make it tedious to trace which exceptions are handled where and which are passed on. In this paper, we address this problem and propose a mutation analysis approach to developing reliable object-oriented programs. We apply a number of mutation operators to create a large set of mutant programs with different types of faults and then generate test cases and test data to uncover exception-related faults. The resulting test suite is applied to the mutant programs, and the mutation score is measured to verify the effectiveness of the test suite. We have evaluated our approach on a number of case studies to substantiate the efficacy of the proposed mutation analysis technique.
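The paper's mutation operators are not listed in the abstract; the sketch below shows the standard mutation-score bookkeeping together with one toy exception-related operator (widening a caught exception type) applied to Java-like source text. The operator and sources are illustrative only.

```python
# Hedged sketch: standard mutation-score computation plus a toy
# exception-related mutation operator over Java-like source strings.
import re

def catch_type_mutants(source):
    """Generate mutants by widening each caught exception type to Exception."""
    mutants = []
    for m in re.finditer(r"catch\s*\(\s*(\w+)\s+(\w+)\s*\)", source):
        if m.group(1) != "Exception":
            mutants.append(source[:m.start(1)] + "Exception" + source[m.end(1):])
    return mutants

def mutation_score(kill_results, equivalent=0):
    """kill_results: list of booleans, True if some test killed the mutant."""
    scored = len(kill_results) - equivalent
    return sum(kill_results) / scored if scored else 1.0

src = "try { read(f); } catch (IOException e) { log(e); }"
print(len(catch_type_mutants(src)))          # 1 mutant generated
print(mutation_score([True, True, False]))   # 2 of 3 mutants killed -> 0.67
```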
Archive | 2019
Durbadal Chattaraj; Monalisa Sarma; Debasis Samanta
With the exponential growth of information technology, the amount of data to be processed has increased enormously. Managing such huge volumes of data has emerged as the "Big Data storage issue", which can only be addressed with new computing paradigms and platforms. The Hadoop Distributed File System (HDFS), the principal component of Hadoop, has evolved to provide the storage service in the Big Data paradigm. Although several studies have been conducted on HDFS, few works focus on the dependability analysis of its storage service. This work develops a mathematical model representing the storage service activities of HDFS and formulates its dependability attributes. To achieve this, a stochastic Petri net (SPN) based modeling technique is put forward. The proposed model accurately quantifies two important dependability metrics, namely the storage service reliability and availability of HDFS.
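The SPN model itself is not reproduced here; as a generic illustration, the reachability graph of a stochastic Petri net is a continuous-time Markov chain, so steady-state availability can be obtained from the chain's generator matrix. The two-state up/down model and the failure/repair rates below are assumptions, not the paper's parameters.

```python
# Hedged illustration: an SPN's reachability graph is a continuous-time
# Markov chain, so dependability metrics reduce to CTMC analysis. Here a
# two-state up/down model with assumed failure and repair rates.
import numpy as np

def steady_state(Q):
    """Solve pi Q = 0 with sum(pi) = 1 for the CTMC generator Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

lam, mu = 1e-4, 1e-1                 # assumed failure and repair rates (per hour)
Q = np.array([[-lam,  lam],          # state 0: storage service up
              [  mu,  -mu]])         # state 1: storage service down
pi = steady_state(Q)
availability = pi[0]                 # long-run fraction of time the service is up
reliability_24h = np.exp(-lam * 24)  # probability of no failure in 24 hours
print(round(availability, 6), round(reliability_24h, 6))
```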