Siti Rochimah
Sepuluh Nopember Institute of Technology
Publication
Featured research published by Siti Rochimah.
international conference on software engineering advances | 2007
Siti Rochimah; Wan M. N. Wan Kadir; Abdul Hanan Abdullah
Requirements traceability is becoming an increasingly significant element in software engineering. It provides a critical function in the development and maintenance of a software system. From the software evolution point of view, requirements traceability plays an important role in facilitating evolution. Since evolution is inevitable, a traceability approach must take into account as many of the influencing aspects of the evolution processes as possible in order to minimize evolution effort. This paper evaluates several recent traceability approaches published in the literature, focusing on their contributions to software evolution. The evaluation results may be used as a basis for improving requirements traceability approaches so that software evolution tasks become simpler.
international conference on information technology, computer, and electrical engineering | 2014
Ratih Nur Esti Anggraini; Siti Rochimah; Kessya Din Dalmi
Infants are introduced to foods at the age of 0-2 years. However, parents cannot feed infants carelessly, because at this age infants are vulnerable to allergies, so parents have to consider both age and allergies in infant feeding. This application uses the forward chaining and backward chaining methods to recommend infant nutrition in accordance with age and allergies. The age and allergy constraints are encoded as rules that recommend food ingredients fitting the circumstances and determine whether a food is suitable for the infant's condition. The four most common allergy types are used as constraints in infant feeding: lactose intolerance, casein of cow's milk, egg, and fish. Usability testing showed a good result, namely 79.5%.
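The forward-chaining step described in this abstract can be sketched as follows. This is a minimal illustration of the inference mechanism only; the rule contents, food names, and allergy labels are made up for the example and are not taken from the paper:

```python
# Minimal forward-chaining sketch. Each rule is (conditions, conclusion);
# all rule contents below are illustrative, not from the paper.
RULES = [
    ({"age>=6mo", "no_lactose_intolerance"}, "recommend_yogurt"),
    ({"age>=8mo", "no_egg_allergy"}, "recommend_scrambled_egg"),
    ({"age>=6mo"}, "recommend_rice_porridge"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions are satisfied until no new fact appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Facts describe one infant; derived "recommend_*" facts are the output.
result = forward_chain({"age>=6mo", "no_lactose_intolerance"}, RULES)
```

Backward chaining would instead start from a candidate recommendation and check whether its conditions can be satisfied by the known facts.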
international conference on data and software engineering | 2014
Umi Laili Yuhana; Agus Budi Raharjo; Siti Rochimah
Nowadays, web-based applications are widely used in the education field, primarily to support the business processes of educational institutions, such as Academic Information Systems (AIS). However, not all AIS are well managed; therefore, an instrument is needed that can measure and ensure the quality of AIS. This paper introduces a framework for measuring the quality of web-based AIS from the visitors', developers', and institutions' perspectives. The AIS quality instrument is built from a combination of ISO/IEC 9126, ISO/IEC 25010:2011, the WBA Quality Model, and COBIT 4.1. This framework is expected to measure the quality of web-based academic information systems more accurately and to provide detailed recommendations for producing a better system, especially to support the business processes of AIS.
International Journal of Software Engineering and Knowledge Engineering | 2011
Siti Rochimah; Wan M. N. Wan Kadir; Abdul Hanan Abdullah
Software evolution is inevitable. When a system evolves, certain relationships among software artifacts must be maintained. Requirement traceability is one of the important factors in facilitating software evolution, since it maintains the artifact relationships before and after a change is performed. Requirement traceability can, however, be an expensive activity. Many researchers have addressed the problem of requirement traceability, especially to support software evolution activities, yet evaluations of these approaches show that most of them provide only limited support for software evolution. Based on these problems, we have identified three directions that are important for traceability to support software evolution: process automation, procedure simplicity, and achievement of best results. These three directions are addressed in our multifaceted approach to requirement traceability. The approach utilizes three facets to generate links between artifacts: syntactical similarity matching, link prioritization, and heuristic-list-based processes. This paper proposes utilizing the multifaceted approach to traceability generation and recovery in facilitating the software evolution process. A complete experiment has been performed on a real case study. The results show that utilizing these three facets to generate traceability among artifacts outperforms the existing approach, especially in terms of accuracy.
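The first facet, syntactical similarity matching, can be illustrated with a simple token-overlap score between a requirement and a code artifact. This sketch uses Jaccard similarity over camelCase-split tokens as one plausible instantiation; the paper does not specify this exact formula, and the requirement text and identifier names below are invented for the example:

```python
import re

def tokens(text):
    # Split camelCase and punctuation into lowercase word tokens.
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", text)
    return {p.lower() for p in parts}

def similarity(req, artifact):
    """Jaccard similarity between the token sets of two artifacts."""
    a, b = tokens(req), tokens(artifact)
    return len(a & b) / len(a | b) if a | b else 0.0

# Illustrative requirement and candidate code artifacts (not from the paper).
req = "The system shall validate user login credentials"
candidates = ["LoginValidator.validateCredentials", "ReportGenerator.renderChart"]
ranked = sorted(candidates, key=lambda c: similarity(req, c), reverse=True)
```

Link prioritization would then order the candidate links by this score before the heuristic-list-based processing refines them.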
international conference on information technology systems and innovation | 2015
Mohammad Farid Naufal; Siti Rochimah
One criterion for assessing software quality is ensuring that there is no defect in the software being developed. Software defect classification can be used to prevent software defects: the earlier defects are detected in the software life cycle, the lower the software development costs. This study proposes software defect classification using Fuzzy Association Rule Mining (FARM) based on complexity metrics. However, not all complexity metrics affect software defects, so a metric selection process using Correlation-based Feature Selection (CFS) is required to increase classification performance. The experiments are conducted on the NASA MDP open-source dataset, publicly accessible in the PROMISE repository. This dataset contains a history log of software defects based on software complexity metrics. In the NASA MDP dataset, the data distribution between defective and non-defective modules is not balanced; this is known as the class imbalance problem, and it can degrade classification performance. An oversampling technique is needed to solve it; the Synthetic Minority Oversampling Technique (SMOTE) is used in this study. With the advantages FARM offers when learning from datasets with quantitative attributes, combined with complexity metric selection using CFS and oversampling using SMOTE, this method is expected to perform better than previous methods.
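The oversampling step can be sketched as follows. Standard SMOTE interpolates between a minority sample and one of its k nearest minority neighbours; to keep the sketch dependency-free, this version interpolates against any other minority sample, and the metric vectors are invented toy data, not NASA MDP values:

```python
import random

def smote(minority, n_new, seed=0):
    """Generate synthetic minority samples by linear interpolation between
    a minority sample and another minority sample (standard SMOTE restricts
    the partner to the k nearest neighbours; simplified here)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        b = rng.choice([m for m in minority if m is not a])
        gap = rng.random()  # position along the segment between a and b
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(a, b)))
    return synthetic

# Toy defective-module metric vectors (the minority class).
defective = [(1.0, 2.0), (1.5, 2.5), (2.0, 3.0)]
new_samples = smote(defective, n_new=4)
```

The synthetic samples are then added to the training set so that the defective class is no longer under-represented before FARM learns its rules.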
international conference on data and software engineering | 2014
Bayu Priyambadha; Siti Rochimah
Copying and pasting fragments of code from one source file into another, with or without modification, is known as code cloning. Developers often do this because it is easier than writing code manually. On the other hand, this behavior increases the effort needed to maintain the code. Research on detecting the presence of clones has been done; detecting semantic clones, however, still needs deepening. One method for detecting semantic clones is based on the behavior of the code: the behavior is detected by looking at the input, output, and effects of a method. Methods with the same input, output, and effects perform the same function. However, detection based on input, output, and effects cannot be used on void methods or methods without parameters, since such methods expose no explicit input, output, or effects. This research therefore concerns how to find the input, output, and effects of void and non-parameterized methods. Detection of input, output, and effects is done using a Program Dependence Graph (PDG); the result is used to reconstruct void and non-parameterized methods. A trial is performed on each method (without exception) using random input data to obtain the method's behavior. The trial is done on small source code from jDraw. From the trial results, it can be concluded that semantic clone detection using the method-behavior approach is very promising. The method can also detect type 1, type 2, and type 3 clones in addition to semantic clones, with an accuracy of about 89%.
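The core idea of behavior-based clone detection — feed random inputs to two methods and compare their outputs — can be sketched as below. The PDG-based reconstruction of void methods is the paper's actual contribution and is not shown here; the two example functions are invented stand-ins for syntactically different but semantically equivalent methods:

```python
import random

def behaviorally_equivalent(f, g, input_gen, trials=100, seed=42):
    """Treat two callables as semantic-clone candidates if they agree
    on many randomly generated inputs."""
    rng = random.Random(seed)
    for _ in range(trials):
        args = input_gen(rng)
        if f(*args) != g(*args):
            return False
    return True

# Two syntactically different implementations with identical behavior.
def sum_loop(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    return sum(xs)

# Random input generator: short lists of small integers.
gen = lambda rng: ([rng.randint(-10, 10) for _ in range(rng.randint(0, 8))],)
clone = behaviorally_equivalent(sum_loop, sum_builtin, gen)
```

Agreement on random inputs is evidence, not proof, of equivalence, which is why the approach reports an accuracy figure rather than a guarantee.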
international conference on information technology and electrical engineering | 2013
Sugiyanto; Siti Rochimah
One of the difficulties in quality modeling is deciding the weights of the quality characteristics. This is due to the interrelations among the quality factors of the ISO 9126-based model: each characteristic can influence, or even contradict, the others. These interrelations affect the weights of the software quality characteristics, and consequently the software quality calculation. Therefore, this work integrates the DEMATEL and ANP methods to calculate the weights of the software quality characteristics of the ISO 9126-based model. The DEMATEL method is used to calculate the sum of influences for each ISO 9126 characteristic, while the ANP method is used to calculate local and global weights for each ISO 9126 sub-characteristic. The results of this study are the local weight values for each ISO 9126 characteristic and the global weights for each ISO 9126 sub-characteristic, which represent the level of importance of the ISO 9126 characteristics and sub-characteristics.
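The DEMATEL "sum of influences" computation can be sketched as follows: normalize the expert-judged direct-influence matrix D (here by its largest row sum) and accumulate the total-relation matrix T = D(I - D)^(-1) via the convergent series D + D² + D³ + …. The 2×2 direct-influence values below are invented for illustration, not ISO 9126 judgments from the paper:

```python
def dematel_total_influence(direct):
    """Return the DEMATEL total-relation matrix T = D + D^2 + D^3 + ...,
    where D is `direct` normalized by its largest row sum."""
    n = len(direct)
    s = max(sum(row) for row in direct)
    D = [[v / s for v in row] for row in direct]
    T = [row[:] for row in D]  # running sum, starts at D
    P = [row[:] for row in D]  # current power D^k
    for _ in range(200):       # enough iterations for convergence here
        P = [[sum(P[i][k] * D[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        T = [[T[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return T

# Illustrative direct influence between two quality characteristics.
direct = [[0, 3], [2, 0]]
T = dematel_total_influence(direct)
```

Row and column sums of T then give each characteristic's prominence and net influence, which feed the ANP weighting step.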
international conference on information and communication technology | 2016
Fachrul Pralienka Bani Muhamad; Riyanarto Sarno; Adhatus Solichah Ahmadiyah; Siti Rochimah
Graphical User Interface (GUI) testing done manually requires great effort, because it needs high precision and a great deal of time to run all the scenarios repeatedly. In addition, it is prone to errors, and often not all testing scenarios are executed. To solve these problems, automated GUI testing is proposed. The latest technique of automated GUI testing (the 3rd generation) works through a visual approach and is called Visual GUI Testing (VGT). Automating VGT requires testing tools; with VGT tools, GUI testing can be performed automatically and can mimic human behavior. However, in the software development process, VGT feedback is still not automated, so effort is still required to run the VGT manually and repeatedly. Continuous integration (CI) is a practice that can automate the build whenever any program code, or any version of it, changes; each build consists of compiling, inspecting the program code, testing, and deploying. To automate VGT feedback, a combination of the CI practice and the VGT practice is proposed. This paper focuses on combining and assessing VGT tools and CI tools, because there is no research about this yet. The results show that the combination of Jenkins and JAutomate receives the highest assessment.
2016 International Seminar on Application for Technology of Information and Communication (ISemantic) | 2016
Siska Arifiani; Siti Rochimah
Many algorithms are used in software statistical testing, such as search algorithms, genetic algorithms, clustering algorithms, Particle Swarm Optimization (PSO), and Ant Colony Optimization (ACO). Research has shown that ACO outperforms simulated annealing (a search algorithm), genetic algorithms, and other algorithms in statistical testing, in terms of both the quality of the generated test data and its stability; it is also comparable to PSO-based methods. This research proposes a statistical testing technique for gray-box testing using the ACO algorithm. Test cases and test data are generated from UML State Machine Diagrams, which can describe the structure of the source code of the Software Under Test (SUT) and cover the SUT's structural source code better than other UML diagrams. This research aims to compare UML State Machine Diagrams and source code in generating test cases and test data based on ACO statistical testing.
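How ACO can drive test-case generation from a state machine can be sketched as follows: ants walk the transition graph, choosing each next transition with probability proportional to its pheromone level, and transitions on paths that reach the target state are reinforced. This is a bare-bones illustration with an invented three-state machine; the paper's actual pheromone update and heuristic factors are not specified here:

```python
import random

def ant_walk(transitions, pheromone, start, goal, rng, max_steps=20):
    """One ant walks the state machine, picking the next state by
    pheromone-weighted roulette-wheel selection."""
    path, state = [], start
    for _ in range(max_steps):
        if state == goal:
            return path
        options = transitions.get(state, [])
        if not options:
            return None
        weights = [pheromone[(state, nxt)] for nxt in options]
        nxt = rng.choices(options, weights=weights, k=1)[0]
        path.append((state, nxt))
        state = nxt
    return None

def aco_cover(transitions, start, goal, ants=50, evaporation=0.5, seed=1):
    """Run several ants; evaporate pheromone, reinforce edges on found paths,
    and keep the shortest path that reaches the goal state."""
    rng = random.Random(seed)
    pheromone = {(s, t): 1.0 for s, outs in transitions.items() for t in outs}
    best = None
    for _ in range(ants):
        path = ant_walk(transitions, pheromone, start, goal, rng)
        if path and (best is None or len(path) < len(best)):
            best = path
        for e in pheromone:
            pheromone[e] *= (1 - evaporation)
        if path:
            for e in path:
                pheromone[e] += 1.0 / len(path)
    return best

# Toy state machine as might be drawn in a UML State Machine Diagram.
sm = {"Idle": ["Running"], "Running": ["Idle", "Done"], "Done": []}
best_path = aco_cover(sm, "Idle", "Done")
```

Each surviving path is a transition sequence, i.e. a candidate test case; test data would then be chosen to trigger those transitions.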
2016 International Seminar on Application for Technology of Information and Communication (ISemantic) | 2016
Vika F. Insanittaqwa; Siti Rochimah
Data-Driven Software Reliability Modeling (DDSRM) is an approach to the software reliability prediction problem that relies only on software failure data. There are two kinds of model architecture in this modeling: Single-Input Single-Output (SISO) and Multiple-Delayed-Input Single-Output (MDISO). In the MDISO architecture, the prediction process uses multiple inputs from the failure data to predict a single future output. Most MDISO literature uses the underlying assumption that a failure is correlated with a number of the most recent failures. In a more "generic" MDISO model, a failure can be correlated with any of the previous failures. The process of searching for which time lags to use as inputs in this model is sometimes referred to as model mining. This paper proposes applying the Binary Particle Swarm Optimization (BPSO) algorithm as model mining in the software reliability prediction problem, in terms of failure count, with Support Vector Regression (SVR) as the predictor. Initial experiments show that the proposed SVR-BPSO method yields more accurate predictions than prediction without model mining.
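The model-mining idea can be sketched as a binary PSO over lag masks: bit i of a particle's position says whether lag i+1 feeds the predictor, and fitness is the one-step-ahead prediction error. To keep the sketch dependency-free, the predictor is the mean of the selected lagged values rather than the SVR used in the paper, and the failure-count series is a toy sequence, not real failure data:

```python
import math
import random

def bpso_select_lags(series, max_lag=4, particles=10, iters=30, seed=3):
    """Binary PSO over lag masks; fitness is the mean squared one-step-ahead
    error of a mean-of-selected-lags predictor (a stand-in for SVR)."""
    rng = random.Random(seed)

    def predict(mask, t):
        vals = [series[t - (i + 1)] for i in range(max_lag) if mask[i]]
        return sum(vals) / len(vals) if vals else 0.0

    def fitness(mask):
        errs = [(series[t] - predict(mask, t)) ** 2
                for t in range(max_lag, len(series))]
        return sum(errs) / len(errs)

    pos = [[rng.randint(0, 1) for _ in range(max_lag)] for _ in range(particles)]
    vel = [[0.0] * max_lag for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pfit = [fitness(p) for p in pos]
    g = min(range(particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]

    for _ in range(iters):
        for i in range(particles):
            for d in range(max_lag):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.4 * r2 * (gbest[d] - pos[i][d]))
                # Sigmoid transfer: velocity -> probability that the bit is 1.
                pos[i][d] = 1 if rng.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            f = fitness(pos[i])
            if f < pfit[i]:
                pbest[i], pfit[i] = pos[i][:], f
                if f < gfit:
                    gbest, gfit = pos[i][:], f
    return gbest, gfit

# Toy failure-count series where each value repeats its 2nd-previous value,
# so the optimal mask selects even lags only.
series = [1, 5, 1, 5, 1, 5, 1, 5, 1, 5, 1, 5]
mask, err = bpso_select_lags(series)
```

In the paper's setting, the fitness evaluation would train and score an SVR on the lagged inputs chosen by each mask.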