Negar Koochakzadeh
University of Calgary
Publications
Featured research published by Negar Koochakzadeh.
Advances in Software Engineering | 2010
Negar Koochakzadeh; Vahid Garousi
Test redundancy detection reduces test maintenance costs and also helps ensure the integrity of test suites. One of the most widely used approaches for this purpose is based on coverage information. In a recent work, we have shown that although this information can be useful in detecting redundant tests, it may suffer from a large number of false-positive errors, that is, a test case being identified as redundant while it really is not. In this paper, we propose a semiautomated methodology to derive a reduced test suite from a given test suite while keeping the fault detection effectiveness unchanged. To evaluate the methodology, we apply mutation analysis to measure the fault detection effectiveness of the reduced test suite of a real Java project. The results confirm that the proposed manual interactive inspection process leads to a reduced test suite with the same fault detection ability as the original test suite.
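To make the coverage-based idea concrete, here is a minimal sketch (not the paper's actual methodology): a test is flagged as a redundancy candidate when everything it covers is also covered by the rest of the suite. As the abstract cautions, such candidates still need manual inspection because of false positives. The suite data is hypothetical.

```python
def find_redundancy_candidates(coverage):
    """coverage: dict mapping test name -> set of covered code items."""
    candidates = []
    for test, covered in coverage.items():
        rest = set().union(*(c for t, c in coverage.items() if t != test))
        if covered <= rest:  # subset of the others' combined coverage
            candidates.append(test)
    return candidates

# Hypothetical coverage data for illustration.
suite = {
    "testAdd":     {"Calc.add", "Calc.check"},
    "testAddZero": {"Calc.add"},                 # fully covered by testAdd
    "testDivide":  {"Calc.divide", "Calc.check"},
}
print(find_redundancy_candidates(suite))  # ['testAddZero']
```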
international conference on software testing, verification, and validation | 2009
Negar Koochakzadeh; Vahid Garousi; Frank Maurer
Measurement and detection of redundancy in test suites aim at test minimization, which in turn can help reduce test maintenance costs and ensure the integrity of test cases. Test suite reduction based on coverage information has been discussed in many previous works. However, the application of such techniques to real test suites and realistic measurements of redundancy have not yet been thoroughly investigated. To address this need, we formulate two experimental metrics for coverage-based measurement of test redundancy in the context of JUnit test suites. We then evaluate the approach by measuring the redundancy of four real Java projects. The automated measures are compared with manual redundancy decisions (performed through inspection by a tester). The results and lessons learned are interesting and somewhat surprising: besides showing the usefulness of coverage information, they reveal a set of shortcomings (in terms of precision) of the simplistic coverage-based redundancy measurement approach discussed in the literature. A root-cause analysis of our observations identifies several key lessons that should help testing researchers and practitioners devise techniques for more precise measurement of test redundancy.
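One plausible form a coverage-based redundancy metric could take (the paper's two metrics may differ in detail) is the fraction of a test's covered items that are also covered by the rest of the suite:

```python
def redundancy_ratio(coverage, test):
    """Fraction of `test`'s covered items also covered by the other tests.

    coverage: dict mapping test name -> set of covered code items.
    Returns a value in [0, 1]; 1.0 marks a fully redundant candidate.
    """
    others = set().union(*(c for t, c in coverage.items() if t != test))
    covered = coverage[test]
    return len(covered & others) / len(covered) if covered else 0.0
```

A ratio of 1.0 corresponds to the fully redundant case which, as the results above caution, should be treated as a candidate for inspection rather than a verdict.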
international conference on data mining | 2011
Negar Koochakzadeh; Atieh Sarraf; Keivan Kianmehr; Jon G. Rokne; Reda Alhajj
The social network methodology has gained considerable attention recently. The main motivation is to construct and analyze social networks that involve actors from a specific application domain. Advanced computing technology has made it possible to automate the process and has provided flexibility, robustness, and scalability. A large number of automated tools exist; each supports specific functions in addition to the general functions inspired by the social network methodology. After identifying some interesting functions lacking in the existing tools, we developed Net Driller as a powerful tool with distinctive capabilities. Compared to existing tools, Net Driller supports some unique tasks, such as network construction based on data analysis, mining the raw dataset to produce more informative links between actors. Net Driller also facilitates fuzzy search on network metrics. In this demo paper, we introduce the basic features of Net Driller, focusing on the two functionalities mentioned above.
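As a rough illustration of the network-construction-by-mining idea (not Net Driller's implementation), the sketch below derives weighted links between actors from their co-occurrence in raw records; the records and the co-occurrence criterion are assumptions.

```python
import itertools
import networkx as nx

# Hypothetical raw records; actors appearing together get a weighted edge.
records = [
    {"alice", "bob", "carol"},
    {"alice", "bob"},
    {"carol", "dave"},
]

g = nx.Graph()
for record in records:
    for a, b in itertools.combinations(sorted(record), 2):
        # Increment the edge weight for each co-occurrence.
        w = g.get_edge_data(a, b, default={"weight": 0})["weight"]
        g.add_edge(a, b, weight=w + 1)

print(g.edges(data=True))  # the ('alice', 'bob') edge carries weight 2
```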
advances in social networks analysis and mining | 2012
Negar Koochakzadeh; Keivan Kianmehr; Atieh Sarraf; Reda Alhajj
Making investment decisions on the various stocks available in the market is a challenging task. Econometric and statistical models, as well as machine learning and data mining techniques, have offered heuristic solutions with limited long-range success. In practice, the capabilities and intelligence of financial experts are required to build a managed portfolio of stocks. For non-professional investors, however, it is too complicated to make subjective judgments on available stocks, so they may prefer to follow an expert's investment decisions. For this purpose, it is critical to find an expert with similar investment preferences. In this work, we propose to apply the power of social network analysis in this domain. We first build a social network of financial experts based on their publicly available portfolios. This social network is then analyzed to recommend an appropriate managed portfolio to non-professional investors based on their behavioral similarity to the expert investors. The approach is evaluated through a case study on real portfolios. The results show that the proposed portfolio recommendation approach works well in terms of the Sharpe ratio as the portfolio performance metric.
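A minimal sketch of the recommendation idea, under the assumptions that portfolios are weight vectors over a shared stock universe and that cosine similarity stands in for the paper's behavioral-similarity analysis; all data below is hypothetical.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return over its standard deviation."""
    excess = np.asarray(returns, dtype=float) - risk_free
    return float(excess.mean() / excess.std(ddof=1))

# Hypothetical portfolio weights over the same three stocks.
experts = {"expertA": np.array([0.5, 0.3, 0.2]),
           "expertB": np.array([0.1, 0.1, 0.8])}
user = np.array([0.4, 0.4, 0.2])

# Recommend the expert whose portfolio is most similar to the user's.
best = max(experts, key=lambda name: cosine(user, experts[name]))
print("follow:", best)                                       # expertA
print("Sharpe:", round(sharpe_ratio([0.02, -0.01, 0.03, 0.01]), 3))
```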
Knowledge Based Systems | 2014
Ali Rahmani; Salim Afra; Omar Zarour; Omar Addam; Negar Koochakzadeh; Keivan Kianmehr; Reda Alhajj; Jon Rokne
Outlier detection has a large variety of applications, ranging from detecting intrusions in a computer network, to forecasting hurricanes and tornados in weather data, to identifying indicators of potential crisis in stock market data. The problem of finding outliers in sequential data has been widely studied in the data mining literature, and many techniques have been developed to tackle it in various application domains. However, many of these techniques rely on the peculiar characteristics of a specific type of data to detect outliers. As a result, they cannot easily be applied to other types of data in other application domains; at the least, they must be tuned and customized to adapt to the new domain. They may also need a certain amount of training data to build their models, which makes them hard to apply when only a limited amount of data is available. The work described in this paper tackles the problem by proposing a graph-based approach for the discovery of contextual outliers in sequential data. The developed algorithm offers a higher degree of flexibility and requires less information about the nature of the analyzed data than previous approaches described in the literature. To validate our approach, we conducted experiments on stock market and weather data and compared the results with those from our previous work. Our analysis demonstrates that the proposed algorithm is successful and effective in detecting outliers in data from different domains, one financial and the other meteorological.
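One graph-based reading of the idea, as a minimal sketch (the paper's algorithm differs in its details): sliding windows over a sequence become nodes, windows within a distance threshold are connected, and weakly connected windows are flagged as contextual outliers. The window size, distance measure, and thresholds are all illustrative assumptions.

```python
import numpy as np

def graph_outliers(series, window=3, eps=2.0, min_neighbors=2):
    """Flag window positions whose node degree falls below min_neighbors."""
    wins = np.array([series[i:i + window]
                     for i in range(len(series) - window + 1)])
    flagged = []
    for i, w in enumerate(wins):
        # Neighbors: windows within distance eps (edges in the graph).
        dists = np.linalg.norm(wins - w, axis=1)
        degree = int((dists <= eps).sum()) - 1  # exclude self
        if degree < min_neighbors:
            flagged.append(i)
    return flagged

data = np.array([1, 2, 1, 2, 1, 9, 1, 2, 1, 2], dtype=float)
print(graph_outliers(data))  # [3, 4, 5]: windows overlapping the spike
```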
international conference on data engineering | 2012
Keivan Kianmehr; Negar Koochakzadeh; Reda Alhajj
The user-centric query interface is a very common application that allows expressing both the input and the output using fuzzy terms. This is becoming a need in the evolving internet-based era, where web-based applications are widespread and the number of users accessing structured databases is increasing rapidly; restricting the user group to experts in query coding must be avoided. The Ask Fuzzy system has been developed to address this vital issue, which has social and industrial impact. It is an attractive and friendly visual user interface that facilitates expressing queries using both fuzzy and traditional methods. The fuzziness is not expressed explicitly inside the database; rather, it is absorbed and effectively handled by an intermediate layer cleverly incorporated between the front-end visual user interface and the back-end database.
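A minimal sketch of the intermediate-layer idea, not Ask Fuzzy's actual code: a fuzzy term such as "cheap" is resolved by a membership function over a crisp column, and rows whose membership degree exceeds a cutoff are returned. The schema, the term, and the trapezoidal shape are illustrative assumptions.

```python
def cheap(price):
    """Trapezoidal membership: fully cheap below 20, not cheap above 50."""
    if price <= 20:
        return 1.0
    if price >= 50:
        return 0.0
    return (50 - price) / 30.0

rows = [("pen", 3), ("lamp", 35), ("desk", 120)]
cutoff = 0.5
# "SELECT name FROM products WHERE price IS cheap" as the layer resolves it:
result = [(name, round(cheap(price), 2)) for name, price in rows
          if cheap(price) >= cutoff]
print(result)  # [('pen', 1.0), ('lamp', 0.5)]
```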
TAIC PART'10 Proceedings of the 5th international academic and industrial conference on Testing - practice and research techniques | 2010
Vahid Garousi; Negar Koochakzadeh
Code coverage tools (e.g., CodeCover for Java) and the textual coverage information (e.g., metric values) they produce are very useful for testers. However, with the increasing size and complexity of the code bases of both systems under test and their automated test suites (e.g., based on JUnit), there is a need for visualization techniques that enable testers to analyze code coverage at higher levels of abstraction. To address this need, we recently proposed a test coverage visualization tool. To assess the usability, effectiveness, and usefulness of this tool in unit testing and test maintenance tasks, we conducted a controlled experiment, the results of which show that the tool benefits testers more than textual coverage information alone.
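The abstraction step such a tool performs can be sketched as rolling raw per-method coverage up to the class level so weakly covered regions stand out; the coverage figures below are made up.

```python
from collections import defaultdict

# Hypothetical per-method line-coverage fractions.
method_cov = {
    "app.util.Strings.trim": 1.0,
    "app.util.Strings.pad":  0.2,
    "app.core.Engine.run":   0.9,
}

by_class = defaultdict(list)
for method, cov in method_cov.items():
    cls = method.rsplit(".", 1)[0]  # strip the method name
    by_class[cls].append(cov)

for cls, covs in sorted(by_class.items()):
    print(f"{cls}: {sum(covs) / len(covs):.0%}")  # app.util.Strings: 60%
```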
edbt icdt workshops | 2012
Keivan Kianmehr; Negar Koochakzadeh
Privacy concerns in many application domains prevent the sharing of data, which limits the ability of data mining technology to identify patterns and trends in large amounts of data. Traditional data mining algorithms have been developed within a centralized model, but distributed knowledge discovery has been proposed by many researchers as a route to privacy-preserving data mining. With vertically partitioned data, each site contains some of the attributes of the entities in the environment. Once an existing data mining technique has been executed at each site independently, the local results need to be combined to produce a globally valid result. Learning how to rank entities is central to many knowledge discovery problems. In this paper, we present a method for the ranking problem, based on the SVMRank algorithm, for situations where different sites hold different attributes of a common set of entities. Each site learns a ranking model of the entities without knowing the attributes held by the other sites, and at the end a global ranking model is built.
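A toy illustration of the vertically partitioned setting (the paper's SVMRank-based protocol is more involved): each site holds different attributes of the same entities, scores them locally with its own linear model, and only the local scores, not the attributes, are shared and combined into a global ranking. The weights and data are assumptions.

```python
import numpy as np

entities = ["e1", "e2", "e3"]
site_a = np.array([[1.0, 0.2], [0.4, 0.9], [0.7, 0.5]])  # site A's attributes
site_b = np.array([[3.0], [1.0], [2.0]])                 # site B's attributes

w_a = np.array([0.6, 0.4])   # site A's locally learned weights
w_b = np.array([0.5])        # site B's locally learned weights

# Only the local scores leave each site; raw attributes stay private.
global_scores = site_a @ w_a + site_b @ w_b
ranking = [entities[i] for i in np.argsort(-global_scores)]
print(ranking)  # ['e1', 'e3', 'e2']
```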
TAIC PART'10 Proceedings of the 5th international academic and industrial conference on Testing - practice and research techniques | 2010
Negar Koochakzadeh; Vahid Garousi
This tool paper presents the feature set, graphical user interface, and implementation details of a test coverage and test redundancy visualization tool called TeCReVis. The tool is an Eclipse plug-in supporting JUnit test suites; it helps testers analyze coverage information more effectively, in a visual way, compared to traditional text-based coverage tools.
ISMIS Industrial Session | 2011
Negar Koochakzadeh; Fatemeh Keshavarz; Atieh Sarraf; Ali Rahmani; Keivan Kianmehr; Mohammad Rifaie; Reda Alhajj; Jon G. Rokne
The realm of the stock market has always appealed to individuals because of its potential for gain. Finding an appropriate set of stocks to invest in, so as to ultimately gain more return and face less risk compared to other selections, attracts many people, domain experts or not. Several financial theories and approaches deal with the issue of return and risk. However, a significant remaining obstacle is applying those theories in the real world, which is sometimes an unattainable task. To cope with this impediment, machine learning and data mining techniques have been utilized, and their notable power has been thoroughly proven. In this paper, we introduce an automated system that collects information about the history of stocks in the market and suggests particular stocks to invest in. We argue that stocks behave socially, with the relationships and connections between them influenced mostly by external factors. In other words, stocks are actors that dynamically change camps and socialize based on the situation of the company, the news, the market status, the economy, and so on. Utilizing social network theory and analysis, we first build the network of stocks in the market and then cluster the stocks into distinct groups according to the similarity of their return trends, in order to comply with a diversification strategy. This allows us to propose stocks from different clusters to individuals. To examine the effectiveness of the proposed approach, we conducted experiments on stocks of the S&P 500 by constructing portfolios from our selected stocks as well as from a well-known benchmark in the area. The results show that the proposed portfolios have a higher Sharpe ratio than the benchmark.
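A minimal sketch of the pipeline described above: correlate daily return series, treat highly correlated stocks as connected, cluster the network, and pick one stock per cluster for a diversified portfolio. The return data, the 0.7 correlation threshold, and clustering by connected components are illustrative assumptions.

```python
import numpy as np
import networkx as nx

returns = {  # hypothetical daily returns
    "AAA": [0.01, 0.02, -0.01, 0.03],
    "BBB": [0.01, 0.02, -0.02, 0.03],   # moves with AAA
    "CCC": [-0.02, 0.01, 0.02, -0.01],  # moves differently
}

g = nx.Graph()
g.add_nodes_from(returns)
names = list(returns)
corr = np.corrcoef([returns[n] for n in names])
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if corr[i, j] > 0.7:  # similar return trend -> same "camp"
            g.add_edge(names[i], names[j])

clusters = list(nx.connected_components(g))
portfolio = [sorted(c)[0] for c in clusters]  # one stock per cluster
print(portfolio)  # ['AAA', 'CCC']
```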