
Publication


Featured research published by Negar Hariri.


International Conference on Software Engineering | 2011

On-demand feature recommendations derived from mining public product descriptions

Horatiu Dumitru; Marek Gibiec; Negar Hariri; Jane Cleland-Huang; Bamshad Mobasher; Carlos Castro-Herrera; Mehdi Mirakhorli

We present a recommender system that models and recommends product features for a given domain. Our approach mines product descriptions from publicly available online specifications and applies text mining with a novel incremental diffusive clustering algorithm to discover domain-specific features. It then generates a probabilistic feature model that represents commonalities, variants, and cross-category features, and uses association rule mining and the k-nearest-neighbor machine learning strategy to generate product-specific feature recommendations. Our recommender system supports the relatively labor-intensive task of domain analysis, potentially increasing opportunities for reuse, reducing time-to-market, and delivering more competitive software products. The approach is empirically validated against 20 different product categories using thousands of product descriptions mined from a repository of free software applications.
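
The abstract does not spell out the recommendation step, so the sketch below illustrates the k-nearest-neighbor idea in a minimal, hypothetical form: products are reduced to binary feature sets, neighbors are ranked by Jaccard similarity, and unseen features are scored by similarity-weighted votes. The data is invented, and the incremental diffusive clustering and probabilistic feature model are not reproduced.

```python
# Minimal k-NN feature recommendation sketch; data and weighting are
# illustrative assumptions, not the paper's configuration.

def jaccard(a: set, b: set) -> float:
    """Similarity between two feature sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend_features(profile: set, products: list, k: int = 5) -> list:
    """Rank features from the k products most similar to a partial profile."""
    neighbors = sorted(products, key=lambda p: jaccard(profile, p), reverse=True)[:k]
    scores = {}
    for p in neighbors:
        w = jaccard(profile, p)              # weight each vote by similarity
        for f in p - profile:                # only features not yet selected
            scores[f] = scores.get(f, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical product descriptions reduced to feature sets:
products = [{"spam filter", "imap", "pop3"},
            {"spam filter", "imap", "calendar"},
            {"calendar", "contacts", "imap"}]
print(recommend_features({"imap"}, products, k=2))   # e.g. ['spam filter', ...]
```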


Foundations of Software Engineering | 2013

Feature model extraction from large collections of informal product descriptions

Jean-Marc Davril; Edouard Delfosse; Negar Hariri; Mathieu Acher; Jane Cleland-Huang; Patrick Heymans

Feature Models (FMs) are used extensively in software product line engineering to help generate and validate individual product configurations and to provide support for domain analysis. As FM construction can be tedious and time-consuming, researchers have previously developed techniques for extracting FMs from sets of formally specified individual configurations, or from software requirements specifications for families of existing products. However, such artifacts are often not available. In this paper, we present a novel, automated approach for constructing FMs from publicly available product descriptions found in online product repositories and marketing websites such as SoftPedia and CNET. While each individual product description provides only a partial view of features in the domain, a large set of descriptions can provide fairly comprehensive coverage. Our approach utilizes hundreds of partial product descriptions to construct an FM; we describe the approach and evaluate it against antivirus product descriptions mined from SoftPedia.
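
One way to see how partial descriptions can still yield structure: if nearly every product that lists feature A also lists feature B, then "A requires B" is a candidate parent-child edge for the FM. The sketch below applies that heuristic; the confidence threshold and data are hypothetical, and this is not the paper's extraction pipeline.

```python
# Mine candidate "A requires B" edges from partial product descriptions.
# The 0.9 confidence threshold is an illustrative assumption.
from itertools import permutations

def implication_candidates(products: list, min_conf: float = 0.9):
    features = set().union(*products)
    support = {f: sum(f in p for p in products) for f in features}
    edges = []
    for a, b in permutations(features, 2):
        both = sum(a in p and b in p for p in products)
        if both / support[a] >= min_conf:
            edges.append((a, b, both / support[a]))   # a "requires" b
    return edges

descriptions = [{"firewall", "antivirus"}, {"antivirus", "email scan"},
                {"antivirus"}, {"firewall", "antivirus"}]
for a, b, conf in implication_candidates(descriptions):
    print(f"{a} -> {b} (confidence {conf:.2f})")      # e.g. firewall -> antivirus
```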


IEEE Transactions on Software Engineering | 2013

Supporting Domain Analysis through Mining and Recommending Features from Online Product Listings

Negar Hariri; Carlos Castro-Herrera; Mehdi Mirakhorli; Jane Cleland-Huang; Bamshad Mobasher

Domain analysis is a labor-intensive task in which related software systems are analyzed to discover their common and variable parts. Many software projects include extensive domain analysis activities intended to jumpstart the requirements process by identifying potential features. In this paper, we present a recommender system designed to reduce the human effort of performing domain analysis. Our approach relies on data mining techniques to discover common features across products as well as relationships among those features. We use a novel incremental diffusive algorithm to extract features from online product descriptions, and then employ association rule mining and the k-nearest-neighbor machine learning method to make feature recommendations during the domain analysis process. Our feature mining and feature recommendation algorithms are quantitatively evaluated, and the performance of the recommender system is further illustrated and evaluated within the context of a case study for an enterprise-level collaborative software suite. The results clearly highlight the benefits and limitations of our approach, as well as the necessary preconditions for its success.
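
As a rough illustration of the association-rule side, the following sketch mines pairwise rules from a product-by-feature dataset and fires them against an analyst's current selection. The thresholds, the data, and the restriction to single-antecedent rules are simplifying assumptions, not the paper's configuration.

```python
# Pairwise association rules over feature sets: mine, then fire against a
# partial selection. Thresholds are illustrative assumptions.

def mine_rules(products: list, min_support: float = 0.2, min_conf: float = 0.7):
    """Return rules (a, b, confidence) meaning 'products with a tend to have b'."""
    n = len(products)
    features = set().union(*products)
    rules = []
    for a in features:
        supp_a = sum(a in p for p in products) / n
        for b in features - {a}:
            both = sum(a in p and b in p for p in products) / n
            if both >= min_support and both / supp_a >= min_conf:
                rules.append((a, b, both / supp_a))
    return rules

def fire_rules(selected: set, rules: list) -> set:
    """Recommend consequents of rules whose antecedent is already selected."""
    return {b for a, b, _ in rules if a in selected and b not in selected}

rules = mine_rules([{"imap", "smtp"}, {"imap", "smtp", "spam filter"}, {"pop3"}])
print(fire_rules({"imap"}, rules))   # -> {'smtp'}
```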


International Conference on Software Engineering | 2012

Recommending source code for use in rapid software prototypes

Collin McMillan; Negar Hariri; Denys Poshyvanyk; Jane Cleland-Huang; Bamshad Mobasher

Rapid prototypes are often developed early in the software development process in order to help project stakeholders explore ideas for possible features and to discover, analyze, and specify requirements for the project. As prototypes are typically thrown away following the initial analysis phase, it is imperative for them to be created quickly with little cost and effort. Tool support for finding and reusing components from open-source repositories offers a major opportunity to reduce this manual effort. In this paper, we present a system for rapid prototyping that facilitates software reuse by mining feature descriptions and source code from open-source repositories. Our system identifies and recommends features and associated source code modules that are relevant to the software product under development. The modules are selected such that they implement as many of the desired features as possible while exhibiting the lowest possible levels of external coupling. We conducted a user study to evaluate our approach; the results indicated that our proposed system returned packages that implemented more features and were considered more relevant than those of the state-of-the-art approach.
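
The selection objective, covering as many desired features as possible while keeping external coupling low, lends itself to a greedy approximation. The sketch below is one such approximation under assumed inputs (a feature set and an external-dependency count per module); the paper's actual optimization may differ.

```python
# Greedy module selection: repeatedly take the module with the best trade-off
# between newly covered features and coupling. Inputs and the alpha weight
# are illustrative assumptions.

def select_modules(desired: set, modules: dict, coupling: dict, alpha: float = 0.5):
    """modules: name -> feature set; coupling: name -> external-dependency count."""
    chosen, covered = [], set()
    while covered != desired:
        def gain(m):
            new = len(modules[m] & (desired - covered))
            return new - alpha * coupling[m]
        best = max((m for m in modules if m not in chosen), key=gain, default=None)
        if best is None or not modules[best] & (desired - covered):
            break                                # nothing left adds coverage
        chosen.append(best)
        covered |= modules[best] & desired
    return chosen

modules = {"mail.py": {"imap", "pop3"}, "ui.py": {"inbox view"}, "net.py": {"imap"}}
coupling = {"mail.py": 2, "ui.py": 1, "net.py": 5}
print(select_modules({"imap", "inbox view"}, modules, coupling))  # ['ui.py', 'mail.py']
```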


Conference on Recommender Systems | 2013

Query-driven context aware recommendation

Negar Hariri; Bamshad Mobasher; Robin D. Burke

Context-aware recommender systems go beyond traditional personalized recommendation models by incorporating a form of situational awareness. They provide recommendations that not only correspond to a user's preference profile but are also tailored to a given situation or context. We consider the setting in which contextual information is represented as a subset of an item feature space describing the short-term interests or needs of a user in a given situation. This contextual information can be provided by the user in the form of an explicit query, or derived implicitly. We propose a unified probabilistic model that integrates user profiles, item representations, and contextual information. The resulting recommendation framework computes the conditional probability of each item given the user profile and the additional context. These probabilities are used as recommendation scores for ranking items. Our model is an extension of the Latent Dirichlet Allocation (LDA) model that provides the capability for joint modeling of users, items, and the meta-data associated with contexts. Each user profile is modeled as a mixture of the latent topics. The discovered latent topics enable our system to handle missing data in item features. We demonstrate the application of our framework for article and music recommendation; in the latter case, sets of popular tags from social tagging Web sites are used as context descriptions. Our evaluation results show that considering context can help improve the quality of recommendations.
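
The scoring rule can be illustrated with stand-in LDA parameters: blend the user's topic mixture with the context's, then rank items by their probability under the blended mixture. The matrices and the linear blend below are illustrative assumptions, not the paper's joint model of users, items, and context meta-data.

```python
# Rank items by P(item | blended topic mixture) using stand-in LDA parameters.
import numpy as np

rng = np.random.default_rng(0)
n_topics, n_items = 4, 10
phi = rng.dirichlet(np.ones(n_items), size=n_topics)   # P(item | topic), stand-in

def score_items(theta_user, theta_context, weight=0.5):
    """Blend profile and context topic mixtures, then score every item."""
    theta = (1 - weight) * theta_user + weight * theta_context
    return theta @ phi                                  # P(item | blended mixture)

theta_user = rng.dirichlet(np.ones(n_topics))           # stand-in profile mixture
theta_context = rng.dirichlet(np.ones(n_topics))        # stand-in context mixture
ranking = np.argsort(score_items(theta_user, theta_context))[::-1]
print(ranking[:3])                                      # top-3 item indices
```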


IEEE Transactions on Instrumentation and Measurement | 2011

A Distributed Measurement Scheme for Internet Latency Estimation

Negar Hariri; Behnoosh Hariri; Shervin Shirmohammadi

Estimating latency between hosts on the Internet can play a significant role in improving the performance of many services that use latency among hosts to make routing decisions. A popular example is peer-to-peer networks, which need to build an overlay between peers in a way that minimizes the message exchange delay among the peers. Acquiring latency information requires a considerable amount of measurement to be performed at each node in order for that node to keep a record of its latency to all the other nodes. Moreover, measured latency values are frequently subject to change, so the measurements need to be regularly repeated to keep up with network dynamics. This has motivated the use of techniques that alleviate the need for a large number of empirical measurements and instead try to predict the entire network latency matrix from a small set of latency measurements. Coordinate-based approaches are the most popular solutions to this problem. The basic idea behind coordinate-based schemes is to model the latency between each pair of nodes as the virtual distance between those nodes in a virtual coordinate system. This paper proposes a new decentralized coordinate-based solution to the problem of Internet delay measurement. The simulation results demonstrate that the proposed system provides relatively accurate estimations.
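
The payoff of the coordinate model is that a few coordinates per node replace a full row of the latency matrix: any pairwise latency is predicted as the Euclidean distance between virtual positions. A minimal sketch with hypothetical coordinates and measurements:

```python
# Predict pairwise latency as Euclidean distance in a virtual coordinate
# space; coordinates and measured RTTs below are hypothetical.
import math

coords = {"A": (0.0, 0.0), "B": (30.0, 40.0), "C": (60.0, 10.0)}   # ms-scaled

def predicted_latency(u: str, v: str) -> float:
    return math.dist(coords[u], coords[v])

measured = {("A", "B"): 52.0, ("A", "C"): 63.0, ("B", "C"): 40.0}
for (u, v), rtt in measured.items():
    est = predicted_latency(u, v)
    print(f"{u}-{v}: measured {rtt:.0f} ms, predicted {est:.1f} ms, "
          f"relative error {abs(est - rtt) / rtt:.2f}")
```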


Recommendation Systems in Software Engineering | 2014

Recommendation Systems in Requirements Discovery

Negar Hariri; Carlos Castro-Herrera; Jane Cleland-Huang; Bamshad Mobasher

Recommendation systems offer the opportunity to support and enhance a wide variety of activities in requirements engineering, and we discuss several potential uses. In particular, we highlight the role of recommendation systems in online forums used for capturing and discussing feature requests, where they mitigate problems introduced when face-to-face communication is replaced with potentially high-volume online discussions. In this context, recommendation systems can be used to suggest relevant topics to stakeholders and, conversely, to recommend expert stakeholders for each discussion topic. We also explore the use of recommendation systems in the domain analysis process, where they can be used to recommend sets of features to include in new products.
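
For the expert-recommendation use case, one simple baseline is to match a topic's terms against each stakeholder's past posts. The sketch below does this with cosine similarity over raw term counts; the data and the similarity choice are illustrative, not the chapter's method.

```python
# Rank stakeholders for a discussion topic by term overlap with their posts.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend_experts(topic_terms: list, stakeholder_posts: dict) -> list:
    """Rank stakeholders by similarity of their past posts to the topic."""
    topic = Counter(topic_terms)
    profiles = {s: Counter(w for post in posts for w in post.split())
                for s, posts in stakeholder_posts.items()}
    return sorted(profiles, key=lambda s: cosine(topic, profiles[s]), reverse=True)

posts = {"alice": ["imap sync bug", "imap folder view"], "bob": ["ui color theme"]}
print(recommend_experts("imap sync".split(), posts))   # ['alice', 'bob']
```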


Instrumentation and Measurement Technology Conference | 2010

A distributed measurement system for Internet delay estimation

Negar Hariri; Behnoosh Hariri; Shervin Shirmohammadi

Predicting the latency between hosts on the Internet can play a significant role in improving the performance of many services that use latency distances among hosts as a decision-making input. Although point-to-point delay information among Internet peers is required in many applications, such information is not easily available to the peers. Latency data acquisition requires a considerable amount of measurements to be performed at each node in order for that node to keep a record of its latency to all the other nodes. Moreover, the measurements need to be regularly repeated in order to stay current against network dynamics, where latency values are frequently subject to change. This has motivated the use of techniques that alleviate the need for a large number of empirical measurements and instead try to predict the entire latency matrix from a small set of latency measurements. Coordinate-based approaches are the most popular members of the family of latency prediction techniques. In these techniques, the latency between each pair of nodes is modeled as the virtual distance between those nodes in a virtual coordinate system. This article proposes a new decentralized coordinate-based approach to the problem of Internet delay measurement. Simulation results demonstrate that the proposed system provides relatively accurate estimations.


Virtual Environments, Human-Computer Interfaces and Measurement Systems | 2009

Preserving locality in MMVE applications based on ant clustering

Negar Hariri; Shervin Shirmohammadi; Jafar Habibi

Massively Multiuser Virtual Environment (MMVE) applications face the challenge of exchanging update messages among a large number of users. Real-time collaboration in virtual environments requires updates to be exchanged with minimal end-to-end latency and overhead. This article proposes a new clustering technique to improve the efficiency of update message exchange in MMVE applications. The proposed solution uses an ant clustering technique to cluster users based on their geographical proximity. These clusters can serve as localized routing regions in a hierarchical routing architecture and prevent local messages from being routed through long-distance hops. Clustering can therefore help reduce both update exchange latency and underlay network traffic.
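
The abstract does not give the clustering rules, but classic ant clustering (in the style of Lumer and Faieta) conveys the idea: an ant tends to pick up items that sit among dissimilar neighbors and to drop a carried item near similar ones, so clusters of nearby users emerge from purely local decisions. All constants and data below are illustrative assumptions.

```python
# One ant decision in Lumer-Faieta-style clustering of users by 2D position.
# The distance scale and the k1/k2 constants are illustrative assumptions.
import math, random

def neighborhood_similarity(item, neighbors, scale=50.0):
    """Average similarity of an item to nearby items; 0 if isolated."""
    if not neighbors:
        return 0.0
    sims = [max(0.0, 1 - math.dist(item, n) / scale) for n in neighbors]
    return sum(sims) / len(sims)

def p_pick(f, k1=0.1):
    return (k1 / (k1 + f)) ** 2     # likely to pick up poorly placed items

def p_drop(f, k2=0.15):
    return (f / (k2 + f)) ** 2      # likely to drop near similar items

item, nearby = (10.0, 12.0), [(11.0, 13.0), (9.0, 10.0)]   # hypothetical users
f = neighborhood_similarity(item, nearby)
print(f"pick-up prob: {p_pick(f):.2f}, drop prob: {p_drop(f):.2f}")
if random.random() < p_drop(f):
    print("ant drops the carried user into this cluster")
```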


Distributed Simulation and Real-Time Applications | 2009

DCS: A Distributed Coordinate System for Network Positioning

Negar Hariri; Jafar Habibi; Shervin Shirmohammadi; Behnoosh Hariri

Predicting latency between nodes on the Internet can have a significant impact on the performance of many services that use latency distances among nodes as a decision-making input. Coordinate-based approaches are a family of latency prediction techniques in which the latency between each pair of nodes is modeled as the virtual distance between those nodes in a virtual coordinate system. This article proposes the Decentralized Coordinate System (DCS), a fully distributed system that does not rely on any infrastructure support from the underlying network. DCS uses a two-phase algorithm: first, each host is assigned rough coordinates; then, the rough estimate is refined so that it gradually converges to the accurate relative position of the nodes. Simulation results demonstrate that the accuracy of DCS is competitive with existing network coordinate systems and that it can be considered a good alternative for some applications.
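
The two-phase description suggests a simple mental model: rough random placement, followed by repeated pairwise nudges that shrink the gap between virtual distance and measured latency. The sketch below implements that generic spring-relaxation refinement; the update rule and constants are assumptions, since the abstract does not give DCS's equations.

```python
# Generic coordinate refinement: nudge node pairs so that Euclidean distance
# approaches measured latency. Step size and round count are assumptions.
import math, random

def refine(coords: dict, measurements: dict, step: float = 0.05, rounds: int = 200):
    """coords: node -> [x, y]; measurements: (u, v) -> measured latency."""
    for _ in range(rounds):
        (u, v), rtt = random.choice(list(measurements.items()))
        dx = [a - b for a, b in zip(coords[u], coords[v])]
        dist = math.hypot(*dx) or 1e-9
        err = dist - rtt                     # positive: pair is too far apart
        for i in range(2):                   # move both endpoints a little
            coords[u][i] -= step * err * dx[i] / dist
            coords[v][i] += step * err * dx[i] / dist
    return coords

# Phase 1: rough placement; Phase 2: refinement against measured latencies.
coords = {n: [random.uniform(0, 100), random.uniform(0, 100)] for n in "ABC"}
rtts = {("A", "B"): 52.0, ("A", "C"): 63.0, ("B", "C"): 40.0}
refine(coords, rtts)
print({n: [round(c, 1) for c in xy] for n, xy in coords.items()})
```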

Collaboration


Dive into Negar Hariri's collaborations.

Top Co-Authors

Mehdi Mirakhorli

Rochester Institute of Technology
