Manoochehr Ghiassi
Santa Clara University
Publications
Featured research published by Manoochehr Ghiassi.
Expert Systems With Applications | 2013
Manoochehr Ghiassi; J. Skinner; David Zimbra
Twitter messages are increasingly used to determine consumer sentiment towards a brand. The existing literature on Twitter sentiment analysis uses various feature sets and methods, many of which are adapted from more traditional text classification problems. In this research, we introduce an approach to supervised feature reduction using n-grams and statistical analysis to develop a Twitter-specific lexicon for sentiment analysis. We augment this reduced Twitter-specific lexicon with brand-specific terms for brand-related tweets. We show that the reduced lexicon set, while significantly smaller (only 187 features), reduces modeling complexity, maintains a high degree of coverage over our Twitter corpus, and yields improved sentiment classification accuracy. To demonstrate the effectiveness of the devised Twitter-specific lexicon compared to a traditional sentiment lexicon, we develop comparable sentiment classification models using SVM. We show that the Twitter-specific lexicon is significantly more effective in terms of classification recall and accuracy metrics. We then develop sentiment classification models using the Twitter-specific lexicon and the DAN2 machine learning approach, which has demonstrated success in other text classification problems. We show that DAN2 produces more accurate sentiment classification results than SVM while using the same Twitter-specific lexicon.
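For readers who want a concrete starting point, the sketch below shows one way a supervised n-gram feature reduction of this general kind can be set up; the toy tweets, the chi-squared selection step, and the tiny feature budget are illustrative assumptions, not the lexicon-construction procedure used in the paper (which reports a final set of 187 features).

```python
# Hypothetical sketch: supervised n-gram feature reduction for tweet sentiment,
# followed by an SVM baseline. All data and parameter choices are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = ["love the new phone", "worst battery ever", "screen is okay"]  # toy corpus
labels = ["positive", "negative", "neutral"]

pipeline = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), lowercase=True),  # unigram + bigram candidates
    SelectKBest(chi2, k=2),   # keep only the most class-discriminative n-grams
                              # (k is tiny here only because the toy corpus is tiny)
    LinearSVC(),              # baseline sentiment classifier
)
pipeline.fit(tweets, labels)
print(pipeline.predict(["battery is the worst"]))
```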
Expert Systems With Applications | 2012
Manoochehr Ghiassi; M. Olschimke; Brian Moon; P. Arnaudo
Widespread digitization of information in today's internet age has intensified the need for effective textual document classification algorithms. Most real-life classification problems, including text classification, genetic classification, medical classification, and others, are complex in nature and are characterized by high dimensionality. Current solution strategies include Naive Bayes (NB), Neural Network (NN), Linear Least Squares Fit (LLSF), k-Nearest-Neighbor (kNN), and Support Vector Machines (SVM); with SVMs showing better results in most cases. In this paper we introduce a new approach called dynamic architecture for artificial neural networks (DAN2) as an alternative for solving textual document classification problems. DAN2 is a scalable algorithm that does not require parameter settings or network architecture configuration. To show DAN2 as an effective and scalable alternative for text classification, we present comparative results for the Reuters-21578 benchmark dataset. Our results show DAN2 to perform very well against the current leading solutions (kNN and SVM) using established classification metrics.
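As a point of reference, the following hedged sketch sets up the kind of conventional SVM baseline the paper benchmarks DAN2 against on Reuters-21578; it is not an implementation of DAN2, and the NLTK corpus split and TF-IDF representation used here are assumptions.

```python
# Hedged sketch: a one-vs-rest linear SVM baseline on the Reuters-21578 corpus.
# Prerequisite: nltk.download("reuters")
from nltk.corpus import reuters
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

train_ids = [i for i in reuters.fileids() if i.startswith("training/")]
test_ids = [i for i in reuters.fileids() if i.startswith("test/")]

vectorizer = TfidfVectorizer(stop_words="english")
X_train = vectorizer.fit_transform([reuters.raw(i) for i in train_ids])
X_test = vectorizer.transform([reuters.raw(i) for i in test_ids])

mlb = MultiLabelBinarizer()  # documents may carry several topic labels
y_train = mlb.fit_transform([reuters.categories(i) for i in train_ids])
y_test = mlb.transform([reuters.categories(i) for i in test_ids])

clf = OneVsRestClassifier(LinearSVC()).fit(X_train, y_train)
print("micro-F1:", f1_score(y_test, clf.predict(X_test), average="micro"))
```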
Expert Systems With Applications | 2015
Manoochehr Ghiassi; David Lio; Brian Moon
We show DAN2 to be an effective tool for forecasting movie revenues. DAN2 improved upon benchmark ANN-based revenue forecasting models by 32.8%. We develop a new model to forecast movie revenues during the pre-production period. We offer new insights into the role of various movie attributes in revenue forecasts. DAN2 achieves an accuracy rate of 94.1% with this new model and variable-set. The production of a motion picture is an expensive, risky endeavor. During the five-year period from 2008 through 2012, approximately 90 films were released in the United States with production budgets in excess of $100 million. The majority of these films failed to recoup their production costs via gross domestic box office revenues. Existing decision support systems for pre-production analysis and green-lighting decisions lack sufficient accuracy to meaningfully assist decision makers in the film industry. Established models focus primarily upon post-release and post-production forecasts. These models often rely upon opening weekend data and are reasonably accurate, but only if data up until the moment of release is included. A forecast made immediately prior to the debut of a film, however, is of limited value to stakeholders because it can only influence late-stage adjustments to advertising or distribution strategies and little else. In this paper we present the development of a model based upon a dynamic artificial neural network (DAN2) for the forecasting of movie revenues during the pre-production period. We first demonstrate the effectiveness of DAN2 and show that DAN2 improves box-office revenue forecasting accuracy by 32.8% over existing models. Subsequently, we offer an alternative modeling strategy by adding production budgets, pre-release advertising expenditures, runtime, and seasonality to the predictive variables. This alternative model produces excellent forecasting accuracy values of 94.1%.
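A minimal illustration of the alternative variable set (production budget, advertising spend, runtime, seasonality) follows; the feed-forward regressor and all numbers are stand-ins, since DAN2's dynamic architecture is not reproduced here.

```python
# Illustrative only: a conventional feed-forward regressor over pre-production
# variables of the kind the paper adds. Data values are fabricated.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# columns: production budget ($M), pre-release advertising ($M), runtime (min), summer release (0/1)
X = np.array([[150, 40, 120, 1],
              [ 60, 15,  95, 0],
              [200, 70, 140, 1],
              [ 30,  5, 100, 0]], dtype=float)
y = np.array([420.0, 85.0, 610.0, 22.0])  # fabricated domestic gross ($M)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[120, 30, 110, 1]]))  # forecast for a hypothetical film
```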
Expert Systems With Applications | 2010
Manoochehr Ghiassi; C. Burnley
Classification is the process of assigning an object to one of a set of classes based on its attributes. Classification problems have been examined in fields as diverse as biology, medicine, business, image recognition, and forensics. Developing more accurate and widely applicable classification methods has significant implications in these and many other fields. This paper presents a dynamic artificial neural network (DAN2) as an alternate approach for solving classification problems. We show DAN2 to be an effective approach and compare its performance with linear discriminant analysis, quadratic discriminant analysis, k-nearest neighbor algorithms, support vector machines, and traditional artificial neural networks using benchmark and real-world application data sets. These data sets vary in the number of classes (two vs. multiple) and the source of the data (synthetic vs. real-world). We found DAN2 to be a very effective classification method for two-class data sets with accuracy improvements as high as 37.2% when compared to the other methods. We also introduce a hierarchical DAN2 model for multiple class data sets that shows marked improvements (up to 89%) over all other methods, and offers better accuracy in all cases.
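The comparison described above can be approximated with off-the-shelf classifiers, as in the sketch below; DAN2 itself is omitted because no public implementation is available, and the benchmark data set is an assumption.

```python
# Sketch of a benchmark comparison across the classifier families named in the
# abstract, on a public two-class data set; DAN2 is not included.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```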
hawaii international conference on system sciences | 2016
David Zimbra; Manoochehr Ghiassi; Sean Lee
We present an approach to brand-related Twitter sentiment analysis using feature engineering and the Dynamic Architecture for Artificial Neural Networks (DAN2). The approach addresses challenges associated with the unique characteristics of the Twitter language, and the recall of mild sentiment expressions that are of interest to brand management practitioners. We demonstrate the effectiveness of the approach on a Starbucks brand-related Twitter data set. The feature engineering produced a final tweet feature representation consisting of only seven dimensions, with greater feature density. Two sets of experiments were conducted in three-class and five-class tweet sentiment classification. We compare the proposed approach to the performances of two state-of-the-art Twitter sentiment analysis systems from the academic and commercial domains. The results indicate that the approach outperforms these state-of-the-art systems in both three-class and five-class tweet sentiment classification by wide margins, with classification accuracies above 80% and excellent recall of mild sentiment tweets.
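The paper's emphasis on recall of mild sentiment expressions can be illustrated with a small evaluation snippet; the labels below are invented, and the five-class scheme is only assumed to mirror the one used in the study.

```python
# Toy illustration: overall accuracy plus recall on the mild sentiment classes
# in a five-class scheme. All labels are invented.
from sklearn.metrics import accuracy_score, recall_score

y_true = ["mildly positive", "neutral", "mildly negative", "strongly positive", "mildly positive", "neutral"]
y_pred = ["mildly positive", "neutral", "neutral",          "strongly positive", "mildly positive", "neutral"]

print("accuracy:", accuracy_score(y_true, y_pred))
mild = ["mildly negative", "mildly positive"]
print("mild-class recall:", recall_score(y_true, y_pred, labels=mild, average="macro"))
```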
Journal of Management Information Systems | 2016
Manoochehr Ghiassi; David Zimbra; Sean Lee
Social media communications offer valuable feedback to firms about their brands. We present a targeted approach to Twitter sentiment analysis for brands using supervised feature engineering and the dynamic architecture for artificial neural networks. The proposed approach addresses challenges associated with the unique characteristics of the Twitter language and brand-related tweet sentiment class distribution. We demonstrate its effectiveness on Twitter data sets related to two distinctive brands. The supervised feature engineering for brands offers final tweet feature representations of only seven dimensions with greater feature density. Reducing the dimensionality of the representations reduces the complexity of the classification problem and feature sparsity. Two sets of experiments are conducted for each brand in three-class and five-class tweet sentiment classification. We examine five-class classification to target the mild sentiment expressions that are of particular interest to firms and brand management practitioners. We compare the proposed approach to the performances of two state-of-the-art Twitter sentiment analysis systems from the academic and commercial domains. The results indicate that it outperforms these state-of-the-art systems by wide margins, with classification F1-measures as high as 88 percent and excellent recall of tweets expressing mild sentiments. Furthermore, they demonstrate that the tweet feature representations, though consisting of only seven dimensions, are highly effective in capturing indicators of Twitter sentiment expression. The proposed approach and the vast majority of features identified through supervised feature engineering are applicable across brands, allowing researchers and brand management practitioners to quickly generate highly effective tweet feature representations for Twitter sentiment analysis on other brands.
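To make the idea of a low-dimensional, dense tweet representation concrete, the sketch below assembles a small hand-engineered feature vector; the specific features are hypothetical stand-ins, since the paper's seven engineered features are not enumerated in this abstract.

```python
# Hypothetical sketch of a dense, hand-engineered tweet feature vector.
# These seven features are illustrative stand-ins, not the authors' feature set.
import re

POSITIVE = {"love", "great", "awesome"}   # toy lexicon fragments
NEGATIVE = {"hate", "worst", "awful"}

def tweet_features(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    return [
        sum(t in POSITIVE for t in tokens),                # positive lexicon hits
        sum(t in NEGATIVE for t in tokens),                # negative lexicon hits
        text.count("!"),                                   # exclamation emphasis
        len(re.findall(r"[:;][-]?[)D]", text)),            # simple positive emoticons
        len(re.findall(r"[:;][-]?[(]", text)),             # simple negative emoticons
        sum(t in {"not", "never", "no"} for t in tokens),  # negation cues
        len(tokens),                                       # tweet length
    ]

print(tweet_features("Love the new roast, but the wait was awful :("))
```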
hawaii international conference on system sciences | 1992
Manoochehr Ghiassi; Mohammad A. Ketabchi; K.J. Sadeghi
It is generally believed that the testing phases of software development consume one-third to one-half of the entire software development time and resources. To increase the productivity of the software development process, the cost and time of testing should be reduced. The integrated software testing system described in this paper allows the structural and functional description of software systems, test data and their expected results, and bug reports to be generated and stored in a software database implemented using an object-oriented database management system. The objects in the database and their relationships with each other are used to facilitate local testing and to integrate structural and functional testing approaches. The high-level, uniform user interface of the system increases the productivity of the test engineers and reduces the cost of developing reliable software systems. The software testing system, together with an existing software analysis and maintenance system, covers the software analysis, maintenance, debugging, and testing phases of the software product lifecycle.
The Computer Journal | 1994
Kamyar Jambor-Sadeghi; Mohammad A. Ketabchi; Junjie Chue; Manoochehr Ghiassi
A process-driven, model-based solution to corrective maintenance is described. The solution approach starts by identifying the set of ordered steps that should be performed to complete a corrective maintenance task. Once the steps in the process are clearly defined, the information needed at each step is organized into maintenance information models. A set of tools that operate on the models and provide the capabilities needed for the process of corrective maintenance is then identified. Realizing the models and providing the tools through a uniform interface lead to a software maintenance system that supports effective and reliable corrective maintenance. An overview of SAMS, a Software Analysis and Maintenance System developed based on this approach, is presented. SAMS integrates various tools that are needed to support maintenance processes, including the corrective maintenance process. SAMS tools are developed on top of an object model of maintenance information realized using an object-oriented database management system.
Archive | 1988
Durga Agarwal; Fu-Hwa Wang; Manoochehr Ghiassi
Microtec Research, Inc. (MRI) has developed an integrated set of software tools to support both the fast generation of highly optimized code and subsequent high-level/low-level debugging of this code for embedded and native UNIX® applications. This paper discusses the global and TRON architecture-specific optimization techniques used to produce this highly space/time efficient object code. The use of optimization information to provide source-level debugging of optimized code is also discussed.
Expert Systems With Applications | 2018
Haibing Lu; Wendong Zhu; Joseph Phan; Manoochehr Ghiassi; Yi Fang; Yuan Hong; Xiaoyun He