Mohammad Al Boni
University of Virginia
Publications
Featured research publications by Mohammad Al Boni.
International Joint Conference on Natural Language Processing | 2015
Mohammad Al Boni; Keira Qi Zhou; Hongning Wang; Matthew S. Gerber
Humans are idiosyncratic and variable: towards the same topic, they might hold different opinions or express the same opinion in various ways. It is hence important to model opinions at the level of individual users; however, it is impractical to estimate independent sentiment classification models for each user with limited data. In this paper, we adopt a model-based transfer learning solution – using linear transformations over the parameters of a generic model – for personalized opinion analysis. Extensive experimental results on a large collection of Amazon reviews confirm that our method significantly outperforms a user-independent generic opinion model as well as several state-of-the-art transfer learning algorithms.
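The adaptation idea above can be sketched as follows: a user-specific classifier is obtained by linearly transforming the weights of a shared generic model, so only a handful of per-user parameters must be estimated from that user's limited data. The per-group scaling/shifting parameterization and the toy numbers here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adapt_weights(w_global, scale, shift, groups):
    """Per-group linear transformation: w_user[i] = scale[g] * w_global[i] + shift[g]."""
    w_user = np.empty_like(w_global)
    for i, g in enumerate(groups):
        w_user[i] = scale[g] * w_global[i] + shift[g]
    return w_user

# Toy example: 4 features partitioned into 2 groups, so each user
# learns only 4 adaptation parameters instead of 4 full weights.
w_global = np.array([0.5, -1.2, 0.8, 0.1])  # shared generic sentiment model
groups = [0, 0, 1, 1]
scale = np.array([1.1, 0.9])                # learned from the user's few reviews
shift = np.array([0.0, 0.2])

w_user = adapt_weights(w_global, scale, shift, groups)
x = np.array([1.0, 0.0, 1.0, 1.0])          # feature vector for one review
p_positive = sigmoid(w_user @ x)            # personalized sentiment prediction
```

Because the transformation has far fewer parameters than the full weight vector, it can be fit reliably even for users with only a few labeled reviews.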
WCSC | 2014
Mohammad Al Boni; Derek T. Anderson; Roger L. King
Both the fuzzy measure and integral have been widely studied for multi-source information fusion. A number of researchers have proposed optimization techniques to learn a fuzzy measure from training data. In part, this task is difficult as the fuzzy measure can have a large number of free parameters (2^N − 2 for N sources) and it has many (monotonicity) constraints. In this paper, a new genetic algorithm approach to constraint-preserving optimization of the fuzzy measure is presented for the task of learning and fusing different ontology matching results. Preliminary results are presented to show the stability of the learning algorithm and its effectiveness compared to existing approaches.
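The constraints mentioned above can be made concrete: a fuzzy measure assigns a value to every subset of the N sources, with g(∅) = 0, g(X) = 1, and monotonicity (A ⊆ B implies g(A) ≤ g(B)). Any constraint-preserving optimizer, such as the genetic algorithm the paper proposes, must keep every subset/superset pair consistent. The dict-of-frozensets encoding below is an illustrative sketch, not the paper's representation.

```python
from itertools import combinations

def is_monotone(g, sources):
    """Check g(A) <= g(B) for every pair of subsets with A a proper subset of B."""
    subsets = [frozenset(c) for r in range(len(sources) + 1)
               for c in combinations(sources, r)]
    for A in subsets:
        for B in subsets:
            if A < B and g[A] > g[B]:
                return False
    return True

# Example measure over three sources; with g(∅)=0 and g(X)=1 fixed,
# 2**3 - 2 = 6 values remain free, matching the 2^N - 2 count above.
sources = ['m1', 'm2', 'm3']
g = {frozenset(): 0.0,
     frozenset({'m1'}): 0.3, frozenset({'m2'}): 0.4, frozenset({'m3'}): 0.2,
     frozenset({'m1', 'm2'}): 0.6, frozenset({'m1', 'm3'}): 0.5,
     frozenset({'m2', 'm3'}): 0.7,
     frozenset({'m1', 'm2', 'm3'}): 1.0}
```

A genetic algorithm operating on such a chromosome must either reject or repair offspring that violate `is_monotone`, which is what makes the optimization constraint-preserving.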
Systems, Man and Cybernetics | 2016
Mohammad Al Boni; Matthew S. Gerber
Prior work in statistical crime prediction has not investigated micro-level movement patterns of individuals in the area of interest. Geotagged social media implicitly describe these patterns for many individuals; however, methods of extracting such patterns and integrating them into a statistical model remain undeveloped. This paper presents methods and experiments that begin to fill this gap. We investigate the use of spatiotemporally tagged Twitter posts for inferring micro-level movement patterns, and we use real crime data to develop and test a model informed by such patterns. Our results indicate improved performance for 15 of the 20 crime types studied, when comparing our model with a baseline that does not use micro-level movement patterns.
Meeting of the Association for Computational Linguistics | 2016
Lin Gong; Mohammad Al Boni; Hongning Wang
Motivated by the findings in social science that people’s opinions are diverse and variable while together they are shaped by evolving social norms, we perform personalized sentiment classification via shared model adaptation over time. In our proposed solution, a global sentiment model is constantly updated to capture the homogeneity in how users express opinions, while personalized models are simultaneously adapted from the global model to recognize the heterogeneity of opinions from individuals. Global model sharing alleviates the data sparsity issue, and individualized model adaptation enables efficient online model learning. Extensive experiments are performed on two large review collections from Amazon and Yelp, and encouraging performance gains are achieved against several state-of-the-art transfer learning and multi-task learning based sentiment classification solutions.
International Conference on Machine Learning and Applications | 2016
Mohammad Al Boni; Matthew S. Gerber
Kernel density estimation is a popular method for identifying crime hotspots for the purpose of data-driven policing. However, computing a kernel density estimate is computationally intensive for large crime datasets, and the quality of the resulting estimate depends heavily on parameters that are difficult to set manually. Inspired by methods from image processing, we propose a novel way for performing hotspot analysis using localized kernel density estimation optimized with an evolutionary algorithm. The proposed method uses local learning to address three challenges associated with traditional kernel density estimation: computational complexity, bandwidth selection, and kernel function selection. We evaluate our localized kernel model on 17 crime types from Chicago, Illinois, USA. Preliminary results indicate significant improvement in prediction performance over the traditional approach. We also examine the effect of data sparseness on the performance of both models.
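The baseline the paper improves on can be sketched directly: each past crime contributes a Gaussian kernel to the density estimate, and the bandwidth h (fixed here) is exactly the kind of parameter the paper tunes locally with an evolutionary algorithm instead of setting manually. The coordinates and bandwidth below are illustrative.

```python
import math

def kde_score(query, points, h):
    """Gaussian kernel density estimate at `query` from 2-D crime locations."""
    norm = 1.0 / (len(points) * 2.0 * math.pi * h * h)
    total = 0.0
    for (px, py) in points:
        d2 = (query[0] - px) ** 2 + (query[1] - py) ** 2
        total += math.exp(-d2 / (2.0 * h * h))
    return norm * total

# Cells near the cluster of past crimes score higher than distant cells,
# which is what flags them as hotspots.
crimes = [(0.0, 0.0), (1.0, 0.5), (0.2, 0.1)]
score_near = kde_score((0.1, 0.0), crimes, h=0.5)
score_far = kde_score((5.0, 5.0), crimes, h=0.5)
```

Evaluating this sum over every grid cell and every crime point is what makes the global estimate expensive for large datasets; localizing the estimation, as the paper proposes, bounds the number of points each cell must consider.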
Journal of Intelligent Systems | 2016
Mohammad Al Boni; Derek T. Anderson; Roger L. King
Ontologies have been widely used as a knowledge representation framework, and numerous methods have been put forth to match ontologies. It is well known that ontology matchers behave differently in various domains, and it is a challenge to predict or characterize their behavior. Herein, a hybrid expertise‐agreement aggregation strategy is proposed. While other approaches rely on the existence of a reference ontology, such an ontology typically does not exist in the real world. In this article, the fuzzy integral (FI) is used to aggregate multiple ontology matchers in lieu of a reference ontology. Specifically, we present a measure of expertise and fuse it with our previous agreement measure, motivated by crowdsourcing, to improve recall. This way, any available domain knowledge, in terms of a partial ordering of a subset of inputs, can be included in the decision‐making process. By adding the domain knowledge to the agreement model, we are able to reach the best performance. Preliminary results demonstrate the robustness of our approach across domains. Sensitivity analysis is also provided, which shows the limits to which extreme destructive expertise affects system performance.
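A common form of the fuzzy integral used for this kind of aggregation is the discrete Choquet integral, which weights sorted matcher scores by the fuzzy measure of the coalition of matchers still "in agreement" at each level. Whether the paper uses the Choquet or Sugeno form is not stated here, so this is a hedged sketch with illustrative values.

```python
def choquet(values, g):
    """Discrete Choquet integral of matcher scores w.r.t. fuzzy measure g.

    values: dict matcher -> score in [0, 1]
    g: dict frozenset(matchers) -> measure value, with g(empty)=0, g(all)=1
    """
    order = sorted(values, key=values.get)   # matchers by ascending score
    remaining = set(values)                  # matchers scoring >= current level
    result, prev_score = 0.0, 0.0
    for m in order:
        result += (values[m] - prev_score) * g[frozenset(remaining)]
        prev_score = values[m]
        remaining.remove(m)
    return result
```

A useful sanity check on the implementation: with the symmetric measure g(A) = |A| / N, the Choquet integral reduces to the plain average of the inputs; non-additive measures are what let the aggregation reward or penalize specific coalitions of matchers.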
International Conference on Computational Science | 2014
Mohammad Al Boni; Osama AbuOmar; Roger L. King
Scientists always face difficulties dealing with disjointed information. There is a need for a standardized and robust way to represent and exchange knowledge, and ontologies have been widely used for this purpose. However, since research involves both semantics and operations, we need to conceptualize both of them. In this article, we propose ReShare to provide a solution for this problem. Maximizing utilization while preserving semantics is one of the main challenges when heterogeneous knowledge is combined. Therefore, operational annotations were designed to allow generic object modeling, binding, and representation. Furthermore, a test bed is developed and preliminary results are presented to show the usefulness and robustness of our approach.
International Conference on Computational Science | 2014
Mohammad Al Boni; Derek T. Anderson
Ontologies are widely used to represent knowledge in different domains. As a result, numerous methods have been put forth to match ontologies. No technique has been shown to be robust across all domains. Furthermore, ontology matchers typically make use of a reference ontology; however, this is not guaranteed to exist. In this article, the fuzzy integral is used to aggregate multiple ontology matchers in lieu of a reference ontology. Specifically, we present a way to derive the fuzzy measure based on ideas from crowdsourcing when the worth of individuals is not known. Preliminary results are presented to show the robustness of our approach across different domains.
International Conference on Machine Learning and Applications | 2016
Mohammad Al Boni; Matthew S. Gerber
The convergence of public data and statistical modeling has created opportunities for public safety officials to prioritize the deployment of scarce resources on the basis of predicted crime patterns. Current crime prediction methods are trained using observed crime and information describing various criminogenic factors. Researchers have favored global models (e.g., of entire cities) due to a lack of observations at finer resolutions (e.g., ZIP codes). These global models and their assumptions are at odds with evidence that the relationship between crime and criminogenic factors is not homogeneous across space. In response to this gap, we present area-specific crime prediction models based on hierarchical and multi-task statistical learning. Our models mitigate sparseness by sharing information across ZIP codes, yet they retain the advantages of localized models in addressing non-homogeneous crime patterns. Out-of-sample testing on real crime data indicates predictive advantages over multiple state-of-the-art global models.
Systems and Information Engineering Design Symposium | 2018
Ben Greenawald; Yingjie Liu; Gregory Wert; Mohammad Al Boni; Donald E. Brown