Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Amir Hussain is active.

Publication


Featured research published by Amir Hussain.


IEEE Intelligent Systems | 2013

Enhanced SenticNet with Affective Labels for Concept-Based Opinion Mining

Soujanya Poria; Alexander F. Gelbukh; Amir Hussain; Newton Howard; Dipankar Das; Sivaji Bandyopadhyay

SenticNet 1.0 is one of the most widely used, publicly available resources for concept-based opinion mining. The presented methodology enriches SenticNet concepts with affective information by assigning an emotion label to each concept.
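
As a rough illustration of the general idea only (not the authors' actual pipeline), the sketch below labels each concept with the emotion whose seed vector is most similar to the concept's feature vector; the concepts, vectors and label set are invented assumptions.

```python
# Illustrative sketch only: assign an emotion label to each concept by
# nearest labelled seed in a toy feature space. Concept names, vectors
# and the label set are hypothetical, not taken from SenticNet.
import numpy as np

EMOTION_SEEDS = {                       # hypothetical seed vectors per label
    "joy":     np.array([0.9, 0.1, 0.0]),
    "sadness": np.array([-0.8, 0.1, 0.1]),
    "anger":   np.array([-0.6, 0.9, 0.2]),
    "fear":    np.array([-0.5, 0.3, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def label_concept(concept_vec):
    """Return the emotion whose seed vector is most similar to the concept."""
    return max(EMOTION_SEEDS, key=lambda lab: cosine(concept_vec, EMOTION_SEEDS[lab]))

concepts = {                            # toy concept feature vectors (made up)
    "celebrate_birthday": np.array([0.8, 0.0, 0.1]),
    "lose_job":           np.array([-0.7, 0.2, 0.3]),
}
for name, vec in concepts.items():
    print(name, "->", label_concept(vec))
```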


COST'11 Proceedings of the 2011 international conference on Cognitive Behavioural Systems | 2011

The hourglass of emotions

Erik Cambria; Andrew G. Livingstone; Amir Hussain

Human emotions and their modelling are increasingly understood to be a crucial aspect in the development of intelligent systems. Over the past years, in fact, the adoption of psychological models of emotions has become a common trend among researchers and engineers working in the sphere of affective computing. Because of the elusive nature of emotions and the ambiguity of natural language, however, psychologists have developed many different affect models, which are often not suitable for the design of applications in fields such as affective HCI, social data mining, and sentiment analysis. To this end, we propose a novel biologically-inspired and psychologically-motivated emotion categorisation model that goes beyond mere categorical and dimensional approaches. This model represents affective states both through labels and through four independent but concomitant affective dimensions, which can potentially describe the full range of emotional experiences that are rooted in any of us.
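
A minimal sketch of the representation described above: an affective state held as four concomitant dimensions (Pleasantness, Attention, Sensitivity, Aptitude), each in [-1, 1], with a toy polarity score and categorical reading derived from it. The aggregation below is illustrative only, not the paper's own formula.

```python
# Sketch only, assuming the four Hourglass dimensions; the polarity
# aggregation and the label rule are illustrative, not the authors' definitions.
from dataclasses import dataclass

@dataclass
class AffectiveState:
    pleasantness: float
    attention: float
    sensitivity: float
    aptitude: float

    def polarity(self) -> float:
        # Toy aggregation of the four dimensions into a single score in [-1, 1].
        return (self.pleasantness + abs(self.attention)
                - abs(self.sensitivity) + self.aptitude) / 3.0

    def label(self) -> str:
        # Toy categorical reading based on the dominant dimension.
        dims = {"pleasantness": self.pleasantness, "attention": self.attention,
                "sensitivity": self.sensitivity, "aptitude": self.aptitude}
        name, value = max(dims.items(), key=lambda kv: abs(kv[1]))
        return f"{'high' if value > 0 else 'low'} {name}"

state = AffectiveState(pleasantness=0.7, attention=0.2, sensitivity=-0.4, aptitude=0.5)
print(round(state.polarity(), 3), state.label())
```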


Neurocomputing | 2016

Fusing audio, visual and textual clues for sentiment analysis from multimodal content

Soujanya Poria; Erik Cambria; Newton Howard; Guang-Bin Huang; Amir Hussain

A huge number of videos are posted every day on social media platforms such as Facebook and YouTube, making the Internet an unlimited source of information. In the coming decades, coping with such information and mining useful knowledge from it will be an increasingly difficult task. In this paper, we propose a novel methodology for multimodal sentiment analysis that harvests sentiments from Web videos with a model that uses audio, visual and textual modalities as sources of information. We used both feature- and decision-level fusion methods to merge affective information extracted from multiple modalities. A thorough comparison with existing work in this area is carried out throughout the paper, demonstrating the novelty of our approach. Preliminary comparative experiments with the YouTube dataset show that the proposed multimodal system achieves an accuracy of nearly 80%, outperforming all state-of-the-art systems by more than 20%.
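
To make the decision-level (late) fusion mentioned above concrete, here is a minimal sketch in which each modality's classifier outputs class probabilities and the fused label is their weighted average. The three-class setup, the weights and the probability values are illustrative assumptions, not the configuration reported in the paper.

```python
# Hedged sketch of decision-level fusion: weighted average of per-modality
# class-probability vectors, then argmax over the fused distribution.
import numpy as np

CLASSES = ["negative", "neutral", "positive"]

def fuse_decisions(modality_probs: dict, weights: dict) -> str:
    """Weighted late fusion of per-modality posterior probabilities."""
    fused = sum(weights[m] * p for m, p in modality_probs.items())
    fused = fused / sum(weights[m] for m in modality_probs)   # renormalise
    return CLASSES[int(np.argmax(fused))]

probs = {
    "text":  np.array([0.10, 0.20, 0.70]),   # toy classifier outputs
    "audio": np.array([0.30, 0.40, 0.30]),
    "video": np.array([0.20, 0.30, 0.50]),
}
print(fuse_decisions(probs, weights={"text": 0.5, "audio": 0.2, "video": 0.3}))
```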


Information Fusion | 2017

A review of affective computing

Soujanya Poria; Erik Cambria; Rajiv Bajpai; Amir Hussain

This is the first review of affective computing to deal with both unimodal and multimodal analysis. The survey takes into account recent approaches, e.g., embeddings, that are missing from previous reviews, and it covers and compares state-of-the-art methods in detail, whereas most available surveys only describe them briefly. Affective computing is an emerging interdisciplinary research field bringing together researchers and practitioners from various fields, ranging from artificial intelligence and natural language processing to cognitive and social sciences. With the proliferation of videos posted online (e.g., on YouTube, Facebook, Twitter) for product reviews, movie reviews, political views, and more, affective computing research has increasingly evolved from conventional unimodal analysis to more complex forms of multimodal analysis. This is the primary motivation behind our first-of-its-kind, comprehensive literature review of the diverse field of affective computing. Furthermore, existing literature surveys lack a detailed discussion of the state of the art in multimodal affect analysis frameworks, which this review aims to address. Multimodality is defined by the presence of more than one modality or channel, e.g., visual, audio, text, gestures, and eye gaze. In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities. Following an overview of different techniques for unimodal affect analysis, we outline existing methods for fusing information from different modalities. As part of this review, we carry out an extensive study of different categories of state-of-the-art fusion techniques, followed by a critical analysis of potential performance improvements with multimodal analysis compared to unimodal analysis. A comprehensive overview of these two complementary fields aims to form the building blocks for readers to better understand this challenging and exciting research field.


IEEE Communications Magazine | 2009

Agent-based tools for modeling and simulation of self-organization in peer-to-peer, ad hoc, and other complex networks

Muaz A. Niazi; Amir Hussain

Agent-based modeling and simulation tools provide a mature platform for the development of complex simulations. However, they have not been applied much in the domain of mainstream modeling and simulation of computer networks. In this article, we evaluate whether, and how, these tools can add value in the modeling and simulation of complex networks such as pervasive computing, large-scale peer-to-peer systems, and networks involving considerable environment and human/animal/habitat interaction. Specifically, we demonstrate the effectiveness of NetLogo, a tool that has been widely used in the area of agent-based social simulation.
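
For flavour only, and in Python rather than NetLogo, the toy agent-based sketch below models a small peer-to-peer overlay whose peer agents occasionally drop a link and connect to a random peer at every tick; the update rule and all parameters are invented for illustration.

```python
# Toy agent-based sketch of a self-organising peer-to-peer overlay.
# Every value here is an assumption made for illustration.
import random

class Peer:
    def __init__(self, pid):
        self.pid = pid
        self.neighbours = set()      # pids of connected peers

def tick(peers):
    by_id = {p.pid: p for p in peers}
    for p in peers:
        if p.neighbours and random.random() < 0.3:        # occasionally drop a link
            victim = random.choice(sorted(p.neighbours))
            p.neighbours.discard(victim)
            by_id[victim].neighbours.discard(p.pid)
        q = random.choice(peers)                          # ...and add a new one
        if q is not p:
            p.neighbours.add(q.pid)
            q.neighbours.add(p.pid)

peers = [Peer(i) for i in range(20)]
for _ in range(50):
    tick(peers)
avg_degree = sum(len(p.neighbours) for p in peers) / len(peers)
print(f"average degree after 50 ticks: {avg_degree:.2f}")
```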


Knowledge Based Systems | 2014

EmoSenticSpace: a novel framework for affective common-sense reasoning

Soujanya Poria; Alexander F. Gelbukh; Erik Cambria; Amir Hussain; Guang-Bin Huang

Emotions play a key role in natural language understanding and sensemaking. Pure machine learning usually fails to recognize and interpret emotions in text accurately. The need for knowledge bases that give access to semantics and sentics (the conceptual and affective information) associated with natural language is growing exponentially in the context of big social data analysis. To this end, this paper proposes EmoSenticSpace, a new framework for affective common-sense reasoning that extends WordNet-Affect and SenticNet by providing both emotion labels and polarity scores for a large set of natural language concepts. The framework is built by means of fuzzy c-means clustering and support-vector-machine classification, and takes into account a number of similarity measures, including point-wise mutual information and emotional affinity. EmoSenticSpace was tested on three emotion-related natural language processing tasks, namely sentiment analysis, emotion recognition, and personality detection. In all cases, the proposed framework outperforms the state-of-the-art. In particular, the direct evaluation of EmoSenticSpace against psychological features provided in the benchmark ISEAR dataset shows a 92.15% agreement.
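
A hedged sketch of the two building blocks named above, fuzzy c-means clustering followed by support-vector-machine classification, using made-up concept features and labels; it requires scikit-learn and is not the EmoSenticSpace pipeline itself.

```python
# Sketch only: cluster toy "concept" features with fuzzy c-means, then train
# an SVM on the resulting cluster memberships. Data and labels are synthetic.
import numpy as np
from sklearn.svm import SVC

def fuzzy_c_means(X, c=3, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster centres
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / (dist ** (2.0 / (m - 1.0)))         # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))                          # toy concept features
y = rng.integers(0, 2, size=60)                       # toy emotion labels
U, _ = fuzzy_c_means(X)
clf = SVC(kernel="rbf").fit(U, y)                     # classify from memberships
print("training accuracy:", clf.score(U, y))
```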


Neural Networks | 2015

Towards an intelligent framework for multimodal affective data analysis

Soujanya Poria; Erik Cambria; Amir Hussain; Guang-Bin Huang

An increasingly large amount of multimodal content is posted every day on social media websites such as YouTube and Facebook. In order to cope with the growth of so much multimodal data, there is an urgent need to develop an intelligent multimodal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human-computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach by exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments using the eNTERFACE dataset, our proposed multimodal system is shown to achieve an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or in relative terms, a 56% reduction in error rate.
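
In contrast to decision-level (late) fusion, the snippet below illustrates feature-level (early) fusion of tri-modal features: the per-modality vectors are concatenated into one joint representation before a single classifier is trained. The feature dimensions, the synthetic data and the logistic-regression classifier are assumptions, not the paper's setup.

```python
# Minimal sketch of feature-level (early) fusion over tri-modal features.
# All data here is synthetic and the classifier choice is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
text_feats  = rng.normal(size=(n, 30))     # toy textual features
audio_feats = rng.normal(size=(n, 20))     # toy acoustic features
video_feats = rng.normal(size=(n, 25))     # toy visual features
labels = rng.integers(0, 2, size=n)        # toy affect labels

joint = np.hstack([text_feats, audio_feats, video_feats])   # early fusion
clf = LogisticRegression(max_iter=1000).fit(joint, labels)
print("training accuracy:", clf.score(joint, labels))
```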


Lecture Notes in Computer Science | 2010

Development of Multimodal Interfaces: Active Listening and Synchrony

Anna Esposito; Nick Campbell; Carl Vogel; Amir Hussain; Antinus Nijholt

This volume brings together, through a peer-review process, the advanced research results obtained by the European COST Action 2102: Cross-Modal Analysis of Verbal and Nonverbal Communication, first discussed at the Second COST 2102 International Training School on "Development of Multimodal Interfaces: Active Listening and Synchrony", held in Dublin, Ireland, March 23–27, 2009. The school was sponsored by COST (European Cooperation in the Field of Scientific and Technical Research, www.cost.esf.org) in the domain of Information and Communication Technologies (ICT) to disseminate the advances of the research activities developed within COST Action 2102: "Cross-Modal Analysis of Verbal and Nonverbal Communication" (cost2102.cs.stir.ac.uk). In its third year, COST Action 2102 brought together about 60 European and 6 overseas scientific laboratories whose aim is to develop interactive dialogue systems and intelligent virtual avatars graphically embodied in a 2D and/or 3D interactive virtual world, capable of interacting intelligently with the environment, other avatars, and particularly with human users.

The main focus of the school was the development of multimodal interfaces. Traditional approaches to multimodal interface design tend to assume a "ping-pong" or "push-to-talk" approach to speech interaction, wherein either the system or the human interlocutor is active at any one time. This is contrary to many recent findings in conversation and discourse analysis, where the definition of a "turn" or even an "utterance" is found to be very complex. People do not "take turns" to talk in a typical conversational interaction; rather, they each contribute actively to the joint emergence of a "common understanding." The sub-theme of the school, "Synchrony and Active Listening", was selected to identify contributions that actively support the ongoing research into the dynamics of human spoken interaction, the production of multimodal conversation data, and the subsequent analysis and modelling of interaction dynamics, with the dual goal of appropriately designing multimodal interfaces and providing new approaches and developmental paradigms.


Cognitive Computation | 2013

Common Sense Knowledge for Handwritten Chinese Text Recognition

Qiu-Feng Wang; Erik Cambria; Cheng-Lin Liu; Amir Hussain

Compared to human intelligence, computers fall far short of the common sense knowledge that people normally acquire during the formative years of their lives. This paper investigates the effects of employing common sense knowledge as a new linguistic context in handwritten Chinese text recognition. Three methods are introduced to supplement the standard n-gram language model: an embedding model, a direct model, and an ensemble of the two. The embedding model uses semantic similarities from common sense knowledge to make the estimation of n-gram probabilities more reliable, especially for n-grams unseen in the training text corpus. The direct model, in turn, considers the linguistic context of the whole document to make up for the short context limit of the n-gram model. The three models are evaluated on a large unconstrained handwriting database, CASIA-HWDB, and the results show that the adoption of common sense knowledge yields improvements in recognition performance, despite the reduced concept list hereby employed.
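
As a very rough, hedged illustration of the embedding-model idea (not the paper's actual estimator), the toy below scores a bigram by interpolating its maximum-likelihood estimate with a similarity-weighted unigram term, so that semantically related but unseen pairs still receive non-zero scores. The counts, vocabulary and similarity values are invented, and the scores are deliberately left unnormalised.

```python
# Toy illustration: semantic similarity backs up sparse bigram counts.
# All counts and similarities are made up for this example.
from collections import Counter

bigram_counts = Counter({("drink", "tea"): 8, ("drink", "water"): 12})
unigram_counts = Counter({"tea": 10, "water": 15, "coffee": 6})
similarity = {("drink", "tea"): 0.9, ("drink", "water"): 0.7, ("drink", "coffee"): 0.8}

def bigram_score(history, word, lam=0.6):
    # Maximum-likelihood bigram estimate ...
    total = sum(c for (h, _), c in bigram_counts.items() if h == history)
    ml = bigram_counts[(history, word)] / total if total else 0.0
    # ... interpolated with a similarity-weighted unigram term (unnormalised).
    sim = similarity.get((history, word), 0.0)
    uni = unigram_counts[word] / sum(unigram_counts.values())
    return lam * ml + (1 - lam) * sim * uni

for w in ("tea", "water", "coffee"):
    print(w, round(bigram_score("drink", w), 4))
```

Note that the unseen pair ("drink", "coffee") still gets a non-zero score through the similarity term, which is the intuition behind using common sense semantics to smooth sparse n-gram statistics.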


IEEE Sensors Journal | 2011

A Novel Agent-Based Simulation Framework for Sensing in Complex Adaptive Environments

Muaz A. Niazi; Amir Hussain

In this paper, we present a novel formal agent-based simulation framework (FABS). FABS uses formal specification as a means of clearly describing wireless sensor networks (WSNs) that sense a complex adaptive environment. This specification model is then used to develop an agent-based model of both the WSN and the environment. As a proof of concept, we demonstrate the application of FABS to a boids model of self-organized flocking of animals monitored by a random deployment of proximity sensors.
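
Purely for illustration, and assuming nothing about FABS itself, the sketch below mimics the proof-of-concept scenario: point agents move with a crude boids-style cohesion rule on a 2-D plane while randomly placed proximity sensors count the agents within range. Every parameter is an assumption.

```python
# Illustrative sketch (not the FABS framework): simplified flocking agents
# monitored by randomly deployed proximity sensors. Parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, size=(30, 2))        # 30 animal agents on a 100x100 plane
vel = rng.normal(scale=1.0, size=(30, 2))
sensors = rng.uniform(0, 100, size=(10, 2))    # 10 proximity sensors
SENSE_RADIUS = 15.0

for step in range(100):
    centre = pos.mean(axis=0)
    vel += 0.01 * (centre - pos)               # cohesion: steer toward flock centre
    vel += 0.05 * rng.normal(size=vel.shape)   # random wander
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > 2.0, vel / speed * 2.0, vel)   # cap speed
    pos = np.clip(pos + vel, 0, 100)

hits = [(np.linalg.norm(pos - s, axis=1) < SENSE_RADIUS).sum() for s in sensors]
print("agents in range of each sensor:", hits)
```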

Collaboration


Dive into Amir Hussain's collaborations.

Top Co-Authors

Erik Cambria
Nanyang Technological University

Andrew Abel
University of Stirling

Erfu Yang
University of Strathclyde

Muaz A. Niazi
COMSATS Institute of Information Technology

Kaizhu Huang
Xi'an Jiaotong-Liverpool University

Ahsan Adeel
University of Stirling

Soujanya Poria
Nanyang Technological University