Mehdi Ghayoumi
Kent State University
Publications
Featured research published by Mehdi Ghayoumi.
International Conference on Signal Processing and Multimedia Applications | 2014
Mehdi Ghayoumi; Arvind K. Bansal
This paper describes a new automated facial expression analysis system that integrates Locality Sensitive Hashing (LSH) with Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) to improve the execution efficiency of emotion classification and the continuous identification of unidentified facial expressions. Images are classified using feature vectors drawn from the two most significant segments of the face: the eye segments and the mouth segment. LSH uses a family of hashing functions to map similar images into a set of collision buckets. Taking a representative image from each cluster reduces the image space by pruning redundant similar images in the collision buckets. The application of PCA and LDA reduces the dimension of the data space. We describe the overall architecture and the implementation. The performance results show that integrating LSH with PCA and LDA significantly improves computational efficiency and improves accuracy by reducing the frequency bias of similar images during the PCA and SVM stages. After classifying the images in the database, we tag the collision buckets with basic emotions and apply LSH to new unidentified facial expressions to identify the emotions. This LSH-based identification is suitable for fast, continuous recognition of unidentified facial expressions.
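The LSH bucketing step described above can be sketched with random hyperplanes, where each hash bit is the sign of a projection; this is a minimal illustration (the hash-family size, dimensionality and sample vectors are assumed for the example, not taken from the paper):

```python
import random
from collections import defaultdict

def make_hash_family(num_planes, dim, seed=0):
    """Random hyperplanes: each hash bit is the sign of a dot product."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(num_planes)]

def lsh_key(vector, planes):
    """Map a feature vector to a bucket key (one bit per hyperplane)."""
    bits = []
    for plane in planes:
        dot = sum(p * v for p, v in zip(plane, vector))
        bits.append('1' if dot >= 0 else '0')
    return ''.join(bits)

def bucket_images(feature_vectors, planes):
    """Group similar feature vectors into collision buckets by hash key."""
    buckets = defaultdict(list)
    for idx, vec in enumerate(feature_vectors):
        buckets[lsh_key(vec, planes)].append(idx)
    return buckets

planes = make_hash_family(num_planes=4, dim=3)
vectors = [[1.0, 0.2, 0.1], [1.01, 0.19, 0.12], [-1.0, 0.5, -0.3]]
buckets = bucket_images(vectors, planes)
# near-identical vectors tend to collide in the same bucket;
# one representative per bucket then prunes the redundant copies
```

Nearby vectors collide with high probability because a random hyperplane rarely separates two almost-identical points, which is what lets one representative per bucket stand in for the whole cluster.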
Distributed Multimedia Systems | 2016
Mehdi Ghayoumi; Arvind K. Bansal; Maha Thafar
Social robotics concerns robotic systems and their interaction with humans. Social robots have applications in elderly care, health care, home care, customer service and reception in industrial settings. Human-Robot Interaction (HRI) requires a better understanding of human emotion. There are few multimodal fusion systems that integrate even a limited amount of facial expression, speech and gesture analysis. In this paper, we describe the implementation of a semantic-algebra-based formal model that integrates six basic facial expressions, speech phrases and gesture trajectories. The system is capable of real-time interaction. We used the decision-level fusion approach for integration, and the prototype system has been implemented using Matlab. Keywords: Affective computing, Emotion recognition, Human-machine interaction, Multimedia, Multimodal, Decision-level fusion, Social robotics.
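Decision-level fusion as described above combines the outputs of per-modality classifiers rather than their raw features. A minimal sketch (the labels, scores and equal weights below are made-up illustrative values, not the paper's data or its semantic-algebra model):

```python
def fuse_decisions(modality_scores, weights=None):
    """Decision-level fusion: weighted sum of per-modality class scores,
    returning the label with the highest fused score."""
    labels = modality_scores[0].keys()
    if weights is None:
        weights = [1.0] * len(modality_scores)
    fused = {}
    for label in labels:
        fused[label] = sum(w * scores[label]
                           for w, scores in zip(weights, modality_scores))
    return max(fused, key=fused.get)

# hypothetical per-modality classifier outputs (probabilities per emotion)
face    = {'happy': 0.7, 'sad': 0.2, 'angry': 0.1}
speech  = {'happy': 0.4, 'sad': 0.5, 'angry': 0.1}
gesture = {'happy': 0.6, 'sad': 0.3, 'angry': 0.1}
emotion = fuse_decisions([face, speech, gesture])  # -> 'happy'
```

Because fusion happens on decisions, each modality's classifier can be trained and replaced independently, which is one reason decision-level fusion is popular in real-time multimodal systems.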
Annual ACIS International Conference on Computer and Information Science | 2015
Mehdi Ghayoumi
Biometrics is the science and technology of measuring and analyzing biological data of the human body to increase system security by providing accurate and reliable patterns and algorithms for person verification and identification; its solutions are widely used in government, military and industry. Biometric systems that rely on a single source of information are called unimodal systems; although convenient, they often suffer from problems when faced with noisy data, such as intra-class variations, restricted degrees of freedom, spoof attacks and non-universality. Several of these problems can be solved by using multimodal biometric systems that combine two or more biometric modalities. Various methods, fusion levels and integration strategies can be applied to combine information in multimodal systems.
Journal of Communication and Computer | 2017
Mehdi Ghayoumi
Over the last few years, deep artificial neural networks have attracted the most attention in computer science, especially in pattern recognition, machine vision and machine learning. One of their most successful applications is in emotion recognition via facial expressions. Facial expression analysis is useful for many tasks, and the application of deep learning in this area is also developing very fast. We review some recent research in this domain, introduce some new applications and outline the general steps for implementing each of them.
Future Technologies Conference | 2016
Arvind K. Bansal; Mehdi Ghayoumi
Nowadays, some robots have an emotional state (expression and recognition) to improve Human-Robot Interaction (HRI) and Robot-Robot Interaction (RRI). In this article we analyze what it means for a robot to have emotion, distinguishing an emotional state used for communication from an emotional state used as a mechanism for organizing its behavior with humans and robots, using a convolutional neural network (CNN). We discuss these relations and explain why a CNN can be more effective for producing better emotion in robots. Here, we present a multimodal system for emotions in robots based on a CNN.
Australian Joint Conference on Artificial Intelligence | 2006
Mehdi Ghayoumi; P. Porkar Rezayeyeh; M. H. Korayem
Stereo vision is one of the most active research topics in machine vision. The most difficult task in stereo vision for recovering depth is finding corresponding points in different images of the same scene. Correlation is one of the common approaches; it suffers from matching errors, and several methods have been proposed to reduce them. In this paper, a heuristic and fuzzy approach is demonstrated for this purpose. Experimental tests on a 3P robot are also presented. Simulation results demonstrate an improvement in comparison with a neural-network method.
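A common baseline for the correlation approach mentioned above is normalized cross-correlation (NCC) along a scanline: a window around a pixel in the left image is slid over the right image, and the shift with the highest correlation gives the disparity. A minimal sketch with synthetic intensities (not the paper's heuristic-fuzzy method, whose details are not given here):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def best_disparity(left, right, center, half, max_disp):
    """Find the horizontal shift that maximizes NCC along a scanline."""
    window = left[center - half:center + half + 1]
    best_d, best_score = 0, -2.0
    for d in range(max_disp + 1):
        c = center - d
        if c - half < 0:
            break
        score = ncc(window, right[c - half:c + half + 1])
        if score > best_score:
            best_d, best_score = d, score
    return best_d

left  = [10, 10, 10, 80, 90, 80, 10, 10, 10, 10]
right = [10, 80, 90, 80, 10, 10, 10, 10, 10, 10]
# the bright feature is shifted by 2 pixels between the two scanlines
d = best_disparity(left, right, center=4, half=1, max_disp=4)  # -> 2
```

Ambiguous or textureless regions give flat correlation curves, which is the main error source that heuristic and fuzzy refinements such as the paper's aim to reduce.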
International Conference on Social Robotics | 2016
Mehdi Ghayoumi; Arvind K. Bansal
In recent years, emotion recognition has been one of the hot topics in computer science, especially in Human-Robot Interaction (HRI) and Robot-Robot Interaction (RRI). Through emotion (recognition and expression), robots can recognize human behavior and emotion better and can communicate in a more human way. There has been some research on unimodal emotion systems for robots, but because human emotions in the real world are multimodal, multimodal systems can perform the recognition better. Beyond this multimodal character of human emotion, using a flexible and reliable learning method can help robots recognize better and makes interaction more beneficial. Deep learning has shown its strength in this area, and our model is a multimodal method that uses three main modalities (facial expression, speech and gesture) for emotion (recognition and expression) in robots. We implemented the model for the six basic emotion states; there are other states of emotion, such as mixed emotions, which are very difficult for robots to recognize. Our experiments show that a significant improvement in identification accuracy is achieved when we use a convolutional neural network (CNN) and a multimodal information system, from 91% reported in previous research [27] to 98.8%.
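The core building block of the CNN mentioned above is the convolution of an image with a learned kernel (in practice implemented as cross-correlation), followed by a non-linearity. A minimal single-channel sketch, with a hand-picked edge-detecting kernel standing in for learned weights:

```python
def conv2d_valid(image, kernel):
    """Single-channel 2D cross-correlation with 'valid' padding,
    the core operation of a CNN convolution layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def relu(feature_map):
    """ReLU non-linearity applied element-wise."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# vertical-edge kernel applied to a tiny 4x4 "image" with one edge
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
kernel = [[-1, 1],
          [-1, 1]]
feature = relu(conv2d_valid(image, kernel))
# the feature map responds strongly only at the dark-to-bright edge
```

In a full network, many such kernels are learned per layer and stacked; for multimodal emotion recognition, separate convolutional branches per modality feed a shared classifier over the six basic emotions.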
Annual ACIS International Conference on Computer and Information Science | 2015
Mehdi Ghayoumi; Kambiz Ghazinour
Biometric data are sensitive personal information, and the large intra-class variability caused by changing environmental conditions is an issue for this type of data. Adaptive biometrics is a solution that has been introduced to make such systems more accurate and reliable, and semi-supervised learning has been shown to be a possible strategy for it. However, one problem in semi-supervised learning is selecting the decision threshold for adaptation, which can make the strategy unstable. In particular, for a strong classifier in a multimodal system, it can be better to replace the adaptive threshold with a fixed one. This paper presents a fuzzy system to find a better threshold for adaptation. Experiments on the MOBIO face and speech database show that the proposed strategy is a better approach in comparison to the standard adaptive method.
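One way to realize a fuzzy threshold selector like the one described above is with triangular membership functions over the classifier's confidence and centroid defuzzification; the membership shapes and the three candidate thresholds below are illustrative assumptions, not values taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_threshold(score):
    """Map a classifier confidence in [0, 1] to an adaptation threshold
    via three fuzzy sets (low / medium / high) and centroid defuzzification."""
    low    = tri(score, -0.5, 0.0, 0.5)   # low confidence  -> strict threshold 0.9
    medium = tri(score,  0.0, 0.5, 1.0)   # medium          -> threshold 0.7
    high   = tri(score,  0.5, 1.0, 1.5)   # high confidence -> relaxed threshold 0.5
    num = low * 0.9 + medium * 0.7 + high * 0.5
    den = low + medium + high
    return num / den if den else 0.7

t = fuzzy_threshold(0.8)  # blends the medium and high rules smoothly
```

Because membership grades overlap, the threshold changes smoothly with confidence instead of jumping at a hard cut-off, which is exactly the instability the fuzzy approach is meant to avoid.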
International Conference on Audio, Language and Image Processing | 2014
Mehdi Ghayoumi; Cheng Chang Lu
Image inpainting is a method for repairing missing or damaged areas in images. This field of image processing research has been very active in recent years, and many researchers work on creating new methods or improving existing ones. Exemplar-based inpainting is one of them, and the dropping effect is a problem in this method that affects the inpainting results. In this article, a fuzzy method for reducing the dropping effect is presented. The proposed approach yields better quality in the final results.
International Symposium on Visual Computing | 2014
Mehdi Ghayoumi; Ye Zhao
Information visualization has been used in many areas, and its applications grow each day. Modelling and visualizing psychological processes based on the analysis of psychological data can be a real necessity for experts in this field. In this article we use PCA and person-fit statistics for analyzing the data, and some visualization tools for modeling, to provide visualizations of Parkinson's patients' information. The results show meaningful relations between BMI and age on one side and the percentage of some Parkinson's patient features that can appear in men and women on the other side.
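PCA, as used above, projects data onto the directions of maximal variance; the first principal component can be sketched with power iteration on the covariance matrix. A pure-Python illustration on made-up 2D points (not the paper's Parkinson data):

```python
def pca_first_component(data, iters=200):
    """First principal component of mean-centred data via power iteration
    on the sample covariance matrix (sketch for small data sets)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    # sample covariance matrix
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# points lie almost on the line y = x, so the first component
# should be close to (0.707, 0.707) up to sign
points = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9]]
pc1 = pca_first_component(points)
```

Projecting each record onto the top components gives the low-dimensional coordinates that visualization tools then plot, which is how PCA supports the kind of BMI/age relationship views described in the abstract.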