
Publication


Featured research published by Ghassem Tofighi.


bioRxiv | 2016

DeepAD: Alzheimer's Disease Classification via Deep Convolutional Neural Networks using MRI and fMRI

Saman Sarraf; Ghassem Tofighi; John A. E. Anderson

To extract patterns from neuroimaging data, various statistical methods and machine learning algorithms have been explored for the diagnosis of Alzheimer’s disease among older adults in both clinical and research applications; however, distinguishing between Alzheimer’s and healthy brain data has been challenging in older adults (age > 75) due to highly similar patterns of brain atrophy and image intensities. Recently, cutting-edge deep learning technologies have rapidly expanded into numerous fields, including medical image analysis. This paper outlines state-of-the-art deep learning-based pipelines employed to distinguish Alzheimer’s magnetic resonance imaging (MRI) and functional MRI (fMRI) from normal healthy control data for a given age group. Using these pipelines, which were executed on a GPU-based high-performance computing platform, the data were strictly and carefully preprocessed. Next, scale- and shift-invariant low- to high-level features were obtained from a high volume of training images using convolutional neural network (CNN) architecture. In this study, fMRI data were used for the first time in deep learning applications for the purposes of medical image analysis and Alzheimer’s disease prediction. These proposed and implemented pipelines, which demonstrate a significant improvement in classification output over other studies, resulted in high and reproducible accuracy rates of 99.9% and 98.84% for the fMRI and MRI pipelines, respectively. Additionally, for clinical purposes, subject-level classification was performed, resulting in an average accuracy rate of 94.32% and 97.88% for the fMRI and MRI pipelines, respectively. Finally, a decision making algorithm designed for the subject-level classification improved the rate to 97.77% for fMRI and 100% for MRI pipelines.
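The abstract mentions a decision-making algorithm that lifts slice-level CNN outputs to subject-level predictions, but does not specify the rule. A common choice for this step is majority voting over a subject's slices; the sketch below is a hypothetical illustration of that idea, not the paper's actual algorithm.

```python
from collections import Counter

def subject_level_vote(slice_predictions):
    """Aggregate per-slice binary labels (e.g. 0 = control, 1 = Alzheimer's)
    into one subject-level label by majority vote."""
    counts = Counter(slice_predictions)
    # Most common label wins; ties resolve to the lower label here.
    return max(sorted(counts), key=counts.get)

# Hypothetical example: 7 of 10 slices classified as Alzheimer's (1)
print(subject_level_vote([1, 1, 0, 1, 1, 0, 1, 1, 0, 1]))  # -> 1
```

Any per-subject aggregation (weighted voting, probability averaging) fits the same pipeline position; the paper's exact rule may differ.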


future technologies conference | 2016

Deep learning-based pipeline to recognize Alzheimer's disease using fMRI data

Saman Sarraf; Ghassem Tofighi

Over the past decade, machine learning techniques, in particular predictive modeling and pattern recognition in the biomedical sciences, from drug delivery systems to medical imaging, have become one of the most important methods for helping researchers gain a deeper understanding of issues in their entirety and solve complex medical problems. Deep learning is a powerful machine learning approach to classification that extracts low- to high-level features. In this paper, we employ a convolutional neural network to distinguish an Alzheimer's brain from a normal, healthy brain. The importance of classifying this type of medical data lies in its potential to develop a predictive model or system that recognizes the symptoms of Alzheimer's disease in comparison with normal subjects and estimates the stages of the disease. Classification of clinical data for medical conditions such as Alzheimer's disease has always been challenging, and the most problematic aspect has always been selecting the strongest discriminative features. Using a Convolutional Neural Network (CNN) with the well-known LeNet-5 architecture, we successfully classified functional MRI data of Alzheimer's subjects from normal controls, with an accuracy of 96.85% on the test data. This experiment suggests that the shift- and scale-invariant features extracted by a CNN, followed by deep learning classification, represent the most powerful method of distinguishing clinical data from healthy data in fMRI. This approach also allows the methodology to be expanded to predict more complicated systems.
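The shift-invariant feature extraction the abstract attributes to the CNN comes from convolution followed by pooling. A minimal pure-Python sketch of those two core operations (an illustration of the mechanism, not the paper's LeNet-5 implementation) looks like this:

```python
def conv2d_valid(img, kernel):
    """Valid-mode 2D convolution (no padding): the core op of each CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(img[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def max_pool2x2(fmap):
    """2x2 max pooling: gives local shift invariance."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# Toy 4x4 "slice" with a vertical 0->1 edge, and a 2x2 edge-sensitive kernel
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
kernel = [[1, -1],
          [1, -1]]
fmap = conv2d_valid(img, kernel)  # 3x3 feature map; responds at the edge column
pooled = max_pool2x2(fmap)
```

In LeNet-5 several such convolution/pooling stages are stacked, with learned kernels, before fully connected classification layers.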


international conference on digital signal processing | 2013

Hand posture recognition using K-NN and Support Vector Machine classifiers evaluated on our proposed HandReader dataset

Ghassem Tofighi; Anastasios N. Venetsanopoulos; Kaamran Raahemifar; Soosan Beheshti; Helia Mohammadi

In this paper, we propose a real-time vision-based hand posture recognition approach based on appearance-based features of the hand poses. Our approach has three main steps: Preprocessing, Feature Extraction, and Posture Recognition. Additionally, a new hand posture dataset called HandReader is created and introduced. HandReader is a dataset of 500 images of 10 different hand postures, the 10 non-motion-based American Sign Language alphabet letters, against dark backgrounds. The dataset was gathered by capturing images of 50 male and female individuals performing these 10 hand postures in front of a common camera. 20% of the HandReader images are used for training and the remaining 80% are used to test the proposed methodology. All images are normalized in the preprocessing step. The normalized images are then converted to feature vectors in the Feature Extraction step. To train the system, a k-NN classifier and SVM classifiers with linear and RBF kernels were employed and their results compared. These approaches were used to classify hand posture images into 10 posture classes. The SVM classifier with a linear kernel performed best, with the highest true detection rate (96%) among the evaluated techniques.
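The k-NN baseline in the comparison can be sketched in a few lines. The feature vectors and labels below are hypothetical stand-ins; the paper's actual appearance-based features are higher-dimensional.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def knn_predict(train, query, k=3):
    """Classify `query` by majority label among its k nearest training vectors.
    `train` is a list of (feature_vector, label) pairs."""
    neighbors = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

# Hypothetical 2-D feature vectors for two posture classes "A" and "B"
train = [((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
         ((0.9, 0.8), "B"), ((0.8, 0.9), "B")]
print(knn_predict(train, (0.15, 0.15)))  # -> A
```

An SVM replaces this instance-based vote with a learned separating hyperplane (linear kernel) or a non-linear boundary (RBF kernel), which is where the reported 96% linear-SVM result comes from.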


international conference on digital signal processing | 2014

Hand pointing detection using live histogram template of forehead skin

Ghassem Tofighi; Nasser Ali Afarin; Kaamran Raahemifar; Anastasios N. Venetsanopoulos

Hand pointing detection has multiple applications in many fields, such as virtual reality and control devices in smart homes. In this paper, we propose a novel approach to detect the pointing vector in the 2D space of a room. After background subtraction, the face and forehead are detected. In the second step, the H-S plane histogram of the forehead skin in HSV space is calculated. Using these histogram templates of the user's skin and the back-projection method, skin areas are detected. The hand contours are extracted using the Freeman chain code algorithm. The next step is finding fingertips. Candidate fingertip points on the hand contour can be found in the convexity defects between the convex hull and the contour. We introduce a novel method for finding the fingertip based on special points on the contour and their relationships. Our approach detects hand-pointing vectors in live video from a common webcam with 94% TP and 85% TN rates.
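The live-template idea, building a histogram from the detected forehead and back-projecting it to find other skin pixels, can be sketched as follows. For brevity this uses a 1-D hue-only histogram rather than the paper's 2-D H-S histogram, and all hue values are hypothetical.

```python
def build_histogram(patch_hues, n_bins=8, max_hue=180):
    """Normalized hue histogram of a forehead patch (list of hue values)."""
    hist = [0] * n_bins
    for h in patch_hues:
        hist[min(h * n_bins // max_hue, n_bins - 1)] += 1
    peak = max(hist) or 1
    return [v / peak for v in hist]

def back_project(image_hues, hist, n_bins=8, max_hue=180, thresh=0.5):
    """Mark each pixel as skin (1) if its hue bin is frequent in the template."""
    return [1 if hist[min(h * n_bins // max_hue, n_bins - 1)] >= thresh else 0
            for h in image_hues]

# Hypothetical hues: the forehead patch clusters near hue 10 (skin-like)
forehead = [8, 10, 12, 9, 11]
hist = build_histogram(forehead)
print(back_project([10, 90, 11, 150], hist))  # -> [1, 0, 1, 0]
```

Because the template is rebuilt live from each user's own forehead, the skin model adapts to individual skin tone and lighting, which is the point of the approach.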


arXiv: Computer Vision and Pattern Recognition | 2016

Vision-based engagement detection in Virtual Reality

Ghassem Tofighi; Haisong Gu; Kaamran Raahemifar

User engagement modeling for manipulating actions in vision-based interfaces is one of the most important case studies of user mental state detection. In a Virtual Reality environment that employs camera sensors to recognize human activities, we have to know when the user intends to perform an action and when he/she is disengaged. Without a proper algorithm for recognizing engagement status, any activity could be interpreted as a manipulating action, known as the "Midas Touch" problem. The baseline approach to solving this problem is activating the gesture recognition system with focus gestures such as waving or raising a hand. However, a desirable natural user interface should be able to understand the user's mental state automatically. In this paper, a novel multi-modal model for engagement detection, DAIA, is presented. Using DAIA, the spectrum of mental states for performing an action is quantized into a finite number of engagement states. For this purpose, a Finite State Transducer (FST) is designed. This engagement framework shows how to integrate multi-modal information from user biometric data streams such as 2D and 3D imaging. The FST is employed to make state transitions smooth using a combination of several Boolean expressions. Our FST's overall true detection rate is 92.3% across four different states. Results also show the FST can segment user hand gestures more robustly.
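The finite-state mechanism can be sketched as a transition function driven by Boolean cues from the vision pipeline. The state names and transition conditions below are hypothetical illustrations of the quantized-engagement idea, not DAIA's actual design; the key property is that gestures only count as manipulating actions once the "engaged" state is reached, which avoids the Midas Touch problem.

```python
# Hypothetical quantized engagement states and Boolean-cue transitions.
DISENGAGED, ATTENTIVE, INTENDING, ENGAGED = (
    "disengaged", "attentive", "intending", "engaged")

def step(state, face_visible, hand_raised, gesture_started):
    """One FST transition driven by Boolean cues from the sensor streams."""
    if not face_visible:
        return DISENGAGED          # losing the user resets everything
    if state == DISENGAGED:
        return ATTENTIVE           # user is present and facing the camera
    if state == ATTENTIVE and hand_raised:
        return INTENDING           # a focus cue signals intent to act
    if state == INTENDING and gesture_started:
        return ENGAGED             # only now do gestures manipulate the UI
    return state                   # no condition fired: stay put (smoothness)

state = DISENGAGED
for cues in [(True, False, False), (True, True, False), (True, True, True)]:
    state = step(state, *cues)
print(state)  # -> engaged
```

Combining several Boolean expressions per transition (rather than a single raw detector output) is what smooths the state changes against sensor noise.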


arXiv: Performance | 2014

A Brief Review on Models for Performance Evaluation in DSS Architecture

Ghassem Tofighi; Kaamran Raahemifar; Anastasios N. Venetsanopoulos

Distributed Software Systems (DSS) are widely used today in real-time operations and modern enterprise applications. Performance is one of the most important and essential attributes for measuring the quality of service of distributed software. Performance models can be employed at early stages of the software development cycle to characterize the quantitative behavior of software systems. In this research, performance models based on the fuzzy logic, queuing network, and Petri net approaches are briefly reviewed. One of the most common approaches to performance analysis of distributed software systems is translating UML diagrams into mathematical modeling languages for the description of distributed systems, such as queuing networks or Petri nets. The attributes used for performance modeling in the literature are mostly machine-based; end-user and client parameters for performance evaluation are not covered extensively. Future research could therefore develop hybrid models that capture users' decision variables, making system performance evaluation more user-driven.
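The queuing-network approach mentioned above can be illustrated with its simplest building block, the M/M/1 queue, whose steady-state metrics follow directly from the arrival rate λ and service rate μ. This is a textbook result used as an illustration, not a model taken from the paper; the request rates are hypothetical.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of an M/M/1 queue (requires arrival < service)."""
    rho = arrival_rate / service_rate                    # server utilization
    assert rho < 1, "queue is unstable"
    response_time = 1.0 / (service_rate - arrival_rate)  # mean time in system
    queue_length = arrival_rate * response_time          # Little's law: N = lambda * R
    return rho, response_time, queue_length

# Hypothetical service: capacity 10 requests/s, offered load 8 requests/s
rho, r, n = mm1_metrics(8.0, 10.0)
print(rho, r, n)  # -> 0.8 0.5 4.0
```

Full queuing-network models compose many such service centers, one per software or hardware resource derived from the UML deployment and behavior diagrams.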


arXiv: Computer Vision and Pattern Recognition | 2016

Classification of Alzheimer's Disease using fMRI Data and Deep Learning Convolutional Neural Networks.

Saman Sarraf; Ghassem Tofighi


arXiv: Computer Vision and Pattern Recognition | 2016

Classification of Alzheimer's Disease Structural MRI Data by Deep Learning Convolutional Neural Networks.

Saman Sarraf; Ghassem Tofighi


Archive | 2017

SYSTEM AND METHOD OF REAL-TIME INTERACTIVE OPERATION OF USER INTERFACE

Nandita M. Nayak; Ghassem Tofighi; Haisong Gu


arXiv: Computer Vision and Pattern Recognition | 2016

Engagement Detection in Meetings.

Maria Frank; Ghassem Tofighi; Haisong Gu; Renate Fruchter
