Theus H. Aspiras
University of Dayton
Publications
Featured research published by Theus H. Aspiras.
international conference on information and communication security | 2011
Theus H. Aspiras; Vijayan K. Asari
We present a new computational technique for obtaining effective spectral feature metrics from EEG data for the recognition of emotional states of mind. Sequences of 256-channel EEG data captured by applying appropriate stimuli patterns are analyzed to establish the spatiotemporal relationships of the signals for different emotional states. The signal sequence in each channel of the EEG data is decomposed into five specific spectral bands (delta, theta, alpha, beta, and gamma bands) by a series of discrete wavelet transformations. Logarithmic compression of the spectral power values for each frequency band creates an effective set of features to represent different emotional states. The EEG data is preprocessed using a band-pass filter to remove frequency outliers, a notch filter to eliminate 60 Hz line noise, and a surface Laplacian montage to reduce the effect of ocular artifacts. EEG data of five subjects for five different emotions were recorded by our dense array data acquisition system (Geodesic EEG System 300 from EGI, Inc.) with visual stimuli patterns from the International Affective Picture System. A trained multi-layer perceptron network based classifier is used to categorize the extracted feature sets to the respective emotional states of mind. It is experimentally observed that the new set of features achieves a 94.27% average recognition rate across five different emotions, a significant improvement over other state-of-the-art feature representation methods.
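As a concrete illustration, the band decomposition and log-power feature step could look like the sketch below. The abstract does not name the mother wavelet, so a Haar wavelet is an assumption here, and the helper names are hypothetical:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform."""
    s = np.asarray(signal, dtype=float)
    s = s[: len(s) // 2 * 2].reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)  # low-pass half
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)  # high-pass half
    return approx, detail

def band_log_powers(signal, levels=5):
    """Decompose one channel into `levels` detail bands plus a final
    approximation, then log-compress the mean power of each band."""
    feats = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(np.log(np.mean(detail ** 2) + 1e-12))
    feats.append(np.log(np.mean(approx ** 2) + 1e-12))
    return np.array(feats)
```

Each successive level halves the sampling rate, so the five detail bands roughly correspond to the gamma-through-delta ranges for a suitably sampled signal.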
international conference on image processing | 2014
Theus H. Aspiras; Vijayan K. Asari; Juan R. Vasquez
Most detection algorithms are established by using well-defined features. Since wide area imagery is low resolution and has features that are not well defined, a local intensity distribution based methodology is a likely candidate. We propose a new methodology, the Gaussian Ringlet Intensity Distribution (GRID), a derivative of ring-partitioned histograms for local intensity distribution based object tracking in low-resolution environments that addresses the issue of rotation invariance. We observed that the proposed algorithm produces the highest accuracy among the state-of-the-art methodologies compared and provides robust features for rotationally invariant detection and tracking in wide area motion imagery.
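A minimal sketch of Gaussian-weighted ring-partitioned intensity histograms in the spirit of GRID is shown below. The ring count, bin count, and Gaussian width are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def gaussian_ringlet_histograms(patch, n_rings=4, n_bins=8, sigma=1.5):
    """Weight each concentric ring of the patch with a Gaussian centered
    on the ring's mean radius, then build one intensity histogram per
    ring. Ring partitions are rotation invariant by construction."""
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
    r_max = r.max()
    feats = []
    for k in range(n_rings):
        mu = (k + 0.5) * r_max / n_rings               # ring center radius
        weight = np.exp(-((r - mu) ** 2) / (2 * sigma ** 2))
        hist, _ = np.histogram(patch, bins=n_bins, range=(0, 256),
                               weights=weight)
        feats.append(hist / (hist.sum() + 1e-12))       # normalize per ring
    return np.concatenate(feats)
```

Because every pixel keeps its radius under rotation about the patch center, the per-ring histograms, and hence the concatenated feature, are unchanged by in-plane rotation.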
Neurocomputing | 2017
Theus H. Aspiras; Vijayan K. Asari
Extensive research and evaluations have been conducted on neural networks to improve classification accuracy and training time. Many classical architectures of neural networks have been modified in several different ways for advancement in design. We propose a new architecture, the hierarchical autoassociative polynomial neural network (HAP Net), which combines several neural network concepts. HAP Net is a combination of polynomial networks, which provide the network with nonlinear weighting, deep belief networks, which obtain higher-level abstractions of the incoming data, and convolutional neural networks, which localize regions of neurons. By incorporating all of these concepts together along with a derivation of a standard backpropagation algorithm, we produce a strong neural network that has the strengths of each concept. Evaluations have been conducted on the MNIST database, a well-known character database tested by many state-of-the-art classification algorithms, and have found HAP Net to have one of the lowest test error rates among many leading algorithms.
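The polynomial-weighting idea can be illustrated with a toy forward pass in which the input is augmented with element-wise powers before a standard hidden layer. This is a sketch only; HAP Net's actual hierarchical layering and backpropagation derivation are not reproduced, and all names and sizes are illustrative:

```python
import numpy as np

def polynomial_expand(x, degree=2):
    """Augment an input vector with its element-wise powers, giving each
    downstream weight a nonlinear (polynomial) view of the input."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x ** d for d in range(1, degree + 1)])

def forward(x, W1, b1, W2, b2, degree=2):
    """One hidden layer applied to the polynomially expanded input."""
    z = polynomial_expand(x, degree)
    h = np.tanh(W1 @ z + b1)   # hidden layer on expanded features
    return W2 @ h + b2         # linear output (class scores)
```

Because the expansion is a fixed, differentiable mapping, the standard backpropagation chain rule applies unchanged to the weights that follow it.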
international conference on information processing | 2011
Theus H. Aspiras; Vijayan K. Asari
Evaluation of several feature metrics derived from decomposed wavelet coefficients of electroencephalographic data for emotion recognition is presented in this paper. Five different emotions (joy, sadness, disgust, fear, and neutral) are elicited by providing stimulus patterns, and EEG data is recorded for each participant. The collected dataset is preprocessed through a band-pass filter, a notch filter, and a Laplacian montage for noise and artifact removal. Discrete Wavelet Transform based spectral decomposition is employed to separate each of the 256-channel data into 5 specific frequency bands (delta, theta, alpha, beta, and gamma bands), and several feature metrics are calculated to represent different emotions. A multi-layer perceptron neural network is used to classify the feature data into different emotions. Experimental evaluations performed on EEG data captured by a 256-channel EGI data acquisition system show promising results with an average emotion recognition rate of 91.73% for 5 subjects.
Procedia Computer Science | 2011
Binu Muraleedharan Nair; Jacob Foytik; Richard C. Tompkins; Yakov Diskin; Theus H. Aspiras; Vijayan K. Asari
We propose a real-time system for person detection, recognition and tracking using frontal and profile faces. The system integrates face detection, face recognition and tracking techniques. The face detection algorithm uses both frontal face and profile face detectors by extracting Haar features and using them in a cascade of boosted classifiers. The pose is determined from the face detection algorithm, which uses a combination of profile and frontal face cascades, and, depending on the pose, the face is compared with a particular set of faces in the same pose range for classification. The detected faces are recognized by projecting them onto the eigenspace obtained from the training phase using modular weighted PCA, and are then tracked using a Kalman filter multiple face tracker. In the proposed system, the pose range is divided into three bins into which the faces are sorted, and each bin is trained separately to have its own eigenspace. This system has the advantage of recognizing and tracking an individual with minimal false positives due to pose variations.
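The per-bin eigenspace projection step can be sketched with plain PCA via SVD. The modular weighting of the actual method is omitted, and the function names are illustrative:

```python
import numpy as np

def fit_eigenspace(faces, n_components=4):
    """Learn one pose bin's eigenspace: the mean face plus the top
    principal components of the flattened training faces in that bin."""
    X = faces.reshape(len(faces), -1).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centered data yields the eigenfaces as right singular vectors
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(face, mean, components):
    """Project a detected face into the bin's eigenspace; the nearest
    stored projection (e.g. by Euclidean distance) gives the identity."""
    return components @ (face.ravel().astype(float) - mean)
```

Keeping a separate eigenspace per pose bin, as the paper describes, means each projection only competes against training faces seen from a similar viewpoint.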
national aerospace and electronics conference | 2015
Evan Krieger; Paheding Sidike; Theus H. Aspiras; Vijayan K. Asari
The tracking of vehicles in wide area motion imagery (WAMI) can be a challenge due to the full and partial occlusions that can occur. The proposed solution for this challenge is to use the Directional Ringlet Intensity Feature Transform (DRIFT) feature extraction method with a Kalman filter. The proposed solution will utilize the properties of the DRIFT feature to solve the partial occlusion challenges. The Kalman filter will be used to estimate the object location during a full occlusion. The proposed solution will be tested on several vehicle sequences from the Columbus Large Image Format (CLIF) dataset.
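The occlusion-coasting role of the Kalman filter can be sketched with a standard 2-D constant-velocity filter: during a full occlusion only predict() is called, so the track coasts at its estimated velocity. The noise parameters q and r are illustrative assumptions:

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2-D constant-velocity Kalman filter, state [x, y, vx, vy]."""
    def __init__(self, x, y, q=1e-2, r=1.0):
        self.s = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.array([[1., 0., 1., 0.],   # unit time step
                           [0., 1., 0., 1.],
                           [0., 0., 1., 0.],
                           [0., 0., 0., 1.]])
        self.H = np.array([[1., 0., 0., 0.],   # we observe position only
                           [0., 1., 0., 0.]])
        self.Q = q * np.eye(4)                 # process noise
        self.R = r * np.eye(2)                 # measurement noise

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, zx, zy):
        z = np.array([zx, zy])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ (z - self.H @ self.s)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]
```

When the DRIFT matcher reacquires the vehicle after the occlusion, the update() step snaps the coasted estimate back onto the new detection.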
Proceedings of SPIE | 2014
Paheding Sidike; Theus H. Aspiras; Vijayan K. Asari; Mohammad S. Alam
A new rotation-invariant pattern recognition technique, based on the spectral fringe-adjusted joint transform correlator (SFJTC) and histogram representation, is proposed. Synthetic discriminant function (SDF) based joint transform correlation (JTC) techniques have shown attractive performance in rotation-invariant pattern recognition applications. However, when targets are present in a complex scene, SDF-based JTC techniques may produce false detections due to inaccurate estimation of the rotation angle of the object. Therefore, we herein propose an efficient rotation-invariant JTC scheme which does not require a priori rotation training of the reference image. In the proposed technique, a Vectorized Gaussian Ringlet Intensity Distribution (VGRID) descriptor is also proposed to obtain rotation-invariant features from the reference image. In this step, we divide the reference image into multiple Gaussian ringlets, extract the histogram distribution of each ringlet, and then concatenate them into a vector as a target signature. Similarly, an unknown input scene is also represented by the VGRID, which produces a multidimensional input image. Finally, the concept of the SFJTC is incorporated and utilized for target detection in the input scene. The classical SFJTC was proposed for detecting very small objects involving only a few pixels in hyperspectral imagery. However, in our proposed algorithm, the SFJTC is applied to a two-dimensional image without limitation on the size of objects and, most importantly, it achieves rotation-invariant target discriminability. Simulation results verify that the proposed scheme performs satisfactorily in detecting targets in the input scene irrespective of rotation of the object.
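The frequency-domain correlation at the heart of a joint transform correlator can be sketched digitally via the correlation theorem. This omits the fringe-adjusted filtering and the VGRID descriptor, and the function name is illustrative:

```python
import numpy as np

def correlation_peak(scene, target):
    """Locate `target` in `scene` by frequency-domain cross-correlation,
    the core operation a joint transform correlator performs optically."""
    T = np.zeros_like(scene, dtype=float)
    T[:target.shape[0], :target.shape[1]] = target
    # correlation theorem: corr = IFFT( FFT(scene) * conj(FFT(target)) )
    corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(T))).real
    return np.unravel_index(np.argmax(corr), corr.shape)
```

The location of the correlation peak marks the target position; the fringe-adjusted filter in the actual SFJTC sharpens this peak before detection.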
Proceedings of SPIE | 2011
Theus H. Aspiras; Vijayan K. Asari
In this paper, we evaluate the feature extraction technique of Recoursing Energy Efficiency on electroencephalograph data for human emotion recognition. A protocol has been established to elicit six distinct emotions (joy, sadness, disgust, fear, surprise, and neutral). EEG signals are collected using a 256-channel system, preprocessed using band-pass filters and a Laplacian montage, and decomposed into five frequency bands using the Discrete Wavelet Transform. The Recoursing Energy Efficiency (REE) is calculated and applied to a multi-layer perceptron network for classification. We compare the performance of REE features with conventional energy-based features.
Pattern Recognition and Tracking XXIX | 2018
Evan Krieger; Theus H. Aspiras; Vijayan K. Asari; Kevin Krucki; Bryce Wauligman; Yakov Diskin; Karl Salva
Object trackers for full-motion video (FMV) need to handle object occlusions (partial and short-term full), rotation, scaling, illumination changes, complex background variations, and perspective variations. Unlike traditional deep learning trackers that require extensive training time, the proposed Progressively Expanded Neural Network (PENNet) tracker methodology utilizes a modified variant of the extreme learning machine, which encompasses polynomial expansion and state-preserving methodologies. This reduces the training time significantly for online training of the object. The proposed algorithm is evaluated on the DARPA Video Verification of Identity (VIVID) dataset, wherein the selected high-value targets (HVTs) are vehicles.
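The extreme-learning-machine core that PENNet modifies can be sketched as a random fixed hidden layer whose output weights are solved in closed form, which is why online training is fast. The polynomial expansion and state-preserving additions are omitted, and the hidden size and seed are illustrative:

```python
import numpy as np

def train_elm(X, Y, n_hidden=80, seed=0):
    """Extreme learning machine: random, fixed hidden weights; output
    weights solved by least squares -- no iterative backpropagation."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                         # random hidden features
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)   # closed-form output solve
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only the single least-squares solve depends on the training targets, retraining on a newly appearing object amounts to one linear solve rather than many gradient epochs.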
Mobile Multimedia/Image Processing, Security, and Applications 2018 | 2018
Theus H. Aspiras; Hussin K. Ragb; Vijayan K. Asari
Many human detection algorithms are able to detect humans in various environmental conditions with high accuracy, but they rely heavily on color information for detection, which is not robust to lighting changes and varying colors. This problem is further amplified with infrared imagery, which contains only grayscale information. The proposed algorithm for human detection uses intensity distribution, gradient, and texture features for effective detection of humans in infrared imagery. For intensity, histogram information is obtained from the grayscale channel. For extracting gradients, we utilize Histograms of Oriented Gradients for better information in various lighting scenarios. For extracting texture information, the center-symmetric local binary pattern gives rotational invariance as well as lighting invariance for robust features under these conditions. Various binning strategies help keep the inherent structure embedded in the features, which provides enough information for robust detection of the humans in the scene. The features are then classified using an AdaBoost classifier, providing a tree-like structure for detection at multiple scales. The algorithm has been trained and tested on IR imagery and has been found to be fairly robust to viewpoint changes and lighting changes in dynamic backgrounds and visual scenes.
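The center-symmetric LBP texture feature mentioned above can be sketched as follows, assuming a 3x3 neighborhood and a zero comparison threshold:

```python
import numpy as np

def cs_lbp(img, threshold=0.0):
    """Center-symmetric LBP: compare the 4 opposing pixel pairs of each
    3x3 neighborhood, giving a 4-bit (0..15) code per interior pixel.
    Codes depend only on gray-level differences, so they are unchanged
    by additive lighting shifts."""
    img = np.asarray(img, dtype=float)
    # the four center-symmetric pairs around each interior pixel
    pairs = [
        (img[:-2, :-2], img[2:, 2:]),     # NW vs SE
        (img[:-2, 1:-1], img[2:, 1:-1]),  # N  vs S
        (img[:-2, 2:], img[2:, :-2]),     # NE vs SW
        (img[1:-1, 2:], img[1:-1, :-2]),  # E  vs W
    ]
    code = np.zeros(img[1:-1, 1:-1].shape, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b > threshold).astype(np.uint8) << bit)
    return code
```

Histograms of these codes over detection windows would then be concatenated with the intensity and HOG features before classification.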