Publication


Featured research published by Ali Taalimi.


Advanced Video and Signal Based Surveillance | 2015

Online multi-modal task-driven dictionary learning and robust joint sparse representation for visual tracking

Ali Taalimi; Hairong Qi; Rahman Khorsandi

Robust visual tracking is a challenging problem due to pose variation, occlusion, and cluttered backgrounds. No single feature is robust to all possible scenarios in a video sequence; however, exploiting multiple features has proven effective in overcoming challenging situations in visual tracking. We propose a new framework for multi-modal fusion at both the feature level and the decision level by training a reconstructive and discriminative dictionary and a classifier for each modality simultaneously, with the additional constraint of label consistency across different modalities. In addition, a joint decision measure is designed based on both reconstruction and classification error to adaptively adjust the weights of different features, so that unreliable features can be removed from tracking. The proposed tracking scheme is referred to as label-consistent and fusion-based joint sparse coding (LC-FJSC). Extensive experiments on publicly available videos demonstrate that LC-FJSC outperforms state-of-the-art trackers.
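As a rough illustration of the decision-level fusion step, the sketch below codes each feature modality against its own dictionary with orthogonal matching pursuit and derives adaptive fusion weights from the per-modality reconstruction errors. The dictionaries are assumed to be pre-trained; the joint, label-consistent training in LC-FJSC is considerably more involved.

```python
# Minimal sketch of decision-level fusion across feature modalities:
# unreliable (high reconstruction error) modalities are down-weighted.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def fuse_modalities(dicts, features, n_nonzero=5):
    """dicts: list of (d_m, K) dictionaries; features: list of (d_m,) vectors."""
    codes, errors = [], []
    for D, x in zip(dicts, features):
        a = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)  # sparse code
        codes.append(a)
        errors.append(np.linalg.norm(x - D @ a))            # reconstruction error
    errors = np.asarray(errors)
    # Adaptive weights: reliable (low-error) modalities dominate, and a
    # modality whose error explodes is effectively removed from tracking.
    w = np.exp(-(errors - errors.min()))
    return codes, w / w.sum()
```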


Medical Image Computing and Computer Assisted Intervention | 2015

Multimodal Dictionary Learning and Joint Sparse Representation for HEp-2 Cell Classification

Ali Taalimi; Shahab Ensafi; Hairong Qi; Shijian Lu; Ashraf A. Kassim; Chew Lim Tan

Automatic classification of Indirect Immunofluorescence (IIF) images of HEp-2 cells is gaining increasing interest in Antinuclear Autoantibody (ANA) detection. To improve classification accuracy, we propose a multi-modal joint dictionary learning method that obtains a discriminative and reconstructive dictionary while simultaneously training a classifier. Here, the term 'multi-modal' refers to features extracted using different algorithms from the same data set. To exploit information fusion between feature modalities, the algorithm is designed so that the sparse codes of all modalities of each sample share the same sparsity pattern. The contribution of this paper is two-fold. First, we propose a new framework for multi-modal fusion at the feature level. Second, we impose an additional constraint on the consistency of sparse coefficients among different modalities of the same class. Extensive experiments are conducted on the ICPR2012 and ICIP2013 HEp-2 data sets. All results confirm the higher accuracy of the proposed method compared with the state of the art.
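The shared-sparsity constraint can be made concrete with a small joint coding step: stack the codes of all modalities as columns of one matrix and apply an l2,1 (row-wise) penalty, so that all modalities select the same dictionary atoms. Below is a minimal ISTA-style sketch under that assumption, with per-modality dictionaries (all with the same number of atoms) taken as given; it is not the paper's full dictionary-and-classifier training.

```python
# Minimal sketch of joint sparse coding with a shared sparsity pattern.
# Column m of A is the code of modality m; row-wise soft-thresholding
# zeroes whole rows, so all modalities share the same active atoms.
import numpy as np

def joint_sparse_code(dicts, xs, lam=0.1, n_iter=200):
    K, M = dicts[0].shape[1], len(dicts)    # all dictionaries share K atoms
    A = np.zeros((K, M))
    L = max(np.linalg.norm(D, 2) ** 2 for D in dicts)   # step-size bound
    for _ in range(n_iter):
        G = np.column_stack([D.T @ (D @ A[:, m] - x)
                             for m, (D, x) in enumerate(zip(dicts, xs))])
        B = A - G / L
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        A = B * np.maximum(0.0, 1.0 - lam / (L * np.maximum(norms, 1e-12)))
    return A
```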


International Conference on Image Processing | 2016

Distributed object recognition in smart camera networks

Alireza Rahimpour; Ali Taalimi; Jiajia Luo; Hairong Qi

Distributed object recognition is a fast-growing research area, motivated mainly by the emergence of high-performance cameras and their integration with modern wireless sensor network technologies. In wireless distributed object recognition, bandwidth is limited, and it is desirable to avoid transmitting redundant visual features from multiple cameras to the base station. In this paper, we propose a histogram compression and feature selection framework based on Sparse Non-negative Matrix Factorization (SNMF). In our method, histograms of the features are modeled as linear combinations of a small set of signature vectors with associated weight vectors. Recognition at the base station is then performed based on these small sets of weights transmitted from each camera. Furthermore, we propose another distributed object recognition scheme based on local classification in each camera: each camera sends only its label information to the base station, which makes the final decision by majority voting. Experiments on the BMW dataset confirm that our approach outperforms the state of the art in both accuracy and bandwidth usage.
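As a minimal sketch of the compression idea, scikit-learn's NMF with an l1 penalty can stand in for SNMF: histograms are factored into a small set of signature vectors and sparse weights, and only the weights need to be transmitted. The toy data, component count, and penalty are illustrative, and the parameter names (alpha_W, l1_ratio) follow scikit-learn 1.x.

```python
# Histograms H are factored as H ~ W @ S; each camera transmits only its
# rows of W, while the signature vectors S are shared with the base
# station once.
import numpy as np
from sklearn.decomposition import NMF

H = np.random.rand(500, 1024)              # 500 histograms, 1024 bins (toy data)
model = NMF(n_components=20, init="nndsvda",
            alpha_W=0.1, l1_ratio=1.0, max_iter=500)
W = model.fit_transform(H)                 # (500, 20) sparse weights: transmit
S = model.components_                      # (20, 1024) signature vectors: shared
reconstruction = W @ S                     # base station recovers the histograms
```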


International Conference on Image Processing | 2016

Robust coupling in space of sparse codes for multi-view recognition

Ali Taalimi; Alireza Rahimpour; Cristian Capdevila; Zhifei Zhang; Hairong Qi

Classical dictionary learning algorithms that rely on a single source of information have been used successfully for classification tasks, and exploiting multiple sources has been shown to be advantageous in challenging real-world situations. We propose a new framework that exploits robust modality fusion to achieve better classification performance than single-source methods; multimodal learning can leverage any correlations between sensor modalities found in the data. We propose a new bilevel optimization, referred to as MCJWDL, in which we perform supervised dictionary learning while forcing a coupling between the sparse codes obtained from different sources of information. Extensive experiments demonstrate that MCJWDL outperforms state-of-the-art sparse representation and dictionary learning approaches on multi-view object and multi-view action recognition.
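The coupling idea can be sketched as two lasso problems tied by a quadratic term gamma * ||a1 - a2||^2 that pulls the codes of the two views together, assuming dictionaries with the same number of atoms. This illustrates only the coupling between sparse codes, not the bilevel MCJWDL optimization itself.

```python
# Alternating ISTA updates for two views whose sparse codes are coupled.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def coupled_codes(D1, x1, D2, x2, lam=0.05, gamma=1.0, n_iter=300):
    a1, a2 = np.zeros(D1.shape[1]), np.zeros(D2.shape[1])
    L1 = np.linalg.norm(D1, 2) ** 2 + 2 * gamma   # step-size bounds
    L2 = np.linalg.norm(D2, 2) ** 2 + 2 * gamma
    for _ in range(n_iter):
        g1 = D1.T @ (D1 @ a1 - x1) + 2 * gamma * (a1 - a2)
        a1 = soft(a1 - g1 / L1, lam / L1)
        g2 = D2.T @ (D2 @ a2 - x2) + 2 * gamma * (a2 - a1)
        a2 = soft(a2 - g2 / L2, lam / L2)
    return a1, a2
```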


arXiv: Systems and Control | 2017

Event analysis of pulse-reclosers in distribution systems through sparse representation

M. Ehsan Raoufat; Ali Taalimi; Kevin Tomsovic; Robert Hay

The pulse-recloser uses pulse-testing technology to verify that the line is clear of faults before initiating a reclose operation, which significantly reduces stress on system components (e.g., substation transformers) and voltage sags on adjacent feeders. Online event analysis of pulse-reclosers is essential to increase the overall utility of the devices, especially when numerous devices are installed throughout the distribution system. In this paper, field data recorded from several devices are analyzed to identify specific activity and fault locations. An algorithm is developed to screen the data, identify the status of each pole, and tag time windows containing a possible pulse event. Selected time windows are then further analyzed and classified using a sparse representation technique by solving an ℓ1-regularized least-squares problem. Classification is obtained by comparing the pulse signature against a reference dictionary to find the set of atoms that most closely matches the pulse features. This work also sheds light on the possibility of fault classification based on the pulse signature. Field data collected from a distribution system are used to verify the effectiveness and reliability of the proposed method.
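A minimal sketch of the classification step: the pulse signature is coded against a labeled reference dictionary by ℓ1-regularized least squares, and the event class is the one whose atoms reconstruct the signature with the smallest residual. Dictionary construction, pole-status screening, and windowing are assumed to have happened already.

```python
# Sparse-representation classification of a pulse signature.
import numpy as np
from sklearn.linear_model import Lasso

def classify_pulse(D, labels, x, alpha=0.01):
    """D: (dim, n_atoms) reference dictionary; labels: (n_atoms,) class per atom."""
    a = Lasso(alpha=alpha, max_iter=5000).fit(D, x).coef_   # l1-regularized LS
    residuals = {}
    for c in np.unique(labels):
        a_c = np.where(labels == c, a, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(x - D @ a_c)
    return min(residuals, key=residuals.get)  # best-matching event class
```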


International Conference on Signal Processing | 2007

Development of Alzheimer's Disease Recognition using Semiautomatic Analysis of Statistical Parameters based on Frequency Characteristics of Medical Images

Meysam Torabi; Hassan Moradzadeh; Reza Vaziri; S. Razavian; Reza Dehestani Ardekani; M. Rahmandoust; Ali Taalimi; Emad Fatemizadeh

This paper presents an effective algorithm for analyzing MR images to recognize Alzheimer's disease (AD) in a patient's brain. The features of interest are categorized into features of the spatial domain (FSDs) and features of the frequency domain (FFDs), the latter based on the first four statistical moments of the wavelet transform. Extracted features are classified by a multi-layer perceptron artificial neural network (ANN). Before classification, the number of features is reduced from 44 to 12 to eliminate correlations among them. The contribution of this paper is to demonstrate that, by using the wavelet transform, the number of features needed for AD diagnosis is reduced compared with previous work. Over a set of 93 MR images, we achieved 79% accuracy on the test set and 100% on the training set.
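A rough sketch of such a pipeline follows, with an illustrative wavelet family, decomposition depth, and network size rather than the paper's exact settings: decompose each MR slice with a 2-D wavelet transform, take the first four statistical moments of every subband as features, and classify with an MLP.

```python
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from sklearn.neural_network import MLPClassifier

def wavelet_moments(image, wavelet="db4", level=3):
    """First four statistical moments of every wavelet subband."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = []
    for band in [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]:
        v = np.ravel(band)
        feats += [v.mean(), v.var(), skew(v), kurtosis(v)]
    return np.array(feats)

# Assuming slices (list of 2-D MR images) and y (AD / normal labels):
# X = np.stack([wavelet_moments(im) for im in slices])
# clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
```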


International Conference on Acoustics, Speech, and Signal Processing | 2017

Feature encoding in band-limited distributed surveillance systems

Alireza Rahimpour; Ali Taalimi; Hairong Qi

Distributed surveillance systems have become popular in recent years due to security concerns. However, transmitting high-dimensional data in bandwidth-limited distributed systems is a major challenge. In this paper, we address this issue with a novel probabilistic algorithm based on the divergence between the probability distributions of the visual features, which reduces their dimensionality and thus saves network bandwidth in distributed wireless smart camera networks. We demonstrate the effectiveness of the proposed approach through extensive experiments on two surveillance recognition tasks.
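One simple instance of the idea, sketched for a two-class problem: score each feature dimension by the symmetric KL divergence between its per-class histograms and transmit only the top-scoring dimensions. This illustrates the general divergence-based approach, not the paper's specific algorithm.

```python
# Rank feature dimensions by how differently the two classes distribute.
import numpy as np

def kl(p, q, eps=1e-10):
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q))

def select_dims(X, y, k=32, bins=16):
    scores = []
    for j in range(X.shape[1]):
        lo, hi = X[:, j].min(), X[:, j].max()
        p, _ = np.histogram(X[y == 0, j], bins=bins, range=(lo, hi))
        q, _ = np.histogram(X[y == 1, j], bins=bins, range=(lo, hi))
        p, q = p / max(p.sum(), 1), q / max(q.sum(), 1)
        scores.append(kl(p, q) + kl(q, p))      # symmetric KL divergence
    return np.argsort(scores)[-k:]              # indices of the k best dimensions
```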


Workshop on Applications of Computer Vision | 2016

Learning patch-dependent kernel forest for person re-identification

Wei Wang; Ali Taalimi; Kun Duan; Rui Guo; Hairong Qi

In this paper, we propose a new approach to the person re-identification problem: discovering the correct matches for a query pedestrian image from a set of gallery images. It is motivated by our observation that the overall complex inter-camera transformation, caused by changes in camera viewpoint, person pose, and illumination, can be effectively modeled by a combination of many simple local transforms, which leads us to learn a set of more specific local metrics rather than a fixed metric over the feature vector of a whole image. Given training image pairs, we first align the local patches using spatially constrained dense matching. Then, we use a decision tree to partition the space of aligned local patch-pairs into different configurations according to the similarity of the local cross-view transforms. Finally, a local metric kernel is learned for each configuration at the tree's leaf nodes by linear regression. The pairwise distance between a query image and a gallery image is summarized from all the pairwise distances of local patches, each measured by its local metric kernel. Multiple decision trees form the proposed random kernel forest, which discriminatively assigns the optimal local metric kernel to each local image patch during re-identification. Experimental results on public benchmarks demonstrate that our approach achieves very competitive performance with a relatively simple learning scheme.
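The sketch below illustrates a single tree of this kind under simplifying assumptions: patch-pair difference features and match labels are taken as given, a decision tree partitions the pairs into configurations, and a ridge regressor per leaf stands in for the local metric kernel. The paper's forest, patch alignment, and metric learning are richer than this.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import Ridge

def fit_local_metrics(pair_feats, match_labels, max_leaves=8):
    """pair_feats: (n, d) per aligned patch-pair; match_labels: 1 = same person."""
    tree = DecisionTreeClassifier(max_leaf_nodes=max_leaves)
    tree.fit(pair_feats, match_labels)
    leaves = tree.apply(pair_feats)             # configuration of each pair
    local = {leaf: Ridge().fit(pair_feats[leaves == leaf],
                               match_labels[leaves == leaf])
             for leaf in np.unique(leaves)}     # one local model per leaf
    return tree, local

def score_pair(tree, local, f):
    leaf = tree.apply(f.reshape(1, -1))[0]      # route pair to its configuration
    return local[leaf].predict(f.reshape(1, -1))[0]
```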


IEEE Global Conference on Signal and Information Processing | 2015

Joint weighted dictionary learning and classifier training for robust biometric recognition

Rahman Khorsandi; Ali Taalimi; Mohamed Abdel-Mottaleb; Hairong Qi

In this paper, we present an automated system for robust biometric recognition based on sparse representation and dictionary learning. In sparse representation, features extracted from the training data are used to develop a dictionary, and classification is achieved by representing the extracted features of the test data as a linear combination of dictionary entries. Training data in real-world applications are likely to be exposed to geometric transformations, which poses a major challenge for designing discriminative dictionaries. We propose a joint weighted dictionary learning and classifier training (JWDL-CT) approach that simultaneously learns, from a set of training samples, the dictionary atoms along with weight vectors that correspond to them. The components of the weight vector associated with an atom represent the relationship between the atom and each of the classes; the weight vectors and atoms are obtained jointly during dictionary learning. In addition, a constraint is imposed on the correlation between atoms to decrease their mutual similarity. The proposed objective function enhances the class-discrimination capability of individual atoms, which makes the learned dictionaries especially suitable for classifying query images with very sparse representations. Experiments conducted on the West Virginia University (WVU) and University of Notre Dame (UND) ear recognition datasets show that the proposed method outperforms other state-of-the-art classifiers.
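A minimal sketch of how such atom-class weight vectors could be used at test time, assuming a dictionary D and a weight matrix Wt (atoms by classes) already produced by training, which this sketch does not reproduce: code the query very sparsely, then let each active atom vote for the classes according to its weight vector.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def predict_class(D, Wt, x, n_nonzero=10):
    """D: (dim, n_atoms) dictionary; Wt: (n_atoms, n_classes) atom-class weights."""
    a = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)  # very sparse code
    return int(np.argmax(Wt.T @ np.abs(a)))             # weighted atom votes
```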


Workshop on Applications of Computer Vision | 2016

Deep tree-structured face: A unified representation for multi-task facial biometrics

Rui Guo; Liu Liu; Wei Wang; Ali Taalimi; Chi Zhang; Hairong Qi

Automatic facial image analysis has received considerable research interest due to its important role in computer vision and biometrics. As the key component, the face feature is usually extracted under a largely controlled environment and learned for a specific task, which limits its discriminative capability in a multi-task learning scenario. In this paper, we present a novel deeply learned tree-structured face representation that models the human face with multiple semantic meanings, such as identity, expression, and age, yielding a unified feature representation of the facial image. The tree structure is built from an unsupervised shallow network that generates the low-level features serving as the leaf nodes, with the designed semi-supervised AutoEncoder applied recursively to generate the intermediate and root nodes. By incorporating label information with different semantic meanings, the semi-supervised AutoEncoder aims to disentangle the latent factors embedded in facial images with an automatically learned tree structure and weights. To validate the effectiveness of the proposed representation, we design comprehensive experiments on the FACES dataset, a challenging benchmark that reflects multiple biometric factors. We show that the proposed feature yields a unified representation for multi-task facial biometrics, and the multi-task learning framework is applicable to many other computer vision tasks.
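A minimal PyTorch sketch of a single semi-supervised AutoEncoder node of this kind, with illustrative layer sizes: the latent code is trained with a reconstruction loss plus a classification loss tied to one semantic label, so the code absorbs that factor. The paper composes such nodes recursively into a tree, which this sketch does not attempt.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedAE(nn.Module):
    def __init__(self, d_in=256, d_lat=64, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_lat), nn.ReLU())
        self.dec = nn.Linear(d_lat, d_in)        # reconstruction head
        self.cls = nn.Linear(d_lat, n_classes)   # semantic-label head

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), self.cls(z)

def loss_fn(model, x, y, beta=0.5):
    x_hat, logits = model(x)
    recon = F.mse_loss(x_hat, x)              # unsupervised term
    sup = F.cross_entropy(logits, y)          # supervised term on labeled data
    return recon + beta * sup
```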

Collaboration


Dive into Ali Taalimi's collaboration.

Top Co-Authors

Hairong Qi (University of Tennessee)
Liu Liu (University of Tennessee)
Rui Guo (University of Tennessee)
Wei Wang (University of Tennessee)
Chi Zhang (University of Tennessee)
Hesam Shams (University of Tennessee)
Jiajia Luo (University of Tennessee)