
Publication


Featured research published by Ahmed Bilal Ashraf.


Image and Vision Computing | 2009

The painful face - Pain expression recognition using active appearance models

Ahmed Bilal Ashraf; Simon Lucey; Jeffrey F. Cohn; Tsuhan Chen; Zara Ambadar; Kenneth M. Prkachin; Patricia Solomon

Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or, in some circumstances (i.e., young children and the severely ill), not even possible. To circumvent these problems, behavioral scientists have identified reliable and valid facial indicators of pain. Hitherto, these methods have required manual measurement by highly skilled human observers. In this paper we explore an approach for automatically recognizing acute pain without the need for human observers. Specifically, our study was restricted to automatically detecting pain in adult patients with rotator cuff injuries. The system employed video input of the patients as they moved their affected and unaffected shoulders. Two types of ground truth were considered. Sequence-level ground truth consisted of Likert-type ratings by skilled observers. Frame-level ground truth was calculated from the presence/absence and intensity of facial actions previously associated with pain. Active appearance models (AAM) were used to decouple shape and appearance in the digitized face images. Support vector machines (SVM) were compared for several representations from the AAM and ground truth of varying granularity. We explored two questions pertinent to the construction, design and development of automatic pain detection systems. First, at what level (i.e., sequence- or frame-level) should datasets be labeled in order to obtain satisfactory automatic pain detection performance? Second, how important is it, at both levels of labeling, that we non-rigidly register the face?
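The AAM-plus-SVM pipeline described above can be sketched as follows. This is an illustrative stand-in, not the authors' code: the feature arrays are synthetic placeholders for the AAM shape and appearance parameters, which in the study would be extracted from registered face video frames.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_frames, n_shape, n_appearance = 200, 10, 30

# Hypothetical AAM parameters per frame: shape (landmark) modes and
# appearance (texture) modes, concatenated into one descriptor.
shape_params = rng.normal(size=(n_frames, n_shape))
appearance_params = rng.normal(size=(n_frames, n_appearance))
X = np.hstack([shape_params, appearance_params])

# Synthetic frame-level ground truth: 1 = pain-related action present.
y = (X[:, 0] + 0.5 * X[:, n_shape] +
     rng.normal(scale=0.3, size=n_frames) > 0).astype(int)

# Linear SVM, as in the study; train on 150 frames, test on the rest.
clf = SVC(kernel="linear").fit(X[:150], y[:150])
accuracy = clf.score(X[150:], y[150:])
```

In the real system the descriptor would come from fitting the AAM to each frame, so the shape block captures non-rigid facial geometry and the appearance block the registered texture.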


Archive | 2007

Investigating Spontaneous Facial Action Recognition through AAM Representations of the Face

Simon Lucey; Ahmed Bilal Ashraf; Jeffrey F. Cohn

The Facial Action Coding System (FACS) [Ekman et al., 2002] is the leading method for measuring facial movement in behavioral science. FACS has been successfully applied to, among other tasks, identifying the differences between simulated and genuine pain, differences between when people are telling the truth versus lying, and differences between suicidal and non-suicidal patients [Ekman and Rosenberg, 2005]. Successfully recognizing facial actions is recognized as one of the “major” hurdles to overcome for successful automated expression recognition. How one should represent the face for effective action unit recognition is the main topic of interest in this chapter. This interest is motivated by the plethora of work in other areas of face analysis, such as face recognition [Zhao et al., 2003], that demonstrates the benefit of representation when performing recognition tasks. It is well understood in the field of statistical pattern recognition [Duda et al., 2001] that, given a fixed classifier and training set, how one represents a pattern can greatly affect recognition performance. The face can be represented in a myriad of ways. Much work in facial action recognition has centered solely on the appearance (i.e., pixel values) of the face given quite a basic alignment (e.g., eyes and nose). In our work we investigate the employment of the Active Appearance Model (AAM) framework [Cootes et al., 2001, Matthews and Baker, 2004] in order to derive effective representations for facial action recognition. Some of the representations we will be employing can be seen in Figure 1. Experiments in this chapter are run across two action unit databases. The Cohn-Kanade FACS-Coded Facial Expression Database [Kanade et al., 2000] is employed to investigate the effect of face representation on posed facial action unit recognition. Posed facial actions are those that have been elicited by asking subjects to deliberately make specific facial actions or expressions.
Facial actions are typically recorded under controlled circumstances that include full-face frontal view, good lighting, constrained head movement and selectivity in terms of the type and magnitude of facial actions. Almost all work in automatic facial expression analysis has used posed image data, and the Cohn-Kanade database may be the most widely used [Tian et al., 2005]. The RU-FACS Spontaneous Expression Database is employed to investigate how these same representations affect spontaneous facial action unit recognition. Spontaneous facial actions are representative of “real-world” facial behavior.


international conference on multimodal interfaces | 2007

The painful face: pain expression recognition using active appearance models

Ahmed Bilal Ashraf; Simon Lucey; Jeffrey F. Cohn; Tsuhan Chen; Zara Ambadar; Kenneth M. Prkachin; Patty Solomon; Barry-John Theobald

Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or not even possible, as in young children or the severely ill. Behavioral scientists have identified reliable and valid facial indicators of pain. Until now they required manual measurement by highly skilled observers. We developed an approach that automatically recognizes acute pain. Adult patients with rotator cuff injury were video-recorded while a physiotherapist manipulated their affected and unaffected shoulder. Skilled observers rated pain expression from the video on a 5-point Likert-type scale. From these ratings, sequences were categorized as no-pain (rating of 0), pain (rating of 3, 4, or 5), and indeterminate (rating of 1 or 2). We explored machine learning approaches for pain-no pain classification. Active Appearance Models (AAM) were used to decouple shape and appearance parameters from the digitized face images. Support vector machines (SVM) were used with several representations from the AAM. Using a leave-one-out procedure, we achieved an equal error rate of 19% (hit rate = 81%) using canonical appearance and shape features. These findings suggest the feasibility of automatic pain detection from video.
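The reported equal error rate is the ROC operating point where false accepts equal false rejects. A minimal sketch (synthetic scores, not the study's data) of how such an EER is read off classifier decision values:

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER: the threshold where the false-positive rate (no-pain scored
    as pain) roughly equals the false-negative rate (pain missed)."""
    gap, eer = np.inf, 1.0
    for t in np.unique(scores):
        pred = scores >= t
        fpr = np.mean(pred[labels == 0])
        fnr = np.mean(~pred[labels == 1])
        if abs(fpr - fnr) < gap:
            gap, eer = abs(fpr - fnr), (fpr + fnr) / 2.0
    return eer

rng = np.random.default_rng(1)
labels = np.array([0] * 100 + [1] * 100)
# Synthetic SVM decision values: pain sequences score higher on average.
scores = np.concatenate([rng.normal(0.0, 1.0, 100),
                         rng.normal(1.8, 1.0, 100)])
eer = equal_error_rate(scores, labels)
```

An EER of 0.19 corresponds to the paper's hit rate of 81% at that balanced operating point.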


computer vision and pattern recognition | 2008

Learning patch correspondences for improved viewpoint invariant face recognition

Ahmed Bilal Ashraf; Simon Lucey; Tsuhan Chen

Variation due to viewpoint is one of the key challenges that stand in the way of a complete solution to the face recognition problem. It is easy to note that local regions of the face change differently in appearance as the viewpoint varies. Recently, patch-based approaches, such as those of Kanade and Yamada, have taken advantage of this effect resulting in improved viewpoint invariant face recognition. In this paper we propose a data-driven extension to their approach, in which we not only model how a face patch varies in appearance, but also how it deforms spatially as the viewpoint varies. We propose a novel alignment strategy which we refer to as “stack flow” that discovers viewpoint induced spatial deformities undergone by a face at the patch level. One can then view the spatial deformation of a patch as the correspondence of that patch between two viewpoints. We present improved identification and verification results to demonstrate the utility of our technique.
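A minimal sketch of the underlying idea of patch correspondence (not the learned "stack flow" alignment itself): treat the offset at which a patch from one view best matches the second view as that patch's correspondence. The images and the deformation here are synthetic.

```python
import numpy as np

def patch_correspondence(img_a, img_b, top_left, size=8, search=4):
    """Return the (row, col) offset in img_b that best matches a patch
    taken from img_a, by exhaustive sum-of-squared-differences search."""
    r, c = top_left
    patch = img_a[r:r + size, c:c + size]
    best_ssd, best_off = np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = img_b[r + dr:r + dr + size, c + dc:c + dc + size]
            ssd = float(np.sum((patch - cand) ** 2))
            if ssd < best_ssd:
                best_ssd, best_off = ssd, (dr, dc)
    return best_off

rng = np.random.default_rng(2)
img_a = rng.normal(size=(32, 32))
# A known "viewpoint" deformation: shift down 2 rows, left 1 column.
img_b = np.roll(img_a, shift=(2, -1), axis=(0, 1))
offset = patch_correspondence(img_a, img_b, top_left=(12, 12))  # (2, -1)
```

Stack flow differs in that it estimates such patch deformations jointly over a training stack of faces rather than per image pair.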


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Reinterpreting the Application of Gabor Filters as a Manipulation of the Margin in Linear Support Vector Machines

Ahmed Bilal Ashraf; Simon Lucey; Tsuhan Chen

Linear filters are ubiquitously used as a preprocessing step for many classification tasks in computer vision. In particular, applying Gabor filters followed by a classification stage, such as a support vector machine (SVM), is now common practice in computer vision applications like face identity and expression recognition. A fundamental problem occurs, however, with respect to the high dimensionality of the concatenated Gabor filter responses in terms of memory requirements and computational efficiency during training and testing. In this paper, we demonstrate how the preprocessing step of applying a bank of linear filters can be reinterpreted as manipulating the type of margin being maximized within the linear SVM. This new interpretation leads to sizable memory and computational advantages with respect to existing approaches. The reinterpreted formulation turns out to be independent of the number of filters, thereby allowing the examination of the feature spaces derived from an arbitrarily large number of linear filters, a hitherto untestable prospect. Further, this new interpretation of filter banks gives new insights, other than the often cited biological motivations, into why the preprocessing of images with filter banks, like Gabor filters, improves classification performance.
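The reinterpretation rests on a linear-algebra identity: a linear SVM's inner product on concatenated filter responses collapses to a single precomputed quadratic form, independent of the number of filters. A numerical sketch with random filters standing in for Gabor filters:

```python
import numpy as np

rng = np.random.default_rng(3)
d, K = 16, 50                          # feature dimension, filter count
filters = rng.normal(size=(K, d, d))   # stand-ins for Gabor filters
x, y = rng.normal(size=d), rng.normal(size=d)

# Naive: inner product over K concatenated filter responses; the
# feature dimension (and memory) grows linearly with K.
concat = sum(float((G @ x) @ (G @ y)) for G in filters)

# Reinterpreted: precompute M = sum_k G_k^T G_k once; each kernel
# evaluation is then independent of the number of filters K.
M = sum(G.T @ G for G in filters)
reinterpreted = float(x @ M @ y)
```

Because `M` is a fixed d-by-d matrix, the cost of training and testing no longer depends on K, which is what makes arbitrarily large filter banks testable.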


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Fourier Lucas-Kanade Algorithm

Simon Lucey; Rajitha Navarathna; Ahmed Bilal Ashraf; Sridha Sridharan

In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).
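The diagonal-weighting claim can be checked numerically. A 1-D sketch (assuming circular convolution; the random filter stands in for one Gabor filter) of the Parseval identity the FLK formulation exploits:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 64
x, y = rng.normal(size=N), rng.normal(size=N)
g = rng.normal(size=N)                 # stand-in for one bank filter

X, Y, G = np.fft.fft(x), np.fft.fft(y), np.fft.fft(g)

# Spatial domain: filter both signals (circular convolution via the
# FFT), then take the sum-of-squared-differences.
gx = np.fft.ifft(G * X).real
gy = np.fft.ifft(G * Y).real
spatial_ssd = float(np.sum((gx - gy) ** 2))

# Fourier domain: the same quantity as a diagonal weighting |G|^2 on
# the transform difference (Parseval); the weighting is precomputable.
fourier_ssd = float(np.sum(np.abs(G) ** 2 * np.abs(X - Y) ** 2) / N)
```

Summing the |G|^2 terms over a whole filter bank yields a single diagonal weighting matrix, which is why the cost becomes invariant to the number of filters.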


IEEE Transactions on Medical Imaging | 2013

A Multichannel Markov Random Field Framework for Tumor Segmentation With an Application to Classification of Gene Expression-Based Breast Cancer Recurrence Risk

Ahmed Bilal Ashraf; Sara Gavenonis; Dania Daye; Carolyn Mies; Mark A. Rosen; Despina Kontos

We present a methodological framework for multichannel Markov random fields (MRFs). We show that conditional independence allows loopy belief propagation to solve a multichannel MRF as a single-channel MRF. We use conditional mutual information to search for features that satisfy the conditional independence assumptions. Using this framework we incorporate kinetic feature maps derived from breast dynamic contrast-enhanced magnetic resonance imaging as observation channels in the MRF for tumor segmentation. Our algorithm based on the multichannel MRF achieves a receiver operating characteristic area under the curve (AUC) of 0.97 for tumor segmentation when using a radiologist's manual delineation as ground truth. A single-channel MRF based on the best feature chosen from the same pool of features as used by the multichannel MRF achieved a lower AUC of 0.89. We also present a comparison against the well-established normalized cuts segmentation algorithm along with commonly used approaches for breast tumor segmentation, including fuzzy C-means (FCM) and the more recent method of running FCM on enhancement variance features (FCM-VES). These previous methods give lower AUCs of 0.92, 0.88, and 0.60, respectively. Finally, we also investigate the role of superior segmentation in feature extraction and tumor characterization. Specifically, we examine the effect of improved segmentation on predicting the probability of breast cancer recurrence as determined by a validated tumor gene expression assay. We demonstrate that a support vector machine classifier trained on kinetic statistics extracted from tumors as segmented by our algorithm gives a significant improvement in distinguishing between women with high and low recurrence risk, giving an AUC of 0.88 as compared to 0.79, 0.76, 0.75, and 0.66 when using normalized cuts, single-channel MRF, FCM, and FCM-VES, respectively, for segmentation.
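The conditional-independence search can be illustrated with a toy discrete example (not the authors' implementation): two observation channels that both depend on the tumor label but are conditionally independent given it show near-zero conditional mutual information, even though their unconditional mutual information is positive.

```python
import numpy as np

def conditional_mutual_information(x, y, z):
    """I(X; Y | Z) in nats for small discrete arrays, from empirical
    joint frequencies."""
    cmi = 0.0
    for zv in np.unique(z):
        mask = z == zv
        pz = float(mask.mean())
        xs, ys = x[mask], y[mask]
        for xv in np.unique(xs):
            for yv in np.unique(ys):
                pxy = float(np.mean((xs == xv) & (ys == yv)))
                px = float(np.mean(xs == xv))
                py = float(np.mean(ys == yv))
                if pxy > 0.0:
                    cmi += pz * pxy * np.log(pxy / (px * py))
    return cmi

rng = np.random.default_rng(5)
n = 5000
label = rng.integers(0, 2, n)          # e.g., tumor vs. background
# Two noisy channels that each follow the label 80% of the time and are
# independent of each other once the label is known.
ch1 = np.where(rng.random(n) < 0.8, label, 1 - label)
ch2 = np.where(rng.random(n) < 0.8, label, 1 - label)

cmi = conditional_mutual_information(ch1, ch2, label)   # near zero
mi = conditional_mutual_information(ch1, ch2, np.zeros(n, dtype=int))
```

Selecting channel pairs with low conditional mutual information given the label is what licenses treating the multichannel MRF as a product of single-channel likelihoods.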


computer vision and pattern recognition | 2010

Fast image alignment in the Fourier domain

Ahmed Bilal Ashraf; Simon Lucey; Tsuhan Chen

In this paper we propose a framework for gradient descent image alignment in the Fourier domain. Specifically, we propose an extension to the classical Lucas & Kanade (LK) algorithm in which we represent the pixel intensities of the source and template images in the complex 2D Fourier domain rather than in the 2D spatial domain. We refer to this approach as the Fourier LK (FLK) algorithm. The FLK formulation is especially advantageous, over traditional LK, when it comes to pre-processing the source and template images with a bank of filters (e.g., Gabor filters) as: (i) it can handle substantial illumination variations, (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, (iii) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and (iv) this approach can be extended to the inverse compositional form of the LK algorithm where nearly all steps (including Fourier transform and filter bank pre-processing) can be pre-computed, leading to an extremely efficient and robust approach to gradient descent image matching. We demonstrate robust image matching performance on a variety of objects in the presence of substantial illumination differences with exactly the same computational overhead as that of traditional inverse compositional LK during fitting.


Translational Oncology | 2015

Breast DCE-MRI Kinetic Heterogeneity Tumor Markers: Preliminary Associations With Neoadjuvant Chemotherapy Response

Ahmed Bilal Ashraf; Bilwaj Gaonkar; Carolyn Mies; Angela DeMichele; Mark A. Rosen; Christos Davatzikos; Despina Kontos

The ability to predict response to neoadjuvant chemotherapy for women diagnosed with breast cancer, either before or early on in treatment, is critical to judicious patient selection and tailoring the treatment regimen. In this paper, we investigate the role of contrast agent kinetic heterogeneity features derived from breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) for predicting treatment response. We propose a set of kinetic statistic descriptors and present preliminary results showing the discriminatory capacity of the proposed descriptors for predicting complete and non-complete responders as assessed from pre-treatment imaging exams. The study population consisted of 15 participants: 8 complete responders and 7 non-complete responders. Using the proposed kinetic features, we trained a leave-one-out logistic regression classifier that achieves an area under the receiver operating characteristic (ROC) curve (AUC) of 0.84. We compare the predictive value of our features against commonly used MRI features, including kinetics of the characteristic kinetic curve (CKC), maximum peak enhancement (MPE), hotspot signal enhancement ratio (SER), and longest tumor diameter, which give lower AUCs of 0.71, 0.66, 0.64, and 0.54, respectively. Our proposed kinetic statistics thus outperform the conventional kinetic descriptors as well as a classifier using a combination of all the conventional descriptors (i.e., CKC, MPE, SER, and longest diameter), which gives an AUC of 0.74. These findings suggest that heterogeneity-based DCE-MRI kinetic statistics could serve as potential imaging biomarkers for tumor characterization and could be used to improve candidate patient selection even before the start of neoadjuvant treatment.
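The evaluation protocol (leave-one-out logistic regression scored by AUC) can be sketched as below, with synthetic stand-ins for the kinetic heterogeneity features rather than the study's DCE-MRI data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(6)
n = 15                                  # the study had 15 participants
y = np.array([0] * 7 + [1] * 8)         # non-complete / complete response
X = rng.normal(size=(n, 4))             # stand-in kinetic descriptors
X[:, 0] += 2.0 * y                      # give one feature real signal

# Leave-one-out: score each participant with a model trained on the rest.
scores = np.empty(n)
for train, test in LeaveOneOut().split(X):
    model = LogisticRegression().fit(X[train], y[train])
    scores[test] = model.predict_proba(X[test])[:, 1]

auc = roc_auc_score(y, scores)
```

Leave-one-out is the natural choice at this sample size, since each held-out prediction still uses 14 of the 15 participants for training.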


IEEE Transactions on Biomedical Engineering | 2015

Pharmacokinetic Tumor Heterogeneity as a Prognostic Biomarker for Classifying Breast Cancer Recurrence Risk.

Majid Mahrooghy; Ahmed Bilal Ashraf; Dania Daye; Elizabeth S. McDonald; Mark A. Rosen; Carolyn Mies; Michael Feldman; Despina Kontos

Goal: Heterogeneity in cancer can affect response to therapy and patient prognosis. Histologic measures have classically been used to measure heterogeneity, although a reliable noninvasive measurement is needed both to establish baseline risk of recurrence and to monitor response to treatment. Here, we propose using spatiotemporal wavelet kinetic features from dynamic contrast-enhanced magnetic resonance imaging to quantify intratumor heterogeneity in breast cancer. Methods: Tumor pixels are first partitioned into homogeneous subregions using pharmacokinetic measures. Heterogeneity wavelet kinetic (HetWave) features are then extracted from these partitions to obtain spatiotemporal patterns of the wavelet coefficients and the contrast agent uptake. The HetWave features are evaluated in terms of their prognostic value using a logistic regression classifier with genetic algorithm wrapper-based feature selection to classify breast cancer recurrence risk as determined by a validated gene expression assay. Results: Receiver operating characteristic analysis and area under the curve (AUC) are computed to assess classifier performance using leave-one-out cross validation. The HetWave features outperform other commonly used features (AUC = 0.88 for HetWave versus 0.70 for standard features). The combination of HetWave and standard features further increases classifier performance (AUC = 0.94). Conclusion: The rate of the spatial frequency pattern over the pharmacokinetic partitions can provide valuable prognostic information. Significance: HetWave could be a powerful feature extraction approach for characterizing tumor heterogeneity, providing valuable prognostic information.
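As an illustration of the wavelet-kinetic idea (a single-level Haar transform on a synthetic uptake curve, not the HetWave implementation): a smooth kinetic curve concentrates its energy in the coarse approximation band, so detail-band energy can index fast spatiotemporal variation.

```python
import numpy as np

def haar_step(signal):
    """One level of the orthonormal Haar transform: pairwise averages
    (coarse trend) and pairwise differences (local variation)."""
    pairs = signal.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return approx, detail

t = np.linspace(0.0, 1.0, 16)
uptake = 1.0 - np.exp(-4.0 * t)        # smooth synthetic uptake curve
approx, detail = haar_step(uptake)

# The transform preserves energy; a smooth curve keeps almost all of it
# in the approximation band, so detail energy indexes heterogeneity.
energy_ratio = float(np.sum(detail ** 2) / np.sum(uptake ** 2))
```

Summary statistics of such detail coefficients over the pharmacokinetic partitions are the kind of descriptor the abstract refers to as spatiotemporal wavelet kinetic features.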

Collaboration


Dive into Ahmed Bilal Ashraf's collaborations.

Top Co-Authors

Despina Kontos
University of Pennsylvania

Simon Lucey
Carnegie Mellon University

Carolyn Mies
University of Pennsylvania

Mark A. Rosen
University of Pennsylvania

Dania Daye
University of Pennsylvania

Michael Feldman
University of Pennsylvania

Majid Mahrooghy
Mississippi State University