Publication


Featured research published by Abu Sayeed Md. Sohail.


Archive | 2008

Detection of Facial Feature Points Using Anthropometric Face Model

Abu Sayeed Md. Sohail; Prabir Bhattacharya

This chapter describes an automated technique for detecting the eighteen most important facial feature points using a statistically developed anthropometric face model. Most of the important facial feature points are located around the regions of the mouth, nose, eyes and eyebrows. After carefully observing the structural symmetry of the human face and performing the necessary anthropometric measurements, we have constructed a model that can be used to isolate the above-mentioned facial feature regions. In the proposed model, the distance between the two eye centers serves as the principal parameter of measurement for locating the centers of the other facial feature regions. Hence, our method works by detecting the two eye centers under every possible eye condition and isolating each of the facial feature regions using the proposed anthropometric face model. Combinations of different image processing techniques are then applied within the localized regions to detect the eighteen most important facial feature points. Experimental results show that the developed system detects the eighteen feature points successfully in 90.44% of cases when applied to the test databases.
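
As an illustration of how such a model can be applied in practice, the sketch below locates the other feature regions from the two detected eye centers, with the inter-eye distance as the unit of measurement. The proportionality constants and region names are hypothetical placeholders, not the values of the paper's statistically developed model.

```python
import numpy as np

def locate_feature_regions(left_eye, right_eye, ratios=None):
    """Locate approximate centers of facial feature regions from the two eye
    centers, using the inter-eye distance as the unit of measurement.
    The ratios below are illustrative placeholders, not the paper's values."""
    left_eye, right_eye = np.asarray(left_eye, float), np.asarray(right_eye, float)
    d = np.linalg.norm(right_eye - left_eye)        # principal parameter of measurement
    mid = (left_eye + right_eye) / 2.0              # midpoint of the eye line
    x_axis = (right_eye - left_eye) / d             # unit vector along the eye line
    y_axis = np.array([-x_axis[1], x_axis[0]])      # perpendicular, toward the mouth

    if ratios is None:                              # hypothetical offsets in multiples of d
        ratios = {"nose": 0.6, "mouth": 1.1, "left_eyebrow": -0.3, "right_eyebrow": -0.3}

    return {
        "nose":  mid + ratios["nose"]  * d * y_axis,
        "mouth": mid + ratios["mouth"] * d * y_axis,
        "left_eyebrow":  left_eye  + ratios["left_eyebrow"]  * d * y_axis,
        "right_eyebrow": right_eye + ratios["right_eyebrow"] * d * y_axis,
    }

# Example: eye centers detected at (120, 140) and (200, 142) in (x, y) image coordinates
print(locate_feature_regions((120, 140), (200, 142)))
```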


International Symposium on Biomedical Imaging | 2010

Retrieval and classification of ultrasound images of ovarian cysts combining texture features and histogram moments

Abu Sayeed Md. Sohail; Md. Mahmudur Rahman; Prabir Bhattacharya; Srinivasan Krishnamurthy; Sudhir P. Mudur

This paper presents an effective solution for content-based retrieval and classification of ultrasound medical images representing three types of ovarian cysts: Simple Cyst, Endometrioma, and Teratoma. Our proposed solution comprises the following: extraction of low-level ultrasound image features combining histogram moments with Gray Level Co-Occurrence Matrix (GLCM) based statistical texture descriptors, image retrieval using a similarity model based on Gower's similarity coefficient that measures the relevance between the query image and the target images, and use of a multiclass Support Vector Machine (SVM) for classifying the low-level ultrasound image features into their corresponding high-level categories. The efficiency of this solution for ultrasound medical image retrieval and classification has been evaluated using an in-progress database, presently consisting of 478 ultrasound ovarian images. In retrieving ultrasound images, the proposed solution has demonstrated above 77% and 75% average precision over the first 20 and 40 retrieved results respectively, and an average classification accuracy of 86.90%.
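
A rough sketch of this kind of low-level descriptor is given below, combining the first four histogram moments with a few common GLCM statistics. The quantization level, co-occurrence offset and choice of statistics are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def histogram_moments(img):
    """First four moments of the grey-level distribution."""
    x = img.astype(float).ravel()
    mean, std = x.mean(), x.std()
    skew = ((x - mean) ** 3).mean() / (std ** 3 + 1e-12)
    kurt = ((x - mean) ** 4).mean() / (std ** 4 + 1e-12)
    return np.array([mean, std, skew, kurt])

def glcm_features(img, levels=32, offset=(0, 1)):
    """Normalised grey-level co-occurrence matrix for one offset, followed by a
    few common statistics (contrast, energy, homogeneity, correlation)."""
    q = (img.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    dy, dx = offset
    a = q[max(0, -dy):q.shape[0] - max(0, dy), max(0, -dx):q.shape[1] - max(0, dx)]
    b = q[max(0, dy):, max(0, dx):][:a.shape[0], :a.shape[1]]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)      # count co-occurring grey-level pairs
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    correlation = ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + 1e-12)
    return np.array([contrast, energy, homogeneity, correlation])

def ultrasound_descriptor(img):
    """Combined feature vector: histogram moments plus GLCM texture statistics."""
    return np.concatenate([histogram_moments(img), glcm_features(img)])

img = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in for an ultrasound image
print(ultrasound_descriptor(img))
```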


Computer Vision / Computer Graphics Collaboration Techniques | 2007

Classification of facial expressions using K-nearest neighbor classifier

Abu Sayeed Md. Sohail; Prabir Bhattacharya

In this paper, we present a fully automatic technique for detection and classification of the six basic facial expressions from nearly frontal face images. Facial expressions are communicated by subtle changes in one or more discrete features such as tightening the lips, raising the eyebrows, opening and closing the eyes, or certain combinations of them. These discrete features can be identified by monitoring the changes in muscle movements (Action Units) located around the regions of the mouth, eyes and eyebrows. In this work, we use eleven feature points that represent and identify the principal muscle actions and provide measurements of the discrete features responsible for each of the six basic human emotions. A multi-detector approach to facial feature point localization has been utilized for identifying these points of interest from the contours of facial components such as the eyes, eyebrows and mouth. A feature vector composed of eleven features is then obtained by calculating the degree of displacement of these eleven feature points from a fixed rigid point. Finally, the obtained feature sets are used to train a K-Nearest Neighbor classifier so that it can classify a facial expression presented to it as a feature set. The developed automatic facial expression classifier has been tested on a publicly available facial expression database, and an average successful classification rate of 90.76% has been achieved.
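
The final two stages described above might look roughly as follows, assuming the eleven feature points have already been localized; the rigid reference point, expression labels and training data are hypothetical, and scikit-learn's KNeighborsClassifier stands in for the K-Nearest Neighbor classifier.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def displacement_features(points, rigid_point):
    """Eleven-dimensional feature vector: Euclidean displacement of each of the
    eleven detected feature points from a fixed rigid reference point."""
    points = np.asarray(points, float)          # shape (11, 2): one (x, y) per feature point
    rigid = np.asarray(rigid_point, float)
    return np.linalg.norm(points - rigid, axis=1)

# Hypothetical training data: one row of eleven displacements per face image,
# labelled with one of the six basic expressions.
rng = np.random.default_rng(0)
X_train = rng.random((60, 11))
y_train = rng.choice(["anger", "disgust", "fear", "joy", "sadness", "surprise"], 60)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

# Classify a new face from its eleven localized feature points and the rigid point.
sample_points = rng.random((11, 2)) * 200
sample = displacement_features(sample_points, rigid_point=(100, 100))
print(knn.predict(sample.reshape(1, -1)))
```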


International Journal of Pattern Recognition and Artificial Intelligence | 2011

Classifying facial expressions using level set method based lip contour detection and multi-class Support Vector Machines

Abu Sayeed Md. Sohail; Prabir Bhattacharya

This paper describes a fully automated computer vision system for detection and classification of the seven basic facial expressions using a Multi-Class Support Vector Machine (SVM). Facial expressions are communicated by subtle changes in one or more discrete features such as tightening of the lips, raising the eyebrows, opening and closing of the eyes, or certain combinations of them, which can be identified by monitoring the changes in muscle movements (Action Units) located around the regions of the mouth, eyes and eyebrows. For classifying facial expressions, an analytic representation of the face with 15 feature points has been used that provides visual observation of the discrete features responsible for the seven basic facial expressions. Feature points from the mouth region are detected by segmenting the lip contour using a variational formulation of the level set method. A multi-detector approach to facial feature point detection is utilized for identifying the feature points from the regions of the eyes, eyebrows and nose. Feature vectors composed of 15 features are then obtained with respect to the average representation of the neutral face and are used to train a multiclass SVM classifier. The proposed method has been tested on two different facial expression image databases, and average successful recognition rates of 92.04% and 86.33% have been achieved.
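
A minimal sketch of the classification stage is shown below, assuming the 15 feature points (from the level set lip segmentation and the multi-detector stage) are already available. scikit-learn's SVC, which handles multi-class problems via one-vs-one voting, stands in for the Multi-Class SVM; the expression labels, kernel and parameters are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC

# Assumed label set: the six basic expressions plus neutral.
SEVEN_EXPRESSIONS = ["neutral", "anger", "disgust", "fear", "joy", "sadness", "surprise"]

def expression_features(points, neutral_points):
    """15-dimensional feature vector: displacement of each of the 15 facial feature
    points with respect to the average neutral-face representation."""
    return np.linalg.norm(np.asarray(points, float) - np.asarray(neutral_points, float), axis=1)

# Hypothetical data: 15 (x, y) feature points per face, already detected upstream.
rng = np.random.default_rng(1)
neutral_avg = rng.random((15, 2)) * 200
X = np.stack([expression_features(neutral_avg + rng.normal(0, 5, (15, 2)), neutral_avg)
              for _ in range(140)])
y = rng.choice(SEVEN_EXPRESSIONS, 140)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # SVC does multi-class via one-vs-one
clf.fit(X, y)
print(clf.predict(X[:3]))
```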


Systems, Man and Cybernetics | 2007

Classifying facial expressions using point-based analytic face model and Support Vector Machines

Abu Sayeed Md. Sohail; Prabir Bhattacharya

This paper describes a fully automated method of classifying facial expressions using Support Vector Machines (SVM). Facial expressions are communicated by subtle changes in one or more discrete features such as tightening the lips, raising the eyebrows, opening and closing of the eyes, or certain combinations of them, which can be identified by monitoring the changes in muscle movements located around the regions of the mouth, eyes and eyebrows. In this work, we apply an analytic face model using eleven feature points that represent and identify the principal muscle actions and provide measurements of the discrete features responsible for each of the six basic human emotions. A multi-detector approach to facial feature point localization has been utilized for identifying these points of interest from the contours of facial components such as the eyes, eyebrows and mouth. Feature vectors composed of eleven features are then obtained by calculating the degree of displacement of these eleven feature points from a fixed rigid point. Finally, the obtained feature sets are used to train an SVM classifier so that it can classify a facial expression presented to it as a feature set. The method has been tested on two different publicly available facial expression databases and, on average, successful recognition rates of 89.44% and 84.86% have been achieved.


Iberian Conference on Pattern Recognition and Image Analysis | 2011

Classification of ultrasound medical images using distance based feature selection and fuzzy-SVM

Abu Sayeed Md. Sohail; Prabir Bhattacharya; Sudhir P. Mudur; Srinivasan Krishnamurthy

This paper presents a method of classifying ultrasound medical images that addresses two important aspects: (i) optimal feature subset selection for representing ultrasound medical images and (ii) improvement of classification accuracy by avoiding outliers. An objective function combining the concepts of between-class distance and within-class divergence over the training dataset has been proposed as the evaluation criterion for feature selection. The search for the optimal subset of features has been performed using a Multi-Objective Genetic Algorithm (MOGA). Applying the proposed criterion, a subset of Grey Level Co-occurrence Matrix (GLCM) and Grey Level Run Length Matrix (GLRLM) based statistical texture descriptors has been identified that maximizes separability among the classes of the training dataset. To avoid the impact of noisy data during classification, a Fuzzy Support Vector Machine (FSVM) has been adopted, which reduces the effect of outliers by taking into account the level of significance of each training sample. The proposed approach to ultrasound medical image classification has been tested using a database of 679 ultrasound ovarian images, and an average classification accuracy of 89.60% has been achieved.
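
A sketch of the kind of evaluation criterion described above is given below, using standard scatter-based definitions of between-class distance and within-class divergence as stand-ins; the paper's exact definitions and the MOGA search itself are not reproduced here.

```python
import numpy as np

def class_separability(X, y):
    """Two criteria evaluated on a candidate feature subset X (n_samples x n_features):
    between-class distance (to be maximised) and within-class divergence (to be minimised).
    Standard scatter-based stand-ins; the paper's exact definitions may differ."""
    X, y = np.asarray(X, float), np.asarray(y)
    overall_mean = X.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        between += len(Xc) * np.sum((mu_c - overall_mean) ** 2)   # between-class scatter
        within += np.sum((Xc - mu_c) ** 2)                        # within-class scatter
    return between / len(X), within / len(X)

def evaluate_subset(mask, X, y):
    """Objective vector for a binary feature mask, as a multi-objective GA would see it
    (maximise the first component, minimise the second)."""
    if not mask.any():
        return (0.0, np.inf)
    return class_separability(np.asarray(X)[:, mask], y)

# Hypothetical usage on placeholder data and an arbitrary candidate mask.
X = np.random.rand(120, 10)
y = np.random.choice(["simple_cyst", "endometrioma", "teratoma"], 120)
print(evaluate_subset(np.array([True] * 4 + [False] * 6), X, y))
```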


International Conference on Image Analysis and Processing | 2007

Automated Lip Contour Detection Using the Level Set Segmentation Method

Abu Sayeed Md. Sohail; Prabir Bhattacharya

This paper describes a fully automated technique for detecting lip contours from static face images. Face detection is performed first on the input image using a variation of the AdaBoost classifier trained with Haar-like features extracted from the face. A second trained classifier is applied over this extracted face region to isolate the mouth section. Lip contour detection is then performed on this isolated mouth region using the level set method of image segmentation. A new variational formulation of the level set method, proposed by Li et al. (CVPR 2005), has been applied here; it forces the level set function to remain close to a signed distance function and therefore completely eliminates the need for the costly re-initialization procedure. The proposed method has been tested on three different face databases containing images of both neutral faces and facial expressions, and a maximum successful lip contour detection rate of 91.08% has been achieved.
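
For context, the energy functional minimised in that variational formulation (restated here from the cited Li et al. paper, up to notation) augments the usual edge-based terms with an internal penalty that keeps |∇φ| near 1, i.e. keeps the level set function close to a signed distance function:

```latex
% Variational level set energy of Li et al. (CVPR 2005), up to notation.
% The first (penalty) term keeps \phi close to a signed distance function
% (|\nabla\phi| = 1), which is what removes the re-initialization step;
% g is an edge indicator, \delta the Dirac delta, H the Heaviside function,
% and \mu, \lambda, \nu are weighting parameters.
\mathcal{E}(\phi) =
  \mu \int_{\Omega} \tfrac{1}{2}\bigl(|\nabla\phi| - 1\bigr)^{2}\, d\mathbf{x}
+ \lambda \int_{\Omega} g\,\delta(\phi)\,|\nabla\phi|\, d\mathbf{x}
+ \nu \int_{\Omega} g\, H(-\phi)\, d\mathbf{x}
```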


Canadian Conference on Electrical and Computer Engineering | 2011

Local relative GLRLM-based texture feature extraction for classifying ultrasound medical images

Abu Sayeed Md. Sohail; Prabir Bhattacharya; Sudhir P. Mudur; Srinivasan Krishnamurthy

This paper presents a new approach to extracting local relative texture features from ultrasound medical images using Gray Level Run Length Matrix (GLRLM) based global features. To adapt the traditional global approach of GLRLM-based feature extraction, a three-level partitioning of images has been proposed that enables the capture of local features in terms of global image properties. Local relative features are then calculated as the absolute difference between the global features of each lower-layer partition sub-block and those of its corresponding upper-layer partition block. The performance of the proposed local relative feature extraction method has been verified by applying it to the classification of ultrasound medical images of ovarian abnormalities. In addition, a significant improvement in image classification performance has been observed in comparison with the traditional GLRLM-based feature extraction method.
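
The partitioning idea can be sketched as follows: the image is split into upper-layer blocks and each block into lower-layer sub-blocks, and the local relative feature is the absolute difference between a sub-block's global descriptor and that of its parent block. The grid size and the toy descriptor below are illustrative placeholders for the paper's GLRLM statistics and exact three-level layout.

```python
import numpy as np

def block_partition(img, n):
    """Split an image into an n x n grid of equal sub-blocks (edges truncated)."""
    h, w = (img.shape[0] // n) * n, (img.shape[1] // n) * n
    return [img[r:r + h // n, c:c + w // n]
            for r in range(0, h, h // n) for c in range(0, w, w // n)]

def local_relative_features(img, global_feature, n=2):
    """Local relative texture features: the absolute difference between the global
    feature of each lower-layer sub-block and that of its parent upper-layer block.
    `global_feature` is any function returning a fixed-length descriptor
    (e.g. GLRLM run-length statistics)."""
    feats = []
    for parent in block_partition(img, n):            # upper-layer partition
        f_parent = np.asarray(global_feature(parent))
        for child in block_partition(parent, n):      # corresponding lower-layer partition
            feats.append(np.abs(np.asarray(global_feature(child)) - f_parent))
    return np.concatenate(feats)

# Toy stand-in for a GLRLM-based global descriptor (mean and standard deviation).
toy_descriptor = lambda block: [block.mean(), block.std()]
img = (np.random.rand(128, 128) * 255).astype(np.uint8)
print(local_relative_features(img, toy_descriptor).shape)
```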


Artificial Neural Networks in Pattern Recognition | 2010

Content-based retrieval and classification of ultrasound medical images of ovarian cysts

Abu Sayeed Md. Sohail; Prabir Bhattacharya; Sudhir P. Mudur; Srinivasan Krishnamurthy; Lucy Gilbert

This paper presents a combined method of content-based retrieval and classification of ultrasound medical images representing three types of ovarian cysts: Simple Cyst, Endometrioma, and Teratoma. A combination of histogram moments and Gray Level Co-Occurrence Matrix (GLCM) based statistical texture descriptors has been proposed as the features for retrieving and classifying ultrasound images. For image retrieval, the relevance between the query image and the target images has been measured using a similarity model based on Gower's similarity coefficient. Image classification has been performed using the Fuzzy k-Nearest Neighbour (k-NN) classification technique. A database of 478 ultrasound ovarian images has been used to verify the retrieval and classification accuracy of the proposed system. In retrieving ultrasound images, the proposed method has demonstrated above 79% and 75% average precision over the first 20 and 40 retrieved images respectively. Further, an average classification accuracy of 88.12% has been achieved in classifying ultrasound images using the proposed method.
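
The retrieval step can be sketched as below, using Gower's similarity coefficient restricted to continuous features (per-feature similarity 1 - |difference| / range, averaged over features). The descriptor dimensionality and database are placeholders, and the exact form used in the paper may include weighting not shown here.

```python
import numpy as np

def gower_similarity(query, targets):
    """Gower's similarity coefficient for continuous features: per-feature similarity
    1 - |x_q - x_t| / R_k (R_k = feature range over the database), averaged over all
    features. Returns one score per target image."""
    targets, query = np.asarray(targets, float), np.asarray(query, float)
    ranges = targets.max(axis=0) - targets.min(axis=0)
    ranges[ranges == 0] = 1.0                          # avoid division by zero
    per_feature = 1.0 - np.abs(targets - query) / ranges
    return per_feature.mean(axis=1)

def retrieve(query, database, k=20):
    """Rank database images by similarity to the query and return the top-k indices."""
    scores = gower_similarity(query, database)
    return np.argsort(scores)[::-1][:k]

# Hypothetical database of descriptor vectors (e.g. histogram moments + GLCM features).
db = np.random.rand(478, 8)
print(retrieve(db[0], db, k=5))
```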


International Symposium on Biomedical Imaging | 2011

Selection of optimal texture descriptors for retrieving ultrasound medical images

Abu Sayeed Md. Sohail; Prabir Bhattacharya; Sudhir P. Mudur; Srinivasan Krishnamurthy

Although feature selection has proven very effective in machine learning and pattern classification applications, it has not been widely practiced in the area of image annotation and retrieval. This paper presents a method of selecting a near-optimal to optimal subset of statistical texture descriptors for efficient representation and retrieval of ultrasound medical images. An objective function combining the concepts of between-class distance and within-class divergence over the training dataset has been proposed as the evaluation criterion of optimality. The search for the optimal subset of image descriptors has been performed using a Multi-Objective Genetic Algorithm (MOGA). The proposed feature selection based approach to image annotation and retrieval has been tested using a database of 679 ultrasound ovarian images, and satisfactory retrieval performance has been achieved. In addition, the performance of ultrasound medical image retrieval with and without the feature selection based image annotation technique has been compared.
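
A minimal sketch of the kind of retrieval comparison mentioned at the end is shown below: leave-one-out precision at k computed once on the full descriptor and once on a selected subset. The Euclidean ranking, feature mask and data are placeholders standing in for the paper's similarity-based retrieval and the MOGA-selected subset.

```python
import numpy as np

def precision_at_k(retrieved_labels, query_label, k):
    """Fraction of the first k retrieved images that share the query's class."""
    return np.mean(np.asarray(retrieved_labels[:k]) == query_label)

def average_precision_at_k(features, labels, k=20):
    """Leave-one-out average precision at k over the database, ranking by Euclidean
    distance in the (possibly feature-selected) descriptor space."""
    features, labels = np.asarray(features, float), np.asarray(labels)
    precisions = []
    for i in range(len(features)):
        d = np.linalg.norm(features - features[i], axis=1)
        order = np.argsort(d)[1:]                      # drop the query itself
        precisions.append(precision_at_k(labels[order], labels[i], k))
    return float(np.mean(precisions))

# Hypothetical comparison: full descriptor vs. a selected subset (mask is illustrative).
X = np.random.rand(200, 24)
y = np.random.choice(["simple_cyst", "endometrioma", "teratoma"], 200)
selected = np.zeros(24, bool)
selected[[0, 3, 5, 8, 13, 21]] = True
print(average_precision_at_k(X, y, k=20), average_precision_at_k(X[:, selected], y, k=20))
```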

Collaboration


Dive into Abu Sayeed Md. Sohail's collaborations.

Top Co-Authors

Srinivasan Krishnamurthy
McGill University Health Centre

Md. Mahmudur Rahman
National Institutes of Health

Lucy Gilbert
McGill University Health Centre