Moula Husain
B.V.B. College of Engineering and Technology
Publications
Featured research published by Moula Husain.
Computer Vision and Pattern Recognition | 2013
Shankar Setty; Moula Husain; Parisa Beham; Jyothi Gudavalli; Menaka Kandasamy; Radhesyam Vaddi; Vidyagouri Hemadri; J C Karure; Raja Raju; B Rajan; Vijay Kumar; C. V. Jawahar
Recognizing human faces in the wild is emerging as a critically important and technically challenging computer vision problem. With a few notable exceptions, most previous work over the last several decades has focused on recognizing faces captured in a laboratory setting. However, with the introduction of databases such as LFW and Pubfigs, the face recognition community is gradually shifting its focus to much more challenging unconstrained settings. Since its introduction, the LFW verification benchmark has attracted considerable attention, with various researchers contributing toward state-of-the-art results. To further boost unconstrained face recognition research, we introduce the more challenging Indian Movie Face Database (IMFDB), which has much more variability than LFW and Pubfigs. The database consists of 34512 faces of 100 known actors collected from approximately 103 Indian movies. Unlike LFW and Pubfigs, which used face detectors to automatically detect faces from web collections, the faces in IMFDB were selected manually from all the movies. Manual selection of faces from movies resulted in a high degree of variability (in scale, pose, expression, illumination, age, occlusion, and makeup) of the kind one sees in the natural world. IMFDB is the first face database to provide detailed annotation of age, pose, gender, expression, and amount of occlusion for each face, which may help other face-related applications.
IEEE International Advance Computing Conference | 2015
Rohit Kumar; Sneha Manjunath Naik; Vani D Naik; Smita Shiralli; Sunil V.G; Moula Husain
Search engine advertising is today a prominent component of the Web. Choosing an appropriate, relevant ad for a particular query, and positioning it well, critically affects the probability that it is noticed and clicked; it also strategically affects the revenue the search engine generates from that ad. Needless to say, showing the user an ad relevant to his or her need greatly improves user satisfaction. For all these reasons, it is of utmost importance to correctly estimate the click-through rate (CTR) of ads in a system. For frequently appearing ads, CTR can be measured empirically, but for new ads other means must be devised. In this paper we propose a model that predicts the CTR of advertisements, adopting logistic regression as the framework for representing conditional dependencies among variables. Logistic regression is a probabilistic statistical classification model that predicts a binary response from one or more predictor variables. Advertisements with the highest predicted probability of being clicked are then selected using this supervised machine learning algorithm. We tested the logistic regression algorithm on one week of advertisement data, around 25 GB in size, with position and impression count as predictor variables. Using this model we achieved around 90% accuracy for CTR estimation.
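As a rough sketch of the approach described above, the snippet below fits a logistic regression model that estimates click probability from ad position and impression count. The synthetic data, feature choices, and scikit-learn usage are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: logistic-regression CTR estimation with position and
# impressions as predictors. The synthetic log data is an assumption;
# the paper used one week of real advertisement logs (~25 GB).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 10_000
position = rng.integers(1, 9, size=n)        # ad slot on the page, 1..8
impressions = rng.integers(1, 1000, size=n)  # times the ad was shown
# Synthetic ground truth: ads in higher slots are clicked more often.
p_click = 1.0 / (1.0 + np.exp(0.6 * position - 2.0))
clicked = (rng.random(n) < p_click).astype(int)

X = np.column_stack([position, impressions])
X_train, X_test, y_train, y_test = train_test_split(
    X, clicked, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
# predict_proba gives the estimated click probability, i.e. the CTR.
ctr_estimates = model.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```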
2013 IEEE International Conference in MOOC, Innovation and Technology in Education (MITE) | 2013
Moula Husain; Neha Tarannum; Nirmala S. Patil
Some of the major challenges in teaching a programming course as an elective are the time spent on syntax coverage, the lack of hands-on or practical sessions, and the small number of credits allotted to the course. It is extremely hard for any programming elective to attain its course outcomes because of the huge variation in students' backgrounds, the constraint of few allotted credits, and traditional classroom teaching. A solution to this dilemma is hands-on tutorial sessions that transform and enhance traditional classroom teaching. We proposed and refined the curriculum, and tested it on a first elective course in programming languages. In this paper we present the effectiveness of teaching a programming elective by introducing "Hands-on Sessions" and redesigning our curriculum to meet stakeholders' requirements. Our results indicate that incorporating "Hands-on Sessions" into a programming elective is an effective way to attain the learning outcomes of the course. In addition, the new teaching methodology enhances students' problem-solving skills and improves their self-learning ability.
International Conference on Informatics and Analytics | 2016
Ashwini Kinnikar; Moula Husain; S. M. Meena
Face recognition involves extracting features of a human face and classifying them to discriminate one person from another. It is a challenging task because faces are complex, multidimensional visual stimuli, and the recognition rate depends on variations in pose, expression, occlusion, resolution, and illumination. Most existing face recognition algorithms perform poorly in the presence of large variations in face images. Hence we propose a deep learning approach that uses Gabor filters for feature extraction and a convolutional neural network for classification, to improve recognition performance under these variations. Experiments conducted on the AT&T database attained an accuracy of 89.50%, a 2.5% improvement over results reported in the literature. Future work will evaluate the proposed algorithm on other datasets with varying illumination.
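A minimal sketch of such a pipeline is shown below: Gabor responses at several orientations are stacked as input channels for a small convolutional network. The filter settings, network shape, and toy data are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch: Gabor filter bank for feature extraction + small CNN classifier.
import numpy as np
import cv2
from tensorflow.keras import layers, models

def gabor_channels(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter a grayscale image with Gabor kernels at four orientations."""
    responses = []
    for theta in thetas:
        kern = cv2.getGaborKernel((31, 31), 4.0, theta, 10.0, 0.5, 0)
        responses.append(cv2.filter2D(img, cv2.CV_32F, kern))
    return np.stack(responses, axis=-1)  # H x W x 4 feature volume

# Toy stand-in for AT&T-style face crops: 40 subjects, 112x92 grayscale.
faces = np.random.rand(40, 112, 92).astype(np.float32)
labels = np.arange(40)
X = np.stack([gabor_channels(f) for f in faces])

model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=X.shape[1:]),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(40, activation="softmax"),  # one class per subject
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, labels, epochs=1, verbose=0)
```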
IEEE International Advance Computing Conference | 2015
Moula Husain; Meena S M; Akash Sabarad; Harish Hebballi; Shiddu Nagaralli; Sonal Shetty
In recent years, online lecture videos have become a significant pedagogical tool for both course instructors and students. Text present in a lecture video is an important modality for retrieving videos, as it is closely related to their content. In this paper, we present a distributed system that counts occurrences of each textual word in video frames using the Apache Hadoop framework. Since Hadoop is suited to batch processing and image processing is highly concurrent, the batch operation of reading text and counting word occurrences can be implemented with the MapReduce framework. We tested the text recognition and word count algorithms on Hadoop clusters of 1, 5, and 10 nodes, and compared the performance of multi-node clusters with a single-node machine. On a dataset of around 3 GB of lecture video frames, Hadoop with a 10-node cluster executes 5 times faster than a single-node system. Our results demonstrate the advantage of using Hadoop to improve the computational speed of image and video processing applications.
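The word-count stage maps naturally onto Hadoop Streaming, as sketched below in a mapper/reducer pair. The OCR engine (pytesseract here) is an assumption; the paper does not name its text-recognition component.

```python
# mapper.py -- for each input line (a path to a video frame), OCR the
# frame and emit "word<TAB>1" per recognized word. Run under Hadoop
# Streaming alongside the reducer below.
import sys
import pytesseract
from PIL import Image

for line in sys.stdin:
    path = line.strip()
    if not path:
        continue
    text = pytesseract.image_to_string(Image.open(path))
    for word in text.split():
        print(f"{word.lower()}\t1")
```

```python
# reducer.py -- sum the counts for each word (Hadoop sorts mapper
# output by key before it reaches the reducer).
import sys

current, count = None, 0
for line in sys.stdin:
    word, n = line.rstrip("\n").split("\t")
    if word != current:
        if current is not None:
            print(f"{current}\t{count}")
        current, count = word, 0
    count += int(n)
if current is not None:
    print(f"{current}\t{count}")
```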
Archive | 2016
Moula Husain; S. M. Meena; Manjunath K. Gonal
In recent years, speech-based computer interaction has become one of the most challenging and demanded applications in the field of human-computer interaction. Speech-based interaction offers a more natural way to interact with computers and does not require special training. In this paper, we build a speech-based arithmetic calculator using Mel-frequency cepstral coefficients (MFCCs) and Gaussian mixture models (GMMs). The system receives an arithmetic expression in the form of isolated spoken command words. Acoustic features, namely MFCCs, are extracted from these speech commands and used to train a Gaussian mixture model. The model produced by iterative training predicts each input speech command as either a digit or an operator. After the operators and digits are recognized, the arithmetic expression is evaluated and the result is converted into an audio wave. Our system was tested on a speech database consisting of the single digits (0-9) and 5 basic arithmetic operators (+, -, ×, / and %). The recognition accuracy of the system is around 86%. Our speech-based HCI system offers the benefit of interacting with machines through multiple modalities, and it can assist visually impaired and physically challenged people.
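The recognizer can be sketched as one Gaussian mixture per command word, scored by log-likelihood, as below. The file layout, sample rate, and mixture size are assumptions for illustration.

```python
# Sketch: MFCC features + one GMM per spoken command word.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path):
    """Frame-level MFCCs for one utterance (frames x coefficients)."""
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

def train_models(files_per_command):
    """files_per_command: dict mapping a command word ("three",
    "plus", ...) to a list of training wav paths for that word."""
    models = {}
    for word, paths in files_per_command.items():
        feats = np.vstack([mfcc_features(p) for p in paths])
        models[word] = GaussianMixture(n_components=8,
                                       covariance_type="diag").fit(feats)
    return models

def recognize(path, models):
    """Return the command whose GMM gives the highest average
    log-likelihood over the utterance's MFCC frames."""
    feats = mfcc_features(path)
    return max(models, key=lambda w: models[w].score(feats))
```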
International Symposium on Women in Computing and Informatics | 2015
Sonal Shetty; Akash Sabarad; Harish Hebballi; Moula Husain; S. M. Meena; Shiddu Nagaralli
In recent years, content-based audio indexing has become a key research area, as the audio track describes the content precisely while having comparatively low data density. In this paper, we present the conversion of audiobooks into text using the CMU SPHINX-4 speech transcriber and the efficient indexing of audiobooks using term frequency-inverse document frequency (tf-idf) weights on the Apache Hadoop MapReduce framework. In the first phase, audiobook datasets are converted into text by the CMU SPHINX-4 speech recognizer trained with acoustic models. In the next phase, keywords in the text produced by the speech recognizer are filtered using tf-idf weights. Finally, we index the audio files by the keywords extracted from the transcribed text. As speech-to-text conversion and audio indexing are space- and time-intensive tasks, we ported these algorithms to the Hadoop MapReduce framework, which yielded considerable improvement in time and space utilization. As the amount of data being uploaded and downloaded escalates, this approach can be extended to the indexing of images, videos, and other multimedia.
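The indexing stage can be sketched with scikit-learn's tf-idf implementation: the transcript of each audiobook is scored, and its highest-weighted terms become index keywords. The toy transcripts and the top-3 cutoff are assumptions.

```python
# Sketch: tf-idf keyword extraction and a simple inverted index.
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-ins for the text produced by the speech recognizer.
transcripts = {
    "book_a.wav": "whale ocean captain ship voyage whale ocean",
    "book_b.wav": "garden spring flower garden rain flower",
}

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(transcripts.values())
terms = vectorizer.get_feature_names_out()

# Inverted index: keyword -> audiobooks where it is a top-3 tf-idf term.
index = {}
for row, name in zip(tfidf.toarray(), transcripts):
    for i in row.argsort()[::-1][:3]:
        if row[i] > 0:
            index.setdefault(terms[i], []).append(name)
print(index)
```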
International Conference on Computing Communication Control and Automation | 2015
Akash Sabarad; Mohamed Humair Kankudti; S. M. Meena; Moula Husain
Multimedia data is expanding exponentially. The rapid growth of technology, combined with affordable storage and computing capabilities, has led to an explosion in the availability and applications of multimedia, most of it in the form of images and videos. Today a large amount of image data is produced by digital cameras, mobile phones, and other sources. Processing such large collections involves highly complex and repetitive operations over a large database, raising the challenges of optimizing query time and storage capacity. Many image processing and computer vision algorithms apply to large-scale data tasks, and it is often desirable to run them on datasets (e.g., larger than 1 TB) that exceed the computational power of a single computer. To handle such data, we propose executing time- and space-intensive computer vision algorithms on a distributed computing platform using the Apache Hadoop framework. Hadoop works on a divide-and-conquer strategy: the task of extracting color and texture features is divided and assigned to multiple nodes of the Hadoop cluster. A significant speedup in computation time and efficient memory utilization can be achieved by exploiting the parallel nature of Hadoop. A further advantage is that Hadoop is highly economical, as the whole framework can run on existing commodity machines; moreover, the system is highly fault tolerant and less vulnerable to node failures.
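A per-image map step for this pipeline might look like the sketch below: a color histogram plus a simple texture statistic per image, with the distribution of work across nodes left to Hadoop (e.g., via Hadoop Streaming). The specific descriptors are assumptions; the abstract does not fix them.

```python
# Sketch: color + texture feature extraction for one image, the unit of
# work that would be farmed out to Hadoop cluster nodes.
import cv2
import numpy as np

def extract_features(path):
    img = cv2.imread(path)
    # Color: 8x8x8 histogram over the BGR channels, L1-normalized.
    hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256]).flatten()
    hist /= hist.sum()
    # Texture: variance of the Laplacian on the grayscale image.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    texture = cv2.Laplacian(gray, cv2.CV_64F).var()
    return np.append(hist, texture)
```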
Journal of Engineering Education Transformations | 2015
Moula Husain; Somashekhar Patil; B. Indira; S. M. Meena; D. G. Narayan
Journal of Engineering Education Transformations | 2016
Moula Husain; Somashekar Patil; Pooja Shettar; Anand S. Meti; Indira Bidari