Nisar Hundewale
Taif University
Publications
Featured research published by Nisar Hundewale.
ACITY (3) | 2013
Syeda Erfana Zohora; A.M. Khan; Nisar Hundewale
Electronic noses utilize an array of chemical sensors of different specificities that respond to the volatile organic compounds present in gases. An array of electronic chemical sensors with coupled signal conditioning and an appropriate pattern-recognition system is capable of identifying complex odours; such an artificial gas-sensing system is called an 'electronic nose'. The requirement for the sensors in an electronic nose is partial sensitivity, i.e. that they respond broadly to a range or class of gases rather than to a specific one; the electronic nose can then categorize odours that contain many chemical components. The gas sensors in the array include metal-oxide semiconductors, optical and amperometric gas sensors, surface acoustic wave sensors, and piezoelectric gas sensors. In this review paper, we discuss the operating principle of each chemical sensor type and its use in electronic nose systems.
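A minimal sketch of the array-plus-pattern-recognition pipeline the review describes. The sensor count, odour classes, and simulated responses are hypothetical, and the classifier is one arbitrary choice; the paper surveys sensor types rather than prescribing a specific pipeline.

```python
# Sketch: classifying odours from a simulated gas-sensor array.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_sensors = 8                      # e.g. a mixed metal-oxide / piezoelectric array

def fake_response(gas_id, n=50):
    """Each gas shifts the baseline of every partially selective sensor."""
    return rng.normal(loc=gas_id, scale=0.3, size=(n, n_sensors))

X = np.vstack([fake_response(g) for g in range(3)])    # 3 odour classes
y = np.repeat(np.arange(3), 50)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)    # the pattern-recognition stage
print(clf.predict(fake_response(1, n=1)))              # -> [1]
```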
international conference on multimedia computing and systems | 2012
Mohd. Junedul Haque; Khalid W. Magld; Nisar Hundewale
Intrusion detection is an active and fast-moving security technology. Intrusion detection (ID) is the process of examining the events occurring in a computer system or network and analyzing them for signs of intrusions, defined as attempts to compromise the confidentiality, integrity, or availability of a network, or to bypass its security mechanisms. The focus of this paper is intrusion detection based on data mining. A major problem with Intrusion Detection Systems (IDSs) is that they produce huge volumes of alarms, in which the interesting alarms are mixed with unwanted, non-interesting, and duplicate alarms. The aim of applying data mining is to improve the detection rate and decrease the false-alarm rate. We therefore propose a framework that detects intrusions and then demonstrates the improvement obtained with the k-means clustering algorithm.
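A minimal sketch of the alarm-clustering idea, assuming numeric alarm features; the feature set below (duration, bytes, failed logins) is a hypothetical stand-in, as the abstract does not fix one.

```python
# Sketch: k-means separates a small suspicious alarm cluster from bulk traffic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# columns: duration, bytes sent, failed logins (illustrative features)
normal  = rng.normal([1.0, 500, 0], [0.5, 100, 0.2], size=(200, 3))
attacks = rng.normal([8.0, 50, 6], [1.0, 20, 1.0], size=(20, 3))
X = StandardScaler().fit_transform(np.vstack([normal, attacks]))

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# the smaller cluster is flagged as suspicious; duplicates fall together
labels, counts = np.unique(km.labels_, return_counts=True)
suspicious = labels[np.argmin(counts)]
print(f"{counts.min()} alarms flagged in cluster {suspicious}")
```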
international conference on multimedia computing and systems | 2012
Sultan Aljahdali; Aasif Ansari; Nisar Hundewale
The term CBIR describes the process of retrieving desired images from a large collection on the basis of features (such as color, texture, and shape) that can be extracted from the images themselves. In this paper, we propose an image retrieval system based on classification with a Support Vector Machine (SVM), implemented in MATLAB using Gabor-filtered image features. In the proposed system, texture features are obtained by calculating the standard deviation of the Gabor-filtered image. An SVM classifier can be learned from training data of relevant and irrelevant images marked by users; using the classifier, the system can efficiently retrieve more images relevant to the query from the database. The proposed CBIR technique is evaluated on a database of 1000 images spread across 11 categories and on the COIL image database of 1080 images spread across 15 categories. For each proposed CBIR technique, 55 queries (5 per category) are fired at the database and the net average precision and recall are computed. The results show a performance improvement, with higher precision and recall values: the precision-recall crossover point reaches 89% with the SVM technique, compared with approximately 79% for image retrieval using Gabor magnitude without SVM.
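A minimal sketch of the texture-feature stage: the standard deviation of Gabor-filtered images feeds an SVM. The paper's implementation is in MATLAB; this uses Python equivalents, and the filter frequencies and orientations are hypothetical, not the paper's exact filter bank.

```python
# Sketch: Gabor-filter std-dev features + SVM classifier for relevance.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_std_features(img, freqs=(0.1, 0.3), thetas=(0, np.pi / 4, np.pi / 2)):
    """One feature per (frequency, orientation): std-dev of the filtered image."""
    feats = []
    for f in freqs:
        for t in thetas:
            real, _ = gabor(img, frequency=f, theta=t)
            feats.append(real.std())
    return np.array(feats)

rng = np.random.default_rng(2)
imgs = rng.random((40, 32, 32))           # stand-in for database images
y = np.repeat([0, 1], 20)                 # relevant / irrelevant, marked by the user
X = np.array([gabor_std_features(im) for im in imgs])
svm = SVC(kernel="rbf").fit(X, y)         # classifier used to rank retrievals
```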
ieee embs international conference on biomedical and health informatics | 2012
Shameem Fathima; Nisar Hundewale
This paper studies classification methods, comparing SVM and Naïve Bayes analysis as applied to viral-disease medical data mining. The objective of this study is to explore the possibility of applying machine learning techniques such as SVM and the Naïve Bayes algorithm to predict susceptibility to a complex disease, dengue. Both algorithms were chosen for their simplicity and accurate results. The proposed work applies machine learning algorithms to data on an arbovirus that causes frequent recurrent epidemics. In this paper, we discuss the application of machine learning techniques that distinguish dengue from other febrile illnesses in the primary-care setting and predict severe arboviral disease in the population. By investigating an arboviral dataset from one of the largest outbreaks to affect India in recent times, we validate classification performance as a measure of the salience of the discovered associations. The comparison between the methods shows that SVM outperforms Naïve Bayes in dengue disease diagnosis.
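A minimal sketch of the SVM versus Naïve Bayes comparison; the features and labels below are synthetic placeholders for the clinical dengue dataset, which is not public.

```python
# Sketch: train both classifiers on the same split and compare accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10))                   # e.g. fever days, platelet count, ...
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # dengue vs. other febrile illness

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("SVM", SVC()), ("NaiveBayes", GaussianNB())]:
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy {acc:.2f}")
```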
bioinformatics and biomedicine | 2011
Shameem Fathima; Nisar Hundewale
In this paper we present a performance analysis of different data mining techniques for predicting the arboviral disease dengue. The dataset used for the analysis is real data taken from super-specialty hospitals and diagnostic laboratories, where blood samples were collected for diagnostic investigation at study enrolment and again at hospital discharge. The dataset consists of 5000 records with 29 parameters. We investigate two data mining techniques: SVM and the Naïve Bayes classifier. A random forest classifier with its associated Gini feature importance is used to identify small sets of parameters for diagnostic use in clinical practice; this involves obtaining the smallest possible set of symptoms that still achieves decent predictive performance for dengue. We combine both approaches and evaluate the classifiers' performance. The comparison between the methods shows that SVM outperforms Naïve Bayes in dengue disease diagnosis.
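A minimal sketch of the feature-reduction step: a random forest's Gini importances rank the clinical parameters so a small symptom subset can be selected. The 29 parameters are simulated placeholders for the hospital data.

```python
# Sketch: rank 29 parameters by Gini importance, keep the top few.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 29))                    # 29 clinical parameters
y = (X[:, 3] - X[:, 7] > 0).astype(int)            # synthetic label

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:5]
print("most informative parameter indices:", top)  # candidate diagnostic set
```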
ieee international conference on computer science and automation engineering | 2012
Sultan Aljahdali; Syed Naimatullah Hussain; Nisar Hundewale; Azeemsha Thacham Poyil
This paper evaluates a system or system component, by manual or automated means, to verify that the system satisfies a specified set of requirements. Test management and control involves three major activities: test planning, test execution, and defect management, which must be planned and monitored in a structured manner to ensure delivery of a quality product that adheres to all project timelines. The purpose of this paper is to present the methods, procedures, and approach used in verification and validation for test management and control. This includes defining all testing methodologies in detail during test planning and laying down processes and procedures so that the test plan reflects all the activities to be executed during the execution phase. The test execution phase is the actual verification phase, during which test-case execution must be monitored closely to stay within the defined Upper Control Limit (UCL) and Lower Control Limit (LCL). Any deviation from the UCL and LCL during execution requires a revised plan so that the deliverables are not jeopardized. Once all the rules are clearly defined during the planning phase and followed during the execution phase, we can ensure that the delivered product will be of high quality. Managing defects is also a very important phase throughout the testing cycle. Extra attention must be paid to defects fixed during the last few days of the testing cycle in order to maintain the stability of the product; minor defects should not be fixed at the end of the testing cycle unless they are critical or severe.
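A minimal sketch of the UCL/LCL monitoring idea: daily test-case execution counts are checked against control limits derived from the planning baseline. The 3-sigma limits and the numbers below are hypothetical assumptions; the paper does not fix a specific formula.

```python
# Sketch: flag execution days that fall outside planned control limits.
import statistics

planned_daily = [40, 42, 38, 41, 39, 40, 43]       # baseline from test planning
mean = statistics.mean(planned_daily)
sd = statistics.stdev(planned_daily)
ucl, lcl = mean + 3 * sd, mean - 3 * sd            # assumed 3-sigma limits

for day, executed in enumerate([41, 37, 55, 12], start=1):
    if not (lcl <= executed <= ucl):
        print(f"day {day}: {executed} executed -> outside "
              f"[{lcl:.1f}, {ucl:.1f}], revise the plan")
```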
Signal Processing and Information Technology. First International Joint Conference, SPIT 2011 and IPC 2011, Amsterdam, The Netherlands, December 1-2, 2011, Revised Selected Papers | 2011
M. Mahaboob Basha; Towfeeq Fairooz; Nisar Hundewale; K. Vasudeva Reddy; B. Pradeep
CMOS (Complementary Metal Oxide Semiconductor) is a class of integrated circuits used in a range of digital-logic applications, such as microprocessors, microcontrollers, and static RAM, and in analogue applications such as data converters and image sensors. CMOS technology offers several advantages. A main advantage, which makes it the most commonly used technology for digital circuits today, is that it enables small chips with high operating speeds and efficient energy usage. CMOS circuits also draw very little static power most of the time and have a high degree of noise immunity. This paper presents the implementation of an LFSR (Linear Feedback Shift Register) counter using recent CMOS sub-micrometer layout tools. Building on the advantages of CMOS technology, the LFSR counter can set a new trend in cryptography and can be beneficial compared with Gray and binary counters, alongside the variety of other applications an LFSR counter has.
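A minimal software model of a 4-bit Fibonacci LFSR counter (taps from the maximal-length polynomial x^4 + x^3 + 1). The paper implements the counter in CMOS layout; this sketch only illustrates the counting sequence, and the seed is an arbitrary choice.

```python
# Sketch: a 4-bit LFSR cycles through all 15 non-zero states.
def lfsr_sequence(seed=0b1001, taps=(3, 2), width=4):
    state, seen = seed, []
    while state not in seen:
        seen.append(state)
        feedback = 0
        for t in taps:                     # XOR of the tapped bits
            feedback ^= (state >> t) & 1
        # shift left, feed the XOR result back into bit 0
        state = ((state << 1) | feedback) & ((1 << width) - 1)
    return seen

print(lfsr_sequence())   # 15 distinct non-zero states before repeating
```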
ieee embs international conference on biomedical and health informatics | 2012
Gul Shaira Banu; Amjath Fareeth; Nisar Hundewale
Breast cancer is the leading cause of non-preventable cancer death among women. A typical mammogram is an X-ray intensity image whose gray levels show contrasts inside the breast that characterize normal tissue and various calcifications and masses. Analyzing an X-ray mammogram is challenging because cancerous growth resembles other tissue growth, which leads to inaccuracy in identifying the presence of breast cancer. The detection of calcifications in mammograms has therefore received much attention from researchers and public-health practitioners. In this paper, we propose a novel technique that uses the one-dimensional continuous wavelet transform (1D CWT) for feature selection and a support vector machine (SVM) as the classifier. Our experiments achieved excellent classification accuracy (100%), compared against an alternative technique (1D CWT with fuzzy c-means clustering).
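A minimal sketch of the pipeline: a 1D continuous wavelet transform (here a Ricker/Mexican-hat wavelet implemented directly with NumPy) extracts features from a mammogram intensity profile, and an SVM classifies it. The data, wavelet choice, and scales are hypothetical placeholders.

```python
# Sketch: per-scale CWT energy features from 1-D intensity profiles + SVM.
import numpy as np
from sklearn.svm import SVC

def ricker(points, a):
    t = np.arange(points) - points / 2
    return (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2))

def cwt_features(signal, scales=(2, 4, 8, 16)):
    """Energy of the wavelet response at each scale serves as one feature."""
    return np.array([
        np.sum(np.convolve(signal, ricker(64, a), mode="same") ** 2)
        for a in scales
    ])

rng = np.random.default_rng(5)
profiles = rng.random((60, 256))                  # stand-in intensity rows
y = np.repeat([0, 1], 30)                         # normal / calcification
X = np.array([cwt_features(p) for p in profiles])
clf = SVC().fit(X, y)
```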
computational science and engineering | 2011
Amera Almas; Nisar Hundewale
For a large training dataset, the memory required to store the kernel matrix makes solving the QP problems in Support Vector Regression (SVR), which is used to approximate the function, difficult. We therefore propose a slope-based partition algorithm that automatically evolves partitions based on changes in slope. In this paper we improve the performance of function approximation with SVM by preprocessing the dataset with fuzzy c-means (FCM) clustering and slope-based partitioning. Using FCM and slope, we partition the data, find the function approximation for each partition, and aggregate the results. The results indicate that the Root Mean Square Error (RMSE) is reduced on the partitioned data compared to processing the entire dataset, with a performance increase of approximately 40%.
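A minimal sketch of the slope-based partitioning idea: split the data where the (smoothed) local slope changes sign, fit one small SVR per partition, and aggregate. The smoothing window, test function, and minimum partition size are assumptions, not the paper's algorithm in full.

```python
# Sketch: partition by slope-sign change, fit SVR per partition, aggregate.
import numpy as np
from sklearn.svm import SVR

x = np.linspace(0, 4 * np.pi, 400)
y = np.sin(x) + np.random.default_rng(6).normal(0, 0.05, x.size)

smoothed = np.convolve(y, np.ones(15) / 15, mode="same")   # suppress noise
slope = np.gradient(smoothed, x)
cuts = np.where(np.diff(np.sign(slope)) != 0)[0]           # slope-change boundaries
parts = np.split(np.arange(x.size), cuts + 1)

pred = np.empty_like(y)
for idx in parts:                          # one small SVR per partition
    if idx.size < 5:
        pred[idx] = y[idx]                 # tiny partitions passed through
        continue
    m = SVR(kernel="rbf").fit(x[idx, None], y[idx])
    pred[idx] = m.predict(x[idx, None])

rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"RMSE over partitioned fits: {rmse:.3f}")
```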
KMO | 2013
Syed Zakir Ali; P. Nagabhushan; R. Pradeep Kumar; Nisar Hundewale
The growing popularity of Learning Management Systems (LMS), coupled with the variety of rubrics set up by accreditation boards to evaluate Learning, Teaching and Assessment Strategies (LTAS), has compelled many institutions and universities to run all their courses through some form of LMS. This has paved the way to gathering large amounts of data incrementally on a day-to-day basis, making LMS data suitable for incremental learning through data mining techniques; the data mining technique employed in this research is clustering. This paper focuses on the challenges involved in instantaneous knowledge extraction from an environment in which streams of heterogeneous log records are generated every moment. To obtain overall knowledge from such LMS data, we propose a novel approach in which, instead of reprocessing the entire data from the beginning, we process only the most recent chunk of data (the incremental part) and append the knowledge obtained to the knowledge extracted from previous chunks. The results obtained match exactly with the results expected by the teachers handling the modules/subjects.
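A minimal sketch of the incremental idea, using mini-batch k-means as a stand-in for the paper's clustering method: each new chunk of LMS log features updates the existing cluster model rather than re-clustering all history. The features per log record are hypothetical.

```python
# Sketch: cluster knowledge is updated chunk by chunk via partial_fit.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(7)
model = MiniBatchKMeans(n_clusters=3, random_state=0, n_init=3)

for day in range(5):                               # one chunk of log records per day
    chunk = rng.normal(size=(200, 4))              # e.g. logins, views, posts, quiz score
    model.partial_fit(chunk)                       # knowledge appended, not recomputed

print(model.cluster_centers_)                      # running summary of the clusters
```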