Sohail Asghar
COMSATS Institute of Information Technology
Publication
Featured research published by Sohail Asghar.
international conference on emerging technologies | 2009
Umm-e-Habiba; Sohail Asghar
Over the past few decades, decision-making has attracted growing attention in managerial domains because it enables decision makers to arrive at sound decisions. This underlines the importance of improved decision-making processes in today's competitive and dynamic business environment. Multi-criteria decision making (MCDM), a well-known decision-making process, is based on incorporating methods and procedures for handling multiple conflicting criteria into management planning processes, whereas Decision Support Systems (DSS) are considered powerful tools for decision-making. MCDM is widely used in conjunction with DSS by a large number of decision makers in a variety of fields, such as financial analysis, flood risk management, housing evaluation, disaster management and customer relationship management. Apart from the many advantages of the MCDM-DSS architecture, certain issues are also attached to this highly useful decision-making methodology. The main objective of this study is to provide a critical evaluation by reviewing and synthesizing the available literature on the MCDM-DSS architecture. The study offers in-depth knowledge of the strengths and weaknesses of the models and methodologies reviewed during its articulation.
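To make the MCDM idea concrete, the sketch below implements simple additive weighting (SAW), one of the most common MCDM methods; the flood-mitigation alternatives, criteria values and weights are invented for illustration and are not taken from the paper.

```python
# Simple additive weighting (SAW): normalize each criterion to [0, 1],
# then rank alternatives by their weighted score. Cost criteria (lower
# is better) are inverted before weighting.

def saw_rank(alternatives, weights, benefit):
    """alternatives: {name: [criterion values]}; weights sum to 1;
    benefit[i] is True if higher is better for criterion i."""
    names = list(alternatives)
    cols = list(zip(*alternatives.values()))   # values per criterion
    scores = {}
    for name in names:
        score = 0.0
        for i, value in enumerate(alternatives[name]):
            lo, hi = min(cols[i]), max(cols[i])
            norm = (value - lo) / (hi - lo) if hi > lo else 1.0
            if not benefit[i]:                 # cost criterion: invert
                norm = 1.0 - norm
            score += weights[i] * norm
        scores[name] = score
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical flood-mitigation options scored on cost (lower is
# better) and coverage (higher is better).
options = {"levee": [120, 0.8], "relocation": [300, 0.95], "drainage": [90, 0.6]}
ranking = saw_rank(options, weights=[0.5, 0.5], benefit=[False, True])
```

Real MCDM-DSS tools use richer methods (AHP, TOPSIS, outranking), but they share this core loop of normalizing conflicting criteria and aggregating them into a single ranking.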
international conference on information and emerging technologies | 2010
Tariq Ali; Sohail Asghar; Naseer Ahmed Sajid
DBSCAN is a widely used technique for clustering in spatial databases. It requires little prior knowledge of its input parameters, and its major advantages are the identification of arbitrarily shaped clusters and the removal of noise during the clustering process. Despite its popularity, DBSCAN has problems handling large databases, and in the worst case its complexity reaches O(n²). Similarly, DBSCAN cannot produce correct results on data of varied densities. Several variations of DBSCAN have been proposed to extend its applicability to other domains. In this paper we survey important techniques in which the original DBSCAN is modified or enhanced, either to improve its complexity or to improve its results on varied densities. We define criteria and analyse these variations with respect to complexity (time and space) and output relative to the original DBSCAN algorithm. We also compare the variations with one another to identify the most efficient algorithm. In most of the variations, partitioning and hybrid methodologies are employed to address DBSCAN's problems. We conclude with the variations that perform better than the others against the defined criteria (objectives).
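As background for the survey, a minimal textbook DBSCAN in pure Python illustrates the algorithm being analysed; this naive version scans every point for each neighbourhood query, which is exactly the O(n²) worst case the paper discusses. The sample points and parameters are illustrative only.

```python
# Naive DBSCAN: clusters are grown from core points (points with at
# least min_pts neighbours within eps); unreachable points are noise.
from math import dist

def dbscan(points, eps, min_pts):
    """Return a label per point: 0, 1, ... for clusters, -1 for noise."""
    labels = [None] * len(points)
    cluster = -1
    neighbours = lambda i: [j for j in range(len(points))
                            if dist(points[i], points[j]) <= eps]
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:            # not a core point
            labels[i] = -1                  # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:             # noise reachable from a core
                labels[j] = cluster         # point becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbours(j)
            if len(j_seeds) >= min_pts:     # j is core: expand further
                queue.extend(j_seeds)
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)]
labels = dbscan(pts, eps=2.0, min_pts=2)    # two clusters, one noise point
```

The variations the survey covers attack exactly the two weaknesses visible here: the all-pairs neighbourhood query (complexity) and the single global `eps` (varied densities).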
ieee international conference on information management and engineering | 2009
Sohail Asghar; Khalid Iqbal
Data mining has emerged as one of the major research domains in recent decades, aiming to extract implicit and useful knowledge that humans can easily comprehend. Initially, this knowledge extraction was computed and evaluated manually using statistical techniques. Subsequently, semi-automated data mining techniques emerged thanks to advances in technology, including advances in storage, which increased the demand for analysis. Under these conditions, semi-automated techniques became inefficient, so automated data mining techniques were introduced to synthesize knowledge efficiently. In this paper we focus on automated data mining techniques: we critically review the literature on them and highlight their strengths and limitations. The significance of this paper is to provide academia and researchers with a useful resource in the form of a concise review of the literature on automated data mining techniques.
International Journal of Information Technology and Decision Making | 2008
Sohail Asghar; Damminda Alahakoon; Leonid Churilov
The wide variety of disasters and the large number of activities involved have resulted in the demand for separate Decision Support System (DSS) models to manage different requirements. The modular approach to model management is to provide a framework in which to focus multidisciplinary research and model integration. A broader view of our approach is to provide the flexibility to organize and adapt a tailored DSS model (or existing modular subroutines) according to the dynamic needs of a disaster. For this purpose, the existing modular subroutines of DSS models are selected and integrated to produce a dynamic integrated model focussed on a given disaster scenario. In order to facilitate the effective integration of these subroutines, it is necessary to select the appropriate modular subroutine beforehand. Therefore, subroutine selection is an important preliminary step towards model integration in developing Disaster Management Decision Support Systems (DMDSS). The ability to identify a modular subroutine for a problem is an important feature before performing model integration. Generally, decision support needs are combined, and encapsulate different requirements of decision-making in the disaster management area. Categorization of decision support needs can provide the basis for such model selection to facilitate effective and efficient decision-making in disaster management. Therefore, our focus in this paper is on developing a methodology to help identify subroutines from existing DSS models developed for disaster management on the basis of needs categorization. The problems of formulating and executing such modular subroutines are not addressed here, since the focus is on the selection of the modular subroutines from existing DMDSS models on the basis of a proposed needs classification scheme.
networked computing and advanced information management | 2009
Muhammad Usman; Sohail Asghar; Simon Fong
Online Analytical Processing (OLAP) is widely used to visualize complex data for efficient, interactive and meaningful analysis; its power lies in visualizing huge volumes of operational data for interactive analysis. Data mining (DM) techniques, on the other hand, are strong at detecting patterns and mining knowledge from historical data. OLAP and DM are believed to complement each other in analyzing large data sets in decision support systems, and recent research has shown the benefits of combining them. In this paper, we review the coupling of OLAP and data mining in the literature and identify its limitations. We propose a conceptual model that overcomes the existing limitations and provides a way to combine enhanced OLAP with data mining systems. Furthermore, the proposed model offers directions for improving cube construction time and visualization over the data cube.
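The cube construction step whose cost the paper aims to improve can be sketched as aggregating a fact table over every subset of its dimensions (the CUBE operator). The toy fact table and measure below are invented for illustration.

```python
# Toy CUBE operator: for each subset of the dimensions, group the rows
# by that subset and sum the measure. The empty subset gives the grand
# total; full subsets give the finest-grained cells.
from itertools import combinations
from collections import defaultdict

def cube(rows, dims, measure):
    """rows: list of dicts; returns {dim subset: {group key: sum}}."""
    result = {}
    for r in range(len(dims) + 1):
        for subset in combinations(dims, r):
            groups = defaultdict(float)
            for row in rows:
                key = tuple(row[d] for d in subset)
                groups[key] += row[measure]
            result[subset] = dict(groups)
    return result

sales = [
    {"region": "north", "year": 2009, "amount": 10.0},
    {"region": "north", "year": 2010, "amount": 20.0},
    {"region": "south", "year": 2009, "amount": 5.0},
]
c = cube(sales, dims=("region", "year"), measure="amount")
# c[()] holds the grand total; c[("region",)] the per-region totals.
```

Since the number of aggregations doubles with each added dimension, even this tiny sketch makes clear why cube construction time is a target for optimization.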
international conference on digital information management | 2010
Muhammad Usman; Sohail Asghar; Simon Fong
Data mining aims at the extraction of previously unidentified information from large databases. It can be viewed as an automated application of algorithms to discover hidden patterns and to extract knowledge from data. Online Analytical Processing (OLAP) systems, on the other hand, allow exploring and querying huge datasets in an interactive way. These OLAP systems are the predominant front-end tools used in data warehousing environments, and the OLAP systems market has developed rapidly during the last few years. Several works in the past emphasized the integration of OLAP and data mining. More recently, data mining techniques along with OLAP have been applied in decision support applications to analyze large data sets in an efficient manner. However, in order to integrate data mining results with OLAP, the data has to be modeled in a particular type of OLAP schema. An OLAP schema is a collection of database objects, including tables, views, indexes and synonyms. Schema generation has traditionally been a manual task, but in recent years researchers have reported work on automatic schema generation. In this paper, we review the literature on schema generation techniques and highlight the limitations of the existing works. The review reveals that automatic schema generation has never been integrated with data mining. Hence, we propose a model for data mining and automatic generation of three types of schema, namely star, snowflake, and galaxy. A hierarchical clustering technique from data mining is used, and a schema is generated from the clustered data. We have also developed a prototype of the proposed model and validated it via experiments on a real-life data set. The proposed model is significant as it supports both the integration and the automation process.
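The clustering step that precedes schema generation can be sketched with a minimal single-linkage agglomerative (hierarchical) clustering over one-dimensional values; the data and the stopping criterion (a target cluster count) are illustrative, not taken from the paper.

```python
# Single-linkage agglomerative clustering: repeatedly merge the two
# clusters whose closest members are nearest, until k clusters remain.

def single_linkage(values, k):
    """Cluster numeric values bottom-up into k groups."""
    clusters = [[v] for v in sorted(values)]
    while len(clusters) > k:
        # find the pair of clusters with the smallest inter-point distance
        i, j = min(
            ((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
            key=lambda p: min(abs(x - y)
                              for x in clusters[p[0]]
                              for y in clusters[p[1]]),
        )
        clusters[i] += clusters.pop(j)      # merge the closest pair
    return [sorted(c) for c in clusters]

groups = single_linkage([1, 2, 9, 10, 25], k=3)
```

In the paper's pipeline the resulting cluster hierarchy informs which attributes become dimension tables in the generated star, snowflake, or galaxy schema; this sketch shows only the clustering half of that process.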
international conference on electronics, communications, and computers | 2014
Saadia Sultana; Yasir Hafeez Motla; Sohail Asghar; Muhammad Jamal; Romana Azad
CONTEXT - The software industry can be widely seen as a key driver of business improvement and is likely to provide an opportunity for countries to make dramatic improvements in economic growth. The software industry of Pakistan can also play a major role in strengthening the sluggish economy. A well-organized framework suited to the industry's needs helps to engineer quality products within budget and time. OBJECTIVE - To identify significant issues present in the Pakistani software industry that are considered barriers to achieving international standards of development, and to propose a suitable framework integrating agile practices to resolve various management, quality and engineering issues. METHOD - The literature is consulted to highlight various issues of the Pakistani software industry, some existing hybrid models are investigated to evaluate their strengths and weaknesses, and finally a case study and expert review are presented to validate the effectiveness of our proposed hybrid model. RESULTS - The proposed hybrid model provides effective management, engineering, quality assurance, productivity and maintenance practices for developing quality products, which can help the industry compete and achieve the standards of the global software industry. CONCLUSION - The proposed framework contains features of Scrum, which provides good management practices; XP, which offers engineering practices; and DSDM, which focuses on providing a solid basis for initiating a project. An additional technical-writer role for effective documentation also enhances the understandability and maintainability of the software.
international conference on information and emerging technologies | 2010
Naseer Ahmed Sajid; Salman Zafar; Sohail Asghar
In today's age of technology and growing data, making sound decisions in line with customer needs and market trends has become very important for gaining competitive advantage and withstanding competitive pressure. With the huge amount of data on the internet, web data mining has become very significant. Web usage mining helps companies produce useful information about the future of their business by analyzing the usage data from their websites. Sequential patterns allow the collection of web access data for web pages; this usage data provides the paths leading to accessed web pages and their sequences, which can yield valuable information about user behavior and website effectiveness. The focus of this survey is to review the challenges, issues and techniques for finding sequential patterns in the context of web usage mining. Different techniques from the literature have been reviewed, analysed on the basis of the existing literature, and compared against a set of proposed parameters: Graph Structure, Input Data Structure, Input Parameters, Pattern Type, Application, Technique, Method, Algorithm, Scalability and Tool Support. The reviewed techniques fall into three groups: clustering, association, and Markov-model-based techniques. By analyzing and comparing all the techniques against the parameters defined above, we conclude that the two clustering techniques based on Markov models (Link Prediction and Path Analysis Using Markov Chains, and Using Markov Chains for Structural Link Prediction in Adaptive Web Sites), proposed by Sarukkai and Zhu et al. respectively, are the best sequential pattern finding techniques. The analysis also shows that hybrids of different techniques can be used to find sequential patterns in web usage mining data: the strengths and weaknesses of one technique can complement those of another, so it is best to use techniques in combination when accurate patterns are required efficiently.
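The Markov-chain techniques the survey singles out share a simple core: estimate page-to-page transition frequencies from past sessions and predict the most likely next page. The sketch below shows a first-order model; the session data is invented for illustration.

```python
# First-order Markov model over page visits: count transitions across
# sessions, then predict the most frequently observed successor.
from collections import defaultdict

def transition_model(sessions):
    """Count page-to-page transitions across all sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(model, page):
    """Most frequently observed successor of `page`, or None."""
    followers = model[page]
    return max(followers, key=followers.get) if followers else None

sessions = [
    ["home", "products", "cart"],
    ["home", "products", "reviews"],
    ["home", "products", "cart", "checkout"],
]
model = transition_model(sessions)
nxt = predict_next(model, "products")   # "cart" follows twice, "reviews" once
```

The surveyed hybrids layer clustering on top of this idea, e.g. grouping similar sessions first and fitting a separate transition model per cluster.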
international conference hybrid intelligent systems | 2004
Sohail Asghar; A. Alahakoon; Leonid Churilov
Model integration is one of the most important and widely researched areas in model management for decision support systems (DSS). In the disaster management area, independent DSS models handle specific decision-making needs, but it is possible that a combination of these models will be required; hence the need for model integration and for the selection of such models. This paper presents the idea of decision support model integration based on software agents in an interactive disaster management domain. In this environment an automated model agent communicates with the other decision support system models and presents a hybrid decision support system model as a solution. The system starts with minimal information about the user's preferences, which are then elicited and inferred incrementally by analyzing the user's needs and requirements.
Applied Intelligence | 2017
Sidra Ijaz; Faheel A. Hashmi; Sohail Asghar; Masoom Alam
A new Intrusion Detection System (IDS) for network security is proposed, making use of a Vector-Based Genetic Algorithm (VBGA) inspired by evolutionary approaches. The novelty of the algorithm is to represent chromosomes as vectors and training data as matrices. This approach allows multiple pathways for calculating the fitness function, one of which is used and tested here. The proposed method uses the overlap of the matrices with the vector chromosomes for model building. The fitness of the chromosomes is calculated by comparing true and false positives on test data. The algorithm can flexibly train chromosomes either for one particular attack type or to detect the maximum number of attacks. The VBGA has been tested on two datasets (KDD Cup-99 and CTU-13) and gives a high detection rate with low false positives compared to a traditional genetic algorithm. A detailed comparative analysis of the proposed VBGA and the traditional string-based genetic algorithm is given on the basis of accuracy and false positive rates. The results show that the vector-based genetic algorithm provides a significant improvement in detection rates while keeping false positives to a minimum.
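A toy genetic algorithm with vector chromosomes conveys the spirit of the approach: here a chromosome is a threshold vector over connection features, and fitness rewards true positives and penalises false positives on labelled records. Everything below (the feature encoding, classification rule, and GA parameters) is an invented illustration, not the paper's actual VBGA design.

```python
# Toy vector-chromosome GA for intrusion detection. A chromosome is a
# list of per-feature thresholds; a record is flagged as an attack when
# every feature meets its threshold. Fitness = TP - FP on labelled data.
import random

def classify(chromosome, record):
    """Flag a record as attack if every feature exceeds its threshold."""
    return all(f >= t for f, t in zip(record, chromosome))

def fitness(chromosome, data):
    """data: list of (features, is_attack) pairs."""
    tp = sum(1 for x, y in data if y and classify(chromosome, x))
    fp = sum(1 for x, y in data if not y and classify(chromosome, x))
    return tp - fp

def evolve(data, dim, pop_size=10, generations=5, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, data), reverse=True)
        parents = pop[: pop_size // 2]       # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(dim)           # small Gaussian mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, data))

# Tiny labelled set: attacks have high feature values, normals low.
data = [([0.9, 0.8], True), ([0.7, 0.9], True),
        ([0.1, 0.2], False), ([0.3, 0.1], False)]
best = evolve(data, dim=2)
```

The paper's matrix-overlap fitness computation is richer than this TP-minus-FP count, but the evolutionary loop (selection, crossover, mutation over vector chromosomes) follows the same pattern.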