Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Adeel Anjum is active.

Publication


Featured research published by Adeel Anjum.


The first computers | 2017

BangA: An Efficient and Flexible Generalization-Based Algorithm for Privacy Preserving Data Publication

Adeel Anjum; Guillaume Raschia

Privacy-Preserving Data Publishing (PPDP) has become a critical issue for companies and organizations that release their data. k-Anonymization was proposed as a first generalization model to guard against identity disclosure of individual records in a data set. Point Access Methods (PAMs), however, have not been well studied for the problem of data anonymization. In this article, we propose yet another approximation algorithm for anonymization, coined BangA, that combines useful features from Point Access Methods (PAMs) and clustering. Hence, it achieves the fast computation and scalability of a PAM, together with very high quality thanks to its density-based clustering step. Extensive experiments show the efficiency and effectiveness of our approach. Furthermore, we provide guidelines for extending BangA to achieve a relaxed form of differential privacy, which provides stronger privacy guarantees than traditional privacy definitions.
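BangA itself combines point access methods with density-based clustering; the full algorithm is beyond a short sketch, but the k-anonymity guarantee it builds on is easy to state in code. The following minimal Python sketch (toy records and attribute names are hypothetical, not from the paper) checks whether a generalized table is k-anonymous:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True when every combination of quasi-identifier values is shared
    by at least k records, so no record is unique on those attributes."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

# Toy generalized table: ZIP prefix and age band are the quasi-identifiers.
records = [
    {"zip": "440**", "age": "20-29", "disease": "flu"},
    {"zip": "440**", "age": "20-29", "disease": "asthma"},
    {"zip": "441**", "age": "30-39", "disease": "flu"},
    {"zip": "441**", "age": "30-39", "disease": "diabetes"},
]
print(is_k_anonymous(records, ["zip", "age"], k=2))  # True
print(is_k_anonymous(records, ["zip", "age"], k=3))  # False
```

A generalization algorithm such as BangA searches for the coarsening of the quasi-identifiers that makes this check pass while losing as little information as possible.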


Journal of Network and Computer Applications | 2017

Trustworthy data: A survey, taxonomy and future trends of secure provenance schemes

Faheem Zafar; Abid Khan; Sabah Suhail; Idrees Ahmed; Khizar Hameed; Hayat Mohammad Khan; Farhana Jabeen; Adeel Anjum

Data is a valuable asset for the success of businesses and organizations these days, as it is effectively utilized for decision making, risk assessment, prioritizing goals and performance evaluation. Extreme reliance on data demands quality assurance and trust in processes. Data provenance is information that can be used to reason about the current state of a data object. Provenance can be broadly described as the information that explains where a piece of data came from, how it was derived or created, who was involved in its creation, the manipulations involved, the processes applied, and so on. It consists of the information that had an effect on the data as it evolved to its present state. Provenance has been used widely to establish the authenticity of data and processes. Despite this wide range of uses and applications, provenance poses vexing privacy and integrity challenges. Provenance data is therefore itself critical and must be secured. Over the years, a number of secure provenance schemes have been proposed. This paper aims to enhance the understanding of secure provenance schemes and their associated security issues. In this paper, we discuss why secure provenance is needed, what its essential characteristics are, and what objectives it serves. We describe the lifecycle of secure provenance and highlight how trust is achieved in different domains by its application. First, a detailed taxonomy of existing secure provenance schemes is presented. Then, a comparative analysis of existing secure provenance schemes, which highlights their strengths and weaknesses, is provided. Furthermore, we highlight future trends that the research community should focus on.


Computers & Security | 2017

τ-safety: A privacy model for sequential publication with arbitrary updates

Adeel Anjum; Guillaume Raschia; Marc Gelgon; Abid Khan; Saif Ur Rehman Malik; Naveed Ahmad; Mansoor Ahmed; Sabah Suhail; Masoom Alam

The dissemination of Electronic Health Records (EHRs) can be extremely beneficial for many dimensions of medical research, from patient diagnosis to reliable prescription, from clinical trials to disease surveillance, and from immunization to disease prevention. However, privacy preservation of an anonymized release (shared with medical researchers or other intended recipients) demands a privacy model that must meet three challenges: 1) it should strike a balance between the privacy and the utility of the released dataset; 2) it should preserve individual-based privacy; 3) it should thwart the adversary in the presence of arbitrary updates (i.e., any consistent insert/update/delete sequence) and especially of chainable auxiliary information. The main objective of this work is to propose a privacy model that meets these three criteria. In this work, we propose the τ-safety privacy model for sequential publication, which is able to meet all of the above-mentioned challenges. τ (the events list) refers to the types of operations (e.g., insert, update, delete) that can be performed on an individual's record in any release. The results of our experiments show that the proposed scheme achieves better anonymization quality and query accuracy than m-invariance against τ attacks under external and internal updates.
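The τ-safety model itself is defined over event lists; as rough intuition for the sequential-publication setting, the sketch below (in the spirit of m-invariance, with hypothetical record fields, not the paper's definition) checks one necessary condition: an individual's QI-group must expose the same set of sensitive values in every release it appears in, so joining releases reveals nothing new:

```python
def group_signatures(release):
    """Map each individual's id to the set of sensitive values that
    appear in that individual's QI-group in this release."""
    groups = {}
    for rec in release:
        groups.setdefault(rec["gid"], set()).add(rec["sensitive"])
    return {rec["id"]: frozenset(groups[rec["gid"]]) for rec in release}

def invariant_across(prev, curr):
    """Necessary condition: every individual present in both releases
    keeps the same sensitive-value signature, so joining the two
    releases narrows nothing down."""
    p, c = group_signatures(prev), group_signatures(curr)
    return all(p[i] == c[i] for i in p.keys() & c.keys())

r1 = [{"id": 1, "gid": "A", "sensitive": "flu"},
      {"id": 2, "gid": "A", "sensitive": "asthma"}]
r2 = [{"id": 1, "gid": "B", "sensitive": "flu"},
      {"id": 3, "gid": "B", "sensitive": "asthma"}]
print(invariant_across(r1, r2))  # True: id 1's signature is preserved
```

Handling arbitrary deletes and re-inserts, which m-invariance does not, is exactly the gap the τ-safety model targets.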


Computers & Security | 2018

Secure provenance using an authenticated data structure approach

Fuzel Jamil; Abid Khan; Adeel Anjum; Mansoor Ahmed; Farhana Jabeen; Nadeem Javaid

Data provenance is information used in reasoning about the present state of a data object, providing details such as the inputs used, the transformations it underwent, the entities responsible, and any other information that had an impact on its evolution. With uses including, but not limited to, establishing trust, gauging quality, detecting intrusions and system changes, solving attribution problems, regulatory compliance, and legal proceedings, provenance information needs to be secured. On the other hand, tampered provenance information could lead to erroneous judgments, with serious implications in many situations. The difference in sensitivity levels between provenance and the underlying data, coupled with its DAG (Directed Acyclic Graph) structure, leads to the need for a tailored security model. To date, proposed secure provenance schemes such as the Onion scheme, the PKLC scheme, and the Mutual agreement scheme rely on transitive trust, i.e., the assumption that consecutive participating entities do not collude to attack the provenance chain. Furthermore, these schemes suffer from attacks such as ownership and lone attacks on provenance records. We propose a secure provenance scheme that uses the auditor as a witness to the chain-build process, whereby the auditor incrementally builds a verification tree that serves as its view of the chain. Our scheme removes the transitive-trust dependency; hence, collusion attacks by consecutive participating entities are successfully detected. Additionally, our scheme captures the DAG structure of provenance information and achieves the secure provenance requirements: integrity, availability, and confidentiality. Security analysis and empirical results show that the scheme provides better security guarantees than previously proposed schemes, with reasonable overheads that are outweighed by the protection gained and by the removal of transitive trust, which may not be feasible in practice.
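The paper's verification tree over a provenance DAG is more elaborate, but the core idea of an auditor witnessing the chain-build process can be sketched with a simple hash chain (all names and record fields here are hypothetical, not the paper's scheme):

```python
import hashlib
import json

def link_hash(record, prev_hash):
    """Hash of the current provenance record chained to the previous link."""
    payload = json.dumps(record, sort_keys=True).encode() + prev_hash
    return hashlib.sha256(payload).digest()

class Auditor:
    """Witness to the chain-build process: keeps its own copy of every
    link hash, so later verification does not trust the participants."""
    def __init__(self):
        self.view = []

    def witness(self, record, prev_hash):
        h = link_hash(record, prev_hash)
        self.view.append(h)
        return h

    def verify(self, records):
        prev = b""
        for record, seen in zip(records, self.view):
            prev = link_hash(record, prev)
            if prev != seen:
                return False  # chain diverges from the auditor's view
        return len(records) == len(self.view)

auditor = Auditor()
chain = [{"actor": "alice", "op": "create"}, {"actor": "bob", "op": "edit"}]
prev = b""
for rec in chain:
    prev = auditor.witness(rec, prev)
print(auditor.verify(chain))  # True
chain[1]["op"] = "delete"     # tampering after the fact
print(auditor.verify(chain))  # False
```

Because each link is anchored in the auditor's independent view, even two colluding consecutive participants cannot rewrite their portion of the chain undetected.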


Archive | 2017

m-Skin Doctor: A Mobile Enabled System for Early Melanoma Skin Cancer Detection Using Support Vector Machine

Muhammad Aleem Taufiq; Nazia Hameed; Adeel Anjum; Fozia Hameed

Early detection of skin cancer is very important, as it is one of the most dangerous forms of cancer and is spreading rapidly among humans. With the advancement of mobile technology, mobile-enabled skin cancer detection systems are in high demand, but currently very few real-time skin cancer detection systems are available to the general public, and most of those available are paid. In this paper, the authors propose a real-time, mobile-enabled health-care system for the detection of skin melanoma for general users. The proposed system is developed using computer vision and image processing techniques. Noise is removed by applying a Gaussian filter, and the GrabCut algorithm is used for segmentation. A Support Vector Machine (SVM) is applied as the classification technique on texture features such as area, perimeter and eccentricity. The sensitivity and specificity achieved by m-Skin Doctor are 80% and 75%, respectively. The average time consumed by the application to classify one image is 14,938 ms.
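The reported 80% sensitivity and 75% specificity are standard confusion-matrix metrics; the sketch below shows how they are computed (the toy label vectors are illustrative, not the paper's data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP);
    label 1 = melanoma, label 0 = benign lesion."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# 5 melanoma images (4 caught) and 4 benign images (3 cleared):
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 1]
print(sensitivity_specificity(y_true, y_pred))  # (0.8, 0.75)
```

For a screening application, the two rates trade off against each other: a missed melanoma (lower sensitivity) is usually costlier than a false alarm (lower specificity).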


Multimedia Tools and Applications | 2017

Formal modeling and verification of security controls for multimedia systems in the cloud

Masoom Alam; Saif Ur Rehman Malik; Qaisar Javed; Abid Khan; Shamaila Bisma Khan; Adeel Anjum; Nadeem Javed; Adnan Akhunzada; Muhammad Khurram Khan

Organizations deploy Security Information and Event Management (SIEM) systems for the centralized management of security alerts to secure their multimedia content. A SIEM system not only preserves the event data generated by devices and applications in the form of logs but also performs real-time analysis of that data. The SIEM acts as the Security Operation Centre (SOC) of an organization; therefore, errors in the SIEM may compromise the organization's security. In addition to focusing on the architecture, features, and performance of a SIEM, it is imperative to carry out a formal analysis to verify that the system is impeccable. The ensuing research focuses mainly on the formal verification of OSTORM, a SIEM system. We have used High-Level Petri Nets (HLPN) and the Z language to model and analyze the system. Moreover, the Satisfiability Modulo Theories Library (SMT-Lib) and the Z3 solver are used to prove the correctness of the overall working of the OSTORM system. We demonstrate the correctness of the underlying system based on four security properties, namely: a) event data confidentiality, b) authentication, c) event data integrity, and d) alarm integrity. The results reveal that the OSTORM system functions correctly.
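HLPN models and Z3 proofs do not fit in a short sketch, but the flavor of exhaustively checking a safety property such as alarm integrity can be shown on a toy model (entirely hypothetical; this is not the OSTORM model or its verification conditions):

```python
from itertools import product

def soc_accepts(event_intact, tag_valid):
    """Toy model of the SOC decision: an alarm is trusted only when the
    event body is intact and its integrity tag verifies."""
    return event_intact and tag_valid

def alarm_integrity_holds():
    """Exhaustively enumerate the (tiny) state space and check that no
    tampered event (event_intact=False) ever yields a trusted alarm."""
    return all(not soc_accepts(e, t)
               for e, t in product([True, False], repeat=2)
               if not e)

print(alarm_integrity_holds())  # True
```

A real verification, as in the paper, encodes the full HLPN transition system as SMT constraints and lets Z3 search the state space instead of enumerating it by hand.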


The Journal of Supercomputing | 2018

An efficient approach for publishing microdata for multiple sensitive attributes

Adeel Anjum; Naveed Ahmad; Saif Ur Rehman Malik; Samiya Zubair; Basit Shahzad

The publication of microdata is pivotal for medical research, data analysis and data mining. Published data of this kind contain a substantial amount of sensitive information; for example, a hospital may publish many sensitive attributes such as diseases, treatments and symptoms. Releasing multiple sensitive attributes without protection puts the privacy of individuals at risk: if an adversary succeeds in identifying a single sensitive attribute, the other sensitive attributes can then be identified by correlation. A variety of techniques such as SLOMS and SLAMSA already exist for the anonymization of multiple sensitive attributes; however, these techniques have drawbacks when it comes to preserving privacy and ensuring data utility. We propose an efficient approach, (p, k)-Angelization, for the anonymization of multiple sensitive attributes. Our proposed approach protects the privacy of individuals and yields promising results compared with currently used techniques in terms of utility. The (p, k)-Angelization approach not only preserves privacy by eliminating the threat of background-join and non-membership attacks but also reduces information loss, thus improving the utility of the released information.
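The published (p, k)-Angelization algorithm is more involved; the sketch below only illustrates the kind of constraint such a bucketization must satisfy (record fields are hypothetical): each bucket needs at least k records and at least p distinct values per sensitive attribute, so pinning down one sensitive value does not expose the others by correlation:

```python
def valid_partition(buckets, p, k):
    """Check a (p, k)-style constraint on a candidate bucketization:
    >= k records per bucket, and >= p distinct values for every
    sensitive attribute inside each bucket."""
    for bucket in buckets:
        if len(bucket) < k:
            return False
        for attr in bucket[0]:
            if len({rec[attr] for rec in bucket}) < p:
                return False
    return True

# Toy hospital release with two sensitive attributes per record.
buckets = [
    [{"disease": "flu",      "treatment": "rest"},
     {"disease": "asthma",   "treatment": "inhaler"}],
    [{"disease": "diabetes", "treatment": "insulin"},
     {"disease": "ulcer",    "treatment": "antacid"}],
]
print(valid_partition(buckets, p=2, k=2))  # True
```

The algorithmic work in the paper lies in finding a partition that passes such checks while minimizing the information loss of the release.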


Information Systems | 2018

Performance prediction and adaptation for database management system workload using Case-Based Reasoning approach

Basit Raza; Yogan Jaya Kumar; Ahmad Kamran Malik; Adeel Anjum; Muhammad Faheem

Workload management in a Database Management System (DBMS) has become difficult and challenging because of workload complexity and heterogeneity, and the workload is hard to control and handle during and after execution. Predicting the performance of a workload before executing it can help in workload management: by knowing the type of workload in advance, we can predict its performance adaptively, which enables us to monitor and control the workload and ultimately leads to performance tuning of the DBMS. This study proposes a predictive and adaptive framework named the Autonomic Workload Performance Prediction (AWPP) framework. The proposed AWPP framework predicts and adapts the DBMS workload performance on the basis of information available before the workload executes. The Case-Based Reasoning (CBR) approach is used to solve the workload management problem and is compared with other machine learning techniques. To validate the AWPP framework, a number of benchmark workloads of the Decision Support System (DSS) and Online Transaction Processing (OLTP) classes are executed on the MySQL DBMS. To prepare the training and testing data, we executed more than 1000 TPC-H-like and TPC-C-like workloads on a standard data set. The results show that our proposed AWPP framework, through CBR modeling, performs better in predicting and adapting the DBMS workload. DBMS algorithms can be optimized for this prediction, and the workload can be controlled and managed in a better way. Finally, the results are validated by performing post-hoc tests.
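The CBR retrieve step at the heart of such a framework can be sketched as a nearest-neighbor lookup over workload feature vectors (feature names and case fields below are hypothetical, not the AWPP feature set):

```python
import math

def retrieve(case_base, query_features, k=1):
    """CBR 'retrieve' step: return the k stored workload cases whose
    feature vectors lie closest (Euclidean) to the incoming workload."""
    return sorted(case_base,
                  key=lambda c: math.dist(c["features"], query_features))[:k]

# Hypothetical case base: (write_ratio, scan_ratio) -> observed behavior.
case_base = [
    {"features": (0.9, 0.1), "type": "OLTP", "runtime_s": 3.2},
    {"features": (0.1, 0.8), "type": "DSS",  "runtime_s": 41.0},
]
best = retrieve(case_base, (0.85, 0.2))[0]
print(best["type"], best["runtime_s"])  # OLTP 3.2
```

The retrieved case's recorded performance serves as the prediction; the remaining CBR steps (reuse, revise, retain) then adapt the solution and fold the new observation back into the case base.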


Computational Intelligence | 2018

A Parallel Implementation of GHB Tree

Zineddine Kouahla; Adeel Anjum

Searching in a dataset remains a fundamental problem for many applications. The general purpose of many similarity measures is to focus the search on as few elements as possible to find the answer. Current indexing techniques divide the target dataset into subsets; however, with large amounts of data the volume of these regions explodes, which hurts search algorithms: the search tends to degenerate into a complete scan of the data set. In this paper, we propose a new indexing technique called the GHB-tree. The first idea is to limit the volume of the space; the goal is to eliminate some objects without needing to compute their distances to the query object. Peer-to-peer (P2P) networks are overlay networks that connect independent computers (also known as nodes or peers), and the GHB-tree has been optimized for secondary memory in peer-to-peer networks. We propose a parallel search algorithm on a set of real machines, and we discuss the effectiveness of the construction and search algorithms, as well as the quality of the index.
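The idea of eliminating objects without computing their distances to the query rests on the standard metric-space pruning rule: for a pivot p, the triangle inequality gives |d(q, p) - d(p, o)| ≤ d(q, o), so any object whose left-hand side exceeds the search radius can be discarded for free. A minimal sketch of this rule (not the GHB-tree itself; names are illustrative):

```python
def range_search(index, query, radius, dist):
    """Range search with triangle-inequality pruning. `index` pairs a
    pivot with (object, precomputed d(pivot, object)) entries."""
    pivot, entries = index
    dq = dist(query, pivot)
    hits = []
    for obj, d_po in entries:
        if abs(dq - d_po) > radius:
            continue                      # pruned: d(q, obj) must exceed radius
        if dist(query, obj) <= radius:    # distance computed only for survivors
            hits.append(obj)
    return hits

dist = lambda a, b: abs(a - b)            # 1-D toy metric
pivot = 0.0
data = [1.0, 2.0, 5.0, 9.0]
index = (pivot, [(o, dist(pivot, o)) for o in data])
print(range_search(index, query=2.5, radius=1.0, dist=dist))  # [2.0]
```

A tree index such as the GHB-tree applies this rule recursively at every node, and in the parallel setting each peer prunes its local partition independently.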


Knowledge and Information Systems | 2018

Autonomic workload performance tuning in large-scale data repositories

Basit Raza; Asma Sher; Sana Afzal; Ahmad Kamran Malik; Adeel Anjum; Yogan Jaya Kumar; Muhammad Faheem

The workload in large-scale data repositories involves concurrent users and contains homogeneous and heterogeneous data. The large volume of data and the dynamic, versatile behavior of large-scale data repositories are not easy for humans to manage, which calls for computational support in managing the load on current servers. Autonomic technology can support predicting the workload type; knowing whether a workload is decision support or online transaction processing can help servers adapt to it autonomously. An intelligent system can be designed that knows the workload type in advance and predicts its performance, autonomically adapting to the workload's changing behavior. Workload management involves effectively monitoring and controlling the workflow of queries in large-scale data repositories. This work presents a taxonomy, derived through a systematic analysis, of workload management in large-scale data repositories with respect to autonomic computing (AC), covering database management systems and data warehouses. State-of-the-art practices in large-scale data repositories are reviewed with respect to AC for the characterization, performance prediction and adaptation of workloads. Current issues are highlighted at the end, along with future directions.

Collaboration


Dive into Adeel Anjum's collaboration.

Top Co-Authors

Saif Ur Rehman Malik, COMSATS Institute of Information Technology
Abid Khan, COMSATS Institute of Information Technology
Basit Raza, COMSATS Institute of Information Technology
Ahmad Kamran Malik, COMSATS Institute of Information Technology
Mansoor Ahmed, COMSATS Institute of Information Technology
Tanzila Saba, Prince Sultan University
Basit Shahzad, National University of Modern Languages