
Publication


Featured research published by Awais Ahmad.


PLOS ONE | 2017

Provenance based data integrity checking and verification in cloud environments

Muhammad Imran; Helmut Hlavacs; Inam Ul Haq; Bilal Jan; Fakhri Alam Khan; Awais Ahmad

Cloud computing is a recent trend in IT that moves computing and data away from desktop and hand-held devices into large-scale processing hubs and data centers. It has been proposed as an effective solution for data outsourcing and on-demand computing to control the rising cost of IT setups and management in enterprises. However, on Cloud platforms users' data is moved into remotely located storage, so that users lose control over their data. This unique feature of the Cloud raises many security and privacy challenges which need to be clearly understood and resolved. One of the important concerns is providing proof of data integrity, i.e., correctness of the user's data stored in Cloud storage. Data in Clouds is physically not accessible to the users. Therefore, a mechanism is required through which users can check whether the integrity of their valuable data is maintained or compromised. For this purpose, methods such as mirroring, checksumming, and third-party auditing have been proposed, amongst others. However, these methods either use extra storage space by maintaining multiple copies of data or require the presence of a third-party verifier. In this paper, we address the problem of proving data integrity in Cloud computing by proposing a scheme through which users are able to check the integrity of their data stored in Clouds. In addition, users can track violations of data integrity if they occur. For this purpose, we utilize a relatively new concept in Cloud computing called "Data Provenance". Our scheme reduces the need for third-party services, additional hardware support, and client-side replication of data items for integrity checking.
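The core integrity-checking idea (compare a digest recorded at write time against a digest of the data the cloud returns at read time) can be sketched in a few lines. This is a simplified, client-side illustration of checksumming-style verification, not the paper's actual provenance scheme; the class and method names are hypothetical:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest of a byte string, used as an integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()

class ProvenanceLog:
    """Hypothetical client-side log: record one digest per stored object so
    later reads can be verified without keeping a full local replica."""

    def __init__(self):
        self.records = {}                     # object name -> expected digest

    def record_write(self, name: str, data: bytes) -> None:
        self.records[name] = digest(data)

    def verify_read(self, name: str, data: bytes) -> bool:
        # Integrity holds only if the cloud returned exactly what was stored.
        return self.records.get(name) == digest(data)

log = ProvenanceLog()
log.record_write("report.txt", b"quarterly figures")
print(log.verify_read("report.txt", b"quarterly figures"))  # True
print(log.verify_read("report.txt", b"tampered figures"))   # False
```

Note that the client stores only fixed-size digests, which is how such schemes avoid the data replication that mirroring requires.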


Information & Software Technology | 2017

Systematic literature review and empirical investigation of barriers to process improvement in global software development

Arif Ali Khan; Jacky Keung; Mahmood Niazi; Shahid Hussain; Awais Ahmad

Context: Increasingly, software development organizations are adopting global software development (GSD) strategies, mainly because of the significant return on investment they produce. However, there are many challenges associated with GSD, particularly with regard to software process improvement (SPI). SPI can play a significant role in the successful execution of GSD projects. Objective: The aim of the present study was to identify barriers that can negatively affect SPI initiatives in GSD organizations from both client and vendor perspectives. Method: A systematic literature review (SLR) and survey questionnaire were used to identify and validate the barriers. Results: Twenty-two barriers to successful SPI programs were identified. The results illustrate that the barriers identified using the SLR and survey approaches are largely similar. However, there were significant differences between the rankings of these barriers in the SLR and survey approaches, as indicated by the results of t-tests (for instance, t=2.28, p=0.011<0.05). Our findings demonstrate a moderate positive correlation between the ranks obtained from the SLR and the empirical study (rs(22)=0.567, p=0.006). Conclusions: The identified barriers can assist both client and vendor GSD organizations during the initiation of an SPI program. A client-vendor classification was used to provide a broad picture of SPI programs and their respective barriers. The top-ranked barriers can serve as a guide for GSD organizations prior to initiating an SPI program. We believe that the results of this study can be useful in tackling the problems associated with the implementation of SPI, which is vital to the success and progression of GSD organizations.
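The rank correlation reported above is Spearman's rho. Its form can be reproduced with a small helper; this sketch uses the standard formula for untied rankings and purely illustrative ranks, not the study's 22-barrier data:

```python
def spearman_rho(rank_x, rank_y):
    """Spearman correlation of two rankings without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the rank difference for item i."""
    n = len(rank_x)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_x, rank_y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Illustrative ranks for 5 hypothetical barriers (not the paper's data):
slr_ranks    = [1, 2, 3, 4, 5]
survey_ranks = [2, 1, 4, 3, 5]
print(round(spearman_rho(slr_ranks, survey_ranks), 3))  # 0.8
```

Identical rankings give rho = 1, exactly reversed rankings give rho = -1, so a value such as the paper's 0.567 indicates moderate agreement between the SLR and survey orderings.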


International Journal of Parallel Programming | 2018

Multilevel Data Processing Using Parallel Algorithms for Analyzing Big Data in High-Performance Computing

Awais Ahmad; Anand Paul; Sadia Din; M. Mazhar Rathore; Gyu Sang Choi; Gwanggil Jeon

The growing gap between users and Big Data analytics requires innovative tools that address the challenges of big data volume, variety, and velocity, as it is computationally inefficient to analyze such massive volumes of data. Moreover, advancements in Big Data applications and data science pose additional challenges, for which High-Performance Computing has become a key issue and has attracted attention in recent years. Existing systems, however, are either memoryless or computationally inefficient. Keeping these needs in view, a system is required that can efficiently analyze a stream of Big Data within its requirements. Hence, this paper presents a system architecture that enhances traditional MapReduce by incorporating a parallel processing algorithm. A complete four-tier architecture is also proposed that efficiently aggregates the data, eliminates unnecessary data, and analyzes the data with the proposed parallel processing algorithm. The proposed architecture supports both read and write operations, which enhances the efficiency of Input/Output operations. To check the efficiency of the algorithms exploited in the proposed system architecture, we implemented our system using Hadoop and MapReduce. MapReduce is supported by a parallel algorithm that efficiently processes huge volumes of data. The system is implemented using the MapReduce tool on top of Hadoop parallel nodes to generate and process graphs in near real time. The system is evaluated in terms of efficiency by considering throughput and processing time. The results show that the proposed system is more scalable and efficient.
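The MapReduce flow that this architecture builds on can be illustrated with a minimal single-process word count; in a real deployment, Hadoop distributes the map and reduce phases across cluster nodes. This sketch shows only the generic map/shuffle/reduce pattern, not the paper's parallel algorithm:

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk):
    # Mapper: emit a (word, 1) pair for every word in the input chunk.
    return [(word, 1) for word in chunk.split()]

def shuffle(pairs):
    # Shuffle: group all values emitted under the same key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: aggregate the grouped values for each key.
    return {key: sum(values) for key, values in groups.items()}

chunks = ["big data big", "data stream"]          # one chunk per (simulated) node
pairs = chain.from_iterable(map_phase(c) for c in chunks)
print(reduce_phase(shuffle(pairs)))               # {'big': 2, 'data': 2, 'stream': 1}
```

Because each mapper touches only its own chunk and each reducer only its own key group, both phases parallelize naturally, which is the property the proposed architecture exploits.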


IEEE Access | 2017

A Cluster-Based Data Fusion Technique to Analyze Big Data in Wireless Multi-Sensor System

Sadia Din; Awais Ahmad; Anand Paul; Muhammad Mazhar Ullah Rathore; Gwanggil Jeon

With the development of the latest technologies and changes in market demand, wireless multi-sensor systems are widely used. These multi-sensors are integrated in a way that produces an overwhelming amount of data, termed big data. The multi-sensor system creates several challenges, including extracting accurate information from big data, increasing processing efficiency, reducing power consumption, and providing a reliable route toward the destination using minimum bandwidth. Such shortcomings can be overcome by exploiting novel techniques such as clustering, data fusion, and coding schemes. Data fusion and clustering are proven techniques for efficient data processing; the resulting data have less uncertainty, enabling energy-aware routing protocols. Because of the limited resources of the multi-sensor system, it is a challenging task to reduce energy consumption so that the network survives for a longer period. Keeping the above challenges in view, this paper presents a novel technique using a hybrid algorithm for clustering and cluster member selection in the wireless multi-sensor system. After the selection of cluster heads and member nodes, the proposed data fusion technique is used for partitioning and processing the data. The proposed scheme not only reduces blind broadcast messages but also decreases signaling overhead as a result of cluster formation. Afterward, a routing technique is provided based on a layered architecture, which efficiently minimizes the routing paths toward the base station. A comprehensive analysis compares the proposed scheme with state-of-the-art centralized and distributed clustering techniques. The results show that the proposed scheme outperforms competitive algorithms in terms of energy consumption, packet loss, and cluster formation.
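One ingredient of such clustering schemes, electing cluster heads by residual energy, can be sketched as follows. This toy version is an illustrative assumption and is far simpler than the paper's hybrid algorithm; node IDs, regions, and energy values are made up:

```python
def elect_cluster_heads(nodes, n_clusters):
    """Toy election: within each region, the node with the highest residual
    energy becomes the cluster head. `nodes` is a list of
    (node_id, region, residual_energy) tuples."""
    best = {}
    for node_id, region, energy in nodes:
        if region not in best or energy > best[region][1]:
            best[region] = (node_id, energy)
    # Keep at most n_clusters regions (sorted for deterministic output).
    return {region: head for region, (head, _) in sorted(best.items())[:n_clusters]}

nodes = [("s1", "A", 0.9), ("s2", "A", 0.4), ("s3", "B", 0.7), ("s4", "B", 0.8)]
print(elect_cluster_heads(nodes, 2))   # {'A': 's1', 'B': 's4'}
```

Electing the highest-energy node spreads the head role's extra transmission load, which is how clustering schemes extend network lifetime.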


Journal of Medical Systems | 2018

Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform

Rehan Ashraf; Mudassar Ahmed; Sohail Jabbar; Shehzad Khalid; Awais Ahmad; Sadia Din; Gwangil Jeon

Due to recent developments in technology, the complexity of multimedia has significantly increased, and the retrieval of similar multimedia content is an open research problem. Content-Based Image Retrieval (CBIR) provides a framework for image search in which low-level visual features are commonly used to retrieve images from an image database. The basic requirement in any image retrieval process is to sort the images by close similarity in terms of visual appearance. Color, shape, and texture are examples of low-level image features. Features play a significant role in image processing: the representation of an image as a vector is known as a feature vector, and feature extraction techniques are applied to obtain features useful for classifying and recognizing images. As features define the behavior of an image, they determine its cost in terms of storage, classification efficiency, and time consumption. In this paper, we discuss various types of features and feature extraction techniques, and explain in which scenarios each feature extraction technique performs better. The effectiveness of a CBIR approach is fundamentally based on feature extraction; in image-processing tasks such as object recognition and image retrieval, the feature descriptor is among the most essential steps. The main idea of CBIR is to search a dataset for images related to a query image using distance metrics. The proposed method performs image retrieval based on YCbCr color with a Canny edge histogram and the discrete wavelet transform. The combination of the edge histogram and the discrete wavelet transform increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is also compared to determine the suitability of a particular wavelet function for image retrieval.
The proposed algorithm is implemented and tested on the Wang image database. For image retrieval, an Artificial Neural Network (ANN) is applied to a standard dataset in the CBIR domain. The performance of the proposed descriptors is assessed by computing both precision and recall values and compared with other proposed methods to demonstrate the superiority of our method. The proposed approach outperforms existing research in terms of average precision and recall values.
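Two building blocks named in the abstract, the YCbCr color space and the discrete wavelet transform, can be sketched minimally. The conversion below uses the standard full-range ITU-R BT.601 weights and the wavelet is a one-level 1-D Haar transform; the paper's actual feature pipeline (Canny edge histograms, 2-D wavelets, ANN matching) is considerably more elaborate:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range RGB to YCbCr using the standard ITU-R BT.601 weights,
    the color space the retrieval pipeline builds its color descriptor in."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def haar_dwt_1d(signal):
    """One-level 1-D Haar wavelet transform: pairwise averages form the
    approximation band, pairwise half-differences the detail band."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

print(rgb_to_ycbcr(255, 0, 0))          # pure red: Cr well above 128
print(haar_dwt_1d([4, 2, 5, 5]))        # ([3.0, 5.0], [1.0, 0.0])
```

Separating luma (Y) from chroma (Cb, Cr) makes the color descriptor less sensitive to brightness changes, and the Haar detail band responds to edges, which is why the two combine well for content-based search.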


International Journal of Parallel Programming | 2018

Real-Time Big Data Stream Processing Using GPU with Spark Over Hadoop Ecosystem

M. Mazhar Rathore; Hojae Son; Awais Ahmad; Anand Paul; Gwanggil Jeon

In this technological era, people, authorities, entrepreneurs, businesses, and many things around us are connected to the internet, forming the Internet of Things (IoT). This generates a massive amount of diverse data at very high speed, termed big data. This data is a valuable asset that businesses, organizations, and authorities can use to predict the future in various respects. However, efficiently processing Big Data while making real-time decisions is quite a challenging task. Tools like Hadoop are used for processing big datasets, but they do not perform well for real-time high-speed stream processing. Therefore, in this paper, we propose an efficient, real-time Big Data stream processing approach that maps a Hadoop MapReduce-equivalent mechanism onto graphics processing units (GPUs). We integrate the parallel and distributed environment of the Hadoop ecosystem and a real-time stream processing tool, i.e., Spark, with GPUs to make the system powerful enough to handle the overwhelming amount of high-speed streaming data. We design a MapReduce-equivalent algorithm for GPUs that calculates statistical parameters by dividing Big Data files into fixed-size blocks. Finally, the system is evaluated for efficiency (processing time and throughput) using (1) large city traffic video data, captured by both static and moving vehicle cameras, while identifying vehicles, and (2) large text-based files, such as Twitter data files and structured data. Results show that the proposed system, with Spark on top of GPUs in the parallel and distributed environment of the Hadoop ecosystem, is more efficient and closer to real time than an existing standalone CPU-based MapReduce implementation.
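The block-wise statistical computation described above follows a familiar map/combine pattern: each fixed-size block yields a partial result, and the partials merge into a global statistic. A CPU-only sketch of that pattern for the mean (the GPU implementation itself is not shown here):

```python
def block_partials(values, block_size):
    """Map step: compute a per-block (count, sum) pair, analogous to the
    per-block work a GPU kernel would do in parallel."""
    partials = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        partials.append((len(block), sum(block)))
    return partials

def combine_mean(partials):
    """Reduce step: merge the partial (count, sum) pairs into a global mean."""
    total_count = sum(count for count, _ in partials)
    total_sum = sum(s for _, s in partials)
    return total_sum / total_count

data = [3, 5, 7, 9, 11, 13]
print(combine_mean(block_partials(data, 2)))   # 8.0
```

The key design property is that the partial results are tiny compared with the blocks themselves, so combining them is cheap regardless of how many blocks (or GPU threads) produced them.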


Future Generation Computer Systems | 2017

Toward modeling and optimization of features selection in Big Data based social Internet of Things

Awais Ahmad; Murad Khan; Anand Paul; Sadia Din; M. Mazhar Rathore; Gwanggil Jeon; Gyu Sang Choi

The growing gap between users and Big Data analytics requires innovative tools that address the challenges of big data volume, variety, and velocity, as it is computationally inefficient to analyze and select features from such massive volumes of data. Moreover, advancements in Big Data applications and data science pose additional challenges, for which the selection of appropriate features and a High-Performance Computing (HPC) solution have become key issues and have attracted attention in recent years. Keeping these needs in view, a system is required that can efficiently select features and analyze a stream of Big Data within its requirements. Hence, this paper presents a system architecture that selects features using an Artificial Bee Colony (ABC) algorithm. A Kalman filter is used within the Hadoop ecosystem for noise removal, and traditional MapReduce combined with ABC enhances processing efficiency. A complete four-tier architecture is also proposed that efficiently aggregates the data, eliminates unnecessary data, and analyzes the data with the proposed Hadoop-based ABC algorithm. To check the efficiency of these algorithms, we implemented our system using Hadoop and MapReduce with the ABC algorithm: ABC selects the features, while MapReduce is supported by a parallel algorithm that efficiently processes huge volumes of data. The system is implemented using the MapReduce tool on top of Hadoop parallel nodes and operates in near real time. The proposed system is compared with swarm approaches and evaluated in terms of efficiency, accuracy, and throughput on ten different data sets. The results show that the proposed system is more scalable and efficient in selecting features.
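ABC-based feature selection can be hinted at with a heavily simplified neighbor-search sketch. The real algorithm's employed, onlooker, and scout bee phases and probabilistic food-source selection are omitted here, and the fitness function is a toy assumption, not anything from the paper:

```python
import random

def abc_like_feature_search(n_features, fitness, n_bees=5, n_iters=30, seed=0):
    """Heavily simplified, ABC-flavoured subset search: each 'bee' carries a
    candidate feature subset as a bit mask; a neighbour solution flips one
    bit and replaces the bee whenever it is fitter."""
    rng = random.Random(seed)
    bees = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(n_bees)]
    for _ in range(n_iters):
        for i, bee in enumerate(bees):
            neighbour = bee[:]
            neighbour[rng.randrange(n_features)] ^= 1   # toggle one feature
            if fitness(neighbour) > fitness(bee):
                bees[i] = neighbour
    return max(bees, key=fitness)

def toy_fitness(mask):
    # Toy assumption: features 0 and 2 help; every selected feature costs 0.1.
    useful = {0, 2}
    gain = sum(1 for k, bit in enumerate(mask) if bit and k in useful)
    return gain - 0.1 * sum(mask)

best = abc_like_feature_search(4, toy_fitness)
print(best)
```

Penalizing subset size in the fitness function is what pushes such searches toward small, informative feature sets rather than simply selecting everything.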


ACM Transactions on Internet Technology | 2017

Hadoop-Based Intelligent Care System (HICS): Analytical Approach for Big Data in IoT

M. Mazhar Rathore; Anand Paul; Awais Ahmad; Marco Anisetti; Gwanggil Jeon

The Internet of Things (IoT) is increasingly becoming a worldwide network of interconnected things that are uniquely addressable via standard communication protocols. The use of IoT for continuous monitoring of public health is being rapidly adopted by various countries, generating a massive volume of heterogeneous, multisource, dynamic, and sparse high-velocity data. Handling such an enormous amount of high-speed medical data while integrating, collecting, processing, analyzing, and extracting knowledge constitutes a challenging task. Moreover, most existing IoT devices do not cooperate with one another using the same medium of communication. For this reason, it is challenging to develop healthcare applications for IoT that fulfill all user needs through real-time monitoring of health parameters. To address such issues, this article proposes a Hadoop-based intelligent care system (HICS) that demonstrates IoT-based collaborative contextual Big Data sharing among all of the devices in a healthcare system. In particular, the proposed system involves a network architecture with enhanced processing features for the data generated by millions of connected devices. Various sensors, such as wearable devices, are attached to the human body, measure health parameters, and transmit them to a primary mobile device (PMD). The collected data are then forwarded over the Internet to an intelligent building (IB), where the data are thoroughly analyzed to identify abnormal and serious health conditions. The intelligent building consists of (1) a Big Data collection unit (used for data collection, filtration, and load balancing); (2) a Hadoop processing unit (HPU) (composed of the Hadoop distributed file system (HDFS) and MapReduce); and (3) an analysis and decision unit.
The HPU and the analysis and decision unit are equipped with a medical expert system, which reads the sensor data and takes action in the case of an emergency. To demonstrate the feasibility and efficiency of the proposed system, we use publicly available medical sensory datasets and real-time sensor traffic while identifying serious health conditions of patients using thresholds, statistical methods, and machine-learning techniques. The results show that the proposed system is very efficient and able to process high-speed WBAN sensory data in real time.
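The threshold-based part of the emergency detection can be sketched simply. The ranges below are illustrative assumptions, not clinical or paper-specified values, and the paper additionally applies statistical and machine-learning checks:

```python
# Hypothetical normal ranges for a few vital signs (illustrative only).
NORMAL_RANGES = {
    "heart_rate": (60, 100),       # beats per minute
    "body_temp":  (36.1, 37.5),    # degrees Celsius
    "spo2":       (95, 100),       # blood oxygen saturation, percent
}

def flag_abnormal(reading):
    """Return the parameters whose values fall outside their normal range,
    i.e. the candidates for an emergency alert."""
    alerts = []
    for param, value in reading.items():
        lo, hi = NORMAL_RANGES[param]
        if not (lo <= value <= hi):
            alerts.append(param)
    return alerts

print(flag_abnormal({"heart_rate": 120, "body_temp": 36.8, "spo2": 91}))
# ['heart_rate', 'spo2']
```

Threshold checks like this are cheap enough to run inline on high-speed WBAN streams, with the heavier statistical and learned models reserved for the flagged readings.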


Journal of Sensors | 2016

Context-Aware Mobile Sensors for Sensing Discrete Events in Smart Environment

Awais Ahmad; M. Mazhar Rathore; Anand Paul; Won-Hwa Hong; HyunCheol Seo

Over the last few decades, several advancements in the field of smart environments have gained importance, allowing experts to explore embedded-systems-based smart building designs that minimize expense and conserve energy. In moving from the smart home toward the smart building, several challenges arise in power, communication, and sensor connectivity. Such challenges hinder the interconnectivity between different technologies, such as Bluetooth and ZigBee, that is needed to provide continuous connectivity among objects such as sensors, actuators, home appliances, and cell phones. Therefore, this paper presents a concept of the smart building based on embedded systems that enhances low-power mobile sensors for sensing discrete events. The proposed scheme comprises a system architecture that allows all mobile sensors to communicate with each other through a single platform service. The proposed system advances the concept of the smart building in three stages (i.e., visualization, data analysis, and application). For low-power mobile sensors, we propose a communication model that provides a common medium for communication. Finally, the results show that the proposed system architecture efficiently processes, analyzes, and integrates different datasets and triggers actions to provide safety measures for the elderly, patients, and others.


Computers & Electrical Engineering | 2018

Medical images fusion by using weighted least squares filter and sparse representation

Wei Jiang; Xiaomin Yang; Wei Wu; Kai Liu; Awais Ahmad; Arun Kumar Sangaiah; Gwanggil Jeon

Multi-modal medical image fusion can obtain a more comprehensive, higher-quality image by integrating the complementary information of medical images, which can provide more accurate data for clinical diagnosis and treatment. To preserve the detail and structure information of the source images, this paper proposes a novel medical image fusion method exploiting multi-scale edge-preserving decomposition and sparse representation. In our method, medical source images are decomposed into low-frequency (LF) and high-frequency (HF) layers by the weighted least squares filter. A rule combining the Laplacian pyramid and sparse representation is employed to fuse the LF layers, while the HF layers are merged using a max-absolute fusion rule. Finally, the fused LF and HF layers are combined to obtain the fused image. Experimental results show that our method outperforms many other methods in both visual and quantitative evaluations.
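The max-absolute rule used for the HF layers is straightforward to sketch on 1-D coefficient arrays (real HF layers are 2-D, and the LF fusion rule is more involved):

```python
def fuse_high_frequency(hf_a, hf_b):
    """Max-absolute fusion rule: at each position keep the coefficient with
    the larger magnitude, preserving the stronger edge/detail response
    from either source image."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(hf_a, hf_b)]

# Toy high-frequency coefficient rows from two source images:
print(fuse_high_frequency([0.9, -0.1, 0.3], [-0.2, 0.8, 0.3]))
# [0.9, 0.8, 0.3]
```

Large-magnitude high-frequency coefficients correspond to strong edges, so selecting by magnitude keeps the sharpest structure visible in either modality.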

Collaboration


Dive into Awais Ahmad's collaborations.

Top Co-Authors

Gwanggil Jeon, Incheon National University
Anand Paul, Kyungpook National University
Sadia Din, Kyungpook National University
M. Mazhar Rathore, Kyungpook National University
Sohail Jabbar, National Textile University
Francesco Piccialli, University of Naples Federico II
Fakhri Alam Khan, Information Technology Institute