Mohammad Aazam
Carleton University
Publications
Featured research published by Mohammad Aazam.
Consumer Communications and Networking Conference | 2016
Mohammad Aazam; Marc St-Hilaire; Chung-Horng Lung; Ioannis Lambadaris
Lately, pervasive and ubiquitous computing services have been a focus not only of the research community, but of developers as well. Different devices generate different types of data with different frequencies. Emergency, healthcare, and latency-sensitive services require real-time responses. Also, it is necessary to decide what type of data has to be uploaded to the cloud, without burdening the core network and the cloud. For this purpose, the cloud on the edge of the network, known as Fog or Micro Datacenter (MDC), plays an important role. Fog resides between the underlying Internet of Things (IoT) nodes and the mega-datacenter cloud. Its purpose is to manage resources, perform data filtration and preprocessing, and apply security measures. To achieve this, Fog requires an effective and efficient resource management framework, which we propose in this paper. Fog has to deal with mobile nodes and IoTs, which involve objects and devices of different types having fluctuating connectivity behavior. All such types of service customers have an unpredictable relinquish probability, since any object or device can stop using resources at any moment. In our proposed methodology for resource estimation and management through Fog computing, we take these factors into account and formulate resource management on the basis of the customer's fluctuating relinquish probability, service type, service price, and the variance of the relinquish probability. To show the practical implications of our method, we implemented it on a real CRAWDAD trace with Amazon EC2 pricing. Based on various services, differentiated through Amazon's price plans and the historical record of Cloud Service Customers (CSCs), the model determines the amount of resources to be allocated. More loyal CSCs get better services, while in the contrary case the provider reserves resources cautiously.
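The allocation rule itself is not reproduced in the abstract; as a rough, hypothetical sketch of the idea (the weights, the $0.10/h baseline price, and the function estimate_allocation are invented for illustration, not the paper's formulation), the estimation could look like:

```python
# Hypothetical sketch of relinquish-probability-aware resource estimation.
# The weighting scheme and price baseline below are illustrative only.

def estimate_allocation(requested_units, relinquish_prob, variance, unit_price):
    """Scale the requested resources down as the customer's historical
    relinquish probability (and its variance) grows, and up for pricier
    (presumably higher-priority) service plans."""
    loyalty = 1.0 - relinquish_prob            # loyal customers relinquish rarely
    stability = 1.0 / (1.0 + variance)         # erratic behaviour -> cautious reservation
    price_factor = min(unit_price / 0.10, 2.0) # normalise against a $0.10/h baseline, capped
    return requested_units * loyalty * stability * price_factor

# Example: a customer asking for 8 units, historically giving up 20% of the time.
print(estimate_allocation(8, relinquish_prob=0.2, variance=0.05, unit_price=0.12))
```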
Sensors | 2014
Asad Masood Khattak; Noman Akbar; Mohammad Aazam; Taqdir Ali; Adil Mehmood Khan; Seokhee Jeon; Myunggwon Hwang; Sungyoung Lee
The acceptance and usability of context-aware systems have led to their wide use in various domains and have also attracted the attention of researchers in the area of context-aware computing. Making user context information available to such systems is the center of attention. However, very little emphasis is given to the processes of context representation and context fusion, which are integral parts of context-aware systems. Context representation and fusion facilitate recognizing the dependency/relationship of one data source on another to extract a better understanding of user context. The problem is more critical when data emerges from heterogeneous sources of diverse nature, such as sensors, user profiles, and social interactions, and at different timestamps. Both context representation and fusion are followed in one way or another; however, they are not discussed explicitly in the realization of context-aware systems. In other words, most context-aware systems underestimate the importance of context representation and fusion. This research explicitly focuses on the importance of both processes and streamlines their place in the overall architecture of context-aware system design and development. Various applications of context representation and fusion in context-aware systems are also highlighted. A detailed review of both processes and their applications is provided, and future research directions (challenges) that need attention for realizing context-aware systems are highlighted.
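The survey does not prescribe a single fusion algorithm; a toy sketch of fusing timestamped context from heterogeneous sources (the source names, attributes, and the most-recent-wins rule are assumptions made here, not the paper's method) might look like:

```python
# Toy illustration of fusing context from heterogeneous sources.
# Source names, attributes, and the "most recent observation wins per
# attribute" rule are invented for illustration only.
from datetime import datetime

observations = [
    {"source": "wearable_sensor", "time": datetime(2014, 5, 1, 9, 0),  "activity": "walking"},
    {"source": "calendar",        "time": datetime(2014, 5, 1, 9, 5),  "location": "campus"},
    {"source": "social_feed",     "time": datetime(2014, 5, 1, 9, 10), "activity": "meeting"},
]

def fuse(observations):
    """Merge attribute values, letting the most recent observation of each
    attribute override older ones (a simple temporal-priority fusion rule)."""
    fused = {}
    for obs in sorted(observations, key=lambda o: o["time"]):
        for key, value in obs.items():
            if key not in ("source", "time"):
                fused[key] = value
    return fused

print(fuse(observations))   # {'activity': 'meeting', 'location': 'campus'}
```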
International Conference on Telecommunications | 2016
Mohammad Aazam; Marc St-Hilaire; Chung-Horng Lung; Ioannis Lambadaris
The Internet of Things (IoT) is now transitioning from theory to practice. This means that a lot of data will be generated, and the management of this data is going to be a big challenge. To transform IoT into reality and build realistic, more useful services, better resource management is required at the perception layer. In this regard, Fog computing plays a very vital role. With the advent of Vehicular Ad hoc Networks (VANETs) and remote healthcare and monitoring, quick response time and latency minimization are required. However, the receiving nodes have very fluctuating resource-consumption behavior, especially if they are mobile. Fog, a localized cloud placed close to the underlying IoTs, provides the means to cater to such issues by analyzing the behavior of the nodes and estimating resources accordingly. Similarly, Service Level Agreement (SLA) management and meeting Quality of Service (QoS) requirements also become issues. In this paper, we devise a methodology, referred to as MEdia FOg Resource Estimation (MeFoRE), to provide resource estimation on the basis of the service give-up ratio, also called the Relinquish Rate (RR), and to enhance QoS on the basis of previous Quality of Experience (QoE) and Net Promoter Score (NPS) records. The algorithms are implemented using CloudSim and applied to real IoT traces with Amazon EC2 resource pricing.
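MeFoRE's actual equations are not given in the abstract; a minimal, hypothetical sketch of how RR, QoE, and NPS might jointly scale an allocation (the thresholds and coefficients below are placeholders, not MeFoRE's parameters) could be:

```python
# Illustrative sketch only: the coefficients below are not MeFoRE's
# actual equations.

def mefore_like_estimate(base_allocation, relinquish_rate, past_qoe, nps):
    """Shrink the allocation for nodes that frequently give up service,
    and grant headroom to nodes with good QoE history and high NPS."""
    rr_penalty = 1.0 - relinquish_rate          # RR assumed in [0, 1]
    qoe_bonus = 0.8 + 0.4 * (past_qoe / 5.0)    # QoE assumed on a 1-5 scale
    nps_bonus = 1.0 + 0.2 * (nps / 100.0)       # NPS assumed in [-100, 100]
    return base_allocation * rr_penalty * qoe_bonus * nps_bonus

print(mefore_like_estimate(10, relinquish_rate=0.1, past_qoe=4.2, nps=40))
```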
2017 International Conference on Computing, Networking and Communications (ICNC) | 2017
Mohammad Etemad; Mohammad Aazam; Marc St-Hilaire
With the increase in popularity of the Internet of Things (IoT), pervasive computing, healthcare services, sensor networks, and mobile devices, a lot of data is being generated at the perception layer. Cloud is the most viable solution for data storage, processing, and management. Cloud also helps in the creation of further services, refined according to context and requirements. However, being reachable only through the Internet, cloud is not efficient enough for latency-sensitive multimedia services and other time-sensitive services, like emergency and healthcare. Fog, an extended cloud lying within the proximity of the underlying nodes, can mitigate issues that a standalone traditional cloud cannot solve. Fog can provide quick responses to the applications that require them. Moreover, it can preprocess and filter data according to the requirements. The trimmed data is then sent to the cloud for further analysis and enhanced service provisioning. However, how much better it is to have a fog in a particular scenario, instead of a standalone cloud working without fog, remains an open question. In this paper, we provide an answer by analyzing both cloud-only and cloud-fog scenarios in terms of processing delay and power consumption as the number of users increases, under varying server load. The simulation is done through Discrete Event System Specification (DEVS). Simulation results demonstrate that with fog networks, users experience lower waiting times and higher data rates.
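The paper's DEVS model is not reproduced here; as a back-of-the-envelope illustration of why a nearby fog tier can lower response time even with a smaller server (the latencies, service rates, and the M/M/1 approximation are assumptions of this sketch, not the paper's parameters):

```python
# Rough comparison of cloud-only vs. cloud-fog response time: network
# round trip plus an M/M/1 queueing delay at the serving tier.
# All figures below are made up for illustration.

def mm1_response_time(arrival_rate, service_rate):
    """Mean time in an M/M/1 system; only valid while the server keeps up."""
    assert arrival_rate < service_rate, "server overloaded"
    return 1.0 / (service_rate - arrival_rate)

def total_delay(users, per_user_rate, one_way_latency, service_rate):
    return 2 * one_way_latency + mm1_response_time(users * per_user_rate, service_rate)

for users in (50, 100, 200):
    cloud_only = total_delay(users, 0.5, one_way_latency=0.050, service_rate=400)  # distant, big server
    with_fog   = total_delay(users, 0.5, one_way_latency=0.002, service_rate=150)  # nearby, smaller server
    print(users, round(cloud_only, 4), round(with_fog, 4))
```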
Archive | 2016
Mohammad Aazam; Eui-Nam Huh; Marc St-Hilaire; Chung-Horng Lung; Ioannis Lambadaris
With rapidly increasing Wireless Sensor Network (WSN) and Internet of Things (IoT) based services, a lot of data is being generated. It is becoming very difficult to manage power-constrained small sensors and other data-generating devices. With IoT, anything can become part of the Internet and generate data. Moreover, the generated data needs to be managed according to its requirements in order to create more valuable services. For this purpose, the integration of IoT with cloud computing is becoming very important. This new paradigm is termed the Cloud of Things (CoT). CoT provides the means to handle the increasing data and other resources of the underlying IoTs and WSNs. It also helps in creating an extended portfolio of services that can be provided through this amalgamation. In the future, CoT is going to play a very vital role. In this chapter, the importance of CoT, its architecture and operation, and the issues involved are discussed.
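As a schematic sketch of the CoT data path just described, where devices feed a local fog/gateway tier that filters data before forwarding it to the cloud, the following toy classes illustrate the idea (the class names and the simple threshold filter are inventions of this sketch, not taken from the chapter):

```python
# Schematic sketch of an IoT -> fog -> cloud pipeline; the threshold
# filter stands in for whatever preprocessing a real deployment would do.

class CloudBackend:
    def __init__(self):
        self.records = []
    def store(self, device_id, reading):
        self.records.append((device_id, reading))

class FogGateway:
    """Pre-processes sensor readings locally and forwards only the
    interesting ones to the cloud back end."""
    def __init__(self, cloud, threshold):
        self.cloud = cloud
        self.threshold = threshold
    def ingest(self, device_id, reading):
        if reading >= self.threshold:           # local filtration
            self.cloud.store(device_id, reading)

cloud = CloudBackend()
fog = FogGateway(cloud, threshold=30.0)
for temp in (21.5, 34.2, 19.8, 41.0):
    fog.ingest("sensor-7", temp)
print(cloud.records)   # only the readings above the threshold reach the cloud
```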
Computer Aided Modeling and Design of Communication Links and Networks | 2016
Mohammad Aazam; Marc St-Hilaire; Chung-Horng Lung; Ioannis Lambadaris
With the ever-increasing population, urbanization, migration issues, and changes in lifestyle, municipal solid waste generation levels are rising significantly. Hence, waste management becomes a challenge faced not only by developing nations, but also by developed and advanced countries. Overall waste management involves three main types of entities: 1) users who generate waste, 2) waste collectors/city administration, and 3) stakeholders. Waste management directly affects lifestyle, healthcare, the environment, recycling and disposal, and several other industries. Current waste management trends are not sophisticated enough to achieve a robust and efficient waste management mechanism. It is very important to have a smart way of managing waste, so that not only is the waste status reported in time for collection, but all stakeholders are also made aware, in a timely fashion, of what type of waste, in what quantity, will arrive at what particular time. This not only helps in attracting and identifying stakeholders, but also aids in creating more effective ways of recycling and minimizing waste, making overall waste management more efficient and environmentally friendly. Keeping all this in mind, we propose a cloud-based smart waste management mechanism in which the waste bins are equipped with sensors capable of sensing their fill level and uploading the status to the cloud. Stakeholders are able to access the desired data from the cloud. Moreover, for city administration and waste management, it becomes possible to optimize routes and select collection paths according to the statuses of waste bins across a metropolis, improving fuel and time efficiency.
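As a hypothetical illustration of the bin-status and route-selection step (the fill threshold, bin coordinates, and the greedy nearest-bin ordering are invented here; the paper does not specify a particular routing algorithm):

```python
# Toy sketch: pick the bins that are nearly full and order them with a
# greedy nearest-neighbour route from the depot. Data is made up.
from math import dist

bins = {                         # bin id -> (x, y, fill level in %)
    "A": (0.0, 1.0, 85),
    "B": (2.5, 0.5, 40),
    "C": (1.0, 3.0, 92),
    "D": (4.0, 2.0, 78),
}

def bins_to_collect(bins, threshold=75):
    return {bid: (x, y) for bid, (x, y, fill) in bins.items() if fill >= threshold}

def greedy_route(depot, stops):
    """Order the full bins by repeatedly visiting the nearest remaining one."""
    route, here, remaining = [], depot, dict(stops)
    while remaining:
        nxt = min(remaining, key=lambda b: dist(here, remaining[b]))
        route.append(nxt)
        here = remaining.pop(nxt)
    return route

full = bins_to_collect(bins)
print(greedy_route((0.0, 0.0), full))   # e.g. ['A', 'C', 'D']
```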
Archive | 2018
Mohammad Aazam; Marc St-Hilaire; Chung-Horng Lung; Ioannis Lambadaris; Eui-Nam Huh
The Internet of Things (IoT) is transitioning from theory to practice. As IoT-based services evolve and the means of connectivity progress, a multitude of devices and objects will become part of it. As a result, a lot of data will be generated, and managing it is going to be a big challenge. In order to build realistic and more useful services, better resource management is required at the data perception layer. In this regard, fog computing plays a very vital role. Prevailing Wireless Sensor Network (WSN), healthcare, crowdsensing, and smart living related services have made it difficult to handle all the data in an efficient and effective way and to create more useful services. Different devices generate different types of data with different frequencies, which cannot be handled by a standalone IoT. Therefore, the consolidation of cloud computing with IoT, termed Cloud of Things (CoT), has recently been under discussion. CoT provides ease of management for the growing media content and other data. Besides this, CoT brings features like ubiquitous access, service creation, service discovery, and resource provisioning, which play a significant role. Emergency, healthcare, and latency-sensitive services require real-time responses. With the advent of Vehicular Ad hoc Networks (VANETs) and remote healthcare and monitoring, quick response time and latency minimization are required. Fog resides between the underlying IoTs (multiple IoT networks) and the cloud datacenter in a CoT scenario. Its purpose is to manage resources, perform data filtration and preprocessing, and take required security measures. To achieve this, fog requires an effective and efficient resource management framework, which we propose in this chapter as an extension of our previous work. Fog has to deal with mobile nodes and IoTs, which involve objects and devices of different types having fluctuating connectivity behavior. All such types of service customers have an unpredictable service abortion pattern (relinquish probability), since any object or device can stop using resources at any moment. Fog, a localized cloud placed close to the underlying IoTs, provides the means to cater to such issues by analyzing the behavior of the nodes and estimating resources accordingly. Similarly, Service Level Agreement (SLA) management and meeting Quality of Service (QoS) requirements also become issues. QoS directly affects the Quality of Experience (QoE), which plays a key role in influencing customer loyalty. This chapter focuses on the estimation of resources for IoT nodes on the basis of their Relinquish Rate (RR) and QoS. This helps in creating a dynamic and rational way of estimating resources according to requirements, with customer loyalty paying for itself. The devised algorithms are implemented in Java and simulated with the CloudSim toolkit to obtain the evaluation results.
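The chapter's exact estimation formulas are not given in the abstract; one hypothetical way to track a node's relinquish history and derive an allocation factor from it (the window size and the QoE weighting are assumptions of this sketch, not the chapter's method) is:

```python
# Sketch of a per-node relinquish/QoE history feeding an allocation factor.
# Window size and weighting are placeholders.
from collections import deque

class NodeProfile:
    def __init__(self, window=20):
        self.history = deque(maxlen=window)     # 1 = session relinquished early, 0 = completed
        self.qoe_scores = deque(maxlen=window)  # QoE assumed on a 1-5 scale

    def record_session(self, relinquished, qoe):
        self.history.append(1 if relinquished else 0)
        self.qoe_scores.append(qoe)

    def allocation_factor(self):
        if not self.history:
            return 1.0                          # no history: allocate as requested
        rr = sum(self.history) / len(self.history)
        qoe = sum(self.qoe_scores) / len(self.qoe_scores)
        return (1.0 - rr) * (0.8 + 0.4 * qoe / 5.0)

node = NodeProfile()
for relinquished, qoe in [(False, 4.5), (True, 3.0), (False, 4.8)]:
    node.record_session(relinquished, qoe)
print(node.allocation_factor())
```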
World Congress on Services | 2018
Qi Hu; Mohammad Aazam; Marc St-Hilaire
Resource allocation is an important problem for all Cloud Service Providers (CSPs). Some recent studies propose interesting resource assignment models based on the historical behavior of customers. However, they have a few limitations. For example, some of the proposed models are not suitable in all situations or server load conditions. In this paper, we address such limitations of the model in [1] and introduce several new resource estimation functions to achieve better resource allocation. More precisely, four new mathematical models are first proposed and analyzed. Then, the CloudSim simulation toolkit is used to compare the mathematical results with the simulation results. Our preliminary analysis indicates that different models should be used in different situations in order to achieve better resource utilization.
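The four models themselves are not listed in the abstract; a hypothetical sketch of the "pick a different estimation function per load condition" idea (the candidate functions and load thresholds below are placeholders, not the paper's models) might be:

```python
# Placeholder estimation functions: more generous when the pool is idle,
# increasingly cautious as server load rises.

def generous(req, loyalty):     return req * (0.5 + 0.5 * loyalty)
def linear(req, loyalty):       return req * loyalty
def conservative(req, loyalty): return req * loyalty ** 2
def strict(req, loyalty):       return req * loyalty ** 3

def pick_model(server_load):
    """Select an estimation function based on current server load."""
    if server_load < 0.3:
        return generous
    if server_load < 0.7:
        return linear
    if server_load < 0.9:
        return conservative
    return strict

for load in (0.2, 0.5, 0.8, 0.95):
    model = pick_model(load)
    print(load, model.__name__, round(model(10, loyalty=0.8), 2))
```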
International Conference on Telecommunications | 2017
Huu Dinh; Alexander Dworkin; Christopher O'Neill; Scott Savage; Jimmy Leak; Mohammad Aazam; Marc St-Hilaire
With the advent of cloud computing based technologies, the way data is stored and managed has changed completely. With more data being generated every day, especially with other associated paradigms like the Internet of Things (IoT), there will be a lot of data to be stored and managed in the cloud. Thus, it will be very important to know which cloud service is more suitable for what kind of data and data-management tasks. To that end, this paper proposes and implements a cloud storage and collaboration tool called OmniBox. The first goal of OmniBox is to provide an evaluation of different cloud providers. The tool monitors different parameters and provides statistics such as upload throughput, download throughput, jitter, single-key and multi-key user accounts, and concurrent download time. The second goal of OmniBox is to provide a unified data management service where users can manage files from different cloud providers within a single interface. Additionally, OmniBox provides a feature called smart upload, where cloud providers are evaluated and a suitable service is selected for uploading a file based on Quality of Service (QoS), file type, file size, upload throughput, download throughput, available space, jitter, and latency. Our current implementation includes the Dropbox and Box Application Programming Interfaces (APIs).
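A rough sketch of the kind of scoring a "smart upload" decision implies (the metrics, weights, and provider figures below are invented; the real tool measures live statistics through the Dropbox and Box APIs rather than reading a static table):

```python
# Toy provider table and scoring rule for choosing an upload target.
# All numbers are illustrative.

providers = {
    "Dropbox": {"upload_mbps": 18.0, "free_space_gb": 1.2, "jitter_ms": 12.0, "latency_ms": 90.0},
    "Box":     {"upload_mbps": 11.0, "free_space_gb": 8.0, "jitter_ms":  7.0, "latency_ms": 120.0},
}

def score(stats, file_size_mb):
    if stats["free_space_gb"] * 1024 < file_size_mb:
        return float("-inf")                    # cannot hold the file at all
    transfer_s = file_size_mb * 8 / stats["upload_mbps"]
    return -(transfer_s + stats["latency_ms"] / 1000 + stats["jitter_ms"] / 1000)

def smart_upload_target(file_size_mb):
    return max(providers, key=lambda p: score(providers[p], file_size_mb))

print(smart_upload_target(250))   # picks the provider with the best composite score
```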
International Conference on Telecommunications | 2017
Sarabjeet Singh; Mohammad Aazam; Marc St-Hilaire
A lot of work on resource estimation has been carried out in the area of cloud computing. For example, some recent models assign resources based on the history of the users and the utilization of the cloud resource pool. Although these models have generally shown an increase in server utilization, they lack a cost-benefit analysis that would reveal the profit margin obtained by the respective resource allocation schemes. Therefore, a complete cost model could be a valuable tool for IaaS cloud service providers (CSPs) to compare various resource assignment mechanisms. In this paper, we introduce a Relinquishment-Aware Cloud Economics Model (RACE) to calculate the net profit in a cloud provider environment. Our model includes various parameters such as service price, income from resources used by cloud service customers (CSCs), service utilization, number of servers, electricity cost, and service relinquishment cost. The noteworthy contribution of our model is that it includes the cost incurred when users leave the cloud provider before their scheduled end time. We consider this loss a relinquishment cost, or opportunity-cost loss. After implementing our model, we evaluate different resource allocation schemes in a finite resource pool environment. The preliminary results show that blindly assigning more resources does not necessarily generate more profit.
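As a back-of-the-envelope sketch of a RACE-style profit calculation (the cost figures and the linear relinquishment-cost term are illustrative placeholders, not the model's actual parameters):

```python
# Illustrative net-profit calculation: income minus electricity minus the
# opportunity cost of capacity relinquished before its scheduled end time.

def net_profit(income_from_csc, servers, electricity_cost_per_server,
               relinquished_unit_hours, price_per_unit_hour):
    electricity = servers * electricity_cost_per_server
    # Opportunity cost: capacity reserved for customers who left early and
    # that could not be resold in time.
    relinquishment_cost = relinquished_unit_hours * price_per_unit_hour
    return income_from_csc - electricity - relinquishment_cost

print(net_profit(income_from_csc=5000.0, servers=20,
                 electricity_cost_per_server=35.0,
                 relinquished_unit_hours=800, price_per_unit_hour=0.12))
```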