M. Shamim Hossain
King Saud University
Publications
Featured research published by M. Shamim Hossain.
International Journal of Distributed Sensor Networks | 2013
Atif Alamri; Wasai Shadab Ansari; Mohammad Mehedi Hassan; M. Shamim Hossain; Abdulhameed Alelaiwi; M. Anwar Hossain
Nowadays, wireless sensor network (WSN) applications are used in several important areas, such as healthcare, military, critical infrastructure monitoring, environment monitoring, and manufacturing. However, due to the limitations of WSNs in terms of memory, energy, computation, communication, and scalability, efficient management of the large volume of data produced by WSNs in these areas is an important issue. There is a need for a powerful and scalable high-performance computing and massive storage infrastructure for real-time processing and storage of WSN data, as well as online and offline analysis of the processed information in context, using inherently complex models to extract events of interest. In this scenario, cloud computing is becoming a promising technology for providing a flexible stack of massive computing, storage, and software services in a scalable and virtualized manner at low cost. Therefore, in recent years, the Sensor-Cloud infrastructure has become popular as an open, flexible, and reconfigurable platform for several monitoring and control applications. In this paper, we present a comprehensive study of representative works on Sensor-Cloud infrastructure, which provides general readers with an overview of the Sensor-Cloud platform, including its definition, architecture, and applications. The research challenges, existing solutions, and approaches, as well as future research directions, are also discussed in this paper.
IEEE Systems Journal | 2017
M. Shamim Hossain
The potential of cloud-supported cyber–physical systems (CCPSs) has drawn a great deal of interest from academia and industry. CCPSs facilitate the seamless integration of devices in the physical world (e.g., sensors, cameras, microphones, speakers, and GPS devices) with cyberspace. This enables a range of emerging applications or systems such as patient or health monitoring, which require patient locations to be tracked. These systems integrate a large number of physical devices such as sensors with localization technologies (e.g., GPS and wireless local area networks) to generate, sense, analyze, and share huge quantities of medical and user-location data for complex processing. However, there are a number of challenges regarding these systems in terms of the positioning of patients, ubiquitous access, large-scale computation, and communication. Hence, there is a need for an infrastructure or system that can provide scalability and ubiquity in terms of huge real-time data processing and communications in the cyber or cloud space. To this end, this paper proposes a cloud-supported cyber–physical localization system for patient monitoring using smartphones to acquire voice and electroencephalogram signals in a scalable, real-time, and efficient manner. The proposed approach uses Gaussian mixture modeling for localization and is shown to outperform other similar methods in terms of error estimation.
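As an illustration of the localization idea described above, the following is a minimal sketch of Gaussian mixture model-based zone classification, assuming Wi-Fi RSSI-style feature vectors and two synthetic zones; the paper's actual signals, features, and model parameters are not reproduced here.

```python
# Hypothetical sketch: GMM-based localization in the spirit of the abstract.
# The RSSI-like features and zone labels below are invented placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic training data: RSSI-like feature vectors for two indoor zones.
zone_a = rng.normal(loc=[-40, -70, -80], scale=3, size=(200, 3))
zone_b = rng.normal(loc=[-75, -45, -60], scale=3, size=(200, 3))

# Fit one Gaussian mixture per zone (a common GMM localization recipe).
gmm_a = GaussianMixture(n_components=2, random_state=0).fit(zone_a)
gmm_b = GaussianMixture(n_components=2, random_state=0).fit(zone_b)

def locate(sample):
    """Assign the sample to the zone whose GMM gives the higher log-likelihood."""
    scores = {"zone_a": gmm_a.score_samples(sample[None, :])[0],
              "zone_b": gmm_b.score_samples(sample[None, :])[0]}
    return max(scores, key=scores.get)

print(locate(np.array([-41.0, -69.0, -79.0])))   # expected: zone_a
```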
Information Systems Frontiers | 2014
Mohammad Mehedi Hassan; M. Shamim Hossain; A. M. Sarkar; Eui-Nam Huh
Distributed resource allocation is a very important and complex problem in emerging horizontal dynamic cloud federation (HDCF) platforms, where different cloud providers (CPs) collaborate dynamically to gain economies of scale and to enlarge their virtual machine (VM) infrastructure capabilities in order to meet consumer requirements. HDCF platforms differ from the existing vertical supply chain federation (VSCF) models in terms of how the federation is established and how prices are set dynamically. There is a need to develop algorithms that can capture this complexity and easily solve the distributed VM resource allocation problem in an HDCF platform. In this paper, we propose a cooperative game-theoretic solution that is mutually beneficial to the CPs. It is shown that in a non-cooperative environment, the optimal aggregated benefit received by the CPs is not guaranteed. We study two utility-maximizing cooperative resource allocation games in an HDCF environment. We use a price-based resource allocation strategy and present both centralized and distributed algorithms to find optimal solutions to these games. Various simulations were carried out to verify the proposed algorithms. The simulation results demonstrate that the algorithms are effective, showing robust performance for resource allocation and requiring minimal computation time.
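To make the price-based allocation idea concrete, here is a minimal centralized sketch under strong assumptions (logarithmic provider utilities and a single market-clearing price); it is not the paper's algorithm, only a toy instance of the same pricing principle.

```python
# Minimal sketch of a centralized, price-based allocation: a coordinator sets
# one price so that providers' demands exactly clear the shared VM capacity.
# Utilities, weights, and capacity are invented for illustration.
import numpy as np

capacity = 100.0                       # total VMs contributed to the federation
weights = np.array([3.0, 2.0, 5.0])    # hypothetical willingness-to-pay per CP

# With utilities u_i(x) = w_i * log(x), demand at price p is x_i = w_i / p.
# Market clearing (sum_i w_i / p = capacity) gives the price in closed form.
price = weights.sum() / capacity
allocation = weights / price

print("clearing price:", price)
print("VM allocation per provider:", allocation)   # sums to capacity
```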
international conference on multimedia and expo | 2012
M. Shamim Hossain; Mohammad Mehedi Hassan; M. Al Qurishi; Abdullah Sharaf Alghamdi
Resource allocation plays an important role in service composition for a cloud-based video surveillance platform. In this platform, the utilization of computational resources is managed by accessing various services from virtual machine (VM) resources. A single service accessed from VMs running inside such a cloud platform may not meet the application demands of all surveillance users, so services need to be modeled as value-added composite services. In order to provide such a composite service to the customer, VM resources must be utilized optimally so that QoS requirements are fulfilled. To optimize the VM resource allocation, we use a linear programming approach as well as heuristics. The simulation results show that our approach outperforms existing VM allocation schemes in a cloud-based video surveillance environment in terms of cost and response time.
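A rough sketch of how a linear program could allocate VM resources for such a composite service is shown below; the cost, capacity, and demand figures are invented for illustration and do not come from the paper.

```python
# Hedged sketch: a linear-programming VM allocation, loosely mirroring the
# abstract's cost/response-time trade-off. All numbers are made up.
import numpy as np
from scipy.optimize import linprog

# x[j] = number of VMs rented of VM type j (relaxed to continuous values).
cost = np.array([0.12, 0.20, 0.35])      # $/hour per VM type
capacity = np.array([2.0, 3.5, 6.0])     # surveillance streams each VM type can serve
demand = 40.0                            # streams the composite service must handle

# Minimize total cost subject to serving all streams: capacity @ x >= demand.
res = linprog(c=cost,
              A_ub=-capacity[None, :],   # -capacity @ x <= -demand
              b_ub=[-demand],
              bounds=[(0, None)] * 3,
              method="highs")

print("VMs per type:", res.x, "hourly cost:", res.fun)
```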
Mobile Networks and Applications | 2015
M. Shamim Hossain; Ghulam Muhammad
The increasing demand for remote monitoring of patients, combined with the promising potential of cloud computing, has enabled the design and development of a number of cloud-based systems and services for healthcare. Cloud computing, together with the popularity of smart handheld devices, has inspired healthcare professionals to monitor patients' health remotely while the patient is at home. To this end, this paper proposes a cloud-assisted speech and face recognition framework for elderly health monitoring, in which handheld devices or video cameras collect speech and face images and deliver them to the cloud server for analysis and classification. In the framework, a patient's state, such as pain or tension, is recognized from his or her speech and face images. The patient state recognition system extracts local features from speech and texture descriptors from face images, and then classifies them using support vector machines. The recognized state is then sent to the remote care center, healthcare professionals, and providers for the necessary services, in order to provide seamless health monitoring. Experiments have been performed to validate the approach and to evaluate the suitability of this framework in terms of accuracy and time requirements. The results demonstrate the effectiveness of the proposed approach with regard to face and speech processing.
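For readers unfamiliar with this kind of pipeline, the following sketch shows one plausible instantiation of the face branch, pairing local binary pattern (LBP) texture descriptors with a support vector machine; the synthetic images, labels, and hyperparameters are assumptions, not the authors' setup.

```python
# Illustrative sketch only: LBP texture histograms fed to an SVM, one plausible
# reading of the abstract's face pipeline. Face crops and labels are synthetic.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face, points=8, radius=1):
    """Uniform LBP histogram as a fixed-length texture descriptor."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

rng = np.random.default_rng(1)
faces = rng.integers(0, 256, size=(60, 64, 64)).astype(np.uint8)   # fake face crops
labels = rng.integers(0, 3, size=60)                                # 0=normal, 1=pain, 2=tensed

features = np.array([lbp_histogram(f) for f in faces])
clf = SVC(kernel="rbf").fit(features, labels)
print("predicted state:", clf.predict(features[:1]))
```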
Mobile Networks and Applications | 2016
M. Shamim Hossain; Ghulam Muhammad; Mohammed F. Alhamid; Biao Song; Khalid Al-Mutib
With the advent of future-generation mobile communication technologies (5G), there is the potential to allow mobile users to access big data processing over different clouds and networks. The increasing number of mobile users comes with additional expectations for personalized services (e.g., social networking, smart home, health monitoring) at any time, from anywhere, and through any means of connectivity. Because of the expected massive amount of complex data generated by such services and networks from multiple heterogeneous sources, an infrastructure is required to recognize a user's sentiments (e.g., emotion) and behavioral patterns in order to provide a high-quality mobile user experience. To this end, this paper proposes an infrastructure that combines the potential of emotion-aware big data and cloud technology towards 5G. On top of this infrastructure, a bimodal big data emotion recognition system is proposed, where the modalities consist of speech and face video. Experimental results show that the proposed approach achieves 83.10% emotion recognition accuracy using bimodal inputs. To show the suitability and validity of the proposed approach, Hadoop-based distributed processing is used to speed up the processing for heterogeneous mobile clients.
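The Hadoop-based processing mentioned above is typically implemented in the map/reduce style sketched below; the record format and emotion labels are assumptions made for illustration, and a real Hadoop Streaming job would run the mapper and reducer as separate processes over stdin/stdout.

```python
# Rough sketch of the kind of Hadoop Streaming job the abstract alludes to:
# a mapper/reducer pair that tallies recognized emotions across clients.
# The "client_id\temotion" record format is an assumption for illustration.
from itertools import groupby

def mapper(lines):
    """Emit (emotion, 1) for every recognition record."""
    for line in lines:
        _client_id, emotion = line.rstrip("\n").split("\t")
        yield f"{emotion}\t1"

def reducer(lines):
    """Sum counts for each emotion key (input assumed sorted by key)."""
    parsed = (line.rstrip("\n").split("\t") for line in lines)
    for emotion, group in groupby(parsed, key=lambda kv: kv[0]):
        yield f"{emotion}\t{sum(int(v) for _, v in group)}"

if __name__ == "__main__":
    # Hadoop Streaming would run mapper and reducer separately; here the same
    # functions are chained locally for clarity.
    records = ["u1\thappy", "u2\tsad", "u3\thappy"]
    for out in reducer(sorted(mapper(records))):
        print(out)
```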
Computers in Human Behavior | 2014
Atif Alamri; Mohammad Mehedi Hassan; M. Anwar Hossain; Muhammad Al-Qurishi; Yousuf Aldukhayyil; M. Shamim Hossain
This paper describes the process of monitoring obese people through a cloud-based serious game that encourages them to engage in physical exercise in a playful manner. The monitoring process focuses on obtaining various health- and exercise-related parameters of obese players during game-play, such as heart rate, weight, step count, and calories burned, which contribute to their weight loss. While the players are engaged in a game session, therapists and caregivers can access their health data anytime, anywhere, and from any device to change the game complexity level and provide on-the-spot recommendations accordingly. In our study, we evaluate how the different physical activities performed through this game affect the players' cognitive behavior in terms of attention, relevance, confidence, and satisfaction. The evaluation was based on the participation of 150 obese and overweight undergraduate students who were asked to play the game and fill out a questionnaire after game-play. The analysis of their feedback showed that they were self-aware and motivated to play the game for weight loss.
IEEE Wireless Communications | 2015
Long Hu; Meikang Qiu; Jeungeun Song; M. Shamim Hossain; Ahmed Ghoneim
With the increasingly serious problem of the aging population, creating an efficient, real-time health management and feedback system based on the healthcare Internet of Things (HealthIoT) is an urgent need. Specifically, wearable technology and robotics enable a user to collect the required human signals in a comfortable way. HealthIoT is the basic infrastructure for realizing health surveillance, and it should be flexible enough to support multiple application demands and facilitate the management of the infrastructure. Therefore, inspired by software-defined networking, we put forward a smart-healthcare-oriented control method that software-defines health monitoring in order to make the network more elastic. In this article, we design a centralized controller to manage physical devices and provide an interface for data collection, transmission, and processing, so as to develop a more flexible and highly personalized health surveillance application. With these characteristics, various applications can coexist in the shared infrastructure, and each application can ask the controller to customize its own data collection, transmission, and processing as required and to pass the specific configuration to the physical devices. This article discusses the background, advantages, and design details of the proposed architecture, together with an open-ended question and a potential solution. It opens a new research direction for HealthIoT and smart homes.
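A conceptual sketch of such a centralized controller is given below; the class, policy fields, and device identifiers are invented for illustration and are not taken from the article.

```python
# Conceptual sketch of the centralized-controller idea: each application
# registers its own collection/transmission/processing policy, and the
# controller pushes that configuration to the shared physical devices.
from dataclasses import dataclass, field

@dataclass
class AppPolicy:
    name: str
    sampling_hz: int          # how often wearables should sample
    transport: str            # e.g., "wifi" or "bluetooth"
    processing: str           # e.g., "edge-filter" or "cloud-analytics"

@dataclass
class HealthIoTController:
    devices: list = field(default_factory=list)
    policies: dict = field(default_factory=dict)

    def register_device(self, device_id: str) -> None:
        self.devices.append(device_id)

    def register_app(self, policy: AppPolicy) -> None:
        """Multiple applications can coexist, each with its own policy."""
        self.policies[policy.name] = policy

    def configure(self) -> dict:
        """Map every device to the per-application configuration it must apply."""
        return {d: {n: vars(p) for n, p in self.policies.items()} for d in self.devices}

controller = HealthIoTController()
controller.register_device("wristband-01")
controller.register_app(AppPolicy("fall-detection", sampling_hz=50,
                                  transport="bluetooth", processing="edge-filter"))
print(controller.configure())
```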
Journal of Medical Systems | 2016
M. Shamim Hossain
Smart, interactive healthcare is necessary in the modern age. Several issues, such as accurate diagnosis, low-cost modeling, low-complexity design, seamless transmission, and sufficient storage, should be addressed while developing a complete healthcare framework. In this paper, we propose a patient state recognition system for such a healthcare framework. We design the system so that it provides good recognition accuracy and low-cost modeling and is scalable. The system takes two main types of input, video and audio, which are captured in a multi-sensory environment. Speech and video inputs are processed separately during feature extraction and modeling; the two input modalities are merged at the score level, where the scores are obtained from the models of the different patient states. For the experiments, 100 people were recruited to mimic a patient's normal, pain, and tensed states. The experimental results show that the proposed system can achieve an average recognition accuracy of 98.2%.
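The score-level fusion step can be illustrated with the minimal sketch below, assuming normalized per-state scores from each modality and a weighted-sum fusion rule; the scores and weights are placeholders, not values from the paper.

```python
# Minimal sketch of score-level fusion: per-state scores from the speech model
# and the video model are merged with a weighted sum, and the highest fused
# score wins. All numbers are illustrative.
import numpy as np

states = ["normal", "pain", "tensed"]

# Hypothetical normalized scores from the two modality-specific models.
speech_scores = np.array([0.20, 0.55, 0.25])
video_scores = np.array([0.30, 0.40, 0.30])

w_speech, w_video = 0.6, 0.4                      # fusion weights (assumed)
fused = w_speech * speech_scores + w_video * video_scores

print("fused scores:", dict(zip(states, fused.round(3))))
print("recognized state:", states[int(np.argmax(fused))])
```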
IEEE Transactions on Circuits and Systems for Video Technology | 2015
M. Shamim Hossain; Ghulam Muhammad; Biao Song; Mohammad Mehedi Hassan; Abdulhameed Alelaiwi; Atif Alamri
The promising potential and emerging applications of cloud gaming have drawn increasing interest from academia, industry, and the general public. However, providing a high-quality gaming experience in a cloud gaming framework is a challenging task because of the trade-off between resource consumption and player emotion, which is affected by the game screen. We tackle this problem by leveraging emotion-aware screen effects in the cloud gaming framework and combining them with remote display technology. The first stage in the framework is the learning or training stage, which establishes a relationship between screen features and emotions using Gaussian mixture model-based classifiers. In the operating stage, a linear programming model provides appropriate screen changes based on the real-time user emotion obtained in the first stage. Our experiments demonstrate the effectiveness of the proposed framework. The results show that our proposed framework can provide a high-quality gaming experience while generating an acceptable workload for the cloud server in terms of resource consumption.
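As a hedged illustration of the operating stage, the sketch below uses a small linear program to choose screen-effect intensities that maximize a predicted emotion gain under a workload budget; all coefficients are invented and the formulation is only an analogy to the paper's model.

```python
# Hedged sketch of the operating-stage idea: pick screen-effect intensities
# that maximize predicted emotional benefit while keeping cloud-server
# workload within a budget. Every coefficient here is made up.
import numpy as np
from scipy.optimize import linprog

effects = ["brightness", "saturation", "particle_fx"]
benefit = np.array([0.4, 0.3, 0.8])    # predicted emotion gain per unit of effect
workload = np.array([1.0, 0.5, 3.0])   # extra server load per unit of effect
budget = 2.5                           # allowed additional workload

# linprog minimizes, so negate the benefit vector to maximize it.
res = linprog(c=-benefit,
              A_ub=workload[None, :], b_ub=[budget],
              bounds=[(0, 1)] * 3, method="highs")

print({e: round(x, 2) for e, x in zip(effects, res.x)})
print("predicted emotion gain:", round(-res.fun, 2))
```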