
Publications


Featured research published by Moustafa AbdelBaky.


Archives of Physical Medicine and Rehabilitation | 2010

In-Home Virtual Reality Videogame Telerehabilitation in Adolescents With Hemiplegic Cerebral Palsy

Meredith R. Golomb; Brenna C. McDonald; Stuart J. Warden; Janell Yonkman; Andrew J. Saykin; Bridget Shirley; Meghan Huber; Bryan Rabin; Moustafa AbdelBaky; Michelle E. Nwosu; Monica Barkat-Masih; Grigore C. Burdea

OBJECTIVE: To investigate whether in-home, remotely monitored, virtual reality videogame-based telerehabilitation in adolescents with hemiplegic cerebral palsy can improve hand function and forearm bone health, and demonstrate alterations in motor circuitry activation.

DESIGN: A 3-month proof-of-concept pilot study.

SETTING: Virtual reality videogame-based rehabilitation systems were installed in the homes of 3 participants and networked via secure Internet connections to the collaborating engineering school and children's hospital.

PARTICIPANTS: Adolescents (N=3) with severe hemiplegic cerebral palsy.

INTERVENTION: Participants were asked to exercise the plegic hand 30 minutes a day, 5 days a week, using a sensor glove fitted to the plegic hand and attached to a remotely monitored videogame console installed in their home. Games were custom developed, focused on finger movement, and included a screen avatar of the hand.

MAIN OUTCOME MEASURES: Standardized occupational therapy assessments, remote assessment of finger range of motion (ROM) based on sensor glove readings, assessment of plegic forearm bone health with dual-energy x-ray absorptiometry (DXA) and peripheral quantitative computed tomography (pQCT), and functional magnetic resonance imaging (fMRI) of a hand grip task.

RESULTS: All 3 adolescents showed improved function of the plegic hand on occupational therapy testing, including increased ability to lift objects, and improved finger ROM based on remote measurements. The 2 adolescents who were most compliant showed improvements in radial bone mineral content and area in the plegic arm. For all 3 adolescents, fMRI during the grip task contrasting the plegic and nonplegic hand showed expanded spatial extent of activation at posttreatment relative to baseline in brain motor circuitry (eg, primary motor cortex and cerebellum).

CONCLUSIONS: Use of remotely monitored virtual reality videogame telerehabilitation appears to produce improved hand function and forearm bone health (as measured by DXA and pQCT) in adolescents with chronic disability who practice regularly. Improved hand function appears to be reflected in functional brain changes.
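
The remote ROM outcome measure above reduces to simple arithmetic over streamed joint-angle readings; a minimal illustrative sketch (the data format and all names are assumptions, not the study's actual software):

```python
# Illustrative sketch: estimating active range of motion (ROM) per finger
# from remotely collected sensor-glove joint-angle samples. The data
# format and names are assumptions, not the study's actual software.

def finger_rom(angles_deg):
    """ROM = widest flexion angle minus narrowest angle observed."""
    return max(angles_deg) - min(angles_deg)

# One session of flexion angles (degrees) per finger, as a glove might
# stream them to the remote monitoring site.
session = {
    "index":  [12.0, 25.5, 41.0, 38.2, 15.1],
    "middle": [10.3, 22.8, 35.6, 30.0, 12.4],
}

for finger, angles in session.items():
    print(f"{finger}: ROM = {finger_rom(angles):.1f} deg")
```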


Virtual Rehabilitation | 2008

PlayStation 3-based tele-rehabilitation for children with hemiplegia

Meghan Huber; Bryan Rabin; Ciprian Docan; Grigore C. Burdea; Michelle E. Nwosu; Moustafa AbdelBaky; Meredith R. Golomb

The convergence of game technology (software and hardware), the Internet, and rehabilitation science forms the second-generation virtual rehabilitation framework. Its reduced cost and patient/therapist familiarity facilitate adoption in clinical practice. This paper presents a PlayStation 3-based hand physical rehabilitation system for children with hemiplegia due to perinatal brain injury (hemiplegic cerebral palsy) or later childhood stroke. Unlike precursor systems aimed at providing hand training for post-stroke adults in a clinical setting, the experimental system described here was developed for in-home tele-rehabilitation on a game console for children and adults with chronic hemiplegia after stroke or other focal brain injury. Significant improvements in Activities of Daily Living function followed three months of training at home on the system. Clinical trials are ongoing at this time.


Computing in Science and Engineering | 2013

Cloud Paradigms and Practices for Computational and Data-Enabled Science and Engineering

Manish Parashar; Moustafa AbdelBaky; Ivan Rodero; Aditya Devarakonda

Clouds are rapidly joining high-performance computing (HPC) systems, clusters, and grids as viable platforms for scientific exploration and discovery. As a result, it is critical to understand application formulations and usage modes that are meaningful in such a hybrid infrastructure, and how application workflows can effectively utilize it. Here, we explore three hybrid HPC/grid-cloud cyberinfrastructure usage modes, HPC in the Cloud, HPC plus Cloud, and HPC as a Service, presenting illustrative scenarios in each case and outlining benefits, limitations, and research challenges.


Virtual Rehabilitation International Conference | 2009

Eleven months of home virtual reality telerehabilitation - Lessons learned

Meredith R. Golomb; Monica Barkat-Masih; Bryan Rabin; Moustafa AbdelBaky; Meghan Huber; Grigore C. Burdea

Indiana University School of Medicine and the Rutgers Tele-rehabilitation Institute have collaborated for over a year on a clinical pilot study of in-home hand telerehabilitation. Virtual reality videogames were used to train three adolescents with hemiplegic cerebral palsy. Training duration varied between 6 and 11 months. The investigators summarize medical, technological, legal, safety, social, and economic issues that arose during this lengthy study, and propose solutions to deal with this multitude of issues. The authors stress the importance of choosing multiple outcome measures to detect clinically meaningful change, and believe that in-home telerehabilitation is the future of rehabilitation.


IEEE/ACM International Conference on Utility and Cloud Computing | 2015

Docker containers across multiple clouds and data centers

Moustafa AbdelBaky; Javier Diaz-Montes; Manish Parashar; Merve Unuvar; Malgorzata Steinder

Emerging lightweight cloud technologies, such as Docker containers, are gaining wide traction in IT because they allow users to deploy applications in any environment faster and more efficiently than with virtual machines. However, current Docker-based container deployment solutions are aimed at managing containers at a single site, which limits their capabilities. As more users look to adopt Docker containers in dynamic, heterogeneous environments, the ability to deploy and effectively manage containers across multiple clouds and data centers becomes of utmost importance. In this paper, we propose a prototype framework, called C-Ports, that enables the deployment and management of Docker containers across multiple hybrid clouds and traditional clusters while taking into consideration user and resource provider objectives and constraints. The framework leverages a constraint-programming model for resource selection and uses CometCloud to allocate/deallocate resources as well as to deploy containers on top of these resources. Our prototype has been effectively used to deploy and manage containers in a dynamic federation composed of five clouds and two clusters.
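
A minimal sketch of the constraint-based resource-selection step described above, assuming illustrative site attributes and a cost-minimizing objective (this is not the C-Ports code itself):

```python
# Minimal sketch (not the C-Ports code): selecting a deployment site for
# a container by filtering candidate sites against user and provider
# constraints, then minimizing cost. All names and attributes are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    kind: str            # "cloud" or "cluster"
    cpus_free: int
    cost_per_hour: float
    supports_docker: bool

@dataclass
class Request:
    cpus: int
    max_cost: float
    allowed_kinds: tuple

def select_site(sites, req):
    """Return the cheapest site that satisfies every constraint, or None."""
    feasible = [s for s in sites
                if s.supports_docker
                and s.kind in req.allowed_kinds
                and s.cpus_free >= req.cpus
                and s.cost_per_hour <= req.max_cost]
    return min(feasible, key=lambda s: s.cost_per_hour, default=None)

sites = [
    Site("campus-cluster", "cluster", 16, 0.00, True),
    Site("cloud-a",        "cloud",   64, 0.12, True),
    Site("cloud-b",        "cloud",    8, 0.09, True),
]
print(select_site(sites, Request(cpus=12, max_cost=0.20,
                                 allowed_kinds=("cloud", "cluster"))))
```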


International Conference on Cloud Computing | 2012

Accelerating MapReduce Analytics Using CometCloud

Moustafa AbdelBaky; Hyunjoo Kim; Ivan Rodero; Manish Parashar

MapReduce-Hadoop has emerged as an effective framework for large-scale data analytics, providing support for executing jobs and storing data in a parallel and distributed manner. MapReduce has been shown to perform very well in large datacenters running applications where the data can be effectively divided into homogeneous chunks running across homogeneous hardware. However, the performance of MapReduce-Hadoop is far from ideal when the hardware, the datasets, or both are heterogeneous. Such heterogeneity is unavoidable in many academic computing environments that use multiple generations of hardware and share resources among users. Heterogeneity is also unavoidable in scientific applications that process a varying number of datasets of different sizes. In these cases, the performance of MapReduce-Hadoop can be a concern. In this paper, we implement MapReduce on top of CometCloud to address the issue of heterogeneity and support application classes that involve irregular datasets (e.g., large numbers of small data files or datasets of varying sizes). Furthermore, we develop an autonomic manager that can schedule MapReduce tasks based on user objectives, provision resources accordingly, and support on-demand scale-up and cloudbursts. These resources can be selected from a hybrid infrastructure comprising local clusters, data centers, and public clouds. The performance of the developed solution is verified using a protein data mining application operating on data from the Protein Data Bank. The application is deployed, based on deadline and budget constraints, on a cluster at Rutgers and/or Amazon EC2 resources. The experimental results show that the MapReduce-CometCloud framework can effectively support applications operating on large numbers of small data files in a heterogeneous and distributed environment, and satisfy user objectives autonomically using cloudbursts.
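
The deadline- and budget-driven provisioning decision can be illustrated with a back-of-the-envelope sketch; all parameter names and numbers here are assumptions, not the paper's implementation:

```python
# Illustrative sketch (not the paper's implementation): an autonomic
# manager checks whether the remaining MapReduce tasks can finish on
# the local cluster before the deadline and, if not, "cloudbursts"
# extra workers onto public-cloud resources within a budget.
# All names and numbers are made up.

def plan(tasks_left, sec_per_task, local_workers, deadline_sec,
         cloud_cost_per_hour, budget):
    local_time = tasks_left * sec_per_task / local_workers
    if local_time <= deadline_sec:
        return {"cloud_workers": 0, "est_sec": local_time, "cost": 0.0}
    # Total workers needed to meet the deadline, rounded up.
    needed = int(-(-tasks_left * sec_per_task // deadline_sec))
    extra = needed - local_workers
    # Rough cost if the burst workers run until the deadline.
    cost = extra * (deadline_sec / 3600) * cloud_cost_per_hour
    if cost > budget:
        return None  # the objective cannot be met within budget
    est = tasks_left * sec_per_task / (local_workers + extra)
    return {"cloud_workers": extra, "est_sec": est, "cost": cost}

print(plan(tasks_left=5000, sec_per_task=2.0, local_workers=8,
           deadline_sec=600, cloud_cost_per_hour=0.10, budget=5.0))
```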


International Journal of High Performance Computing Applications | 2018

Software-defined environments for science and engineering

Moustafa AbdelBaky; Javier Diaz-Montes; Manish Parashar

Service-based access models coupled with recent advances in application deployment technologies are enabling opportunities for realizing highly customized software-defined environments that can achieve new levels of efficiencies and can support emerging dynamic and data-driven applications. However, achieving this vision requires new models that can support dynamic (and opportunistic) compositions of infrastructure services, which can adapt to evolving application needs and the state of resources. In this article, we present a programmable dynamic infrastructure service composition approach that uses software-defined environment concepts to control the composition process. The resulting software-defined infrastructure service composition adapts to meet objectives and constraints set by the users, applications, and/or resource providers. We present and compare two different approaches for programming resources and controlling the service composition, one that is based on a rule engine and another that leverages a constraint programming model for resource description. We present the design and prototype implementation of such a software-defined service composition and demonstrate its operation through a use case where multiple views of heterogeneous, geographically distributed services are aggregated on demand based on user and resource provider specifications. The resulting compositions are used to run different bioinformatics workloads, which are encapsulated inside Docker containers. Each view independently adapts to various constraints and events that are imposed on the system while minimizing the workload completion time.
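
The rule-engine style of controlling a composition, one of the two approaches compared above, can be caricatured in a few lines. A hypothetical sketch in which each rule pairs a condition on the current resource view with an action that edits the composition (names and thresholds are illustrative):

```python
# Hypothetical sketch of the rule-engine style: each rule pairs a
# condition on the current resource view with an action that edits the
# service composition. Names and thresholds are illustrative, not the
# article's actual rule language.

def overloaded(view):
    return view["queue_len"] > 100

def add_burst_worker(view):
    view["services"].append("cloud-burst-worker")

def idle(view):
    return view["queue_len"] == 0 and "cloud-burst-worker" in view["services"]

def drop_burst_worker(view):
    view["services"].remove("cloud-burst-worker")

RULES = [(overloaded, add_burst_worker), (idle, drop_burst_worker)]

def step(view):
    """Fire every rule whose condition holds on the current view."""
    for condition, action in RULES:
        if condition(view):
            action(view)
    return view

print(step({"queue_len": 250, "services": ["campus-cluster"]}))
```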


International Conference on Cloud Computing | 2015

Realizing the Potential of IoT Using Software-Defined Ecosystems

Manish Parashar; Moustafa AbdelBaky; Mengsong Zou; Ali Reza Zamani; Javier Diaz-Montes

Pervasive computational ecosystems that combine data sources and computing/communication resources in self-managed environments, such as the ones powered by Internet of Things (IoT) devices, have the potential to automate and facilitate many aspects of our lives, and impact a variety of applications, from the management of extreme events to the optimization of everyday processes. However, this vision remains mostly unrealized despite the fact that the technology to achieve it exists, largely because of the gap between our ability to collect data and our ability to gain insight from it. In this paper, we discuss the challenges associated with providing a pervasive computational ecosystem. We then present our vision of how to best support data-driven computational ecosystems and propose a conceptual architecture that leverages ideas from software-defined environments in order to combine data, computing, and communication resources. In addition, we show how this proposed architecture enables the execution of data-driven workflows on top of these resources.


Proceedings of the 2nd International Workshop on Software-Defined Ecosystems | 2015

A Framework for Realizing Software-Defined Federations for Scientific Workflows

Moustafa AbdelBaky; Javier Diaz-Montes; Mengsong Zou; Manish Parashar

Federated computing has been shown to be an effective model for harnessing the capabilities and capacities of geographically distributed resources in order to solve large science and engineering problems. However, traditional High Performance Computing (HPC) based federation models can be restrictive, as they present users with a pre-defined set of resources and do not allow federations to evolve in response to changing resources or application needs. As emerging application workflows and the underlying resources become increasingly dynamic and exhibit changing requirements and constraints, they cannot be easily supported by such federation models. Instead, new federation models that are capable of dynamically adapting to these emerging needs are required. In this paper, we present a programmable dynamic federation model that uses software-defined environment concepts to drive the federation process and seamlessly adapt resource compositions at runtime. The resulting software-defined federation adapts to meet requirements and constraints set by the user, application, and/or resource providers. We present the design and prototype implementation of such a software-defined federation model, and demonstrate its operation and performance through a use case where heterogeneous, geographically distributed resources are federated based on user specifications, and the federation evolves over time following the requirements and constraints defined by the user.
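
A minimal sketch of the runtime adaptation idea, assuming made-up site attributes and a single cost constraint (this is not the paper's framework):

```python
# Minimal sketch, not the paper's framework: federation membership is
# re-evaluated against the active constraints each cycle, so the
# resource composition can evolve at runtime as resource state changes.
# Site attributes and the cost constraint are assumptions.

sites = {
    "campus":  {"up": True,  "cost": 0.00, "cores": 16},
    "cloud-a": {"up": True,  "cost": 0.12, "cores": 64},
    "cloud-b": {"up": False, "cost": 0.09, "cores": 32},
}

def refresh(constraint):
    """Admit every site that is up and satisfies the constraint."""
    return {name for name, attrs in sites.items()
            if attrs["up"] and constraint(attrs)}

cost_cap = lambda attrs: attrs["cost"] <= 0.10
print(refresh(cost_cap))        # {'campus'}
sites["cloud-b"]["up"] = True   # resource state changes at runtime...
print(refresh(cost_cap))        # ...and the federation grows to include cloud-b
```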


International Conference on Conceptual Structures | 2016

Kepler + CometCloud

Jianwu Wang; Moustafa AbdelBaky; Javier Diaz-Montes; Shweta Purawat; Manish Parashar; Ilkay Altintas

The widespread availability and variety of cloud offerings and their associated access models have grown drastically over the past few years. It is now common for users to have access to multiple infrastructures (e.g., campus clusters, cloud resources); however, deploying complex application workflows on top of these resources remains a challenge. In this paper we propose an approach that allows users to build and run scientific workflows on top of a federation of multiple clouds and traditional resources (e.g., clusters). We achieve this by integrating the Kepler scientific workflow platform with the CometCloud framework. This allows us to: 1) dynamically and programmatically provision and aggregate resources, 2) easily compose complex workflows, and 3) dynamically schedule and execute these workflows based on provenance and overall objectives on the resulting federation of resources. We demonstrate our approach and evaluate its capabilities by running a bioinformatics workflow on top of a federation composed of a campus cluster and two clouds.
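
A toy sketch of the integration pattern described above, a workflow whose steps run in dependency order and are each dispatched to a scheduler-chosen federated resource; all names are illustrative, not Kepler or CometCloud APIs:

```python
# Toy sketch, not the Kepler/CometCloud integration itself: a workflow
# whose steps run in dependency order, each dispatched to a resource
# chosen by a pluggable scheduler. All names are made up.

WORKFLOW = {
    "align":    {"deps": [],         "cpu": 4},
    "filter":   {"deps": ["align"],  "cpu": 1},
    "annotate": {"deps": ["filter"], "cpu": 8},
}

RESOURCES = {"campus-cluster": 8, "cloud-a": 64}  # free cores per resource

def schedule(step):
    """Pick the smallest resource that fits the step's CPU demand."""
    fits = [(cores, name) for name, cores in RESOURCES.items()
            if cores >= WORKFLOW[step]["cpu"]]
    return min(fits)[1] if fits else None

def run(workflow):
    done = []
    while len(done) < len(workflow):
        for step, spec in workflow.items():
            if step not in done and all(d in done for d in spec["deps"]):
                print(f"{step} -> {schedule(step)}")
                done.append(step)
    return done

run(WORKFLOW)
```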

Collaboration


Dive into Moustafa AbdelBaky's collaborations.

Top Co-Authors

Grigore C. Burdea

Rutgers University


Meredith R. Golomb

Indiana University – Purdue University Indianapolis
