Ghalem Belalem
University of Oran
Publications
Featured research published by Ghalem Belalem.
International Conference on Information Computing and Applications | 2010
Ghalem Belalem; Fatima Zohra Tayeb; Wieme Zaoui
In cloud computing, service availability and performance are two significant aspects that must be addressed: if they are neglected, the services of a cloud can degrade or even stop. Users expect cloud computing to deliver elastic computing services on the basis of their needs. This paper aims at improving the operation of cloud computing services: a cloud service must remain available in all situations and must answer user requests with a reduced response time. To meet this aim, we propose two approaches that improve the availability of datacenters without degrading the response times observed by users. The first uses the principle of availability messages and the second uses the principle of advance reservation.
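As an illustration of the second approach, the sketch below models an advance reservation scheme over fixed time slots; the class names, slot granularity, and fallback policy are our own assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Datacenter:
    """A datacenter whose capacity can be reserved in advance per time slot."""
    capacity: int                                      # requests per time slot
    reservations: dict = field(default_factory=dict)   # slot -> booked count

    def reserve(self, slot: int) -> bool:
        """Book one unit of capacity in `slot`; fail if the slot is full."""
        booked = self.reservations.get(slot, 0)
        if booked >= self.capacity:
            return False
        self.reservations[slot] = booked + 1
        return True

def reserve_on_first_available(datacenters, slot):
    """Try each datacenter in turn, falling back if one is fully booked."""
    for dc in datacenters:
        if dc.reserve(slot):
            return dc
    return None  # no availability: the request must wait or be rejected

# Example: two small datacenters, three requests for the same slot.
dcs = [Datacenter(capacity=1), Datacenter(capacity=1)]
print([reserve_on_first_available(dcs, slot=0) is not None
       for _ in range(3)])   # [True, True, False]
```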
International Journal of Web and Grid Services | 2007
Ghalem Belalem; Yahya Slimani
Data grids are a current solution to the needs of large-scale systems. They provide a set of various resources that are geographically distributed. A data grid allows quick and efficient access to data, enhances availability, and provides fault tolerance. In large-scale systems, these advantages are possible only through replication. Replication, however, poses the problem of maintaining consistency among replicas of the same data set. In this paper, we propose a hybrid method for consistency maintenance that combines the pessimistic and optimistic approaches. Our approach is based on a two-level hierarchical representation model, which facilitates the management of replica consistency in large-scale systems.
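A minimal sketch of how such a two-level hybrid might behave, assuming (our choice, not the paper's protocol) optimistic, lazy propagation inside a local cluster and pessimistic, eager agreement among cluster heads:

```python
class Replica:
    def __init__(self, name):
        self.name, self.value, self.version = name, None, 0

    def apply(self, value, version):
        if version > self.version:          # last-writer-wins reconciliation
            self.value, self.version = value, version

class Cluster:
    """Local level: updates spread optimistically (lazily) inside a cluster."""
    def __init__(self, replicas):
        self.replicas = replicas
        self.pending = []                   # updates not yet pushed locally

    def sync_local(self):                   # lazy anti-entropy round
        for value, version in self.pending:
            for r in self.replicas:
                r.apply(value, version)
        self.pending.clear()

def global_write(clusters, value, version):
    """Global level: pessimistic, every cluster head is updated synchronously."""
    for c in clusters:
        c.replicas[0].apply(value, version)  # head replica, eagerly
        c.pending.append((value, version))   # members catch up lazily

c1 = Cluster([Replica("a"), Replica("b")])
c2 = Cluster([Replica("c")])
global_write([c1, c2], value="x", version=1)  # both heads agree at once
c1.sync_local()                               # 'b' catches up lazily
print(c1.replicas[1].value)                   # 'x'
```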
Journal of Computers | 2011
Ghalem Belalem; Samah Bouamama; Larbi Sekhri
In cloud computing, the availability and performance of services are two important aspects to improve, because users require a certain level of quality of service, in terms of the timeliness of their tasks, at a lower cost. Several studies have addressed this problem with algorithms borrowed from economic models of the real-world economy to guarantee that quality of service. Our work extends and enriches the CloudSim simulator with auction algorithms inherited from the GridSim simulator. Because these algorithms do not support virtualization, which is an essential part of cloud computing, we introduced several parameters and functions adapted to the cloud computing environment so that users can meet their requirements.
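The toy first-price sealed-bid auction below illustrates the general flavour of auction-based allocation; it is only a sketch, and the bid structure and slot model are our assumptions rather than the CloudSim/GridSim code the paper extends.

```python
def first_price_auction(vm_slots, bids):
    """Allocate `vm_slots` identical VM slots to the highest bidders.

    bids: dict user -> offered price per slot.
    Returns dict user -> price paid (first-price: winners pay their own bid).
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = ranked[:vm_slots]
    return {user: price for user, price in winners}

print(first_price_auction(2, {"u1": 0.9, "u2": 1.4, "u3": 1.1}))
# {'u2': 1.4, 'u3': 1.1}
```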
Multimedia and Ubiquitous Engineering | 2007
Ghalem Belalem; Yahya Slimani
One of the principal motivations for using computing grids and data grids comes from applications that use large data sets, for example in high-energy physics or the life sciences. To improve the overall throughput of the software environments that run these applications on grids, data replicas are placed on various selected sites. In the grid field, most data replication and job scheduling strategies have been evaluated by simulation, and several grid simulators have been developed; one of the most interesting for our study is OptorSim. In this paper, we present an extension of the OptorSim simulator with a replica consistency management module for data grids. This extension implements a hybrid consistency approach inspired by both the pessimistic and optimistic approaches. The proposed approach has two goals: first, it reduces response times compared with a fully pessimistic approach; second, it delivers a better quality of service than the optimistic approach.
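One way to picture the pessimistic/optimistic trade-off the extension targets is a divergence bound: a replica serves reads optimistically until it lags the primary by more than a tolerated number of versions, at which point a blocking (pessimistic) synchronization is forced. The threshold and the structure below are illustrative assumptions, not OptorSim's API.

```python
class BoundedReplica:
    """Optimistic replica that forces a sync once it lags too far behind."""
    MAX_LAG = 3   # tolerated version divergence (illustrative value)

    def __init__(self):
        self.version = 0

    def maybe_sync(self, primary_version, fetch):
        lag = primary_version - self.version
        if lag > self.MAX_LAG:          # too stale: behave pessimistically
            self.version = fetch()      # blocking synchronization
        return self.version

primary = 10
replica = BoundedReplica()
replica.maybe_sync(primary, fetch=lambda: primary)
print(replica.version)  # 10: the lag (10 > 3) triggered a forced sync
```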
Journal of Information Processing Systems | 2012
Bakhta Meroufel; Ghalem Belalem
The data grid provides geographically distributed resources for large-scale applications and generates large sets of data. Replicating this data on several sites of the grid is an effective way to achieve good performance. In this paper we propose a dynamic replication approach for a hierarchical grid that takes crash failures in the system into account. The replication decision is based on two parameters: the availability and the popularity of the data. The administrator requires a minimum rate of availability for each piece of data according to its access history in previous periods, but this availability may increase if demand for the data is high. We also propose a strategy that keeps the desired availability satisfied even in the case of a failure or the rarity (non-popularity) of the data. The simulation results show the effectiveness of our replication strategy in terms of response time, unavailability of requests, and availability.
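A sketch of the replication decision as the abstract describes it: replicate until the measured availability reaches the required minimum, and raise that minimum when the data is popular. The thresholds and the availability formula (one minus the product of independent replica failure probabilities) are illustrative assumptions.

```python
from math import prod

def availability(replica_failure_probs):
    """Data is available unless every replica has failed (independent crashes)."""
    return 1.0 - prod(replica_failure_probs)

def replicas_needed(min_availability, popularity, node_failure_prob,
                    popularity_boost=0.05, popular_threshold=100):
    """How many replicas are required to honour `min_availability`,
    with a stricter target when access counts mark the data as popular."""
    target = min_availability
    if popularity >= popular_threshold:       # high demand: stricter target
        target = min(0.999, target + popularity_boost)
    n = 1
    while availability([node_failure_prob] * n) < target:
        n += 1
    return n

# A 10%-failure-prone grid: popular data needs one replica more.
print(replicas_needed(0.99, popularity=10,  node_failure_prob=0.1))  # 2
print(replicas_needed(0.99, popularity=500, node_failure_prob=0.1))  # 3
```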
International Journal of Ambient Computing and Intelligence | 2017
Houcine Matallah; Ghalem Belalem; Karim Bouamrane
The technological revolution integrating multiple information sources, and the extension of computer science into different sectors, led to an explosion of data quantities, reflected in the scaling of volumes, numbers, and types. These massive increases have resulted in the development of new techniques for locating and accessing data. The latest steps in this evolution have produced new technologies: Cloud and Big Data. The reference implementation of cloud and Big Data storage is incontestably the Hadoop Distributed File System (HDFS). The latter is based on the separation of metadata from data, which consists in the centralization and isolation of the metadata on storage servers. In this paper, the authors propose an approach to improve the metadata service of Hadoop, maintaining consistency without much compromise to metadata performance and scalability, by suggesting a mixed solution between centralization and distribution of metadata to enhance the performance and scalability of the model.
Keywords: Big Data, Clouds of Storage, Hadoop, HDFS, MapReduce, Metadata
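A minimal sketch of one mixed design, assuming (our construction, not the paper's) a namespace hash-partitioned across several metadata servers, with directory entries kept together so a lookup touches a single server:

```python
import hashlib

class MetadataCluster:
    """Paths are partitioned by hash across metadata servers, so no single
    server holds the whole namespace, yet routing stays deterministic."""
    def __init__(self, n_servers):
        self.servers = [dict() for _ in range(n_servers)]   # path -> metadata

    def _server_for(self, path):
        # Hash the parent directory so all entries of one directory stay
        # together on one metadata server.
        parent = path.rsplit("/", 1)[0] or "/"
        h = int(hashlib.md5(parent.encode()).hexdigest(), 16)
        return self.servers[h % len(self.servers)]

    def put(self, path, meta):
        self._server_for(path)[path] = meta

    def get(self, path):
        return self._server_for(path).get(path)

cluster = MetadataCluster(n_servers=4)
cluster.put("/logs/2014/app.log", {"size": 4096, "replicas": 3})
print(cluster.get("/logs/2014/app.log"))   # {'size': 4096, 'replicas': 3}
```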
International Journal of Intelligent Information Technologies | 2016
Ghalem Belalem; Ahmed Abbache; Fatma Zohra Belkredim; Farid Meziane
Query expansion is the process of adding relevant terms to an original query to improve the performance of information retrieval systems. However, previous studies showed that automatic query expansion using WordNet does not lead to an improvement in performance. One of the main challenges of query expansion is the selection of appropriate terms. In this paper, the authors revisit this problem using Arabic WordNet and association rules within the context of the Arabic language. The results obtained confirm that, with an appropriate selection method, Arabic WordNet can be exploited to improve retrieval performance. Empirical results on a sub-corpus of the Xinhua collection show that the automatic selection method achieves a significant improvement in terms of MAP and recall, and a better precision on the first top-retrieved documents.
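To make the selection step concrete, the sketch below filters candidate expansion terms (e.g., WordNet synonyms) by the confidence of the association rule "query term -> candidate" mined from document co-occurrence; the tiny corpus, the candidates, and the 0.3 threshold are invented for illustration.

```python
def rule_confidence(term, candidate, documents):
    """confidence(term -> candidate) = P(candidate | term) over documents."""
    with_term = [d for d in documents if term in d]
    if not with_term:
        return 0.0
    return sum(candidate in d for d in with_term) / len(with_term)

def expand_query(query_term, candidates, documents, min_conf=0.3):
    """Keep only candidates strongly associated with the original term."""
    return [c for c in candidates
            if rule_confidence(query_term, c, documents) >= min_conf]

docs = [{"car", "vehicle", "road"}, {"car", "vehicle"}, {"car", "engine"},
        {"train", "road"}]
print(expand_query("car", ["vehicle", "road", "train"], docs))
# ['vehicle', 'road'] -- 'train' never co-occurs with 'car'
```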
IEEE International Conference on High Performance Computing, Data and Analytics | 2014
Said Limam; Ghalem Belalem
Cloud computing has become a significant technology and a great solution for providing flexible, on-demand, and dynamically scalable computing infrastructure for many applications; it also represents a significant technology trend. With cloud computing, users employ a variety of devices to access programs, storage, and application-development platforms over the Internet, via services offered by cloud providers. The probability that a failure occurs during execution grows as the number of nodes increases; since it is impossible to fully prevent failures, one solution is to implement fault tolerance mechanisms. Fault tolerance has become a major task for computer engineers and software developers because the occurrence of faults increases the cost of using resources. In this paper, the authors propose an approach that combines migration with a checkpoint mechanism. The checkpoint mechanism minimizes the time lost and reduces the effect of failures on application execution, while the migration mechanism guarantees the continuity of application execution and avoids losses due to hardware failure in a transparent and efficient way. The simulation results show the effectiveness of our approach to fault tolerance in terms of execution time and the masking of failure effects.
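A toy simulation of the combination the authors describe: periodic checkpoints bound the work that must be redone, and on a host failure the task migrates to another host and resumes from its last checkpoint. The interval, hosts, and failure injection are illustrative assumptions, not the paper's experimental setup.

```python
def run_with_checkpoints(total_steps, interval, hosts, fails_at=None):
    """Execute `total_steps`; checkpoint every `interval` steps; on a failure,
    migrate to the next host and resume from the last checkpoint."""
    fails_at = fails_at or {}                  # host -> step at which it crashes
    checkpoint, host_idx, cost, step = 0, 0, 0, 0
    while step < total_steps:
        host = hosts[host_idx]
        if fails_at.get(host) == step:             # crash: lose work since ckpt
            host_idx = (host_idx + 1) % len(hosts)  # migrate elsewhere
            step = checkpoint                       # resume, not restart
            continue
        step += 1
        cost += 1                                   # one step of real work
        if step % interval == 0:
            checkpoint = step                       # persist progress
    return cost

# 100 steps, checkpoints every 10; host A dies at step 37.
print(run_with_checkpoints(100, 10, ["A", "B"], fails_at={"A": 37}))
# 107: only the 7 steps since the last checkpoint are recomputed.
```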
Journal of High Speed Networks | 2014
Bakhta Meroufel; Ghalem Belalem
Fault tolerance is an essential issue in cloud computing for facing failures and minimizing their damage. Checkpointing is a powerful fault tolerance technique that consists of saving the transient state of a computation on persistent storage, from which execution can be restarted in case of failure. Coordinated checkpointing is an efficient checkpointing strategy because it is domino-effect free and needs only the last stored checkpoint to ensure a consistent state. In this paper we propose a lightweight coordinated checkpointing scheme for cloud computing that reduces the overhead of classical coordinated checkpointing by minimizing the number of participating virtual machines (VMs) in each checkpointing interval. The experimental results show that our proposal reduces the overhead and improves system performance.
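The key saving is sketched below: instead of snapshotting every VM, only VMs that exchanged messages since the previous checkpoint need to participate, since the saved state of the others is still consistent. The message-log representation is our simplification, not the paper's protocol.

```python
def vms_to_checkpoint(all_vms, messages_since_last):
    """Classical coordination snapshots `all_vms`; the lightweight variant
    snapshots only VMs involved in inter-VM traffic since the last
    checkpoint."""
    active = set()
    for sender, receiver in messages_since_last:
        active.add(sender)
        active.add(receiver)
    return [vm for vm in all_vms if vm in active]

vms = ["vm1", "vm2", "vm3", "vm4", "vm5"]
traffic = [("vm1", "vm2"), ("vm2", "vm3")]
print(vms_to_checkpoint(vms, traffic))   # ['vm1', 'vm2', 'vm3']: 3 of 5 VMs
```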
International Conference on Algorithms and Architectures for Parallel Processing | 2013
Esma Insaf Djebbar; Ghalem Belalem
Cloud computing systems are in the process of becoming an important platform for scientific applications. Optimizing data placement and task scheduling in a heterogeneous environment such as the cloud is a difficult problem. Scheduling and data placement approaches are often highly correlated, yet most existing ones take only a few factors into account at a time and are tailored to applications with medium-sized data, so they do not scale. The objective of this work is to propose an optimization approach that combines effective data placement with task scheduling through replication in cloud environments.
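A sketch of the correlation the abstract points at: schedule each task on the node that already holds most of its input data, and replicate only the missing blocks there. The greedy scoring is our illustrative assumption, not the paper's algorithm.

```python
def schedule(task_inputs, node_contents):
    """Pick the node holding the most input blocks; return it together with
    the blocks that must still be replicated to it."""
    best_node, best_hits = None, -1
    for node, blocks in node_contents.items():
        hits = len(task_inputs & blocks)
        if hits > best_hits:
            best_node, best_hits = node, hits
    missing = task_inputs - node_contents[best_node]
    node_contents[best_node] |= missing        # replicate the missing blocks
    return best_node, missing

nodes = {"n1": {"b1", "b2"}, "n2": {"b3"}}
print(schedule({"b1", "b2", "b3"}, nodes))     # ('n1', {'b3'})
```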