Simone A. Ludwig
North Dakota State University
Publications
Featured research published by Simone A. Ludwig.
Grid Computing | 2011
Simone A. Ludwig; Azin Moallem
With the rapid growth of data and computational needs, distributed systems and computational Grids are gaining more and more attention. The amount of computation a Grid can complete in a given time exceeds what even the best supercomputers can deliver. However, Grid performance can still be improved by ensuring that all resources available in the Grid are utilized optimally through a good load balancing algorithm. This research proposes two new distributed, swarm-intelligence-inspired load balancing algorithms: one based on ant colony optimization and the other based on particle swarm optimization. A simulation of the proposed approaches is conducted using a Grid simulation toolkit (GridSim). The performance of the algorithms is evaluated using criteria such as makespan and load balancing level, and the proposed approaches are compared with a classical approach called the State Broadcast Algorithm and two random approaches. Experimental results show that the proposed algorithms perform very well in a Grid environment; in particular, the particle swarm optimization approach yields better performance than the ant colony approach in many scenarios.
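A minimal sketch of how particle swarm optimization can be applied to task-to-node assignment with makespan as the fitness criterion. The continuous encoding, parameter values, and node/task model are illustrative assumptions, not the paper's exact GridSim formulation.

```python
# Illustrative sketch (not the paper's implementation): PSO for assigning tasks
# to Grid nodes so that the makespan (completion time of the slowest node) is minimized.
import random

def makespan(assignment, task_lengths, node_speeds):
    """Completion time of the most loaded node under a given assignment."""
    loads = [0.0] * len(node_speeds)
    for task, node in enumerate(assignment):
        loads[node] += task_lengths[task] / node_speeds[node]
    return max(loads)

def decode(position, n_nodes):
    """Map a continuous position vector to one node index per task."""
    return [min(int(x), n_nodes - 1) for x in position]

def pso_schedule(task_lengths, node_speeds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    n_tasks, n_nodes = len(task_lengths), len(node_speeds)
    pos = [[random.uniform(0, n_nodes) for _ in range(n_tasks)] for _ in range(n_particles)]
    vel = [[0.0] * n_tasks for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [makespan(decode(p, n_nodes), task_lengths, node_speeds) for p in pos]
    g = pbest_fit.index(min(pbest_fit))
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_tasks):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), n_nodes - 1e-9)
            fit = makespan(decode(pos[i], n_nodes), task_lengths, node_speeds)
            if fit < pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], fit
                if fit < gbest_fit:
                    gbest, gbest_fit = pos[i][:], fit
    return decode(gbest, n_nodes), gbest_fit

if __name__ == "__main__":
    random.seed(1)
    tasks = [random.uniform(10, 100) for _ in range(30)]   # task lengths
    nodes = [1.0, 2.0, 4.0]                                # relative node speeds
    plan, span = pso_schedule(tasks, nodes)
    print("best makespan found:", round(span, 2))
```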
European Conference on Web Services | 2005
Ali Shaikh Ali; Simone A. Ludwig; Omer Farooq Rana
Provision of services within a virtual framework for resource sharing across institutional boundaries has become an active research area. Many such services encode access to computational and data resources, ranging from single machines to computational clusters. Such services can also be informational and integrate different resources within an institution. Consequently, we envision a service-rich environment in the future, where service consumers are represented by intelligent agents. If interaction between agents is automated, these agents must be able to automatically discover services and choose between a set of equivalent (or similar) services. In such a scenario, trust serves as a benchmark to differentiate between services. In this paper we introduce a novel framework for automated discovery and selection of Web Services based on a user's trust policy. The framework is validated by a case study of data mining Web Services and is evaluated by an empirical experiment.
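A small sketch of the idea of selecting among equivalent services using a trust score derived from past interactions. The recency-weighted scoring rule, the threshold acting as the "trust policy", and the service names are illustrative assumptions, not the framework described in the paper.

```python
# Illustrative sketch: ranking equivalent Web Services by a simple trust score.
def trust_score(ratings, decay=0.9):
    """Recency-weighted mean of past interaction ratings in [0, 1]."""
    if not ratings:
        return 0.5  # neutral prior when there is no interaction history
    weights = [decay ** age for age in range(len(ratings) - 1, -1, -1)]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

def select_service(candidates, threshold=0.6):
    """Pick the most trusted candidate that satisfies the user's trust threshold."""
    scored = [(name, trust_score(hist)) for name, hist in candidates.items()]
    eligible = [s for s in scored if s[1] >= threshold]
    return max(eligible, key=lambda s: s[1]) if eligible else None

services = {
    "miner-A": [0.9, 0.8, 1.0, 0.7],   # newest rating last
    "miner-B": [0.4, 0.5, 0.6],
    "miner-C": [],
}
print(select_service(services))
```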
Journal of Web Semantics | 2006
Simone A. Ludwig; S. M. S. Reyhani
The fundamental problem that the Grid research and development community is seeking to solve is how to coordinate distributed resources amongst a dynamic set of individuals and organisations in order to achieve a common collaborative goal. The problem arises from the heterogeneity, distribution and sharing of resources across different virtual organisations, and interoperability is a key issue for applications that are to function within the Grid. This paper proposes a matchmaking framework for service discovery in Grid environments based on three selection stages: context, semantic and registry selection. It improves the service discovery process by using semantic descriptions stored in ontologies that specify both the Grid services and the application knowledge. The framework permits Grid applications to specify the criteria against which a service request is matched and enables interoperability in the matchmaking process. A proof of concept is provided by a prototype implementation, and the matchmaking process is enhanced with a similarity metric that quantifies the quality of a match. A qualitative and quantitative evaluation of the prototype system is given, with an analysis and performance measurements to quantify its scalability.
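One common way to quantify match quality, sketched below, is a weighted overlap between the attributes of a service request and a service advertisement. The attributes, weights, and scoring rule here are assumptions for illustration; the paper's similarity metric is defined over its own ontology-based descriptions.

```python
# Illustrative sketch: a match-quality score between a service request and an
# advertisement, computed as a weighted overlap of their attributes.
def match_quality(request, advert, weights):
    """Returns a score in [0, 1]; 1.0 means every weighted attribute matches."""
    total = sum(weights.values())
    matched = sum(w for attr, w in weights.items()
                  if request.get(attr) is not None
                  and request.get(attr) == advert.get(attr))
    return matched / total if total else 0.0

request = {"input": "Matrix", "output": "Eigenvalues", "platform": "Globus"}
adverts = {
    "svc-1": {"input": "Matrix", "output": "Eigenvalues", "platform": "Unicore"},
    "svc-2": {"input": "Matrix", "output": "Determinant", "platform": "Globus"},
}
weights = {"input": 0.4, "output": 0.4, "platform": 0.2}
ranking = sorted(adverts,
                 key=lambda s: match_quality(request, adverts[s], weights),
                 reverse=True)
print(ranking)  # services ordered by match quality
```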
Nature and Biologically Inspired Computing | 2012
Ibrahim Aljarah; Simone A. Ludwig
Large-scale data sets are difficult to manage; difficulties include the capture, storage, search, analysis, and visualization of the data. In particular, clustering of large-scale data has received considerable attention in recent years, and many application areas such as bioinformatics and social networking are in urgent need of scalable approaches. New techniques need to make use of parallel computing concepts in order to scale with increasing data set sizes. In this paper, we propose a parallel particle swarm optimization clustering (MR-CPSO) algorithm that is based on MapReduce. The experimental results reveal that MR-CPSO scales very well with increasing data set sizes and achieves close-to-linear speedup while maintaining clustering quality. The results also demonstrate that the proposed MR-CPSO algorithm can efficiently process large data sets on commodity hardware.
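A rough sketch of the map/reduce split implied by evaluating one PSO particle (a candidate set of centroids) over a partitioned data set. The in-process emulation, the fitness definition (sum of distances to the nearest centroid), and the data values are assumptions; MR-CPSO's actual Hadoop jobs and fitness are defined in the paper.

```python
# Illustrative sketch: map phase assigns each record of a data split to its
# nearest candidate centroid; reduce phase aggregates distances into a fitness.
import math

def nearest(point, centroids):
    return min(range(len(centroids)), key=lambda k: math.dist(point, centroids[k]))

def map_task(data_split, centroids):
    """Mapper: emit (centroid index, distance to it) for every record in this split."""
    out = []
    for p in data_split:
        k = nearest(p, centroids)
        out.append((k, math.dist(p, centroids[k])))
    return out

def reduce_task(mapped):
    """Reducer: aggregate distances into one fitness value (lower is better)."""
    return sum(d for _, d in mapped)

splits = [[(1.0, 1.0), (1.2, 0.9)], [(8.0, 8.1), (7.9, 8.3)]]   # two data partitions
particle = [(1.0, 1.0), (8.0, 8.0)]                             # candidate centroids
fitness = sum(reduce_task(map_task(s, particle)) for s in splits)
print("particle fitness:", round(fitness, 3))
```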
Congress on Evolutionary Computation | 2013
Ibrahim Aljarah; Simone A. Ludwig
The increasing volume of data to be analyzed in large networks imposes new challenges on an intrusion detection system. Since data in computer networks is growing rapidly, the analysis of these large amounts of data to discover anomalous fragments has to be done within a reasonable amount of time. Some past and current intrusion detection systems are based on a clustering approach; however, to cope with the increasing amount of data, new parallel methods need to be developed to make the algorithms scalable. In this paper, we propose an intrusion detection system based on a parallel particle swarm optimization clustering algorithm using the MapReduce methodology. Particle swarm optimization is well suited to the clustering task since it avoids sensitivity to the initial cluster centroids as well as premature convergence. The proposed intrusion detection system processes large data sets on commodity hardware. Experimental results on a real intrusion data set demonstrate that the proposed system scales very well with increasing data set sizes; moreover, it achieves close-to-linear speedup while improving the intrusion detection and false alarm rates.
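A minimal sketch of how clustering output could be used for detection: a record is flagged as anomalous when it lies far from every learned centroid. The distance threshold and two-dimensional features are illustrative assumptions, not the detection rule used in the paper.

```python
# Illustrative sketch: distance-to-centroid anomaly flagging on top of clustering.
import math

def is_anomalous(record, centroids, threshold):
    """A record is anomalous if it is farther than `threshold` from all centroids."""
    return min(math.dist(record, c) for c in centroids) > threshold

centroids = [(0.1, 0.2), (0.8, 0.9)]        # centroids produced by the clustering step
threshold = 0.3
traffic = [(0.12, 0.18), (0.5, 0.5), (0.82, 0.88)]
print([is_anomalous(r, centroids, threshold) for r in traffic])  # middle record flagged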
Journal of Parallel and Distributed Computing | 2005
Simone A. Ludwig; S. M. S. Reyhani
The fundamental problem the Grid research and development community is seeking to solve is how to coordinate distributed resources amongst a dynamic set of individuals and organizations in order to solve a common collaborative goal. The problem of service discovery in a Grid environment arises through the heterogeneity, distribution and sharing of the resources in different virtual organizations. This paper proposes a service discovery framework which is based on semantics. It gives an example of the Grid Job Submission Service written in DAML-S in order to show how service ontologies are implemented. This semantic approach allows a more flexible and dynamic matching mechanism based on semantic descriptions stored in ontologies.
Journal of Grid Computing | 2006
Simone A. Ludwig; Omer Farooq Rana; Julian Padget; William Naylor
Service discovery and matchmaking in a distributed environment has been an active research issue for some time. Previous work on matchmaking has typically presented the problem and service descriptions as free or structured (marked-up) text, so that keyword searches, tree-matching or simple constraint solving are sufficient to identify matches. In this paper, we discuss the problem of matchmaking for mathematical services, where the semantics play a critical role in determining the applicability or otherwise of a service and for which we use OpenMath descriptions of pre- and post-conditions. We describe a matchmaking architecture supporting the use of match plug-ins and describe five kinds of plug-in that we have developed to date: (i) a basic structural match, (ii) a syntax and ontology match, (iii) a value substitution match, (iv) an algebraic equivalence match and (v) a decomposition match. The matchmaker uses the individual match scores from the plug-ins to compute a ranking of the services by applicability. We consider the effect of pre- and post-conditions of mathematical service descriptions on matching, and how and why to reduce queries into Disjunctive Normal Form (DNF) before matching. A case study demonstrates in detail how the matching process works for the matching algorithms.
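A small sketch of the aggregation step described above: combining per-plug-in match scores into one ranking by applicability. The plug-in weights, score values, and service names are illustrative assumptions; the paper's matchmaker defines its own combination of scores.

```python
# Illustrative sketch: combining individual plug-in match scores into a ranking.
def rank_services(scores, weights):
    """scores: {service: {plugin: score in [0,1]}}; returns services sorted by weighted score."""
    def combined(svc):
        return sum(weights[p] * scores[svc].get(p, 0.0) for p in weights)
    return sorted(scores, key=combined, reverse=True)

weights = {"structural": 0.1, "ontology": 0.2, "value_substitution": 0.2,
           "algebraic_equivalence": 0.3, "decomposition": 0.2}
scores = {
    "solve-ode":     {"structural": 1.0, "ontology": 0.8, "algebraic_equivalence": 0.6},
    "invert-matrix": {"structural": 0.4, "ontology": 0.3},
}
print(rank_services(scores, weights))  # most applicable service first
```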
Congress on Evolutionary Computation | 2013
Ibrahim Aljarah; Simone A. Ludwig
High-quality clustering techniques are required for the effective analysis of growing data. Clustering is a common data mining technique used to analyze homogeneous groups of data instances based on their specifications. Nature-inspired optimization algorithms applied to clustering have received much attention, as they are able to find better solutions to cluster analysis problems. Glowworm Swarm Optimization (GSO) is a recent nature-inspired optimization algorithm that simulates the flashing behavior of glowworms. The GSO algorithm is useful for the simultaneous search of multiple solutions having different or equal objective function values. In this paper, a clustering-based GSO (CGSO) is proposed, in which GSO is adapted to the data clustering problem to locate multiple optimal centroids using GSO's multimodal search capability. The CGSO process ensures that the similarity between members of the same cluster is maximized and the similarity between members of different clusters is minimized. Furthermore, three special fitness functions are proposed to evaluate the goodness of the GSO individuals in achieving high-quality clusters. The proposed algorithm is tested on artificial and real-world data sets. On most data sets, it outperforms four popular clustering algorithms, and the results reveal that CGSO can be used efficiently for data clustering.
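As a stand-in for the three fitness functions mentioned above, the sketch below uses one generic cluster-quality measure, the sum of squared errors, to show how a candidate set of centroids (a GSO individual) could be scored. The measure and data are assumptions, not the paper's fitness functions.

```python
# Illustrative sketch: scoring a candidate set of centroids by sum of squared errors.
import math

def sse(points, centroids):
    """Sum of squared distances from each point to its nearest centroid (lower is better)."""
    return sum(min(math.dist(p, c) for c in centroids) ** 2 for p in points)

points = [(1.0, 1.0), (1.1, 0.9), (5.0, 5.0), (5.2, 4.8)]
good_individual = [(1.05, 0.95), (5.1, 4.9)]   # centroids near the two natural clusters
bad_individual = [(3.0, 3.0), (0.0, 0.0)]
print(sse(points, good_individual) < sse(points, bad_individual))  # True: compact clusters score lower
```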
IEEE International Conference on Fuzzy Systems | 2009
Simone A. Ludwig; Venkat Pulimi; Andriy Hnativ
A service-oriented environment has special characteristics that distinguish it from other computing environments: (i) the environment is dynamic; (ii) the number of service providers is unbounded; (iii) services are owned by various stakeholders with different aims and objectives; (iv) there is no central authority that can control all the service providers and consumers; (v) service providers and consumers are self-interested. Given these characteristics, the evaluation of trust and reputation is very important in such an open, dynamic and distributed environment. Therefore, a fuzzy-based trust and reputation approach using three trust sources was developed. An evaluation simulating a real-world setting in which deception occurs demonstrates the usefulness and robustness of the fuzzy approach in comparison with a weighted approach.
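A minimal sketch of one way to fuzzify several trust sources and defuzzify them into a single trust value. The three source names, the triangular membership functions, and the weights are all assumptions for illustration; the paper's fuzzy model and its particular trust sources are its own.

```python
# Illustrative sketch: fuzzify three trust sources and defuzzify to one trust value.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

SETS = {"low": (-0.01, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.01)}
PEAK = {"low": 0.0, "medium": 0.5, "high": 1.0}

def fuzzy_trust(sources, weights):
    """Weighted fuzzification of each source, then centroid-style defuzzification."""
    degrees = {name: 0.0 for name in SETS}
    for value, w in zip(sources, weights):
        for name, (a, b, c) in SETS.items():
            degrees[name] += w * tri(value, a, b, c)
    den = sum(degrees.values())
    return sum(degrees[s] * PEAK[s] for s in SETS) / den if den else 0.5

# hypothetical sources: direct experience, witness reports, certified references
print(round(fuzzy_trust([0.9, 0.7, 0.6], [0.5, 0.3, 0.2]), 3))
```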
ACM Symposium on Applied Computing | 2009
Azin Moallem; Simone A. Ludwig
Grids are an emerging infrastructure providing distributed access to computational and storage resources. Handling many incoming requests at the same time and distributing the workload efficiently is a challenge that load balancing algorithms address. Current load balancing implementations for the Grid are central in nature and therefore prone to the single-point-of-failure problem. This paper introduces two distributed, artificial-life-inspired load balancing algorithms using Ant Colony Optimization and Particle Swarm Optimization. Distributed load balancing is robust with respect to topology changes in the network. Implementation details are given, and evaluation results show the efficiency of the two distributed load balancing algorithms.
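Complementing the PSO scheduling sketch earlier, below is a toy ant-style balancing step in which wandering agents move work from overloaded nodes toward lighter ones. The overload test, the one-job-at-a-time transfer, and the loads are illustrative assumptions, not the paper's ant colony rules.

```python
# Illustrative sketch: ants repeatedly visit node pairs and shift work off overloaded nodes.
import random

def ant_step(loads, overload_factor=1.2):
    """One ant inspects a random pair of nodes and moves a unit of work if it helps."""
    avg = sum(loads) / len(loads)
    src, dst = random.sample(range(len(loads)), 2)
    if loads[src] > overload_factor * avg and loads[src] > loads[dst] + 1:
        loads[src] -= 1
        loads[dst] += 1

random.seed(0)
loads = [12, 1, 2, 3, 2]          # jobs queued at each Grid node
for _ in range(200):               # let many ants take steps
    ant_step(loads)
print(loads)                       # load is spread more evenly, no central coordinator
```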