Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jens Simon is active.

Publication


Featured research published by Jens Simon.


Archive | 1993

Problem Independent Distributed Simulated Annealing and its Applications

Ralf Diekmann; Reinhard Lüling; Jens Simon

Simulated annealing has proven to be a good technique for solving hard combinatorial optimization problems. Some attempts at speeding up annealing algorithms have been based on shared-memory multiprocessor systems; parallelizations of certain problems on distributed-memory multiprocessor systems are also known.
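The core technique the abstract refers to can be sketched in a few lines. This is a generic, illustrative implementation of sequential simulated annealing; the function names and the toy problem are our own, not from the paper:

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=10.0, alpha=0.95, steps=2000):
    """Minimize `cost` with the Metropolis rule and geometric cooling."""
    current, current_cost = state, cost(state)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = cost(cand) - current_cost
        # Always accept improvements; accept worsenings with prob. exp(-delta/t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, current_cost = cand, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= alpha  # cool down geometrically
    return best, best_cost

# Toy instance: minimize (x - 3)^2 over the integers by unit moves.
random.seed(0)
solution, value = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x: x + random.choice([-1, 1]),
    state=0,
)
```

Parallel variants distribute either the neighborhood evaluation or whole Markov chains across processors, which is the design space this line of work explores.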


Proceedings Sixth Heterogeneous Computing Workshop (HCW'97) | 1997

The MOL project: an open, extensible metacomputer

Alexander Reinefeld; Ranieri Baraglia; Thomas Decker; Jörn Gehring; Domenico Laforenza; Friedhelm Ramme; Thomas Römke; Jens Simon

Distributed high-performance computing (so-called metacomputing) refers to the coordinated use of a pool of geographically distributed high-performance computers. The user's view of an ideal metacomputer is that of a powerful monolithic virtual machine. The implementor's view, on the other hand, is that of a variety of interacting services implemented in a scalable and extensible manner. We present MOL, the Metacomputer Online environment. In contrast to other metacomputing environments, MOL is not based on specific programming models or tools. It has rather been designed as an open, extensible software system comprising a variety of software modules, each of them specialized in serving one specific task such as resource scheduling, job control, task communication, task migration, and the user interface. All of these modules exist and are working. The main challenge in the design of MOL lies in the specification of suitable, generic interfaces for effective interaction between the modules.


European Conference on Parallel Processing | 1996

Accurate Performance Prediction for Massively Parallel Systems and Its Applications

Jens Simon; Jens-Michael Wierum

A performance prediction method is presented, which accurately predicts the expected program execution time on massively parallel systems. We consider distributed-memory architectures with SMP nodes and a fast communication network. The method is based on a relaxed task graph model, a queuing model, and a memory hierarchy model. The relaxed task graph is a compact representation of the communicating processes of an application mapped onto the target machine. Simultaneous accesses to the resources of a multi-processor node are modeled by a queuing network. The execution time of the application is computed by an evaluation algorithm. An example application implemented on a massively parallel computer demonstrates the high accuracy of our model. Furthermore, two applications of our accurate prediction method are presented.
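As a rough illustration of the evaluation step, the following sketch computes a predicted makespan from a task graph with per-task compute times and per-edge communication times. All names and numbers are invented; the paper's relaxed task graph and queuing model are considerably richer (they also model contention on shared node resources):

```python
def predict_makespan(compute, edges, comm):
    """compute: task -> local execution time; edges: precedence pairs (u, v);
    comm: (u, v) -> message transfer time. Returns the predicted makespan."""
    preds = {v: [] for v in compute}
    for u, v in edges:
        preds[v].append(u)
    finish = {}

    def finish_time(v):
        # A task starts once all predecessors have finished and their
        # messages have arrived; memoize finish times for reuse.
        if v not in finish:
            ready = max((finish_time(u) + comm[(u, v)] for u in preds[v]),
                        default=0.0)
            finish[v] = ready + compute[v]
        return finish[v]

    return max(finish_time(v) for v in compute)

# Invented three-task example: "a" feeds "b" and "c".
compute = {"a": 2.0, "b": 3.0, "c": 1.0}
edges = [("a", "b"), ("a", "c")]
comm = {("a", "b"): 0.5, ("a", "c"): 0.1}
makespan = predict_makespan(compute, edges, comm)  # "b" finishes last
```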


International Conference on Cluster Computing | 2010

Enforcing SLAs in Scientific Clouds

Oliver Niehörster; André Brinkmann; Gregor Fels; Jens Krüger; Jens Simon

Software as a Service (SaaS) providers enable the on-demand use of software, which is an intriguing concept for business and scientific applications. Typically, service level agreements (SLAs) are specified between the provider and the user, defining the required quality of service (QoS). Today, SLA-aware solutions exist only for business applications. We present a general SaaS architecture for scientific software that offers an easy-to-use web interface. Scientists define their problem description and QoS requirements, and access the results through this portal. Our algorithms autonomously test the feasibility of the SLA and, if it is accepted, guarantee its fulfillment. This approach is independent of the underlying cloud infrastructure and successfully deals with performance fluctuations of cloud instances. Experiments are done with a scientific application in private and public clouds, and we also present the implementation of a high-performance computing (HPC) cloud dedicated to scientific applications.


Grid Computing | 2011

Autonomic Resource Management with Support Vector Machines

Oliver Niehörster; Alexander Krieger; Jens Simon; André Brinkmann

The use of virtualization technology makes data centers more dynamic and easier to administrate. Today, cloud providers offer customers access to complex applications running on virtualized hardware. Nevertheless, big virtualized data centers become stochastic environments, and the simplification on the user side leads to many challenges for the provider, who has to find cost-efficient configurations and deal with dynamic environments to ensure service guarantees. In this paper, we introduce a software solution that reduces the degree of human intervention needed to manage cloud services. We present a multi-agent system located in the Software as a Service (SaaS) layer. Agents allocate resources, configure applications, check the feasibility of requests, and generate cost estimates. The agents learn behavior models of the services via Support Vector Machines (SVMs) and share their experiences via a global knowledge base. We evaluate our approach on real cloud systems with three different applications: a brokerage system, high-performance computing software, and a web server.
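To make the agents' decision step concrete, here is a hypothetical sketch of how a worker agent could pick the cheapest configuration whose predicted runtime satisfies a service guarantee. In the paper the behavior model is a trained SVM; a fixed lookup table stands in for the regressor below, and all names and numbers are invented:

```python
def choose_config(configs, predict_runtime, cost, slo_seconds):
    """Return the cheapest configuration predicted to satisfy the SLO,
    or None if the request is infeasible."""
    feasible = [c for c in configs if predict_runtime(c) <= slo_seconds]
    return min(feasible, key=cost) if feasible else None

# Stand-in behavior model: predicted runtime by vCPU count (invented numbers).
# In the paper this mapping is learned by an SVM from monitoring data.
model = {1: 400.0, 2: 210.0, 4: 110.0, 8: 60.0}
best = choose_config(
    configs=list(model),
    predict_runtime=model.__getitem__,
    cost=lambda vcpus: vcpus,  # assume price grows with vCPU count
    slo_seconds=120.0,
)
```

In a real deployment, `predict_runtime` would be the regressor's prediction function, retrained as new monitoring data arrives; the infeasibility branch corresponds to the agents rejecting a request.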


Software: Practice and Experience | 2012

Virtualized HPC: a contradiction in terms?

Georg Birkenheuer; André Brinkmann; Jürgen Kaiser; Axel Keller; M. Keller; Christoph Kleineweber; Christoph Konersmann; Oliver Niehörster; Thorsten Schäfer; Jens Simon; Maximilian Wilhelm

System virtualization has become the enabling technology to manage the increasing number of different applications inside data centers. The abstraction from the underlying hardware and the provision of multiple virtual machines (VMs) on a single physical server have led to a consolidation and more efficient usage of physical servers. The abstraction from the hardware also eases the provision of applications on different data centers, as applied in several cloud computing environments. In this case, the application need not adapt to the environment of the cloud computing provider, but can travel around with its own VM image, including its own operating system and libraries. System virtualization and cloud computing could also be very attractive in the context of high-performance computing (HPC). Today, HPC centers have to cope with both the management of the infrastructure and the applications. Virtualization technology would enable these centers to focus on the infrastructure, while the users, collaborating inside their virtual organizations (VOs), would be able to provide the software. Nevertheless, there seems to be a contradiction between HPC and cloud computing, as there are very few successful approaches to virtualizing HPC centers. This work discusses the underlying reasons, including management and performance, and presents solutions to overcome the contradiction, including a set of new libraries. The viability of the presented approach is shown by evaluating a selected parallel, scientific application in a virtualized HPC environment.


International Parallel and Distributed Processing Symposium | 1992

A general purpose distributed implementation of simulated annealing

Ralf Diekmann; Reinhard Lüling; Jens Simon

The authors present a problem-independent, general-purpose parallel implementation of simulated annealing (SA) on distributed message-passing multiprocessor systems. The sequential algorithm is studied, and a classification of combinatorial optimization problems together with their neighborhood structures is given. Several parallelization approaches are examined, considering their suitability for problems of the various classes. For typical representatives of the different classes, good parallel SA implementations are presented. A novel parallel SA algorithm that works simultaneously on several Markov chains and decreases the number of chains dynamically is presented. This method yields good results with a parallel self-adapting cooling schedule. All algorithms are implemented in OCCAM-2 on a freely configurable transputer system. Measurements on various numbers of processors, up to 128 transputers, are presented.
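The multiple-Markov-chain idea can be sketched sequentially. The paper's implementation runs the chains in parallel in OCCAM-2 on transputers and uses a self-adapting cooling schedule; this Python sketch with a fixed geometric schedule and a simple keep-the-better-half reduction rule is only illustrative:

```python
import math
import random

def multi_chain_sa(cost, neighbor, starts, t0=5.0, alpha=0.8, steps=50, rounds=8):
    """Run several annealing chains in rounds; after each round, keep the
    better half of the chains (dynamic chain reduction) and cool down."""
    chains = [(s, cost(s)) for s in starts]
    best, best_cost = min(chains, key=lambda sc: sc[1])
    t = t0
    for _ in range(rounds):
        advanced = []
        for state, c in chains:
            for _ in range(steps):
                cand = neighbor(state)
                delta = cost(cand) - c
                # Metropolis acceptance at the current temperature.
                if delta <= 0 or random.random() < math.exp(-delta / t):
                    state, c = cand, c + delta
                    if c < best_cost:
                        best, best_cost = state, c
            advanced.append((state, c))
        # Dynamic chain reduction: drop the worse half, keep at least one.
        advanced.sort(key=lambda sc: sc[1])
        chains = advanced[: max(1, len(advanced) // 2)]
        t *= alpha
    return best, best_cost

# Toy instance: minimize |x - 7| from four invented start states.
random.seed(1)
best, value = multi_chain_sa(
    cost=lambda x: abs(x - 7),
    neighbor=lambda x: x + random.choice([-1, 1]),
    starts=[0, 20, -10, 40],
)
```

Reducing the chain count over time shifts processors from exploration to exploitation, which is the intuition behind the dynamic scheme the abstract describes.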


European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface | 1997

Embedding SCI into PVM

Markus Fischer; Jens Simon

The extremely low latencies and high bandwidth achievable with the Scalable Coherent Interface (SCI) at the lowest level encourage its integration into existing Message Passing Environments (MPEs). In combination with networks of workstations, it can be seen as an alternative to traditional parallel computing with tightly coupled processors, offering comparable performance at low cost. This paper describes the ongoing implementation of PVM using SCI hardware devices, with Linux and Windows NT as the operating systems. It gives an overview of SCI, its performance, and the possibilities for PVM to make use of the superior features of SCI compared with conventional Ethernet.


Grid Computing | 2012

Cost-Aware and SLO-Fulfilling Software as a Service

Oliver Niehörster; André Brinkmann; Axel Keller; Christoph Kleineweber; Jens Krüger; Jens Simon

Virtualization technology makes data centers more dynamic and easier to administrate. Today, cloud providers offer customers access to complex applications running on virtualized hardware. Nevertheless, big virtualized data centers become stochastic environments, and the simplification on the user side leads to many challenges for the provider, who has to find cost-efficient configurations and deal with dynamic environments to ensure service level objectives (SLOs). We introduce a software solution that reduces the degree of human intervention needed to manage clouds. It is designed as a multi-agent system (MAS) and placed on top of the Infrastructure as a Service (IaaS) layer. Worker agents allocate resources, configure applications, check the feasibility of requests, and generate cost estimates. They are equipped with application-specific knowledge allowing them to estimate the type and number of necessary resources. During runtime, a worker agent monitors the job and adapts its resources to ensure the specified quality of service, even in noisy clouds where job instances are influenced by other jobs. The worker agents interact with a scheduler agent, which takes care of limited resources and performs cost-aware scheduling by assigning jobs to times with low costs. The whole architecture is self-optimizing and able to use public or private clouds. Building a private cloud raises the challenge of mapping virtual machines (VMs) to hosts. We present a rule-based mapping algorithm for VMs. It offers an interface where policies can be defined and combined in a generic way. The algorithm performs the initial mapping at request time as well as a remapping during runtime, and it deals with policy and infrastructure changes. An energy-aware scheduler and the availability of cheap resources provided by a spot market are analyzed.
We evaluated our approach by building a SaaS stack that assigns resources according to an energy function and ensures the SLOs of two different applications, a brokerage system and high-performance computing software. Experiments were done on a real cloud system and in simulations.
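The rule-based placement idea, with policies defined and combined generically, can be sketched as follows. The policies, host attributes, and score combination are invented for illustration and are not the paper's actual rule set:

```python
def place(vm, hosts, policies):
    """Return the feasible host with the highest combined policy score,
    or None if no host has enough free capacity."""
    feasible = [h for h in hosts
                if h["free_cpu"] >= vm["cpu"] and h["free_ram"] >= vm["ram"]]
    if not feasible:
        return None
    return max(feasible, key=lambda h: sum(p(h, vm) for p in policies))

# Two invented policies: consolidate load (prefer fuller hosts) and prefer
# energy-efficient hardware. Policies combine by simple score addition here.
def consolidate(host, vm):
    return -host["free_cpu"]

def green(host, vm):
    return -host["watts_per_core"]

hosts = [
    {"name": "h1", "free_cpu": 8, "free_ram": 32, "watts_per_core": 10},
    {"name": "h2", "free_cpu": 2, "free_ram": 8, "watts_per_core": 12},
]
chosen = place({"cpu": 2, "ram": 4}, hosts, [consolidate, green])
```

Remapping during runtime would amount to re-running `place` for affected VMs when a policy or the infrastructure changes, migrating only where the chosen host differs from the current one.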


International Conference on Computational Science | 2001

A Cache Simulator for Shared Memory Systems

Florian Schintke; Jens Simon; Alexander Reinefeld

Due to the increasing gap between processor speed and memory access time, a large fraction of a program's execution time is spent in accesses to the various levels of the memory hierarchy. Hence, cache-aware programming is of prime importance. For efficiently utilizing the memory subsystem, many architecture-specific characteristics must be taken into account: cache size, replacement strategy, access latency, number of memory levels, etc. In this paper, we present a simulator for the accurate performance prediction of sequential and parallel programs on shared memory systems. It assists the programmer in locating the critical parts of the code that have the greatest impact on the overall performance. Our simulator is based on the Latency-of-Data-Access model, which focuses on modeling the access times to the different memory levels. We describe the design of our simulator, its configuration, and its usage in an example application.
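A toy version of such a simulator, assuming a single LRU cache level in front of main memory with invented sizes and latencies, can be sketched like this; in the spirit of the Latency-of-Data-Access model, each access is charged the latency of the first level that holds the line:

```python
from collections import OrderedDict

class LRUCache:
    """One fully associative LRU cache level; misses go to main memory."""

    def __init__(self, lines, hit_ns, miss_ns):
        self.lines, self.hit_ns, self.miss_ns = lines, hit_ns, miss_ns
        self.store = OrderedDict()  # tag -> present, ordered by recency

    def access(self, addr, line=64):
        tag = addr // line
        if tag in self.store:
            self.store.move_to_end(tag)      # refresh recency on a hit
            return self.hit_ns
        if len(self.store) >= self.lines:
            self.store.popitem(last=False)   # evict least recently used line
        self.store[tag] = True               # install the missed line
        return self.miss_ns

# Invented configuration: 4 lines, 1 ns hit, 100 ns miss; the trace below
# produces one hit (the second access to address 0) before capacity misses.
cache = LRUCache(lines=4, hit_ns=1.0, miss_ns=100.0)
total = sum(cache.access(a) for a in [0, 64, 0, 128, 192, 256, 320, 0])
```

A real simulator of this kind layers several such levels with their own sizes, latencies, and replacement strategies, and attributes the accumulated time back to source locations to flag performance-critical code.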

Collaboration


Dive into Jens Simon's collaborations.

Top Co-Authors

Axel Keller

University of Paderborn


Jens Krüger

University of Tübingen
