
Publication


Featured research published by Amnon Barak.


Future Generation Computer Systems | 1998

The MOSIX multicomputer operating system for high performance cluster computing

Amnon Barak; Oren La'adan

The scalable computing cluster at the Hebrew University consists of 88 Pentium II and Pentium Pro servers connected by Fast Ethernet and Myrinet LANs. It runs the MOSIX operating system, an enhancement of BSD/OS with algorithms for adaptive resource sharing that are geared for performance scalability in a scalable computing cluster. These algorithms use preemptive process migration for load balancing and memory ushering in order to create a convenient multiuser time-sharing execution environment for HPC, particularly for applications written in PVM or MPI. This paper begins with a brief overview of MOSIX and its resource-sharing algorithms, then presents the performance of these algorithms as well as the performance of several large-scale parallel applications.


Software: Practice and Experience | 1985

A distributed load-balancing policy for a multicomputer

Amnon Barak; Amnon Shiloh

This paper deals with the organization of a distributed load-balancing policy for a multicomputer system consisting of a cluster of independent computers interconnected by a local area communication network. We introduce three algorithms necessary to maintain load balancing in this system: the local load algorithm, used by each processor to monitor its own load; the exchange algorithm, for exchanging load information between the processors; and the process migration algorithm, which uses this information to dynamically migrate processes from overloaded to underloaded processors.
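The three algorithms named in the abstract can be illustrated with a toy simulation; the class layout, the load metric (a sum of per-process weights), and the migration threshold below are illustrative assumptions, not the paper's actual design:

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.procs = []          # local processes (each entry is a cpu weight)
        self.known_loads = {}    # loads last heard from other nodes

    # 1. Local load algorithm: each processor monitors its own load.
    def local_load(self):
        return sum(self.procs)

    # 2. Exchange algorithm: a pair of nodes swap their current loads.
    def exchange(self, peer):
        self.known_loads[peer.name] = peer.local_load()
        peer.known_loads[self.name] = self.local_load()

    # 3. Migration algorithm: if a known peer is sufficiently less loaded,
    #    migrate one process to it.
    def maybe_migrate(self, nodes_by_name, threshold=2):
        if not self.procs or not self.known_loads:
            return
        target_name, load = min(self.known_loads.items(), key=lambda kv: kv[1])
        if self.local_load() - load >= threshold:
            nodes_by_name[target_name].procs.append(self.procs.pop())

a, b = Node("a"), Node("b")
a.procs = [1, 1, 1, 1]   # overloaded
b.procs = [1]            # underloaded
a.exchange(b)
a.maybe_migrate({"a": a, "b": b})
print(a.local_load(), b.local_load())   # → 3 2
```

In the real policy the exchange step runs periodically against randomly chosen peers, so no node needs a global view of the cluster.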


International Conference on Cluster Computing | 2010

A package for OpenCL based heterogeneous computing on clusters with many GPU devices

Amnon Barak; Tal Ben-Nun; Ely Levy; Amnon Shiloh

Heterogeneous systems provide new opportunities to increase the performance of parallel applications on clusters with CPU and GPU architectures. Currently, applications that utilize GPU devices run their device-executable code on local devices in their respective hosting-nodes. This paper presents a package for running OpenMP, C++ and unmodified OpenCL applications on clusters with many GPU devices. This Many GPUs Package (MGP) includes an implementation of the OpenCL specifications and extensions of the OpenMP API that allow applications on one hosting-node to transparently utilize cluster-wide devices (CPUs and/or GPUs). MGP provides means for reducing the complexity of programming and running parallel applications on clusters, including scheduling based on task dependencies and buffer management. The paper presents MGP and the performance of its internals.
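One MGP feature the abstract mentions is scheduling based on task dependencies. A minimal sketch of that idea is an ordering pass that runs each kernel only after its input buffers are produced (Kahn's topological sort); the task and dependency model here is invented for illustration and is not MGP's actual API:

```python
from collections import deque

def schedule(tasks, deps):
    """Order tasks so every task runs after all of its dependencies."""
    pending = {t: set(deps.get(t, ())) for t in tasks}
    ready = deque(t for t, d in pending.items() if not d)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for u, d in pending.items():
            if t in d:
                d.remove(t)
                if not d:
                    ready.append(u)
    return order

# kernels B and C read A's output buffer; D consumes the outputs of both
print(schedule(["A", "B", "C", "D"],
               {"B": ["A"], "C": ["A"], "D": ["B", "C"]}))   # → ['A', 'B', 'C', 'D']
```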


Software: Practice and Experience | 1985

MOS: a multicomputer distributed operating system

Amnon Barak; Ami Litman

This paper describes the goals and the internal structure of MOS, a Multicomputer distributed Operating System. MOS is a general‐purpose time‐sharing operating system which makes a cluster of loosely connected independent homogeneous computers behave as a single‐machine UNIX system. The main goals of the system include network transparency, decentralized control, site autonomy and dynamic process migration. The main objective in the design of the system was to reduce the complexity of the system, while maintaining good performance. The internal structure of the system can be characterized by modularity, a high degree of information hiding, hierarchical organization and remote procedure calls.


IEEE Transactions on Parallel and Distributed Systems | 2003

Opportunity cost algorithms for reduction of I/O and interprocess communication overhead in a computing cluster

Arie Keren; Amnon Barak

Computing clusters (CC) consisting of several connected machines could provide a high-performance, multiuser, timesharing environment for executing parallel and sequential jobs. In order to achieve good performance in such an environment, it is necessary to assign processes to machines in a manner that ensures efficient allocation of resources among the jobs. The paper presents opportunity cost algorithms for online assignment of jobs to machines in a CC. These algorithms are designed to improve the overall CPU utilization of the cluster and to reduce the I/O and interprocess communication (IPC) overhead. Our approach is based on known theoretical results on competitive algorithms. The main contribution of the paper is showing how to adapt this theory into working algorithms that assign jobs to machines in a manner that guarantees near-optimal utilization of the CPU resource for jobs that perform I/O and IPC operations. The developed algorithms are easy to implement. We tested the algorithms by means of simulations and executions in a real system and show that they outperform existing methods for process allocation that are based on ad hoc heuristics.
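A hedged sketch of the opportunity-cost idea described in the abstract: each machine is charged a cost that grows exponentially with the utilization of each of its resources, and an arriving job goes to the machine whose total cost would rise the least. The cost base, resource names, and utilization fractions below are illustrative assumptions, not values from the paper:

```python
def cost(utils, base):
    # utils maps resource name -> utilization as a fraction of capacity
    return sum(base ** u for u in utils.values())

def marginal_cost(machine, job, base):
    after = {r: machine[r] + job.get(r, 0.0) for r in machine}
    return cost(after, base) - cost(machine, base)

def assign(machines, job, base=4.0):
    # pick the machine with the smallest increase in opportunity cost
    name = min(machines, key=lambda m: marginal_cost(machines[m], job, base))
    for r in machines[name]:
        machines[name][r] += job.get(r, 0.0)
    return name

machines = {
    "m1": {"cpu": 0.8, "io": 0.1},   # busy CPU, idle I/O
    "m2": {"cpu": 0.2, "io": 0.7},   # idle CPU, busy I/O
}
job = {"cpu": 0.1, "io": 0.0}        # a CPU-bound job
print(assign(machines, job))          # → m2 (its CPU is far less utilized)
```

The exponential cost is what makes the scheme sensitive to imbalance: adding CPU load to the already-busy machine costs much more than adding the same load to the idle one.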


Cluster Computing and the Grid | 2005

An organizational grid of federated MOSIX clusters

Amnon Barak; Amnon Shiloh; Lior Amar

MOSIX is a cluster management system that uses process migration to allow a Linux cluster to perform like a parallel computer. Recently it has been extended with new features that make a grid of Linux clusters run as a cooperative system of federated clusters. It supports automatic workload distribution among connected clusters that belong to different owners, while preserving the autonomy of each owner to disconnect its cluster from the grid at any time without sacrificing migrated processes from other clusters. Other new features of MOSIX include grid-wide automatic resource discovery; a precedence scheme for local processes and among guest processes (from other clusters); flood control; a secure run-time environment (sandbox) that prevents guest processes from accessing local resources in a hosting system; and support for cluster partitions. The resulting grid management system is suitable for creating an intra-organizational high-performance computational grid, e.g., in an enterprise or on a campus. The paper presents the enhanced and new features of MOSIX and their performance.


Microprocessors and Microsystems | 1998

Memory ushering in a scalable computing cluster

Amnon Barak; Avner Braverman

Scalable computing clusters (SCC) are becoming an alternative to mainframes and MPPs for the execution of high-performance, demanding applications in multiuser, time-sharing environments. In order to better utilize the multiple resources of such systems, it is necessary to develop means for cluster-wide resource allocation and sharing that will make an SCC easy to program and use. This paper presents the details of a memory ushering algorithm among the nodes of an SCC. This algorithm allows a node that has exhausted its main memory to use available memory in other nodes. The paper first presents results of simulations of several algorithms for placing processes on nodes. It then describes the memory ushering algorithm of the MOSIX multicomputer operating system for an SCC and its performance.
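A minimal sketch of memory ushering as the abstract summarizes it: when a node exhausts its main memory, it ushers a process out to the node that currently has the most free memory. The data layout, the choice to move the largest process, and all sizes are illustrative assumptions:

```python
def usher(nodes, exhausted):
    """Move the largest process off `exhausted` to the node with the most
    free memory, if that node can hold it. Returns the target node or None."""
    node = nodes[exhausted]
    proc = max(node["procs"], key=lambda p: p["mem"])
    candidates = {n: v for n, v in nodes.items() if n != exhausted}
    target = max(candidates, key=lambda n: candidates[n]["free"])
    if nodes[target]["free"] >= proc["mem"]:
        node["procs"].remove(proc)
        node["free"] += proc["mem"]
        nodes[target]["procs"].append(proc)
        nodes[target]["free"] -= proc["mem"]
        return target
    return None

nodes = {
    "n1": {"free": 0,   "procs": [{"pid": 1, "mem": 64}, {"pid": 2, "mem": 128}]},
    "n2": {"free": 256, "procs": []},
    "n3": {"free": 32,  "procs": []},
}
print(usher(nodes, "n1"))   # → n2
```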


Proceedings of the International Workshop on Experiences with Distributed Systems | 1987

Design Principles of Operating Systems for Large Scale Multicomputers

Amnon Barak; Yoram Kornatzky

Future multicomputer systems are expected to consist of thousands of interconnected computers. To simplify the use of these systems, multicomputer operating systems must be developed to integrate a cluster of computers into a unified and coherent environment. Using existing multicomputer operating systems is inappropriate, as many commonly used techniques get clogged and lead to congestion once the system is enlarged beyond a certain size. This paper deals with the various issues involved in designing an operating system for a large-scale multicomputer. We identify the difficulties of using existing operating systems in large multicomputer configurations. Then, based on insight gained in the design of several algorithms, we present eight principles which should serve as guidelines for the designer of such systems. These principles include symmetry, customer-server protocols, and partiality. Another component of our approach is the use of randomness in the system's control. We present probabilistic algorithms for information scattering and load estimation. Tolerating node failures, and performing garbage collection after such failures, are part of a distributed operating system's routine operations. We present a robust algorithm for locating processes and an efficient algorithm for garbage collection in a large-scale system, both in line with our principles.
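The randomized information scattering the abstract mentions can be sketched as a gossip round: each node sends its load to a small random subset of peers, so every node builds an approximate, partial view of cluster load rather than a global one. The fan-out, data layout, and averaging estimate below are illustrative assumptions:

```python
import random

def scatter_round(loads, views, fanout=2, rng=random):
    """Each node reports its load to `fanout` randomly chosen peers."""
    nodes = list(loads)
    for sender in nodes:
        for peer in rng.sample([n for n in nodes if n != sender], fanout):
            views[peer][sender] = loads[sender]

def estimate_avg_load(view):
    # a node estimates cluster load from whatever partial view it holds
    return sum(view.values()) / len(view)

loads = {"a": 4, "b": 1, "c": 2, "d": 3}
views = {n: {n: loads[n]} for n in loads}   # each node knows its own load
scatter_round(loads, views)
# every node now holds a partial view from which it estimates cluster load
print({n: round(estimate_avg_load(v), 2) for n, v in views.items()})
```

Because each node contacts only a constant number of peers per round, the communication cost stays flat as the cluster grows, which is the point the paper's design principles drive at.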


International Conference on Cluster Computing | 2008

Combining Virtual Machine migration with process migration for HPC on multi-clusters and Grids

Tal Maoz; Amnon Barak; Lior Amar

The renewed interest in virtualization gives rise to new opportunities for running high performance computing (HPC) applications on clusters and grids. These include the ability to create a uniform (virtual) run-time environment on top of a multitude of hardware and software platforms, and the possibility for dynamic resource allocation towards the improvement of process performance, e.g., by virtual machine (VM) migration as a means for load-balancing. This paper deals with issues related to running HPC applications on multi-clusters and grids using VMware, a virtualization package running on Windows, Linux and OS X. The paper presents the “Jobrun” system for transparent, on-demand VM launching upon job submission, and its integration with the MOSIX cluster and grid management system. We present a novel approach to job migration, combining VM migration with process migration using Jobrun, by which it is possible to migrate groups of processes and parallel jobs among different clusters in a multi-cluster or in a grid. We use four real HPC applications to evaluate the overheads of VMware (both on Linux and Windows), the MOSIX cluster extensions and their combination, and present detailed measurements of the performance of Jobrun.


Proceedings of the Seventh Israeli Conference on Computer Systems and Software Engineering | 1996

Performance of PVM with the MOSIX preemptive process migration scheme

Amnon Barak; Avner Braverman; Ilia Gilderman; Oren Laden

With the increased interest in workstation networks for parallel and high performance computing, it is necessary to reexamine the use of process migration algorithms to improve the overall utilization of the system, to achieve high performance and to allow flexible use of idle workstations. Currently, almost all programming environments for parallel systems do not use process migration for task assignment. Instead, a static process assignment is used, with sub-optimal performance, especially when several users execute multiple processes simultaneously. The paper highlights the advantages of a process migration scheme for better utilization of the computing resources as well as for gaining substantial speedups in the execution of parallel and multi-tasking applications. The authors executed several CPU- and communication-bound benchmarks under PVM, a popular programming environment for parallel computing that uses static process assignment. These benchmarks were executed under the MOSIX multicomputer operating system, with and without its preemptive process migration scheme. The results of these benchmarks demonstrate the advantages of using preemptive process migration. The paper begins with an overview of MOSIX, a multicomputer enhancement of UNIX that supports transparent process migration for load-balancing, and of PVM. They then present the performance of the executions of the benchmarks. Their results show that in some cases the improvements in the performance of PVM with the MOSIX process migration can reach tens or even hundreds of percent.

Collaboration


Dive into Amnon Barak's collaborations.

Top Co-Authors

Amnon Shiloh (Hebrew University of Jerusalem)
Shai Guday (Hebrew University of Jerusalem)
Ely Levy (Hebrew University of Jerusalem)
Lior Amar (Hebrew University of Jerusalem)
Zvi Drezner (California State University)
Michael Okun (University College London)
Tal Ben-Nun (Hebrew University of Jerusalem)
Arie Keren (Hebrew University of Jerusalem)
Gad Aharoni (Hebrew University of Jerusalem)
Ron Ben-Natan (Hebrew University of Jerusalem)