Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Luc Bougé is active.

Publications


Featured research published by Luc Bougé.


Parallel Computing | 2001

The Hyperion system: compiling multithreaded Java bytecode for distributed execution

Gabriel Antoniu; Luc Bougé; Philip J. Hatcher; Mark MacBeth; Keith McGuigan; Raymond Namyst

Our work combines Java compilation to native code with a run-time library that executes Java threads in a distributed-memory environment. This allows a Java programmer to view a cluster of processors as executing a single Java virtual machine. The separate processors are simply resources for executing Java threads with true parallelism, and the run-time system provides the illusion of a shared memory on top of the private memories of the processors. The environment we present is available on top of several UNIX systems and can use a large variety of communication interfaces thanks to the high portability of its run-time system. To evaluate our approach, we compare serial C, serial Java, and multithreaded Java implementations of a branch-and-bound solution to the minimal-cost map-coloring problem. All measurements have been carried out on two platforms using two different communication interfaces: SISCI/SCI and MPI-BIP/Myrinet.
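
The programming model in question is ordinary multithreaded Java: threads communicate only through shared objects and synchronized methods, and the runtime, whether a single JVM or, under Hyperion, a whole cluster, supplies the shared memory. As a rough illustration (all class and method names here are invented, and this is not Hyperion code), a branch-and-bound style of program in that model looks like this:

```java
// Minimal sketch (not Hyperion code): worker threads share a best-cost bound
// through a synchronized object, exactly as they would inside a single JVM.
public class SharedBoundDemo {

    // Shared bound object: the only communication channel between workers.
    static class Bound {
        private int best = Integer.MAX_VALUE;

        synchronized boolean update(int candidate) {
            if (candidate < best) { best = candidate; return true; }
            return false;
        }

        synchronized int get() { return best; }
    }

    public static void main(String[] args) throws InterruptedException {
        Bound bound = new Bound();
        Thread[] workers = new Thread[4];

        for (int i = 0; i < workers.length; i++) {
            final int seed = i;
            workers[i] = new Thread(() -> {
                // Stand-in for exploring one subtree of the search space.
                for (int step = 0; step < 1000; step++) {
                    int candidateCost = 10_000 - (seed * 1000 + step);
                    // Prune: skip work that cannot beat the current bound.
                    if (candidateCost >= bound.get()) continue;
                    bound.update(candidateCost);
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println("Best cost found: " + bound.get());
    }
}
```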


International Parallel Processing Symposium | 1999

An Efficient and Transparent Thread Migration Scheme in the PM2 Runtime System

Gabriel Antoniu; Luc Bougé; Raymond Namyst

This paper describes a new iso-address approach to the dynamic allocation of data in a multithreaded runtime system with thread migration capability. The system guarantees that migrated threads and their associated static data are relocated at exactly the same virtual addresses on the destination nodes, so that no post-migration processing is needed to keep pointers valid. In the experiments reported, a thread can be migrated in less than 75 μs.
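
Iso-address allocation is a low-level mechanism (reserving identical virtual address ranges on every node), so it has no direct Java counterpart; the sketch below is only a loose conceptual analogue, with invented names, of why identical memory layouts make migrated references valid without any fix-up:

```java
// Conceptual analogue only (not the PM2 iso-address allocator): every node
// lays out its arena identically, so an offset allocated on one node denotes
// the same slot on any other node and needs no translation after migration.
public class IsoAddressSketch {

    // One per node: a fixed-size arena with a deterministic bump allocator.
    static class NodeArena {
        final long[] memory = new long[1024];
        int next = 0;

        // Identical allocation sequences on different nodes yield identical offsets.
        int allocate(int slots) {
            int offset = next;
            next += slots;
            return offset;
        }
    }

    public static void main(String[] args) {
        NodeArena nodeA = new NodeArena();
        NodeArena nodeB = new NodeArena();

        // Both nodes perform the same allocation sequence at startup.
        int dataRef = nodeA.allocate(8);
        nodeB.allocate(8); // the same slots are reserved on the destination node

        // A "thread" running on node A stores data through its reference.
        nodeA.memory[dataRef] = 42;

        // Migration: copy the slots; the reference itself is unchanged.
        System.arraycopy(nodeA.memory, dataRef, nodeB.memory, dataRef, 8);

        // No post-migration pointer fixing is needed on node B.
        System.out.println("Value on node B via the same reference: " + nodeB.memory[dataRef]);
    }
}
```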


International Parallel and Distributed Processing Symposium | 2001

DSM-PM2: a portable implementation platform for multithreaded DSM consistency protocols

Gabriel Antoniu; Luc Bougé

DSM-PM2 is a platform for designing, implementing and experimenting with multithreaded DSM consistency protocols. It provides a generic toolbox which facilitates protocol design and allows for easy experimentation with alternative protocols for a given consistency model. DSM-PM2 is portable across a wide range of clusters. We illustrate its power with figures obtained for different protocols implementing sequential consistency, release consistency and Java consistency, on top of Myrinet, Fast-Ethernet and SCI clusters.
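
DSM-PM2 itself is a C library; the sketch below is only a conceptual picture, with invented names rather than the real API, of what a generic toolbox of pluggable consistency protocols amounts to: a table of per-protocol hooks that the DSM core invokes on page faults and synchronization points.

```java
import java.util.Map;

// Conceptual sketch only (invented names, not the DSM-PM2 C API): a DSM core
// dispatches page events to pluggable per-protocol hooks, so alternative
// consistency protocols can be swapped without touching the core.
public class ProtocolToolboxSketch {

    interface ConsistencyProtocol {
        void onReadFault(int page);
        void onWriteFault(int page);
        void onSynchronizationPoint();   // e.g. lock release or barrier
    }

    // Caricature of a sequential-consistency style protocol: act eagerly on every fault.
    static class EagerInvalidate implements ConsistencyProtocol {
        public void onReadFault(int page)  { System.out.println("fetch page " + page); }
        public void onWriteFault(int page) { System.out.println("invalidate remote copies of page " + page); }
        public void onSynchronizationPoint() { /* nothing deferred */ }
    }

    // Caricature of a release-consistency style protocol: defer work to synchronization points.
    static class DeferredFlush implements ConsistencyProtocol {
        public void onReadFault(int page)  { System.out.println("fetch page " + page); }
        public void onWriteFault(int page) { System.out.println("log local write to page " + page); }
        public void onSynchronizationPoint() { System.out.println("flush logged writes"); }
    }

    public static void main(String[] args) {
        Map<String, ConsistencyProtocol> toolbox = Map.of(
                "sequential", new EagerInvalidate(),
                "release", new DeferredFlush());

        ConsistencyProtocol protocol = toolbox.get("release");
        protocol.onWriteFault(7);
        protocol.onSynchronizationPoint();
    }
}
```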


International Parallel Processing Symposium | 1999

Efficient Communications in Multithreaded Runtime Systems

Luc Bougé; Jean-François Méhaut; Raymond Namyst

Most existing multithreaded environments are built on top of standard communication interfaces such as MPI, which ensures a high level of portability. However, such interfaces do not meet the efficiency needs of RPC-like communications, which are used extensively in multithreaded environments. We propose a new portable and efficient communication interface for RPC-based multithreaded environments, called Madeleine. We describe its programming interface and its implementation on top of low-level network protocols such as VIA. We also report performance results that demonstrate the efficiency of our approach.
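
As a purely illustrative sketch (invented names, not the Madeleine interface), the RPC-oriented style of communication described above amounts to gathering a header and arguments into one logical message that is handed to the network in a single operation, rather than one send per fragment:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

// Conceptual sketch only (invented names, not the Madeleine API): an RPC-style
// send gathers a header and arguments and transmits them as one message.
public class RpcPackSketch {

    static class Message {
        private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

        Message pack(byte[] fragment) throws IOException {
            buffer.write(fragment);          // gather, do not transmit yet
            return this;
        }

        byte[] send() {
            return buffer.toByteArray();     // stand-in for a single network send
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] header = "invoke:computeBound".getBytes();
        byte[] arg    = ByteBuffer.allocate(4).putInt(42).array();

        byte[] wire = new Message().pack(header).pack(arg).send();
        System.out.println("One message of " + wire.length + " bytes on the wire");
    }
}
```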


Communications of the ACM | 2001

Enabling Java for high-performance computing

Thilo Kielmann; Philip J. Hatcher; Luc Bougé; Henri E. Bal

Java has become increasingly popular as a general-purpose programming language. Current Java implementations focus mainly on the portability and interoperability required for Internet-centric client/server computing. Key to Java's success is its intermediate "bytecode" representation, which can be exchanged and executed by Java Virtual Machines (JVMs) on almost any computing platform. However, along with that popularity has come an increasing need for an efficient execution mode. For sequential execution, just-in-time compilers improve application performance [4]. But high-performance computing applications typically require multiple-processor systems, so efficient interprocessor communication is also needed, in addition to efficient sequential execution. As an OO language, Java uses method invocation as its main communication concept; for example, inside a single JVM, concurrent threads of control can communicate through synchronized method invocations. On a multiprocessor system with shared memory (SMP), this approach allows for some limited form of parallelism by mapping threads to different physical processors. For distributed-memory systems, Java offers the concept of a remote method invocation (RMI). With RMI, the method invocation, along with its parameters and results, is transferred across a network to and from the serving object on a remote JVM (see the sidebar "Remote Method Invocation"). With these built-in concepts for concurrency and distributed-memory communication, Java provides a unique opportunity for a widely accepted general-purpose language with a large base of existing code and programmers to also suit the needs of parallel (high-performance) computing. Unfortunately, Java is not yet widely perceived by programmers as such, due to the …
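
The RMI mechanism referred to here is the standard java.rmi API; a minimal self-contained example, with the registry, server object, and client call folded into one process for brevity, looks roughly like this:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Minimal java.rmi example: registry, server object and client call run in a
// single process for brevity; in a real deployment the client would look up
// the stub on a remote host and the call would cross the network.
public class RmiDemo {

    // A remote interface: every method can fail with a RemoteException.
    public interface Adder extends Remote {
        int add(int a, int b) throws RemoteException;
    }

    // The serving object living inside the "server" JVM.
    static class AdderImpl implements Adder {
        public int add(int a, int b) { return a + b; }
    }

    public static void main(String[] args) throws Exception {
        // Export the object and register its stub under a well-known name.
        AdderImpl impl = new AdderImpl();
        Adder stub = (Adder) UnicastRemoteObject.exportObject(impl, 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("adder", stub);

        // "Client" side: obtain the stub and invoke the method through RMI.
        Adder remote = (Adder) registry.lookup("adder");
        System.out.println("2 + 3 = " + remote.add(2, 3));

        // Clean up so the JVM can exit (exported objects keep RMI threads alive).
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(registry, true);
    }
}
```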


Advances in Computers | 1996

The Data Parallel Programming Model: A Semantic Perspective

Luc Bougé

We provide a short introduction to the data parallel programming model. We argue that parallel computing often makes little distinction between the execution model and the programming model. This results in poor programming and low portability. Using the “GOTO considered harmful” analogy, we show that data parallelism can be seen as a way out of this difficulty. We show that important aspects of the data parallel model were already present in earlier approaches to parallel programming, and demonstrate that the data parallel programming model can be characterized by a small number of concepts with simple semantics.
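
The separation between the programming model (what happens to each element) and the execution model (how processors actually run it) can be made concrete with any elementwise operation; for instance, in Java, a parallel stream states the computation without saying anything about scheduling:

```java
import java.util.Arrays;
import java.util.stream.IntStream;

// A data-parallel statement: the program says what happens to each element and
// deliberately says nothing about how the work is mapped onto processors.
public class DataParallelSketch {
    public static void main(String[] args) {
        int[] a = IntStream.range(0, 16).toArray();

        // Elementwise square; .parallel() changes the execution model
        // (fork/join worker threads) without changing the program's meaning.
        int[] squares = IntStream.range(0, a.length)
                                 .parallel()
                                 .map(i -> a[i] * a[i])
                                 .toArray();

        System.out.println(Arrays.toString(squares));
    }
}
```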


Parallel Computing | 2002

Madeleine II: a portable and efficient communication library for high-performance cluster computing

Olivier Aumage; Luc Bougé; Jean-François Méhaut; Raymond Namyst

This paper introduces Madeleine II, an adaptive and portable multiprotocol communication library for high-performance multithreaded applications. Madeleine II has the ability to control multiple network protocols (BIP, SISCI, VIA) and multiple network adapters (Ethernet, Myrinet, SCI). Moreover, it includes advanced mechanisms to dynamically select the most appropriate transfer method for a given network protocol according to various parameters such as data size or user responsiveness requirements. We report on performance measurements obtained using various protocols and we present preliminary results about porting the MPICH and Nexus communication libraries on top of Madeleine II.
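
As a conceptual sketch only (this is not the Madeleine II API), selecting a transfer method from parameters such as message size and responsiveness requirements can be pictured as a small decision function, for example choosing between an eager copy and a rendezvous-style bulk transfer:

```java
// Conceptual sketch only (not the Madeleine II API): pick a transfer method
// per message from simple parameters such as its size, in the spirit of a
// library choosing between copy-into-preposted-buffer and rendezvous transfers.
public class TransferSelectionSketch {

    enum Method { EAGER_COPY, RENDEZVOUS }

    // Illustrative threshold; real values depend on the network protocol.
    static final int EAGER_LIMIT_BYTES = 8 * 1024;

    static Method choose(int messageBytes, boolean latencySensitive) {
        if (latencySensitive || messageBytes <= EAGER_LIMIT_BYTES) {
            return Method.EAGER_COPY;   // one copy, no handshake: low latency
        }
        return Method.RENDEZVOUS;       // handshake first, then bulk transfer
    }

    public static void main(String[] args) {
        System.out.println(choose(256, false));        // EAGER_COPY
        System.out.println(choose(1 << 20, false));    // RENDEZVOUS
        System.out.println(choose(1 << 20, true));     // EAGER_COPY
    }
}
```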


European Conference on Parallel Processing | 2000

Compiling Multithreaded Java Bytecode for Distributed Execution

Gabriel Antoniu; Luc Bougé; Philip J. Hatcher; Mark MacBeth; Keith McGuigan; Raymond Namyst

Our work combines Java compilation to native code with a run-time library that executes Java threads in a distributed-memory environment. This allows a Java programmer to view a cluster of processors as executing a single Java virtual machine. The separate processors are simply resources for executing Java threads with true concurrency, and the run-time system provides the illusion of a shared memory on top of the private memories of the processors. The environment we present is available on top of several UNIX systems and can use a large variety of network protocols thanks to the high portability of its run-time system. To evaluate our approach, we compare serial C, serial Java, and multithreaded Java implementations of a branch-and-bound solution to the minimal-cost map-coloring problem. All measurements have been carried out on two platforms using two different network protocols: SISCI/SCI and MPI-BIP/Myrinet.


Distributed Event-Based Systems | 2014

JetStream: enabling high performance event streaming across cloud data-centers

Radu Tudoran; Olivier Nano; Ivo Santos; Alexandru Costan; Hakan Soncu; Luc Bougé; Gabriel Antoniu

The easily accessible computation power offered by cloud infrastructures, coupled with the revolution of Big Data, is expanding the scale and speed at which data analysis is performed. In their quest for finding the Value in the 3 Vs of Big Data, applications process larger data sets, within and across clouds. Enabling fast data transfers across geographically distributed sites becomes particularly important for applications which manage continuous streams of events in real time. Scientific applications (e.g. the Ocean Observatory Initiative or the ATLAS experiment) as well as commercial ones (e.g. Microsoft's Bing and Office 365 large-scale services) operate on tens of data centers around the globe and follow similar patterns: they aggregate monitoring data, assess QoS, or run global data-mining queries based on inter-site event stream processing. In this paper, we propose a set of strategies for efficient transfers of events between cloud data centers and we introduce JetStream: a prototype implementing these strategies as a high-performance batch-based streaming middleware. JetStream is able to self-adapt to the streaming conditions by modeling and monitoring a set of context parameters. It further aggregates the available bandwidth by enabling multi-route streaming across cloud sites. The prototype was validated on tens of nodes from US and Europe data centers of the Windows Azure cloud using synthetic benchmarks and with application code from the context of the ALICE experiment at CERN. The results show an increase in transfer rate of 250 times over individual event streaming. Moreover, introducing an adaptive transfer strategy brings an additional 25% gain. Finally, the transfer rate can be further tripled thanks to the use of multi-route streaming.
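
The trade-off JetStream builds on, namely that per-event transfers pay a fixed latency cost which batching amortizes at the price of delaying events, can be sketched with a simple batcher whose target batch size adapts to the observed transfer cost. This is an illustrative sketch with a deliberately naive adaptation rule, not JetStream's implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only (not the JetStream implementation): batch events so
// that per-transfer overhead is amortized, and adapt the target batch size
// from the cost observed on previous transfers.
public class AdaptiveBatcherSketch {

    private final List<String> batch = new ArrayList<>();
    private int targetBatchSize = 16;

    void submit(String event) {
        batch.add(event);
        if (batch.size() >= targetBatchSize) {
            flush();
        }
    }

    // Flush whatever is pending, e.g. on shutdown or on a timer.
    void close() {
        if (!batch.isEmpty()) {
            flush();
        }
    }

    private void flush() {
        long start = System.nanoTime();
        send(new ArrayList<>(batch));   // one transfer for the whole batch
        batch.clear();

        // Naive adaptation rule: cheap transfers encourage larger batches,
        // expensive ones push back toward smaller, more responsive batches.
        long micros = (System.nanoTime() - start) / 1_000;
        targetBatchSize = (micros < 500)
                ? Math.min(targetBatchSize * 2, 1024)
                : Math.max(targetBatchSize / 2, 1);
    }

    private void send(List<String> events) {
        // Stand-in for a network transfer to a remote data center.
        System.out.println("sent batch of " + events.size() + " events");
    }

    public static void main(String[] args) {
        AdaptiveBatcherSketch batcher = new AdaptiveBatcherSketch();
        for (int i = 0; i < 100; i++) {
            batcher.submit("event-" + i);
        }
        batcher.close();
    }
}
```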


Future Generation Computer Systems | 1992

Control structures for data-parallel SIMD languages: semantics and implementation

Luc Bougé; Jean-Luc Levaire

We define a simple language which encapsulates the main concepts of SIMD data-parallel programming, and we give its operational semantics. This language includes a unique data-parallel control structure called multitype conditioning and escape. We show that it suffices to express all data-parallel extensions of the usual scalar control structures of C, as found in C*, MPL, POMPC, etc. Moreover, we give a formal correctness proof for two different implementations of this new statement, respectively by a single context stack and by a set of counters. Thus, this simple language appears to be an interesting basis for studying data-parallel SIMD programming methodology.
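
The single-context-stack implementation mentioned in the abstract can be pictured concretely: each virtual processor carries an activity bit, a data-parallel conditional pushes a narrowed activity mask for one branch and its complement for the other, and pops the mask afterwards. The sketch below illustrates this for a where/elsewhere conditional in the style of C*; it is only an illustration, not the paper's formal semantics:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Conceptual illustration (not the paper's formal semantics): a data-parallel
// conditional implemented with a stack of activity masks. Each "virtual
// processor" i executes a statement only where active()[i] is true.
public class ActivityMaskSketch {

    static final int N = 8;
    static final Deque<boolean[]> contexts = new ArrayDeque<>();

    static boolean[] active() { return contexts.peek(); }

    // Enter the "where" branch: narrow the parent mask with the condition.
    static void pushWhere(boolean[] condition) {
        boolean[] parent = active();
        boolean[] narrowed = new boolean[N];
        for (int i = 0; i < N; i++) narrowed[i] = parent[i] && condition[i];
        contexts.push(narrowed);
    }

    // Switch to the "elsewhere" branch: complement the condition under the parent mask.
    static void elseWhere(boolean[] condition) {
        contexts.pop();
        boolean[] parent = active();
        boolean[] narrowed = new boolean[N];
        for (int i = 0; i < N; i++) narrowed[i] = parent[i] && !condition[i];
        contexts.push(narrowed);
    }

    static void popWhere() { contexts.pop(); }

    public static void main(String[] args) {
        boolean[] all = new boolean[N];
        Arrays.fill(all, true);
        contexts.push(all);

        int[] x = {3, -1, 4, -1, 5, -9, 2, -6};
        boolean[] negative = new boolean[N];
        for (int i = 0; i < N; i++) negative[i] = x[i] < 0;

        // Data-parallel conditional: where (x < 0) x = 0; elsewhere x = x * 2;
        pushWhere(negative);
        for (int i = 0; i < N; i++) if (active()[i]) x[i] = 0;
        elseWhere(negative);
        for (int i = 0; i < N; i++) if (active()[i]) x[i] = x[i] * 2;
        popWhere();

        System.out.println(Arrays.toString(x)); // [6, 0, 8, 0, 10, 0, 4, 0]
    }
}
```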

Collaboration


Dive into Luc Bougé's collaborations.

Top Co-Authors

Philip J. Hatcher, University of New Hampshire
David Cachera, École normale supérieure de Lyon
Jean-François Méhaut, École normale supérieure de Lyon
Gil Utard, École normale supérieure de Lyon
Keith McGuigan, University of New Hampshire
Mark MacBeth, University of New Hampshire
Christian Pérez, École normale supérieure de Lyon
Lokman Rahmani, Centre national de la recherche scientifique