Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jerome Lauret is active.

Publication


Featured research published by Jerome Lauret.


Lawrence Berkeley National Laboratory | 2009

FastBit: interactively searching massive data

Kesheng Wu; Sean Ahern; Edward W. Bethel; Jacqueline H. Chen; Hank Childs; E. Cormier-Michel; Cameron Geddes; Junmin Gu; Hans Hagen; Bernd Hamann; Wendy S. Koegler; Jerome Lauret; Jeremy S. Meredith; Peter Messmer; Ekow J. Otoo; V. Perevoztchikov; A. M. Poskanzer; Prabhat; Oliver Rübel; Arie Shoshani; Alexander Sim; Kurt Stockinger; Gunther H. Weber; W. M. Zhang

As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
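
To make the idea concrete, here is a minimal sketch of an equality-encoded bitmap index, the core mechanism behind FastBit; the real library adds WAH compression, multi-level encoding, and binning for high-cardinality columns, none of which are shown here.

    # Minimal bitmap-index sketch: one bit vector per distinct column value.
    from collections import defaultdict

    class BitmapIndex:
        def __init__(self, column):
            # A Python int serves as the bit vector; bit r set => row r has the value.
            self.bitmaps = defaultdict(int)
            for row, value in enumerate(column):
                self.bitmaps[value] |= 1 << row

        def query_range(self, lo, hi):
            # Answer "lo <= value <= hi" by OR-ing per-value bitmaps;
            # the base data is never scanned.
            result = 0
            for value, bits in self.bitmaps.items():
                if lo <= value <= hi:
                    result |= bits
            return [r for r in range(result.bit_length()) if (result >> r) & 1]

    energy = [12, 47, 3, 47, 20, 12]
    index = BitmapIndex(energy)
    print(index.query_range(10, 30))  # rows with 10 <= energy <= 30 -> [0, 4, 5]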


Journal of Physics: Conference Series | 2007

Virtual workspaces for scientific applications

Kate Keahey; Timothy Freeman; Jerome Lauret; Doug Olson

Scientists often face the need for more computing power than is available locally, but are constrained by the fact that even if the required resources were available remotely, their complex software stack would not be easy to port to those resources. Many applications are dependency-rich and complex, making it hard to run them on anything but a dedicated platform. Worse, even if the applications do run on another system, the results they produce may not be consistent across different runs. As part of the Center for Enabling Distributed Petascale Science (CEDPS) project we have been developing the workspace service, which allows authorized clients to dynamically provision execution environments on remote computers using virtual machine technology. Virtual machines provide an excellent implementation of a portable environment, as they allow users to configure an environment once and then deploy it on a variety of platforms. This paper describes a proof of concept of this strategy developed for High-Energy and Nuclear Physics (HENP) applications such as STAR. We are currently building on this work to enable production STAR runs in virtual machines.
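
A hypothetical illustration of the provisioning pattern the paper describes: the environment is configured once as data, then deployed unchanged wherever the image can run. The names below (Workspace, deploy_on, the image file, the site hostnames) are invented for this sketch and are not the actual CEDPS workspace service API.

    from dataclasses import dataclass

    @dataclass
    class Workspace:
        image: str        # VM image with the full software stack baked in
        cpus: int
        memory_gb: int

    def deploy_on(site: str, ws: Workspace) -> str:
        # A real client would authenticate and contact the site's VM manager here.
        return f"{ws.image} running on {site} ({ws.cpus} CPUs, {ws.memory_gb} GB)"

    star_env = Workspace(image="star-env.img", cpus=2, memory_gb=4)
    for site in ["siteA.example.org", "siteB.example.org"]:
        print(deploy_on(site, star_env))  # same validated environment everywhere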


Journal of Physics: Conference Series | 2012

Offloading peak processing to virtual farm by STAR experiment at RHIC

J. Balewski; Jerome Lauret; Doug Olson; Iwona Sakrejda; D. Arkhipkin; John Bresnahan; Kate Keahey; Jeff Porter; Justin Stevens; Matthew Walker

The Virtual Machine framework was used to assemble the STAR computing environment, validated once, and deployed on over 100 8-core VMs at NERSC and Argonne National Lab, forming a homogeneous Virtual Farm that processed events acquired in real time by the STAR detector located at Brookhaven National Lab. To provide time-dependent calibration, a database snapshot scheme was devised. Two high-capacity filesystems, located on opposite coasts of the US and interconnected via Globus Online, were used in this setup, resulting in a highly scalable Cloud-based extension of STAR computing resources. The system was in continuous operation for over 3 months.
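
A sketch of the database-snapshot idea for time-dependent calibration, under the assumption (made for illustration; the paper does not give the schema) that each calibration entry is valid from its begin time until the next entry's begin time:

    import bisect

    # Frozen calibration snapshot shipped with the VM image:
    # (begin_time, payload), sorted by begin_time.
    snapshot = [
        (1000, {"drift_velocity": 5.45}),
        (2000, {"drift_velocity": 5.47}),
        (3000, {"drift_velocity": 5.44}),
    ]

    def calibration_at(event_time):
        # Pick the last entry whose begin_time is <= event_time.
        times = [t for t, _ in snapshot]
        i = bisect.bisect_right(times, event_time) - 1
        if i < 0:
            raise ValueError("event predates the snapshot")
        return snapshot[i][1]

    print(calibration_at(2500))  # -> {'drift_velocity': 5.47}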


Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | 2006

Correcting for distortions due to ionization in the STAR TPC

G. Van Buren; L. Didenko; J. C. Dunlop; Y. Fisyak; Jerome Lauret; A. Lebedev; B. Stringfellow; J. H. Thomas; H. Wieman

Physics goals of the STAR Experiment at RHIC in recent (and future) years drive the need to operate the STAR TPC at ever higher luminosities, leading to increased ionization levels in the TPC gas. The resulting ionic space charge introduces field distortions in the detector which impact tracking performance. Further complications arise from ionic charge leakage into the main TPC volume from the high gain anode region. STAR has implemented corrections for these distortions based on measures of luminosity, which we present here. Additionally, we highlight a novel approach to applying the corrections on an event-by-event basis applicable in conditions of rapidly varying ionization sources.
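
The correction strategy can be summarized schematically: a fixed spatial distortion map is scaled by the space charge inferred from a live luminosity measure and subtracted from each hit. The linear scaling, the 1/r map, and all numbers below are illustrative assumptions, not the actual STAR TPC parameterization.

    def space_charge_scale(luminosity, k=1.0e-3):
        # Assume ionization, hence space charge, grows linearly with luminosity.
        return k * luminosity

    def correct_hit(r, unit_distortion, luminosity):
        # unit_distortion(r): displacement at radius r per unit space charge.
        return r - space_charge_scale(luminosity) * unit_distortion(r)

    toy_map = lambda r: 12.0 / r              # toy 1/r distortion shape
    print(correct_hit(60.0, toy_map, 500.0))  # corrected radius for one hit -> 59.9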


International Conference on Tools with Artificial Intelligence | 2009

Using Constraint Programming to Plan Efficient Data Movement on the Grid

Michal Zerola; Michal Sumbera; Roman Barták; Jerome Lauret

Efficient data transfers and placements are paramount to optimizing the use of geographically distributed resources and minimizing the time that the processing tasks of data-intensive experiments take. We present a technique for planning data transfers to a single destination using a Constraint Programming approach. We study enhancements of the model using symmetry breaking, branch cutting, well-studied principles from the scheduling field, and several heuristics. Real-life wide-area-network characteristics are explained, and a realization of the computed formal schedule is proposed with an emphasis on bandwidth saturation. Results include a comparison of performance and trade-offs between the CP techniques and a Peer-2-Peer model.
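
The planning problem can be sketched as follows: pick a source replica for every file bound for the single destination so that the slowest incoming link finishes as early as possible. A toy exhaustive search stands in for the CP solver; all sites, sizes, and bandwidths are invented.

    import itertools

    files = {"f1": 10, "f2": 10, "f3": 6}                 # file -> size (GB)
    replicas = {"f1": ["BNL", "LBNL"], "f2": ["BNL", "LBNL"], "f3": ["BNL"]}
    bandwidth = {"BNL": 2.0, "LBNL": 1.0}                 # link to destination (GB/s)

    def makespan(assignment):
        # Time at which the most loaded link finishes its transfers.
        load = {s: 0.0 for s in bandwidth}
        for f, s in assignment.items():
            load[s] += files[f] / bandwidth[s]
        return max(load.values())

    best = min(
        (dict(zip(files, choice))
         for choice in itertools.product(*(replicas[f] for f in files))),
        key=makespan,
    )
    print(best, makespan(best))

In this toy instance f1 and f2 are interchangeable, so assignments that merely swap them have equal cost; the symmetry-breaking constraints mentioned in the abstract let a real solver prune one of each such pair.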


Journal of Physics: Conference Series | 2008

Overview of the inner silicon detector alignment procedure and techniques in the RHIC/STAR experiment

Y. Fisyak; Jerome Lauret; S. Margetis; Gene van Buren; J. Bouchet; Victor Perevoztchikov; I. Kotov; R. D. de Souza

The STAR experiment was primarily designed to detect signals of a possible phase transition in nuclear matter. Its layout, typical for a collider experiment, contains a large Time Projection Chamber (TPC) in a solenoid magnet, a set of four layers of combined silicon strip and silicon drift detectors for secondary vertex reconstruction, plus other detectors. In this presentation, we will report on recent global and individual detector-element alignment as well as drift-velocity calibration work performed on this STAR inner silicon tracking system. We will show how attention to detail positively impacts the physics capabilities of STAR and explain the iterative procedure conducted to reach such results at low, medium and high track density and detector occupancy.
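
The iterative procedure can be caricatured in a few lines: measure the mean track-to-hit residual of each detector element, shift the element by that mean, and repeat until the residuals stop moving. Real alignment fits six rigid-body parameters per element; this sketch uses a single invented offset per element.

    import random

    true_offsets = {"ladder_1": 0.12, "ladder_2": -0.08}   # unknown misalignments
    aligned = {k: 0.0 for k in true_offsets}               # current corrections

    def measured_residual(element):
        # Residual = remaining misalignment plus hit-resolution noise (toy model).
        return (true_offsets[element] - aligned[element]) + random.gauss(0, 0.01)

    for _ in range(5):                                     # alignment iterations
        for element in aligned:
            mean = sum(measured_residual(element) for _ in range(1000)) / 1000
            aligned[element] += mean                       # apply the correction
    print(aligned)  # converges toward true_offsets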


Journal of Physics: Conference Series | 2012

One click dataset transfer: toward efficient coupling of distributed storage resources and CPUs

Michal Zerola; Jerome Lauret; Roman Barták; Michal Sumbera

Massive data processing in a multi-collaboration environment with geographically spread, diverse facilities will hardly be fair to users, or use network bandwidth efficiently, unless planning and reasoning related to data movement and placement are addressed. Coordinated data-resource sharing and efficient plans that solve the data transfer problem in a dynamic way are increasingly required. We will present work whose purpose is to design and develop an automated planning system acting as a centralized decision-making component, with emphasis on optimization, coordination and load balancing. We will describe the most important optimization characteristics and a modeling approach based on constraints. A constraint-based approach allows a natural declarative formulation of what must be satisfied, without expressing how. The architecture of the system, the communication between components and the execution of the plan by the underlying data transfer tools will be shown. We will emphasize the separation of the planner from the executors and explain how to keep a proper balance between being deliberative and reactive. An extension of the model covering full coupling with, and reasoning about, computing resources will be shown. The system has been deployed within the STAR experiment over several Tier sites and has been used for data movement in support of user analyses and production processing. We will present several real use-case scenarios and the performance of the system, with a comparison to traditional solved-by-hand methods. The benefits, in terms of substantially shorter data delivery times achieved by leveraging available network paths and intermediate caches, will be revealed. Finally, we will outline several possible enhancements and avenues for future work.
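
The planner/executor separation can be sketched with a work queue: the planner is deliberative (it reasons over the whole request set and emits an ordered plan), while executors are reactive (each pulls one step at a time), so a slow or failed transfer does not block the rest. The queue-based wiring below is an illustrative assumption, not the system's actual transport.

    import queue
    import threading

    plan = queue.Queue()
    for step in [("fileA", "BNL", "Prague"), ("fileB", "LBNL", "Prague")]:
        plan.put(step)                 # planner output: (file, source, destination)

    def executor(name):
        # Reactive worker: pull steps until the plan is exhausted.
        while True:
            try:
                f, src, dst = plan.get_nowait()
            except queue.Empty:
                return
            print(f"{name}: transferring {f} from {src} to {dst}")
            plan.task_done()

    workers = [threading.Thread(target=executor, args=(f"exec{i}",)) for i in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()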


Grid Computing | 2009

Efficient multi-site data movement in distributed environment

Michal Zerola; Michal Sumbera; Jerome Lauret; Roman Barták

In order to optimize the use of all available (geographically spread) resources and minimize the processing time of experiments in high-energy and nuclear physics, the question of efficient data transfers and placements in computer networks must be faced. In this paper we present the extension of our ongoing work on automated planning of data transfers to multi-source, multi-destination criteria.


Lawrence Berkeley National Laboratory | 2006

Science-Driven Network Requirements for ESnet

Paul D. Adams; Shane Canon; Steven Carter; Brent Draney; M. Greenwald; Jason Hodges; Jerome Lauret; George Michaels; Larry Rahn; David P. Schissel; Gary Strand; Howard Walter; Michael F. Wehner; Dean N. Williams

The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities and scientists that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2002, the DOE Office of Science organized a workshop to characterize the networking requirements for Office of Science programs. Networking and middleware requirements were solicited from a representative group of science programs. The workshop was summarized in two documents: the workshop final report and a set of appendixes. This document updates the networking requirements for ESnet as put forward by the science programs listed in the 2002 workshop report. In addition, three new programs have been added. The information was gathered through interviews with knowledgeable scientists in each particular program or field.


arXiv: Performance | 2009

Using constraint programming to resolve the multi-source/multi-site data movement paradigm on the Grid

Michal Zerola; Jerome Lauret; Roman Barták; Michal Sumbera

In order to achieve both fast and coordinated data transfer to collaborative sites, as well as to create a distribution of data over multiple sites, efficient data movement is one of the most essential aspects of a distributed environment. With such capabilities at hand, truly distributed task scheduling with minimal latencies would be reachable by internationally distributed collaborations (such as those in HENP) seeking to scavenge or maximize geographically spread computational resources. But it is often not at all clear (a) how to move data when it is available from multiple sources, or (b) how to move data to multiple compute resources to achieve an optimal usage of available resources. Constraint programming (CP) is a technique from artificial intelligence and operations research that allows finding solutions in a multi-dimensional space of variables. We present a method of creating a CP model consisting of sites, links and their attributes, such as bandwidth, for grid network data transfer, also considering user tasks as part of the objective function for an optimal solution. We explore and explain the trade-off between schedule generation time and divergence from the optimal solution, and show how to improve and render viable the solution-finding time by using a search-tree time limit, approximations, restrictions such as symmetry breaking or grouping similar tasks together, or generating a sequence of optimal schedules by splitting the input problem. Results of data transfer simulation for each case also include the well-known Peer-2-Peer model, and the time taken to generate a schedule, as well as the time needed for schedule execution, are compared to the CP optimal solution. We additionally present a possible implementation aimed at bringing distributed datasets (multiple sources) to a given site in minimal time.
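
The search-tree time limit mentioned above can be sketched as an anytime search: enumerate candidate schedules under a wall-clock budget and keep the best incumbent, trading optimality for bounded planning time. The enumeration below is a stand-in for the paper's CP solver, and every number is invented.

    import itertools
    import time

    files = {f"f{i}": 8 for i in range(12)}                     # file -> size (GB)
    sources = {f: ["BNL", "LBNL", "Prague"] for f in files}     # replica locations
    bandwidth = {"BNL": 2.0, "LBNL": 1.0, "Prague": 0.5}        # GB/s to the site

    def makespan(assign):
        load = {s: 0.0 for s in bandwidth}
        for f, s in assign.items():
            load[s] += files[f] / bandwidth[s]
        return max(load.values())

    deadline = time.monotonic() + 0.5                           # planning budget: 0.5 s
    best, best_cost = None, float("inf")
    for choice in itertools.product(*(sources[f] for f in files)):
        if time.monotonic() > deadline:
            break                                               # budget spent: keep incumbent
        cost = makespan(dict(zip(files, choice)))
        if cost < best_cost:
            best, best_cost = dict(zip(files, choice)), cost
    print(best_cost)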

Collaboration


Dive into Jerome Lauret's collaborations.

Top Co-Authors

Michal Sumbera, Academy of Sciences of the Czech Republic
Victor Perevoztchikov, Brookhaven National Laboratory
Michal Zerola, Academy of Sciences of the Czech Republic
Dzmitry Makatun, Czech Technical University in Prague
Roman Barták, Charles University in Prague
Arie Shoshani, Lawrence Berkeley National Laboratory
Kesheng Wu, Lawrence Berkeley National Laboratory
G. Van Buren, Brookhaven National Laboratory
Gene van Buren, Brookhaven National Laboratory