Matthieu Lefebvre
Princeton University
Publications
Featured research published by Matthieu Lefebvre.
arXiv: Instrumentation and Detectors | 2016
G. B. Cerati; D. Riley; Kevin Mcdermott; P. Wittich; P. Elmer; Matevž Tadel; Steven R. Lantz; Slava Krutelyov; Matthieu Lefebvre; F. Würthwein; Avi Yagil
Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.
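The core idea of the abstract above, processing many track candidates at once so that vector units stay busy, can be illustrated with a batched Kalman-filter update. This is a hedged sketch only: the function name, the toy 4-dimensional state, and the NumPy formulation are illustrative assumptions, not the authors' actual code (which targets Intel Xeon/Xeon Phi in C++).

```python
# Sketch: a vectorized Kalman-filter update applied to N tracks at once.
# Names and the toy state model are illustrative, not the authors' API.
import numpy as np

def batch_kalman_update(x, P, z, H, R):
    """Update N track states in one batched operation.

    x: (N, d) state vectors      P: (N, d, d) covariances
    z: (N, m) measurements       H: (m, d) projection matrix
    R: (m, m) measurement noise
    """
    y = z - x @ H.T                            # innovations, (N, m)
    S = H @ P @ H.T + R                        # innovation covariances, (N, m, m)
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gains, (N, d, m)
    x_new = x + np.einsum('nij,nj->ni', K, y)  # updated states
    I = np.eye(x.shape[1])
    P_new = (I - K @ H) @ P                    # updated covariances
    return x_new, P_new

# Toy usage: 1000 tracks with a (x, y, vx, vy) state, measuring position only.
rng = np.random.default_rng(0)
N, d, m = 1000, 4, 2
x = rng.normal(size=(N, d))
P = np.broadcast_to(np.eye(d), (N, d, d)).copy()
H = np.zeros((m, d)); H[0, 0] = H[1, 1] = 1.0
R = 0.1 * np.eye(m)
z = x @ H.T + rng.normal(scale=0.1, size=(N, m))
x_up, P_up = batch_kalman_update(x, P, z, H, R)
```

Because every track goes through identical arithmetic, the batch dimension maps naturally onto SIMD lanes or lightweight cores, which is the property the paper exploits.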
arXiv: Computational Physics | 2018
G. B. Cerati; P. Elmer; Slava Krutelyov; Steven R. Lantz; Matthieu Lefebvre; M. Masciovecchio; Kevin Mcdermott; D. Riley; Matevž Tadel; P. Wittich; F. Würthwein; Avi Yagil
Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full performance of the hardware. This requirement can be challenging for algorithms that are naturally expressed as a sequence of small-matrix operations, such as the Kalman filter methods widely in use in high-energy physics experiments. In the High-Luminosity Large Hadron Collider (HL-LHC), for example, one of the dominant computational problems is expected to be finding and fitting charged-particle tracks during event reconstruction; today, the most common track-finding methods are those based on the Kalman filter. Experience at the LHC, both in the trigger and offline, has shown that these methods are robust and provide high physics performance. Previously we reported the significant parallel speedups that resulted from our efforts to adapt Kalman-filter-based tracking to many-core architectures such as Intel Xeon Phi. Here we report on how effectively those techniques can be applied to more realistic detector configurations and event complexity.
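One concrete reason sequences of small-matrix operations are hard to vectorize is data layout. The toy below contrasts an array-of-structures (AoS) layout, natural to write but hostile to SIMD, with a structure-of-arrays (SoA) layout that vectorizes across the whole batch; the names are illustrative, and this NumPy sketch only gestures at the custom SIMD-width matrix layouts used in production trackers.

```python
# Sketch: AoS vs. SoA layout for a batch of track parameters.
# Names are illustrative assumptions, not the authors' data structures.
import numpy as np

# AoS: one record per track; computing transverse momentum is a scalar loop.
aos_tracks = [{"px": 1.0 * i, "py": 2.0 * i} for i in range(8)]
aos_pt = [(t["px"] ** 2 + t["py"] ** 2) ** 0.5 for t in aos_tracks]

# SoA: one contiguous array per field; the whole batch vectorizes at once.
soa = {"px": np.arange(8, dtype=float),
       "py": 2.0 * np.arange(8, dtype=float)}
soa_pt = np.hypot(soa["px"], soa["py"])
```

The SoA form keeps each field contiguous in memory, so a single vector instruction can process several tracks per cycle instead of chasing pointers record by record.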
Computers & Geosciences | 2018
Ryan Modrak; Dmitry Borisov; Matthieu Lefebvre; Jeroen Tromp
SeisFlows is an open source Python package that provides a customizable waveform inversion workflow and framework for research in oil and gas exploration, earthquake tomography, medical imaging, and other areas. New methods can be rapidly prototyped in SeisFlows by inheriting from default inversion or migration classes, and code can be tested on 2D examples before application to more expensive 3D problems. Wave simulations must be performed using an external software package such as SPECFEM3D. The ability to interface with external solvers lends flexibility, and the choice of SPECFEM3D as a default option provides optional GPU acceleration and other useful capabilities. Through support for massively parallel solvers and interfaces for high-performance computing (HPC) systems, inversions with thousands of seismic traces and billions of model parameters can be performed. So far, SeisFlows has run on clusters managed by the Department of Defense, Chevron Corp., Total S.A., Princeton University, and the University of Alaska Fairbanks.
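The prototyping-by-inheritance pattern the abstract describes can be sketched as below. The class and method names are hypothetical and do not match SeisFlows' real API; the sketch only shows the design idea of overriding a single hook of a default workflow class.

```python
# Sketch: prototyping a new method by subclassing a default workflow class.
# All names here are illustrative, NOT SeisFlows' actual API.
class DefaultInversion:
    """Stand-in for a package-provided base inversion workflow."""
    def compute_direction(self, gradient):
        # Default behavior: plain steepest descent.
        return [-g for g in gradient]

    def iterate(self, gradient):
        # The rest of the workflow (I/O, solver calls, line search)
        # would live here and be inherited unchanged.
        return self.compute_direction(gradient)

class PreconditionedInversion(DefaultInversion):
    """A 'new method' prototyped by overriding a single hook."""
    def __init__(self, precond):
        self.precond = precond

    def compute_direction(self, gradient):
        # Swap in a diagonal preconditioner; everything else is inherited.
        return [-p * g for p, g in zip(self.precond, gradient)]

inv = PreconditionedInversion(precond=[0.5, 2.0])
step = inv.iterate([1.0, -1.0])
```

Because only one method is overridden, the prototype can be exercised on a cheap 2D problem before the inherited machinery is pointed at an expensive 3D solver run, which is the workflow the abstract advertises.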
Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact | 2017
David Pugmire; Ebru Bozdağ; Matthieu Lefebvre; Jeroen Tromp; Dimitri Komatitsch; Daniel Peter; Norbert Podhorszki; Judith C. Hill
In this work, we investigate global seismic tomographic models obtained by spectral-element simulations of seismic wave propagation and adjoint methods. Global crustal and mantle models are obtained with an iterative conjugate-gradient type of optimization scheme. Forward and adjoint seismic wave propagation simulations, which respectively produce synthetic seismic data for making measurements and data sensitivity kernels for computing gradients for model updates, are performed with the SPECFEM3D_GLOBE package [1] [2] at the Oak Ridge Leadership Computing Facility (OLCF) to study the structure of the Earth at unprecedented levels of detail. Using advances in solver techniques that exploit the GPUs on Titan at the OLCF, scientists are able to perform large-scale seismic inverse modeling and imaging. Using seismic data from global and regional networks for global CMT earthquakes, scientists use SPECFEM3D_GLOBE to understand the structure of the Earth's mantle. Visualization of the generated data sets provides an effective way to understand the computed wave perturbations that define the structure of the mantle.
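The conjugate-gradient-type model update mentioned above can be sketched with a toy quadratic misfit standing in for real adjoint-state gradients. The Fletcher-Reeves formula and fixed step length below are assumptions for illustration; the actual scheme behind the tomographic models may differ in preconditioning, line search, and restart strategy.

```python
# Sketch: a nonlinear conjugate-gradient model update (Fletcher-Reeves).
# The misfit is a toy quadratic; in practice the gradient would come
# from an adjoint wave-propagation simulation.
import numpy as np

def misfit_gradient(m, m_true):
    # Gradient of the toy misfit 0.5 * ||m - m_true||^2.
    return m - m_true

m_true = np.array([1.0, -2.0, 0.5])   # "true" model, unknown in practice
m = np.zeros(3)                        # starting model
g = misfit_gradient(m, m_true)
d = -g                                 # first search direction
for _ in range(10):
    alpha = 0.5                        # stand-in for a proper line search
    m = m + alpha * d                  # model update along the direction
    g_new = misfit_gradient(m, m_true)
    beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
    d = -g_new + beta * d              # new conjugate direction
    g = g_new
```

Each iteration here corresponds, in the real workflow, to one forward-plus-adjoint simulation pair: the gradient evaluation is the expensive adjoint step, and the direction update is comparatively free.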
Exascale Scientific Applications: Scalability and Performance Portability | 2017
Matthieu Lefebvre; Yangkang Chen; Wenjie Lei; David Luet; Youyi Ruan; Ebru Bozdağ; Judith C. Hill; Dimitri Komatitsch; Lion Krischer; Daniel Peter; Norbert Podhorszki; James A. Smith; Jeroen Tromp
Edited by T. P. Straatsma, K. B. Antypas, and T. J. Williams.
EPJ Web of Conferences | 2017
G. B. Cerati; P. Elmer; Slava Krutelyov; Steven R. Lantz; Matthieu Lefebvre; M. Masciovecchio; Kevin Mcdermott; D. Riley; Matevž Tadel; P. Wittich; F. Würthwein; Avi Yagil
For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPU), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward understanding these processors and the new developments to port the Kalman filter to NVIDIA GPUs.
Geophysical Journal International | 2016
Ebru Bozdağ; Daniel Peter; Matthieu Lefebvre; Dimitri Komatitsch; Jeroen Tromp; Judith C. Hill; Norbert Podhorszki; David Pugmire
Geophysical Journal International | 2016
Lion Krischer; James A. Smith; Wenjie Lei; Matthieu Lefebvre; Youyi Ruan; Elliott Sales de Andrade; Norbert Podhorszki; Ebru Bozdağ; Jeroen Tromp
International Parallel and Distributed Processing Symposium | 2018
Vivek Balasubramanian; Matteo Turilli; Weiming Hu; Matthieu Lefebvre; Wenjie Lei; Ryan Modrak; Guido Cervone; Jeroen Tromp; Shantenu Jha
11th World Congress on Computational Mechanics (WCCM XI) | 2014
Matthieu Lefebvre; Ebru Bozdağ; Henri Calandra; Dimitri Komatitsch; Wenjie Lei; Daniel Peter; Herurisa Rusmanugroho; James A. Smith; Jeroen Tromp