
Publication


Featured research published by Avi Yagil.


Journal of Physics: Conference Series | 2015

Kalman filter tracking on parallel architectures

G. B. Cerati; P. Elmer; Steven R. Lantz; Kevin Mcdermott; D. Riley; Matevž Tadel; P. Wittich; F. Würthwein; Avi Yagil

We report on the progress of our studies towards a Kalman filter track reconstruction algorithm with optimal performance on manycore architectures. The combinatorial structure of these algorithms is not immediately compatible with an efficient SIMD (or SIMT) implementation; the challenge for us is to recast the existing software so it can readily generate hundreds of shared-memory threads that exploit the underlying instruction set of modern processors. We show how the data and associated tasks can be organized in a way that is conducive to both multithreading and vectorization. We demonstrate very good performance on Intel Xeon and Xeon Phi architectures, as well as promising first results on Nvidia GPUs.
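
To make the data-organization idea concrete, here is a minimal C++ sketch (all names hypothetical, not the authors' code) of a structure-of-arrays batch of track states, where one Kalman-style update runs identically across every track in the batch so the compiler can vectorize the loop:

```cpp
#include <array>
#include <cstddef>

// A batch of kBatch track states stored structure-of-arrays (SoA):
// each state component is a contiguous array indexed by track.
constexpr std::size_t kBatch = 16;  // one SIMD-friendly batch

struct TrackStateBatch {
    std::array<float, kBatch> x, y;     // toy position components
};

struct HitBatch {
    std::array<float, kBatch> mx, my;   // measured coordinates
    std::array<float, kBatch> gx, gy;   // precomputed toy Kalman gains
};

// Toy update step: the loop body is identical for every lane, so the
// compiler can emit SIMD instructions directly over the batch index.
void updateBatch(TrackStateBatch& s, const HitBatch& h) {
    for (std::size_t i = 0; i < kBatch; ++i) {
        s.x[i] += h.gx[i] * (h.mx[i] - s.x[i]);
        s.y[i] += h.gy[i] * (h.my[i] - s.y[i]);
    }
}
```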


Journal of Physics: Conference Series | 2014

XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

L. A. T. Bauerdick; Kenneth Bloom; Brian Bockelman; D C Bradley; Sridhara Dasu; J M Dost; I. Sfiligoi; A Tadel; M. Tadel; Frank Wuerthwein; Avi Yagil

Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file open request is received and is suitable when completely random file access is expected or it is already known that a whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.
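
The control flow of the two caching strategies can be sketched as follows; every name here is hypothetical (the real implementations live inside the XRootd proxy plugin framework), so treat this as an illustration of the decision logic only:

```cpp
#include <cstdint>
#include <string>

enum class CacheMode { WholeFilePrefetch, OnDemandBlocks };

class CachingProxy {
public:
    explicit CachingProxy(CacheMode m) : mode_(m) {}

    // Strategy 1 acts at open time: start pulling the entire file
    // from the federation as soon as the open request arrives.
    void onFileOpen(const std::string& path) {
        if (mode_ == CacheMode::WholeFilePrefetch)
            fetch(path, 0, fileSize(path));
    }

    // Strategy 2 acts at read time: fetch only missing byte ranges
    // (typically rounded up to a block size) into the disk cache.
    void onRead(const std::string& path, std::uint64_t off, std::uint64_t len) {
        if (mode_ == CacheMode::OnDemandBlocks && !isCached(path, off, len))
            fetch(path, off, len);
        // ...serve the bytes from the local disk cache once present...
    }

private:
    // Stand-ins for real storage and federation I/O.
    std::uint64_t fileSize(const std::string&) { return 0; }
    bool isCached(const std::string&, std::uint64_t, std::uint64_t) { return false; }
    void fetch(const std::string&, std::uint64_t, std::uint64_t) {}

    CacheMode mode_;
};
```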


arXiv: Instrumentation and Detectors | 2015

Traditional Tracking with Kalman Filter on Parallel Architectures

G. B. Cerati; P. Elmer; Steven R. Lantz; I. Macneill; Kevin Mcdermott; D. Riley; Matevž Tadel; P. Wittich; F. Würthwein; Avi Yagil

Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The most common track finding techniques in use today, however, are those based on the Kalman filter. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. We report the results of our investigations into the potential and limitations of these algorithms on the new parallel hardware.
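
For reference, the steps such Kalman-filter trackers iterate hit by hit are the standard textbook predict/update equations (generic notation, not taken from this paper): the state estimate is propagated through the model F with process noise Q (multiple scattering, energy loss), then corrected toward each measured hit m via the gain K, with H projecting the state into measurement space and R the hit resolution:

```latex
\begin{align*}
  \hat{x}_{k|k-1} &= F_k\,\hat{x}_{k-1|k-1}, &
  P_{k|k-1} &= F_k P_{k-1|k-1} F_k^{\top} + Q_k \\
  K_k &= P_{k|k-1} H_k^{\top}\left(H_k P_{k|k-1} H_k^{\top} + R_k\right)^{-1} \\
  \hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\left(m_k - H_k\,\hat{x}_{k|k-1}\right), &
  P_{k|k} &= \left(I - K_k H_k\right) P_{k|k-1}
\end{align*}
```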


arXiv: Instrumentation and Detectors | 2016

Kalman Filter Tracking on Parallel Architectures

G. B. Cerati; D. Riley; Kevin Mcdermott; P. Wittich; P. Elmer; Matevž Tadel; Steven R. Lantz; Slava Krutelyov; Matthieu Lefebvre; F. Würthwein; Avi Yagil

Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.
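
The coarse-grained layer of such a design can be pictured as sharing seeds out across shared-memory threads, while the per-seed Kalman arithmetic is vectorized in SoA batches as in the earlier sketch. A minimal C++ illustration, with purely hypothetical names and a simple static partition:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

struct Seed  { /* initial track parameters */ };
struct Track { /* built track candidate */ };

// Stand-in for the vectorized inner kernel.
inline Track buildTrackFromSeed(const Seed&) { return Track{}; }

std::vector<Track> buildAllTracks(const std::vector<Seed>& seeds,
                                  unsigned nThreads) {
    std::vector<Track> out(seeds.size());
    std::vector<std::thread> pool;
    pool.reserve(nThreads);
    for (unsigned t = 0; t < nThreads; ++t) {
        pool.emplace_back([&, t] {
            // Interleaved static partition: thread t handles seeds
            // t, t + nThreads, t + 2*nThreads, ...
            for (std::size_t i = t; i < seeds.size(); i += nThreads)
                out[i] = buildTrackFromSeed(seeds[i]);
        });
    }
    for (auto& th : pool) th.join();
    return out;
}
```

In practice a dynamic scheduler is preferable to this static split, since track candidates vary widely in cost; the sketch only shows where the thread-level parallelism enters.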


Journal of Physics: Conference Series | 2012

Xrootd monitoring for the CMS experiment

L. A. T. Bauerdick; Kenneth Bloom; Brian Bockelman; D C Bradley; Sridhara Dasu; I. Sfiligoi; A Tadel; M. Tadel; Frank Wuerthwein; Avi Yagil

During spring and summer of 2011, CMS deployed Xrootd-based access for all US T1 and T2 sites. This allows for remote access to all experiment data on disk in the US. It is used for user analysis, visualization, running of jobs at computing sites when data is not available at local sites, and as a fail-over mechanism for data access in jobs. Monitoring of this Xrootd infrastructure is implemented on three levels. Basic service and data availability checks are performed by Nagios probes. The second level uses Xrootd's summary data stream; this data is aggregated from all sites and fed into a MonALISA service providing visualization and storage. The third level uses Xrootd's detailed monitoring stream, which includes detailed information about users, opened files and individual data transfers. A custom application was developed to process this information. It currently provides a real-time view of the system usage and can store data into ROOT files for detailed analysis. Detailed monitoring allows us to determine dataset popularity and to detect abuses of the system, including sub-optimal usage of the Xrootd protocol and the ROOT prefetching mechanism.
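
As a rough picture of the second monitoring level: Xrootd servers can be configured (via the xrd.report directive) to push periodic summary records over UDP, which a collector aggregates. A toy POSIX listener, with an arbitrary example port and no parsing, just to show the shape of such a collector (not the MonALISA feeder CMS actually used):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9931);  // arbitrary example port

    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr) < 0) {
        perror("bind"); return 1;
    }

    char buf[65536];
    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof buf - 1, 0, nullptr, nullptr);
        if (n <= 0) break;
        buf[n] = '\0';
        // Each datagram carries one periodic summary record.
        std::printf("summary record (%zd bytes):\n%s\n", n, buf);
    }
    close(fd);
    return 0;
}
```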


arXiv: Computational Physics | 2018

Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Architectures

G. B. Cerati; P. Elmer; Slava Krutelyov; Steven R. Lantz; Matthieu Lefebvre; M. Masciovecchio; Kevin Mcdermott; D. Riley; Matevž Tadel; P. Wittich; F. Würthwein; Avi Yagil

Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full performance of the hardware. This requirement can be challenging for algorithms that are naturally expressed as a sequence of small-matrix operations, such as the Kalman filter methods widely in use in high-energy physics experiments. In the High-Luminosity Large Hadron Collider (HL-LHC), for example, one of the dominant computational problems is expected to be finding and fitting charged-particle tracks during event reconstruction; today, the most common track-finding methods are those based on the Kalman filter. Experience at the LHC, both in the trigger and offline, has shown that these methods are robust and provide high physics performance. Previously we reported the significant parallel speedups that resulted from our efforts to adapt Kalman-filter-based tracking to many-core architectures such as Intel Xeon Phi. Here we report on how effectively those techniques can be applied to more realistic detector configurations and event complexity.
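
The "sequence of small-matrix operations" difficulty mentioned above is commonly attacked by batching: store element (r,c) of many small matrices contiguously, so a batched multiply becomes identical, vectorizable lane operations (this group's production code uses a custom library for this; the layout below is a simplified stand-in with hypothetical names):

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t D  = 6;   // e.g., a 6x6 track covariance matrix
constexpr std::size_t NB = 16;  // matrices per batch

struct MatrixBatch {
    // elem[r][c][i] = element (r,c) of matrix i: the batch index is
    // innermost, so same-position elements are contiguous in memory.
    std::array<std::array<std::array<float, NB>, D>, D> elem{};
};

// C = A * B for all NB matrices at once; the innermost loop over the
// batch index is unit-stride and vectorizes.
void multiplyBatch(const MatrixBatch& A, const MatrixBatch& B, MatrixBatch& C) {
    C = MatrixBatch{};  // zero the accumulator
    for (std::size_t r = 0; r < D; ++r)
        for (std::size_t c = 0; c < D; ++c)
            for (std::size_t k = 0; k < D; ++k)
                for (std::size_t i = 0; i < NB; ++i)
                    C.elem[r][c][i] += A.elem[r][k][i] * B.elem[k][c][i];
}
```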


Proceedings of 38th International Conference on High Energy Physics — PoS(ICHEP2016) | 2017

Impact of tracker layout and algorithmic choices on cost of computing at high pileup.

Slava Krutelyov; G. B. Cerati; M. Tadel; Frank Wuerthwein; Avi Yagil

High luminosity operation of the LHC is expected to deliver proton-proton collisions to experiments with an average number of proton-proton interactions reaching 200 per bunch crossing. In this environment, reconstruction of charged-particle tracks with current algorithms dominates reconstruction time and is increasingly computationally challenging. We discuss the importance of taking computing costs into account as a critical part of future tracker designs in HEP, as well as the importance of the algorithms used.
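
As a purely illustrative back-of-the-envelope estimate (the exponent is assumed for the example, not a result from this talk): if per-event track-finding time grows as a power of the average pileup, then going from typical Run 2 conditions of roughly 30 interactions per crossing to the 200 quoted above is dramatic even for modest exponents:

```latex
t(\langle \mathrm{PU} \rangle) \propto \langle \mathrm{PU} \rangle^{\alpha}
\qquad\Rightarrow\qquad
\frac{t(200)}{t(30)} \approx \left(\frac{200}{30}\right)^{2} \approx 44
\quad \text{for an assumed } \alpha \approx 2 .
```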


EPJ Web of Conferences | 2017

Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs

G. B. Cerati; P. Elmer; Slava Krutelyov; Steven R. Lantz; Matthieu Lefebvre; M. Masciovecchio; Kevin Mcdermott; D. Riley; Matevž Tadel; P. Wittich; F. Würthwein; Avi Yagil

For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPU), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward understanding these processors and the new developments to port the Kalman filter to NVIDIA GPUs.
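
Conceptually, a GPU port can keep the same SoA layout used for CPU vectorization: the batch loop simply becomes a thread grid. A plain-C++ rendering of that mapping, with the CUDA-specific parts indicated in comments (an assumption-level sketch, not the authors' kernel):

```cpp
// Pointers into SoA arrays shared by all lanes; names are hypothetical.
struct StateSoA { float* x; float* y; float* gx; float* mx; float* gy; float* my; };

// On a GPU this would be a __global__ kernel and `lane` would come from
// blockIdx.x * blockDim.x + threadIdx.x rather than a loop index.
void updateLane(StateSoA s, int lane) {
    s.x[lane] += s.gx[lane] * (s.mx[lane] - s.x[lane]);
    s.y[lane] += s.gy[lane] * (s.my[lane] - s.y[lane]);
}

void updateAll(StateSoA s, int nLanes) {
    for (int lane = 0; lane < nLanes; ++lane)  // "grid launch" on CPU
        updateLane(s, lane);
}
```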


2015 IEEE/ACM 2nd International Symposium on Big Data Computing (BDC) | 2016

Any Data, Any Time, Anywhere: Global Data Access for Science

Kenneth Bloom; T. Boccali; Brian Bockelman; D C Bradley; Sridhara Dasu; J M Dost; Federica Fanzago; I. Sfiligoi; A Tadel; M. Tadel; C. Vuosalo; F. Würthwein; Avi Yagil; M. Zvada

Data access is key to science driven by distributed high-throughput computing (DHTC), an essential technology for many major research projects such as High Energy Physics (HEP) experiments. However, achieving efficient data access becomes quite difficult when many independent storage sites are involved, because users are burdened with learning the intricacies of accessing each system and keeping careful track of data location. We present an alternate approach: the Any Data, Any Time, Anywhere (AAA) infrastructure. Combining several existing software products, AAA presents a global, unified view of storage systems: a data federation, a global filesystem for software delivery, and a workflow management system. We describe how one HEP experiment, the Compact Muon Solenoid (CMS), uses the AAA infrastructure, and present some simple performance metrics.
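
The user-facing effect of the data federation is that a file anywhere in it opens by logical name through a redirector. A minimal ROOT example, assuming ROOT built with XRootd support and valid site authentication (e.g., a grid proxy); the redirector hostname and file path here are illustrative:

```cpp
#include "TFile.h"

void open_remote() {
    // The redirector locates a site actually hosting the file and the
    // client then reads from it directly; no manual data placement.
    TFile* f = TFile::Open(
        "root://cmsxrootd.fnal.gov//store/path/to/dataset/file.root");
    if (f && !f->IsZombie()) {
        f->ls();   // inspect contents as if the file were local
        f->Close();
    }
}
```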


Journal of Physics: Conference Series | 2012

Multiple-view, Multiple-selection Visualization of Simulation Geometry in CMS

L A T Bauerdick; G Eulisse; C Jones; D Kovalskyi; T McCauley; A Mrak Tadel; I Osborne; M. Tadel; Avi Yagil

Fireworks, the event-display program of CMS, was extended with an advanced geometry visualization package. ROOT's TGeo geometry is used as the internal representation, shared among several geometry views. Each view is represented by a GUI list-tree widget, implemented as a flat vector to allow for fast searching, selection, and filtering by material type, node name, and shape type. Display of logical and physical volumes is supported. Color, transparency, and visibility flags can be modified for each node or for a selection of nodes. Further operations, like opening of a new view or changing of the root node, can be performed via a context menu. Node selection and graphical properties determined by the list-tree view can be visualized in any 3D graphics view of Fireworks. As each 3D view can display any number of geometry views, a user is free to combine different geometry-view selections within the same 3D view. Node selection by proximity to a given point is possible. A visual clipping box can be set for each geometry view to limit geometry drawing into a specified region. Visualization of geometric overlaps, as detected by TGeo, is also supported. The geometry visualization package is used for detailed inspection and display of simulation geometry with or without the event data. It also serves as a tool for geometry debugging and inspection, facilitating development of geometries for CMS detector upgrades and for SLHC.
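
The per-volume operations the GUI exposes correspond to standard TGeo calls. A small sketch of the same manipulations done directly in ROOT; the geometry file name and volume name are hypothetical:

```cpp
#include "TGeoManager.h"
#include "TGeoVolume.h"

void inspect_geometry() {
    // Load a simulation geometry exported to a ROOT file.
    TGeoManager* geom = TGeoManager::Import("cmsSimGeom.root");
    if (!geom) return;

    // Adjust per-volume display properties, as the list-tree view does.
    if (TGeoVolume* vol = geom->FindVolumeFast("TRAK")) {
        vol->SetLineColor(kBlue);
        vol->SetTransparency(60);   // percent
        vol->SetVisibility(true);
    }

    // Overlap detection, as used for geometry debugging.
    geom->CheckOverlaps(0.001);     // tolerance in cm
    geom->PrintOverlaps();
}
```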

Collaboration


Dive into Avi Yagil's collaborations.

Top Co-Authors

M. Tadel, University of California
F. Würthwein, University of California
Matevž Tadel, University of California
P. Elmer, Princeton University