
Publication


Featured research published by K. Kordas.


IEEE Transactions on Nuclear Science | 2014

A Multi-Core FPGA-Based 2D-Clustering Implementation for Real-Time Image Processing

Calliope-Louisa Sotiropoulou; S. Gkaitatzis; A. Annovi; M. Beretta; P. Giannetti; K. Kordas; P. Luciano; Spiridon Nikolaidis; Ch. Petridou; G. Volpi

A multi-core FPGA-based 2D-clustering implementation for real-time image processing is presented in this paper. The clustering algorithm uses a moving-window technique to reduce the time and data required for the cluster identification process. The implementation is fully generic, with an adjustable detection-window size. A fundamental characteristic of the implementation is that multiple clustering cores can be instantiated. Each core can work on a different identification window, processing data from independent “images” in parallel and thus increasing performance by exploiting more FPGA resources. The algorithm and implementation were developed for the Fast TracKer processor for the trigger upgrade of the ATLAS experiment, but their generic design makes them easily adaptable to other demanding image-processing applications that require real-time pixel clustering.
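The moving-window idea can be sketched in software as follows (a minimal illustration only, not the FPGA design; the hit format and window dimensions are assumed for the example):

```python
def cluster_hits(hits, win_rows=8, win_cols=8):
    """Group pixel hits into clusters with a moving detection window.

    Each unassigned hit seeds a window; every remaining hit that falls
    inside the window joins that cluster. A software sketch only: the
    FPGA implementation streams hits and instantiates multiple cores.
    """
    remaining = sorted(hits)  # process hits in readout order
    clusters = []
    while remaining:
        r0, c0 = remaining[0]  # seed hit anchors the window
        in_win = [(r, c) for (r, c) in remaining
                  if abs(r - r0) < win_rows and abs(c - c0) < win_cols]
        clusters.append(in_win)
        remaining = [h for h in remaining if h not in in_win]
    return clusters

# Two well-separated groups of hits yield two clusters.
print(cluster_hits([(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]))
```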


Journal of Instrumentation | 2014

A parallel FPGA implementation for real-time 2D pixel clustering for the ATLAS Fast Tracker Processor

Calliope-Louisa Sotiropoulou; S. Gkaitatzis; A. Annovi; M. Beretta; K. Kordas; Spyridon Nikolaidis; C. Petridou; G. Volpi

The parallel 2D pixel-clustering FPGA implementation used for the input system of the ATLAS Fast TracKer (FTK) processor is presented. The input system of the FTK processor will receive data from the Pixel and micro-strip detectors via the inner-detector ATLAS readout drivers (RODs) at full rate, for a total of 760 Gb/s, as sent by the RODs after level-1 triggers. Clustering serves two purposes: it reduces the high rate of the received data before further processing, and it determines the cluster centroid to obtain the best spatial measurement. For the pixel detectors, clustering is implemented with a 2D-clustering algorithm that takes advantage of a moving-window technique to minimize the logic required for cluster identification. The cluster-detection window size can be adjusted to optimize the cluster identification process. Additionally, the implementation can be parallelized by instantiating multiple cores that identify different clusters independently, thus exploiting more FPGA resources. This flexibility makes the implementation suitable for a variety of demanding image-processing applications. The implementation is robust against bit errors in the input data stream and drops all data that cannot be identified. In the unlikely event of missing control words, the implementation ensures stable data processing by inserting the missing control words into the data stream. The 2D pixel-clustering implementation was developed and tested in both single-flow and parallel versions. The first parallel version, with 16 parallel cluster-identification engines, is presented. The input data from the RODs are received through S-Links, and the processing units that follow the clustering implementation also require a single data stream; therefore, data-parallelizing (demultiplexing) and serializing (multiplexing) modules are introduced to accommodate the parallelized version and restore the data stream afterwards.
The results of the first hardware tests of the single-flow implementation on the custom FTK input mezzanine (IM) board are presented. We report on the integration of 16 parallel engines in the same FPGA and the resulting performance. The parallel 2D-clustering implementation has sufficient processing power to meet the specification for the Pixel layers of ATLAS, for up to 80 overlapping pp collisions, which corresponds to the maximum LHC luminosity planned until 2022.
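The demultiplexing and serializing around the parallel engines can be illustrated with a simple round-robin scheme (a software sketch under the assumption that each lane preserves event order; the real modules operate on S-Link data streams):

```python
def demux(stream, n_engines):
    """Round-robin demultiplexer: spread a single stream of independent
    events over n_engines parallel lanes."""
    lanes = [[] for _ in range(n_engines)]
    for i, event in enumerate(stream):
        lanes[i % n_engines].append(event)
    return lanes

def mux(lanes):
    """Serializer: visit the lanes round-robin to rebuild one ordered
    stream after the parallel processing step."""
    out, i = [], 0
    while any(lanes):
        lane = lanes[i % len(lanes)]
        if lane:
            out.append(lane.pop(0))
        i += 1
    return out

# The round trip restores the original event order.
print(mux(demux(list(range(10)), 4)))
```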


Journal of Instrumentation | 2015

ATCA-based ATLAS FTK input interface system

Yasuyuki Okumura; T. Liu; Jamieson Olsen; T. Iizawa; T. Mitani; T. Korikawa; K. Yorita; A. Annovi; M. Beretta; M. Gatta; Calliope-Louisa Sotiropoulou; S. Gkaitatzis; K. Kordas; Naoki Kimura; M. Cremonesi; H. Yin; Zijun Xu

High-luminosity conditions at the LHC pose many unique challenges for potential silicon-based track-trigger systems. One of the major challenges is data formatting, where hits from thousands of silicon modules must first be shared and organized into overlapping eta-phi trigger towers. Communication between nodes requires high bandwidth, low latency, and flexible real-time data sharing, for which a full-mesh backplane is a natural solution. A custom Advanced Telecommunications Computing Architecture (ATCA) data processing board was designed with the goal of creating a scalable architecture abundant in flexible, non-blocking, high-bandwidth board-to-board communication channels while keeping the design as simple as possible. We have performed the first prototype-board testing, and our first attempt at designing the prototype system has proven successful. Leveraging the experience gained through designing, building, and testing the prototype board system, we are in the final stages of laying out the next-generation board, which will be used in the ATLAS Level-2 Fast TracKer as the Data Formatter, as well as in the CMS Level-1 tracking-trigger R&D for early technical demonstrations. The first stage of the ATLAS Fast TracKer (FTK) is an ATCA-based input interface system, where hits from the entire silicon tracker are clustered and organized into overlapping eta-phi trigger towers before being sent to the tracking engines. First, FTK Input Mezzanine cards receive hit data and perform clustering to reduce the data volume. Then, the ATCA-based Data Formatter system organizes the trigger-tower data, sharing data among boards over full-mesh backplanes and optical fibers. The board- and system-level design concepts and implementation details, as well as the operational experience from the FTK full-chain testing, are presented.
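The duplication of hits into overlapping eta-phi towers can be sketched as follows (the tower grid size and overlap margin below are illustrative assumptions, not the actual FTK segmentation):

```python
import math

N_ETA, N_PHI = 4, 16   # illustrative tower grid (assumed, not the FTK values)
OVERLAP = 0.1          # overlap margin in eta/phi units (assumed)

def towers_for_hit(eta, phi, eta_min=-2.5, eta_max=2.5):
    """Return the (eta, phi) indices of every overlapping trigger tower a
    hit belongs to. Hits near a boundary are duplicated into neighbouring
    towers so each tower's tracking engine sees complete track candidates."""
    d_eta = (eta_max - eta_min) / N_ETA
    d_phi = 2 * math.pi / N_PHI
    towers = set()
    for ie in range(N_ETA):
        if eta_min + ie * d_eta - OVERLAP <= eta <= eta_min + (ie + 1) * d_eta + OVERLAP:
            for ip in range(N_PHI):
                if -math.pi + ip * d_phi - OVERLAP <= phi <= -math.pi + (ip + 1) * d_phi + OVERLAP:
                    towers.add((ie, ip))
    return towers

# A hit exactly on an eta and a phi boundary is shared by four towers.
print(towers_for_hit(0.0, 0.0))
```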


international conference on modern circuits and systems technologies | 2017

A software demonstrator for cognitive image processing using the Associative Memory chip

S. Gkaitatzis; Calliope Louisa Sotiropoulou; P. Luciano; P. Giannetti; K. Kordas

This paper presents the design of a software demonstrator to be used in conjunction with an embedded system for real-time pattern matching. The demonstrator was designed to verify correct hardware operation and to calculate the various constants used; the operations on the underlying model are therefore bit-accurate. The embedded hardware is based on systems developed for use in the field of High Energy Physics (HEP) and, in particular, in the trigger system of the ATLAS experiment. The implemented algorithm is modeled on the learning process of human vision and acts as an edge detector. The demonstrator uses the Qt application framework, and the underlying model is written in C++. This separation allows the application to be used as an image viewer or as a command-line tool. The latter allows fast and efficient use of the application for the parallel processing of multiple images, the generation of pattern banks, and the calculation of the constants used in the hardware.


IEEE Transactions on Nuclear Science | 2017

The Associative Memory System Infrastructures for the ATLAS Fast Tracker

Calliope Louisa Sotiropoulou; I. Maznas; S. Citraro; A. Annovi; L. S. Ancu; R. Beccherle; F. Bertolucci; Nicolo Vladi Biesuz; D. Calabro; Francesco Crescioli; D. Dimas; Mauro Dell'Orso; S. Donati; Christos Gentsos; P. Giannetti; S. Gkaitatzis; J. Gramling; V. Greco; P. Kalaitzidis; K. Kordas; N. Kimura; Takashi Kubota; A. Iovene; A. Lanza; P. Luciano; B. Magnin; K. Mermikli; H. Nasimi; A. Negri; S. Nikolaidis

The associative memory (AM) system of the Fast Tracker (FTK) processor has been designed for the tracking-trigger upgrade of the ATLAS detector at the CERN Large Hadron Collider. The system performs pattern matching (PM) using the detector hits of particles in the ATLAS silicon tracker. The AM system is the main processing element of FTK and is based primarily on application-specific integrated circuits (ASICs), the AM chips, designed to execute PM with a high degree of parallelism. It finds track candidates at low resolution, which become seeds for full-resolution track fitting. The AM system implementation is based on a collection of large 9U Versa Module Europa (VME) boards, named “serial link processors” (AMBSLPs). On these boards, a large volume of data traffic is carried over a network of 900 2-Gb/s serial links. The complete AM-based processor consumes much less power (~50 kW) than its CPU equivalent and is much smaller in size. Each AMBSLP has a power consumption of ~250 W, and there will be 16 of them in a crate. This results in unusually high power consumption for a VME crate and requires complex custom infrastructure to provide sufficient cooling. This paper reports on the design and testing of the infrastructure needed to run and cool a system of 16 AMBSLPs in the same crate, the integration of the AMBSLP inside a first FTK slice, the performance of the produced prototypes (both hardware and firmware), and their tests in the global FTK integration. This is an important milestone to be met before FTK production.


IEEE Transactions on Nuclear Science | 2017

A Coprocessor for the Fast Tracker Simulation

Christos Gentsos; G. Volpi; S. Gkaitatzis; P. Giannetti; S. Citraro; Francesco Crescioli; K. Kordas; Spiridon Nikolaidis

The Fast Tracker (FTK) executes real-time tracking for online event selection in the ATLAS experiment. Data-processing speed is achieved by exploiting pipelining and parallel processing. Track reconstruction is executed in two stages. The first stage, implemented on custom application-specific integrated circuits (ASICs) called associative memory (AM) chips, performs pattern matching (PM) to identify track candidates at low resolution. The second stage, implemented on field-programmable gate arrays (FPGAs), builds on the PM results, performing track fitting at full resolution. The use of such a parallelized architecture for real-time event selection opens up a huge new computing problem related to the analysis of the acquired samples: millions of events have to be simulated to determine the efficiency and the properties of the reconstructed tracks with a small statistical error. AM chip emulation is a computationally intensive task when implemented in software running on commercial resources. This paper proposes the use of a hardware coprocessor to solve this problem efficiently. We report on the implementation and performance of all the functions requiring massive computing power in a modern, compact embedded system for track reconstruction. That system is a miniaturization of the complex FTK processing unit and is also well suited to powering applications outside the realm of high-energy physics.
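The second-stage fit in the FTK scheme is a linear transform of the hit coordinates, with constants precomputed offline for each matched pattern. A toy sketch (the constants below are invented for illustration):

```python
# Hypothetical fit constants for one matched pattern: 2 track parameters
# are computed from 3 hit coordinates via p = C @ x + q. In FTK-style
# linearized fitting, C and q are derived offline from training tracks.
C = [[0.5, -0.2, 0.1],
     [0.0,  0.3, -0.4]]
q = [1.0, -0.5]

def fit_track(x):
    """Full-resolution track fit as one multiply-accumulate pass per
    parameter, which is what makes it cheap to execute inside an FPGA."""
    return [sum(c * xj for c, xj in zip(row, x)) + qi
            for row, qi in zip(C, q)]

print(fit_track([1.0, 2.0, 3.0]))
```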


nuclear science symposium and medical imaging conference | 2015

Highly parallelized pattern matching execution for the ATLAS experiment

A. Annovi; F. Bertolucci; N. Biesuz; D. Calabro; G. Calderini; S. Citraro; Francesco Crescioli; D. Dimas; Mauro Dell'Orso; S. Donati; Christos Gentsos; P. Giannetti; S. Gkaitatzis; V. Greco; P. Kalaitzidis; K. Kordas; N. Kimura; T. Kubota; A. Lanza; P. Luciano; B. Magnin; I. Maznas; K. Mermikli; H. Nasimi; Spyridon Nikolaidis; M. Piendibene; A. Sakellariou; D. Sampsonidis; C.-L. Sotiropoulou; G. Volpi

The Associative Memory (AM) system of the Fast TracKer (FTK) processor has been designed to perform pattern matching using as input the data from the silicon tracker of the ATLAS experiment. The AM is the primary component of the FTK system and is designed using ASIC technology (the AM chip) to execute pattern matching with a high degree of parallelism. The FTK system finds track candidates at low resolution that are seeds for full-resolution track fitting. The AM system implementation is named the “Serial Link Processor” and is based on an extremely powerful network of 2-Gb/s serial links to sustain the large data traffic. This paper reports on the design of the Serial Link Processor, which consists of two types of boards: the Little Associative Memory Board (LAMB), a mezzanine on which the AM chips are mounted, and the Associative Memory Board (AMB), a 9U VME motherboard that hosts four LAMB daughterboards. We also report on the performance of the prototypes (both hardware and firmware) produced and tested in the global FTK integration, an important milestone to be met before FTK production.
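The parallel compare performed by every AM cell can be emulated in software. A sketch, where the patterns, the four-layer geometry, and the coarse hit addresses ("superstrips", following common FTK terminology) are all invented for illustration:

```python
# Each stored pattern holds one coarse-resolution hit address ("superstrip")
# per detector layer; in the AM chip every pattern is compared at once.
PATTERNS = [
    (3, 7, 2, 9),
    (3, 7, 2, 8),
    (1, 0, 5, 5),
]

def match(hits_per_layer, min_layers=4):
    """hits_per_layer: one set of superstrip addresses per layer.
    Return the indices of patterns with at least min_layers layers matched;
    these low-resolution candidates seed the full-resolution track fit."""
    fired = []
    for idx, pattern in enumerate(PATTERNS):
        n = sum(superstrip in hits_per_layer[layer]
                for layer, superstrip in enumerate(pattern))
        if n >= min_layers:
            fired.append(idx)
    return fired

# Patterns 0 and 1 share the first three layers and both fire here.
print(match([{3, 1}, {7}, {2}, {9, 8}]))
```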


nuclear science symposium and medical imaging conference | 2015

The future evolution of the Fast Tracker processing unit

Christos Gentsos; Francesco Crescioli; F. Bertolucci; Daniel Magalotti; S. Citraro; K. Kordas; Spiridon Nikolaidis

Real-time tracking is a key ingredient of online event selection at hadron colliders. The Silicon Vertex Tracker at the CDF experiment and the Fast Tracker (FTK) at ATLAS are two successful examples of the importance of dedicated hardware for reconstructing full events at hadron colliders. We present the future evolution of this technology for applications in the High Luminosity runs at the Large Hadron Collider, where data-processing speed will be achieved with custom VLSI pattern recognition and linearized track fitting executed inside modern FPGAs, exploiting deep pipelining, extensive parallelism, and efficient use of the available resources. In the current system, one large FPGA executes full-resolution track fitting inside low-resolution candidate tracks found by a set of custom ASIC devices called Associative Memories. The FTK dual structure, based on the cooperation of VLSI AM and programmable FPGAs, will remain, but we plan to increase the FPGA parallelism by associating one FPGA with each AM chip. Implementing the two devices in a single package would achieve further performance improvements, as well as miniaturization and integration of the state-of-the-art prototypes. We present the new architecture, the design of the FPGA logic performing all the functions complementary to the pattern matching inside the AM, and the tests performed on hardware.


ieee-npss real-time conference | 2014

A highly parallel FPGA implementation of a 2D-clustering algorithm for the ATLAS Fast TracKer (FTK) processor

Naoki Kimura; A. Annovi; M. Beretta; M. Gatta; S. Gkaitatzis; T. Iizawa; K. Kordas; T. Korikawa; Spyridon Nikolaidis; Ch. Petridou; Calliope-Louisa Sotiropoulou; K. Yorita; G. Volpi

The highly parallel 2D-clustering FPGA implementation used for the input system of the Fast TracKer (FTK) processor of the ATLAS experiment at the CERN Large Hadron Collider (LHC) is presented. After the 2013-2014 shutdown, the LHC is planned to run at increased luminosity, which will make efficient online selection of rare events more difficult due to the increased number of overlapping collisions. FTK is a highly parallelized hardware system that improves online selection by executing real-time track finding using information from the silicon inner detector. The FTK system requires fast and robust clustering of the hits retrieved from the silicon detector on FPGA devices. We describe the development of the custom input boards and the implemented clustering algorithm. For the more complex 2D clustering, a moving-window technique is used to minimize the use of FPGA resources. The combination of the custom boards and the implemented clustering algorithm provides sufficient processing power to meet the specifications for the silicon inner detector of ATLAS up to the maximum LHC luminosity planned until 2022. The developed algorithm is easily adaptable to other image-processing applications that require real-time 2D clustering.

Collaboration


Dive into K. Kordas's collaboration.

Top Co-Authors

S. Gkaitatzis, Aristotle University of Thessaloniki
Calliope-Louisa Sotiropoulou, Aristotle University of Thessaloniki
Christos Gentsos, Aristotle University of Thessaloniki
Francesco Crescioli, Centre national de la recherche scientifique
Spiridon Nikolaidis, Aristotle University of Thessaloniki
Spyridon Nikolaidis, Aristotle University of Thessaloniki