Mateusz Komorkiewicz
AGH University of Science and Technology
Publication
Featured research published by Mateusz Komorkiewicz.
field programmable logic and applications | 2012
Mateusz Komorkiewicz; Maciej Kluczewski; Marek Gorgon
Object detection and localization in a video stream is an important requirement for almost all vision systems. In this article, a design embedded in a reconfigurable device is presented that uses the Histogram of Oriented Gradients for feature extraction and SVM classification to detect multiple objects. Superior accuracy is achieved by performing all computations on single-precision 32-bit floating-point values at every stage of image processing. The resulting implementation is fully pipelined and requires no external memory. Finally, a working system is presented that detects and localizes three different classes of objects in 640×480 colour images at 60 fps, with a computational performance above 9 GFLOPS.
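The HOG feature named in the abstract can be illustrated with a minimal software sketch. This is not the authors' FPGA pipeline, only the per-cell histogram the descriptor is built from; the function name, cell size and bin count are illustrative assumptions.

```python
import numpy as np

def hog_cell(cell, n_bins=9):
    """Gradient-orientation histogram for one HOG cell (unsigned, 0-180 deg).

    Illustrative sketch only; the paper's FPGA design computes the same
    quantities in 32-bit floating point inside a pipeline.
    """
    gx = np.zeros_like(cell, dtype=np.float32)
    gy = np.zeros_like(cell, dtype=np.float32)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]   # central differences in x
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]   # central differences in y
    mag = np.hypot(gx, gy)                      # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist = np.zeros(n_bins, dtype=np.float32)
    bin_w = 180.0 / n_bins
    for m, a in zip(mag.ravel(), ang.ravel()):
        hist[int(a // bin_w) % n_bins] += m     # vote weighted by magnitude
    return hist
```

In the full descriptor such cell histograms are block-normalised and concatenated into the feature vector that the SVM classifies.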
conference on design and architectures for signal and image processing | 2011
Tomasz Kryjak; Mateusz Komorkiewicz; Marek Gorgon
FPGA devices are a perfect platform for implementing image processing algorithms. In this article, an advanced video system is presented that detects moving objects in video sequences. The detection method uses two algorithms. First, a multimodal background generation method allows reliable scene modelling in the presence of rapid changes in lighting conditions and small background movement. Then, a segmentation based on three parameters (lightness, colour and texture) is applied, which allows shadows to be removed from the processed image. The authors propose several improvements and modifications to existing algorithms in order to make them suitable for reconfigurable platforms. In the final system, a single low-cost FPGA device is able to receive data from a high-speed digital camera, perform a Bayer transform and an RGB to CIE Lab colour space conversion, generate a moving-object mask and present the results to the operator in real time.
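The background-generation step described above can be illustrated with a deliberately simplified sketch. The paper uses a multimodal model; the unimodal running-average version below only shows the basic update/compare cycle, and the learning rate and threshold are assumed values, not taken from the paper.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background update (unimodal simplification).

    alpha is an assumed learning rate; the paper's multimodal model
    maintains several such candidate backgrounds per pixel.
    """
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=20.0):
    """Mark pixels that deviate from the background model (sketch)."""
    return np.abs(frame - bg) > thresh
```

In the actual system this mask is further refined by the lightness/colour/texture segmentation to suppress shadows.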
Sensors | 2014
Mateusz Komorkiewicz; Tomasz Kryjak; Marek Gorgon
This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification achieves a data throughput of 175 Mpixels/s and makes processing of a Full HD video stream (1920 × 1080 @ 60 fps) possible. The structure of the optical flow module, as well as the pre- and post-filtering blocks and a flow reliability computation unit, is described in detail. Three versions of the optical flow module are proposed, differing in numerical precision, working frequency and accuracy of the obtained results. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset and achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-fold speedup compared to a modern CPU with SIMD support is reported. A complete working vision system realised on a Xilinx VC707 evaluation board is also presented. It computes optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems.
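The iterative update that the paper's pipeline unrolls into hardware is the classical Horn-Schunck scheme, sketched below in plain software form. The regularisation weight alpha and the iteration count are illustrative choices, not the paper's parameters, and the simple 4-neighbour average stands in for the averaging kernel.

```python
import numpy as np

def horn_schunck(Ix, Iy, It, alpha=1.0, n_iter=8):
    """Iterative Horn-Schunck optical flow update (software sketch).

    The FPGA design unrolls these iterations into a pipeline; here they
    run sequentially on precomputed image derivatives Ix, Iy, It.
    """
    u = np.zeros_like(Ix, dtype=np.float32)
    v = np.zeros_like(Ix, dtype=np.float32)

    def avg(f):  # 4-neighbour average (wraps at borders for brevity)
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                       + np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        u_bar, v_bar = avg(u), avg(v)
        num = Ix * u_bar + Iy * v_bar + It      # brightness-constancy residual
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```

Because each iteration depends only on the previous flow field and fixed derivatives, the loop maps naturally onto a chain of identical pipeline stages, which is what enables the reported throughput.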
conference on design and architectures for signal and image processing | 2014
Tomasz Kryjak; Mateusz Komorkiewicz; Marek Gorgon
In this paper, a hardware-software system for vehicle detection and counting at intersections is presented. It has been implemented and evaluated on the heterogeneous Zynq platform. Vehicle detection is based on the concept of virtual detection lines (VDL). Differences in colour, horizontal edges and Census transform results between the areas around the VDL in two consecutive frames are used to detect individual vehicles. This part of the system is designed in a hardware description language and implemented in the reconfigurable resources of the Zynq platform. Information about vehicle presence, along with the time-spatial image obtained at the VDL, is transmitted to and processed on the ARM core integrated in the Zynq device. This allows further analysis and the elimination of false or multiple detections. The solution has been evaluated on several video sequences recorded in various conditions: a sunny day (with deep shadows), a cloudy day, rain and night-time. The system could be an important element of an intelligent transportation system (ITS), e.g. applied in a smart camera.
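The Census transform used as one of the comparison cues can be sketched as follows. This is a generic 3×3 Census transform, not the paper's hardware module; the border handling is an assumed simplification.

```python
import numpy as np

def census_3x3(img):
    """3x3 Census transform: each pixel becomes an 8-bit code with one
    bit per neighbour, set when that neighbour is >= the centre pixel.
    Border pixels are left at zero for simplicity (sketch)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for dy, dx in offsets:
                code = (code << 1) | int(img[y + dy, x + dx] >= img[y, x])
            out[y, x] = code
    return out
```

Because the code depends only on the ordering of neighbouring pixels, not their absolute values, comparing Census codes between consecutive frames is robust to the illumination changes the evaluation conditions (shadows, night-time) introduce.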
Image Processing and Communications | 2012
Mateusz Komorkiewicz; Jaromir Przybylo
Because the amount of video recorded by surveillance systems is increasing, the new approach, in which the human operator analysing the video is replaced by an artificial intelligence system, is gaining followers. Such an algorithm has to meet several requirements: it must be accurate, must not produce too many false alarms, and must process the received video stream in real time to guarantee a sufficient response time. In this article, a system is presented that detects and analyses walking pedestrians. It is based on two algorithms: scale space and contour matching using the distance transform. The resulting information can be used by other parts of an advanced video surveillance system, for example for tracking-by-detection, detecting intrusion into heavy-equipment-only zones, or singling out possibly suspicious persons (pickpockets, homeless people, etc.).
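Contour matching with a distance transform can be illustrated with a minimal chamfer-matching sketch. This is a generic city-block distance transform and matching score, assumed for illustration; the paper's actual matching procedure may differ in distance metric and scoring.

```python
import numpy as np

def distance_transform(edges):
    """Two-pass chamfer (city-block) distance transform of a binary
    edge map: each pixel gets its distance to the nearest edge (sketch)."""
    h, w = edges.shape
    dt = np.where(edges, 0, h + w).astype(np.int32)
    for y in range(h):                      # forward pass (top-left)
        for x in range(w):
            if y > 0: dt[y, x] = min(dt[y, x], dt[y - 1, x] + 1)
            if x > 0: dt[y, x] = min(dt[y, x], dt[y, x - 1] + 1)
    for y in range(h - 1, -1, -1):          # backward pass (bottom-right)
        for x in range(w - 1, -1, -1):
            if y < h - 1: dt[y, x] = min(dt[y, x], dt[y + 1, x] + 1)
            if x < w - 1: dt[y, x] = min(dt[y, x], dt[y, x + 1] + 1)
    return dt

def chamfer_score(dt, contour_points):
    """Mean distance of template contour points to the nearest image
    edge; lower means a better match."""
    return float(np.mean([dt[y, x] for y, x in contour_points]))
```

Sliding a pedestrian contour template over the distance image and keeping positions with a low chamfer score is the standard way this kind of matching is used for detection.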
applied reconfigurable computing | 2016
Tomasz Kryjak; Marek Gorgon; Mateusz Komorkiewicz
In this paper a novel method for data reordering in a video stream is presented. It can be used in high-performance vision algorithms that require block-based image processing. As a case study, a block-based optical flow histogram computation application was selected. The proposed solution allows for a 2.3x and 6.3x speed-up for fixed-point and floating-point calculations, respectively. This enables real-time operation for 1920 × 1080 pixels or higher resolutions.
Image Processing and Communications | 2013
Tomasz Kryjak; Mateusz Komorkiewicz
Image Processing and Communications | 2012
Mateusz Komorkiewicz
Journal of Real-time Image Processing | 2014
Tomasz Kryjak; Mateusz Komorkiewicz; Marek Gorgon
international conference mixed design of integrated circuits and systems | 2013
Tomasz Kryjak; Mateusz Komorkiewicz; Marek Gorgon