Michał Fularz
Poznań University of Technology
Publications
Featured research published by Michał Fularz.
advanced concepts for intelligent vision systems | 2013
Adam Schmidt; Michał Fularz; Marek Kraft; Andrzej J. Kasinski; Michał Nowicki
The paper presents an RGB-D dataset for the development and evaluation of mobile robot navigation systems. The dataset was registered using a WiFiBot robot equipped with a Kinect sensor. Unlike the presently available datasets, the environment was specifically designed for registration with the Kinect sensor. Moreover, it was ensured that the registered data is synchronized with the ground truth position of the robot. The presented dataset will be made publicly available for research purposes.
international conference on image analysis and recognition | 2014
Michał Fularz; Michał Nowicki; Piotr Skrzypczyński
In many practical applications of mobile devices, self-localization of the user in a GPS-denied indoor environment is required. Among the available approaches, the visual odometry concept enables continuous, precise egomotion estimation in previously unknown environments. In this paper we examine the usual pipeline of a monocular visual odometry system, identifying the bottlenecks and demonstrating how to circumvent the resource constraints to implement a real-time visual odometry system on a smartphone or tablet.
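The egomotion estimates produced by such a pipeline are chained frame to frame into a trajectory. A minimal sketch of that accumulation step (an illustration of the general idea, not the authors' implementation):

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def accumulate(relative_motions):
    """Chain per-frame relative motions into a camera trajectory.

    Each element is a 4x4 transform taking frame k to frame k+1;
    the returned list holds the camera pose at every frame.
    """
    poses = [np.eye(4)]
    for T_rel in relative_motions:
        poses.append(poses[-1] @ T_rel)
    return poses

# Two unit steps forward with a 90-degree turn in between:
steps = [se3(rot_z(np.pi / 2), [1.0, 0.0, 0.0]),
         se3(np.eye(3), [1.0, 0.0, 0.0])]
poses = accumulate(steps)
print(np.round(poses[-1][:3, 3], 6))  # final position: [1, 1, 0]
```

Note that in a monocular setting the translation of each step is known only up to scale; the chaining itself is unchanged.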
International Journal of Advanced Robotic Systems | 2014
Adam Schmidt; Andrzej J. Kasinski; Marek Kraft; Michał Fularz; Zuzanna Domagała
This paper presents the complete calibration procedure of a multi-camera system for mobile robot motion registration. Optimization-based, purely visual methods for the estimation of the relative poses of the motion registration system cameras, as well as the relative poses of the cameras and markers placed on the mobile robot, were proposed. The introduced methods were applied to the calibration of the system, and the quality of the obtained results was evaluated. The obtained results compare favourably with state-of-the-art solutions, allowing the use of the considered motion registration system for the accurate reconstruction of the mobile robot trajectory and for the registration of new datasets suitable for benchmarking indoor, vision-based navigation algorithms.
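One elementary building block of such purely visual calibration is recovering the relative pose of two cameras from a commonly observed marker. A minimal sketch (illustrative only, not the paper's optimization-based method):

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def inv_se3(T):
    """Invert a rigid-body transform using R^T instead of a general inverse."""
    R, t = T[:3, :3], T[:3, 3]
    return se3(R.T, -R.T @ t)

def relative_camera_pose(T_a_marker, T_b_marker):
    """Pose of camera B in camera A's frame, from each camera's pose
    of a commonly observed marker."""
    return T_a_marker @ inv_se3(T_b_marker)

# Demo: recover a known camera-to-camera transform.
c, s = np.cos(0.3), np.sin(0.3)
T_a_b = se3(np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]), [0.5, 0.0, 0.2])
T_b_m = se3(np.eye(3), [0.0, 1.0, 2.0])   # marker as seen from camera B
T_a_m = T_a_b @ T_b_m                      # the same marker from camera A
print(np.allclose(relative_camera_pose(T_a_m, T_b_m), T_a_b))
```

In practice such single-marker estimates would be noisy, which is why the paper refines them jointly by optimization.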
advanced concepts for intelligent vision systems | 2011
Marek Kraft; Michał Fularz; Andrzej J. Kasinski
Successfully establishing point correspondences between consecutive image frames is important in tasks such as visual odometry, structure from motion, or simultaneous localization and mapping. In this paper, we describe the architecture of compact, energy-efficient dedicated hardware processors enabling fast feature detection and matching.
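For reference, the segment test at the heart of corner detectors such as FAST, the kind of per-pixel operation such hardware processors accelerate, can be sketched in software as follows; the threshold and arc length are illustrative defaults:

```python
import numpy as np

# Offsets of the 16 pixels on a Bresenham circle of radius 3.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, r, c, thresh=20, arc=9):
    """Simplified FAST segment test: the pixel is a corner if at least
    `arc` contiguous circle pixels are all brighter or all darker than
    the centre by more than `thresh`."""
    p = int(img[r, c])
    vals = [int(img[r + dy, c + dx]) for dx, dy in CIRCLE]
    for sign in (+1, -1):
        flags = [sign * (v - p) > thresh for v in vals]
        run = best = 0
        for f in flags + flags:          # doubled list handles wrap-around
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= arc:
            return True
    return False

img = np.zeros((20, 20), dtype=np.uint8)
img[5:15, 5:15] = 255                     # bright square: corner at (5, 5)
print(is_fast_corner(img, 5, 5), is_fast_corner(img, 5, 10))  # True False
```

The edge pixel at (5, 10) is rejected because only 7 contiguous circle pixels differ from the centre, below the required arc of 9.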
International Journal of Advanced Robotic Systems | 2015
Michał Fularz; Marek Kraft; Adam Schmidt; Andrzej J. Kasinski
Image feature detection and matching is a fundamental operation in image processing. As the detected and matched features are used as input data for high-level computer vision algorithms, the matching accuracy directly influences the quality of the results of the whole computer vision system. Moreover, as these algorithms are frequently used as part of a real-time processing pipeline, the speed at which the input image data are handled is also a concern. The paper proposes an embedded system architecture for feature detection and matching. The architecture implements the FAST feature detector and the BRIEF feature descriptor and is capable of establishing keypoint correspondences in the input image data stream, coming from either an external sensor or memory, at a speed of hundreds of frames per second, so that it can cope with the most demanding applications. Moreover, the proposed design is highly flexible and configurable, and facilitates a trade-off between processing speed and programmable logic resource utilization. All the hardware blocks use standard, widely adopted interfaces based on the AMBA AXI4 protocol and are connected by an underlying direct memory access (DMA) architecture, enabling bottleneck-free inter-component data transfers.
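The BRIEF-style binary description and Hamming-distance matching that the architecture implements can be sketched in software like this; the sampling pattern, image, and keypoints below are illustrative assumptions, not the ones used in the hardware design:

```python
import numpy as np

rng = np.random.default_rng(0)
# BRIEF-style sampling pattern: 256 random point pairs within a 9x9 patch.
PAIRS = rng.integers(-4, 5, size=(256, 4))

def brief_descriptor(img, r, c):
    """Binary descriptor: bit i is set if the first sample of pair i is
    darker than the second (BRIEF-style intensity comparisons)."""
    bits = [img[r + y1, c + x1] < img[r + y2, c + x2]
            for x1, y1, x2, y2 in PAIRS]
    return np.packbits(bits)

def hamming(d1, d2):
    """Hamming distance between two packed binary descriptors."""
    return int(np.unpackbits(d1 ^ d2).sum())

def match(descs_a, descs_b):
    """Brute-force nearest-neighbour matching under Hamming distance."""
    return [min(range(len(descs_b)), key=lambda j: hamming(d, descs_b[j]))
            for d in descs_a]

img = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
kps = [(10, 10), (10, 20), (20, 10), (20, 20)]
descs = [brief_descriptor(img, r, c) for r, c in kps]
print(match(descs, descs))  # each descriptor matches itself -> [0, 1, 2, 3]
```

Binary descriptors make the matcher's inner loop a bitwise XOR plus popcount, which is exactly why FAST plus BRIEF maps so well onto programmable logic.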
machine vision applications | 2017
Marek Kraft; Michał Nowicki; Adam Schmidt; Michał Fularz; Piotr Skrzypczyński
Although the introduction of commercial RGB-D sensors has enabled significant progress in the visual navigation methods for mobile robots, the structured-light-based sensors, like Microsoft Kinect and Asus Xtion Pro Live, have some important limitations with respect to their range, field of view, and depth measurement accuracy. The recent introduction of the second-generation Kinect, which is based on the time-of-flight measurement principle, brought to the robotics and computer vision researchers a sensor that overcomes some of these limitations. However, as the new Kinect is, just like the older one, intended for computer games and human motion capture rather than for navigation, it is unclear how much the navigation methods, such as visual odometry and SLAM, can benefit from the improved parameters. While there are many publicly available RGB-D data sets, only a few of them provide the ground truth information necessary for evaluating navigation methods, and to the best of our knowledge, none of them contains sequences registered with the new version of Kinect. Therefore, this paper describes a new RGB-D data set, which is a first attempt to systematically evaluate indoor navigation algorithms on data from two different sensors in the same environment and along the same trajectories. This data set contains synchronized RGB-D frames from both sensors and the appropriate ground truth from an external motion capture system based on distributed cameras. We describe the data registration procedure in detail and then evaluate our RGB-D visual odometry algorithm on the obtained sequences, investigating how the specific properties and limitations of both sensors influence the performance of this navigation method.
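A common way to score such visual odometry results against motion-capture ground truth is the absolute trajectory error. A minimal sketch (shown for illustration; not necessarily the exact metric variant used in the paper, which may also include a rigid alignment step):

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error between two
    time-synchronised position sequences (N x 3 arrays), assuming
    they are already expressed in a common frame."""
    diff = np.asarray(estimated) - np.asarray(ground_truth)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

gt = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=float)
est = gt + [0.0, 0.1, 0.0]               # constant 0.1 m lateral offset
print(round(ate_rmse(est, gt), 3))       # constant offset -> RMSE of 0.1
```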
Progress in Automation, Robotics and Measuring Techniques | 2015
Michał Fularz; Marek Kraft; Adam Schmidt; Andrzej J. Kasinski
Real-time video surveillance and inspection is a complex task, requiring the processing of large amounts of image data. Performing this task in each node of a multi-camera system requires a high-performance and power-efficient smart camera architecture. Such a solution, based on a Xilinx Zynq heterogeneous FPGA (Field Programmable Gate Array), is presented in this paper. The proposed architecture is a general foundation, which allows easy and flexible prototyping and implementation of a range of image and video processing algorithms. Two example algorithm implementations using the described architecture are presented for illustration: moving object detection, and feature point detection, description and matching.
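As a software-level illustration of the first example task, moving object detection is often built on a running-average background model with simple thresholding; the sketch below shows the general technique, not the paper's FPGA implementation:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: blend the new frame into the
    background with learning rate alpha."""
    return (1 - alpha) * bg + alpha * frame

def moving_mask(bg, frame, thresh=25):
    """Foreground mask: pixels that differ from the background model
    by more than the threshold."""
    return np.abs(frame.astype(float) - bg) > thresh

bg = np.zeros((8, 8))
frame = bg.copy()
frame[2:4, 2:4] = 200.0                   # a small bright moving object
mask = moving_mask(bg, frame)
print(int(mask.sum()))                    # 4 foreground pixels detected
bg = update_background(bg, frame)         # object slowly absorbed into bg
```

Both stages are purely per-pixel, which is what makes this class of algorithm a natural fit for streaming implementation in programmable logic.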
international workshop on robot motion and control | 2013
Krzysztof Walas; Adam Schmidt; Marek Kraft; Michał Fularz
A mobile robot requires knowledge of the ground type in front of it to efficiently negotiate diverse terrain types while working outdoors. This article presents the implementation of ground classification algorithms in a Field Programmable Gate Array structure. The terrain type classification is based on the signals acquired with a force/torque sensor mounted on the walking robot's foot. The hardware implementation allows for offloading of the resource-demanding computations. The paper begins with a short presentation of the experimental setup. Then, the classification algorithms are described. Finally, the hardware implementation of the algorithms is described, followed by the test results.
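The classification step can be illustrated with hand-picked statistical features and a nearest-centroid rule; the features, class centroids, and signals below are toy assumptions for illustration, not the classifiers evaluated in the paper:

```python
import numpy as np

def features(signal):
    """Simple per-step features from a force/torque time series."""
    s = np.asarray(signal, dtype=float)
    return np.array([s.mean(), s.std(), s.max() - s.min()])

def nearest_centroid(x, centroids):
    """Classify a feature vector by its nearest class centroid."""
    d = [np.linalg.norm(x - c) for c in centroids]
    return int(np.argmin(d))

# Toy centroids for two hypothetical terrain classes (soft vs hard):
soft = features([1.0, 1.2, 1.1, 1.3])     # low, smooth contact forces
hard = features([5.0, 9.0, 4.0, 10.0])    # high, spiky contact forces
sample = features([4.8, 8.5, 4.2, 9.9])
print(nearest_centroid(sample, [soft, hard]))  # -> 1 (hard)
```

Fixed-size feature vectors and distance comparisons like these are cheap to pipeline, which is what makes offloading the classifier to an FPGA attractive.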
advanced concepts for intelligent vision systems | 2015
Marek Kraft; Michał Fularz; Adam Schmidt
In this paper, a collaborative method for controlling the activity of a network of cameras is presented. The method adjusts the activation level of all nodes in the network according to the observed scene activity, so that no vital information is missed while the communication rate and power consumption are reduced. The proposed method is very flexible, as an arbitrary number of activity levels can be defined, and it is easily adapted to the performed task. The method can be used either as a standalone solution or integrated with other algorithms, owing to its relatively low computational cost. The results of preliminary small-scale tests confirm its correct operation.
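The idea of scene-driven activity levels can be sketched with a simple per-node update rule; the rule below is an illustrative assumption, not the one proposed in the paper:

```python
def update_activity(level, motion_detected, n_levels=4):
    """Illustrative activity-level update: jump to full activity when
    motion is detected, otherwise decay one level per cycle."""
    if motion_detected:
        return n_levels - 1
    return max(0, level - 1)

level = 0
history = []
for motion in [True, False, False, True, False]:
    level = update_activity(level, motion)
    history.append(level)
print(history)  # -> [3, 2, 1, 3, 2]
```

A node's level would then govern its frame rate and transmission frequency, so quiet scenes cost little while activity is never missed outright.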
ICMMI | 2014
Adam Schmidt; Marek Kraft; Michał Fularz; Zuzanna Domagała
This paper presents an extension of a visual simultaneous localization and mapping (VSLAM) system with direct measurements of the robot’s orientation change. Four different sources of the additional measurements were considered: visual odometry using both the 5-point [10, 15] and the 8-point algorithm [9], wheel odometry, and Inertial Measurement Unit (IMU) measurements. The accuracy of the proposed system was compared with that of the canonical MonoSLAM [7]. The introduction of the additional measurements allowed the mean error to be reduced by 17%.