Jan Wietrzykowski
Poznań University of Technology
Publications
Featured research published by Jan Wietrzykowski.
Progress in Automation, Robotics and Measuring Techniques (2) | 2015
Dominik Belter; Michał Nowicki; Piotr Skrzypczyński; Krzysztof Walas; Jan Wietrzykowski
Search and rescue robots ought to be autonomous, as autonomy keeps human personnel out of dangerous areas. To achieve the desired level of autonomy, both environment mapping and reliable self-localization have to be implemented. In this paper we analyse the application of a fast, lightweight RGB-D Simultaneous Localization and Mapping (SLAM) system for robots involved in indoor/outdoor search and rescue missions. We demonstrate that under some conditions RGB-D sensors provide data reliable enough even for outdoor, real-time SLAM. Experiments are performed on a legged robot and a wheeled robot, using two representative RGB-D sensors: the Asus Xtion Pro Live and the recently introduced Microsoft Kinect ver. 2.
2015 IEEE 2nd International Conference on Cybernetics (CYBCONF) | 2015
Michał Nowicki; Jan Wietrzykowski; Piotr Skrzypczyński
Contemporary mobile devices can be used as navigation aids. The embedded gyroscope, accelerometer and magnetometer used together may form a reliable AHRS (Attitude and Heading Reference System), which estimates the orientation of the device with respect to the global reference frame. However, a question arises: which framework should be used to integrate the noisy data under the tight computing power and energy limitations of a mobile device? While the Extended Kalman Filter (EKF) is considered the standard framework for solving estimation problems in navigation, in practice the much simpler Complementary Filter is often applied in systems with limited resources. In this paper we compare the strengths and drawbacks of both frameworks in the application context of Android-based mobile devices. The comparison focuses on the assessment of accuracy and reliability in several real-world motion scenarios.
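For illustration, a minimal complementary filter for roll and pitch estimation is sketched below. It is not the implementation evaluated in the paper; the sensor layout and the blending constant alpha are assumptions made for the example.

```python
import numpy as np

def complementary_filter(gyro_rates, accels, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into roll/pitch estimates.

    gyro_rates: (N, 3) angular rates [rad/s] around the x, y, z axes
    accels:     (N, 3) accelerations [m/s^2]
    dt:         sampling period [s]
    alpha:      blending constant; higher values trust the gyro more (illustrative value)
    """
    roll, pitch = 0.0, 0.0
    estimates = []
    for (gx, gy, _), (ax, ay, az) in zip(gyro_rates, accels):
        # Integrate gyro rates: accurate over short horizons, but drifts over time.
        roll_gyro = roll + gx * dt
        pitch_gyro = pitch + gy * dt
        # Tilt from the gravity direction: noisy, but drift-free.
        roll_acc = np.arctan2(ay, az)
        pitch_acc = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
        # Blend: gyro for high-frequency motion, accelerometer as low-frequency reference.
        roll = alpha * roll_gyro + (1.0 - alpha) * roll_acc
        pitch = alpha * pitch_gyro + (1.0 - alpha) * pitch_acc
        estimates.append((roll, pitch))
    return np.array(estimates)
```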
International Conference on Indoor Positioning and Indoor Navigation | 2016
Michał Nowicki; Jan Wietrzykowski; Piotr Skrzypczyński
The paper presents a thorough evaluation of two representative visual place recognition algorithms that can be applied to the problem of indoor localization of a person equipped with a modern smartphone. The evaluation compares two state-of-the-art approaches: single-image place recognition, represented by the FAB-MAP algorithm, and recognition based on a sequence of images, represented by the ABLE-M algorithm. It covers real-life localization examples in buildings of different structure and the influence of the presence of people in the environment on the recognition results. Moreover, the paper demonstrates the feasibility and real-time performance of the visual place recognition methods implemented on an Android smartphone.
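As a sketch of the sequence-matching idea behind approaches such as ABLE-M, the snippet below accumulates Hamming distances between precomputed binary global image descriptors along diagonals of the frame-to-frame distance matrix. The descriptor source and the constant-velocity (diagonal) accumulation are simplifications assumed for the example, not details of the evaluated implementation.

```python
import numpy as np

def sequence_similarity(db_desc, query_desc, seq_len=5):
    """Distance matrix between sequences of binary image descriptors.

    db_desc, query_desc: (N, D) and (M, D) arrays of {0, 1} global descriptors.
    seq_len: number of consecutive frames compared jointly (sequence-based recognition).
    Returns an (M - seq_len + 1, N - seq_len + 1) matrix of accumulated Hamming
    distances; low values indicate likely re-visited places.
    """
    # Pairwise Hamming distances between single frames.
    single = (query_desc[:, None, :] != db_desc[None, :, :]).sum(axis=2)
    M, N = single.shape
    out = np.zeros((M - seq_len + 1, N - seq_len + 1))
    for i in range(M - seq_len + 1):
        for j in range(N - seq_len + 1):
            # Accumulate distances along the diagonal (same-velocity assumption).
            out[i, j] = sum(single[i + k, j + k] for k in range(seq_len))
    return out
```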
arXiv: Robotics | 2017
Michał Nowicki; Jan Wietrzykowski
Using WiFi signals for indoor localization is the main localization modality of the existing personal indoor localization systems operating on mobile devices. WiFi fingerprinting is also used for mobile robots, as WiFi signals are usually available indoors and can provide a rough initial position estimate or be used together with other positioning systems. Currently, the best solutions rely on filtering, manual data analysis, and time-consuming parameter tuning to achieve reliable and accurate localization. In this work, we propose to use deep neural networks to significantly reduce the effort required to design a localization system, while still achieving satisfactory results. Following the state-of-the-art hierarchical approach, we employ the DNN system for building/floor classification. We show that stacked autoencoders make it possible to efficiently reduce the feature space in order to achieve robust and precise classification. The proposed architecture is verified on the publicly available UJIIndoorLoc dataset and the results are compared with other solutions.
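The sketch below illustrates the general idea of autoencoder-based feature reduction of WiFi fingerprints followed by a classifier head, using tf.keras. The layer sizes, the number of classes, and the joint (rather than layer-wise) pretraining are assumptions made for the example and are not taken from the paper.

```python
import tensorflow as tf

def build_models(n_aps=520, n_classes=13, code_dim=64):
    """Autoencoder-based feature reduction plus a building/floor classifier head.

    n_aps:     number of access points in a fingerprint (520 in UJIIndoorLoc).
    n_classes: combined building/floor classes (illustrative value).
    """
    inputs = tf.keras.Input(shape=(n_aps,))
    # Encoder: progressively compress the sparse RSSI fingerprint.
    h = tf.keras.layers.Dense(256, activation="relu")(inputs)
    h = tf.keras.layers.Dense(128, activation="relu")(h)
    code = tf.keras.layers.Dense(code_dim, activation="relu")(h)
    # Decoder used only for unsupervised pretraining of the encoder.
    d = tf.keras.layers.Dense(128, activation="relu")(code)
    d = tf.keras.layers.Dense(256, activation="relu")(d)
    recon = tf.keras.layers.Dense(n_aps, activation="linear")(d)
    autoencoder = tf.keras.Model(inputs, recon)
    autoencoder.compile(optimizer="adam", loss="mse")
    # Classifier reuses the (pretrained) encoder and adds a softmax head.
    logits = tf.keras.layers.Dense(n_classes, activation="softmax")(code)
    classifier = tf.keras.Model(inputs, logits)
    classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])
    return autoencoder, classifier

# Typical usage (X: normalized RSSI fingerprints, y: building/floor labels):
#   autoencoder, classifier = build_models()
#   autoencoder.fit(X, X, epochs=20, batch_size=128)   # unsupervised pretraining
#   classifier.fit(X, y, epochs=20, batch_size=128)    # supervised fine-tuning
```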
arXiv: Robotics | 2017
Jan Wietrzykowski; Michał Nowicki; Piotr Skrzypczyński
Personal indoor localization is usually accomplished by fusing information from various sensors. A common choice is to use the WiFi adapter, which provides information about the Access Points found in the vicinity. Unfortunately, state-of-the-art approaches to WiFi-based localization often employ very dense maps of the WiFi signal distribution and require a time-consuming process of parameter selection. On the other hand, camera images are commonly used for visual place recognition, detecting when the user observes a scene similar to one already recorded in a database. Visual place recognition algorithms can work with sparse databases of recorded scenes and are in general simple to parametrize. Therefore, we propose a WiFi-based global localization method employing the structure of the well-known FAB-MAP visual place recognition algorithm. Similarly to FAB-MAP, our method uses Chow-Liu trees to estimate a joint probability distribution of re-observation of a place, given a set of features extracted at the places visited so far. However, we are the first to apply this idea to recorded WiFi scans instead of visual words. The new method is evaluated on the UJIIndoorLoc dataset used in the EvAAL competition, allowing a fair comparison with other solutions.
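As an illustration of the Chow-Liu step, the sketch below builds a maximum-mutual-information spanning tree over binary "access point detected" variables computed from a matrix of WiFi scans. The binarization of scans and the exact tree construction are assumptions for the example rather than details from the paper.

```python
import numpy as np

def chow_liu_tree(observations, eps=1e-9):
    """Build a Chow-Liu tree over binary 'access point detected' variables.

    observations: (num_scans, num_aps) 0/1 matrix, one row per WiFi scan.
    Returns a list of (parent, child) edges of the spanning tree that maximizes
    the total pairwise mutual information, approximating the joint distribution
    of AP detections.
    """
    n = observations.shape[1]
    p1 = observations.mean(axis=0)                     # P(x_i = 1)
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Joint probabilities of the four outcome combinations.
            p11 = np.mean(observations[:, i] * observations[:, j])
            p10, p01 = p1[i] - p11, p1[j] - p11
            p00 = 1.0 - p11 - p10 - p01
            for pij, pi, pj in [(p11, p1[i], p1[j]), (p10, p1[i], 1 - p1[j]),
                                (p01, 1 - p1[i], p1[j]), (p00, 1 - p1[i], 1 - p1[j])]:
                if pij > eps:
                    mi[i, j] += pij * np.log(pij / (pi * pj + eps))
            mi[j, i] = mi[i, j]
    # Prim's algorithm on the mutual-information matrix (maximum spanning tree).
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        best = max(((i, j, mi[i, j]) for i in in_tree for j in range(n)
                    if j not in in_tree), key=lambda e: e[2])
        edges.append((best[0], best[1]))
        in_tree.add(best[1])
    return edges
```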
Progress in Automation, Robotics and Measuring Techniques (2) | 2015
Adam Bondyra; Michał Nowicki; Jan Wietrzykowski
Nowadays, robotics research is concerned with autonomous and robust operation outdoors, enabling a variety of practical applications. Therefore, we present TAPAS, a robotic platform designed for autonomous navigation in man-made environments, such as parks, and capable of transporting a 5 kg payload. The article presents the hardware design and sensory system that allowed us to create a fully autonomous vehicle, unique due to its low cost, light weight and long battery duration. The presented solution was thoroughly evaluated at the international robotic competition Robotour 2014, where TAPAS took joint (ex aequo) 4th place out of 13 robots. Taking part in the competition provided feedback that is discussed in the article and will be used for further development.
Progress in Automation, Robotics and Measuring Techniques (2) | 2015
Jan Wietrzykowski; Michał Nowicki; Adam Bondyra
Autonomous outdoor navigation has been researched for years, but there is still a lack of affordable robots that can efficiently navigate in man-made outdoor environments. Therefore, we present a navigation method developed for the TAPAS robot, which was designed for outdoor perception, localization and navigation using fusion of data from multiple sensors. The novelty of the presented approach lies in the use of publicly available OpenStreetMap information. The proposed system was used in the Robotour 2014 competition and allowed the robot to achieve joint (ex aequo) 4th place out of 13 teams. The article also summarizes the experience gained during the competition and the future enhancements that can be applied to the proposed solution.
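The sketch below shows one plausible way to turn a raw OpenStreetMap XML extract into a walkable-path graph for such a navigation system. The set of accepted highway tags is an assumption for the example; the abstract does not describe how the paper actually processes the OSM data.

```python
import math
import xml.etree.ElementTree as ET
from collections import defaultdict

# Illustrative set of OSM "highway" values treated as traversable by the robot.
WALKABLE = {"footway", "path", "cycleway", "residential", "service", "track"}

def haversine(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def load_osm_graph(osm_file):
    """Build a simple walkable-path graph from an OpenStreetMap XML extract.

    Returns (nodes, edges): node id -> (lat, lon), and node id -> list of
    (neighbour id, edge length in metres).
    """
    root = ET.parse(osm_file).getroot()
    nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
             for n in root.findall("node")}
    edges = defaultdict(list)
    for way in root.findall("way"):
        tags = {t.get("k"): t.get("v") for t in way.findall("tag")}
        if tags.get("highway") not in WALKABLE:
            continue
        refs = [nd.get("ref") for nd in way.findall("nd") if nd.get("ref") in nodes]
        for a, b in zip(refs, refs[1:]):
            d = haversine(nodes[a], nodes[b])
            edges[a].append((b, d))
            edges[b].append((a, d))
    return nodes, edges
```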
Journal of Intelligent and Robotic Systems | 2018
Dominik Belter; Jan Wietrzykowski; Piotr Skrzypczyński
This paper considers motion planning for a six-legged walking robot in rough terrain, taking into account both the geometry of the terrain and its semantic labeling. The semantic labels allow the robot to distinguish between different types of surfaces it can walk on, and to identify areas that cannot be negotiated due to their physical nature. The proposed environment map provides the planner with information about the shape of the terrain and the terrain class labels. Labels such as “wall” and “plant” denote areas that have to be avoided, whereas other labels, such as “grass”, “sand” and “concrete”, represent negotiable areas of different properties. We test popular classification algorithms, Support Vector Machine and Random Trees, in the task of producing proper terrain labeling from RGB-D data acquired by the robot. The motion planner uses the A∗ algorithm to guide the RRT-Connect method, which yields detailed motion plans for the multi-d.o.f. legged robot. As the A∗ planner takes into account the terrain semantic labels, the robot avoids areas which are potentially risky and chooses paths crossing mostly the preferred terrain types. We report experimental results that show the ability of the new approach to avoid areas that are considered risky for legged locomotion.
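The sketch below illustrates only the semantic-aware A* guidance layer on a grid of terrain-class labels; the per-class costs are illustrative, and the detailed RRT-Connect motion planning for the legged robot is beyond the scope of the example.

```python
import heapq

# Illustrative per-class traversal costs; "wall" and "plant" are not traversable.
CLASS_COST = {"concrete": 1.0, "grass": 1.5, "sand": 2.5, "wall": None, "plant": None}

def astar_semantic(grid, start, goal):
    """A* over a grid of terrain-class labels, producing a guidance path for a
    detailed motion planner.

    grid:  2D list of class-label strings
    start, goal: (row, col) tuples
    Returns the list of cells on the cheapest path, or None if unreachable.
    """
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan heuristic
    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start, goal), 0.0, start, None)]
    came_from, best_g = {}, {start: 0.0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue                 # already expanded with a better cost
        came_from[cell] = parent
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            cost = CLASS_COST.get(grid[nr][nc])
            if cost is None:         # risky / non-negotiable terrain class
                continue
            ng = g + cost
            if ng < best_g.get((nr, nc), float("inf")):
                best_g[(nr, nc)] = ng
                heapq.heappush(open_set, (ng + h((nr, nc), goal), ng, (nr, nc), cell))
    return None
```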
European Conference on Mobile Robots | 2017
Jan Wietrzykowski; Piotr Skrzypczyński
This paper proposes a novel approach to global localization using high-level features. The new probabilistic framework makes it possible to incorporate uncertain localization cues into a probability distribution that describes the likelihood of the current robot pose. We use multiple triplets of planes segmented from RGB-D data to generate this probability distribution and to find the robot pose with respect to a global map of planar segments. The algorithm can be used for global localization with a known map or for closing loops with RGB-D data. The approach is validated in experiments using the publicly available NYUv2 RGB-D dataset and our new dataset prepared for testing localization on plane-rich scenes.
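As a minimal building block of such plane-based localization, the sketch below estimates the rigid transform aligning one observed triplet of planes to a matching triplet in the global map, assuming planes are given as a unit normal and offset with linearly independent normals. It is not the authors' full probabilistic framework, only the geometric alignment step.

```python
import numpy as np

def align_plane_triplet(map_planes, obs_planes):
    """Estimate the rigid transform (R, t) aligning an observed plane triplet
    to a matching triplet in the global map.

    Each plane is (n, d) with unit normal n and offset d such that n . x = d.
    Assumes the three map normals are linearly independent (non-degenerate triplet).
    Returns (R, t) mapping points from the observation frame to the map frame.
    """
    N_map = np.stack([n for n, _ in map_planes], axis=1)   # 3x3, columns = map normals
    N_obs = np.stack([n for n, _ in obs_planes], axis=1)   # 3x3, columns = observed normals
    # Rotation from normal correspondences (Kabsch / SVD).
    U, _, Vt = np.linalg.svd(N_map @ N_obs.T)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    # Translation: for each matched pair, n_map . t = d_map - d_obs.
    A = N_map.T                                            # rows = map normals
    b = np.array([dm - do for (_, dm), (_, do) in zip(map_planes, obs_planes)])
    t = np.linalg.solve(A, b)
    return R, t
```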
Journal of Automation, Mobile Robotics and Intelligent Systems | 2016
Jan Wietrzykowski