
Publication


Featured research published by Andras Majdik.


Intelligent Robots and Systems | 2013

MAV urban localization from Google street view data

Andras Majdik; Yves Albers-Schoenberg; Davide Scaramuzza

We tackle the problem of globally localizing a camera-equipped micro aerial vehicle flying within urban environments for which a Google Street View image database exists. To avoid the caveats of current image-search algorithms in case of severe viewpoint changes between the query and the database images, we propose to generate virtual views of the scene, which exploit the air-ground geometry of the system. To limit the computational complexity of the algorithm, we rely on a histogram-voting scheme to select the best putative image correspondences. The proposed approach is tested on a 2 km image dataset captured with a small quadrocopter flying in the streets of Zurich. The success of our approach shows that our new air-ground matching algorithm can robustly handle extreme changes in viewpoint, illumination, perceptual aliasing, and over-season variations, thus outperforming conventional visual place-recognition approaches.
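The histogram-voting step described above can be sketched as follows: each putative descriptor match casts a vote for the database image it belongs to, and the image collecting the most votes is retained. This is a minimal illustration assuming a simple (query feature, database image) match format; function and variable names are hypothetical, not from the paper's code.

```python
from collections import Counter

def select_best_candidate(putative_matches):
    """Histogram voting over putative feature matches.

    putative_matches: (query_feature_id, db_image_id) pairs, one per
    tentative descriptor correspondence (hypothetical format).
    Returns the database image with the most votes and its vote count.
    """
    votes = Counter(db_img for _, db_img in putative_matches)
    return votes.most_common(1)[0]

matches = [(0, "img_a"), (1, "img_b"), (2, "img_a"), (3, "img_a"), (4, "img_c")]
best, n = select_best_candidate(matches)
print(best, n)  # img_a 3
```

The voting acts as a cheap consensus filter: it avoids running expensive geometric verification against every database image by first ranking candidates on raw match counts.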


Journal of Field Robotics | 2015

Air-ground Matching: Appearance-based GPS-denied Urban Localization of Micro Aerial Vehicles

Andras Majdik; Damiano Verda; Yves Albers-Schoenberg; Davide Scaramuzza

In this paper, we address the problem of globally localizing and tracking the pose of a camera-equipped micro aerial vehicle (MAV) flying in urban streets at low altitudes without GPS. An image-based global positioning system is introduced to localize the MAV with respect to the surrounding buildings. We propose a novel air-ground image-matching algorithm to search the airborne image of the MAV within a ground-level, geotagged image database. Based on the detected matching image features, we infer the global position of the MAV by back-projecting the corresponding image points onto a cadastral three-dimensional city model. Furthermore, we describe an algorithm to track the position of the flying vehicle over several frames and to correct the accumulated drift of the visual odometry whenever a good match is detected between the airborne and the ground-level images. The proposed approach is tested on a 2 km trajectory with a small quadrocopter flying in the streets of Zurich. Our vision-based global localization can robustly handle extreme changes in viewpoint, illumination, perceptual aliasing, and over-season variations, thus outperforming conventional visual place-recognition approaches. The dataset is made publicly available to the research community. To the best of our knowledge, this is the first work that studies and demonstrates global localization and position tracking of a drone in urban streets with a single onboard camera.
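The back-projection step can be illustrated with a pinhole camera and a building facade modeled as a plane: a matched image point defines a viewing ray, and intersecting that ray with the facade plane of the cadastral model yields a 3D point. This is only a sketch under simplifying assumptions (planar facade, known intrinsics K and orientation R); all names are illustrative, not the paper's implementation.

```python
import numpy as np

def backproject_to_facade(pixel, K, R, cam_center, plane_point, plane_normal):
    """Intersect the viewing ray of an image point with a facade plane.

    pixel: (u, v) image coordinates; K: 3x3 intrinsics; R: world-to-camera
    rotation; cam_center: camera position in the world frame;
    plane_point / plane_normal: the facade plane of the 3D city model.
    """
    # Viewing ray direction in the world frame: R^T K^-1 [u, v, 1]^T
    ray = R.T @ np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    # Solve n . (c + t*ray - p0) = 0 for the ray parameter t
    t = plane_normal @ (plane_point - cam_center) / (plane_normal @ ray)
    return cam_center + t * ray

# Toy setup: camera at the origin looking along +z, facade plane z = 5
K, R = np.eye(3), np.eye(3)
p = backproject_to_facade((1.0, 0.0), K, R, np.zeros(3),
                          np.array([0.0, 0.0, 5.0]), np.array([0.0, 0.0, 1.0]))
print(p)  # [5. 0. 5.]
```

With several such 2D-3D correspondences, the MAV pose itself can then be estimated by a standard perspective-n-point solver.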


Intelligent Robots and Systems | 2011

Adaptive appearance based loop-closing in heterogeneous environments

Andras Majdik; Dorian Gálvez-López; Gheorghe Lazea; José A. Castellanos

The work described in this paper concerns the problem of detecting loop-closure situations whenever an autonomous vehicle returns to previously visited places in the navigation area. An appearance-based perspective is considered by using images gathered by the on-board vision sensors for navigation tasks in heterogeneous environments characterized by the presence of buildings and urban furniture together with pedestrians and different types of vegetation. We propose a novel probabilistic on-line weight updating algorithm for the bag-of-words description of the gathered images which takes into account both prior knowledge derived from an off-line learning stage and the accuracy of the decisions taken by the algorithm over time. An intuitive measure of the ability of a certain word to contribute to the detection of a correct loop-closure is presented. The proposed strategy is extensively tested using well-known datasets obtained from challenging large-scale environments, demonstrating a large improvement in performance over previously reported works in the literature.
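The idea of updating word weights from the algorithm's own decisions can be sketched as a simple online rule: words that voted for a closure later verified as correct are strengthened, while words that supported a rejected candidate are weakened. This is a deliberately simplified stand-in, not the paper's probabilistic update; all names and the learning rate are hypothetical.

```python
def update_word_weights(weights, voting_words, loop_confirmed, lr=0.1):
    """Move each voting word's weight toward 1 if the loop closure was
    verified (e.g. geometrically) and toward 0 if it was rejected.

    weights: dict word_id -> weight in [0, 1]
    voting_words: ids of the visual words that voted for the candidate
    """
    target = 1.0 if loop_confirmed else 0.0
    for w in voting_words:
        weights[w] += lr * (target - weights[w])
    return weights

weights = {7: 0.5, 9: 0.5}
update_word_weights(weights, [7], loop_confirmed=True)
print(weights)  # word 7 strengthened to 0.55, word 9 unchanged
```

Over many decisions, such a rule gradually discounts words that fire on perceptually aliased structures (e.g. repeated vegetation) and emphasizes words that reliably indicate true revisits.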


International Conference on Robotics and Automation | 2014

Micro air vehicle localization and position tracking from textured 3D cadastral models

Andras Majdik; Damiano Verda; Yves Albers-Schoenberg; Davide Scaramuzza

In this paper, we address the problem of localizing a camera-equipped Micro Aerial Vehicle (MAV) flying in urban streets at low altitudes. An appearance-based global positioning system to localize MAVs with respect to the surrounding buildings is introduced. We rely on an air-ground image matching algorithm to search the airborne image of the MAV within a ground-level Street View image database and to detect image matching points. Based on the image matching points, we infer the global position of the MAV by back-projecting the corresponding image points onto a cadastral 3D city model. Furthermore, we describe an algorithm to track the position of the flying vehicle over several frames and to correct the accumulated drift of the visual odometry whenever a good match is detected between the airborne MAV and the street-level images. The proposed approach is tested on a dataset captured with a small quadrocopter flying in the streets of Zurich.


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2012

Heterogeneous feature based correspondence estimation

Levente Tamas; Andras Majdik

This paper presents preliminary results of ongoing work on heterogeneous point-feature estimation from different types of sensors, including a structured-light camera, a stereo camera, and a custom 3D laser range finder. The main goal of the paper is to compare the performance of different types of local descriptors in an indoor office environment. Several types of 3D features were evaluated on different datasets, including the output of an enhanced stereo image processing algorithm. From the extracted features, correspondences were determined between two different recording positions for each type of sensor. These correspondences were filtered, and the resulting feature correspondences were benchmarked across the different datasets. Furthermore, an open-access dataset is proposed for public evaluation of the presented algorithms.
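One common way to filter tentative descriptor correspondences, Lowe's nearest-neighbour ratio test, can be sketched as below. The paper does not specify its exact filtering pipeline, so this is an illustrative assumption; the names and the 0.8 ratio are conventional defaults, not values from the paper.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in
    desc_b, keeping only matches whose nearest distance is clearly
    smaller than the second-nearest (Lowe's ratio test).

    desc_a: (N, D), desc_b: (M, D) with M >= 2.
    Returns (i, j) index pairs that pass the test.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]        # two nearest neighbours
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

a = np.array([[0.0, 0.0]])
b = np.array([[0.1, 0.0], [5.0, 5.0], [3.0, 3.0]])
print(ratio_test_matches(a, b))  # [(0, 0)]
```

The test rejects ambiguous matches: when a descriptor is nearly equidistant to its two best candidates, the correspondence is likely unreliable and is dropped before any benchmarking.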


Advanced Technologies for Enhanced Quality of Life | 2009

Laser Based Localization Techniques for Indoor Mobile Robots

Levente Tamas; Gheorghe Lazea; Mircea Popa; Istvan Szoke; Andras Majdik

The localization problem in indoor environments based on LIDAR measurements is analyzed in this paper. Practical aspects of localization are discussed, including the implementation of the state estimation and registration algorithms. The developed localization framework is sufficiently generic to be used in a variety of other autonomous vehicles. The results of the proposed navigation algorithms demonstrate reliable and accurate position estimation for autonomous vehicles operating in a variety of environments.


International Workshop on Computational Intelligence for Multimedia Understanding (IWCIM) | 2016

Photogrammetric 3D reconstruction of the old slaughterhouse in Budapest

Andras Majdik; Laszlo Tizedes; Mate Bartus; Tamás Szirányi

In this paper we address the problem of the photogrammetric 3D reconstruction of an industrial cultural-heritage site in Budapest, namely the Old Slaughterhouse. We perform an extensive comparison and evaluation of state-of-the-art online visual SLAM (Simultaneous Localization and Mapping) and offline visual SfM (Structure from Motion) methods in order to obtain the 3D model of the building. We show results obtained using a dataset recorded with a camera-equipped Micro Air Vehicle.


Mediterranean Conference on Control and Automation | 2010

Visual odometer system to build feature based maps for mobile robot navigation

Andras Majdik; Levente Tamas; Mircea Popa; Istvan Szoke; Gheorghe Lazea

This paper presents a visual odometer system for mobile robot position correction. The developed algorithm detects the same Speeded Up Robust Features (SURF) in the stereo image pairs to obtain three-dimensional point clouds at every robot location. The algorithm tracks the displacement of identical features viewed from different positions to compute the robot's positions. The displacements between the point clouds are computed with the Iterative Closest Point (ICP) algorithm. ICP is also used to register the landmarks in the feature-based map of the entire environment. Results of experiments in an indoor office environment are shown.
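The core of each ICP iteration, the least-squares rigid transform between matched 3D point sets, can be sketched with the Kabsch/SVD solution. This assumes the correspondences are already known (full ICP re-estimates them and iterates); the names are illustrative, not the paper's code.

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t with dst ~ src @ R.T + t,
    for row-wise matched (N, 3) point sets (Kabsch/Umeyama solution).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
t_true = np.array([1., 2., 3.])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true) and np.allclose(t, t_true))  # True
```

Because SURF matches give the correspondences directly, a single closed-form alignment like this already yields a good motion estimate, with ICP refining it against the full point clouds.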


International Conference on Intelligent Computer Communication and Processing | 2009

An effective method for people detection in grayscale image sequences

Mircea Popa; Gheorghe Lazea; Andras Majdik; Levente Tamas; Istvan Szoke

This paper presents a method for detecting people in images taken with a camera mounted on a robot. The purpose of the detection is to avoid collisions with people while the robot moves within an unknown environment. The method combines two algorithms. First, the appearance of people is learned using a set of Haar-like features and the AdaBoost algorithm. This information is embedded in a classifier built to differentiate human appearances from other structures. When an image is analyzed for detecting people, regions containing vertical structures are determined using image gradients. Regions with a specific aspect ratio are selected, and the classifier is applied to them. The classifier marks the regions that contain people-like structures. Because this method is intended to be integrated into an autonomous robot navigation system for dynamic environments, particular attention is paid to making the detection as fast as possible.
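The boosting side of the method can be sketched with a minimal AdaBoost over decision stumps; in the paper the weak learners are thresholded Haar-like features, which are omitted here, so generic feature columns stand in for them. All names and the round count are illustrative, not from the paper.

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost: each round picks the weighted-error-minimizing
    decision stump, then re-weights the samples it misclassified.

    X: (N, D) feature values; y: (N,) labels in {-1, +1}.
    Returns a list of (feature, threshold, polarity, alpha) stumps.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)                  # sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for f in range(d):
            for thr in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, pol, pred)
        err, f, thr, pol, pred = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)       # up-weight mistakes
        w /= w.sum()
        stumps.append((f, thr, pol, alpha))
    return stumps

def predict(stumps, X):
    score = sum(a * np.where(p * (X[:, f] - t) >= 0, 1, -1)
                for f, t, p, a in stumps)
    return np.where(score >= 0, 1, -1)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
stumps = train_adaboost_stumps(X, y, n_rounds=3)
print((predict(stumps, X) == y).all())  # True
```

The final classifier is a weighted vote of weak learners, which is why evaluation stays cheap enough for the real-time constraint stressed in the abstract.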


11th International Conference on Multimedia and Network Information Systems, MISSI 2018 | 2019

A hybrid CNN approach for single image depth estimation: A case study

Károly Harsányi; Attila Kiss; Andras Majdik; Tamás Szirányi

Three-dimensional scene understanding is an emerging field with many real-world applications. Autonomous driving, robotics, and continuous real-time tracking are hot topics within the engineering community. One essential component is the development of faster and more reliable algorithms capable of predicting depth from RGB images. Generally, it is easier to install a system with fewer cameras because it requires less calibration. Thus, our aim is to develop a strategy for predicting depth from a single image, from one point of view, as precisely as possible. Existing methods for this problem show promising results. The goal of this paper is to advance the state of the art in single-image depth prediction using convolutional neural networks. To do so, we modified an existing deep neural network to obtain improved results. The proposed architecture contains additional side-to-side connections between the encoding and decoding branches.
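The "side-to-side connections" are, in essence, tensor plumbing: decoder features are upsampled back to an encoder resolution and fused with the encoder features of that resolution, typically by channel concatenation, before further learned processing. The sketch below shows only this plumbing in NumPy; the actual network's learned convolutions and exact fusion rule are not reproduced, and all names are illustrative.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour upsampling, (C, H, W) -> (C, 2H, 2W)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def decoder_stage(decoder_feats, encoder_feats):
    """Fuse upsampled decoder features with same-resolution encoder
    features by channel concatenation (the side-to-side connection).
    A real decoder stage would follow this with learned convolutions.
    """
    up = upsample2x(decoder_feats)
    assert up.shape[1:] == encoder_feats.shape[1:], "resolutions must match"
    return np.concatenate([up, encoder_feats], axis=0)

bottleneck = np.zeros((8, 4, 4))   # coarse decoder features
skip = np.ones((4, 8, 8))          # encoder features at 2x the resolution
fused = decoder_stage(bottleneck, skip)
print(fused.shape)  # (12, 8, 8)
```

Such connections let the decoder recover fine spatial detail that was lost during downsampling, which is particularly important for sharp depth boundaries.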

Collaboration


Dive into Andras Majdik's collaboration.

Top Co-Authors

Gheorghe Lazea (Technical University of Cluj-Napoca)

Tamás Szirányi (Hungarian Academy of Sciences)

Levente Tamas (Technical University of Cluj-Napoca)

Attila Kiss (Hungarian Academy of Sciences)

Károly Harsányi (Hungarian Academy of Sciences)

Istvan Szoke (Technical University of Cluj-Napoca)

Mircea Popa (Technical University of Cluj-Napoca)

Diana Lupea (Technical University of Cluj-Napoca)