Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Iván Eichhardt is active.

Publication


Featured research published by Iván Eichhardt.


Machine Vision and Applications | 2017

Image-guided ToF depth upsampling: a survey

Iván Eichhardt; Dmitry Chetverikov; Zsolt Jankó

Recently, there has been a remarkable growth of interest in the development and applications of time-of-flight (ToF) depth cameras. Despite the continual improvement of their characteristics, the practical applicability of ToF cameras is still limited by the low resolution and quality of depth measurements. This has motivated many researchers to combine ToF cameras with other sensors in order to enhance and upsample depth images. In this paper, we review the approaches that couple ToF depth images with high-resolution optical images. Other classes of upsampling methods are also briefly discussed. Finally, we provide an overview of the performance evaluation tests presented in the related studies.
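As a quick illustration of the class of methods such surveys cover, the sketch below implements joint bilateral upsampling, a representative image-guided depth upsampling scheme: each high-resolution depth value is a weighted average of nearby low-resolution depth samples, with spatial weights computed in the low-resolution grid and range weights taken from the high-resolution guide image. The function, parameter names and sigma values are illustrative assumptions, not taken from any particular surveyed paper.

import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, sigma_spatial=1.0, sigma_range=0.1, radius=2):
    # depth_lr: (h, w) low-resolution depth map.
    # guide_hr: (H, W) high-resolution intensity image in [0, 1].
    # Returns an (H, W) upsampled depth map.
    h, w = depth_lr.shape
    H, W = guide_hr.shape
    out = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            yl, xl = y * h / H, x * w / W  # position of (y, x) in the low-res grid
            y0, x0 = min(int(round(yl)), h - 1), min(int(round(xl)), w - 1)
            num = den = 0.0
            for qy in range(max(0, y0 - radius), min(h, y0 + radius + 1)):
                for qx in range(max(0, x0 - radius), min(w, x0 + radius + 1)):
                    # Spatial weight in low-resolution coordinates.
                    ws = np.exp(-((qy - yl) ** 2 + (qx - xl) ** 2) / (2 * sigma_spatial ** 2))
                    # Range weight from the high-resolution guide image.
                    gy, gx = min(int(qy * H / h), H - 1), min(int(qx * W / w), W - 1)
                    wr = np.exp(-(guide_hr[y, x] - guide_hr[gy, gx]) ** 2 / (2 * sigma_range ** 2))
                    num += ws * wr * depth_lr[qy, qx]
                    den += ws * wr
            out[y, x] = num / den if den > 0 else depth_lr[y0, x0]
    return out

A naive double loop like this is slow in Python; practical implementations vectorize or run on the GPU, but the weighting logic is the same.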


Systems, Man and Cybernetics | 2016

Novel methods for image-guided ToF depth upsampling

Iván Eichhardt; Zsolt Jankó; Dmitry Chetverikov

Sensor fusion is an important part of modern cyber-physical systems that observe and analyse real-world environments. Time-of-Flight depth cameras provide high-frame-rate, low-resolution depth data that can be efficiently used in many applications related to cyber-physical systems. In this paper, we address the critical issue of upsampling and enhancing the low-quality depth data using a calibrated and registered high-resolution colour image or video. Two novel algorithms for image-guided depth upsampling are proposed, based on different principles. A new method for video-guided upsampling is also presented. Initial test results on synthetic and real data are shown and discussed.


Computer Vision and Image Understanding | 2018

A differential geometry approach to camera-independent image correspondence

József Molnár; Iván Eichhardt

Projective geometry is a standard mathematical tool for image-based 3D reconstruction. Most reconstruction methods establish pointwise image correspondences using projective geometry. We present an alternative approach based on differential geometry, using oriented patches rather than points. Our approach assumes that the scene to be reconstructed is observed by any camera, existing or potential, that satisfies very general conditions, namely, the differentiability of the surface and the bijective projection functions. We show how notions of differential geometry such as diffeomorphism, pushforward and pullback are related to the reconstruction problem. A unified theory applicable to various 3D reconstruction problems is presented. Considering two views of the surface, we derive reconstruction equations for oriented patches and pose equations to determine the relative pose of the two cameras. Then we discuss the generalized epipolar geometry and derive the generalized epipolar constraint (compatibility equation) along the epipolar curves. Applying the proposed theory to the projective camera and assuming that the affine mapping between small corresponding regions has been estimated, we obtain the minimal pose equation for the case when a fully calibrated camera is moved with its internal parameters unchanged. Equations for the projective epipolar constraints and the fundamental matrix are also derived. Finally, two important nonlinear camera types, the axial and the spherical, are examined.
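For orientation, the pinhole specialization referred to in the abstract reduces to the classical epipolar constraint; in standard textbook notation (not the paper's):

\[
  \mathbf{x}'^{\top} F \, \mathbf{x} = 0,
  \qquad
  F = K'^{-\top} \, [\mathbf{t}]_{\times} \, R \, K^{-1},
\]

where x and x' are corresponding image points in homogeneous coordinates, K and K' are the intrinsic matrices, and (R, t) is the relative pose of the two cameras. The paper's generalized constraint plays the analogous role along epipolar curves for the more general camera models it considers.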


International Conference on Pattern Recognition | 2016

Improvement of camera calibration using surface normals

Iván Eichhardt; Levente Hajder

A new camera calibration approach is proposed that can utilize the affine transformations and surface normals of small spatial patches. Even though classical calibration algorithms use only point locations, images contain more information than simple 2D point coordinates. New methods with closed-form solutions are presented in this paper for the calibration problem; the estimated parameters are then numerically refined. The accuracy of our novel methods is validated on synthetic test data, and the real-world applicability is demonstrated on the calibration of a 3D structured-light scanner.


International Conference on Computer Vision Theory and Applications | 2016

A Novel Technique for Point-wise Surface Normal Estimation

Daniel Barath; Iván Eichhardt

Nowadays, multi-view stereo reconstruction algorithms can achieve impressive results using many views of the scene. Our primary objective is to robustly extract more information about the underlying surface from fewer images. We present a method for point-wise surface normal and tangent plane estimation in the stereo case to reconstruct real-world scenes. The proposed algorithm works for a general camera model; however, we choose the pinhole camera to demonstrate its efficiency. The presented method uses particle swarm optimization under geometric and epipolar constraints in order to achieve suitable speed and quality. An oriented point cloud is generated using a single point correspondence for each oriented 3D point and a cost function based on photo-consistency. It can straightforwardly be extended to multi-view reconstruction. Our method is validated in both synthetic and real tests. The proposed algorithm is compared to one of the state-of-the-art patch-based multi-view reconstruction algorithms.
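To illustrate the kind of cost a particle swarm optimizer can evaluate when searching for a point-wise normal, the sketch below warps a small patch through the homography induced by a candidate tangent plane and scores it with negative normalized cross-correlation. It assumes calibrated pinhole cameras; the function names, patch size and scoring are illustrative, not the paper's exact formulation.

import numpy as np

def plane_induced_homography(K1, K2, R, t, n, d):
    # Homography induced by the plane n^T X + d = 0, expressed in camera-1 coordinates.
    return K2 @ (R - np.outer(t, n) / d) @ np.linalg.inv(K1)

def photo_consistency(img1, img2, x1, K1, K2, R, t, n, d, half=3):
    # Negative NCC between a patch around x1 in img1 and its warp into img2;
    # lower values mean better photo-consistency for the candidate plane (n, d).
    H = plane_induced_homography(K1, K2, R, t, n, d)
    p1, p2 = [], []
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            u = np.array([x1[0] + dx, x1[1] + dy, 1.0])
            v = H @ u
            xi, yi = int(round(u[0])), int(round(u[1]))
            xj, yj = int(round(v[0] / v[2])), int(round(v[1] / v[2]))
            if (0 <= yi < img1.shape[0] and 0 <= xi < img1.shape[1]
                    and 0 <= yj < img2.shape[0] and 0 <= xj < img2.shape[1]):
                p1.append(img1[yi, xi])
                p2.append(img2[yj, xj])
    if len(p1) < 2:
        return 1.0  # no usable overlap: worst possible cost
    p1 = np.asarray(p1, float)
    p2 = np.asarray(p2, float)
    p1 = (p1 - p1.mean()) / (p1.std() + 1e-9)
    p2 = (p2 - p2.mean()) / (p2.std() + 1e-9)
    return -float(np.mean(p1 * p2))

A swarm of candidate normals (e.g. parameterized by two spherical angles) can then be scored with such a cost under the epipolar constraint, which is roughly the role the photo-consistency term plays in the optimization described in the abstract.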


Sensors | 2018

Accurate Calibration of Multi-LiDAR-Multi-Camera Systems

Zoltán Pusztai; Iván Eichhardt; Levente Hajder

As autonomous driving attracts more and more attention, the algorithms and sensors used for machine perception are becoming popular research topics as well. This paper investigates the extrinsic calibration of two frequently-applied sensors: the camera and Light Detection and Ranging (LiDAR). The calibration can be carried out with the help of ordinary boxes. The method contains an iterative refinement step, which is proven to converge to the box in the LiDAR point cloud, and it can be used to calibrate systems containing multiple LiDARs and cameras. For that purpose, a bundle adjustment-like minimization is also presented. The accuracy of the method is evaluated on both synthetic and real-world data, outperforming the state-of-the-art techniques. The method is general in the sense that it is independent of both the LiDAR and camera type, and only the intrinsic camera parameters have to be known. Finally, a method for determining the 2D bounding box of the car chassis from LiDAR point clouds is also presented in order to determine the car body border with respect to the calibrated sensors.
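As a rough sketch of what a bundle adjustment-like refinement of camera-LiDAR extrinsics can look like, the snippet below minimizes the reprojection error of box corners detected in the LiDAR frame and in the image. The rotation-vector parameterization, the SciPy solver and the variable names are assumptions for illustration, not the paper's implementation.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, pts_lidar, pts_image, K):
    # params = [rx, ry, rz, tx, ty, tz]: LiDAR-to-camera rotation vector and translation.
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    cam = (R @ pts_lidar.T).T + t      # box corners in the camera frame
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]  # perspective division to pixel coordinates
    return (proj - pts_image).ravel()

def refine_extrinsics(pts_lidar, pts_image, K, init=None):
    # pts_lidar: (N, 3) corners in the LiDAR frame; pts_image: (N, 2) image detections.
    x0 = np.zeros(6) if init is None else init
    result = least_squares(reprojection_residuals, x0,
                           args=(pts_lidar, pts_image, K), method="lm")
    R = Rotation.from_rotvec(result.x[:3]).as_matrix()
    return R, result.x[3:]

In a multi-sensor setup the same residual can be stacked over every LiDAR-camera pair and every observed box, which is what makes the minimization bundle adjustment-like.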


Archive | 2018

Affine Correspondences Between Central Cameras for Rapid Relative Pose Estimation

Iván Eichhardt; Dmitry Chetverikov

This paper presents a novel algorithm to estimate the relative pose, i.e. the 3D rotation and translation of two cameras, from two affine correspondences (ACs), considering any central camera model. The solver is built on new epipolar constraints describing the relationship between an AC and any central views. We also show that the pinhole case is a specialization of the proposed approach. Benefiting from the low number of required correspondences, robust estimators like LO-RANSAC need fewer samples and thus terminate earlier than with the five-point method. Tests on publicly available datasets containing pinhole, fisheye and catadioptric camera images confirm that the method often leads to results superior to the state of the art in terms of geometric accuracy.
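For context, the pinhole baseline the paper is compared against, relative pose from point correspondences via the five-point method inside RANSAC, can be run in a few lines with OpenCV. This is only the baseline; the paper's two-affine-correspondence solver for general central cameras is not reproduced here.

import cv2
import numpy as np

def relative_pose_5pt(pts1, pts2, K):
    # pts1, pts2: (N, 2) arrays of matched pixel coordinates; K: 3x3 intrinsic matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Decompose the essential matrix and keep the pose with points in front of both cameras.
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t, mask  # rotation, unit-norm translation direction, inlier mask

A two-correspondence minimal solver lets RANSAC draw far fewer samples per iteration than the five-point solver, which is the source of the earlier termination mentioned in the abstract.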


Archive | 2014

Method and system for generating a three-dimensional model

Csaba Benedek; Dmitrij Csetverikov; Zsolt Jankó; Tamás Szirányi; Attila Börcs; Oszkár Józsa; Iván Eichhardt


International Conference on Computer Vision | 2017

Computer Vision Meets Geometric Modeling: Multi-view Reconstruction of Surface Points and Normals Using Affine Correspondences

Levente Hajder; Iván Eichhardt


Archive | 2015

A Brief Survey of Image-Based Depth Upsampling

Dmitrij Csetverikov; Iván Eichhardt; Zsolt Jankó

Collaboration


Dive into Iván Eichhardt's collaborations.

Top Co-Authors

Zsolt Jankó

Hungarian Academy of Sciences

Attila Börcs

Hungarian Academy of Sciences

Csaba Benedek

Hungarian Academy of Sciences

Dmitry Chetverikov

Eötvös Loránd University

Levente Hajder

Budapest University of Technology and Economics

Tamás Szirányi

Hungarian Academy of Sciences

Daniel Barath

Eötvös Loránd University

József Molnár

Eötvös Loránd University
