Publications


Featured research published by Kostas Daniilidis.


Computer Vision and Pattern Recognition | 2006

Fully Automatic Registration of 3D Point Clouds

Ameesh Makadia; Alexander Patterson; Kostas Daniilidis

We propose a novel technique for the registration of 3D point clouds which makes very few assumptions: we avoid any manual rough alignment or the use of landmarks, displacement can be arbitrarily large, and the two point sets can have very little overlap. Crude alignment is achieved by estimation of the 3D rotation from two Extended Gaussian Images even when the data sets inducing them have partial overlap. The technique is based on the correlation of the two EGIs in the Fourier domain and makes use of the spherical and rotational harmonic transforms. For pairs with low overlap which fail a critical verification step, the rotational alignment can be obtained by the alignment of constellation images generated from the EGIs. Rotationally aligned sets are matched by correlation using the Fourier transform of volumetric functions. A fine alignment is acquired in the final step by running Iterative Closest Point with just a few iterations.
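
The final refinement step, a few iterations of Iterative Closest Point, can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation; all function names and parameter values are our own:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # guard against reflections
    return R, cq - R @ cp

def icp(P, Q, iters=5):
    """A few ICP iterations: match each point of P to its nearest neighbor
    in Q, re-estimate the rigid transform, and apply it."""
    for _ in range(iters):
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        R, t = best_rigid_transform(P, Q[d.argmin(axis=1)])
        P = P @ R.T + t
    return P

# synthetic check: recover a small known rotation and translation
rng = np.random.default_rng(0)
Q = rng.normal(size=(100, 3))
a = 0.05                                      # small angle: crude alignment is assumed done
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
P = Q @ Rz.T + np.array([0.03, -0.02, 0.01])
aligned = icp(P, Q)
```

Brute-force nearest neighbors keep the sketch short; a real implementation would use a k-d tree.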


Intelligent Robots and Systems | 2008

Monocular visual odometry in urban environments using an omnidirectional camera

Jean-Philippe Tardif; Yanis Pavlidis; Kostas Daniilidis

We present a system for monocular simultaneous localization and mapping (mono-SLAM) relying solely on video input. Our algorithm makes it possible to precisely estimate the camera trajectory without relying on any motion model. The estimation is completely incremental: at a given time frame, only the current location is estimated while the previous camera positions are never modified. In particular, we do not perform any simultaneous iterative optimization of the camera positions and estimated 3D structure (local bundle adjustment). The key aspect of the system is a fast and simple pose estimation algorithm that uses information not only from the estimated 3D map, but also from the epipolar constraint. We show that the latter leads to a much more stable estimation of the camera trajectory than the conventional approach. We perform high precision camera trajectory estimation in urban scenes with a large amount of clutter. Using an omnidirectional camera placed on a vehicle, we cover one of the longest distances ever reported, up to 2.5 kilometers.
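
The epipolar constraint that anchors the pose estimation can be illustrated on synthetic data: for normalized image points x1, x2 related by a rotation R and translation t, the essential matrix E = [t]×R satisfies x2ᵀE x1 = 0. A small sketch; the conventions and values are ours, not from the paper:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

rng = np.random.default_rng(1)
a = 0.2                                     # relative rotation about the y axis
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
t = np.array([1.0, 0.0, 0.1])               # relative translation

X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(50, 3))  # points in front of both cameras
x1 = X / X[:, 2:3]                          # normalized coordinates, camera 1
X2 = X @ R.T + t                            # same points in camera-2 frame
x2 = X2 / X2[:, 2:3]                        # normalized coordinates, camera 2

E = skew(t) @ R                             # essential matrix
residuals = np.einsum('ni,ij,nj->n', x2, E, x1)   # x2^T E x1 per correspondence
```

The residuals vanish up to floating-point error, which is what makes the constraint usable as an extra cue alongside the 3D map.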


IEEE Transactions on Robotics | 2009

Vision-Based, Distributed Control Laws for Motion Coordination of Nonholonomic Robots

Nima Moshtagh; Nathan Michael; Ali Jadbabaie; Kostas Daniilidis

In this paper, we study the problem of distributed motion coordination among a group of nonholonomic ground robots. We develop vision-based control laws for parallel and balanced circular formations using a consensus approach. The proposed control laws are distributed in the sense that they require information only from neighboring robots. Furthermore, the control laws are coordinate-free and do not rely on measurement or communication of heading information among neighbors but instead require measurements of bearing, optical flow, and time to collision, all of which can be measured using visual sensors. Collision-avoidance capabilities are added to the team members, and the effectiveness of the control laws is demonstrated on a group of mobile robots.
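
The parallel-formation idea, headings driven to agreement using only relative quantities measurable by neighbors, can be sketched with a discrete-time heading-consensus simulation. The ring communication graph, gain, and step count below are our own illustrative choices, not the paper's control law:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, steps = 8, 0.2, 400
theta = rng.uniform(-1.0, 1.0, n)           # initial headings (rad), spread < pi

for _ in range(steps):
    # each robot sees its two ring neighbors; the update uses only
    # relative headings, never an absolute heading in a common frame
    left, right = np.roll(theta, 1), np.roll(theta, -1)
    theta = theta + k * (np.sin(left - theta) + np.sin(right - theta))

spread = theta.max() - theta.min()          # shrinks toward zero: parallel motion
```

In the paper the relative quantities come from vision (bearing, optical flow, time to collision) rather than being read off directly as here.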


IEEE Transactions on Robotics | 2009

Vision-Based Localization for Leader–Follower Formation Control

Gian Luca Mariottini; Fabio Morbidi; Domenico Prattichizzo; N. Vander Valk; Nathan Michael; George J. Pappas; Kostas Daniilidis

This paper deals with vision-based localization for leader-follower formation control. Each unicycle robot is equipped with a panoramic camera that only provides the view angle to the other robots. The localization problem is studied using a new observability condition valid for general nonlinear systems and based on the extended output Jacobian. This allows us to identify those robot motions that preserve the system observability and those that render it nonobservable. The state of the leader-follower system is estimated via the extended Kalman filter, and an input-state feedback control law is designed to stabilize the formation. Simulations and real-data experiments confirm the theoretical results and show the effectiveness of the proposed formation control.
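
The extended Kalman filter update with a bearing-only (view angle) measurement can be sketched as follows. This is a simplified two-state model, the leader's planar position in the follower's frame, not the paper's full unicycle formulation; all names and values are ours:

```python
import numpy as np

def ekf_bearing_update(x, P, z, R_meas):
    """One EKF measurement update with a bearing-only observation.
    State x = (px, py): leader position in the follower frame (simplified)."""
    px, py = x
    h = np.arctan2(py, px)                        # predicted bearing
    r2 = px**2 + py**2
    H = np.array([[-py / r2, px / r2]])           # Jacobian of arctan2(py, px)
    S = H @ P @ H.T + R_meas                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    innov = np.arctan2(np.sin(z - h), np.cos(z - h))  # wrapped angle residual
    x_new = x + (K * innov).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

# true leader at bearing 45 degrees; the initial estimate is off
x = np.array([2.0, 0.5])
P = np.eye(2)
z = np.arctan2(1.0, 1.0)                          # observed bearing
x_upd, P_upd = ekf_bearing_update(x, P, z, np.array([[0.01]]))
```

A single bearing cannot fix range, which is why the paper's observability analysis matters: only certain leader motions make the full state recoverable over time.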


Computer Vision and Pattern Recognition | 2001

Structure and motion from uncalibrated catadioptric views

Christopher Geyer; Kostas Daniilidis

In this paper we present a new algorithm for structure from motion from point correspondences in images taken from uncalibrated catadioptric cameras with parabolic mirrors. We assume three unknown intrinsic parameters: the combined focal length of the mirror and lens, and the two coordinates of the intersection of the optical axis with the image. We introduce a new representation for images of points and lines in catadioptric images which we call the circle space. This circle space includes imaginary circles, one of which is the image of the absolute conic. We formulate the epipolar constraint in this space and establish a new 4×4 catadioptric fundamental matrix. We show that the image of the absolute conic belongs to the kernel of this matrix. This enables us to prove that Euclidean reconstruction is feasible from two views with constant parameters and from three views with varying parameters. In both cases, this is one less than the number of views necessary with perspective cameras.
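
The camera model underlying this formulation can be sketched via the sphere: a parabolic catadioptric projection is equivalent to central projection onto the unit sphere followed by stereographic projection from the north pole. A minimal sketch with our own function name and conventions (unit focal length by default):

```python
import numpy as np

def parabolic_project(X, f=1.0):
    """Project a 3D point through a parabolic catadioptric camera using the
    sphere model: central projection onto the unit sphere, then stereographic
    projection from the north pole. f stands in for the combined
    mirror-plus-lens focal length."""
    s = X / np.linalg.norm(X)               # direction on the viewing sphere
    return f * s[:2] / (1.0 - s[2])         # stereographic projection to the plane

# central projection: all points on a ray map to the same image point
X = np.array([1.0, 2.0, 3.0])
u1 = parabolic_project(X)
u2 = parabolic_project(5.0 * X)
```

Stereographic projection maps circles on the sphere to circles (or lines) in the plane, which is the geometric fact the circle-space representation exploits.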


Conference on Decision and Control | 2005

Vision-based Localization of Leader-Follower Formations

Gian Luca Mariottini; George J. Pappas; Domenico Prattichizzo; Kostas Daniilidis

This paper focuses on the localization problem for a mobile camera network. In particular, we consider the case of leader-follower formations of nonholonomic mobile vehicles equipped with vision sensors which provide only the bearing to the other robots. We prove a sufficient condition for observability and show that recursive estimation enables a leader-follower formation if the leader is not trapped in an unobservable configuration. We employ an Extended Kalman Filter for the estimation of each follower position and orientation with respect to the leader and we adopt a feedback linearizing control strategy to achieve a desired formation. Simulation results in a noisy environment are provided.
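
The feedback-linearizing control idea can be sketched for a single unicycle: commanding the velocity of a point at distance d ahead of the wheel axle makes that point's dynamics linear, after which a simple proportional law drives it to a goal. A toy simulation with our own gains and parameters, not the paper's formation controller:

```python
import numpy as np

def unicycle_fl_step(state, target, d=0.2, gain=1.0, dt=0.05):
    """One step of feedback-linearizing control for a unicycle (x, y, theta):
    steer the point at distance d ahead of the wheels toward target."""
    x, y, th = state
    px, py = x + d * np.cos(th), y + d * np.sin(th)          # off-axis point
    ux, uy = gain * (target[0] - px), gain * (target[1] - py)  # desired point velocity
    v = ux * np.cos(th) + uy * np.sin(th)    # invert the point kinematics:
    w = (-ux * np.sin(th) + uy * np.cos(th)) / d  # (v, d*w) = R(th)^T u
    return np.array([x + dt * v * np.cos(th),
                     y + dt * v * np.sin(th),
                     th + dt * w])

state = np.array([0.0, 0.0, 0.0])
target = (1.0, 1.0)
for _ in range(400):
    state = unicycle_fl_step(state, target)
```

In the paper the goal is a desired relative position with respect to the leader, estimated by the EKF, rather than a fixed point as here.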


Computer Vision and Pattern Recognition | 2016

Sparseness Meets Deepness: 3D Human Pose Estimation from Monocular Video

Xiaowei Zhou; Menglong Zhu; Spyridon Leonardos; Konstantinos G. Derpanis; Kostas Daniilidis

This paper addresses the challenge of 3D full-body human pose estimation from a monocular image sequence. Here, two cases are considered: (i) the image locations of the human joints are provided and (ii) the image locations of joints are unknown. In the former case, a novel approach is introduced that integrates a sparsity-driven 3D geometric prior and temporal smoothness. In the latter case, the former case is extended by treating the image locations of the joints as latent variables to take into account considerable uncertainties in 2D joint locations. A deep fully convolutional network is trained to predict the uncertainty maps of the 2D joint locations. The 3D pose estimates are realized via an Expectation-Maximization algorithm over the entire sequence, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Empirical evaluation on the Human3.6M dataset shows that the proposed approaches achieve higher 3D pose estimation accuracy than state-of-the-art baselines. Further, the proposed approach outperforms a publicly available 2D pose estimation baseline on the challenging PennAction dataset.
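
The role of the 2D uncertainty maps can be illustrated with a toy sketch: treat a joint's heatmap as a distribution over pixel locations and take its expectation instead of committing to a single peak. This is our own minimal illustration, not the paper's EM inference:

```python
import numpy as np

def expected_location(heatmap):
    """Treat a 2D heatmap as an unnormalized distribution over pixel
    locations and return the expected (x, y) location."""
    p = heatmap / heatmap.sum()
    ys, xs = np.mgrid[0:heatmap.shape[0], 0:heatmap.shape[1]]
    return np.array([(p * xs).sum(), (p * ys).sum()])

# toy heatmap: an isotropic Gaussian bump centered at x = 12, y = 7
ys, xs = np.mgrid[0:32, 0:32]
hm = np.exp(-((xs - 12.0) ** 2 + (ys - 7.0) ** 2) / (2 * 2.0 ** 2))
loc = expected_location(hm)
```

Keeping the full distribution, as the paper does inside EM, lets ambiguous detections contribute softly instead of forcing a hard and possibly wrong 2D commitment.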


Proceedings of the IEEE Workshop on Omnidirectional Vision 2002. Held in conjunction with ECCV'02 | 2002

Image processing in catadioptric planes: spatiotemporal derivatives and optical flow computation

Kostas Daniilidis; Ameesh Makadia; Thomas Bülow

Images produced by catadioptric sensors contain a significant amount of radial distortion and variation in inherent scale. Blind application of conventional shift-invariant operators or optical flow estimators yields erroneous results. One could argue that given a calibration of such a sensor we would always be able to remove distortions and apply any operator in a local perspective plane. In addition to the inefficiency of such an approach, interpolation effects during warping have undesired results in filtering. In this paper, we propose to use the sphere as the underlying domain of image processing in central catadioptric systems. This does not mean that we will warp the catadioptric image into a spherical image. Instead, we will formulate all the operations on the sphere but use the samples from the original catadioptric plane. As an example, we study the convolution with the Gaussian and its derivatives, as well as the computation of optical flow in image sequences acquired with a parabolic catadioptric sensor.
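
Formulating operations on the sphere while keeping the samples in the catadioptric plane relies on the plane-to-sphere correspondence; for a parabolic sensor this is inverse stereographic projection. A minimal sketch, with our own normalization conventions (unit focal length):

```python
import numpy as np

def plane_to_sphere(u, v):
    """Inverse stereographic projection: map a catadioptric image-plane
    sample (u, v) to the corresponding point on the unit sphere
    (parabolic-mirror model, unit focal length)."""
    r2 = u**2 + v**2
    return np.array([2 * u, 2 * v, r2 - 1]) / (r2 + 1)

# each plane sample lands exactly on the unit sphere
s = plane_to_sphere(0.3, -0.7)
```

With this map, a spherical filter can be evaluated directly at the sphere positions of the original plane samples, avoiding the interpolation artifacts of warping.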


Intelligent Robots and Systems | 2004

Sampling based sensor-network deployment

Volkan Isler; Sampath Kannan; Kostas Daniilidis

In this paper, we consider the problem of placing networked sensors in a way that guarantees coverage and connectivity. We focus on sampling based deployment and present algorithms that guarantee coverage and connectivity with a small number of sensors. We consider two different scenarios based on the flexibility of deployment. If deployment has to be accomplished in one step, as in airborne deployment, then the main question becomes how many sensors are needed. If deployment can be implemented in multiple steps, then awareness of coverage and connectivity can be updated. For this case, we present incremental deployment algorithms, which consider the current placement to adjust the sampling domain. The algorithms are simple, easy to implement, and require a small number of sensors. We believe the concepts and algorithms presented in this paper provide a unifying framework for existing and future deployment algorithms that address practical issues not considered in the present work.
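
The incremental deployment idea can be sketched as rejection sampling: accept a random candidate position only if it stays within communication range of an already-placed sensor and is not redundantly close to one. The radii, sensor count, and acceptance rule below are our own illustrative choices, not the paper's algorithms:

```python
import numpy as np

def deploy(n, r_comm=0.3, r_min=0.1, seed=3):
    """Incremental sampling-based deployment in the unit square. A candidate
    is accepted only if its nearest placed sensor is within r_comm
    (connectivity) but no closer than r_min (non-redundant coverage)."""
    rng = np.random.default_rng(seed)
    placed = [rng.uniform(0, 1, 2)]
    tries = 0
    while len(placed) < n and tries < 100000:
        tries += 1
        c = rng.uniform(0, 1, 2)
        d = np.linalg.norm(np.array(placed) - c, axis=1)
        if r_min <= d.min() <= r_comm:
            placed.append(c)
    return np.array(placed)

sensors = deploy(20)
```

Connectivity holds by construction: every accepted sensor attaches within communication range of an earlier one, so the network graph grows as a single connected component.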


International Conference on Computer Vision | 2005

Fundamental matrix for cameras with radial distortion

João Pedro Barreto; Kostas Daniilidis

When deploying a heterogeneous camera network or when we use cheap zoom cameras such as those in cell phones, it is not practical, if not impossible, to calibrate the radial distortion of each camera off-line using reference objects. It is rather desirable to have an automatic procedure without strong assumptions about the scene. In this paper, we present a new algorithm for estimating the epipolar geometry of two views where the two views can be radially distorted with different distortion factors. It is the first algorithm in the literature solving the case of different distortion in the left and right view linearly and without assuming the existence of lines in the scene. Points in the projective plane are lifted to a quadric in three-dimensional projective space. A radial distortion of the projective plane results in a matrix transformation in the space of lifted coordinates. The new epipolar constraint depends linearly on a 4×4 radial fundamental matrix which has 9 degrees of freedom. A complete algorithm is presented and tested on real imagery.
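
The lifting idea can be illustrated with the division model of radial distortion: after lifting an image point to (x, y, 1, x² + y²), undistortion becomes a linear map on the lifted coordinates, which is what allows the epipolar constraint to stay linear in a 4×4 radial fundamental matrix. The parameterization below is one common choice, used here as a sketch rather than the paper's exact construction:

```python
import numpy as np

def lift(p):
    """Lift an image point (x, y) to 4D: (x, y, 1, x^2 + y^2)."""
    x, y = p
    return np.array([x, y, 1.0, x**2 + y**2])

def undistort_division(p, lam):
    """Division-model undistortion as a reference: the homogeneous point
    (x, y, 1 + lam * r^2), normalized back to the plane."""
    x, y = p
    w = 1.0 + lam * (x**2 + y**2)
    return np.array([x / w, y / w])

lam = -0.2                                  # distortion coefficient (our choice)
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, lam]])        # undistortion, linear in lifted coords
p = np.array([0.4, -0.3])
q = A @ lift(p)                             # homogeneous undistorted point
```

Because distortion acts linearly on lifted points, it can be absorbed into the fundamental matrix itself instead of being calibrated away beforehand.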

Collaboration


Dive into Kostas Daniilidis's collaborations.

Top Co-Authors

Xiaowei Zhou, University of Pennsylvania
Ameesh Makadia, University of Pennsylvania
Nikolay Atanasov, University of Pennsylvania
George J. Pappas, University of Pennsylvania
Vijay Kumar, University of Pennsylvania
Roberto Tron, University of Pennsylvania
Menglong Zhu, University of Pennsylvania
Petros Maragos, National Technical University of Athens