Luca Ballan
ETH Zurich
Publication
Featured research published by Luca Ballan.
International Conference on Computer Vision | 2011
Aparna Taneja; Luca Ballan; Marc Pollefeys
In this paper, we propose an efficient technique to detect changes in the geometry of an urban environment using images observing its current state. The proposed method can be used to significantly optimize the process of updating the 3D model of a city changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring any changes in its appearance, as well as all changes that are not relevant for update purposes, such as cars and people. As a by-product, the algorithm also provides a coarse geometry of the detected changes. The performance of the proposed method was tested on four different kinds of urban environments and compared with two alternative techniques.
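The abstract does not detail the detection machinery, so the following is only a minimal sketch of the general idea of image-based geometric change detection: render (or otherwise obtain) the depth of the existing model in one view, use it to warp a second image into that view, and flag pixels where the two observations disagree. All inputs (grayscale float images, intrinsics `K`, relative pose `R_ab`, `t_ab`, the rendered depth map, and the threshold) are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): flag likely geometric
# changes by checking photo-consistency of two images against the depth
# rendered from the existing 3D model.
import numpy as np

def backproject(depth, K):
    """Lift every pixel of a depth map to 3D points in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix                # 3 x N ray directions
    return rays * depth.reshape(1, -1)           # 3 x N points

def warp_and_compare(img_a, img_b, depth_a, K, R_ab, t_ab, thresh=0.2):
    """Warp img_b into view A using the model depth of view A and mark
    pixels where the intensities disagree (candidate structural changes).
    Images are assumed grayscale floats in [0, 1]."""
    h, w = depth_a.shape
    pts_a = backproject(depth_a, K)              # points in camera A
    pts_b = R_ab @ pts_a + t_ab.reshape(3, 1)    # transform into camera B
    proj = K @ pts_b
    uv = (proj[:2] / np.clip(proj[2:], 1e-6, None)).round().astype(int)
    valid = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    warped = np.zeros(h * w)
    warped[valid] = img_b[uv[1, valid], uv[0, valid]]
    diff = np.abs(img_a.reshape(-1) - warped)
    change_mask = (diff > thresh) & valid        # large disagreement
    return change_mask.reshape(h, w)
```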
Computer Vision and Pattern Recognition | 2013
Aparna Taneja; Luca Ballan; Marc Pollefeys
In this paper, we propose a method to detect changes in the geometry of a city using panoramic images captured by a car driving around the city. We designed our approach to account for all the challenges involved in a large-scale application of change detection, such as inaccuracies in the input geometry, errors in the geo-location data of the images, and the limited amount of information due to sparse imagery. We evaluated our approach on an area of 6 square kilometers inside a city, using 3420 images downloaded from Google Street View. These images, besides being publicly available, are a good example of panoramic imagery captured from a driving vehicle, and hence exhibit all the challenges resulting from such an acquisition. We also quantitatively compared the performance of our approach with respect to a ground truth, as well as to prior work. This evaluation shows that our approach outperforms the current state of the art.
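Any geometric reasoning on Street View panoramas first requires mapping panorama pixels to viewing rays. The helper below is a standard equirectangular-to-ray conversion, included only as an illustrative assumption about the image format; it is not part of the published pipeline, and the axis convention is an assumption.

```python
import numpy as np

def panorama_pixel_to_ray(u, v, width, height):
    """Convert an equirectangular panorama pixel (u, v) to a unit viewing
    ray in the camera frame (x forward, y left, z up convention assumed)."""
    lon = (u / width) * 2.0 * np.pi - np.pi      # longitude in [-pi, pi]
    lat = np.pi / 2.0 - (v / height) * np.pi     # latitude in [-pi/2, pi/2]
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    return np.array([x, y, z])
```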
International Journal of Computer Vision | 2016
Dimitrios Tzionas; Luca Ballan; Abhilash Srikantha; Marc Pollefeys; Juergen Gall
Hand motion capture is a popular research field that has recently gained more attention due to the ubiquity of RGB-D sensors. However, even the most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or with objects, and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error, and with collision detection and physics simulation to achieve physically plausible estimates even in the case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as for setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.
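The central point of the abstract is that several terms (data, salient points, collision/physics) are folded into one differentiable objective that standard optimizers can handle. The toy sketch below only mirrors that structure with made-up placeholder residuals and weights; it is not the published energy and the individual terms here have no relation to the actual model-to-depth, salient-point, or collision costs.

```python
import numpy as np
from scipy.optimize import minimize

# Toy sketch of a combined tracking objective: each term returns a scalar
# cost as a function of a pose parameter vector theta. The terms below are
# placeholders standing in for the real model-fitting costs.
def data_term(theta, observed):
    return np.sum((theta - observed) ** 2)

def salient_point_term(theta, detections):
    return np.sum((theta[:len(detections)] - detections) ** 2)

def collision_term(theta):
    # Smooth hinge-style penalty standing in for interpenetration costs.
    return np.sum(np.maximum(0.0, 1.0 - np.abs(theta)) ** 2)

def objective(theta, observed, detections, w_sal=0.5, w_col=0.1):
    return (data_term(theta, observed)
            + w_sal * salient_point_term(theta, detections)
            + w_col * collision_term(theta))

observed = np.random.randn(10)
detections = np.random.randn(4)
result = minimize(objective, x0=np.zeros(10), args=(observed, detections))
print(result.x)
```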
International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission | 2012
Aparna Taneja; Luca Ballan; Marc Pollefeys
The availability of geolocated panoramic images of urban environments has been increasing in the recent past thanks to services like Google Street View, Microsoft Street Side, and Navteq. Although their primary application is street navigation, these images can be used, along with cadastral information, for city planning, real-estate evaluation, and tracking of changes in an urban environment. The geolocation information provided with these images is, however, not accurate enough for such applications: the inaccuracy affects both the position and orientation of the camera, due to noise introduced during acquisition. We propose a method to refine the calibration of these images by leveraging cadastral 3D information, typically available in urban scenarios. We evaluated the algorithm on a city-scale dataset spanning commercial and residential areas, as well as the countryside.
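As a rough illustration of what refining a noisy geotag against cadastral 3D data can look like, the sketch below minimizes the reprojection error of known 3D points (e.g., building corners from the cadastral model) against their detected image positions, starting from the GPS/compass pose. The point correspondences, intrinsics, and initialization are all hypothetical inputs, and the published method does not necessarily use point correspondences at all.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def reprojection_residuals(params, pts3d, pts2d, K):
    """Residuals between detected 2D points and projected cadastral 3D points."""
    R = rodrigues(params[:3])
    t = params[3:]
    cam = R @ pts3d.T + t[:, None]
    proj = K @ cam
    uv = proj[:2] / proj[2]
    return (uv.T - pts2d).ravel()

def refine_pose(pts3d, pts2d, K, rvec0, t0):
    """Refine a noisy camera pose (rvec0, t0) using hypothetical 3D-2D
    correspondences taken from a cadastral model."""
    x0 = np.hstack([rvec0, t0])
    res = least_squares(reprojection_residuals, x0, args=(pts3d, pts2d, K))
    return rodrigues(res.x[:3]), res.x[3:]
```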
Asian Conference on Computer Vision | 2010
Aparna Taneja; Luca Ballan; Marc Pollefeys
Dynamic scene modeling is a challenging problem in computer vision. Many techniques have been developed to address this problem, but most of them focus on achieving accurate reconstructions in controlled environments, where the background and the lighting are known and the cameras are fixed and calibrated. Recent approaches have relaxed these requirements by applying these techniques to outdoor scenarios. The problem becomes even harder, however, when the cameras are allowed to move during the recording, since no background color model can be easily inferred. In this paper we propose a new approach to model dynamic scenes captured in outdoor environments with moving cameras. A probabilistic framework is proposed to deal with such a scenario and to provide a volumetric reconstruction of all the dynamic elements of the scene. The proposed algorithm was tested on a publicly available dataset filmed outdoors with six moving cameras. A quantitative evaluation of the method was also performed on synthetic data. The obtained results demonstrate the effectiveness of the approach, considering the complexity of the problem.
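To make the idea of a probabilistic volumetric reconstruction concrete, here is a minimal, assumption-laden sketch of probabilistic space carving: each voxel is projected into every view and scored by the per-view foreground probability, so a voxel projecting onto background in any view receives a low occupancy score. The projection matrices and the foreground probability maps are hypothetical inputs; the paper's actual probabilistic model is more involved than this product of per-view probabilities.

```python
import numpy as np

def occupancy_probability(voxels, cameras, fg_prob_maps, eps=1e-6):
    """Toy probabilistic carving: a voxel is likely occupied if it projects
    onto foreground in (nearly) all views. `voxels` is an N x 3 array of
    voxel centers, `cameras` a list of 3x4 projection matrices, and
    `fg_prob_maps` per-view foreground probability images."""
    n = voxels.shape[0]
    log_p = np.zeros(n)
    homog = np.hstack([voxels, np.ones((n, 1))])          # N x 4
    for P, fg in zip(cameras, fg_prob_maps):
        proj = P @ homog.T
        uv = (proj[:2] / np.clip(proj[2:], eps, None)).round().astype(int)
        h, w = fg.shape
        inside = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
        p = np.full(n, eps)                               # outside image: near-zero support
        p[inside] = np.clip(fg[uv[1, inside], uv[0, inside]], eps, 1.0)
        log_p += np.log(p)
    return np.exp(log_p)                                  # product over views
```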
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015
Aparna Taneja; Luca Ballan; Marc Pollefeys
We propose a method to detect changes in the geometry of a city using panoramic images captured by a car driving around the city. The proposed method can be used to significantly optimize the process of updating the 3D model of an urban environment that is changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring any changes in its appearance, as well as all changes that are not relevant for update purposes, such as cars and people. The approach also accounts for the challenges involved in a large-scale application of change detection, such as inaccuracies in the input geometry, errors in the geo-location data of the images, and the limited amount of information due to sparse imagery. We evaluated our approach on a small-scale setup using high-resolution, densely captured images, and on a large-scale setup covering an entire city using the more realistic scenario of low-resolution, sparsely captured images. A quantitative evaluation was also conducted for the large-scale setup, which consists of 14,000 images.
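The practical payoff described above is restricting 3D model updates to areas with consistent change evidence. The sketch below only illustrates that aggregation step under simple assumptions: geo-located change detections (x, y in meters) are binned on a ground grid and a cell is reported only when enough independent observations agree. Cell size and vote threshold are arbitrary, and the published system does not necessarily aggregate evidence this way.

```python
import numpy as np

def changed_cells(detections, cell_size=10.0, min_votes=3):
    """Accumulate geo-located change detections (x, y in meters) on a 2D
    ground grid and return the cells with enough independent evidence;
    only those cells would then be re-modeled."""
    detections = np.asarray(detections, dtype=float)
    ij = np.floor(detections / cell_size).astype(int)     # grid indices per detection
    cells, counts = np.unique(ij, axis=0, return_counts=True)
    return cells[counts >= min_votes] * cell_size          # lower-left cell corners
```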
Workshop on Applications of Computer Vision | 2012
Jens Puwein; Remo Ziegler; Luca Ballan; Marc Pollefeys
In sports broadcasts, networks consisting of pan-tilt-zoom (PTZ) cameras usually exhibit very wide baselines, making standard matching techniques for camera calibration very hard to apply. If, additionally, there is a lack of texture, finding corresponding image regions becomes almost impossible. However, such networks are often set up to observe dynamic scenes on a ground plane. Corresponding image trajectories produced by moving objects need to fulfill specific geometric constraints, which can be leveraged for camera calibration. We present a method which combines image trajectory matching with the self-calibration of rotating and zooming cameras, effectively reducing the remaining degrees of freedom in the matching stage to a 2D similarity transformation. Additionally, lines on the ground plane are used to improve the calibration. In the end, all extrinsic and intrinsic camera parameters are refined in a final bundle adjustment. The proposed algorithm was evaluated both qualitatively and quantitatively on four different soccer sequences.
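Once the matching problem is reduced to a 2D similarity transformation on the ground plane, aligning two corresponding trajectories is a closed-form least-squares problem. The sketch below is the standard Umeyama/Procrustes solution for a 2D similarity (scale, rotation, translation); it is included only to illustrate the reduced problem, not as the authors' matching procedure, which also handles the correspondence search itself.

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Least-squares 2D similarity (scale, rotation, translation) aligning
    two corresponding point trajectories (N x 2 arrays), via the standard
    Umeyama/Procrustes solution."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d                          # centered trajectories
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))           # cross-covariance
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])     # keep a proper rotation
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t
```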
European Conference on Computer Vision | 2014
Jens Puwein; Luca Ballan; Remo Ziegler; Marc Pollefeys
We propose a method for human pose estimation which extends the common unary and pairwise terms of graphical models with a global foreground term. Given knowledge of the per-pixel foreground, a pose should not only be plausible according to the graphical model but should also explain the foreground well.
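As a crude illustration of what "explaining the foreground" can mean, the sketch below scores a candidate pose by how well its body parts (here grossly approximated by axis-aligned boxes) overlap the foreground mask, using an IoU-style measure. This is only the spirit of a global foreground term under stated simplifications, not the published energy.

```python
import numpy as np

def foreground_term(fg_mask, part_boxes):
    """Toy global foreground term: reward candidate poses whose rendered
    body parts (approximated by (x0, y0, x1, y1) boxes) cover the foreground
    mask and do not spill onto the background."""
    covered = np.zeros_like(fg_mask, dtype=bool)
    for (x0, y0, x1, y1) in part_boxes:
        covered[y0:y1, x0:x1] = True
    fg = fg_mask.astype(bool)
    intersection = np.logical_and(covered, fg).sum()
    union = np.logical_or(covered, fg).sum()
    return intersection / max(union, 1)                    # IoU-style score in [0, 1]
```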
Asian Conference on Computer Vision | 2014
Aparna Taneja; Luca Ballan; Marc Pollefeys
In this paper we propose a simple and lightweight solution to estimate the geospatial trajectory of a moving vehicle from images captured by a cellphone, exploiting the map and imagery provided by Google Street View.
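A very reduced version of the underlying idea, matching vehicle frames against geotagged Street View imagery to recover a coarse trajectory, is sketched below under strong assumptions: each frame and each panorama is summarized by a precomputed descriptor vector (how these descriptors are computed is not specified here), and every frame simply inherits the GPS position of its nearest panorama. The actual method is more sophisticated than this nearest-neighbor lookup.

```python
import numpy as np

def localize_frames(frame_desc, pano_desc, pano_gps):
    """Assign each video frame the GPS position of the most similar Street
    View panorama (by descriptor distance), yielding a coarse geospatial
    trajectory. `frame_desc` is F x D, `pano_desc` is P x D, `pano_gps` is P x 2."""
    d = np.linalg.norm(frame_desc[:, None, :] - pano_desc[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                  # index of closest panorama per frame
    return pano_gps[nearest]
```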
Asian Conference on Computer Vision | 2014
Jens Puwein; Luca Ballan; Remo Ziegler; Marc Pollefeys
In this paper we propose an approach to jointly perform camera pose estimation and human pose estimation from videos recorded by a set of cameras separated by wide baselines. Multi-camera pose estimation is very challenging in the case of wide baselines, or in general when patch-based feature correspondences are difficult to establish across images.
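One ingredient that couples the two problems is that detected body joints act as correspondences across the wide-baseline views: given camera matrices, joints can be triangulated, and the resulting 3D skeleton can in turn constrain the cameras. The sketch below shows only the standard linear (DLT) two-view triangulation of a single joint; the alternation with camera refinement, and the rest of the published pipeline, is not reproduced here.

```python
import numpy as np

def triangulate_joint(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one body joint from two views, given
    3x4 projection matrices P1, P2 and pixel detections x1, x2 = (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                                  # homogeneous 3D point
    return X[:3] / X[3]
```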