Publications


Featured research published by Aparna Taneja.


International Conference on Computer Vision | 2011

Image based detection of geometric changes in urban environments

Aparna Taneja; Luca Ballan; Marc Pollefeys

In this paper, we propose an efficient technique to detect changes in the geometry of an urban environment using some images observing its current state. The proposed method can be used to significantly optimize the process of updating the 3D model of a city changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring changes in its appearance as well as changes that are not relevant for update purposes, such as cars and people. As a by-product, the algorithm also provides a coarse geometry of the detected changes. The performance of the proposed method was tested on four different kinds of urban environments and compared with two alternative techniques.
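
The abstract gives no implementation details, so the following is only a minimal sketch of the general idea of flagging geometric (rather than appearance) discrepancies between the existing model and current observations. It assumes, hypothetically, that a depth map rendered from the 3D model and a depth map estimated from the new images are available for each view; the relative-depth threshold and the region-size filter used to discard small movable objects such as cars and people are illustrative choices, not the paper's actual mechanism.

```python
import numpy as np
from scipy import ndimage

def detect_geometric_changes(model_depth, observed_depth,
                             rel_tol=0.05, min_region_px=500):
    """Flag pixels where the depth predicted by the city model disagrees
    with the depth observed in current imagery, then keep only large
    connected regions so that small movable objects are ignored.
    All inputs and thresholds are hypothetical."""
    valid = (model_depth > 0) & (observed_depth > 0)
    rel_err = np.zeros_like(model_depth, dtype=float)
    rel_err[valid] = np.abs(observed_depth[valid] - model_depth[valid]) / model_depth[valid]
    changed = valid & (rel_err > rel_tol)

    # Connected-component filtering: discard small changed regions.
    labels, n = ndimage.label(changed)
    sizes = ndimage.sum(changed, labels, index=np.arange(1, n + 1))
    keep_labels = 1 + np.flatnonzero(sizes >= min_region_px)
    return np.isin(labels, keep_labels)  # boolean mask of candidate structural changes
```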


Computer Vision and Pattern Recognition | 2013

City-Scale Change Detection in Cadastral 3D Models Using Images

Aparna Taneja; Luca Ballan; Marc Pollefeys

In this paper, we propose a method to detect changes in the geometry of a city using panoramic images captured by a car driving around the city. We designed our approach to account for all the challenges involved in a large scale application of change detection, such as inaccuracies in the input geometry, errors in the geo-location data of the images, and the limited amount of information due to sparse imagery. We evaluated our approach on an area of 6 square kilometers inside a city, using 3420 images downloaded from Google Street View. These images, besides being publicly available, are a good example of panoramic images captured from a driving vehicle, and hence exhibit all the challenges resulting from such an acquisition. We also quantitatively compared the performance of our approach with respect to a ground truth, as well as to prior work. This evaluation shows that our approach outperforms the current state of the art.
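
At city scale, any single panorama provides only weak, noisy evidence, so per-image detections have to be pooled across the sparse imagery. The snippet below is a rough illustration of one way to do this (naive log-odds pooling under an independence assumption); it is not the formulation used in the paper, and all names are hypothetical.

```python
import math
from collections import defaultdict

def pool_change_evidence(observations):
    """observations: iterable of (building_id, p_change) pairs, one per
    panorama in which the building is visible.  Combines them into a
    per-building change probability via log-odds accumulation."""
    log_odds = defaultdict(float)
    for building_id, p in observations:
        p = min(max(p, 1e-6), 1.0 - 1e-6)          # guard against infinities
        log_odds[building_id] += math.log(p / (1.0 - p))
    return {b: 1.0 / (1.0 + math.exp(-lo)) for b, lo in log_odds.items()}

# Example: three panoramas see building 42, one sees building 7.
scores = pool_change_evidence([(42, 0.8), (42, 0.7), (42, 0.9), (7, 0.3)])
```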


International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission | 2012

Registration of Spherical Panoramic Images with Cadastral 3D Models

Aparna Taneja; Luca Ballan; Marc Pollefeys

The availability of geolocated panoramic images of urban environments has been increasing in the recent past thanks to services like Google Street View, Microsoft Street Side, and Navteq. Although their primary application is street navigation, these images can be used, along with cadastral information, for city planning, real-estate evaluation and tracking of changes in an urban environment. The geolocation information provided with these images is, however, not accurate enough for such applications: this inaccuracy can be observed in both the position and orientation of the camera, due to noise introduced during the acquisition. We propose a method to refine the calibration of these images leveraging cadastral 3D information, typically available in urban scenarios. We evaluated the algorithm on a city scale dataset, spanning commercial and residential areas, as well as the countryside.
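
As a rough illustration of the kind of refinement involved, the sketch below adjusts a panorama's 2D position and heading so that cadastral building corners project onto the image columns where they were observed, using a simple equirectangular azimuth model. The optimization setup, inputs and parameter names are assumptions for illustration only, not the paper's registration pipeline.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_panorama_pose(initial_pose, corners_world, observed_cols, pano_width):
    """initial_pose: (x, y, heading) from the noisy geotag.
    corners_world: (N, 2) cadastral corner positions in a local metric frame.
    observed_cols: (N,) image columns where those corners were detected."""
    def residuals(pose):
        x, y, heading = pose
        d = corners_world - np.array([x, y])
        azimuth = np.arctan2(d[:, 1], d[:, 0]) - heading
        predicted_cols = (azimuth % (2.0 * np.pi)) / (2.0 * np.pi) * pano_width
        err = predicted_cols - observed_cols
        # Wrap the column error around the panorama seam.
        return (err + pano_width / 2.0) % pano_width - pano_width / 2.0
    return least_squares(residuals, np.asarray(initial_pose, dtype=float)).x
```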


Asian Conference on Computer Vision | 2010

Modeling dynamic scenes recorded with freely moving cameras

Aparna Taneja; Luca Ballan; Marc Pollefeys

Dynamic scene modeling is a challenging problem in computer vision. Many techniques have been developed in the past to address this problem, but most of them focus on achieving accurate reconstructions in controlled environments, where the background and the lighting are known and the cameras are fixed and calibrated. Recent approaches have relaxed these requirements by applying these techniques to outdoor scenarios. The problem, however, becomes even harder when the cameras are allowed to move during the recording, since no background color model can be easily inferred. In this paper we propose a new approach to model dynamic scenes captured in outdoor environments with moving cameras. A probabilistic framework is proposed to deal with such a scenario and to provide a volumetric reconstruction of all the dynamic elements of the scene. The proposed algorithm was tested on a publicly available dataset filmed outdoors with six moving cameras. A quantitative evaluation of the method was also performed on synthetic data. The obtained results demonstrate the effectiveness of the approach considering the complexity of the problem.
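
To give a flavour of what a probabilistic volumetric reconstruction can look like, the sketch below scores each voxel by combining, over all cameras, the probability that its projection falls on a "dynamic" (foreground) pixel. The projection model, the log-odds fusion rule and the input format are illustrative assumptions, not the framework proposed in the paper.

```python
import numpy as np

def voxel_dynamic_probability(voxels_world, projections, fg_prob_maps):
    """voxels_world: (N, 3) voxel centres; projections: list of 3x4 camera
    matrices; fg_prob_maps: per-camera HxW maps of foreground probability.
    Returns a per-voxel probability of belonging to a dynamic element."""
    n = len(voxels_world)
    homog = np.hstack([voxels_world, np.ones((n, 1))])
    log_odds = np.zeros(n)
    for P, fg in zip(projections, fg_prob_maps):
        proj = homog @ P.T                           # (N, 3) homogeneous image points
        z = np.maximum(proj[:, 2], 1e-9)             # avoid division by zero
        u = np.round(proj[:, 0] / z).astype(int)
        v = np.round(proj[:, 1] / z).astype(int)
        h, w = fg.shape
        inside = (proj[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        p = np.full(n, 0.5)                          # unobserved voxels stay neutral
        p[inside] = np.clip(fg[v[inside], u[inside]], 1e-6, 1 - 1e-6)
        log_odds += np.log(p) - np.log(1.0 - p)
    return 1.0 / (1.0 + np.exp(-log_odds))
```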


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

Geometric Change Detection in Urban Environments Using Images

Aparna Taneja; Luca Ballan; Marc Pollefeys

We propose a method to detect changes in the geometry of a city using panoramic images captured by a car driving around the city. The proposed method can be used to significantly optimize the process of updating the 3D model of an urban environment that is changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring changes in its appearance as well as changes that are not relevant for update purposes, such as cars and people. The approach also accounts for the challenges involved in a large scale application of change detection, such as inaccuracies in the input geometry, errors in the geo-location data of the images, and the limited amount of information due to sparse imagery. We evaluated our approach on a small scale setup using high resolution, densely captured images, and on a large scale setup covering an entire city using the more realistic scenario of low resolution, sparsely captured images. A quantitative evaluation was also conducted for the large scale setup consisting of 14,000 images.
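
Once changes have been localized, the "update only where needed" step amounts to mapping detections onto the units in which the city model is maintained. The toy sketch below buckets detected change locations into square tiles and keeps tiles with enough supporting detections; the tile size, hit threshold and input format are purely illustrative assumptions.

```python
from collections import Counter

def tiles_needing_update(change_locations, tile_size=100.0, min_hits=3):
    """change_locations: (x, y) positions in metres of detected geometric
    changes (hypothetical detector output).  Returns the model tiles that
    accumulated enough evidence to justify re-acquisition."""
    hits = Counter((int(x // tile_size), int(y // tile_size))
                   for x, y in change_locations)
    return sorted(tile for tile, count in hits.items() if count >= min_hits)
```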


Asian Conference on Computer Vision | 2014

Never Get Lost Again: Vision Based Navigation Using StreetView Images

Aparna Taneja; Luca Ballan; Marc Pollefeys

In this paper we propose a simple and lightweight solution to estimate the geospatial trajectory of a moving vehicle from images captured by a cellphone, exploiting the map and the imagery provided by Google Streetview.
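
The one-sentence abstract does not describe the pipeline, but a retrieval-style localization step is a natural building block for this kind of system: match each cellphone frame's global descriptor against descriptors of geotagged Street View panoramas and read off the geotag of the best match. The sketch below shows only that step, under purely hypothetical input formats; it is not the method proposed in the paper.

```python
import numpy as np

def retrieve_trajectory(query_descriptors, pano_descriptors, pano_geotags):
    """query_descriptors: (Q, D) global descriptors of cellphone frames.
    pano_descriptors: (P, D) descriptors of geotagged panoramas.
    pano_geotags: (P, 2) latitude/longitude of each panorama.
    Returns a (Q, 2) raw trajectory: the geotag of the best match per frame."""
    q = query_descriptors / np.linalg.norm(query_descriptors, axis=1, keepdims=True)
    p = pano_descriptors / np.linalg.norm(pano_descriptors, axis=1, keepdims=True)
    best = np.argmax(q @ p.T, axis=1)      # cosine-similarity nearest neighbour
    return np.asarray(pano_geotags)[best]
```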


Workshop on Applications of Computer Vision | 2016

Underwater 3D capture using a low-cost commercial depth camera

Sundara Tejaswi Digumarti; Gaurav Chaurasia; Aparna Taneja; Roland Siegwart; Amber Thomas; Paul A. Beardsley

This paper presents underwater 3D capture using a commercial depth camera. Previous underwater capture systems use ordinary cameras, and it is well-known that a calibration procedure is needed to handle refraction. The same is true for a depth camera being used underwater. We describe a calibration method that corrects the depth maps for refraction effects. Another challenge is that depth cameras use infrared light (IR), which is heavily attenuated in water. We demonstrate that scanning is possible with commercial depth cameras for ranges up to 20 cm in water. The motivation for using a depth camera under water is the same as in air - it provides dense depth data and higher quality 3D reconstruction than multi-view stereo. Underwater 3D capture is being increasingly used in marine biology and oceanology; our approach offers exciting prospects for such applications. To the best of our knowledge, ours is the first approach that successfully demonstrates underwater 3D capture using low cost depth cameras like the Intel RealSense. We describe a complete system, including protective housing for the depth camera which is suitable for handheld use by a diver. Our main contribution is an easy-to-use calibration method, which we evaluate on exemplar data as well as 3D reconstructions in a lab aquarium. We also present initial results of ocean deployment.
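
The paper's calibration itself is not reproduced here, but one simple, data-driven way to correct underwater depth readings is to image a target at known distances, fit a low-order polynomial mapping measured depth to true depth, and apply it per pixel. The sketch below illustrates that idea under this assumption; the paper's actual refraction model may be quite different.

```python
import numpy as np

def fit_depth_correction(measured_depths, true_depths, degree=2):
    """Fit a polynomial mapping underwater-measured depth (metres) to true
    depth, from calibration-target measurements at known distances."""
    return np.polyfit(measured_depths, true_depths, degree)

def correct_depth_map(depth_map, coeffs):
    """Apply the fitted correction to every valid pixel of a depth map."""
    corrected = np.polyval(coeffs, depth_map)
    corrected[depth_map <= 0] = 0.0            # keep invalid pixels marked invalid
    return corrected

# Example usage with a toy calibration set (values are made up for illustration).
coeffs = fit_depth_correction([0.10, 0.15, 0.20], [0.075, 0.112, 0.150])
```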


Proceedings of the 2010 International Conference on Video Processing and Computational Video | 2010

3D reconstruction and video-based rendering of casually captured videos

Aparna Taneja; Luca Ballan; Jens Puwein; Gabriel J. Brostow; Marc Pollefeys

In this chapter we explore the possibility of interactively navigating a collection of casually captured videos of a performance: real-world footage captured on hand-held cameras by a few members of the audience. The aim is to navigate the video collection in 3D by generating video-based renderings of the performance using an offline, pre-computed reconstruction of the event. We propose two different techniques to obtain this reconstruction, considering that the video collection may have been recorded in complex, uncontrolled outdoor environments. One approach recovers the event geometry by exploring the temporal domain of each video independently, while the other explores the spatial domain of the video collection at each time instant independently. The pros and cons of the two methods and their applicability to the addressed navigation problem are also discussed. In the end, we propose an interactive GPU-accelerated viewing tool to navigate the video collection.


Archive | 2015

System and method using foot recognition to create a customized guest experience

Paul A. Beardsley; Aparna Taneja


Lecture Notes in Computer Science | 2012

Motion capture of hands in action using discriminative salient points

Luca Ballan; Aparna Taneja; Juergen Gall; Luc Van Gool; Marc Pollefeys

