Publication


Featured research published by Jana Kosecka.


International Symposium on 3D Data Processing, Visualization and Transmission | 2006

Image Based Localization in Urban Environments

Wei Zhang; Jana Kosecka

In this paper we present a prototype system for image-based localization in urban environments. Given a database of views of city street scenes tagged with GPS locations, the system computes the GPS location of a novel query view. We first use a wide-baseline matching technique based on SIFT features to select the closest views in the database. Due to large changes in viewpoint and the presence of repetitive structures, a large percentage of the matches (> 50%) are often not correct correspondences. The subsequent motion estimation between the query view and the reference view is then handled by a novel and efficient robust estimation technique capable of dealing with a large percentage of outliers. This stage is also accompanied by a model selection step between the fundamental matrix and the homography. Once the motion between the closest reference views is estimated, the location of the query view is obtained by triangulation of translation directions. Approximate solutions for cases when triangulation cannot be obtained reliably are also described. The presented system is tested on the dataset used in the ICCV 2005 Computer Vision Contest and is shown to have higher accuracy than previously reported results.
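The wide-baseline matching step described above is commonly paired with Lowe's ratio test, which rejects a nearest-neighbour match when the second-nearest neighbour is almost as close, exactly the ambiguity that repetitive structures create. Below is a minimal sketch in plain NumPy on synthetic descriptors; it is an illustration of the technique, not the paper's pipeline:

```python
import numpy as np

def ratio_test_matches(desc_q, desc_db, ratio=0.8):
    """Match query descriptors to database descriptors with Lowe's ratio test.

    A query feature is matched to its nearest database neighbour only when
    that distance is clearly smaller than the second-nearest one, which
    suppresses ambiguous matches caused by repetitive structures.
    """
    matches = []
    for i, d in enumerate(desc_q):
        dists = np.linalg.norm(desc_db - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches

# Toy example: distinctive descriptors plus one near-duplicate pair that
# mimics a repetitive facade element.
rng = np.random.default_rng(0)
db = rng.normal(size=(4, 8))
db[3] = db[2] + 0.01 * rng.normal(size=8)          # repetitive structure
query = db[:3] + 0.05 * rng.normal(size=(3, 8))    # noisy re-observations
print(ratio_test_matches(query, db))
```

The query feature corresponding to the repeated element tends to be rejected, while the distinctive ones match unambiguously.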


The International Journal of Robotics Research | 1999

A Comparative Study of Vision-Based Lateral Control Strategies for Autonomous Highway Driving

Camillo J. Taylor; Jana Kosecka; Robert Blasi; Jitendra Malik

With the increasing speeds of modern microprocessors, it has become ever more common for computer-vision algorithms to find application in real-time control tasks. In this paper, we present an analysis of the problem of steering an autonomous vehicle along a highway based on the images obtained from a CCD camera mounted in the vehicle. We explore the effects of changing various important system parameters like the vehicle velocity, the look-ahead range of the vision sensor, and the processing delay associated with the perception and control systems. We also present the results of a series of experiments that were designed to provide a systematic comparison of a number of control strategies. The control strategies that were explored include a lead-lag control law, a full-state linear controller, and an input-output linearizing control law. Each of these control strategies was implemented and tested at highway speeds on our experimental vehicle platform, a Honda Accord LX sedan.
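The influence of the look-ahead range on lateral control can be illustrated with a toy kinematic simulation. The bicycle model, gains, and simple proportional control law below are invented for illustration and are far simpler than the lead-lag, full-state, and input-output linearizing controllers compared in the paper:

```python
import math

def simulate(L=10.0, v=30.0, k=0.5, dt=0.02, steps=500):
    """Steer a kinematic bicycle toward the lane centre using the lateral
    offset predicted at a look-ahead distance L [m] ahead of the vehicle.
    Returns the absolute lateral offset after the simulated horizon."""
    y, psi = 2.0, 0.0       # initial lateral offset [m], heading [rad]
    wheelbase = 2.7
    for _ in range(steps):
        y_lookahead = y + L * math.sin(psi)   # predicted offset ahead
        delta = -k * y_lookahead / L          # proportional steering angle
        y += v * math.sin(psi) * dt           # lateral kinematics
        psi += v / wheelbase * math.tan(delta) * dt
    return abs(y)

print(simulate())          # converges close to the lane centre
print(simulate(L=20.0))    # a longer look-ahead changes the damping
```

Varying L while keeping the gain fixed changes the effective damping of the closed loop, which is one of the trade-offs the paper analyzes systematically.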


International Journal of Computer Vision | 2001

Optimization Criteria and Geometric Algorithms for Motion and Structure Estimation

Yi Ma; Jana Kosecka; Shankar Sastry

Prevailing efforts to study the standard formulation of motion and structure recovery have recently been focused on issues of sensitivity and robustness of existing techniques. While many cogent observations have been made and verified experimentally, many statements do not hold in general settings and make a comparison of existing techniques difficult. With an ultimate goal of clarifying these issues, we study the main aspects of motion and structure recovery: the choice of objective function, optimization techniques, and sensitivity and robustness issues in the presence of noise. We clearly reveal the relationship among different objective functions, such as “(normalized) epipolar constraints,” “reprojection error” or “triangulation,” all of which can be unified in a new “optimal triangulation” procedure. Regardless of various choices of the objective function, the optimization problems all inherit the same unknown parameter space, the so-called “essential manifold.” Based on recent developments of optimization techniques on Riemannian manifolds, in particular on Stiefel or Grassmann manifolds, we propose a Riemannian Newton algorithm to solve the motion and structure recovery problem, making use of the natural differential geometric structure of the essential manifold. We provide a clear account of sensitivity and robustness of the proposed linear and nonlinear optimization techniques and study the analytical and practical equivalence of different objective functions. The geometric characterization of critical points and the simulation results clarify the difference between the effect of bas-relief ambiguity, rotation and translation confounding, and other types of local minima. This leads to consistent interpretations of simulation results over a large range of signal-to-noise ratios and a variety of configurations.
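As a reminder of the geometry underlying the objective functions above, two calibrated views of a point are coupled by the standard epipolar constraint through the essential matrix:

```latex
\mathbf{x}_2^{\top} E\, \mathbf{x}_1 = 0, \qquad E = \widehat{T} R,
\qquad R \in SO(3),
```

where $\widehat{T}$ denotes the skew-symmetric matrix of the translation $T$. Because $E$ is constrained to this product structure, optimization is naturally posed on the essential manifold rather than over unconstrained $3 \times 3$ matrices, which is what motivates the Riemannian Newton algorithm.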


Computer Vision and Pattern Recognition | 2009

Piecewise planar city 3D modeling from street view panoramic sequences

Branislav Micusik; Jana Kosecka

City environments often lack textured areas and contain repetitive structures and strong lighting changes, and are therefore very difficult for standard 3D modeling pipelines. We present a novel unified framework for creating 3D city models which overcomes these difficulties by exploiting image segmentation cues as well as the presence of dominant scene orientations and piecewise planar structures. Given panoramic street view sequences, we first demonstrate how to robustly estimate camera poses without the need for bundle adjustment and propose a multi-view stereo method which operates directly on panoramas, while enforcing piecewise planarity constraints in the sweeping stage. Finally, we propose a new depth fusion method which exploits the constraints of urban environments and combines the advantages of volumetric and viewpoint-based fusion methods. Our technique avoids expensive voxelization of space, operates directly on 3D reconstructed points through an efficient kd-tree representation, and obtains a final surface by tessellation of backprojections of those points into the reference image.


International Conference on Computer Vision | 2009

Multi-view image and ToF sensor fusion for dense 3D reconstruction

Young Min Kim; Christian Theobalt; James Diebel; Jana Kosecka; Branislav Micusik; Sebastian Thrun

Multi-view stereo methods frequently fail to properly reconstruct 3D scene geometry if visible texture is sparse or the scene exhibits difficult self-occlusions. Time-of-Flight (ToF) depth sensors can provide 3D information regardless of texture but with only limited resolution and accuracy. To find an optimal reconstruction, we propose an integrated multi-view sensor fusion approach that combines information from multiple color cameras and multiple ToF depth sensors. First, multi-view ToF sensor measurements are combined to obtain a coarse but complete model. Then, the initial model is refined by means of a probabilistic multi-view fusion framework, optimizing over an energy function that aggregates ToF depth sensor information with multi-view stereo and silhouette constraints. We obtain high quality dense and detailed 3D models of scenes challenging for stereo alone, while simultaneously reducing complex noise of ToF sensors.


International Conference on Robotics and Automation | 2004

Vision based topological Markov localization

Jana Kosecka; Fayin Li

In this paper we study the problem of acquiring a topological model of an indoor environment by means of visual sensing, and of subsequent localization given the model. The resulting model consists of a set of locations and the neighborhood relationships between them. Each location in the model is represented by a collection of representative views and their associated descriptors, selected from a temporally sub-sampled video stream captured by a mobile robot during exploration. We compare the recognition performance using global image histograms as well as local scale-invariant features as image descriptors, demonstrate their strengths and weaknesses, and show how to model the spatial relationships between individual locations by a Hidden Markov Model. The quality of the acquired model is tested in the localization stage by means of location recognition: given a new view or a sequence of views, the most likely location from which that view came is determined.
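The role of the Hidden Markov Model can be sketched as a discrete Bayes filter over locations: per-frame recognition scores are fused with a transition matrix that encodes which locations neighbour each other. The map, transition matrix, and likelihoods below are invented for illustration:

```python
import numpy as np

def hmm_filter(obs_lik, A, prior):
    """HMM forward pass over N discrete locations.

    obs_lik: (T, N) per-frame likelihood of each location, e.g. from
             histogram- or SIFT-based location recognition.
    A:       (N, N) transition matrix encoding location neighbourhoods.
    prior:   (N,) initial belief over locations.
    Returns the most likely location index at each time step.
    """
    belief = prior * obs_lik[0]
    belief /= belief.sum()
    path = [int(np.argmax(belief))]
    for lik in obs_lik[1:]:
        belief = (A.T @ belief) * lik   # predict via neighbourhoods, then update
        belief /= belief.sum()
        path.append(int(np.argmax(belief)))
    return path

# Toy map: three locations along a corridor; only neighbours are reachable.
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
# Frame 2 is ambiguous between locations 0 and 2; the neighbourhood
# structure resolves it in favour of staying near location 0.
obs = np.array([[0.90, 0.05, 0.05],
                [0.45, 0.10, 0.45],
                [0.80, 0.10, 0.10]])
print(hmm_filter(obs, A, np.ones(3) / 3))
```

A frame whose appearance matches two distant locations equally well is disambiguated by the belief carried over from previous frames.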


Computer Vision and Pattern Recognition | 2003

Qualitative image based localization in indoors environments

Jana Kosecka; Liang Zhou; Philip Barber; Zoran Duric

Man-made indoor environments possess regularities which can be efficiently exploited in automated model acquisition by means of visual sensing. In this context we propose an approach for inferring a topological model of an environment from images or the video stream captured by a mobile robot during exploration. The proposed model consists of a set of locations and the neighborhood relationships between them. Initially, each location in the model is represented by a collection of similar, temporally adjacent views, with similarity defined according to a simple appearance-based distance measure. A sparser representation is obtained in a subsequent learning stage by means of learning vector quantization (LVQ). The quality of the model is tested in the context of a qualitative localization scheme by means of location recognition: given a new view, the most likely location from which that view came is determined.
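The LVQ learning stage can be sketched with the classic LVQ1 update rule: the prototype nearest to a training view is pulled toward it when their location labels agree and pushed away otherwise. The one-dimensional "descriptors" below are invented for illustration; the paper operates on appearance-based image representations:

```python
import numpy as np

def lvq1(views, labels, protos, proto_labels, lr=0.1, epochs=20):
    """LVQ1: for each view, move the nearest prototype toward the view
    if the labels match, away from it otherwise."""
    protos = protos.copy()
    for _ in range(epochs):
        for x, y in zip(views, labels):
            j = int(np.argmin(np.linalg.norm(protos - x, axis=1)))
            sign = 1.0 if proto_labels[j] == y else -1.0
            protos[j] += sign * lr * (x - protos[j])
    return protos

# Toy data: 1-D appearance descriptors of views from two locations.
views = np.array([[0.0], [0.2], [1.0], [1.2]])
labels = np.array([0, 0, 1, 1])
protos = lvq1(views, labels,
              protos=np.array([[0.4], [0.8]]),
              proto_labels=np.array([0, 1]))
print(protos)   # prototypes move toward the centres of their locations
```

After training, each location is summarized by far fewer prototypes than the original collection of views, which is the sparsification the abstract refers to.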


Robotics and Autonomous Systems | 2005

Global localization and relative positioning based on scale-invariant keypoints

Jana Kosecka; Fayin Li; Xiaolong Yang

The localization capability of a mobile robot is central to basic navigation and map-building tasks. We describe a probabilistic environment model which facilitates a global localization scheme by means of location recognition. In the exploration stage the environment is partitioned into locations, each characterized by a set of scale-invariant keypoints. The descriptors associated with these keypoints can be robustly matched despite changes in contrast, scale, and viewpoint. We demonstrate the efficacy of these features for location recognition, where given a new view the most likely location from which this view came is determined. Misclassifications due to dynamic changes in the environment or inherent appearance ambiguities are overcome by exploiting location neighborhood relationships captured by a Hidden Markov Model. We report the recognition performance of this approach in an indoor environment consisting of eighteen locations and discuss the suitability of this approach for a more general class of recognition problems. Once the most likely location has been determined, we demonstrate how to robustly compute the relative pose between the representative view and the current view.


Computer Vision and Image Understanding | 2005

Extraction, matching, and pose recovery based on dominant rectangular structures

Jana Kosecka; Wei Zhang

Man-made environments possess many regularities which can be efficiently exploited for image-based rendering as well as robotic navigation and localization tasks. In this paper, we present an approach for automatic extraction of dominant rectangular structures from a single view and show how they facilitate the recovery of camera pose, planar structure, and matching across widely separated views. Because rectangular hypothesis formation is based on higher-level information encoded by the presence of orthogonal vanishing directions, the dominant rectangular structures can be detected and matched despite the presence of multiple repetitive structures often encountered in a variety of buildings. Different stages of the approach are demonstrated on various examples of images of indoor and outdoor structured environments.
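The use of orthogonal vanishing directions can be illustrated by grouping line segments according to the vanishing point each one points toward, which is a typical first step before forming rectangle hypotheses. The segments and vanishing points below are synthetic, not from the paper:

```python
import numpy as np

def assign_to_vanishing_points(segments, vps, max_angle_deg=5.0):
    """Assign each 2-D line segment to the vanishing point it points
    toward, or -1 if no vanishing direction fits within the threshold."""
    labels = []
    for p1, p2 in segments:
        d = (p2 - p1) / np.linalg.norm(p2 - p1)   # segment direction
        mid = (p1 + p2) / 2.0
        best, best_angle = -1, max_angle_deg
        for j, vp in enumerate(vps):
            v = vp - mid
            v /= np.linalg.norm(v)
            # angle between the segment and the direction toward the VP
            angle = np.degrees(np.arccos(min(1.0, abs(float(d @ v)))))
            if angle < best_angle:
                best, best_angle = j, angle
        labels.append(best)
    return labels

vps = [np.array([1000.0, 0.0]), np.array([0.0, 1000.0])]     # two orthogonal VPs
segments = [(np.array([0.0, 0.0]), np.array([10.0, 0.1])),   # ~horizontal
            (np.array([5.0, 0.0]), np.array([5.2, 10.0])),   # ~vertical
            (np.array([0.0, 0.0]), np.array([7.0, 7.0]))]    # diagonal clutter
print(assign_to_vanishing_points(segments, vps))
```

Segments consistent with one of the dominant directions are grouped together, while clutter that aligns with neither direction is rejected.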


Robotics and Autonomous Systems | 1994

Discrete Event Systems for autonomous mobile agents

Jana Kosecka; Ruzena Bajcsy

Discrete Event Systems (DES) are a special type of dynamic system. The ‘state’ of these systems changes at discrete instants in time, and the term ‘event’ represents the occurrence of a discontinuous change (at possibly unknown intervals). Different Discrete Event System models are currently used for specification, verification, and synthesis, as well as for analysis and evaluation of different qualitative and quantitative properties of existing physical systems. The focus of this paper is the presentation of the automata and formal language model for DES introduced by Ramadge and Wonham, and its application to the domain of mobile manipulator/observer agents. We demonstrate the feasibility of the DES framework for modeling, analysis, and synthesis of some visually guided behaviors of agents engaged in navigational tasks, and address synchronization issues between different components of the system. The use of the DES formalism allows us to synthesize complex behaviors in a systematic fashion and guarantee their controllability.
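The Ramadge-Wonham setup can be sketched as a plant automaton whose controllable events a supervisor may disable, while uncontrollable events can never be blocked. The states and events below are hypothetical and not taken from the paper:

```python
# Plant automaton: state -> {event: next_state}
PLANT = {
    "idle":    {"start_move": "moving"},
    "moving":  {"obstacle": "blocked", "arrive": "idle"},
    "blocked": {"replan": "idle"},
}
# Events the supervisor is allowed to disable; 'obstacle' and 'arrive'
# are uncontrollable and must always remain enabled.
CONTROLLABLE = {"start_move", "replan"}

def supervisor(state, battery_low):
    """Return the set of events enabled in the closed-loop system:
    the supervisor disables 'start_move' when the battery is low."""
    enabled = set(PLANT[state])
    if battery_low:
        enabled -= {"start_move"} & CONTROLLABLE
    return enabled

print(supervisor("idle", battery_low=False))
print(supervisor("idle", battery_low=True))
```

Because the supervisor only ever removes controllable events, the closed-loop language stays controllable with respect to the plant, which is the guarantee the abstract refers to.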

Collaboration

Top co-authors of Jana Kosecka:

Shankar Sastry (University of California)
Yi Ma (ShanghaiTech University)
Stefano Soatto (University of California)
Ruzena Bajcsy (University of California)
Gautam Singh (George Mason University)
Fayin Li (George Mason University)
Alexander C. Berg (University of North Carolina at Chapel Hill)