Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Sundar Vedula is active.

Publication


Featured research published by Sundar Vedula.


International Conference on Computer Vision | 1999

Three-dimensional scene flow

Sundar Vedula; Simon Baker; Peter Rander; Robert T. Collins; Takeo Kanade

Scene flow is the three-dimensional motion field of points in the world, just as optical flow is the two-dimensional motion field of points in an image. Any optical flow is simply the projection of the scene flow onto the image plane of a camera. We present a framework for the computation of dense, non-rigid scene flow from optical flow. Our approach leads to straightforward linear algorithms and a classification of the task into three major scenarios: complete instantaneous knowledge of the scene structure; knowledge only of correspondence information; and no knowledge of the scene structure. We also show that multiple estimates of the normal flow cannot be used to estimate dense scene flow directly without some form of smoothing or regularization.
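The relation stated above, that optical flow is the projection of scene flow onto the image plane, can be sketched numerically: the image motion of a point is the time derivative of its projection. A minimal Python sketch under that relation (the pinhole camera matrix `P`, point `X`, and flow `dX` below are illustrative assumptions, not data from the paper):

```python
import numpy as np

def project(P, X):
    """Pinhole projection of a 3D point X with a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def optical_flow_from_scene_flow(P, X, dX, eps=1e-6):
    """Optical flow as the image-plane projection of scene flow dX,
    approximated by a finite difference of the projection."""
    return (project(P, X + eps * dX) - project(P, X)) / eps

# Assumed setup: identity-rotation camera at the origin, unit focal length.
P = np.hstack([np.eye(3), np.zeros((3, 1))])
X = np.array([0.0, 0.0, 2.0])    # point 2 units in front of the camera
dX = np.array([1.0, 0.0, 0.0])   # scene flow: moving along x

u = optical_flow_from_scene_flow(P, X, dX)
# A point moving laterally at depth 2 projects to image flow (0.5, 0).
```

The finite-difference form makes the projection explicit; an analytic Jacobian of the projection would give the same result.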


ACM Transactions on Graphics | 2005

Image-based spatio-temporal modeling and view interpolation of dynamic events

Sundar Vedula; Simon Baker; Takeo Kanade

We present an approach for modeling and rendering a dynamic, real-world event from an arbitrary viewpoint, and at any time, using images captured from multiple video cameras. The event is modeled as a nonrigidly varying dynamic scene, captured by many images from different viewpoints, at discrete times. First, the spatio-temporal geometric properties (shape and instantaneous motion) are computed. The view synthesis problem is then solved using a reverse mapping algorithm, ray-casting across space and time, to compute a novel image from any viewpoint in the 4D space of position and time. Results are shown on real-world events captured in the CMU 3D Room, by creating synthetic renderings of the event from novel, arbitrary positions in space and time. Multiple such recreated renderings can be put together to create retimed fly-by movies of the event, with a resulting visual experience richer than that of a regular video clip or of switching between images from multiple cameras.
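The core of interpolating geometry at an in-between time, given shape and instantaneous motion at the surrounding frames, can be sketched as a blend of a forward and a backward flow prediction. This is a hedged sketch assuming corresponding points and locally linear motion, not the paper's ray-casting algorithm (the data below are illustrative):

```python
import numpy as np

def interpolate_points(X0, F0, X1, F1, t0, t1, t):
    """Predict point positions at an intermediate time t in [t0, t1] by
    blending a forward prediction from t0 with a backward prediction
    from t1, assuming locally linear motion.
    X0, X1: (N, 3) corresponding positions at t0 and t1;
    F0, F1: (N, 3) instantaneous scene flow (velocity) at each time."""
    a = (t - t0) / (t1 - t0)           # normalized time in [0, 1]
    fwd = X0 + (t - t0) * F0           # flow t0 points forward to t
    bwd = X1 - (t1 - t) * F1           # flow t1 points backward to t
    return (1.0 - a) * fwd + a * bwd   # blend, favoring the nearer frame

# Assumed data: a point moving at unit speed along x between t=0 and t=1.
X0 = np.array([[0.0, 0.0, 0.0]]); F0 = np.array([[1.0, 0.0, 0.0]])
X1 = np.array([[1.0, 0.0, 0.0]]); F1 = np.array([[1.0, 0.0, 0.0]])
mid = interpolate_points(X0, F0, X1, F1, 0.0, 1.0, 0.5)
```

Blending both directions keeps the interpolation consistent with the geometry at each end frame even when the two flow estimates disagree slightly.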


Computer Vision and Pattern Recognition | 2000

Shape and motion carving in 6D

Sundar Vedula; Simon Baker; Steven M. Seitz; Takeo Kanade

The motion of a non-rigid scene over time imposes more constraints on its structure than those derived from images at a single time instant alone. An algorithm is presented for simultaneously recovering dense scene shape and scene flow (i.e. the instantaneous 3D motion at every point in the scene). The algorithm operates by carving away hexels (points in the 6D space of all possible shapes and flows) that are inconsistent with the images captured at either time instant, or across time. The recovered shape is demonstrated to be more accurate than that recovered using images at a single time instant. Applications of the combined scene shape and flow include motion capture for animation, retiming of videos, and non-rigid motion analysis.
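The carving criterion can be illustrated with a toy photo-consistency test: a hexel survives only if the colors the cameras observe for it agree, both within and across the two time instants. A minimal sketch under that idea (the threshold `tau` and the color samples are assumptions for illustration, and visibility resolution is omitted):

```python
import numpy as np

def hexel_consistent(colors_t0, colors_t1, tau=10.0):
    """Toy consistency test for a hexel (a 3D point plus its 3D flow).
    colors_t0: (num_cameras, 3) RGB samples of the point at time t0;
    colors_t1: (num_cameras, 3) RGB samples of the flowed point at t1.
    The hexel is kept only if all samples, within and across time,
    cluster tightly around a common color."""
    all_colors = np.vstack([colors_t0, colors_t1])
    # Photo-consistency: low spread of the samples around their mean.
    return np.linalg.norm(all_colors.std(axis=0)) < tau

# Assumed samples: two cameras that agree, and two that do not.
agree = np.array([[100.0, 100.0, 100.0], [101.0, 99.0, 100.0]])
disagree = np.array([[0.0, 0.0, 0.0], [255.0, 255.0, 255.0]])
```

Carving then means discarding every hexel for which this test fails, shrinking the 6D shape-and-flow volume toward the true scene.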


VisSym | 2000

Appearance-Based Virtual-View Generation for Fly Through in a Real Dynamic Scene

Shigeyuki Baba; Hideo Saito; Sundar Vedula; Kong Man Cheung; Takeo Kanade

We present appearance-based virtual view generation which allows viewers to fly through a real dynamic scene. The scene is captured by synchronized multiple cameras. Arbitrary views are generated by interpolating two original camera-view images near the given viewpoint. The quality of the generated synthetic view is determined by the precision, consistency, and density of correspondences between the two images. Most previous work that uses interpolation extracts the correspondences from these two images alone. However, not only is it difficult to do so reliably (the task requires a good stereo algorithm), but the two images alone sometimes do not have enough information, due to problems such as occlusion. Instead, we take advantage of the fact that we have many views, from which we can extract much more reliable and comprehensive 3D geometry of the scene as a 3D model. The dense and precise correspondences between the two images, to be used for interpolation, are derived from this constructed 3D model. Our method of 3D modeling from multiple images uses the Multiple Baseline Stereo method and the Shape from Silhouette method.
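Once dense correspondences are available from the 3D model, the interpolation step itself reduces to blending corresponding positions and colors between the two camera views. A hedged sketch of that final step only, not the full pipeline (the sample points below are illustrative):

```python
import numpy as np

def interpolate_view(pts_a, pts_b, colors_a, colors_b, alpha):
    """Virtual-view interpolation between two real camera images.
    pts_a, pts_b: (N, 2) corresponding image points (here assumed to
    come from a 3D model); colors_a, colors_b: (N, 3) RGB samples at
    those points; alpha in [0, 1] slides from view A to view B."""
    pts = (1.0 - alpha) * pts_a + alpha * pts_b        # morph positions
    colors = (1.0 - alpha) * colors_a + alpha * colors_b  # blend colors
    return pts, colors

# Assumed correspondence: one point seen at (10, 20) in A and (30, 40) in B.
pts_a = np.array([[10.0, 20.0]]); pts_b = np.array([[30.0, 40.0]])
col_a = np.array([[200.0, 0.0, 0.0]]); col_b = np.array([[0.0, 0.0, 200.0]])
pts_mid, col_mid = interpolate_view(pts_a, pts_b, col_a, col_b, 0.5)
```

The quality of the result hinges entirely on the correspondences, which is why the paper derives them from the multi-view 3D model rather than from the two images alone.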


International Conference on Intelligent Transportation Systems | 1997

Physically realistic haptic interaction with dynamic virtual worlds

Sundar Vedula; David Baraff

There has been increased interest in recent years in the use of the sensation of force as a supplement to the visual and auditory feedback normally found in virtual environments. To produce a realistic feel of interacting with moving synthetic objects, the interactions between the objects, and those between the user and the environment, need to be based on physical laws. In this work, we present techniques for interaction of a human with a dynamic virtual environment through the haptic channel, specifically by integrating an articulated force-feedback arm with a graphical, physically based interactive simulation system. A distributed simulation and control model is used to separate the two fundamental requirements of graphics and force control. We describe techniques for elegant force display in a dynamic environment, and a physically based algorithm to model surface friction between the probe and the objects. Three schemes for a local model update are also presented and compared.
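Penalty-based contact with Coulomb friction is one common way to realize the kind of surface-friction force display described above. A minimal sketch under that assumption, not the paper's algorithm (the stiffness `k` and friction coefficient `mu` are illustrative values):

```python
import numpy as np

def haptic_force(penetration, normal, v_tangent, k=500.0, mu=0.4):
    """Toy contact force for a haptic probe (penalty + Coulomb friction).
    penetration: depth of the probe inside the surface (m);
    normal: unit surface normal; v_tangent: tangential velocity of the
    probe relative to the surface; k: contact stiffness (N/m);
    mu: kinetic friction coefficient."""
    if penetration <= 0.0:
        return np.zeros(3)                  # no contact, no force
    f_normal = k * penetration * normal     # spring-like normal force
    speed = np.linalg.norm(v_tangent)
    if speed > 1e-9:
        # Kinetic friction opposes tangential motion, with magnitude
        # proportional to the normal force.
        f_friction = -mu * k * penetration * (v_tangent / speed)
    else:
        f_friction = np.zeros(3)
    return f_normal + f_friction

# Assumed contact: probe 1 cm into a horizontal surface, sliding along x.
f = haptic_force(0.01, np.array([0.0, 0.0, 1.0]),
                 np.array([0.1, 0.0, 0.0]))
```

In a real system this force computation would run in the high-rate control loop of the distributed model, separate from the graphics loop.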


Archive | 1998

The 3D Room: Digitizing Time-Varying 3D Events by Synchronized Multiple Video Streams

Takeo Kanade; Hideo Saito; Sundar Vedula


Eurographics | 2002

Spatio-temporal view interpolation

Sundar Vedula; Simon Baker; Takeo Kanade


Virtual Systems and Multimedia | 1998

Modeling, Combining, and Rendering Dynamic Real-World Events From Image Sequences

Sundar Vedula; Peter Rander; Hideo Saito; Takeo Kanade


Digital Identity Management | 1999

Appearance-based virtual view generation of temporally-varying events from multi-camera images in the 3D room

Hideo Saito; Shigeyuki Baba; Makoto Kimura; Sundar Vedula; Takeo Kanade



Collaboration


Dive into Sundar Vedula's collaborations.

Top Co-Authors

Takeo Kanade (Carnegie Mellon University)
Peter Rander (Carnegie Mellon University)
Shigeyuki Baba (Carnegie Mellon University)
David Baraff (Carnegie Mellon University)
Kong Man Cheung (Carnegie Mellon University)
Robert T. Collins (Pennsylvania State University)
Makoto Kimura (National Institute of Advanced Industrial Science and Technology)