Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kiran S. Bhat is active.

Publication


Featured researches published by Kiran S. Bhat.


symposium on computer animation | 2003

Estimating cloth simulation parameters from video

Kiran S. Bhat; Christopher D. Twigg; Jessica K. Hodgins; Pradeep K. Khosla; Zoran Popović; Steven M. Seitz

Cloth simulations are notoriously difficult to tune due to the many parameters that must be adjusted to achieve the look of a particular fabric. In this paper, we present an algorithm for estimating the parameters of a cloth simulation from video data of real fabric. A perceptually motivated metric based on matching between folds is used to compare video of real cloth with simulation. This metric compares two video sequences of cloth and returns a number that measures the differences in their folds. Simulated annealing is used to minimize the frame-by-frame error between the metric for a given simulation and the real-world footage. To estimate all the cloth parameters, we identify simple static and dynamic calibration experiments that use small swatches of the fabric. To demonstrate the power of this approach, we use our algorithm to find the parameters for four different fabrics. We show the match between the video footage and simulated motion on the calibration experiments, on new video sequences for the swatches, and on a simulation of a full skirt.
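The parameter search described above amounts to black-box minimization of a video-matching error. A minimal sketch of such a simulated-annealing loop, with the paper's fold-matching metric stood in for by a toy quadratic error (all names here are hypothetical illustrations, not the authors' code):

```python
import math
import random

def simulated_annealing(error_fn, x0, step=0.1, t0=1.0, cooling=0.95, iters=200):
    """Minimize a black-box error function over a parameter vector."""
    x, e = list(x0), error_fn(x0)
    best_x, best_e = list(x), e
    t = t0
    for _ in range(iters):
        # Perturb one randomly chosen parameter.
        cand = list(x)
        i = random.randrange(len(cand))
        cand[i] += random.gauss(0.0, step)
        ce = error_fn(cand)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability so the search can escape local minima.
        if ce < e or random.random() < math.exp((e - ce) / max(t, 1e-12)):
            x, e = cand, ce
            if e < best_e:
                best_x, best_e = list(x), e
        t *= cooling  # geometric cooling schedule
    return best_x, best_e

# Toy stand-in for the simulation-vs-video fold metric: a quadratic bowl
# around some "true" cloth parameters.
random.seed(0)
target = [0.4, 1.2, 0.05]
err = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
params, residual = simulated_annealing(err, [0.0, 0.0, 0.0])
```

In the paper the error function would instead run a cloth simulation and evaluate the perceptual fold metric against video, which makes each evaluation expensive; annealing is attractive precisely because it needs no gradients of that metric.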


international conference on computer graphics and interactive techniques | 2004

Flow-based video synthesis and editing

Kiran S. Bhat; Steven M. Seitz; Jessica K. Hodgins; Pradeep K. Khosla

This paper presents a novel algorithm for synthesizing and editing video of natural phenomena that exhibit continuous flow patterns. The algorithm analyzes the motion of textured particles in the input video along user-specified flow lines, and synthesizes seamless video of arbitrary length by enforcing temporal continuity along a second set of user-specified flow lines. The algorithm is simple to implement and use. We used this technique to edit video of water-falls, rivers, flames, and smoke.


international conference on multimedia and expo | 2000

Motion detection and segmentation using image mosaics

Kiran S. Bhat; Mahesh Saptharishi; Pradeep K. Khosla

We propose a motion segmentation algorithm for extracting foreground objects with a pan-tilt camera. Segmentation is achieved by spatio-temporal filtering of the scene to model the background. Temporal filtering is done by a set of modified AR (autoregressive) filters which model the background statistics for a particular view of the scene. Backgrounds from different views of the pan-tilt camera are stitched together into a planar mosaic using a real-time image mosaicking strategy. Our algorithms work in real time, require no user intervention, and facilitate high-quality video transmission at low bandwidths.
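The temporal filtering step can be sketched as a first-order autoregressive background update per pixel, with foreground detected by thresholding the deviation from the model. The coefficient and threshold values here are illustrative, not those from the paper:

```python
import numpy as np

def update_background(background, frame, alpha=0.95, threshold=25.0):
    """One step of a first-order autoregressive background model.

    background: running per-pixel background estimate (float array)
    frame: current grayscale frame (same shape)
    Returns the updated background and a boolean foreground mask.
    """
    # Pixels deviating strongly from the background model are foreground.
    mask = np.abs(frame - background) > threshold
    # Blend the new frame into the background only where the scene looks
    # static, so foreground objects are not absorbed into the model.
    updated = np.where(mask, background, alpha * background + (1 - alpha) * frame)
    return updated, mask

# Static scene with a bright moving square as the foreground object.
bg = np.full((8, 8), 100.0)
frame = bg.copy()
frame[2:4, 2:4] = 200.0
bg2, mask = update_background(bg, frame)
```

In the mosaicking setting, one such background model would be maintained per view and stitched into the planar mosaic as the camera pans and tilts.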


international conference on robotics and automation | 2002

Distributed surveillance and reconnaissance using multiple autonomous ATVs: CyberScout

Mahesh Saptharishi; C. Spence Oliver; Christopher P. Diehl; Kiran S. Bhat; John M. Dolan; Ashitey Trebi-Ollennu; Pradeep K. Khosla

The objective of the CyberScout project is to develop an autonomous surveillance and reconnaissance system using a network of all-terrain vehicles. We focus on two facets of this system: 1) vision for surveillance and 2) autonomous navigation and dynamic path planning. In the area of vision-based surveillance, we have developed robust, efficient algorithms to detect, classify, and track moving objects of interest (person, people, or vehicle) with a static camera. Adaptation through feedback from the classifier and tracker allows the detector to use grayscale imagery, yet perform as well as prior color-based detectors. We have extended the detector using scene mosaicing to detect and index moving objects when the camera is panning or tilting. The classification algorithm performs well with coarse inputs, has unparalleled rejection capabilities, and can flag novel moving objects. The tracking algorithm achieves highly accurate (96%) frame-to-frame correspondence for multiple moving objects in cluttered scenes by determining the discriminant relevance of object features. We have also developed a novel mission coordination architecture, CPAD (Checkpoint/Priority/Action Database), which performs path planning via checkpoint and dynamic priority assignment, using statistical estimates of the environment's motion structure. The motion structure is used to make both preplanning and reactive behaviors more efficient by applying global context. This approach is more computationally efficient than centralized approaches and exploits robot cooperation in dynamic environments better than decoupled approaches.


european conference on computer vision | 2002

Computing the Physical Parameters of Rigid-Body Motion from Video

Kiran S. Bhat; Steven M. Seitz; Jovan Popović

This paper presents an optimization framework for estimating the motion and underlying physical parameters of a rigid body in free flight from video. The algorithm takes a video clip of a tumbling rigid body of known shape and generates a physical simulation of the object observed in the video clip. This solution is found by optimizing the simulation parameters to best match the motion observed in the video sequence. These simulation parameters include initial positions and velocities, environment parameters such as gravity direction, and camera parameters. A global objective function computes the sum squared difference between the silhouette of the object in simulation and the silhouette obtained from video at each frame. Applications include creating interesting rigid body animations, tracking complex rigid body motions in video, and estimating camera parameters from video.
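The global objective described above can be sketched directly, assuming the silhouettes are binary masks rendered from the simulation and segmented from video at each frame (the helper name is hypothetical):

```python
import numpy as np

def silhouette_objective(sim_silhouettes, video_silhouettes):
    """Sum of squared per-pixel differences between simulated and
    observed binary silhouettes, accumulated over all frames."""
    total = 0.0
    for sim, obs in zip(sim_silhouettes, video_silhouettes):
        total += float(np.sum((sim.astype(float) - obs.astype(float)) ** 2))
    return total

# Toy check: identical masks score 0; one differing pixel scores 1.
a = np.zeros((4, 4), dtype=bool)
b = a.copy()
b[1, 1] = True
same = silhouette_objective([a], [a])
diff = silhouette_objective([a], [b])
```

An optimizer would then adjust initial positions, velocities, gravity direction, and camera parameters, re-simulate, re-render the silhouettes, and re-evaluate this score until the simulated and observed motion agree.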


symposium on computer animation | 2013

High fidelity facial animation capture and retargeting with contours

Kiran S. Bhat; Rony Goldenthal; Yuting Ye; Ronald Mallet; Michael Koperwas

Human beings are naturally sensitive to subtle cues in facial expressions, especially in areas of the eyes and mouth. Current facial motion capture methods fail to accurately reproduce motions in those areas due to multiple limitations. In this paper, we present a new performance capture method that focuses on the perceptually important contour features on the face. Additionally, the output of our two-step optimization scheme is also easily editable by an animator. To illustrate the strength of our system, we present a retargeting application that incorporates primary contour lines to map a performance with lip-sync from an actor to a creature.


symposium on computer animation | 2015

Fully automatic generation of anatomical face simulation models

Matthew Cong; Michael Bao; Jane L. E; Kiran S. Bhat; Ronald Fedkiw

We present a fast, fully automatic morphing algorithm for creating simulatable flesh and muscle models for human and humanoid faces. Current techniques for creating such models require a significant amount of time and effort, making them infeasible or impractical. In fact, the vast majority of research papers use only a floating mask with no inner lips, teeth, tongue, eyelids, eyes, head, ears, etc.---and even those that build the full visual model would typically still lack the cranium, jaw, muscles, and other internal anatomy. Our method requires only the target surface mesh as input and can create a variety of models in only a few hours with no user interaction. We start with a symmetric, high resolution, anatomically accurate template model that includes auxiliary information such as feature points and curves. Then given a target mesh, we automatically orient it to the template, detect feature points, and use these to bootstrap the detection of corresponding feature curves. These curve correspondences are used to deform the surface mesh of the template model to match the target mesh. Then, the calculated displacements of the template surface mesh are used to drive a three-dimensional morph of the full template model including all interior anatomy. The resulting target model can be simulated to generate a large range of expressions that are consistent across characters using the same muscle activations. Full automation of this entire process makes it readily available to a wide range of users.
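As a rough illustration of the final step above, driving an interior morph from computed surface displacements: the sketch below uses simple inverse-distance weighting as a stand-in for the paper's full three-dimensional morph, and every name in it is hypothetical:

```python
import numpy as np

def morph_interior(points, surface_pts, surface_disp, power=2.0, eps=1e-9):
    """Propagate surface displacements to interior anatomy points by
    inverse-distance weighting (a simplified stand-in for a 3D morph)."""
    out = np.empty_like(points)
    for j, p in enumerate(points):
        d = np.linalg.norm(surface_pts - p, axis=1)
        w = 1.0 / (d ** power + eps)  # nearer surface points dominate
        w /= w.sum()
        out[j] = p + w @ surface_disp  # weighted average displacement
    return out

# An interior point coinciding with a surface point should follow it.
surface_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
surface_disp = np.array([[0.1, 0.0, 0.0], [0.0, 0.0, 0.0]])
moved = morph_interior(np.array([[0.0, 0.0, 0.0]]), surface_pts, surface_disp)
```

The actual pipeline would apply such a volumetric deformation to the cranium, jaw, muscles, and other internal anatomy of the template so the morphed model remains simulatable.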


SPIE Proceedings on Unattended Ground Sensor Technologies and Applications (AeroSense 2000) | 2000

Recent advances in distributed collaborative surveillance

Mahesh Saptharishi; Kiran S. Bhat; Christopher P. Diehl; C. S. Oliver; Marios Savvides; Alvaro Soto; John M. Dolan; Pradeep K. Khosla

In Carnegie Mellon University's CyberScout project, we are developing mobile and stationary sentries capable of autonomous reconnaissance and surveillance. In this paper, we describe recent advances in the areas of efficient perception algorithms (detection, classification, and correspondence) and mission planning. In detection, we have achieved improved rejection of camera jitter and environmental variations (e.g., lighting, moving foliage) through multi-modal filtering, and we have implemented panoramic backgrounding through pseudo-real-time mosaicing. In classification, we present methods for discriminating between individuals, groups of individuals, and vehicles, and between individuals with and without backpacks. In correspondence, we describe an accurate multi-hypothesis approach based on both motion and appearance. Finally, in mission planning, we describe map building using multiple sensory cues and a computationally efficient decentralized planner for multiple platforms.


international conference on computer graphics and interactive techniques | 2009

ILM's multitrack: a new visual tracking framework for high-end VFX production

Christoph Bregler; Kiran S. Bhat; Jeff Saltzman; Brett Allen

Tracking 2D features on film footage is the starting point for several applications in a VFX pipeline such as camera calibration, match-moving, photomodeling, vision-based motion capture, and object tracking. The diversity of captured footage and the accuracy requirements make the feature tracking problem very challenging. For instance, typical background footage exhibits drastic changes in lighting, motion blur, and occlusions, and is usually corrupted with environment effects such as smoke or explosions. Tracking features on hero characters such as human faces is equally challenging, especially near the eyes and lips, where the textures change continuously.


international conference on computer graphics and interactive techniques | 2004

Animating the combustion of deformable materials

Sameer Moidu; James J. Kuffner; Kiran S. Bhat

We present a physically based model for animating the deformation of objects like paper or cloth under combustion. We approximate the temperature changes in the material using a linear conduction model and apply thermal forces on a spring-mass system, causing the material to deform as it burns.
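The linear conduction step can be sketched as an explicit update on a one-dimensional chain of material nodes; the coefficients and fixed-end boundary condition here are illustrative, not taken from the paper:

```python
def conduct_heat(temps, k=0.2, dt=1.0):
    """One explicit Euler step of 1D linear heat conduction across a
    chain of material nodes, with the two end temperatures held fixed."""
    n = len(temps)
    new = list(temps)
    for i in range(1, n - 1):
        # Discrete Laplacian: heat flows toward cooler neighbors.
        new[i] = temps[i] + k * dt * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
    return new

# Ignition at one end of a paper strip: heat diffuses inward over steps.
t = [500.0, 20.0, 20.0, 20.0, 20.0]
for _ in range(10):
    t = conduct_heat(t)
```

In the full model, the per-node temperatures would then drive thermal forces on the spring-mass system so that hotter regions curl and deform.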

Collaboration


Dive into Kiran S. Bhat's collaborations.

Top Co-Authors

Pradeep K. Khosla, Carnegie Mellon University

John M. Dolan, Carnegie Mellon University

Alvaro Soto, Carnegie Mellon University

C. Spence Oliver, Carnegie Mellon University

Marios Savvides, Carnegie Mellon University