Parag Chaudhuri
Indian Institute of Technology Bombay
Publications
Featured research published by Parag Chaudhuri.
Eurographics | 2004
Parag Chaudhuri; Prem Kalra; Subhashis Banerjee
In this paper, we present a novel system for facilitating the creation of stylized view-dependent 3D animation. Our system harnesses the skill and intuition of a traditionally trained animator by providing a convivial sketch-based 2D-to-3D interface. A base mesh model of the character can be modified to closely match an input sketch, with minimal user interaction. To do this, we recover the best camera for the intended view direction in the sketch using robust computer vision techniques. This aligns the mesh model with the sketch. We then deform the 3D character in two stages: first we reconstruct the best matching skeletal pose from the sketch, and then we deform the mesh geometry. We introduce techniques to incorporate these deformations in the view-dependent setting. This allows us to set up view-dependent models for animation.
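The skeletal pose reconstruction stage can be illustrated with a minimal sketch: rotating a single 2D bone about its joint so that it points at the position the animator sketched for its tip. The function name and the one-bone simplification are hypothetical, not the paper's actual formulation, which fits a full skeleton.

```python
import math

def fit_bone_to_sketch(joint, bone_tip, sketch_tip):
    """Rotate one 2D bone about `joint` so it points at `sketch_tip`.

    A toy stand-in for sketch-driven pose reconstruction: returns the
    rotation angle applied and the new tip position (bone length is
    preserved).
    """
    cur = math.atan2(bone_tip[1] - joint[1], bone_tip[0] - joint[0])
    tgt = math.atan2(sketch_tip[1] - joint[1], sketch_tip[0] - joint[0])
    length = math.hypot(bone_tip[0] - joint[0], bone_tip[1] - joint[1])
    new_tip = (joint[0] + length * math.cos(tgt),
               joint[1] + length * math.sin(tgt))
    return tgt - cur, new_tip
```

Applied joint by joint down a skeleton hierarchy, repeated snaps of this kind yield a pose matching the sketch, after which the mesh is deformed around the posed skeleton.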
IEEE Transactions on Visualization and Computer Graphics | 2013
Saket Patkar; Parag Chaudhuri
This paper presents a simple, three stage method to simulate the mechanics of wetting of porous solid objects, like sponges and cloth, when they interact with a fluid. In the first stage, we model the absorption of fluid by the object when it comes in contact with the fluid. In the second stage, we model the transport of absorbed fluid inside the object, due to diffusion, as a flow in a deforming, unstructured mesh. The fluid diffuses within the object depending on saturation of its various parts and other body forces. Finally, in the third stage, oversaturated parts of the object shed extra fluid by dripping. The simulation model is motivated by the physics of imbibition of fluids into porous solids in the presence of gravity. It is phenomenologically capable of simulating wicking and imbibition, dripping, surface flows over wet media, material weakening, and volume expansion due to wetting. The model is inherently mass conserving and works for both thin 2D objects like cloth and for 3D volumetric objects like sponges. It is also designed to be computationally efficient and can be easily added to existing cloth, soft body, and fluid simulation pipelines.
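The second and third stages described above — saturation diffusing between neighbouring cells, and oversaturated cells shedding fluid — can be sketched as an explicit update on a cell graph. The function names, the uniform diffusion coefficient `k`, and the per-cell capacity are illustrative assumptions, not the paper's exact discretization.

```python
def diffuse_saturation(sat, neighbors, k=0.1):
    """One explicit diffusion step: each cell exchanges fluid with its
    neighbours in proportion to their saturation difference.  With a
    symmetric neighbour graph the update is mass conserving."""
    new = list(sat)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            new[i] += k * (sat[j] - sat[i])
    return new

def shed_excess(sat, capacity=1.0):
    """Dripping stage: clamp each cell to capacity and report the total
    fluid shed, so mass leaving the object is accounted for."""
    dripped = sum(max(0.0, s - capacity) for s in sat)
    return [min(s, capacity) for s in sat], dripped
```

Because fluxes between a symmetric pair of cells cancel, total saturation is unchanged by diffusion, mirroring the inherent mass conservation the abstract claims.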
Interactive 3D Graphics and Games | 2010
Sriram Kashyap; Rhushabh Goradia; Parag Chaudhuri; Sharat Chandran
Mirroring the development of rendering algorithms for polygonal models, z-buffer-style rendering for point-based models has recently given way to more advanced methods. A fast raycasting-based approach [Wald and Seidel 2005] shows shadows but does not demonstrate reflective effects. The more general raytracing approach [Linsen et al. 2007] is substantially slower. We advance the state of the art by ray tracing point models in real time. Our system relies on an efficient way of storing and accessing point data structures on the GPU. We hope this leads the way for future work towards more realistic global illumination effects, including soft shadows, simultaneous reflection and refraction, and caustics.
The Visual Computer | 2008
Parag Chaudhuri; George Papagiannakis; Nadia Magnenat-Thalmann
In this paper we present a new character animation technique in which the animation adapts itself to changes in the user's perspective: when the user moves and their viewpoint changes, the character animation adapts in response. The resulting animation, generated in real time, is a blend of key animations provided a priori by the animator. The blending is done with efficient dual-quaternion transformation blending. The user's point of view is tracked using either computer vision techniques or a simple user-controlled input modality, such as a mouse. This tracked viewpoint is then used to select a suitable blend of the animations. We show how to author and use such animations in both virtual and augmented reality scenarios, and demonstrate that they significantly heighten the sense of presence for users interacting with such self-adaptive virtual characters.
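The core blending operation named above — dual-quaternion transformation blending — can be sketched as dual-quaternion linear blending (DLB): a weighted sum of unit dual quaternions followed by normalisation by the real part's norm. This sketch assumes the inputs already lie in the same quaternion hemisphere (a full implementation would flip signs against a pivot first), and the tuple-of-two-4-tuples representation is just for illustration.

```python
import math

def dlb(dq_list, weights):
    """Dual-quaternion linear blending.

    Each dual quaternion is a pair (real, dual) of 4-tuples (w, x, y, z).
    The weighted sums of both parts are divided by the norm of the
    summed real part, yielding a unit dual quaternion again.
    Assumes all inputs are in the same hemisphere (no sign flips done).
    """
    real = [0.0] * 4
    dual = [0.0] * 4
    for (r, d), w in zip(dq_list, weights):
        for i in range(4):
            real[i] += w * r[i]
            dual[i] += w * d[i]
    n = math.sqrt(sum(c * c for c in real))
    return tuple(c / n for c in real), tuple(c / n for c in dual)
```

For example, blending a 0° and a 90° rotation about z with equal weights yields exactly the 45° rotation, which is why DLB gives plausible in-between poses as the tracked viewpoint moves between authored key animations.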
Computer Graphics International | 2004
Parag Chaudhuri; Rohit Khandekar; Deepak Sethi; Prem Kumar Kalra
We give an efficient, scalable, and simple algorithm for computing a central path for navigation in closed virtual environments. It requires little preprocessing, produces paths of high visual fidelity, and can compute paths at multiple resolutions. The algorithm is based on a distance-from-boundary field computed on a hierarchical subdivision of the free space inside the closed 3D object. We also present a progressive version based on a local search strategy, which yields navigable paths in a localized region of interest.
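The idea of a distance-from-boundary field driving a central path can be sketched on a plain 2D occupancy grid: a multi-source BFS from the boundary cells gives each free cell its clearance, and a Dijkstra search with step cost inversely proportional to clearance pulls the path toward the medial axis. This is a didactic simplification — the paper works on a hierarchical subdivision of 3D free space — and it assumes the free region is enclosed by occupied cells.

```python
from collections import deque
import heapq

def boundary_distance_field(free):
    """Multi-source BFS: distance of every cell from the nearest
    occupied (boundary) cell on a 2D grid of booleans."""
    h, w = len(free), len(free[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if not free[y][x]:
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def central_path(free, start, goal):
    """Dijkstra with step cost 1/clearance, so the cheapest path stays
    as far from the boundary as possible."""
    dist = boundary_distance_field(free)
    h, w = len(free), len(free[0])
    best = {start: 0.0}
    pq = [(0.0, start, (start,))]
    while pq:
        c, cell, path = heapq.heappop(pq)
        if cell == goal:
            return list(path)
        if c > best.get(cell, float("inf")):
            continue
        y, x = cell
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and free[ny][nx]:
                nc = c + 1.0 / dist[ny][nx]
                if nc < best.get((ny, nx), float("inf")):
                    best[(ny, nx)] = nc
                    heapq.heappush(pq, (nc, (ny, nx), path + ((ny, nx),)))
    return None
```

On a small room the returned path detours through the highest-clearance cell rather than hugging a wall, which is exactly the "central" behaviour the distance field is meant to induce.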
Computer Vision and Pattern Recognition | 2013
Sarbartha Sengupta; Parag Chaudhuri
In this paper we present a system to create 3D garments from 2D patterns. Once the patterns are placed over the 3D character, our system quickly stitches them into a 3D garment. The stitched cloth is then simulated to obtain the drape of the garment over the character. Our system accurately and efficiently resolves cloth-body and cloth-cloth collisions.
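One common way to realise the stitching step is to treat matched seam vertices on two pattern pieces as zero-rest-length springs and relax them until the seams close; the sketch below follows that idea. The function name, the midpoint-pull update, and the stiffness `k` are illustrative assumptions, not the paper's actual stitching procedure.

```python
def stitch_seams(verts, seam_pairs, steps=20, k=0.5):
    """Pull each pair of seam vertices toward their common midpoint,
    closing the seam geometrically over a few relaxation steps.

    verts:      list of (x, y, z) vertex positions
    seam_pairs: list of (i, j) index pairs to be stitched together
    """
    verts = [list(v) for v in verts]
    for _ in range(steps):
        for i, j in seam_pairs:
            for c in range(3):
                mid = 0.5 * (verts[i][c] + verts[j][c])
                verts[i][c] += k * (mid - verts[i][c])
                verts[j][c] += k * (mid - verts[j][c])
    return [tuple(v) for v in verts]
```

With k=0.5 the gap between a seam pair halves every step, so after a handful of iterations the two pattern boundaries coincide and the merged cloth can be handed to the drape simulation.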
IEEE Virtual Reality Conference | 2013
Jai Mashalkar; Niket Bagwe; Parag Chaudhuri
We present a method to create virtual character models of real users from noisy depth data. We use a combination of four depth sensors to capture a point cloud model of the person. Direct meshing of this data often creates meshes with topology that is unsuitable for proper character animation. We instead develop our mesh model by fitting a single template mesh to the point cloud in a two-stage process. The first stage performs a piecewise smooth deformation of the mesh, while the second stage does a finer fit using an iterative Laplacian framework. We complete the model by adding properly aligned and blended textures to the final mesh, and show that it can be easily animated using motion data from a single depth camera. Our process maintains the topology of the original mesh, and the proportions of the final mesh match those of the actual user, validating the accuracy of the process. Other than the depth sensor, the process does not require any specialized hardware for creating the mesh. It is efficient, robust, and mostly automatic.
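The second-stage fit can be illustrated by a single iteration of a simplified Laplacian-style update: each template vertex moves toward a blend of its neighbour average (smoothness) and its target point on the scanned cloud (data term). This is a didactic stand-in under stated assumptions — the paper's iterative Laplacian framework solves a coupled linear system rather than this per-vertex relaxation, and `alpha` is a made-up blending weight.

```python
def laplacian_fit_step(verts, neighbors, targets, alpha=0.5):
    """One relaxation step of a toy Laplacian-style template fit.

    verts:     list of (x, y, z) template vertex positions
    neighbors: neighbors[i] is the list of vertex indices adjacent to i
    targets:   targets[i] is vertex i's closest point on the scan
    alpha:     0 = pure smoothing, 1 = pure data fit
    """
    new = []
    for i in range(len(verts)):
        nb = neighbors[i]
        avg = tuple(sum(verts[j][k] for j in nb) / len(nb) for k in range(3))
        new.append(tuple((1 - alpha) * avg[k] + alpha * targets[i][k]
                         for k in range(3)))
    return new
```

Iterating steps of this form drags the template onto the point cloud while the smoothing term keeps the mesh topology and triangle quality intact, which is why the result remains animation-ready.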
Pacific Conference on Computer Graphics and Applications | 2010
Rhushabh Goradia; Sriram Kashyap; Parag Chaudhuri; Sharat Chandran
When it comes to rendering models available as points, rather than meshes, splats are a common intermediate internal representation. In this paper we further the state of the art by ray tracing splats to produce expected effects such as reflections, refraction, and shadows. We render complex models at interactive frame rates allowing real time viewpoint, lighting, and material changes. Our system relies on efficient techniques of storing and traversing point models on Graphics Processing Units (GPUs).
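The primitive operation behind ray tracing splats is intersecting a ray with an oriented disc: intersect the splat's supporting plane, then test whether the hit lies within the splat radius. The sketch below shows that test in isolation; the function name is hypothetical, and a real system would run it over a GPU acceleration structure rather than one splat at a time.

```python
def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_splat_hit(origin, direction, center, normal, radius):
    """Ray vs. circular splat (oriented disc) intersection.

    Returns the ray parameter t of the hit, or None on a miss.
    Assumes `direction` and `normal` are unit vectors.
    """
    denom = _dot(direction, normal)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the splat plane
    to_center = (center[0] - origin[0],
                 center[1] - origin[1],
                 center[2] - origin[2])
    t = _dot(to_center, normal) / denom
    if t < 0:
        return None  # splat behind the ray origin
    p = tuple(origin[i] + t * direction[i] for i in range(3))
    d2 = sum((p[i] - center[i]) ** 2 for i in range(3))
    return t if d2 <= radius * radius else None
```

Reflections, refractions, and shadows then follow by spawning secondary rays from the hit point, exactly as in triangle-based ray tracing.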
Indian Conference on Computer Vision, Graphics and Image Processing | 2010
Sriram Kashyap; Rhushabh Goradia; Parag Chaudhuri; Sharat Chandran
Point-based representations of objects have been used as modeling alternatives to the almost ubiquitous quads or triangles. However, our ability to render these points has not matched their polygonal counterparts when we consider both rendering time and sophisticated lighting effects. In this paper, we present a framework for ray tracing massive point model environments at interactive frame rates on Graphics Processing Units (GPUs). We introduce the Implicit Surface Octree (ISO), a lightweight data structure for efficient representation of point set surfaces. ISOs provide a compact local manifold approximation of the input point data and can also be embellished with lighting information. This enables us to further the state of the art by demonstrating reflections, refractions, and shadow effects on complex point models at interactive frame rates.
Computer Animation and Virtual Worlds | 2004
Prasun Mathur; Chhavi Upadhyay; Parag Chaudhuri; Prem Kumar Kalra
We present a novel measure for compression of time-variant geometry. Compression of time-variant geometry has become increasingly relevant as transmission of high quality geometry streams is severely limited by network bandwidth. Some work has been done on such compression schemes, but none of them give a measure for prioritizing the loss of information from the geometry stream while doing a lossy compression. In this paper we introduce a cost function which assigns a cost to the removal of particular geometric primitives during compression, based upon their importance in preserving the complete animation. We demonstrate that the use of this measure visibly enhances the performance of existing compression schemes.
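A cost function of the kind described above can be sketched as follows: score each vertex by how far its trajectory over the animation deviates from the average trajectory of its neighbours, so vertices whose motion is well predicted by their neighbours are cheap to remove first. This is a sketch in the spirit of the paper's measure, not the authors' exact formula, and the function name is hypothetical.

```python
def removal_cost(trajectory, neighbor_trajectories):
    """Hypothetical removal cost for one vertex of a time-variant mesh.

    trajectory:             list of (x, y, z) positions, one per frame
    neighbor_trajectories:  list of such trajectories for its neighbours

    The cost is the summed squared deviation, over all frames, between
    the vertex and the average of its neighbours; a zero cost means the
    vertex can be dropped and reconstructed by interpolation losslessly.
    """
    cost = 0.0
    n = len(neighbor_trajectories)
    for t, pos in enumerate(trajectory):
        avg = tuple(sum(nb[t][i] for nb in neighbor_trajectories) / n
                    for i in range(3))
        cost += sum((pos[i] - avg[i]) ** 2 for i in range(3))
    return cost
```

Sorting primitives by ascending cost then gives the removal order that sacrifices the least of the animation per primitive dropped, which is the prioritization a lossy compressor needs.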