

Publication


Featured research published by Gopal Pingali.


International Conference on Computer Graphics and Interactive Techniques | 1998

A beam tracing approach to acoustic modeling for interactive virtual environments

Thomas A. Funkhouser; Ingrid Carlbom; Gary W. Elko; Gopal Pingali; Mohan Sondhi; James E. West

Virtual environment research has focused on interactive image generation and has largely ignored acoustic modeling for spatialization of sound. Yet, realistic auditory cues can complement and enhance visual cues to aid navigation, comprehension, and sense of presence in virtual environments. A primary challenge in acoustic modeling is computation of reverberation paths from sound sources fast enough for real-time auralization. We have developed a system that uses precomputed spatial subdivision and “beam tree” data structures to enable real-time acoustic modeling and auralization in interactive virtual environments. The spatial subdivision is a partition of 3D space into convex polyhedral regions (cells) represented as a cell adjacency graph. A beam tracing algorithm recursively traces pyramidal beams through the spatial subdivision to construct a beam tree data structure representing the regions of space reachable by each potential sequence of transmission and specular reflection events at cell boundaries. From these precomputed data structures, we can generate high-order specular reflection and transmission paths at interactive rates to spatialize fixed sound sources in real-time as the user moves through a virtual environment. Unlike previous acoustic modeling work, our beam tracing method: 1) supports evaluation of reverberation paths at interactive rates, 2) scales to compute high-order reflections and large environments, and 3) extends naturally to compute paths of diffraction and diffuse reflection efficiently. We are using this system to develop interactive applications in which a user experiences a virtual environment immersively via simultaneous auralization and visualization.
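The recursive traversal the abstract describes can be sketched in miniature. In this hedged sketch, the cell adjacency graph is a plain dictionary mapping a cell to its (neighbor, boundary) pairs, a "beam" is reduced to the sequence of transmission/reflection events that produced it, and the geometric clipping of pyramidal beams against cell boundaries is omitted; all names are illustrative, not taken from the paper:

```python
from collections import namedtuple

# A beam tree node: the cell a beam has reached, plus the event path
# ('T' = transmission, 'R' = specular reflection) that reached it.
Beam = namedtuple("Beam", "cell path")

def build_beam_tree(adjacency, source_cell, max_depth):
    """Depth-limited recursion over a cell adjacency graph, recording the
    sequence of transmission and reflection events at cell boundaries.
    adjacency: dict cell -> list of (neighbor_cell, boundary_id)."""
    tree = []

    def recurse(cell, path, depth):
        tree.append(Beam(cell, tuple(path)))
        if depth == max_depth:
            return
        for neighbor, boundary in adjacency.get(cell, []):
            # Transmission: the beam passes through the boundary.
            recurse(neighbor, path + [("T", boundary)], depth + 1)
            # Specular reflection: the beam bounces back into the cell.
            recurse(cell, path + [("R", boundary)], depth + 1)

    recurse(source_cell, [], 0)
    return tree
```

A real implementation would clip each beam's pyramid against the boundary polygon and prune beams that become empty; the fixed depth limit stands in for that pruning here.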


Computer Vision and Pattern Recognition | 1998

Real time tracking for enhanced tennis broadcasts

Gopal Pingali; Yves D. Jean; Ingrid Carlbom

This paper develops real time tracking technology for sports broadcast applications. The specific sport chosen here is the game of tennis. The outputs of the tennis tracking system are spatio-temporal trajectories of motion of the players and the ball which can in turn provide a number of statistics about the game. For instance, the distance travelled by a player, the speed and the acceleration at any instant, as well as court coverage patterns can be obtained from the trajectories. The statistics so obtained can be visualized in compelling ways to enhance the appreciation of the athleticism and strategy involved in the sport. We present techniques for tracking the players and the ball in video obtained from stationary cameras. The problem is challenging as the tracking needs to be performed outdoors, players are fast-moving non-rigid objects, and the ball is a small object that can move at speeds in the range of 150 miles an hour. Player trajectories are obtained by dynamically clustering tracks of local features. Ball segmentation and tracking is based on shape and color features of the ball. Real time tracking results are presented on video recorded live by the authors in an international tennis tournament.
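The ball-segmentation step, finding a small blob whose color and size match the ball, can be illustrated with a minimal sketch. The color predicate, 4-connectivity, and area thresholds below are assumptions chosen for illustration, not the paper's actual parameters:

```python
def detect_ball(frame, color_match, min_area=2, max_area=40):
    """Segment candidate ball pixels with a color predicate, group them
    into 4-connected blobs, and keep blobs whose area fits a small round
    object. frame: 2D grid of pixel values; color_match: pixel -> bool.
    Returns the centroids (row, col) of accepted blobs."""
    rows, cols = len(frame), len(frame[0])
    mask = [[color_match(frame[r][c]) for c in range(cols)] for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one connected blob of matching pixels.
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # Shape gate: reject blobs too small or too large to be the ball.
                if min_area <= len(blob) <= max_area:
                    cy = sum(p[0] for p in blob) / len(blob)
                    cx = sum(p[1] for p in blob) / len(blob)
                    centroids.append((cy, cx))
    return centroids
```

A tracker would then associate these per-frame centroids over time, with motion prediction to bridge frames where segmentation fails.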


ACM Multimedia | 1999

Audio-visual tracking for natural interactivity

Gopal Pingali; Gamze Tunali; Ingrid Carlbom

The goal in user interfaces is natural interactivity unencumbered by sensor and display technology. In this paper, we propose that a multi-modal approach using inverse modeling techniques from computer vision, speech recognition, and acoustics can result in such interfaces. In particular, we demonstrate a system for audio-visual tracking, showing that such a system is more robust, more accurate, more compact, and yields more information than using a single modality for tracking. We also demonstrate how such a system can be used to find the talker among a group of individuals, and render 3D scenes to the user.


International Conference on Pattern Recognition | 2000

Ball tracking and virtual replays for innovative tennis broadcasts

Gopal Pingali; Agata Opalach; Yves D. Jean

Presents a real-time computer vision system that tracks the motion of a tennis ball in 3D using multiple cameras. Ball tracking enables virtual replays, new game statistics, and other visualizations which result in very new ways of experiencing and analyzing tennis matches. The system has been used in international television broadcasts and webcasts of more than 15 matches. Six cameras around a stadium, divided into four pairs, are currently used to track the ball on serves which sometimes exceed speeds of 225 kmph. A multi-threaded approach is taken to tracking where each thread tracks the ball in a pair of cameras based on motion, intensity and shape, performs stereo matching to obtain the 3D trajectory, detects when a ball goes out of view of its camera pair, and initializes and triggers a subsequent thread. This efficient approach is scalable to many more cameras tracking multiple objects. The ready acceptance of the system indicates the growing potential for multi-camera based real-time tracking in broadcast applications.
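The thread-handoff scheme can be sketched with Python threads: each worker waits for a trigger, contributes its camera pair's portion of the 3D trajectory, and triggers the next pair's worker when the ball leaves its view. Detection and stereo matching are elided, and the per-pair segment data is illustrative:

```python
import threading

def run_handoff_pipeline(pair_segments):
    """pair_segments: one list of 3D trajectory estimates per camera pair,
    ordered along the ball's flight. Worker i waits until the previous pair
    signals a handoff, appends its segment, then triggers worker i+1."""
    trajectory = []
    events = [threading.Event() for _ in pair_segments]
    events[0].set()  # the first pair starts immediately (e.g. on the serve)
    lock = threading.Lock()

    def worker(i):
        events[i].wait()  # block until this pair is handed the ball
        with lock:
            trajectory.extend(pair_segments[i])  # this pair's 3D estimates
        if i + 1 < len(events):
            events[i + 1].set()  # ball left this pair's view: trigger next

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(len(pair_segments))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return trajectory
```

Because each worker extends the shared trajectory before setting the next event, the segments come out in flight order even though all threads run concurrently.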


International Conference on Cloud Computing | 2011

MADMAC: Multiple Attribute Decision Methodology for Adoption of Clouds

Prasad Saripalli; Gopal Pingali

Cloud adoption (CA) decisions tend to involve multiple, conflicting criteria (attributes) with incommensurable units of measurement, which must be compared across multiple alternatives using imprecise and incomplete information. Multi-Attribute Decision Making (MADM) has been shown to provide a rational basis for decision making in such scenarios. We present the MADMAC framework for cloud adoption, consisting of three Decision Areas (DAs) referred to as Cloud Switch, Cloud Type, and Vendor Choice. It requires the definition of attributes, alternatives, and attribute weights to construct a decision matrix and arrive at a relative ranking that identifies the optimal alternative. We also present a taxonomy organized in a two-level hierarchy: server-centric, client-centric, and mobile-centric clouds, which further map to detailed, specific applications or workloads. A decision support system (DSS) is presented, showing how algorithms derived from MADMAC can compute and optimize CA decisions separately for the three stages, in which the attributes influence CA decisions differently. A modified wide-band Delphi method is proposed for assessing the relative weights of each attribute, by workload. Relative ranks are calculated using these weights, and the Simple Additive Weighting (SAW) method is used to generate value functions for all the alternatives and rank them by value, finally choosing the best alternative. Results from applying the method to four different types of workloads show that it converges on reasonable cloud adoption decisions. MADMAC's key advantage is its fully quantitative and iterative convergence approach based on proven multi-attribute decision methods, which enables decision makers to assess the relative robustness of alternative cloud adoption decisions in a defensible manner. Being amenable to automation, it can respond well to even complex arrays of decision criteria, unlike human decision makers. It can be implemented as a web-based DSS to support cloud decision making worldwide, and improved further using fuzzy TOPSIS methods to address concerns about preferential interdependence of attributes and insufficient input data or judgment expertise.
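The SAW ranking step named in the abstract is a standard method and easy to illustrate. In this sketch all attributes are treated as benefit criteria normalized by their column maxima; the alternative names and scores in the test are hypothetical, not taken from the paper's workload case studies:

```python
def saw_rank(alternatives, weights):
    """Rank alternatives by Simple Additive Weighting (SAW).

    alternatives: dict name -> list of attribute scores (benefit criteria);
    weights: one weight per attribute, in the same order.
    Returns (name, value) pairs sorted best-first."""
    total = sum(weights)
    norm_w = [w / total for w in weights]  # normalize weights to sum to 1
    # Normalize each attribute column by its maximum (benefit criteria).
    columns = list(zip(*alternatives.values()))
    maxima = [max(col) for col in columns]
    values = {
        name: sum(w * (s / m) for w, s, m in zip(norm_w, scores, maxima))
        for name, scores in alternatives.items()
    }
    return sorted(values.items(), key=lambda kv: kv[1], reverse=True)
```

Cost-type criteria (where lower is better) would need an inverted normalization before weighting, and the Delphi step in the paper supplies the weights themselves.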


Pervasive Computing and Communications | 2003

Steerable interfaces for pervasive computing spaces

Gopal Pingali; Claudio S. Pinhanez; Anthony Levas; Rick Kjeldsen; Mark Podlaseck; Han Chen; Noi Sukaviriya

This paper introduces a new class of interactive interfaces that can be moved around to appear on ordinary objects and surfaces anywhere in a space. By dynamically adapting the form, function, and location of an interface to suit the context of the user, such steerable interfaces have the potential to offer radically new and powerful styles of interaction in intelligent pervasive computing spaces. We propose defining characteristics of steerable interfaces and present the first steerable interface system that combines projection, gesture recognition, user tracking, environment modeling and geometric reasoning components within a system architecture. Our work suggests that there is great promise and rich potential for further research on steerable interfaces.


IEEE Transactions on Multimedia | 2002

Instantly indexed multimedia databases of real world events

Gopal Pingali; Agata Opalach; Yves D. Jean; Ingrid Carlbom

We introduce a new paradigm for real-time conversion of a real world event into a rich multimedia database by processing data from multiple sensors observing the event. A real-time analysis of the sensor data, tightly coupled with domain knowledge, results in instant indexing of multimedia data at capture time. This yields semantic information to answer complex queries about the content and the ability to extract portions of data that correspond to complex actions performed in the real world. The power of such an instantly indexed multimedia database system, in content-based retrieval of multimedia data or in semantic analysis and visualization of the data, far exceeds that of systems which index multimedia data only after it is produced. We present LucentVision, an instantly indexed multimedia database system developed for the sport of tennis. This system analyzes video from multiple cameras in real time and captures the activity of the players and the ball in the form of motion trajectories. The system stores these trajectories in a database along with video, 3D models of the environment, scores, and other domain-specific information. LucentVision has been used to enhance live television and Internet broadcasts with game analyses and virtual replays in more than 250 international tennis matches.
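Indexing at capture time, rather than after production, amounts to maintaining an index keyed on timestamps as trajectory points arrive from the analysis pipeline. A minimal sketch, with a sorted in-memory structure standing in for the system's actual database (the class and its labels are illustrative):

```python
import bisect

class TrajectoryIndex:
    """Keep trajectory points ordered by timestamp as they are ingested,
    so interval queries ('what happened between t0 and t1?') need no
    post-hoc scan of the recorded video."""

    def __init__(self):
        self._times = []   # sorted timestamps, parallel to _points
        self._points = []  # (time, label, point) tuples

    def ingest(self, t, point, label):
        # Insert in sorted position at capture time.
        i = bisect.bisect(self._times, t)
        self._times.insert(i, t)
        self._points.insert(i, (t, label, point))

    def query(self, t0, t1):
        # All points with t0 <= t <= t1, via two binary searches.
        lo = bisect.bisect_left(self._times, t0)
        hi = bisect.bisect_right(self._times, t1)
        return self._points[lo:hi]
```

Domain knowledge (scores, court geometry, serve boundaries) would layer further indices on top, enabling the semantic queries the abstract describes.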


IEEE Visualization | 2001

Visualization of sports using motion trajectories: providing insights into performance, style, and strategy

Gopal Pingali; Agata Opalach; Yves D. Jean; Ingrid Carlbom

Remote experience of sporting events has thus far been limited mostly to watching video and the scores and statistics associated with the sport. However, a fast-developing trend is the use of visualization techniques to give new insights into performance, style, and strategy of the players. Automated techniques can extract accurate information from video about player performance that not even the most skilled observer is able to discern. When presented as static images or as a three-dimensional virtual replay, this information makes viewing a game an entirely new and exciting experience. This paper presents one such sports visualization system called LucentVision, which has been developed for the sport of tennis. LucentVision uses real-time video analysis to obtain motion trajectories of players and the ball, and offers a rich set of visualization options based on this trajectory data. The system has been used extensively in the broadcast of international tennis tournaments, both on television and the Internet.


International Conference on Image Processing | 2002

Real-time head orientation estimation using neural networks

Liang Zhao; Gopal Pingali; Ingrid Carlbom

Estimation of human head orientation is important for a number of applications such as human-computer interaction, teleconferencing, virtual reality, and 3D audio rendering. We present a system for estimating human head orientation based on visual information. Two neural networks are trained to approximate the functions that map an image of a head to the orientation of the head. We obtain ground-truth data for training and testing from an electromagnetic tracking device worn by subjects. Our experimental results demonstrate orientation accuracy within 10° with the subject free to move about at distances of three to ten feet from the camera. The system is designed to be robust to illumination changes and it runs in real time.
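The function being learned maps image features to a head angle. As a stand-in for the paper's neural networks, the following hedged sketch fits a linear regressor by stochastic gradient descent; the one-dimensional features and labels are placeholders for the image inputs and electromagnetic-tracker ground truth:

```python
def train_orientation_regressor(samples, lr=0.1, epochs=500):
    """Fit a linear map from feature vectors to an angle (degrees) by
    stochastic gradient descent on squared error.
    samples: list of (feature_vector, angle) pairs.
    Returns a predictor: feature_vector -> estimated angle."""
    n = len(samples[0][0])
    w = [0.0] * n  # weights, one per feature
    b = 0.0        # bias term
    for _ in range(epochs):
        for x, y in samples:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            # Gradient step on squared error for this sample.
            for i in range(n):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b
```

A neural network replaces the linear map with a nonlinear one but keeps the same regression framing: features in, orientation estimate out, tracker labels as supervision.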


International Conference on Multimedia and Expo | 2002

User-following displays

Gopal Pingali; Claudio S. Pinhanez; Tony Levas; Rick Kjeldsen; Mark Podlaseck

Traditionally, a user has positioned himself or herself in front of a display in order to access information from it. In this information age, life at work and even at home is often confined to the space in front of a display device that is the source of information or entertainment. This paper introduces another paradigm, in which the display follows the user rather than the user being tied to the display. We demonstrate how steerable projection and people tracking can be combined to achieve a display that automatically follows the user.

