Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Nikunj Raghuvanshi is active.

Publication


Featured research published by Nikunj Raghuvanshi.


International Conference on Management of Data | 2005

Fast and approximate stream mining of quantiles and frequencies using graphics processors

Naga K. Govindaraju; Nikunj Raghuvanshi; Dinesh Manocha

We present algorithms for fast quantile and frequency estimation in large data streams using graphics processors (GPUs). We exploit the high computational power and memory bandwidth of graphics processors and present a new sorting algorithm that performs rasterization operations on the GPU. We use sorting as the main computational component for histogram approximation and for constructing ε-approximate quantile and frequency summaries. Our algorithms for computing numerical statistics on data streams are deterministic, applicable to fixed- or variable-sized sliding windows, and use a limited memory footprint. We use the GPU as a co-processor and minimize data transmission between the CPU and GPU by taking the low bus bandwidth into account. We implemented our algorithms on a PC with an NVIDIA GeForce FX 6800 Ultra GPU and a 3.4 GHz Pentium IV CPU and applied them to large data streams consisting of more than 100 million values. We also compared the performance of our GPU-based algorithms with optimized implementations of prior CPU-based algorithms. Overall, our results demonstrate that the graphics processors available in a commodity computer system are efficient stream processors and useful co-processors for mining data streams.
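The summary construction described above can be sketched in a few lines; this is a minimal CPU stand-in in which Python's built-in sort replaces the paper's GPU rasterization sort, and all function names are illustrative:

```python
import math

def eps_quantile_summary(window, eps):
    """Build an eps-approximate quantile summary of a data window.

    After sorting (a CPU stand-in for a GPU sort), keeping every
    ceil(eps*n)-th element guarantees that any phi-quantile query can
    be answered within rank error eps*n.
    """
    data = sorted(window)
    n = len(data)
    step = max(1, math.ceil(eps * n))
    summary = [(r + 1, data[r]) for r in range(0, n, step)]  # (rank, value)
    return summary, n

def query_quantile(summary, n, phi):
    """Return a stored value whose rank is closest to ceil(phi*n)."""
    target = max(1, math.ceil(phi * n))
    return min(summary, key=lambda rv: abs(rv[0] - target))[1]
```

For a window of a million values and ε = 0.01, the summary keeps only about a hundred (rank, value) pairs, which is what makes the limited memory footprint possible.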


User Interface Software and Technology | 2014

RoomAlive: magical experiences enabled by scalable, adaptive projector-camera units

Brett R. Jones; Rajinder Sodhi; Michael Murdock; Ravish Mehra; Hrvoje Benko; Andrew D. Wilson; Eyal Ofek; Blair MacIntyre; Nikunj Raghuvanshi; Lior Shapira

RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented entertainment experience. Our system enables new interactive projection-mapping experiences that dynamically adapt content to any room. Users can touch, shoot, stomp, dodge, and steer projected content that seamlessly co-exists with their existing physical environment. The basic building blocks of RoomAlive are projector-depth-camera units, which can be combined through a scalable, distributed framework. The units are individually auto-calibrating and self-localizing, and they create a unified model of the room with no user intervention. We investigate the design space of gaming experiences that are possible with RoomAlive and explore methods for dynamically mapping content based on room layout and user position. Finally, we showcase four experience prototypes that demonstrate the novel interactive experiences that are possible with RoomAlive, and we discuss the design challenges of adapting any game to any room.


IEEE Transactions on Visualization and Computer Graphics | 2009

Efficient and Accurate Sound Propagation Using Adaptive Rectangular Decomposition

Nikunj Raghuvanshi; Rahul Narain; Ming C. Lin

Accurate sound rendering can add significant realism to complement visual display in interactive applications, as well as facilitate acoustic predictions for many engineering applications, like accurate acoustic analysis for architectural design. Numerical simulation can provide this realism most naturally by modeling the underlying physics of wave propagation. However, wave simulation has traditionally posed a tough computational challenge. In this paper, we present a technique which relies on an adaptive rectangular decomposition of 3D scenes to enable efficient and accurate simulation of sound propagation in complex virtual environments. It exploits the known analytical solution of the Wave Equation in rectangular domains, and utilizes an efficient implementation of the Discrete Cosine Transform on Graphics Processors (GPU) to achieve at least a 100-fold performance gain compared to a standard Finite-Difference Time-Domain (FDTD) implementation with comparable accuracy, while also being 10-fold more memory efficient. Consequently, we are able to perform accurate numerical acoustic simulation on large, complex scenes in the kilohertz range. To the best of our knowledge, it was not previously possible to perform such simulations on a desktop computer. Our work thus enables acoustic analysis on large scenes and auditory display for complex virtual environments on commodity hardware.
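The analytical solution the method exploits can be sketched in 1D: in a rectangular domain with rigid walls, the cosine (DCT) modes diagonalize the wave equation, so each modal coefficient evolves by an exact two-step recurrence with no numerical dispersion. The sketch below uses an explicit basis matrix in place of a fast GPU DCT, and the function names are ours:

```python
import numpy as np

def cosine_basis(n):
    """Orthonormal DCT-II basis; rows are the rigid-wall cosine modes."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    B = np.cos(np.pi * k * (j + 0.5) / n)
    B[0] *= np.sqrt(1.0 / n)
    B[1:] *= np.sqrt(2.0 / n)
    return B

def wave_step(p_prev, p_cur, dt, c, L):
    """Advance the 1D wave equation one time step on a segment of
    length L with rigid ends, using the exact analytical evolution of
    each cosine mode:

        P_k(t+dt) = 2*cos(w_k*dt)*P_k(t) - P_k(t-dt),  w_k = c*pi*k/L.

    Because the modal update is exact, the step is dispersion-free for
    any dt, unlike a finite-difference update.
    """
    n = len(p_cur)
    B = cosine_basis(n)
    Pm, Pc = B @ p_prev, B @ p_cur   # transform both time levels to modes
    w = c * np.pi * np.arange(n) / L
    Pn = 2.0 * np.cos(w * dt) * Pc - Pm
    return B.T @ Pn                  # back to the spatial pressure field
```

The full technique applies this per rectangular partition of the 3D scene and couples the partitions at their shared interfaces; the sketch shows only the exact in-partition update.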


Interactive 3D Graphics and Games | 2006

Interactive sound synthesis for large scale environments

Nikunj Raghuvanshi; Ming C. Lin

We present an interactive approach for generating realistic, physically based sounds from rigid-body dynamic simulations. We use spring-mass systems to model each object's local deformation and vibration, which we demonstrate to be an adequate approximation for capturing physical effects such as the magnitude of impact forces, the location of impact, and rolling sounds. No assumption is made about mesh connectivity or topology. Surface meshes used for rigid-body dynamic simulation are used for sound simulation without any modifications. We use results in auditory perception and a novel priority-based quality scaling scheme to enable the system to meet variable, stringent time constraints in a real-time application while ensuring minimal reduction in the perceived sound quality. With this approach, we have observed up to an order-of-magnitude speed-up compared to an implementation without the acceleration. As a result, we are able to run moderately complex simulations with up to hundreds of sounding objects at over 100 frames per second (FPS), making this technique well suited for interactive applications like games and virtual environments. Furthermore, we use OpenAL and EAX™ on Creative Sound Blaster Audigy 2™ cards for fast hardware-accelerated propagation modeling of the synthesized sound.
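The synthesis stage of such a system reduces to a bank of damped sinusoids: each spring-mass mode rings at its own frequency and decays exponentially. A minimal sketch, with illustrative parameter values rather than modes extracted from any actual mesh:

```python
import numpy as np

def synthesize_impact(freqs, dampings, gains, sr=44100, dur=0.5):
    """Sum of exponentially damped sinusoids: the response of a bank of
    spring-mass modes to an impulsive contact. freqs are in Hz and
    dampings in 1/s; in a full system the gains would be set by the
    magnitude and location of the impact force.
    """
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, d, g in zip(freqs, dampings, gains):
        out += g * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    return out
```

A priority-based quality scaling scheme like the one described above would drop or truncate the quietest modes first when the real-time budget is exceeded.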


International Conference on Computer Graphics and Interactive Techniques | 2010

Precomputed wave simulation for real-time sound propagation of dynamic sources in complex scenes

Nikunj Raghuvanshi; John Snyder; Ravish Mehra; Ming C. Lin; Naga K. Govindaraju

We present a method for real-time sound propagation that captures all wave effects, including diffraction and reverberation, for multiple moving sources and a moving listener in a complex, static 3D scene. It performs an offline numerical simulation over the scene and then applies a novel technique to extract and compactly encode the perceptually salient information in the resulting acoustic responses. Each response is automatically broken into two phases: early reflections (ER) and late reverberation (LR), via a threshold on the temporal density of arriving wavefronts. The LR is simulated and stored in the frequency domain, once per room in the scene. The ER accounts for more detailed spatial variation, by recording a set of peak delays/amplitudes in the time domain and a residual frequency response sampled in octave frequency bands, at each source/receiver point pair in a 5D grid. An efficient run-time uses this precomputed representation to perform binaural sound rendering based on frequency-domain convolution. Our system demonstrates realistic, wave-based acoustic effects in real time, including diffraction low-passing behind obstructions, sound focusing, hollow reverberation in empty rooms, sound diffusion in fully-furnished rooms, and realistic late reverberation.
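The ER/LR split via a threshold on the temporal density of arriving wavefronts can be illustrated with a simple stand-in criterion: count significant local peaks per analysis frame and split at the first frame where the count exceeds a threshold. All thresholds below are illustrative, not the paper's values:

```python
import numpy as np

def split_er_lr(ir, sr, frame_ms=10.0, density_thresh=10):
    """Split an impulse response into early reflections (ER) and late
    reverberation (LR) at the first analysis frame whose count of
    significant wavefront arrivals (local peaks above a relative
    amplitude floor) exceeds a density threshold. A simplified stand-in
    for an echo-density criterion.
    """
    a = np.abs(ir)
    floor = 0.05 * a.max()
    # Indices of local maxima above the floor.
    peaks = np.flatnonzero((a[1:-1] > floor) &
                           (a[1:-1] >= a[:-2]) & (a[1:-1] > a[2:])) + 1
    frame = max(1, int(sr * frame_ms / 1000.0))
    counts = np.bincount(peaks // frame)
    dense = np.flatnonzero(counts >= density_thresh)
    split = dense[0] * frame if dense.size else len(ir)
    return ir[:split], ir[split:]
```

In the method above, the ER part then keeps per-pair peak delays/amplitudes in the time domain while the LR part is stored once per room in the frequency domain.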


ACM Transactions on Graphics | 2013

Wave-based sound propagation in large open scenes using an equivalent source formulation

Ravish Mehra; Nikunj Raghuvanshi; Lakulish Antani; Anish Chandak; Sean Curtis; Dinesh Manocha

We present a novel approach for wave-based sound propagation suitable for large, open spaces spanning hundreds of meters, with a small memory footprint. The scene is decomposed into disjoint rigid objects. The free-field acoustic behavior of each object is captured by a compact per-object transfer function relating the amplitudes of a set of incoming equivalent sources to outgoing equivalent sources. Pairwise acoustic interactions between objects are computed analytically to yield compact inter-object transfer functions. The global sound field accounting for all orders of interaction is computed using these transfer functions. The runtime system uses fast summation over the outgoing equivalent source amplitudes for all objects to auralize the sound field for a moving listener in real time. We demonstrate realistic acoustic effects such as diffraction, low-passed sound behind obstructions, focusing, scattering, high-order reflections, and echoes on a variety of scenes.
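Computing "all orders of interaction" from pairwise transfer functions amounts to summing a Neumann series, which converges to the solution of a linear system. The sketch below is an abstract dense-matrix illustration of that idea only; the paper's per-object operators and coupling structure are compact and not modeled here:

```python
import numpy as np

def global_source_amplitudes(q_free, T, max_order=None):
    """Combine free-field equivalent-source amplitudes q_free with an
    inter-object transfer matrix T (T maps all objects' outgoing
    amplitudes to the incoming excitation they scatter back out) by
    summing every order of interaction:

        q = q_free + T q_free + T^2 q_free + ...

    which, when the series converges, equals the solution of
    (I - T) q = q_free.
    """
    n = len(q_free)
    if max_order is None:
        return np.linalg.solve(np.eye(n) - T, q_free)
    q, term = q_free.copy(), q_free.copy()
    for _ in range(max_order):
        term = T @ term
        q += term
    return q
```

The run-time auralization then only needs a fast summation over the resulting outgoing amplitudes for the listener position.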


Interactive 3D Graphics and Games | 2011

Sound synthesis for impact sounds in video games

D. Brandon Lloyd; Nikunj Raghuvanshi; Naga K. Govindaraju

We present an interactive system for synthesizing high-quality, physically based audio on current video game consoles. From a recorded impact sound, we compute a modal model, which we use to synthesize variations of the sound on the fly. We show that for many sounds, greater quality is obtained by using the amplitude envelopes of the extracted modes directly rather than fitting the envelopes to the standard exponential decay model. When combined with a residual, the synthesized sounds are in most cases indistinguishable from recorded clips. Compared to using multiple prerecorded clips to obtain variation, our system consumes less of the limited console memory. For sounds that are less amenable to modal synthesis, we introduce a simple filter that generates plausible variations from a single clip. Our system integrates easily with existing audio middleware and has been implemented in the Xbox 360 game Crackdown 2.
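The "standard exponential decay model" that the measured envelopes are compared against can be fit with a simple log-linear regression. A minimal sketch assuming the envelope samples are already extracted (the actual envelope-extraction pipeline is omitted):

```python
import numpy as np

def fit_exponential_envelope(env, sr):
    """Least-squares fit of A*exp(-d*t) to a measured modal amplitude
    envelope via log-linear regression. Returns (A, d). Illustrative
    only; samples at or below a tiny floor are excluded so the log is
    well defined.
    """
    t = np.arange(len(env)) / sr
    mask = env > 1e-6
    slope, intercept = np.polyfit(t[mask], np.log(env[mask]), 1)
    return np.exp(intercept), -slope
```

The observation above is that for many real impacts the measured envelope deviates audibly from this single-exponential fit, so resynthesizing with the envelope itself sounds better.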


Communications of the ACM | 2007

Real-time sound synthesis and propagation for games

Nikunj Raghuvanshi; Christian Lauterbach; Anish Chandak; Dinesh Manocha; Ming C. Lin

Simulating the complete process of sound synthesis and propagation by exploiting aural perception makes the experience of playing games much more realistic and immersive.


Solid and Physical Modeling | 2008

Accelerated wave-based acoustics simulation

Nikunj Raghuvanshi; Nico Galoppo; Ming C. Lin

We present an efficient technique to model sound propagation accurately in an arbitrary 3D scene by numerically integrating the wave equation. We show that by performing an offline modal analysis and using eigenvalues from a refined mesh, we can simulate sound propagation with reduced dispersion on a much coarser mesh, enabling accelerated computation. Since performing a modal analysis on the complete scene is usually not feasible, we present a domain decomposition approach to drastically shorten the pre-processing time. We introduce a simple, efficient and stable technique for handling the communication between the domain partitions. We validate the accuracy of our approach against cases with known analytical solutions. With our approach, we have observed up to an order of magnitude speedup compared to a standard finite-difference technique.


International Conference on Computer Graphics and Interactive Techniques | 2014

Parametric wave field coding for precomputed sound propagation

Nikunj Raghuvanshi; John Snyder

The acoustic wave field in a complex scene is a chaotic 7D function of time and the positions of the source and listener, making it difficult to compress and interpolate. This hampers precomputed approaches that tabulate impulse responses (IRs) to allow immersive, real-time sound propagation in static scenes. We code the field of time-varying IRs in terms of a few perceptual parameters derived from the IRs' energy decay. The resulting parameter fields are spatially smooth and compressed using a lossless scheme similar to PNG. We show that this encoding removes two of the seven dimensions, making it possible to handle large scenes such as entire game maps within 100 MB of memory. Run-time decoding is fast, taking 100 μs per source. We introduce an efficient and scalable method for convolutionally rendering acoustic parameters that generates artifact-free audio even for fast motion and sudden changes in reverberance. We demonstrate convincing spatially varying effects in complex scenes, including occlusion/obstruction and reverberation, in our system integrated with Unreal Engine 3™.
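Reducing an IR to a few perceptual parameters derived from its energy decay can be illustrated with two classic quantities: total energy and a 60 dB decay time estimated from the Schroeder backward-integrated energy decay curve. This is a simplified illustration of the general idea, not the paper's parameter set:

```python
import numpy as np

def ir_parameters(ir, sr):
    """Extract two decay-derived parameters from an impulse response:
    total energy and a 60 dB decay time (RT60-style), estimated by
    fitting a line to the Schroeder energy decay curve between -5 dB
    and -35 dB and extrapolating to -60 dB.
    """
    e = np.asarray(ir, dtype=float) ** 2
    edc = np.cumsum(e[::-1])[::-1]            # Schroeder backward integration
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-300)
    lo, hi = np.searchsorted(-edc_db, [5.0, 35.0])
    t = np.arange(len(ir)) / sr
    slope, _ = np.polyfit(t[lo:hi], edc_db[lo:hi], 1)  # dB per second
    return edc[0], -60.0 / slope
```

Fields of such parameters vary smoothly over source/listener position, which is exactly what makes them far more compressible than the raw IRs.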

Collaboration


Dive into Nikunj Raghuvanshi's collaborations.

Top Co-Authors

Ming C. Lin (University of North Carolina at Chapel Hill)
Dinesh Manocha (University of North Carolina at Chapel Hill)
Ravish Mehra (University of North Carolina at Chapel Hill)
Anish Chandak (University of North Carolina at Chapel Hill)