Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Carl Schissler is active.

Publication


Featured research published by Carl Schissler.


International Conference on Computer Graphics and Interactive Techniques | 2014

High-order diffraction and diffuse reflections for interactive sound propagation in large environments

Carl Schissler; Ravish Mehra; Dinesh Manocha

We present novel algorithms for modeling interactive diffuse reflections and higher-order diffraction in large-scale virtual environments. Our formulation is based on ray-based sound propagation and is directly applicable to complex geometric datasets. We use an incremental approach that combines radiosity and path tracing techniques to iteratively compute diffuse reflections. We also present algorithms for wavelength-dependent simplification and visibility graph computation to accelerate higher-order diffraction at runtime. The overall system can generate plausible sound effects at interactive rates in large, dynamic scenes that have multiple sound sources. We highlight the performance in complex indoor and outdoor environments and observe an order of magnitude performance improvement over previous methods.


IEEE Transactions on Visualization and Computer Graphics | 2012

Guided Multiview Ray Tracing for Fast Auralization

Micah Taylor; Anish Chandak; Qi Mo; Christian Lauterbach; Carl Schissler; Dinesh Manocha

We present a novel method for tuning geometric acoustic simulations based on ray tracing. Our formulation computes sound propagation paths from source to receiver and exploits the independence of visibility tests and validation tests to dynamically guide the simulation to high accuracy and performance. Our method makes no assumptions of scene layout and can account for moving sources, receivers, and geometry. We combine our guidance algorithm with a fast GPU sound propagation system for interactive simulation. Our implementation efficiently computes early specular paths and first order diffraction with a multiview tracing algorithm. We couple our propagation simulation with an audio output system supporting a high order interpolation scheme that accounts for attenuation, cross fading, and delay. The resulting system can render acoustic spaces composed of thousands of triangles interactively.


IEEE Transactions on Visualization and Computer Graphics | 2016

Efficient HRTF-based Spatial Audio for Area and Volumetric Sources

Carl Schissler; Aaron Nicholls; Ravish Mehra

We present a novel spatial audio rendering technique to handle sound sources that can be represented by either an area or a volume in VR environments. As opposed to point-sampled sound sources, our approach projects the area-volumetric source to the spherical domain centered at the listener and represents this projection area compactly using the spherical harmonic (SH) basis functions. By representing the head-related transfer function (HRTF) in the same basis, we demonstrate that spatial audio which corresponds to an area-volumetric source can be efficiently computed as a dot product of the SH coefficients of the projection area and the HRTF. This results in an efficient technique whose computational complexity and memory requirements are independent of the complexity of the sound source. Our approach can support dynamic area-volumetric sound sources at interactive rates. We evaluate the performance of our technique in large complex VR environments and demonstrate significant improvement over the naive point-sampling technique. We also present results of a user evaluation, conducted to quantify the subjective preference of the user for our approach over the point-sampling approach in VR environments.
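The key step described above follows from the orthonormality of the SH basis: the integral over the sphere of the product of two band-limited functions collapses to a dot product of their coefficient vectors. A minimal numerical sketch of that identity (the coefficient values are arbitrary placeholders, not real HRTF or source data):

```python
import numpy as np

def sh_basis_order1(dirs):
    """Real spherical harmonics up to order 1 for unit directions (N, 3)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0 = 0.5 * np.sqrt(1.0 / np.pi)       # Y_0^0
    c1 = np.sqrt(3.0 / (4.0 * np.pi))     # Y_1^{-1}, Y_1^0, Y_1^1
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def render_gain(area_coeffs, hrtf_coeffs):
    # By SH orthonormality, integrating the product of the projected source
    # function and the HRTF over the sphere collapses to a dot product.
    return float(np.dot(area_coeffs, hrtf_coeffs))

# Verify the identity by Monte Carlo integration over the sphere.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200_000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # uniform directions
Y = sh_basis_order1(dirs)

area = np.array([0.8, 0.1, -0.3, 0.2])    # placeholder projection coefficients
hrtf = np.array([0.5, -0.2, 0.4, 0.1])    # placeholder HRTF coefficients
mc = 4.0 * np.pi * np.mean((Y @ area) * (Y @ hrtf))
print(render_gain(area, hrtf), mc)        # agree up to Monte Carlo noise
```

This is why the per-source cost depends only on the SH order, not on the geometric complexity of the source, which is the complexity claim the abstract makes.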


Interactive 3D Graphics and Games | 2016

Adaptive impulse response modeling for interactive sound propagation

Carl Schissler; Dinesh Manocha

We present novel techniques to accelerate the computation of impulse responses for interactive sound rendering. Our formulation is based on geometric acoustic algorithms that use ray tracing to compute the propagation paths from each source to the listener in large, dynamic scenes. In order to accelerate generation of realistic acoustic effects in multi-source scenes, we introduce two novel concepts: the impulse response cache and an adaptive frequency-driven ray tracing algorithm that exploits psychoacoustic characteristics of the impulse response length. As compared to prior approaches, we trace relatively fewer rays while maintaining high simulation fidelity for real-time applications. Furthermore, our approach can handle highly reverberant scenes and high-dynamic-range sources. We demonstrate its application in many scenarios and have observed a 5x speedup in computation time and about two orders of magnitude reduction in memory overhead compared to previous approaches. We also present the results of a preliminary user evaluation of our approach.
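The abstract does not spell out how the impulse response cache works; one plausible reading is temporal accumulation: each frame traces relatively few rays and blends the resulting noisy response into a cached one. A hedged sketch of that idea (the blend factor and histogram layout are assumptions of this illustration, not the paper's design):

```python
import numpy as np

def update_ir_cache(cached_ir, new_ir, alpha=0.2):
    """Blend a freshly traced (noisy) impulse response into the cache.

    Exponential averaging across frames lets each frame trace far fewer
    rays while the accumulated response stays low-variance; alpha trades
    responsiveness to scene changes against noise reduction.
    """
    n = max(len(cached_ir), len(new_ir))
    out = np.zeros(n)
    out[: len(cached_ir)] += (1.0 - alpha) * np.asarray(cached_ir, float)
    out[: len(new_ir)] += alpha * np.asarray(new_ir, float)
    return out

# Repeated updates with the same (noisy) target converge toward it.
ir = np.zeros(4)
for _ in range(40):
    ir = update_ir_cache(ir, np.ones(4), alpha=0.25)
```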


International Conference on Computer Graphics and Interactive Techniques | 2016

Interactive sound propagation with bidirectional path tracing

Chunxiao Cao; Zhong Ren; Carl Schissler; Dinesh Manocha; Kun Zhou

We introduce Bidirectional Sound Transport (BST), a new algorithm that simulates sound propagation by bidirectional path tracing using multiple importance sampling. Our approach can handle multiple sources in large virtual environments with complex occlusion, and can produce plausible acoustic effects at an interactive rate on a desktop PC. We introduce a new metric based on the signal-to-noise ratio (SNR) of the energy response and use this metric to evaluate the performance of ray-tracing-based acoustic simulation methods. Our formulation exploits temporal coherence in terms of using the resulting sample distribution of the previous frame to guide the sample distribution of the current one. We show that our sample redistribution algorithm converges and better balances between early and late reflections. We evaluate our approach on different benchmarks and demonstrate significant speedup over prior geometric acoustic algorithms.
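BST's exact estimator is not given in the abstract, but the multiple importance sampling it builds on standardly combines overlapping sampling strategies with the balance heuristic; a minimal sketch:

```python
def balance_heuristic(pdf_i, pdfs):
    """MIS weight for a sample drawn from strategy i: w_i = p_i / sum_j p_j.

    In a bidirectional setting the strategies are, e.g., paths traced from
    the source versus paths traced from the listener; weighting each
    sample this way counts every transport path once in expectation.
    """
    total = sum(pdfs)
    return pdf_i / total if total > 0.0 else 0.0
```

Note that the weights across all strategies that could produce a given sample sum to one, which is what keeps the combined estimator unbiased.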


IEEE Transactions on Visualization and Computer Graphics | 2016

SynCoPation: Interactive Synthesis-Coupled Sound Propagation

Atul Rungta; Carl Schissler; Ravish Mehra; Chris Malloy; Ming C. Lin; Dinesh Manocha

Recent research in sound simulation has focused on either sound synthesis or sound propagation, and many standalone algorithms have been developed for each domain. We present a novel technique for coupling sound synthesis with sound propagation to automatically generate realistic aural content for virtual environments. Our approach can generate sounds from rigid-bodies based on the vibration modes and radiation coefficients represented by the single-point multipole expansion. We present a mode-adaptive propagation algorithm that uses a perceptual Hankel function approximation technique to achieve interactive runtime performance. The overall approach allows for high degrees of dynamism - it can support dynamic sources, dynamic listeners, and dynamic directivity simultaneously. We have integrated our system with the Unity game engine and demonstrate the effectiveness of this fully-automatic technique for audio content creation in complex indoor and outdoor scenes. We conducted a preliminary, online user-study to evaluate whether our Hankel function approximation causes any perceptible loss of audio quality. The results indicate that the subjects were unable to distinguish between the audio rendered using the approximate function and audio rendered using the full Hankel function in the Cathedral, Tuscany, and the Game benchmarks.
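The paper's perceptual approximation itself is not specified in the abstract; for reference, the exact spherical Hankel function of the first kind that governs outgoing radiation in a multipole expansion can be computed from the n = 0, 1 closed forms and an upward recurrence:

```python
import cmath

def spherical_hankel1(n, x):
    """Spherical Hankel function of the first kind, h_n^(1)(x), for x > 0.

    Uses h_0(x) = -i e^{ix} / x and h_1(x) = -(e^{ix} / x)(1 + i/x), then
    the recurrence h_{k+1}(x) = (2k+1)/x * h_k(x) - h_{k-1}(x), which is
    stable in the upward direction for the Hankel functions.
    """
    h0 = -1j * cmath.exp(1j * x) / x
    if n == 0:
        return h0
    h1 = -(cmath.exp(1j * x) / x) * (1.0 + 1j / x)
    for k in range(1, n):
        h0, h1 = h1, (2 * k + 1) / x * h1 - h0
    return h1
```

Since h_n = j_n + i y_n, the real part of the result equals the spherical Bessel function j_n, which gives an easy correctness check.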


IEEE Transactions on Visualization and Computer Graphics | 2018

Acoustic Classification and Optimization for Multi-Modal Rendering of Real-World Scenes

Carl Schissler; Christian Loftin; Dinesh Manocha

We present a novel algorithm to generate virtual acoustic effects in captured 3D models of real-world scenes for multimodal augmented reality. We leverage recent advances in 3D scene reconstruction in order to automatically compute acoustic material properties. Our technique consists of a two-step procedure that first applies a convolutional neural network (CNN) to estimate the acoustic material properties, including frequency-dependent absorption coefficients, that are used for interactive sound propagation. In the second step, an iterative optimization algorithm is used to adjust the materials determined by the CNN until a virtual acoustic simulation converges to measured acoustic impulse responses. We have applied our algorithm to many reconstructed real-world indoor scenes and evaluated its fidelity for augmented reality applications.
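The optimization details are left open in the abstract; as a toy analogue of adjusting materials until simulation matches measurement, one can rescale absorption coefficients so the Sabine reverberation time (RT60 = 0.161 V / Σ S·α) matches a measured value. The real system optimizes frequency-dependent coefficients against full measured impulse responses; this sketch only illustrates the fit-to-measurement loop:

```python
def sabine_rt60(volume, surfaces):
    """Sabine reverberation time: RT60 = 0.161 * V / sum(S_i * alpha_i)."""
    absorption = sum(s * a for s, a in surfaces)
    return 0.161 * volume / absorption

def fit_absorption(volume, areas, alpha0, target_rt60, iters=50):
    """Scale all absorption coefficients by a common factor until the
    predicted RT60 matches the measured one (fixed-point iteration)."""
    alphas = list(alpha0)
    for _ in range(iters):
        rt = sabine_rt60(volume, list(zip(areas, alphas)))
        scale = rt / target_rt60      # RT too long -> need more absorption
        alphas = [min(1.0, a * scale) for a in alphas]
    return alphas
```

Without the physical clamp to α ≤ 1, one iteration already lands on the target, since RT60 is inversely proportional to total absorption.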


Journal of the Acoustical Society of America | 2017

Efficient construction of the spatial room impulse response

Carl Schissler; Peter Stirling; Ravish Mehra

An important component of the modeling of sound propagation for virtual reality (VR) is the spatialization of the room impulse response (RIR) for directional listeners. This involves convolution of the listener's head-related transfer function (HRTF) with the RIR to generate a spatial room impulse response (SRIR), which can be used to auralize the sound entering the listener's ear canals. Previous approaches tend to evaluate the HRTF for each sound propagation path, though this is too slow for interactive VR latency requirements. We present a new technique for computation of the SRIR that performs the convolution with the HRTF in the spherical harmonic (SH) domain for RIR partitions of a fixed length. The main contribution is a novel perceptually driven metric that adaptively determines the lowest SH order required for each partition to result in no perceptible error in the SRIR. By using lower SH order for some partitions, our technique saves a significant amount of computation and is almost an order of magnitude faster.
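The perceptual metric itself is not given in the abstract; a simplified stand-in picks, per partition, the lowest SH order whose truncated tail energy falls below a relative threshold (the energy criterion and threshold value are assumptions of this illustration):

```python
import numpy as np

def min_sh_order(coeffs_per_band, rel_threshold=1e-3):
    """Lowest SH order whose truncation error energy stays below threshold.

    coeffs_per_band: sequence where entry l holds band l's 2l+1 coefficients.
    Returns the smallest order L such that the energy in bands above L is
    at most rel_threshold times the total energy.
    """
    energies = np.array([np.sum(np.asarray(c, float) ** 2)
                         for c in coeffs_per_band])
    total = energies.sum()
    if total == 0.0:
        return 0
    tail = np.cumsum(energies[::-1])[::-1]   # tail[l] = energy in bands >= l
    for L in range(len(energies)):
        rem = tail[L + 1] if L + 1 < len(energies) else 0.0
        if rem <= rel_threshold * total:
            return L
    return len(energies) - 1
```

Late, diffuse partitions typically concentrate their energy in the low bands, which is why truncating them to a low order saves work without audible error.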


Journal of the Acoustical Society of America | 2013

Interactive gpu-based sound auralization in dynamic scenes

Qi Mo; Micah Taylor; Anish Chandak; Christian Lauterbach; Carl Schissler; Dinesh Manocha

We present an auralization algorithm for interactive virtual environments with dynamic objects, sources, and listener. Our approach uses a modified image source method that computes propagation paths combining direct transmission, specular reflections, and edge diffractions up to a specified order. We use a novel multi-view raycasting algorithm for parallel computation of image sources on GPUs. Rays that intersect near diffracting edges are detected using barycentric coordinates and further propagated. In order to reduce the artifacts in audio rendering of dynamic scenes, we use a high order interpolation scheme that takes into account attenuation, crossfading, and delay. The resulting system can perform auralization at interactive rates on a high-end PC with an NVIDIA GTX 280 GPU with 2–3 orders of reflections and 1 order of diffraction. Overall, our approach can generate plausible sound rendering for game-like scenes with tens of thousands of triangles. We observe more than an order of magnitude improvement.
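The core step of the image source method reflects the source position across each reflecting surface; a minimal sketch of a single mirroring step (the point-plus-normal plane representation is a choice of this illustration):

```python
import numpy as np

def mirror_source(source, plane_point, plane_normal):
    """Reflect a source position across a planar surface.

    The reflected ("image") source radiates the specular reflection path:
    image = source - 2 * dist(source, plane) * unit_normal.
    """
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    d = np.dot(np.asarray(source, float) - np.asarray(plane_point, float), n)
    return np.asarray(source, float) - 2.0 * d * n
```

Higher-order reflections come from mirroring image sources recursively across further planes, with visibility tests pruning invalid paths.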


ACM Symposium on Applied Perception | 2018

Effects of virtual acoustics on target-word identification performance in multi-talker environments

Atul Rungta; Nicholas Rewkowski; Carl Schissler; Philip Robinson; Ravish Mehra; Dinesh Manocha

Many virtual reality applications let multiple users communicate in a multi-talker environment, recreating the classic cocktail-party effect. While there is a vast body of research on the perception and intelligibility of human speech in real-world cocktail-party scenarios, there is little work on accurately modeling and evaluating the effect in virtual environments. To evaluate the impact of virtual acoustic simulation on the cocktail-party effect, we conducted experiments to establish the signal-to-noise ratio (SNR) thresholds for target-word identification performance. Our evaluation used sentences from the coordinate response measure corpus in the presence of multi-talker babble. The thresholds were established under varying sound propagation and spatialization conditions. We used a state-of-the-art geometric acoustic system integrated into the Unity game engine to simulate varying conditions of reverberance (direct sound; direct sound and early reflections; direct sound, early reflections, and late reverberation) and spatialization (mono, stereo, and binaural). Our results show that spatialization has the biggest effect on listeners' ability to discern target words in multi-talker virtual environments, while reverberance has a slight negative effect.

Collaboration


Dive into Carl Schissler's collaborations.

Top Co-Authors

Dinesh Manocha, University of North Carolina at Chapel Hill
Ravish Mehra, University of North Carolina at Chapel Hill
Anish Chandak, University of North Carolina at Chapel Hill
Christian Lauterbach, University of North Carolina at Chapel Hill
Micah Taylor, University of North Carolina at Chapel Hill
Qi Mo, University of North Carolina at Chapel Hill
Atul Rungta, University of North Carolina at Chapel Hill
Nicholas Rewkowski, University of North Carolina at Chapel Hill