
Publication


Featured research published by Zhimin Ren.


ACM Transactions on Graphics | 2013

Example-guided physically based modal sound synthesis

Zhimin Ren; Hengchin Yeh; Ming C. Lin

Linear modal synthesis methods have often been used to generate sounds for rigid bodies. One of the key challenges in widely adopting such techniques is the lack of automatic determination of satisfactory material parameters that recreate realistic audio quality of sounding materials. We introduce a novel method using prerecorded audio clips to estimate material parameters that capture the inherent quality of recorded sounding materials. Our method extracts perceptually salient features from audio examples. Based on psychoacoustic principles, we design a parameter estimation algorithm using an optimization framework and these salient features to guide the search for the best material parameters for modal synthesis. We also present a method that compensates for the differences between the real-world recording and sound synthesized using solely linear modal synthesis models to create the final synthesized audio. The resulting audio generated from this sound synthesis pipeline preserves the sense of material conveyed by the recorded audio example. Moreover, both the estimated material parameters and the residual compensation naturally transfer to virtual objects of different sizes and shapes, while the synthesized sounds vary accordingly. A perceptual study shows that the results of this system compare well with real-world recordings in terms of material perception.
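At its core, the linear modal synthesis this paper builds on is a sum of damped sinusoids, where the frequencies, damping rates, and amplitudes are exactly the material-dependent parameters being estimated. A minimal sketch (the mode values below are invented for illustration, not taken from the paper):

```python
import math

def modal_synthesis(modes, duration=0.5, sr=44100):
    """Sum of damped sinusoids; each mode is (freq_hz, damping, amplitude)."""
    n = int(duration * sr)
    samples = [0.0] * n
    for freq, damping, amp in modes:
        omega = 2.0 * math.pi * freq
        for i in range(n):
            t = i / sr
            samples[i] += amp * math.exp(-damping * t) * math.sin(omega * t)
    return samples

# Hypothetical modes for a "metallic" object: high frequencies, low damping.
samples = modal_synthesis([(440.0, 6.0, 1.0), (1172.0, 9.0, 0.4)])
```

The parameter estimation described in the abstract would search over the damping and frequency values so that the synthesized signal's salient features match those of the recording.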


IEEE Transactions on Visualization and Computer Graphics | 2008

AD-Frustum: Adaptive Frustum Tracing for Interactive Sound Propagation

Anish Chandak; Christian Lauterbach; Micah Taylor; Zhimin Ren; Dinesh Manocha

We present an interactive algorithm to compute sound propagation paths for transmission, specular reflection and edge diffraction in complex scenes. Our formulation uses an adaptive frustum representation that is automatically subdivided to accurately compute intersections with the scene primitives. We describe a simple and fast algorithm to approximate the visible surface for each frustum and generate new frusta based on specular reflection and edge diffraction. Our approach is applicable to all triangulated models and we demonstrate its performance on architectural and outdoor models with tens or hundreds of thousands of triangles and moving objects. In practice, our algorithm can perform geometric sound propagation in complex scenes at 4-20 frames per second on a multi-core PC.
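The specular bounce used to spawn new frusta in geometric methods like this one follows the standard mirror formula d' = d - 2(d.n)n. A tiny sketch of just that vector step (no frustum bookkeeping):

```python
def reflect(direction, normal):
    # Mirror a propagation direction about a unit surface normal.
    d = sum(di * ni for di, ni in zip(direction, normal))
    return tuple(di - 2.0 * d * ni for di, ni in zip(direction, normal))

# A ray heading down onto a floor (normal +y) bounces back up.
reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0))  # → (1.0, 1.0, 0.0)
```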


International Conference on Computer Graphics and Interactive Techniques | 2013

Wave-ray coupling for interactive sound propagation in large complex scenes

Hengchin Yeh; Ravish Mehra; Zhimin Ren; Lakulish Antani; Dinesh Manocha; Ming C. Lin

We present a novel hybrid approach that couples geometric and numerical acoustic techniques for interactive sound propagation in complex environments. Our formulation is based on a combination of spatial and frequency decomposition of the sound field. We use numerical wave-based techniques to precompute the pressure field in the near-object regions and geometric propagation techniques in the far-field regions to model sound propagation. We present a novel two-way pressure coupling technique at the interface of near-object and far-field regions. At runtime, the impulse response at the listener position is computed at interactive rates based on the stored pressure field and interpolation techniques. Our system is able to simulate high-fidelity acoustic effects such as diffraction, scattering, low-pass filtering behind obstruction, reverberation, and high-order reflections in large, complex indoor and outdoor environments, including scenes from the Half-Life 2 game engine. The pressure computation requires orders of magnitude lower memory than standard wave-based numerical techniques.
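The runtime step, blending stored impulse responses at the listener position, can be illustrated in one dimension (the grid layout and storage format here are invented for illustration; the actual system operates on precomputed 3-D pressure fields):

```python
def interpolate_ir(stored, pos):
    """Linearly blend impulse responses stored at integer grid points.

    stored: dict mapping grid index -> impulse response (list of samples)
    pos: listener position along a 1-D line of sample points
    """
    i = int(pos)
    t = pos - i
    lo, hi = stored[i], stored[i + 1]
    return [a * (1.0 - t) + b * t for a, b in zip(lo, hi)]

# Listener a quarter of the way between two stored sample points.
interpolate_ir({0: [1.0, 0.0], 1: [0.0, 1.0]}, 0.25)  # → [0.75, 0.25]
```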


IEEE Virtual Reality Conference | 2010

Synthesizing contact sounds between textured models

Zhimin Ren; Hengchin Yeh; Ming C. Lin

We present a new interaction handling model for physics-based sound synthesis in virtual environments. A new three-level surface representation for describing object shapes, visible surface bumpiness, and microscopic roughness (e.g. friction) is proposed to model surface contacts at varying resolutions for automatically simulating rich, complex contact sounds. This new model can capture various types of surface interaction, including sliding, rolling, and impact, with a combination of three levels of spatial resolution. We demonstrate our method by synthesizing complex, varying sounds in several interactive scenarios and a game-like virtual environment. The three-level interaction model for sound synthesis enhances the perceived coherence between audio and visual cues in virtual reality applications.
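As an illustration of the middle level (visible surface bumpiness), a sliding contact can be driven by a periodic excitation as the contact point crosses bumps; the profile and all parameter values below are invented for illustration, not taken from the paper:

```python
import math

def sliding_force(speed, bump_spacing, duration, sr=8000):
    """Periodic excitation as a contact point crosses evenly spaced bumps.

    speed: sliding speed in m/s; bump_spacing: distance between bumps in m.
    Returns a non-negative force profile sampled at sr Hz.
    """
    n = int(duration * sr)
    rate = speed / bump_spacing  # bumps crossed per second
    return [max(0.0, math.sin(2.0 * math.pi * rate * i / sr)) for i in range(n)]

# Sliding at 0.2 m/s over bumps 1 cm apart -> 20 bump crossings per second.
force = sliding_force(speed=0.2, bump_spacing=0.01, duration=0.1)
```

Such a force profile would then excite the object's modal model (as in the modal synthesis papers above) to produce the contact sound.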


Virtual Reality Software and Technology | 2015

Interactive virtual percussion instruments on mobile devices

Zhimin Ren; Ming C. Lin

We present a multimodal virtual percussion instrument system on consumer mobile devices that allows users to design and configure customizable virtual percussion instruments and interact with them in real time. Users can create virtual instruments of different materials and shapes interactively, by editing and selecting the desired characteristics. Both the visual and auditory feedback are then computed on the fly to automatically correspond to the instrument properties and user interaction. We utilize efficient 3D input processing algorithms to approximate and represent real-time multi-touch input with key meta properties and adopt fast physical modeling to synthesize sounds. Despite the relatively limited computing resources on mobile devices, we are able to achieve rich and responsive multimodal feedback based on real-time user input. A pilot study is conducted to assess the effectiveness of the system.


Archive | 2009

Fast Edge-Diffraction for Sound Propagation in Complex Virtual Environments

Micah Taylor; Anish Chandak; Zhimin Ren; Christian Lauterbach; Dinesh Manocha


Interactive 3D Graphics and Games | 2012

Tabletop Ensemble: touch-enabled virtual percussion instruments

Zhimin Ren; Ravish Mehra; Jason Coposky; Ming C. Lin


IEEE Transactions on Visualization and Computer Graphics | 2013

Auditory Perception of Geometry-Invariant Material Properties

Zhimin Ren; Hengchin Yeh; Roberta L. Klatzky; Ming C. Lin


Archive | 2015

Methods, Systems, and Computer Readable Media for Simulating Sound Propagation Using Wave-Ray Coupling

Hengchin Yeh; Ravish Mehra; Lakulish Antani; Zhimin Ren; Ming C. Lin; Dinesh Manocha


Human Factors in Computing Systems | 2012

Designing virtual instruments with touch-enabled interface

Zhimin Ren; Ravish Mehra; Jason Coposky; Ming C. Lin

Collaboration


Dive into Zhimin Ren's collaborations.

Top Co-Authors

Ming C. Lin (University of North Carolina at Chapel Hill)
Hengchin Yeh (University of North Carolina at Chapel Hill)
Dinesh Manocha (University of North Carolina at Chapel Hill)
Ravish Mehra (University of North Carolina at Chapel Hill)
Lakulish Antani (University of North Carolina at Chapel Hill)
Anish Chandak (University of North Carolina at Chapel Hill)
Christian Lauterbach (University of North Carolina at Chapel Hill)
Jason Coposky (Renaissance Computing Institute)
Micah Taylor (University of North Carolina at Chapel Hill)