Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Nicolas Tsingos is active.

Publication


Featured research published by Nicolas Tsingos.


international conference on computer graphics and interactive techniques | 2001

Modeling acoustics in virtual environments using the uniform theory of diffraction

Nicolas Tsingos; Thomas A. Funkhouser; Addy Ngan; Ingrid Carlbom

Realistic modeling of reverberant sound in 3D virtual worlds provides users with important cues for localizing sound sources and understanding spatial properties of the environment. Unfortunately, current geometric acoustic modeling systems do not accurately simulate reverberant sound. Instead, they model only direct transmission and specular reflection, while diffraction is either ignored or modeled through statistical approximation. However, diffraction is important for correct interpretation of acoustic environments, especially when the direct path between sound source and receiver is occluded. The Uniform Theory of Diffraction (UTD) extends geometrical acoustics with diffraction phenomena: illuminated edges become secondary sources of diffracted rays that in turn may propagate through the environment. In this paper, we propose an efficient way for computing the acoustical effect of diffraction paths using the UTD for deriving secondary diffracted rays and associated diffraction coefficients. Our main contributions are: 1) a beam tracing method for enumerating sequences of diffracting edges efficiently and without aliasing in densely occluded polyhedral environments; 2) a practical approximation to the simulated sound field in which diffraction is considered only in shadow regions; and 3) a real-time auralization system demonstrating that diffraction dramatically improves the quality of spatialized sound in virtual environments.
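To picture the shadow-region approximation described above, here is a minimal, hypothetical sketch (not the authors' implementation): it adds an edge-diffracted contribution only when the direct source-to-receiver path is occluded, and uses a placeholder scalar in place of the full frequency-dependent UTD diffraction coefficient.

```python
import numpy as np

def path_length(points):
    """Total length of a polyline source -> edge point -> receiver."""
    pts = np.asarray(points, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def diffracted_contribution(source, edge_point, receiver, is_occluded,
                            speed_of_sound=343.0, diffraction_coeff=0.2):
    """Return (delay_seconds, amplitude) for one edge-diffracted path.

    Following the shadow-region approximation, the path contributes only
    when the direct line of sight is blocked.  `diffraction_coeff` is a
    placeholder standing in for the UTD coefficient derived from the
    actual edge geometry.
    """
    if not is_occluded:
        return None  # direct sound dominates; diffraction ignored here
    r = path_length([source, edge_point, receiver])
    delay = r / speed_of_sound
    amplitude = diffraction_coeff / max(r, 1e-6)  # 1/r spreading loss
    return delay, amplitude
```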


international conference on computer graphics and interactive techniques | 2004

Perceptual audio rendering of complex virtual environments

Nicolas Tsingos; Emmanuel Gallo; George Drettakis

We propose a real-time 3D audio rendering pipeline for complex virtual scenes containing hundreds of moving sound sources. The approach, based on auditory culling and spatial level-of-detail, can handle more than ten times the number of sources commonly available on consumer 3D audio hardware, with minimal decrease in audio quality. The method performs well for both indoor and outdoor environments. It leverages the limited capabilities of audio hardware for many applications, including interactive architectural acoustics simulations and automatic 3D voice management for video games. Our approach dynamically eliminates inaudible sources and groups the remaining audible sources into a budget number of clusters. Each cluster is represented by one impostor sound source, positioned using perceptual criteria. Spatial audio processing is then performed only on the impostor sound sources rather than on every original source, thus greatly reducing the computational cost. A pilot validation study shows that degradation in audio quality, as well as localization impairment, are limited and do not seem to vary significantly with the cluster budget. We conclude that our real-time perceptual audio rendering pipeline can generate spatialized audio for complex auditory environments without introducing disturbing changes in the resulting perceived soundfield.
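As a rough illustration of the culling-and-clustering idea (a sketch under simplifying assumptions, not the paper's pipeline), the snippet below discards sources whose estimated loudness at the listener falls under a threshold and groups the survivors into a fixed budget of clusters, each represented by a loudness-weighted impostor position. K-means stands in here for the paper's perceptual clustering criteria.

```python
import numpy as np
from sklearn.cluster import KMeans  # stand-in for the paper's perceptual clustering

def cluster_sources(positions, loudness, listener, budget=8, audibility_threshold=1e-4):
    """Cull inaudible sources and group the rest into at most `budget` impostors.

    positions: (N, 3) source positions; loudness: (N,) source energies.
    Returns impostor positions as loudness-weighted cluster centroids.
    """
    positions = np.asarray(positions, float)
    loudness = np.asarray(loudness, float)
    dist = np.linalg.norm(positions - listener, axis=1)
    audible = loudness / np.maximum(dist ** 2, 1e-9) > audibility_threshold
    pos, w = positions[audible], loudness[audible]
    if len(pos) == 0:
        return np.empty((0, 3))
    k = min(budget, len(pos))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pos)
    impostors = np.array([
        np.average(pos[labels == c], axis=0, weights=w[labels == c])
        for c in range(k)
    ])
    return impostors
```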


eurographics | 1995

Automatic Reconstruction of Unstructured 3D Data: Combining a Medial Axis and Implicit Surfaces

Eric Bittar; Nicolas Tsingos; Marie-Paule Gascuel

This paper presents a new method that combines a medial axis and implicit surfaces in order to reconstruct a 3D solid from an unstructured set of points scattered on the object's surface. The representation produced is based on iso‐surfaces generated by skeletons, and is a particularly compact way of defining a smooth free‐form solid. The method is based on the minimisation of an energy representing a "distance" between the set of data points and the iso‐surface, resembling previous research [19]. Initialisation, however, is more robust and efficient, since it relies on the computation of the medial axis of the set of points. Instead of subdividing existing skeletons in order to refine the object's surface, a new reconstruction algorithm progressively selects skeleton points from the pre‐computed medial axis using a heuristic principle based on a "local energy" criterion. This drastically speeds up the reconstruction process. Moreover, using the medial axis allows reconstruction of objects with complex topology and geometry, such as objects that have holes and branches or that are composed of several connected components. This process is fully automatic. The method has been successfully applied to both synthetic and real data.
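The greedy skeleton-selection loop could look roughly like the following sketch (hypothetical, assuming a precomputed medial-axis point set, a simple compactly supported field function, and a squared-residual energy in place of the paper's exact formulation): at each step it adds the candidate medial-axis point that most reduces the fitting energy.

```python
import numpy as np

def field(point, skeletons, radius=1.0):
    """Sum of compactly supported field contributions from point skeletons."""
    d = np.linalg.norm(np.asarray(skeletons) - point, axis=1)
    return float((np.clip(1.0 - (d / radius) ** 2, 0.0, None) ** 2).sum())

def energy(data_points, skeletons, iso=0.5):
    """"Distance" between the data points and the iso-surface (squared residual)."""
    return sum((field(p, skeletons) - iso) ** 2 for p in data_points)

def reconstruct(data_points, medial_axis_points, max_skeletons=50, tol=1e-3):
    """Greedily pick skeleton points from the medial axis to minimise the energy."""
    skeletons = []
    candidates = [np.asarray(c, float) for c in medial_axis_points]
    best_e = float("inf")
    while candidates and len(skeletons) < max_skeletons:
        trials = [energy(data_points, skeletons + [c]) for c in candidates]
        i = int(np.argmin(trials))
        if best_e - trials[i] < tol:   # no significant improvement: stop
            break
        best_e = trials[i]
        skeletons.append(candidates.pop(i))
    return skeletons
```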


Computer Graphics Forum | 1996

Adaptive sampling of implicit surfaces for interactive modelling and animation

Mathieu Desbrun; Nicolas Tsingos; Marie-Paule Gascuel

This paper presents a new adaptive sampling method for implicit surfaces that can be used in both interactive modelling and animation. The algorithm samples implicit objects composed of blending primitives and efficiently maintains this sampling over time, even when their topology changes (during fractures and fusions). It provides two complementary modes of immediate visualization: displaying “scales” lying on the surface, or a “primitive‐wise” polygonization. The sampling method efficiently avoids unwanted blending between different parts of an object. Moreover, it can be used for partitioning an implicit surface into local bounding boxes that will accelerate collision detection during animation and ray‐intersections during final rendering.
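A very small sketch of the core idea of keeping samples on a moving implicit surface (hypothetical, not the paper's algorithm): each sample is repeatedly projected back onto the iso-surface by a Newton-like step along the field gradient, so the sampling can follow the surface as the blending primitives move.

```python
import numpy as np

def project_to_isosurface(point, field, grad, iso=0.5, steps=10, eps=1e-6):
    """Pull a sample ("scale") back onto the iso-surface f(x) = iso.

    `field` and `grad` are callables giving the scalar field value and its
    gradient; a few Newton-like steps along the gradient are usually enough
    when the sample is already close to the surface.
    """
    p = np.asarray(point, dtype=float)
    for _ in range(steps):
        f, g = field(p), grad(p)
        g2 = float(np.dot(g, g))
        if g2 < eps:
            break                      # degenerate gradient: give up
        p = p - (f - iso) * g / g2     # Newton step toward the iso-surface
        if abs(field(p) - iso) < eps:
            break
    return p

# Example: keep a sample on a unit sphere expressed as an implicit surface.
sphere = lambda x: float(np.dot(x, x))   # f(x) = |x|^2, iso-value 1
sphere_grad = lambda x: 2.0 * x
print(project_to_isosurface([1.3, 0.2, 0.0], sphere, sphere_grad, iso=1.0))
```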


eurographics symposium on rendering techniques | 2007

Instant sound scattering

Nicolas Tsingos; Carsten Dachsbacher; Sylvain Lefebvre; Matteo Dellepiane

Real-time sound rendering engines often render occlusion and early sound-reflection effects using geometrical techniques such as ray or beam tracing. They can only achieve interactive rendering for environments of low local complexity, resulting in crude effects that can degrade the sense of immersion. However, surface detail or complex dynamic geometry has a strong influence on sound propagation and the resulting auditory perception. This paper focuses on high-quality modeling of first-order sound scattering. Based on a surface-integral formulation and the Kirchhoff approximation, we propose an efficient evaluation of scattering effects, including both diffraction and reflection, that leverages programmable graphics hardware for dense sampling of complex surfaces. We evaluate possible surface-simplification techniques and show that combined normal and displacement maps can be successfully used for audio scattering calculations. We present an auralization framework that can render scattering effects interactively, thus providing a more compelling experience. We demonstrate that, while only considering first-order phenomena, our approach can provide realistic results for a number of practical interactive applications. It can also process highly detailed models containing millions of unorganized triangles in minutes, generating high-quality scattering filters. The resulting simulations compare well with on-site recordings, showing that the Kirchhoff approximation can be used for complex scattering problems.
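The surface-integral evaluation can be imagined as a sum over dense surface samples, as in the hypothetical sketch below (a crude, simplified physical-acoustics quadrature under the Kirchhoff approximation for a rigid surface, not the paper's GPU implementation): each sample facing the source contributes a phase-delayed, cosine-weighted term to the first-order scattered pressure.

```python
import numpy as np

def kirchhoff_scatter(freq, source, receiver, sample_pts, sample_normals,
                      sample_areas, speed_of_sound=343.0):
    """First-order scattered pressure (complex) via a crude Kirchhoff-style quadrature.

    sample_pts / sample_normals / sample_areas describe a dense sampling of the
    scattering surface (e.g. taken from normal/displacement-mapped geometry).
    Shadowing tests and reflection coefficients are omitted for brevity.
    """
    k = 2.0 * np.pi * freq / speed_of_sound          # wavenumber
    p = np.asarray(sample_pts, float)
    n = np.asarray(sample_normals, float)
    a = np.asarray(sample_areas, float)
    to_src = np.asarray(source, float) - p
    to_rcv = np.asarray(receiver, float) - p
    r1 = np.linalg.norm(to_src, axis=1)
    r2 = np.linalg.norm(to_rcv, axis=1)
    # Cosine of the incidence angle; samples facing away from the source are culled.
    cos_inc = np.einsum('ij,ij->i', n, to_src) / np.maximum(r1, 1e-9)
    lit = cos_inc > 0.0
    phase = np.exp(1j * k * (r1 + r2))               # propagation delay source->surface->receiver
    contrib = cos_inc * phase * a / (r1 * r2)
    return (1j * k / (2.0 * np.pi)) * np.sum(contrib[lit])
```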


eurographics | 2012

Acoustic Rendering and Auditory–Visual Cross-Modal Perception and Interaction

Vedad Hulusic; Carlo Harvey; Kurt Debattista; Nicolas Tsingos; Steve Walker; David M. Howard; Alan Chalmers

In recent years research in the three‐dimensional sound generation field has been primarily focussed upon new applications of spatialized sound. In the computer graphics community the use of such techniques is most commonly found being applied to virtual, immersive environments. However, the field is more varied and diverse than this and other research tackles the problem in a more complete, and computationally expensive manner. Furthermore, the simulation of light and sound wave propagation is still unachievable at a physically accurate spatio‐temporal quality in real time. Although the Human Visual System (HVS) and the Human Auditory System (HAS) are exceptionally sophisticated, they also contain certain perceptional and attentional limitations. Researchers, in fields such as psychology, have been investigating these limitations for several years and have come up with findings which may be exploited in other fields. This paper provides a comprehensive overview of the major techniques for generating spatialized sound and, in addition, discusses perceptual and cross‐modal influences to consider. We also describe current limitations and provide an in‐depth look at the emerging topics in the field.


Computer Graphics | 1995

Implicit surfaces for semi-automatic medical organ reconstruction

Nicolas Tsingos; Eric Bittar; Marie-Paule Gascuel

This chapter discusses implicit surfaces for semi-automatic medical organ reconstruction. A new method for reconstruction with implicit surfaces generated by skeletons is presented. Local control of the reconstructed shape, provided by a local field function, enables the definition of local energy terms associated with each skeleton. This leads to a much more efficient skeleton-subdivision process, since it gives a robust criterion for deciding which skeleton should be divided next. Knowledge of the normal vectors at the data points is not needed. The method works as a semi-automatic process: the user can visualize the data, initially position some skeletons with an interactive implicit-surface editor, and further optimize the process by specifying several slightly overlapping reconstruction windows in which surface reconstruction follows a local criterion. If needed, a different reconstruction precision can be defined in each window. The shapes to reconstruct can be of any topology and geometry, and may for instance include holes and branchings. Reconstruction experiments on noisy medical data, in which the scattered points have a nonuniform distribution, are shown.
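One way to picture the per-skeleton subdivision criterion mentioned above is the hypothetical sketch below (not the chapter's formulation): each skeleton's "local energy" is the iso-surface residual of the data points inside its region of influence, and the skeleton with the worst local fit is refined next.

```python
import numpy as np

def local_energies(data_points, skeletons, field, iso=0.5, radius=1.0):
    """Per-skeleton local energy: squared iso-surface residual of the data
    points falling inside each skeleton's region of influence.

    `field(p)` evaluates the global implicit field at point p.
    """
    data = np.asarray(data_points, float)
    residual = np.array([(field(p) - iso) ** 2 for p in data])
    energies = []
    for s in skeletons:
        near = np.linalg.norm(data - np.asarray(s, float), axis=1) < radius
        energies.append(residual[near].sum())
    return np.asarray(energies)

def next_skeleton_to_subdivide(data_points, skeletons, field):
    """Robust subdivision criterion: refine where the local fit is worst."""
    return int(np.argmax(local_energies(data_points, skeletons, field)))
```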


pacific conference on computer graphics and applications | 2008

Reconstructing head models from photographs for individualized 3D‐audio processing

Matteo Dellepiane; Nico Pietroni; Nicolas Tsingos; Manuel Asselot; Roberto Scopigno

Visual fidelity and interactivity are the main goals in Computer Graphics research, but recently audio has also been assuming an important role. Binaural rendering can provide extremely pleasing and realistic three‐dimensional sound, but to achieve the best results it is necessary either to measure or to estimate the individual Head Related Transfer Function (HRTF). This function is closely related to the particular features of the listener's ears and face. Recent sound-scattering simulation techniques can calculate the HRTF starting from an accurate 3D model of a human head. Hence, the use of binaural rendering on a large scale (e.g. video games, entertainment) could depend on the possibility of producing a sufficiently accurate 3D model of a human head from the smallest possible input. In this paper we present a completely automatic system which produces a 3D model of a head starting from simple input data (five photos and some key points indicated by the user). The geometry is generated by extracting information from the images and accordingly deforming a 3D dummy to reproduce the user's head features. The system proves to be fast, automatic, robust and reliable: geometric validation and preliminary assessments show that it can be accurate enough for HRTF calculation.
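The template-deformation step can be imagined as follows (a hypothetical sketch only; the actual system also extracts geometry from the five photographs): displacements of a few key points are interpolated over the whole dummy mesh with a simple radial-basis warp.

```python
import numpy as np

def rbf_warp(dummy_keypoints, target_keypoints, vertices, eps=1e-8):
    """Deform dummy mesh vertices so its key points land on the user's key points.

    Uses a simple radial-basis interpolation (phi(r) = r) of the key-point
    displacements; this is only one possible way to drive the deformation.
    """
    src = np.asarray(dummy_keypoints, float)   # (K, 3) key points on the dummy
    dst = np.asarray(target_keypoints, float)  # (K, 3) key points from the photos
    verts = np.asarray(vertices, float)        # (V, 3) dummy mesh vertices
    # Pairwise distances between key points, and from vertices to key points.
    K = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    W = np.linalg.norm(verts[:, None] - src[None, :], axis=-1)
    # Solve for RBF weights that reproduce the key-point displacements.
    weights = np.linalg.solve(K + eps * np.eye(len(src)), dst - src)
    return verts + W @ weights
```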


Acta Acustica United With Acustica | 2008

Topological Sound Propagation with Reverberation Graphs

Efstathios Stavrakis; Nicolas Tsingos; Paul Calamia

Reverberation graphs are a novel approach to estimating global sound-pressure decay and auralizing the corresponding reverberation effects in interactive virtual environments. We use a 3D model to represent the geometry of the environment explicitly, and we subdivide it into a series of coupled spaces connected by portals. Off-line geometrical-acoustics techniques are used to precompute transport operators, which encode pressure-decay characteristics within each space and between coupling interfaces. At run-time, during an interactive simulation, we traverse the adjacency graph corresponding to the spatial subdivision of the environment. We combine transport operators along different sound-propagation routes to estimate the pressure-decay envelopes from sources to the listener. Our approach compares well with off-line geometrical techniques, but computes reverberation decay envelopes at interactive rates, ranging from 12 to 100 Hz. We propose a scalable artificial reverberator that uses these decay envelopes to auralize reverberation effects, including room coupling. Our complete system can render as many as 30 simultaneous sources in large dynamic virtual environments.
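The run-time combination of precomputed transport operators can be pictured with the hypothetical sketch below (a drastic simplification in which each operator is reduced to a scalar gain and a delay, whereas the paper combines full decay envelopes): a depth-first traversal of the room-adjacency graph accumulates per-route gain and delay from the source's space to the listener's space.

```python
def propagation_routes(graph, source_room, listener_room, max_depth=6):
    """Enumerate routes through coupled spaces and accumulate (delay, gain).

    `graph[room]` lists (neighbour_room, delay_seconds, gain) transport
    operators precomputed for each portal; operators are combined here by
    adding delays and multiplying gains.
    """
    routes = []

    def dfs(room, delay, gain, visited):
        if room == listener_room:
            routes.append((delay, gain))
        if len(visited) >= max_depth:
            return
        for nxt, d, g in graph.get(room, []):
            if nxt not in visited:
                dfs(nxt, delay + d, gain * g, visited | {nxt})

    dfs(source_room, 0.0, 1.0, {source_room})
    return routes

# Tiny example: two rooms coupled through a corridor.
graph = {
    "A": [("corridor", 0.02, 0.5)],
    "corridor": [("A", 0.02, 0.5), ("B", 0.03, 0.4)],
    "B": [("corridor", 0.03, 0.4)],
}
print(propagation_routes(graph, "A", "B"))
```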


international conference on computer graphics and interactive techniques | 1997

A general model for the simulation of room acoustics based on hierarchical radiosity

Nicolas Tsingos; Jean-Dominique Gascuel

We present a new method to compute the impulse response (IR) of a given virtual room based on hierarchical radiosity. Unlike previous work, our approach can treat complex geometries and is listening-position independent. Moreover, complex phenomena such as sound diffraction are also taken into account.
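In spirit, an acoustic radiosity solver exchanges sound energy between surface patches until convergence. The sketch below is hypothetical (flat rather than hierarchical patches, precomputed form factors, and no time dimension, whereas the paper derives full impulse responses) and shows a Jacobi-style gathering iteration.

```python
import numpy as np

def acoustic_radiosity(form_factors, absorption, emission, iterations=50):
    """Iteratively gather sound energy between patches.

    form_factors[i, j]: fraction of energy leaving patch j that reaches patch i.
    absorption[i]: absorption coefficient of patch i.
    emission[i]: energy injected directly by the sound source onto patch i.
    Returns the steady-state outgoing energy per patch.
    """
    F = np.asarray(form_factors, float)
    reflectance = 1.0 - np.asarray(absorption, float)
    emission = np.asarray(emission, float)
    B = emission.copy()
    for _ in range(iterations):
        B = emission + reflectance * (F @ B)   # gather incoming energy, then reflect
    return B
```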

Collaboration


Dive into Nicolas Tsingos's collaborations.

Top Co-Authors

Maria Roussou

University College London
