Publication


Featured research published by Raymond Migneco.


IEEE Transactions on Learning Technologies | 2009

Collaborative Online Activities for Acoustics Education and Psychoacoustic Data Collection

Youngmoo E. Kim; Travis M. Doll; Raymond Migneco

Online collaborative game-based activities may offer a compelling tool for mathematics and science education, particularly for younger students in grades K-12. We have created two prototype activities that allow students to explore aspects of different sound and acoustics concepts: the "cocktail party problem" (sound source identification within mixtures) and the physics of musical instruments. These activities are also inspired by recent work using games to collect labeling data for difficult computational problems from players through a fun and engaging activity. Thus, in addition to their objectives as learning activities, our games facilitate the collection of data on the perception of audio and music, with a range of parameter variation that is difficult to achieve for large subject populations using traditional methods. Our activities have been incorporated into a pilot study with a middle school classroom to demonstrate the potential benefits of this platform.


IEEE International Workshop on Multimedia Signal Processing | 2009

An audio DSP Toolkit for rapid application development in Flash

Travis M. Doll; Raymond Migneco; Jeffrey J. Scott; Youngmoo E. Kim

The Adobe Flash platform has become the de facto standard for developing and deploying media-rich web applications and games. The relative ease of development and cross-platform architecture of Flash enables designers to rapidly prototype graphically rich interactive applications, but comprehensive support for audio and signal processing has been lacking. ActionScript, the primary development language used for Flash, is poorly suited for DSP algorithms. To address the inherent challenges in the integration of interactive audio processing into Flash-based applications, we have developed the DSP Audio Toolkit for Flash, which offers significant performance improvements over algorithms implemented in Java or ActionScript. By developing this toolkit, we hope to open up new possibilities for Flash applications and games, enabling them to utilize real-time audio processing as a means to drive gameplay and improve the experience of the end user.
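The toolkit's actual interface is not described in this abstract. As a rough, hypothetical illustration of the kind of frame-based routine such an audio DSP library typically exposes, the following Python/NumPy sketch computes per-frame magnitude spectra; the function name and parameters are illustrative and are not the Flash toolkit's API.

```python
import numpy as np

def magnitude_spectra(signal, frame_size=1024, hop_size=512):
    """Hann-windowed magnitude spectra of successive frames
    (assumes len(signal) >= frame_size)."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop_size
    spectra = np.empty((n_frames, frame_size // 2 + 1))
    for i in range(n_frames):
        frame = signal[i * hop_size : i * hop_size + frame_size]
        spectra[i] = np.abs(np.fft.rfft(window * frame))
    return spectra

# Example: one second of a 440 Hz sine tone sampled at 44.1 kHz.
t = np.arange(44100) / 44100.0
frames = magnitude_spectra(np.sin(2 * np.pi * 440.0 * t))
```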


International IEEE Consumer Electronics Society's Games Innovations Conference | 2009

An audio processing library for game development in Flash

Raymond Migneco; Travis M. Doll; Jeffrey J. Scott; Christian M. Hahn; Paul J. Diefenbach; Youngmoo E. Kim

In recent years, there has been a sharp rise in the number of games on web-based platforms, which are ideal for rapid game development and easy deployment. In a parallel but unrelated trend, music-centric video games that incorporate well-known popular music directly into the gameplay (e.g., Guitar Hero and Rock Band) have attained widespread popularity on console platforms. The limitations of such web-based platforms as Adobe Flash, however, have made it difficult for developers to utilize complex sound and music interaction within web games. Furthermore, the real-time audio processing and synchronization required in music-centric games demand significant computational power and specialized audio algorithms, which have been difficult or impossible to implement using Flash scripting. Taking advantage of features recently added to the platform, including dynamic audio control and C-compilation for near-native performance, we have developed the Audio processing Library for Flash (ALF), providing developers with a library of common audio processing routines and affording web games a degree of sound interaction previously available only on console or native PC platforms. We also present several audio-intensive games that incorporate ALF to demonstrate its utility. One example performs real-time analysis of songs in a user's music library to drive the gameplay, providing a novel form of game-music interaction.
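The abstract does not detail how ALF's analysis drives gameplay. The sketch below is only a hedged illustration, in Python/NumPy rather than ActionScript, of one way song analysis could be mapped to game events: thresholding a short-time energy envelope against a local running average. All function names here are hypothetical and are not part of the ALF API.

```python
import numpy as np

def energy_envelope(signal, frame_size=1024, hop_size=512):
    """Short-time RMS energy per frame (assumes len(signal) >= frame_size)."""
    n_frames = 1 + (len(signal) - frame_size) // hop_size
    env = np.empty(n_frames)
    for i in range(n_frames):
        frame = signal[i * hop_size : i * hop_size + frame_size]
        env[i] = np.sqrt(np.mean(frame ** 2))
    return env

def gameplay_events(signal, sample_rate, hop_size=512, threshold=1.5):
    """Return event times (in seconds) where the energy envelope jumps above
    a local running average by the given factor."""
    env = energy_envelope(signal, hop_size=hop_size)
    running = np.convolve(env, np.ones(8) / 8.0, mode="same")
    hits = np.where(env > threshold * (running + 1e-9))[0]
    return hits * hop_size / float(sample_rate)
```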


International IEEE Consumer Electronics Society's Games Innovations Conference | 2009

Web-based sound and music games with activities for STEM education

Travis M. Doll; Raymond Migneco; Youngmoo E. Kim

It is widely believed that computer games have great potential as a compelling and effective tool for science, technology, engineering, and mathematics (STEM) education, particularly for younger students in grades K-12. In previous work we created several prototype games that allow students to use sound and acoustics simulations to explore general scientific concepts. These games were developed using Adobe Flash and are accessible through a standard web browser, making them particularly well-suited for education environments where custom software may be difficult to install. Pilot testing our games with local K-12 classrooms has demonstrated potential benefits in terms of captivating student interest, but has also revealed multiple avenues for improvement and refinement. We have recently developed a unified development platform that overcomes prior architectural limitations to enable dynamic sound processing and synthesis within Flash applications to improve responsiveness and create rich interactive experiences. Since they are constructed in ActionScript, the relatively straightforward scripting language of Flash, these games can be easily extended or adapted to target highly specific lessons and concepts. In this paper, we detail several specific lesson plans for mathematics and science education developed using customized versions of our previous games.


IEEE Workshop on Applications of Signal Processing to Audio and Acoustics | 2011

Excitation modeling and synthesis for plucked guitar tones

Raymond Migneco; Youngmoo E. Kim

The analysis and synthesis of plucked-guitar tones via source-filter approximations is a popular and established method for modeling the resonant behavior of the string as well as the driving excitation signal. By varying the source signal, a nearly unlimited number of unique tones can be produced using a given filter model. However, it is unclear how exactly the model excitation signals should be parameterized in order to capture the nuances of a guitarist's articulation from a recorded performance. In this paper, we apply principal components analysis to a corpus of excitation signals derived from plucked-guitar recordings in order to design a codebook that captures the unique characteristics of certain string articulations. The development of an excitation codebook has several applications, including expressive synthesis of guitar tones for virtual music interfaces and insight into the expressive intentions of a performer through audio analysis.
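The paper's exact procedure is not reproduced here. The following minimal Python/NumPy sketch only illustrates the general idea of a PCA codebook: fitting principal components to a matrix of equal-length excitation signals and encoding/decoding a signal with a few coefficients. The function names and the synthetic corpus are assumptions for illustration.

```python
import numpy as np

def build_codebook(excitations, n_components=8):
    """excitations: (n_signals, n_samples) array of equal-length excitation
    signals. Returns the corpus mean and the top principal components."""
    mean = excitations.mean(axis=0)
    centered = excitations - mean
    # Rows of Vt are the principal directions (the codebook basis).
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:n_components]

def encode(excitation, mean, components):
    """Project an excitation onto the codebook (its PCA coefficients)."""
    return components @ (excitation - mean)

def decode(coefficients, mean, components):
    """Reconstruct an excitation from its codebook coefficients."""
    return mean + coefficients @ components

# Example with synthetic data standing in for extracted excitation signals.
corpus = np.random.randn(40, 2048)
mean, basis = build_codebook(corpus, n_components=8)
approx = decode(encode(corpus[0], mean, basis), mean, basis)
```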


IEEE Workshop on Applications of Signal Processing to Audio and Acoustics | 2011

Modeling musical instrument tones as dynamic textures

Erik M. Schmidt; Raymond Migneco; Jeffrey J. Scott; Youngmoo E. Kim

In this work we introduce the concept of modeling musical instrument tones as dynamic textures. Dynamic textures are multidimensional signals that exhibit certain temporally stationary characteristics such that they can be modeled as observations from a linear dynamical system (LDS). Previous work in dynamic textures research has shown that sequences exhibiting such characteristics can in many cases be re-synthesized by an LDS with high accuracy. In this work we demonstrate that short-time Fourier transform (STFT) coefficients of certain instrument tones (e.g., piano, guitar) can be well-modeled under this requirement. We show that these instruments can be re-synthesized using an LDS model with high fidelity, even using low-dimensional models. Looking to ultimately develop models that can be altered to provide control of pitch and articulation, we analyze the connections between musical qualities such as articulation and the parameters of the linear dynamical system model. Finally, we provide preliminary experiments in the alteration of such musical qualities through model re-parameterization.
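For readers unfamiliar with dynamic textures, the sketch below shows the standard closed-form, SVD-based fit of a linear dynamical system x[t+1] = A x[t] + v[t], y[t] = C x[t] + w[t] to a matrix of STFT magnitude frames, together with a noise-free roll-out for re-synthesis. It is a generic illustration of the LDS machinery, in Python/NumPy, not the paper's specific training or evaluation setup.

```python
import numpy as np

def fit_lds(Y, n_states=10):
    """Closed-form LDS fit for a dynamic texture. Y: (n_bins, n_frames)
    matrix of STFT magnitudes, one column per frame."""
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n_states]                         # observation matrix
    X = np.diag(S[:n_states]) @ Vt[:n_states]   # estimated state trajectory
    # Least-squares estimate of the state transition matrix A.
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    return C, A, X

def resynthesize(C, A, x0, n_frames):
    """Roll the (noise-free) LDS forward to regenerate magnitude frames."""
    x = np.array(x0, dtype=float)
    frames = []
    for _ in range(n_frames):
        frames.append(C @ x)
        x = A @ x
    return np.column_stack(frames)

# Example with a synthetic magnitude "spectrogram" standing in for a tone.
Y = np.abs(np.random.randn(513, 200))
C, A, X = fit_lds(Y, n_states=10)
Y_hat = resynthesize(C, A, X[:, 0], n_frames=200)
```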


International Conference on Digital Signal Processing | 2011

Modeling plucked guitar tones via joint source-filter estimation

Raymond Migneco; Youngmoo E. Kim

Physical models for plucked string instruments can produce high-quality tones using a computationally efficient implementation, but the estimation of model parameters through the analysis of audio remains problematic. Moreover, an accurate representation of the expressive aspects of a performance requires a separation of the performer's articulation (source) from the instrument's response (filter). This paper explores a physically inspired signal model for plucked guitar sounds that facilitates the estimation of both string excitation and resonance parameters simultaneously. We present the application of the joint source-filter model in an analysis-synthesis framework to plucked-guitar recordings, and we demonstrate that our system is particularly adept at capturing and replicating the characteristic sounds resulting from various plucking styles. By explicitly modeling string articulations, we believe this system provides insight into capturing the expressive intentions of a performer from the audio signal alone.
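The joint estimation procedure itself is not given in the abstract. For context only, the classic single-delay-loop (Karplus-Strong style) plucked-string model that source-filter guitar approaches build on can be sketched as follows, with the excitation as the source and the decaying two-point averaging loop as the string filter; parameter values and names are illustrative.

```python
import numpy as np

def pluck(excitation, delay, decay=0.996, n_samples=44100):
    """Extended Karplus-Strong loop: y[n] = x[n] + decay * 0.5 * (y[n-N] + y[n-N-1]),
    giving a fundamental near sample_rate / delay."""
    out = np.zeros(n_samples)
    exc = np.zeros(n_samples)
    exc[: min(len(excitation), n_samples)] = excitation[:n_samples]
    for n in range(n_samples):
        feedback = 0.0
        if n > delay:  # both delayed samples exist
            # Decaying two-point average models the string's loop filter.
            feedback = decay * 0.5 * (out[n - delay] + out[n - delay - 1])
        out[n] = exc[n] + feedback
    return out

# Example: a short noise burst (the pluck) driving a string tuned near 220 Hz.
tone = pluck(np.random.uniform(-1.0, 1.0, 200), delay=int(44100 / 220))
```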


International Conference on Digital Signal Processing | 2011

Tone Bender: A collaborative activity for signal processing education & psychoacoustic data collection

Raymond Migneco; Youngmoo E. Kim

Collaborative web-based activities can serve as a cogent method for large-scale psychoacoustic data collection. The advancing maturity of web-based platforms makes it feasible to rapidly develop entertaining games that can collect information on specific tasks, such as audio and image annotation, on a large scale. The accessibility and entertainment value of these games make them attractive to educators as well, since they promote interactive learning methods through gaming. To this end, we have developed Tone Bender, a collaborative activity designed for signal processing education and data collection on the perception of musical instruments. In this paper, we describe how Tone Bender collects perceptual information from players and provide sample lessons for K-12 curricula using the game as an educational platform.


International Society for Music Information Retrieval Conference (ISMIR) | 2010

Music Emotion Recognition: A State of the Art Review

Youngmoo E. Kim; Erik M. Schmidt; Raymond Migneco; Brandon G. Morton; Patrick Richardson; Jeffrey J. Scott; Jacquelin A. Speck; Douglas Turnbull
