
Publication


Featured research published by Brandon Mechtley.


IEEE Transactions on Audio, Speech, and Language Processing | 2010

Segmentation, Indexing, and Retrieval for Environmental and Natural Sounds

Gordon Wichern; Jiachen Xue; Harvey D. Thornburg; Brandon Mechtley; Andreas Spanias

We propose a method for characterizing sound activity in fixed spaces through segmentation, indexing, and retrieval of continuous audio recordings. Regarding segmentation, we present a dynamic Bayesian network (DBN) that jointly infers onsets and end times of the most prominent sound events in the space, along with an extension of the algorithm for covering large spaces with distributed microphone arrays. Each segmented sound event is indexed with a hidden Markov model (HMM) that models the distribution of example-based queries that a user would employ to retrieve the event (or similar events). In order to increase the efficiency of the retrieval search, we recursively apply a modified spectral clustering algorithm to group similar sound events based on the distance between their corresponding HMMs. We then conduct a formal user study to obtain the relevancy decisions necessary for evaluation of our retrieval algorithm on both automatically and manually segmented sound clips. Furthermore, our segmentation and retrieval algorithms are shown to be effective in both quiet indoor and noisy outdoor recording conditions.
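The grouping step in this abstract, recursively clustering sound events by the distance between their HMMs, can be illustrated with a single round of spectral bisection. This is a simplified sketch, not the paper's modified algorithm: it assumes a precomputed symmetric distance matrix (e.g. from symmetrized HMM log-likelihood comparisons), and the Gaussian-kernel width `sigma` is an illustrative parameter.

```python
import math

def spectral_bisect(dist, sigma=1.0, iters=500):
    """Split items into two groups given a symmetric distance matrix.

    A minimal sketch of one round of spectral clustering; the paper's
    "modified spectral clustering" is applied recursively and differs
    in detail. `dist` is assumed precomputed (e.g. between HMMs).
    """
    n = len(dist)
    # Gaussian-kernel affinity: small distance -> strong edge.
    A = [[0.0 if i == j else math.exp(-dist[i][j] ** 2 / (2 * sigma ** 2))
          for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in A]
    # Unnormalized graph Laplacian L = D - A.
    L = [[(deg[i] if i == j else 0.0) - A[i][j] for j in range(n)]
         for i in range(n)]
    # Shift so the Fiedler vector (2nd-smallest eigenvector of L) becomes
    # dominant: eigenvalues of M = c*I - L are c - lambda, and the
    # Gershgorin bound gives lambda(L) <= 2 * max(deg).
    c = 2.0 * max(deg)
    M = [[(c if i == j else 0.0) - L[i][j] for j in range(n)]
         for i in range(n)]
    v = [math.sin(i + 1.0) for i in range(n)]  # deterministic start vector
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]  # project out the constant eigenvector
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    # The sign pattern of the Fiedler vector gives the 2-way split.
    return [0 if x < 0 else 1 for x in v]
```

Each call bisects the set; applying it recursively within each group reproduces the hierarchical grouping the abstract describes.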


Advances in Human-Computer Interaction | 2008

Embodiment, Multimodality, and Composition: Convergent Themes across HCI and Education for Mixed-Reality Learning Environments

David Birchfield; Harvey D. Thornburg; M. Colleen Megowan-Romanowicz; Sarah Hatton; Brandon Mechtley; Igor Dolgov; Winslow Burleson

We present concurrent theoretical work from HCI and Education that reveals a convergence of trends focused on the importance of three themes: embodiment, multimodality, and composition. We argue that there is great potential for truly transformative work that aligns HCI and Education research, and posit that there is an important opportunity to advance this effort through the full integration of the three themes into a theoretical and technological framework for learning. We present our own work in this regard, introducing the Situated Multimedia Arts Learning Lab (SMALLab). SMALLab is a mixed-reality environment where students collaborate and interact with sonic and visual media through full-body, 3D movements in an open physical space. SMALLab emphasizes human-to-human interaction within a multimodal, computational context. We present a recent case study that documents the development of a new SMALLab learning scenario, a collaborative student participation framework, a student-centered curriculum, and a three-day teaching experiment for seventy-two earth science students. Participating students demonstrated significant learning gains as a result of the treatment. We conclude that our theoretical and technological framework can be broadly applied in the realization of mixed reality, student-centered learning environments.


ACM Multimedia | 2008

Mixed-reality learning in the art museum context

David Birchfield; Brandon Mechtley; Sarah Hatton; Harvey D. Thornburg

We describe the realization of two interactive, mixed-reality installations arising from a partnership of K-12, university, and museum participants. Our goal was to apply emerging technologies to produce an innovative, hands-on arts learning experience within a conventional art museum. Suspended Animation, a Reflection on Calder is a mixed-reality installation created in response to a sculpture by Alexander Calder. Another Rack for Peto was created in response to a painting by John Frederick Peto. Both installations express formal aspects of the original artworks, and allow visitors to explore specific conceptual themes through their interactions. The project culminated in a six-month exhibition where the original artworks were presented alongside these new installations. We present data indicating that the installations were well received by an audience of 25,000 visitors.


Content-Based Multimedia Indexing (CBMI) | 2007

Robust Multi-Features Segmentation and Indexing for Natural Sound Environments

Gordon Wichern; Harvey D. Thornburg; Brandon Mechtley; Alex Fink; Kai Tu; Andreas Spanias

Creating an audio database from continuous long-term recordings allows sounds to be linked not only by the time and place in which they were recorded, but also to other sounds with similar acoustic characteristics. Of paramount importance in this application is the accurate segmentation of sound events, enabling realistic navigation of these recordings. We first propose a novel feature set of specific relevance to environmental sounds, and then develop a Bayesian framework for sound segmentation, which fuses dynamics across multiple features. This probabilistic model possesses the ability to account for non-instantaneous sound onsets and absent or delayed responses among individual features, providing flexibility in defining exactly what constitutes a sound event. Example recordings demonstrate the diversity of our feature set, and the utility of our probabilistic segmentation model in extracting sound events from both indoor and outdoor environments.
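As a much-simplified stand-in for the Bayesian model sketched in this abstract, the idea of fusing per-feature onset evidence while tolerating delayed or absent responses can be illustrated as follows. Everything here is an assumption for illustration, not the paper's formulation: the per-feature onset probabilities, the delay window `tol`, and the naive conditional-independence fusion.

```python
def fuse_onset_probabilities(feature_probs, tol=1):
    """Fuse per-feature onset probabilities into one curve.

    feature_probs: list of equal-length lists, each giving P(onset)
    over time for one feature. A feature may respond a few frames
    late, so each feature contributes its best probability within
    [t, t + tol]. Fusion assumes conditional independence (naive
    Bayes with a uniform prior), far simpler than the paper's model.
    """
    n = len(feature_probs[0])
    fused = []
    for t in range(n):
        odds = 1.0
        for probs in feature_probs:
            # Allow delayed responses: best evidence in a short window.
            p = max(probs[t:t + tol + 1])
            p = min(max(p, 1e-6), 1 - 1e-6)  # keep odds finite
            odds *= p / (1.0 - p)
        fused.append(odds / (1.0 + odds))
    return fused
```

A feature whose response arrives one frame late still reinforces the true onset time, while a single quiet feature merely weakens, rather than vetoes, the fused probability.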


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2010

Combining semantic, social, and acoustic similarity for retrieval of environmental sounds

Brandon Mechtley; Gordon Wichern; Harvey D. Thornburg; Andreas Spanias

Recent work in audio information retrieval has demonstrated the effectiveness of combining semantic information, such as descriptive tags, with acoustic content. However, these methods largely ignore the possibility of tag queries that do not yet exist in the database, as well as the presence of semantically similar terms. In this work, we propose a network structure integrating similarity between semantic tags, content-based similarity between environmental audio recordings, and the collective sound descriptions provided by a user community. We then demonstrate the effectiveness of our approach by comparing the use of existing similarity measures for incorporating new vocabulary into an audio annotation and retrieval system.


EURASIP Journal on Audio, Speech, and Music Processing | 2010

An ontological framework for retrieving environmental sounds using semantics and acoustic content

Gordon Wichern; Brandon Mechtley; Alex Fink; Harvey D. Thornburg; Andreas Spanias

Organizing a database of user-contributed environmental sound recordings allows sound files to be linked not only by the semantic tags and labels applied to them, but also to other sounds with similar acoustic characteristics. Of paramount importance in navigating these databases are the problems of retrieving similar sounds using text- or sound-based queries, and automatically annotating unlabeled sounds. We propose an integrated system, which can be used for text-based retrieval of unlabeled audio, content-based query-by-example, and automatic annotation of unlabeled sound files. To this end, we introduce an ontological framework where sounds are connected to each other based on the similarity between acoustic features specifically adapted to environmental sounds, while semantic tags and sounds are connected through link weights that are optimized based on user-provided tags. Furthermore, tags are linked to each other through a measure of semantic similarity, which allows for efficient incorporation of out-of-vocabulary tags, that is, tags that do not yet exist in the database. Results on two freely available databases of environmental sounds contributed and labeled by nonexpert users demonstrate effective recall, precision, and average precision scores for both the text-based retrieval and annotation tasks.
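The ontological framework described here (sounds linked by acoustic similarity, tags linked to sounds by optimized weights, and tags linked to each other by semantic similarity) invites a graph view of retrieval, in which an out-of-vocabulary query reaches sounds through in-vocabulary tags. The sketch below is an invented toy, not the authors' system: similarities in (0, 1] are converted to edge costs by -log, so a shortest path corresponds to the highest product of similarities along the path, and all node names and similarity values are made up.

```python
import heapq
import math

def add_edge(graph, u, v, similarity):
    """Undirected edge with cost -log(similarity), similarity in (0, 1]."""
    w = -math.log(similarity)
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

def shortest_distances(graph, source):
    """Plain Dijkstra over the similarity graph."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy graph: tag-tag (semantic), tag-sound (label links), sound-sound (acoustic).
graph = {}
add_edge(graph, "ocean", "waves", 0.8)     # semantic similarity
add_edge(graph, "ocean", "birdsong", 0.1)
add_edge(graph, "ocean", "sound1", 0.8)    # optimized label link weights
add_edge(graph, "waves", "sound1", 0.9)
add_edge(graph, "birdsong", "sound2", 0.9)
add_edge(graph, "sound1", "sound2", 0.2)   # acoustic similarity

# Out-of-vocabulary query "sea": attach it to known tags by semantic
# similarity, then rank sounds by shortest-path distance.
add_edge(graph, "sea", "ocean", 0.9)
add_edge(graph, "sea", "waves", 0.7)
dist = shortest_distances(graph, "sea")
ranking = sorted(["sound1", "sound2"], key=dist.get)
```

The ISMIR 2012 entry below, "Shortest Path Techniques for Annotation and Retrieval of Environmental Sounds", is the natural follow-up reading for this graph-based view.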


International Conference on Virtual, Augmented and Mixed Reality | 2018

Enactive Steering of an Experiential Model of the Atmosphere

Brandon Mechtley; Christopher Roberts; Julian Stein; Benjamin Nandin; Xin Wei Sha

We present a stream of research on Experiential Complex Systems, which aims to incorporate responsive, experiential media systems, i.e., interactive, multimodal media environments capable of responding to sensed activity at perceptual rates, into the toolbox of computational science practitioners. Drawing on enactivist, embodied approaches to design, we suggest that these responsive, experiential media systems, driven by models of complex system dynamics, can provide an experiential, enactive mode of scientific computing: perceptually instantaneous, seamless iterations of hypothesis generation and immersive gestural shaping of dense simulations, used together with existing high-performance computing implementations and analytical tools. As a first study of such a system, we present EMA, an Experiential Model of the Atmosphere, a responsive media environment that uses immersive projection, spatialized audio, and infrared-filtered optical sensing to allow participants to interactively steer a computational model of cloud physics, exploring the necessary conditions for different atmospheric processes and phenomena through the movement and presence of their bodies and objects in the lab space.


Archive | 2010

Re-sonification of Geographic Sound Activity Using Acoustic, Semantic, and Social Information

Alex Fink; Brandon Mechtley; Gordon Wichern; Jinru Liu; Harvey D. Thornburg; Andreas Spanias; Grisha Coleman


International Society for Music Information Retrieval Conference (ISMIR) | 2012

Shortest Path Techniques for Annotation and Retrieval of Environmental Sounds

Brandon Mechtley; Andreas Spanias; Perry R. Cook


ACM Multimedia | 2017

Rich State Transitions in a Media Choreography Framework Using an Idealized Model of Cloud Dynamics

Brandon Mechtley; Julian Stein; Christopher Roberts; Sha Xin Wei

Collaboration

Dive into Brandon Mechtley's collaborations with his top co-authors.

Gordon Wichern (Arizona State University)
Alex Fink (Arizona State University)
Sarah Hatton (Arizona State University)
Igor Dolgov (New Mexico State University)
Julian Stein (Arizona State University)