Publications


Featured research published by Steven R. Ness.


ACM Multimedia | 2009

Improving automatic music tag annotation using stacked generalization of probabilistic SVM outputs

Steven R. Ness; Anthony Theocharis; George Tzanetakis; Luis Gustavo Martins

Music listeners frequently use words to describe music. Personalized music recommendation systems such as Last.fm and Pandora rely on manual annotations (tags) as a mechanism for querying and navigating large music collections. A well-known issue in such recommendation systems is the cold-start problem: new songs/tracks cannot be recommended until they have been manually annotated. Automatic tag annotation based on content analysis is a potential solution to this problem and has recently been gaining attention. We describe how stacked generalization can be used to improve the performance of a state-of-the-art automatic tag annotation system for music based on audio content analysis, and report results on two publicly available datasets.
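
As a rough illustration of the stacking idea described above, the sketch below trains one probabilistic SVM per tag and then feeds the full vector of first-stage tag probabilities into a second stage of per-tag SVMs, so that correlations between tags can be exploited. It is only a sketch using scikit-learn and synthetic data, not the authors' system; the feature matrix, tag matrix, and fold count are placeholders.

```python
# Sketch of stacked generalization over per-tag probabilistic SVMs
# (synthetic stand-in data; not the authors' system).
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))          # placeholder audio features per track
Y = rng.integers(0, 2, size=(200, 5))   # placeholder binary tag matrix (5 tags)

# Stage 1: one probabilistic SVM per tag; out-of-fold probabilities avoid
# training the second stage on predictions the first stage has memorized.
stage1 = np.column_stack([
    cross_val_predict(SVC(probability=True), X, Y[:, t],
                      cv=5, method="predict_proba")[:, 1]
    for t in range(Y.shape[1])
])

# Stage 2: each tag's final SVM sees all first-stage tag probabilities,
# letting correlated tags reinforce or correct each other.
stage2 = [SVC(probability=True).fit(stage1, Y[:, t]) for t in range(Y.shape[1])]
refined = np.column_stack([m.predict_proba(stage1)[:, 1] for m in stage2])
print(refined.shape)  # (200, 5) refined tag affinities
```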


Pervasive Technologies Related to Assistive Environments | 2009

Assistive music browsing using self-organizing maps

George Tzanetakis; Manjinder Singh Benning; Steven R. Ness; Darren Minifie; N. J. Livingston

Music listening is an important activity for many people. Advances in technology have made possible the creation of music collections with thousands of songs in portable music players. Navigating these large music collections is challenging, especially for users with vision and/or motion disabilities. In this paper we describe our current efforts to build effective music browsing interfaces for people with disabilities. The foundation of our approach is the automatic extraction of features for describing musical content and the use of self-organizing maps to create two-dimensional representations of music collections. The ultimate goal is effective browsing without using any meta-data. We also describe different control interfaces to the system: a regular desktop application, an iPhone implementation, an eye tracker, and a smart room interface based on Wii-mote tracking.
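
The sketch below shows the core self-organizing-map idea behind such two-dimensional music maps: feature vectors are mapped onto a grid so that similar songs land on nearby cells. It is a toy NumPy implementation with random vectors standing in for extracted audio features, not the authors' browsing system; the grid size, learning rate, and neighbourhood width are arbitrary choices.

```python
# Toy self-organizing map in plain NumPy: maps high-dimensional audio feature
# vectors onto a 2-D grid so that similar songs occupy nearby cells.
import numpy as np

def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    # Grid coordinates, used by the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the cell whose weight vector is closest to x.
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
        frac = t / iters
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 1e-3
        # Gaussian neighbourhood around the BMU, shrinking over time.
        d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        influence = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        weights += lr * influence * (x - weights)
    return weights

def place(weights, x):
    """Grid cell where a song with feature vector x would be displayed."""
    return np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), weights.shape[:2])

features = np.random.default_rng(1).normal(size=(300, 30))  # placeholder features
som = train_som(features)
print(place(som, features[0]))
```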


Multimedia Tools and Applications | 2010

Computer-assisted cantillation and chant research using content-aware web visualization tools

Steven R. Ness; Dániel Péter Biró; George Tzanetakis

Chant and cantillation research is particularly interesting as it explores the transition from oral to written transmission of music. The goal of this work is to create web-based computational tools that can assist the study of how diverse recitation traditions, having their origin in primarily non-notated melodies, later became codified. One of the authors is a musicologist and music theorist who has guided the system design and development by providing manual annotations and participating in the design process. We describe novel content-based visualization and analysis algorithms that can be used for problem-seeking exploration of audio recordings of chant and recitations.
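
Pitch contours are one natural content-based representation for this kind of recitation study. The sketch below extracts a fundamental-frequency contour with librosa's pYIN tracker as one plausible building block; it is not the tools described in the paper, and the input file name is a placeholder.

```python
# Sketch: extract a pitch (F0) contour from a recording as (time, Hz) pairs,
# the kind of content-based representation a chant-study tool could plot.
# "recitation.wav" is a hypothetical file; this is not the authors' pipeline.
import numpy as np
import librosa

y, sr = librosa.load("recitation.wav", sr=None, mono=True)  # placeholder path
f0, voiced_flag, voiced_prob = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),
    fmax=librosa.note_to_hz("C6"),
    sr=sr,
)
times = librosa.times_like(f0, sr=sr)

# Keep only frames judged voiced; the resulting contour could be aligned
# against manual annotations in a web front end.
contour = np.column_stack([times[voiced_flag], f0[voiced_flag]])
print(contour[:5])
```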


Proceedings of the 2nd ACM Workshop on Multimedia Semantics | 2008

Chants and Orcas: semi-automatic tools for audio annotation and analysis in niche domains

Steven R. Ness; Matthew Wright; L. Gustavo Martins; George Tzanetakis

The recent explosion of web-based collaborative applications in business and social media sites has demonstrated the power of collaborative internet-scale software. This includes the ability to access huge datasets, the ability to quickly update software, and the ability to let people around the world collaborate seamlessly. Multimedia learning techniques have the potential to make unstructured multimedia data accessible, reusable, searchable, and manageable. We present two different web-based collaborative projects: Cantillion and the Orchive. Cantillion enables ethnomusicology scholars to listen to and view data relating to chants from a variety of traditions, letting them view and interact with various pitch contour representations of the chant. The Orchive is a project to digitize over 20,000 hours of Orcinus orca (killer whale) vocalizations, recorded over a period of approximately 35 years, and provide tools to assist their study. The developed tools utilize ideas and techniques similar to those used in general multimedia domains such as sports video or news. However, their niche nature has presented us with special challenges as well as opportunities. Unlike in more traditional domains, where there are clearly defined objectives, one of the biggest challenges has been supporting researchers in formulating questions and problems related to the data even when there is no clearly defined objective.


Technical Symposium on Computer Science Education | 2014

Taking a walk on the wild side: teaching cloud computing on distributed research testbeds

Yanyan Zhuang; Chris Matthews; Stephen Tredger; Steven R. Ness; Jesse Short-Gershman; Li Ji; Niko Rebenich; Andrew French; Josh Erickson; Kyliah Clarkson; Yvonne Coady; Rick McGeer

Distributed platforms are now a de facto standard in modern software and application development. Although the ACM/IEEE Curriculum 2013 introduces Parallel and Distributed Computing as a first-class knowledge area for the first time, the right level of abstraction at which to teach these concepts is still an important question that needs to be explored. This work presents our findings in teaching cloud computing by exposing upper-level students to testbeds in use by the distributed systems research community. The possibility of giving students practical and relevant experience was explored in the context of new course assignment objectives. Furthermore, students were able to contribute significantly to a pilot class project involving medium-scale computation on satellite data. However, the software engineering challenges in these environments proved to be daunting. In particular, these challenges were exacerbated by a lack of debugging support relative to the environments students were more familiar with, requiring development practices that outstripped typical course experiences. Our proposed set of experiments and project provide a basis for evaluating the trade-offs of teaching cloud and distributed systems on the wild side. We hope that these findings provide insight into some of the possibilities to consider when preparing the next generation of computer scientists to engage with software practices and paradigms that are already fundamental in today's highly distributed systems.


Pacific Rim Conference on Communications, Computers and Signal Processing | 2011

Automatic event detection for long-term monitoring of hydrophone data

Farook Sattar; Peter F. Driessen; George Tzanetakis; Steven R. Ness; Wyatt Page

In this paper, we propose an efficient method for long-term monitoring of a wide variety of marine mammals and human-related activities using hydrophone data. The proposed method combines a two-stage denoising process with a new event detection function that estimates temporal predictability. The detection function utilizes long-term and short-term predictions in order to detect various acoustic events from the background noise. The first stage of the denoising process uses temporal decomposition via Empirical Mode Decomposition to improve the correct detection rate, while the second stage uses Wavelet Packet spectral decomposition to reduce the false detection rate. Applied to event detection in NEPTUNE hydrophone recordings, the method demonstrates an accuracy of 95% and an F-measure of 94%.
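
The sketch below illustrates the general flavour of predictability-based event detection: a short-term and a long-term estimate of the signal envelope are compared, and frames where the local level clearly exceeds the long-term background are flagged. It deliberately omits the paper's EMD and wavelet-packet denoising stages and does not reproduce its exact detection function; all window lengths and the threshold are arbitrary.

```python
# Sketch of event detection by contrasting short-term and long-term
# predictions of a frame-level energy envelope (not the paper's method).
import numpy as np

def moving_average(x, win):
    kernel = np.ones(win) / win
    return np.convolve(x, kernel, mode="same")

def detect_events(signal, sr, frame=1024, short_s=0.1, long_s=2.0, thresh=2.0):
    # Frame-level RMS envelope of the input signal.
    n_frames = len(signal) // frame
    env = np.array([np.sqrt(np.mean(signal[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n_frames)])
    frames_per_sec = sr / frame
    short_pred = moving_average(env, max(1, int(short_s * frames_per_sec)))
    long_pred = moving_average(env, max(1, int(long_s * frames_per_sec)))
    # Detection function: how much the local level exceeds what the
    # long-term background predicts.
    detection = short_pred / (long_pred + 1e-12)
    return np.where(detection > thresh)[0], detection

# Synthetic hydrophone-like test: noise with a brief loud "call" inserted.
rng = np.random.default_rng(0)
sr = 8000
signal = 0.05 * rng.normal(size=sr * 10)
signal[sr * 4:sr * 4 + sr // 2] += 0.5 * np.sin(2 * np.pi * 800 * np.arange(sr // 2) / sr)
events, _ = detect_events(signal, sr)
print("event frames:", events)
```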


IEEE Global Conference on Signal and Information Processing | 2014

Human and machine annotation in the Orchive, a large scale bioacoustic archive

Steven R. Ness; George Tzanetakis

Advances in computer technology have enabled the collection, digitization, and automated processing of huge archives of bioacoustic sound. Many of the tools previously used in bioacoustics research work well with small to medium-sized audio collections, but are challenged when processing large collections ranging from tens of terabytes to petabyte size. The Orchive is a system that assists researchers in listening to, viewing, annotating, and running advanced audio feature extraction and machine learning algorithms on large bioacoustic archives. Annotation is one of the biggest challenges in our work. In this paper, we describe our efforts to involve experts as well as citizen scientists in the process of annotating recordings. The Orchive contains over 23,000 hours of orca vocalizations collected over the course of 30 years, and represents one of the largest continuous collections of bioacoustic recordings in the world. Manual annotation is practically impossible, and we therefore investigate the effectiveness of a semi-automatic approach for extracting information from these recordings and show various experimental results. Finally, we have been able to apply our automatic analysis to a large portion of the archive and describe the computational resources required. To the best of our knowledge this is the largest archive of bioacoustic data that has ever been automatically analyzed.


Multimedia Signal Processing | 2011

Strategies for orca call retrieval to support collaborative annotation of a large archive

Steven R. Ness; Alexander Lerch; George Tzanetakis

The Orchive is a large audio archive of hydrophone recordings of killer whale (Orcinus orca) vocalizations. Researchers and users from around the world can interact with the archive using a collaborative web-based annotation, visualization, and retrieval interface. In addition, a mobile client has been written to crowdsource orca call annotation. In this paper we describe and compare different strategies for the retrieval of discrete orca calls. The results of the automatic analysis are also integrated into the user interface, facilitating annotation as well as leveraging the existing annotations for supervised learning. The best strategy achieves a mean average precision of 0.77, with the first retrieved item being relevant 95% of the time, on a dataset of 185 calls belonging to 4 types.
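
For context on the reported figures, the sketch below shows how mean average precision and the precision of the first retrieved item can be computed from ranked retrieval results. The relevance lists are made up for illustration; they are not the paper's data, and the retrieval strategies themselves are not reproduced here.

```python
# Sketch of retrieval evaluation: mean average precision (MAP) and
# precision at rank 1, computed from 0/1 relevance judgements per query.
import numpy as np

def average_precision(relevance):
    """Average precision for one ranked list of 0/1 relevance judgements."""
    relevance = np.asarray(relevance, dtype=float)
    if relevance.sum() == 0:
        return 0.0
    hits = np.cumsum(relevance)
    precisions = hits / (np.arange(len(relevance)) + 1)
    return float((precisions * relevance).sum() / relevance.sum())

def evaluate(ranked_relevance_lists):
    aps = [average_precision(r) for r in ranked_relevance_lists]
    p_at_1 = float(np.mean([r[0] for r in ranked_relevance_lists]))
    return float(np.mean(aps)), p_at_1

# Two hypothetical queries: 1 = retrieved call matches the query's call type.
queries = [
    [1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0],
]
map_score, p1 = evaluate(queries)
print(f"MAP={map_score:.2f}, P@1={p1:.2f}")
```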


Content-Based Multimedia Indexing | 2009

Content-Aware Web Browsing and Visualization Tools for Cantillation and Chant Research

Steven R. Ness; George Tzanetakis; Dániel Péter Biró

Chant and cantillation research is particularly interesting as it explores the transition from oral to written transmission of music. The goal of this work is to create web-based computational tools that can assist the study of how diverse recitation traditions, having their origin in primarily non-notated melodies, later became codified. One of the authors is a musicologist and music theorist who has guided the system design and development by providing manual annotations and participating in the design process. We describe novel content-based visualization and analysis algorithms that can be used for problem-seeking exploration of audio recordings of chant and recitations.


International Computer Music Conference | 2009

Audioscapes: Exploring Surface Interfaces for Music Exploration

Steven R. Ness; George Tzanetakis

Collaboration


Dive into Steven R. Ness's collaborations.

Top Co-Authors

Paul Reimer

University of Victoria

Justin Love

University of Victoria

P. van Kranenburg

Royal Netherlands Academy of Arts and Sciences
