Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Jonathan Berger is active.

Publication


Featured research published by Jonathan Berger.


Neuron | 2007

Neural Dynamics of Event Segmentation in Music: Converging Evidence for Dissociable Ventral and Dorsal Networks

Devarajan Sridharan; Daniel J. Levitin; Chris Chafe; Jonathan Berger; Vinod Menon

The real world presents our sensory systems with a continuous stream of undifferentiated information. Segmentation of this stream at event boundaries is necessary for object identification and feature extraction. Here, we investigate the neural dynamics of event segmentation in entire musical symphonies under natural listening conditions. We isolated time-dependent sequences of brain responses in a 10 s window surrounding transitions between movements of symphonic works. A strikingly right-lateralized network of brain regions showed peak response during the movement transitions when, paradoxically, there was no physical stimulus. Model-dependent and model-free analysis techniques provided converging evidence for activity in two distinct functional networks at the movement transition: a ventral fronto-temporal network associated with detecting salient events, followed in time by a dorsal fronto-parietal network associated with maintaining attention and updating working memory. Our study provides direct experimental evidence for dissociable and causally linked ventral and dorsal networks during event segmentation of ecologically valid auditory stimuli.


Computer Music Journal | 2001

SICIB: An Interactive Music Composition System Using Body Movements

Roberto Morales-Manzanares; Eduardo F. Morales; Roger F. Dannenberg; Jonathan Berger

Traditionally, music and dance have been complementary arts. However, their integration has not always been entirely satisfactory. In general, a dancer must conform movements to a predefined piece of music, leaving very little room for improvisational creativity. In this article, a system called SICIB—capable of music composition, improvisation, and performance using body movements—is described. SICIB uses data from sensors attached to dancers and “if-then” rules to couple choreographic gestures with music. The article describes the choreographic elements considered by the system (such as position, velocity, acceleration, curvature, and torsion of movements, jumps, etc.), as well as the musical elements that can be affected by them (e.g., intensity, tone, music sequences, etc.) through two different music composition systems: Escamol and Aura. The choreographic information obtained from the sensors, the musical capabilities of the music composition systems, and a simple rule-based coupling mechanism offer good opportunities for interaction between choreographers and composers. The architecture of SICIB, which allows real-time performance, is also described. SICIB has been used by three different composers and a choreographer with very encouraging results. In particular, the dancer has been involved in music dialogues with live performance musicians. Our experiences with the development of SICIB and our own insights into the relationship that new technologies offer to choreographers and dancers are also discussed.
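
The rule-based coupling described above can be pictured with a minimal sketch; the gesture features, thresholds, and musical mappings here are hypothetical and are not SICIB's actual rules:

```python
# Hypothetical illustration of an "if-then" coupling rule between
# choreographic features and musical parameters (not SICIB's actual rules).
def couple(gesture):
    """Map sensor-derived gesture features to musical control values."""
    events = {}
    # faster movement -> louder dynamics (MIDI-style velocity 0-127)
    events["intensity"] = min(127, int(gesture["speed"] * 20))
    # a jump triggers a new musical sequence
    if gesture.get("jump"):
        events["trigger_sequence"] = "ascending_figure"
    # higher hand position -> higher pitch register
    events["pitch_register"] = "high" if gesture["height"] > 1.5 else "low"
    return events

print(couple({"speed": 3.2, "height": 1.8, "jump": True}))
```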


IEEE Transactions on Audio, Speech, and Language Processing | 2007

Melody Extraction and Musical Onset Detection via Probabilistic Models of Framewise STFT Peak Data

Harvey D. Thornburg; Jonathan Berger

We propose a probabilistic method for the joint segmentation and melody extraction of musical audio signals that arise from a monophonic score. The method operates on framewise short-time Fourier transform (STFT) peaks, enabling a computationally efficient inference of note onset, duration, and pitch attributes while retaining sufficient information for pitch determination and spectral change detection. The system explicitly models note events in terms of transient and steady-state regions as well as possible gaps between note events. In this way, the system readily distinguishes abrupt spectral changes associated with musical onsets from other abrupt change events. Additionally, the method may incorporate melodic context by modeling note-to-note dependences. The method is successfully applied to a variety of piano and violin recordings containing reverberation, effective polyphony due to legato playing style, expressive pitch variations, and background voices. While the method does not provide a sample-accurate segmentation, it facilitates the latter in subsequent processing by isolating musical onsets to frame neighborhoods and identifying possible pitch content before and after the true onset sample location.
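
The framewise STFT peak representation that the method operates on can be illustrated with a short sketch; the window size, hop, and peak count below are arbitrary illustrative choices, not the authors' settings:

```python
import numpy as np
from scipy.signal import stft, find_peaks

def framewise_stft_peaks(x, fs, n_fft=2048, hop=512, max_peaks=10):
    """Return, per STFT frame, the strongest spectral peaks as (frequency, magnitude) pairs."""
    f, t, Z = stft(x, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    mag = np.abs(Z)
    peaks_per_frame = []
    for frame in mag.T:                      # iterate over time frames
        idx, _ = find_peaks(frame)           # local maxima in the magnitude spectrum
        top = idx[np.argsort(frame[idx])[::-1][:max_peaks]]  # keep the largest peaks
        peaks_per_frame.append(list(zip(f[top], frame[top])))
    return t, peaks_per_frame
```

Keeping only a handful of peaks per frame is what makes subsequent probabilistic inference over onsets and pitch computationally cheap while preserving the spectral cues needed for onset and pitch decisions.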


Journal of Applied Clinical Medical Physics | 2010

Commissioning and quality assurance for a respiratory training system based on audiovisual biofeedback

G Cui; Siddharth Gopalan; T Yamamoto; Jonathan Berger; Peter G. Maxim; P Keall

A respiratory training system based on audiovisual biofeedback has been implemented at our institution. It is intended to improve patients' respiratory regularity during four-dimensional (4D) computed tomography (CT) image acquisition. The purpose is to help eliminate the artifacts in 4D-CT images caused by irregular breathing, as well as improve delivery efficiency during treatment, where respiratory irregularity is a concern. This article describes the commissioning and quality assurance (QA) procedures developed for this peripheral respiratory training system, the Stanford Respiratory Training (START) system. Using the Varian real-time position management system for the respiratory signal input, the START software was commissioned and able to acquire sample respiratory traces, create a patient-specific guiding waveform, and generate audiovisual signals for improving respiratory regularity. Routine QA tests that include hardware maintenance, visual guiding-waveform creation, auditory sound synchronization, and feedback assessment have been developed for the START system. The QA procedures developed here for the START system could be easily adapted to other respiratory training systems based on audiovisual biofeedback. PACS number: 87.56.Fc
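
One simple way to form a patient-specific guiding waveform like the one described is to average breathing cycles from a sample respiratory trace; the sketch below illustrates that idea only and is not the START implementation:

```python
import numpy as np
from scipy.signal import find_peaks

def guiding_waveform(trace, fs, n_points=100):
    """Average peak-to-peak breathing cycles of a sample trace into one representative cycle.

    Assumes the trace contains several full breathing cycles, each longer than ~2 s.
    """
    peaks, _ = find_peaks(trace, distance=int(2 * fs))  # one peak per breathing cycle
    cycles = []
    for a, b in zip(peaks[:-1], peaks[1:]):
        cycle = trace[a:b]
        # resample every cycle onto a common length before averaging
        resampled = np.interp(np.linspace(0, len(cycle) - 1, n_points),
                              np.arange(len(cycle)), cycle)
        cycles.append(resampled)
    return np.mean(cycles, axis=0)
```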


Journal of New Music Research | 2010

Analysis of Pitch Perception of Inharmonicity in Pipa Strings Using Response Surface Methodology

Shin Hui Lin Chin; Jonathan Berger

The timbre of the pipa, one of the principal plucked string instruments of traditional Chinese music, is characterized by richly nuanced inharmonicity resulting from peculiarities of regionally distinct string composition and construction. This study investigates the effect of this feature of timbre on pitch perception. Beyond the specific issue of the pipa, we propose a response-surface-based experimental design and modelling approach as a general framework for determining the effect of inharmonicity on pitch perception applicable to any stringed instrument.
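
Response surface methodology in this setting amounts to fitting a low-order polynomial model of a perceptual response over the design factors; the sketch below fits a full quadratic surface to toy data (the factor levels and responses are invented, not the study's measurements):

```python
import numpy as np

# Toy illustration of a second-order response surface fit (not the paper's data).
# x1, x2: coded factor levels (e.g. string inharmonicity coefficients);
# y: a perceptual response such as a pitch-match deviation in cents.
x1 = np.array([-1, -1, 1, 1, 0, 0, -1, 1, 0])
x2 = np.array([-1, 1, -1, 1, 0, -1, 0, 0, 1])
y  = np.array([2.1, 3.4, 2.9, 5.2, 3.0, 2.5, 2.6, 4.0, 3.8])

# Full quadratic model: y ~ b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b0", "x1", "x2", "x1*x2", "x1^2", "x2^2"], coef.round(3))))
```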


Radiotherapy and Oncology | 2016

The impact of audiovisual biofeedback on 4D functional and anatomic imaging: Results of a lung cancer pilot study

Jaewon Yang; T Yamamoto; Sean Pollock; Jonathan Berger; Maximilian Diehn; Edward E. Graves; Billy W. Loo; P Keall

BACKGROUND AND PURPOSE: The impact of audiovisual (AV) biofeedback on four-dimensional (4D) positron emission tomography (PET) and 4D computed tomography (CT) image quality was investigated in a prospective clinical trial (NCT01172041).

MATERIAL AND METHODS: 4D-PET and 4D-CT images of ten lung cancer patients were acquired with AV biofeedback (AV) and free breathing (FB). The 4D-PET images were analyzed for motion artifacts by comparing 4D to 3D PET for gross tumor volumes (GTVPET) and maximum standardized uptake values (SUVmax). The 4D-CT images were analyzed for artifacts by comparing normalized cross correlation-based scores (NCCS) and quantifying a visual assessment score (VAS). A Wilcoxon signed-ranks test was used for statistical testing.

RESULTS: The impact of AV biofeedback varied widely. Overall, the 3D to 4D decrease of GTVPET was 1.2±1.3 cm³ with AV and 0.6±1.8 cm³ for FB. The 4D-PET increase of SUVmax was 1.3±0.9 with AV and 1.3±0.8 for FB. The 4D-CT NCCS were 0.65±0.27 with AV and 0.60±0.32 for FB (p=0.08). The 4D-CT VAS was 0.0±2.7.

CONCLUSION: This study demonstrated a high patient dependence on the use of AV biofeedback to reduce motion artifacts in 4D imaging. None of the hypotheses tested were statistically significant. Future development of AV biofeedback will focus on optimizing the human-computer interface and including patient training sessions for improved comprehension and compliance.
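
As a rough illustration of the kind of image-quality metric and statistical test named above, the sketch below computes a normalized cross-correlation score between adjacent CT slices and compares paired per-patient scores with a Wilcoxon signed-ranks test; the numbers are synthetic and the scoring details are an assumption, not the trial's actual NCCS implementation:

```python
import numpy as np
from scipy.stats import wilcoxon

def ncc(a, b):
    """Normalized cross correlation of two image slices; misalignment lowers the score."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def ncc_score(volume):
    """Mean NCC over adjacent axial slices of a 3D volume shaped (slices, rows, cols)."""
    return float(np.mean([ncc(volume[i], volume[i + 1]) for i in range(len(volume) - 1)]))

print(ncc_score(np.random.rand(20, 64, 64)))  # toy volume; score near 0 for pure noise

# Paired per-patient scores with AV biofeedback vs. free breathing (synthetic numbers).
scores_av = [0.71, 0.55, 0.80, 0.62, 0.68]
scores_fb = [0.66, 0.50, 0.78, 0.60, 0.70]
print(wilcoxon(scores_av, scores_fb))
```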


Medical Physics | 2014

SU-D-17A-04: The Impact of Audiovisual Biofeedback On Image Quality During 4D Functional and Anatomic Imaging: Results of a Prospective Clinical Trial

P Keall; Jaewon Yang; T Yamamoto; Sean Pollock; M. Diehn; Jonathan Berger; Edward E. Graves; Billy W. Loo

PURPOSE: The ability of audiovisual (AV) biofeedback to improve breathing regularity has not previously been investigated for functional imaging studies. The purpose of this study was to investigate the impact of AV biofeedback on 4D-PET and 4D-CT image quality in a prospective clinical trial. We hypothesized that motion blurring in 4D-PET images and the number of artifacts in 4D-CT images are reduced using AV biofeedback.

METHODS: AV biofeedback is a real-time, interactive and personalized system designed to help a patient self-regulate his/her breathing using a patient-specific representative waveform and musical guides. In an IRB-approved prospective clinical trial, 4D-PET and 4D-CT images of 10 lung cancer patients were acquired with AV biofeedback (AV) and free breathing (FB). The 4D-PET images in 6 respiratory bins were analyzed for motion blurring by: (1) decrease of GTVPET and (2) increase of SUVmax in 4D-PET compared to 3D-PET. The 4D-CT images were analyzed for artifacts by: (1) comparing normalized cross correlation-based scores (NCCS); and (2) quantifying a visual assessment score (VAS). A two-tailed paired t-test was used to test the hypotheses.

RESULTS: The impact of AV biofeedback on 4D-PET and 4D-CT images varied widely between patients, suggesting inconsistent patient comprehension and capability. Overall, the 4D-PET decrease of GTVPET was 2.0±3.0 cm³ with AV and 2.3±3.9 cm³ for FB (p=0.61). The 4D-PET increase of SUVmax was 1.6±1.0 with AV and 1.1±0.8 with FB (p=0.002). The 4D-CT NCCS were 0.65±0.27 with AV and 0.60±0.32 for FB (p=0.32). The 4D-CT VAS was 0.0±2.7 (p=ns).

CONCLUSION: A 10-patient study demonstrated a statistically significant reduction of motion blurring with AV over FB for 1 of 2 functional 4D-PET imaging metrics. No difference between AV and FB was found for the 2 anatomic 4D-CT imaging metrics. Future studies will focus on optimizing the human-computer interface and including patient training sessions for improved comprehension and capability. Supported by NIH/NCI R01 CA 093626, Stanford BioX Interdisciplinary Initiatives Program, NHMRC Australia Fellowship, and Kwanjeong Educational Foundation. GE Healthcare provided the Respiratory Gating Toolbox for 4D-PET image reconstruction. Stanford University owns US patent #7955270, which is unlicensed to any commercial entity.


Asilomar Conference on Signals, Systems and Computers | 2004

Analysis of hyperspectral colon tissue images using vocal synthesis models

Ryan J. Cassidy; Jonathan Berger; Kyogu Lee; Mauro Maggioni; Ronald R. Coifman

In prior work, we examined the possibility of sound generation from colon tissue scan data using vocal synthesis models. In this work, we review key results and present extensions to the prior work. Sonification entails the mapping of data values to sound synthesis parameters such that informative sounds are produced by the chosen sound synthesis model. We review the physical equations and technical highlights of a vocal synthesis model developed by Cook. Next we present the colon tissue scan data gathered, and discuss processing steps applied to the data. Finally, we review preliminary results from a simple sonification map. New findings regarding perceptual distance of vowel sounds are presented.
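
A sonification map of the sort described, where a data value drives vowel-like synthesis parameters, could look roughly like the following; the vowels, formant values, and mapping are illustrative assumptions rather than the mapping used in the paper:

```python
import numpy as np

# Hypothetical sonification map: one tissue-scan feature value in [0, 1]
# is mapped onto formant frequencies that interpolate between two vowels.
VOWEL_A = (730, 1090, 2440)   # rough /a/ formants in Hz
VOWEL_I = (270, 2290, 3010)   # rough /i/ formants in Hz

def data_to_formants(value):
    """Linearly interpolate formant frequencies between /a/ and /i/."""
    value = float(np.clip(value, 0.0, 1.0))
    return tuple((1 - value) * a + value * i for a, i in zip(VOWEL_A, VOWEL_I))

print(data_to_formants(0.25))  # a value near /a/ with a hint of /i/
```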


Journal of the Acoustical Society of America | 2008

A hybrid model of timbre perception.

Hiroko Terasawa; Jonathan Berger

Timbre is a fundamental attribute of sound. It is important for differentiating between musical sounds and speech utterances, and for characterizing everyday sounds in our environment as well as novel synthetic sounds. A hybrid model of timbre perception, which integrates the concepts of color and texture of sound, is proposed. The color of sound is described in terms of an instantaneous (or ideally timeless) spectral envelope, while the texture of a sound describes the temporal structure of the sound, as the sequential changes of color with an arbitrary range of time-scale. The computational implementation of this model represents a sound’s color as the spectral envelope of a specific window, and its texture as the granularity (or microtexture) of the corresponding window. The temporal structures across windows from both color and texture parts of the model serve as the texture of a sound on a larger time-scale. In support of the proposed theory, a series of psychoacoustic experiments was performed. The quantitative relationship between the spectral envelope and the subjective perception of complex tones was modelled using Mel-frequency cepstral coefficients as a representation. A perceptually tested quantitative representation of texture was established using normalized echo density.
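
The "color" part of the model is represented with Mel-frequency cepstral coefficients; a minimal sketch of computing such a per-window MFCC description is shown below (the toy signal and analysis parameters are illustrative, and librosa is assumed to be available):

```python
import numpy as np
import librosa

# Illustrative sketch: MFCCs as a compact per-window spectral-envelope ("color")
# description, in the spirit of the hybrid model described above.
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.25 * np.sin(2 * np.pi * 880 * t)  # toy tone

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=2048, hop_length=512)
print(mfcc.shape)           # (13 coefficients, number of analysis windows)
print(mfcc[:, 0].round(2))  # the "color" of the first window
```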


Computer Music Journal | 1993

1992 International Computer Music Conference, San Jose, California USA, 14-18 October 1992

Doug Keislar; Robert Pritchard; Todd Winkler; Heinrich Taube; Mara Helmuth; Jonathan Berger; Jonathan Hallstrom; Brad Garton

The 1992 International Computer Music Conference (ICMC) took place in San Jose, California, 14-18 October 1992. Below, we present reviews of the conference as a whole (with special reference to the paper sessions), written by Paul Berg (organizer of the 1986 ICMC), Rob Duisberg, and Carla Scaletti. Following these general remarks are separate reviews of each of the eight concerts, for which we thank Bob Pritchard, Todd Winkler, Rick Taube, Mara Helmuth, Jon Berger, Jon Hallstrom, and Brad Garton. We also provide a review of the one concert that was included

Collaboration


Dive into Jonathan Berger's collaboration.

Top Co-Authors

T Yamamoto

University of California

P Keall

University of Sydney

Jaewon Yang

University of California
