Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Wiktor Młynarski is active.

Publications


Featured research published by Wiktor Młynarski.


PLOS ONE | 2014

Statistics of Natural Binaural Sounds

Wiktor Młynarski; Jürgen Jost

Binaural sound localization is usually considered a discrimination task in which interaural phase disparities (IPDs) and interaural level disparities (ILDs) in narrowly tuned frequency channels are used to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. The statistics of binaural cues therefore depend on the acoustic properties and the spatial configuration of the environment. The distributions of naturally encountered cues, and their dependence on the physical properties of an auditory scene, had not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configurations and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than can be predicted from the head's filtering properties. To understand the complexity of the binaural hearing task in the natural environment, the sound waveforms were analyzed with Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves at each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
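To make the cues concrete, here is a minimal Python sketch, assuming a synthetic stereo tone in place of a real binaural recording, of how per-channel ILDs (log power ratios) and IPDs (wrapped phase differences) can be estimated from the two ears' STFTs. This is an illustration, not the authors' analysis code.

```python
# Minimal sketch: empirical ILD/IPD distributions from a "binaural" signal.
import numpy as np
from scipy.signal import stft

fs = 44100
t = np.arange(fs) / fs
# Synthetic stand-in for a binaural recording: a 500 Hz tone that is
# louder and phase-shifted in the left ear (real data would be a stereo file).
left = 1.5 * np.sin(2 * np.pi * 500 * t + 0.4) + 0.05 * np.random.randn(fs)
right = np.sin(2 * np.pi * 500 * t) + 0.05 * np.random.randn(fs)

_, _, L = stft(left, fs=fs, nperseg=512)
_, _, R = stft(right, fs=fs, nperseg=512)

eps = 1e-12
ild = 20 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))   # dB, per bin
ipd = np.angle(L * np.conj(R))                               # radians, wrapped

# Empirical cue distributions for one frequency channel (bin 6 ~ 516 Hz here).
print("ILD mean/std [dB]:", ild[6].mean(), ild[6].std())
print("IPD mean/std [rad]:", ipd[6].mean(), ipd[6].std())
```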


PLOS Computational Biology | 2015

The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

Wiktor Młynarski

In the mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that such a “panoramic” code evolved to match the specific demands of the sound localization task. This work provides evidence that the experimentally identified properties of spatial auditory neurons follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude, both of which are known to carry information relevant for spatial hearing. The monaural inputs converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
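The following sketch illustrates the first-layer idea under simplifying assumptions: a fixed complex Gabor filter stands in for one learned complex-valued basis function, and its response cleanly separates amplitude and phase; the second layer would then learn a joint code of the monaural amplitudes and the interaural phase difference. The filter and signals are hypothetical stand-ins, not the paper's learned model.

```python
# Minimal sketch of the complex-valued first-layer encoding.
import numpy as np

fs = 16000
n = 256
t = (np.arange(n) - n // 2) / fs
f0 = 1000.0
# Fixed complex Gabor filter (stand-in for a learned basis function).
gabor = np.exp(-0.5 * (t / 0.002) ** 2) * np.exp(2j * np.pi * f0 * t)

rng = np.random.default_rng(0)
tt = np.arange(2048) / fs
left = np.sin(2 * np.pi * f0 * tt + 0.3) + 0.01 * rng.standard_normal(2048)
right = np.sin(2 * np.pi * f0 * tt) + 0.01 * rng.standard_normal(2048)

zL = np.convolve(left, gabor, mode="valid")   # complex coefficients, left ear
zR = np.convolve(right, gabor, mode="valid")  # complex coefficients, right ear

amp_L, amp_R = np.abs(zL), np.abs(zR)         # amplitude (spatially informative)
ipd = np.angle(zL * np.conj(zR))              # interaural phase difference

# A second layer would learn a joint (sparse) code of these three streams.
X = np.stack([np.log(amp_L + 1e-12), np.log(amp_R + 1e-12), ipd], axis=1)
print("second-layer input shape:", X.shape, " mean IPD [rad]:", ipd.mean())
```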


Neural Computation | 2018

Learning Midlevel Auditory Codes from Natural Sound Statistics

Wiktor Młynarski; Josh H. McDermott

Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude across multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features, while others instantiated opponency between distinct sets of features. Such groupings might be implemented by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.
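A minimal two-stage sketch of this architecture, under my own assumptions (random comodulated noise in place of speech corpora, patch-based rather than convolutional coding, and PCA standing in for the paper's learned second layer):

```python
# Minimal sketch: sparse first-layer code of spectrogram patches, then a
# second layer over the time-varying coefficient magnitudes.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, PCA

rng = np.random.default_rng(0)
# Stand-in "spectrogram": 64 frequency bins x 2000 time frames of noise
# with slow comodulation (real inputs would be speech/environmental sounds).
env = np.abs(np.convolve(rng.standard_normal(2000), np.ones(50) / 50, "same"))
spec = np.abs(rng.standard_normal((64, 2000))) * env

# Layer 1: sparse dictionary code of 64x8 spectrotemporal patches.
patches = np.stack([spec[:, i:i + 8].ravel() for i in range(0, 1992, 8)])
layer1 = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                     transform_algorithm="lasso_lars",
                                     random_state=0)
codes = layer1.fit_transform(patches)          # (n_patches, 32) sparse codes

# Layer 2: encode time-varying coefficient magnitudes (PCA stands in for
# the paper's learned second-layer representation).
mags = np.abs(codes)
layer2 = PCA(n_components=5).fit(mags)
print("layer-2 component shape:", layer2.components_.shape)
```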


Neuroreport | 2014

Position of acoustic stimulus modulates visual α activity.

Wiktor Młynarski; Claudia Freigang; Jan Bennemann; Marc Stöhr; Rudolf Rübsamen

It has been repeatedly shown that a unimodal stimulus can modulate the oscillatory activity of multiple cortical areas already at early stages of sensory processing, thereby influencing the response to a subsequent sensory input. Even though this fact is now well established, it is still not clear whether cortical sensory areas are informed about the spatial positions of objects of a modality other than their preferred one. Here, we test the hypothesis that oscillatory activity of the human visual cortex depends on the position of a unimodal auditory object. We recorded the electroencephalogram while presenting sounds in an acoustic free field either at the center of the visual field or at lateral positions. Using independent component analysis, we identified three cortical sources located in the visual cortex that showed stimulus-position-specific oscillatory responses. The most pronounced effect was an immediate α (8–12 Hz) power decrease over the entire occipital lobe when the stimulus originated from the center of the binocular visual field. Following lateral stimulation, the amplitude of α activity decreased slightly over contralateral visual areas, while at the same time a weak α synchronization was observed in the corresponding ipsilateral areas. Thus, even in the absence of visual stimuli, the visual cortex is differentially activated depending on the position of a sound source. Our results show that the visual cortex receives information about the position of auditory stimuli within the visual field.
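To make the central measurement concrete, here is a minimal sketch, using synthetic data rather than the study's EEG, of how an α-band (8–12 Hz) power decrease after stimulus onset can be quantified:

```python
# Minimal sketch: alpha-band power before vs. after stimulus onset.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250
rng = np.random.default_rng(1)
# 4 s of one synthetic occipital channel: an alpha rhythm that attenuates
# at t = 2 s (standing in for the stimulus-evoked power decrease).
t = np.arange(4 * fs) / fs
alpha = np.sin(2 * np.pi * 10 * t) * np.where(t < 2.0, 1.0, 0.4)
eeg = alpha + 0.5 * rng.standard_normal(t.size)

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha_band = filtfilt(b, a, eeg)
power = alpha_band ** 2

pre = power[t < 2.0].mean()
post = power[t >= 2.0].mean()
print(f"alpha power change: {10 * np.log10(post / pre):.1f} dB")
```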


Frontiers in Computational Neuroscience | 2014

Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

Wiktor Młynarski

To date, a number of studies have shown that the receptive field shapes of early sensory neurons can be reproduced by optimizing the coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. First, it is demonstrated that a linear efficient-coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds, extracts the spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of the learned spectrogram features suffices to perform accurate sound localization. A representation of auditory space is therefore learned in a purely unsupervised way, by maximizing coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment.
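A minimal sketch of the pipeline under synthetic assumptions (position simulated as a simple level difference, ICA over concatenated left/right spectrogram frames, and a logistic decoder standing in for the paper's hierarchical extension):

```python
# Minimal sketch: ICA on binaural spectrogram frames, then position decoding
# from a small subset of components.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_frames, n_freq = 1000, 32
positions = rng.integers(0, 2, n_frames)             # 0 = left, 1 = right source
base = np.abs(rng.standard_normal((n_frames, n_freq)))
gain = np.where(positions == 0, 2.0, 0.5)[:, None]   # position sets the ILD
left_spec = base * gain
right_spec = base / gain
X = np.hstack([left_spec, right_spec])               # binaural spectrogram frames

ica = FastICA(n_components=10, random_state=0, whiten="unit-variance")
S = ica.fit_transform(X)                             # independent components

# Rank components by how strongly they covary with position and decode from
# the top 3, mirroring the claim that a small subpopulation of learned
# features suffices for localization.
corr = np.abs([np.corrcoef(S[:, k], positions)[0, 1] for k in range(10)])
top = np.argsort(corr)[-3:]
clf = LogisticRegression().fit(S[:, top], positions)
print("decoding accuracy:", clf.score(S[:, top], positions))
```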


Journal of the Acoustical Society of America | 2017

Lossy compression of uninformative stimuli in the auditory system

Wiktor Młynarski; Josh H. McDermott

Despite its temporal precision, the auditory system does not encode the fine detail of some classes of natural sounds. For example, sounds known as “auditory textures” seem to be encoded and retained with a lossy, compressed representation consisting of time-averaged statistics. One explanation is that the auditory system compresses stimuli that exceed its informational bandwidth: decreased sensitivity to the temporal detail of sound would reflect a limit on the rate at which the auditory system can transmit sensory information. Here we instead propose a normative explanation. We assume that, to minimize energy expenditure, the auditory system compresses stimuli that do not carry novel information about the environment. We developed practical measures of stimulus coding cost (the number of simulated auditory nerve spikes required to encode the sound) and stationarity (the degree of change in the sound spectrum across successive time windows). We found that coding cost is not predictive of the ability to discriminate...
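The sketch below implements crude proxies, my own simplifications rather than the paper's measures, for the two quantities described: a coding cost as an expected spike count when firing rate tracks the broadband envelope, and a stationarity index as the mean change in the log spectrum across successive windows.

```python
# Rough proxies for coding cost and stationarity (not the paper's measures).
import numpy as np
from scipy.signal import stft

def coding_cost(x, fs, rate_scale=500.0):
    """Proxy coding cost: expected Poisson spike count when the firing rate
    tracks the normalized broadband envelope (a crude stand-in for a
    simulated auditory nerve population)."""
    env = np.abs(x)
    env = env / (env.max() + 1e-12)
    return rate_scale * env.sum() / fs        # expected number of spikes

def stationarity(x, fs, win=0.25):
    """Proxy nonstationarity: mean distance between log spectra of
    successive windows (smaller = more stationary, texture-like)."""
    _, _, Z = stft(x, fs=fs, nperseg=int(win * fs))
    logspec = np.log(np.abs(Z) + 1e-12)
    return np.mean(np.linalg.norm(np.diff(logspec, axis=1), axis=0))

fs = 16000
rng = np.random.default_rng(0)
texture = rng.standard_normal(4 * fs)         # noise ~ a stationary texture
speech_like = rng.standard_normal(4 * fs) * np.sin(
    2 * np.pi * 3 * np.arange(4 * fs) / fs) ** 2   # slowly modulated noise
for name, snd in [("texture", texture), ("speech-like", speech_like)]:
    print(name, "cost:", round(coding_cost(snd, fs)),
          "nonstationarity:", round(stationarity(snd, fs), 2))
```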


Multisensory Research | 2013

Perceptual ambiguity — perception and processing of spatially discordant/concordant audiovisual stimuli

Jan Bennemann; Philipp Benner; Claudia Freigang; Wiktor Młynarski; Marc Stöhr; Rudolf Rübsamen

It is well known that under certain conditions crossmodal interactions alter our perception of multisensory events, especially in conflicting or ambiguous situations, e.g., when physically separated (incongruent) audiovisual cues are presented. In the case of the ventriloquist illusion, subjects shift the location of an acoustic signal toward a spatially discordant visual cue (perceptual fusion), i.e., subjects perceptually unify visual and acoustic stimuli as coming from the same source although they are spatially separated. In a behavioral experiment applying a simple unification task, we presented audiovisual stimuli in the free field either spatially discordant or spatially concordant. Participants were required to judge the spatial relation of the co-occurring events (separation/unity). The focus of the present study was to quantify participants’ response behavior using a recently developed nonparametric adaptive sequential sampling procedure (Poppe et al., 2012). We tested four acoustic reference positions in central and para-central space (±8° and ±25°, corresponding to left (−) and right (+) hemispace). Deviating visual signals were presented between 30° left and right of the acoustic signals. The acquired psychometric functions allowed a detailed analysis of individual response behavior across all sampled conditions. Additionally, estimates of the 50% correct-response thresholds were used as a guideline for setting the parameters of audiovisual stimuli in subsequent EEG experiments. We tested the hypothesis that prestimulus cortical activity modulates the perceptual organization of a given stimulus and thereby determines the perceptual outcome of ambiguous audiovisual stimuli. To that end, EEG was recorded while presenting physically identical stimuli which nevertheless generate a percept that varies between two possible alternatives.
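For illustration, here is a minimal sketch of fitting a psychometric function of audiovisual separation and reading off a 50% threshold. The data are synthetic and a simple logistic fit is used; the study itself used the nonparametric adaptive procedure of Poppe et al. (2012).

```python
# Minimal sketch: psychometric function fit and 50% threshold estimate.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(sep, thresh, slope):
    """Probability of reporting 'two separate events' vs. AV separation."""
    return 1.0 / (1.0 + np.exp(-(sep - thresh) / slope))

rng = np.random.default_rng(0)
separations = np.linspace(0, 30, 7)            # visual offset in degrees
true_p = psychometric(separations, thresh=12.0, slope=3.0)
n_trials = 40
responses = rng.binomial(n_trials, true_p) / n_trials   # simulated judgments

(thresh, slope), _ = curve_fit(psychometric, separations, responses,
                               p0=[10.0, 2.0])
print(f"estimated 50% separation threshold: {thresh:.1f} deg")
```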


BMC Genomics | 2013

Novel drug-regulated transcriptional networks in brain reveal pharmacological properties of psychotropic drugs

Michal Korostynski; Marcin Piechota; Jaroslaw Dzbek; Wiktor Młynarski; Klaudia Szklarczyk; Barbara Ziółkowska; Ryszard Przewlocki


arXiv: Learning | 2014

Sparse, complex-valued representations of natural sounds learned with phase and amplitude continuity priors

Wiktor Młynarski


eLife | 2018

Adaptive coding for dynamic sensory inference

Wiktor Młynarski; Ann M Hermundstad

Collaboration


Dive into Wiktor Młynarski's collaborations.

Top Co-Authors

Josh H. McDermott

Massachusetts Institute of Technology

Jaroslaw Dzbek

Polish Academy of Sciences
