Frank Jäkel
University of Osnabrück
Publications
Featured research published by Frank Jäkel.
Journal of Vision | 2005
Malte Kuss; Frank Jäkel; Felix A. Wichmann
In psychophysical studies, the psychometric function is used to model the relation between physical stimulus intensity and the observer's ability to detect or discriminate between stimuli of different intensities. In this study, we propose the use of Bayesian inference to extract the information contained in experimental data to estimate the parameters of psychometric functions. Because Bayesian inference cannot be performed analytically, we describe how a Markov chain Monte Carlo method can be used to generate samples from the posterior distribution over parameters. These samples are used to estimate Bayesian confidence intervals and other characteristics of the posterior distribution. In addition, we discuss the parameterization of psychometric functions and the role of prior distributions in the analysis. The proposed approach is exemplified using artificially generated data and in a case study for real experimental data. Furthermore, we compare our approach with traditional methods based on maximum likelihood parameter estimation combined with bootstrap techniques for confidence interval estimation and find the Bayesian approach to be superior.
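The core idea can be illustrated with a minimal sketch, not the authors' code: a logistic psychometric function with assumed guess and lapse rates is fitted to hypothetical 2AFC-style data by random-walk Metropolis sampling, and a credible interval is read off the posterior samples.

```python
# Minimal sketch: Bayesian estimation of a logistic psychometric function
# via random-walk Metropolis (illustrative assumptions, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2AFC-style data: stimulus intensities, trial counts, correct responses.
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n = np.array([40, 40, 40, 40, 40])
k = np.array([22, 25, 31, 37, 39])

def psychometric(x, alpha, beta, lam=0.02, gamma=0.5):
    """Logistic function scaled between guess rate gamma and 1 - lapse rate lam."""
    f = 1.0 / (1.0 + np.exp(-beta * (np.log(x) - alpha)))
    return gamma + (1.0 - gamma - lam) * f

def log_posterior(theta):
    alpha, log_beta = theta
    p = psychometric(x, alpha, np.exp(log_beta))
    loglik = np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))
    logprior = -0.5 * (alpha / 5.0) ** 2 - 0.5 * (log_beta / 2.0) ** 2  # weak Gaussian priors
    return loglik + logprior

# Random-walk Metropolis sampling of the posterior.
theta = np.array([0.0, 0.0])
logp = log_posterior(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(scale=0.1, size=2)
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])                     # discard burn-in
alpha_ci = np.percentile(samples[:, 0], [2.5, 97.5])
print("95% credible interval for the threshold parameter:", alpha_ci)
```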
Neuropsychologia | 2007
Theresa Cooke; Frank Jäkel; Christian Wallraven; Hh Bülthoff
Similarity has been proposed as a fundamental principle underlying mental object representations and capable of supporting cognitive-level tasks such as categorization. However, much of the research has considered connections between similarity and categorization for tasks performed using a single perceptual modality. Considering similarity and categorization within a multimodal context opens up a number of important questions: Are the similarities between objects the same when they are perceived using different modalities or using more than one modality at a time? Is similarity still able to explain categorization performance when objects are experienced multimodally? In this study, we addressed these questions by having subjects explore novel, 3D objects which varied parametrically in shape and texture using vision alone, touch alone, or touch and vision together. Subjects then performed a pair-wise similarity rating task and a free sorting categorization task. Multidimensional scaling (MDS) analysis of similarity data revealed that a single underlying perceptual map whose dimensions corresponded to shape and texture could explain visual, haptic, and bimodal similarity ratings. However, the relative dimension weights varied according to modality: shape dominated texture when objects were seen, whereas shape and texture were roughly equally important in the haptic and bimodal conditions. Some evidence was found for a multimodal connection between similarity and categorization: the probability of category membership increased with similarity while the probability of a category boundary being placed between two stimuli decreased with similarity. In addition, dimension weights varied according to modality in the same way for both tasks. The study also demonstrates the usefulness of 3D printing technology and MDS techniques in the study of visuohaptic object processing.
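As a rough illustration of the analysis idea rather than the study's pipeline, the sketch below feeds a hypothetical matrix of pairwise dissimilarity ratings, simulated from a weighted shape-by-texture space, into scikit-learn's MDS to recover a two-dimensional perceptual map.

```python
# Minimal sketch: MDS on hypothetical pairwise dissimilarity ratings
# (illustrative only; not the authors' analysis).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)

# Hypothetical stimuli on a 4x4 shape-by-texture grid.
shape, texture = np.meshgrid(np.arange(4), np.arange(4))
coords = np.column_stack([shape.ravel(), texture.ravel()]).astype(float)

# Simulated "visual" ratings: shape differences weighted more heavily than texture.
weights = np.array([1.0, 0.4])
diff = coords[:, None, :] - coords[None, :, :]
dissim = np.sqrt(((weights * diff) ** 2).sum(-1)) + rng.normal(scale=0.05, size=(16, 16))
dissim = np.clip((dissim + dissim.T) / 2.0, 0.0, None)   # symmetrize, keep non-negative
np.fill_diagonal(dissim, 0.0)

# Recover a 2D perceptual map from the dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
embedding = mds.fit_transform(dissim)
print("Stress of the 2D solution:", round(mds.stress_, 3))
```

Dimension weights per modality could then be compared by fitting the embedding against the known shape and texture parameters, as in a weighted (INDSCAL-style) analysis.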
Psychological Science | 2011
Roland W. Fleming; Frank Jäkel; Laurence T. Maloney
Under typical viewing conditions, human observers readily distinguish between materials such as silk, marmalade, or granite, an achievement of the visual system that is poorly understood. Recognizing transparent materials is especially challenging. Previous work on the perception of transparency has focused on objects composed of flat, infinitely thin filters. In the experiments reported here, we considered thick transparent objects, such as ice cubes, which are irregular in shape and can vary in refractive index. An important part of the visual evidence signaling the presence of such objects is distortions in the perceived shape of other objects in the scene. We propose a new class of visual cues derived from the distortion field induced by thick transparent objects, and we provide experimental evidence that cues arising from the distortion field predict both the successes and the failures of human perception in judging refractive indices.
Trends in Cognitive Sciences | 2009
Frank Jäkel; Bernhard Schölkopf; Felix A. Wichmann
Kernel methods are among the most successful tools in machine learning and are used in challenging data analysis problems in many disciplines. Here we provide examples where kernel methods have proven to be powerful tools for analyzing behavioral data, especially for identifying features in categorization experiments. We also demonstrate that kernel methods relate to perceptrons and exemplar models of categorization. Hence, we argue that kernel methods have neural and psychological plausibility, and theoretical results concerning their behavior are therefore potentially relevant for human category learning. In particular, we believe kernel methods have the potential to provide explanations ranging from the implementational via the algorithmic to the computational level.
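The correspondence between kernel methods and exemplar models can be made concrete with a small sketch (an illustration under assumed stimuli and parameters, not code from the paper): an RBF kernel plays the role of the similarity function, and summed similarity to stored exemplars yields a GCM-style choice probability.

```python
# Minimal sketch of the exemplar/kernel correspondence: an RBF kernel as
# similarity, summed similarity to exemplars as the category response.
import numpy as np

def rbf_similarity(x, exemplars, c=1.0):
    """Exponential-decay similarity, analogous to an RBF kernel."""
    d = np.linalg.norm(x - exemplars, axis=1)
    return np.exp(-c * d ** 2)

def exemplar_choice_prob(x, exemplars_a, exemplars_b, c=1.0):
    """GCM-style choice rule: summed similarity to category A vs. B."""
    s_a = rbf_similarity(x, exemplars_a, c).sum()
    s_b = rbf_similarity(x, exemplars_b, c).sum()
    return s_a / (s_a + s_b)

# Hypothetical 2D stimuli for two categories and one probe item.
cat_a = np.array([[0.0, 0.0], [0.5, 0.2], [0.2, 0.6]])
cat_b = np.array([[2.0, 2.0], [1.8, 2.3], [2.4, 1.9]])
probe = np.array([0.8, 0.9])
print("P(respond A | probe) =", round(exemplar_choice_prob(probe, cat_a, cat_b), 3))
```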
Psychonomic Bulletin & Review | 2008
Frank Jäkel; Bernhard Schölkopf; Felix A. Wichmann
Exemplar theories of categorization depend on similarity for explaining subjects’ ability to generalize to new stimuli. A major criticism of exemplar theories concerns their lack of abstraction mechanisms and thus, seemingly, of generalization ability. Here, we use insights from machine learning to demonstrate that exemplar models can actually generalize very well. Kernel methods in machine learning are akin to exemplar models and are very successful in real-world applications. Their generalization performance depends crucially on the chosen similarity measure. Although similarity plays an important role in describing generalization behavior, it is not the only factor that controls generalization performance. In machine learning, kernel methods are often combined with regularization techniques in order to ensure good generalization. These same techniques are easily incorporated in exemplar models. We show that the generalized context model (Nosofsky, 1986) and ALCOVE (Kruschke, 1992) are closely related to a statistical model called kernel logistic regression. We argue that generalization is central to the enterprise of understanding categorization behavior, and we suggest some ways in which insights from machine learning can offer guidance.
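The kernel-logistic-regression counterpart mentioned in the abstract can be sketched as follows; the data, kernel width, and regularization strength are all hypothetical, and this is not the paper's implementation.

```python
# Minimal sketch: regularized kernel logistic regression on hypothetical 1D stimuli,
# as the machine-learning relative of exemplar models (illustrative only).
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training stimuli and binary category labels.
X = rng.uniform(-2, 2, size=(40, 1))
y = (X[:, 0] + 0.3 * rng.normal(size=40) > 0).astype(float)

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(X, X)
alpha = np.zeros(len(X))
lam = 0.1                      # regularization strength controls generalization

# Gradient descent on the regularized negative log-likelihood.
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-K @ alpha))
    grad = K @ (p - y) + lam * K @ alpha
    alpha -= 0.1 * grad / len(X)

# Predict the category probability for a new stimulus.
x_new = np.array([[0.5]])
p_new = 1.0 / (1.0 + np.exp(-(rbf_kernel(x_new, X) @ alpha)))
print("P(category 1 | x = 0.5) =", round(float(p_new[0]), 3))
```

The regularization term is the ingredient the abstract highlights: without it, the exemplar weights can overfit the training items; with it, generalization to new stimuli is controlled.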
international conference on machine learning | 2006
Dilan Görür; Frank Jäkel; Carl Edward Rasmussen
Elimination by aspects (EBA) is a probabilistic choice model describing how humans decide between several options. The options from which the choice is made are characterized by binary features and associated weights. For instance, when choosing which mobile phone to buy the features to consider may be: long lasting battery, color screen, etc. Existing methods for inferring the parameters of the model assume pre-specified features. However, the features that lead to the observed choices are not always known. Here, we present a non-parametric Bayesian model to infer the features of the options and the corresponding weights from choice data. We use the Indian buffet process (IBP) as a prior over the features. Inference using Markov chain Monte Carlo (MCMC) in conjugate IBP models has been previously described. The main contribution of this paper is an MCMC algorithm for the EBA model that can also be used in inference for other non-conjugate IBP models---this may broaden the use of IBP priors considerably.
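Inference with the IBP prior is beyond a short example, but the EBA choice rule itself, with pre-specified binary features and weights as in the mobile-phone illustration, can be sketched as follows (feature names and weights are made up).

```python
# Minimal sketch of Tversky's elimination-by-aspects (EBA) choice rule with
# pre-specified features and weights; the paper's contribution, inferring the
# features with an IBP prior via MCMC, is not reproduced here.

def eba_prob(chosen, options, features, weights):
    """P(choosing `chosen` from `options`); features[i] is the set of aspects of option i."""
    if len(options) == 1:
        return 1.0
    shared = set.intersection(*[features[o] for o in options])
    relevant = set.union(*[features[o] for o in options]) - shared
    if not relevant:
        return 1.0 / len(options)          # no discriminating aspects: choose at random
    denom = sum(weights[a] for a in relevant)
    prob = 0.0
    for a in features[chosen] - shared:
        reduced = [o for o in options if a in features[o]]
        prob += weights[a] / denom * eba_prob(chosen, reduced, features, weights)
    return prob

# Hypothetical mobile-phone example: three options described by weighted binary aspects.
weights = {"long_battery": 2.0, "color_screen": 1.0, "cheap": 1.5}
features = {0: {"long_battery", "cheap"}, 1: {"color_screen", "cheap"}, 2: {"color_screen"}}
options = [0, 1, 2]
print([round(eba_prob(o, options, features, weights), 3) for o in options])
```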
Frontiers in Neuroscience | 2011
Maik C. Stüttgen; Cornelius Schwarz; Frank Jäkel
Single-unit recordings conducted during perceptual decision-making tasks have yielded tremendous insights into the neural coding of sensory stimuli. In such experiments, detection or discrimination behavior (the psychometric data) is observed in parallel with spike trains in sensory neurons (the neurometric data). Frequently, candidate neural codes for information read-out are pitted against each other by transforming the neurometric data in some way and asking which code’s performance most closely approximates the psychometric performance. The code that matches the psychometric performance best is retained as a viable candidate and the others are rejected. In following this strategy, psychometric data is often considered to provide an unbiased measure of perceptual sensitivity. It is rarely acknowledged that psychometric data result from a complex interplay of sensory and non-sensory processes and that neglect of these processes may result in misestimating psychophysical sensitivity. This again may lead to erroneous conclusions regarding the adequacy of candidate neural codes. In this review, we first discuss requirements on the neural data for a subsequent neurometric-psychometric comparison. We then focus on different psychophysical tasks for the assessment of detection and discrimination performance and the cognitive processes that may underlie their execution. We discuss further factors that may compromise psychometric performance and how they can be detected or avoided. We believe that these considerations point to shortcomings in our understanding of the processes underlying perceptual decisions, and therefore offer potential for future research.
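One common form of the neurometric-psychometric comparison discussed here computes the area under the ROC curve from spike-count distributions and treats it as the neuron's predicted proportion correct. The sketch below uses hypothetical Poisson spike counts and an arbitrary behavioral value; it is an illustration of the comparison, not data or code from the review.

```python
# Minimal sketch: ROC-based neurometric performance from hypothetical spike counts,
# compared against a hypothetical psychometric proportion correct.
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical spike counts for two stimulus conditions across trials.
counts_s1 = rng.poisson(lam=8, size=200)     # e.g., weaker stimulus
counts_s2 = rng.poisson(lam=12, size=200)    # e.g., stronger stimulus

def roc_auc(a, b):
    """Probability that a random draw from b exceeds one from a (ties counted half)."""
    a, b = np.asarray(a), np.asarray(b)
    greater = (b[:, None] > a[None, :]).mean()
    ties = (b[:, None] == a[None, :]).mean()
    return greater + 0.5 * ties

neurometric = roc_auc(counts_s1, counts_s2)
psychometric = 0.81                           # hypothetical observed proportion correct
print(f"neurometric {neurometric:.3f} vs. psychometric {psychometric:.3f}")
```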
Neural Networks | 2001
Jan Storck; Frank Jäkel; Gustavo Deco
We apply spiking neurons with dynamic synapses to detect temporal patterns in a multi-dimensional signal. We use a network of integrate-and-fire neurons, fully connected via dynamic synapses, each of which is given by a biologically plausible dynamical model based on the exact pre- and post-synaptic spike timing. Dependent on their adaptable configuration (learning) the synapses automatically implement specific delays. Hence, each output neuron with its set of incoming synapses works as a detector for a specific temporal pattern. The whole network functions as a temporal clustering mechanism with one output per input cluster. The classification capability is demonstrated by illustrative examples including patterns from Poisson processes and the analysis of speech data.
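A stripped-down sketch of the network's building block, a leaky integrate-and-fire neuron driven through a single synapse, is shown below; the dynamic-synapse model and the delay learning described in the paper are not reproduced, and all parameter values are assumptions for illustration.

```python
# Minimal sketch: a leaky integrate-and-fire neuron driven by a Poisson input
# spike train through a fixed synaptic weight (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(4)

dt, T = 1e-4, 0.5                   # time step and simulation length (s)
tau_m, v_thresh, v_reset = 0.02, 1.0, 0.0
steps = int(T / dt)

# Poisson input spike train at 200 Hz, injected through weight w.
input_spikes = rng.uniform(size=steps) < 200 * dt
w = 0.3

v = 0.0
output_spikes = []
for t in range(steps):
    v += dt * (-v / tau_m)          # membrane leak
    if input_spikes[t]:
        v += w                      # synaptic kick
    if v >= v_thresh:               # threshold crossing: emit a spike and reset
        output_spikes.append(t * dt)
        v = v_reset

print(f"{len(output_spikes)} output spikes in {T} s "
      f"({len(output_spikes) / T:.1f} Hz)")
```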
MEi:CogSci Conference 2013, Budapest | 2013
Frank Jäkel; Cornell Schreiber
Problem solving research has encountered an impasse. Since the seminal work of Newell and Simon (1972), researchers do not seem to have made much theoretical progress (Batchelder and Alexander, 2012; Ohlsson, 2012). In this paper we argue that one factor that is holding back the field is the widespread rejection of introspection among cognitive scientists. We review evidence that introspection improves problem solving performance, sometimes dramatically. Several studies suggest that self-observation, self-monitoring, and self-reflection play a key role in developing problem solving strategies. We argue that studying these introspective processes will require researchers to systematically ask subjects to introspect. However, we document that cognitive science textbooks dismiss introspection and as a consequence introspective methods are not used in problem solving research, even when it would be appropriate. We conclude that research on problem solving would benefit from embracing introspection rather than dismissing it.
Neural Computation | 2015
Johannes Schumacher; Thomas Wunderle; Pascal Fries; Frank Jäkel; Gordon Pipa
In neuroscience, data are typically generated from neural network activity. The resulting time series represent measurements from spatially distributed subsystems with complex interactions, weakly coupled to a high-dimensional global system. We present a statistical framework to estimate the direction of information flow and its delay in measurements from systems of this type. Informed by differential topology, Gaussian process regression is employed to reconstruct measurements of putative driving systems from measurements of the driven systems. These reconstructions serve to estimate the delay of the interaction by means of an analytical criterion developed for this purpose. The model accounts for a range of possible sources of uncertainty, including temporally evolving intrinsic noise, while assuming complex nonlinear dependencies. Furthermore, we show that if information flow is delayed, this approach also allows for inference in strong coupling scenarios of systems exhibiting synchronization phenomena. The validity of the method is demonstrated with a variety of delay-coupled chaotic oscillators. In addition, we show that these results seamlessly transfer to local field potentials in cat visual cortex.
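The delay-estimation idea can be caricatured with a toy example: a linearly coupled pair of signals rather than delay-coupled chaotic oscillators, and not the paper's analytical criterion. For each candidate delay, Gaussian process regression tries to reconstruct the putative driver from a short delay embedding of the driven signal, and the delay with the smallest held-out reconstruction error is selected.

```python
# Minimal sketch: scan candidate delays, reconstruct the driver from a delay
# embedding of the driven signal with GP regression, pick the best-fitting delay
# (toy linear coupling, illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)

# Toy data: x drives y with a delay of 5 samples.
n, true_delay = 400, 5
x = np.sin(0.2 * np.arange(n)) + 0.1 * rng.normal(size=n)
y = np.zeros(n)
for t in range(true_delay, n):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - true_delay] + 0.05 * rng.normal()

def reconstruction_error(delay, emb_dim=2):
    """GP-reconstruct x[t - delay] from the embedding (y[t], ..., y[t - emb_dim + 1])."""
    t_idx = np.arange(emb_dim + 20, n)
    Y = np.column_stack([y[t_idx - i] for i in range(emb_dim)])
    target = x[t_idx - delay]
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(Y[:150], target[:150])                 # fit on the first part,
    pred = gp.predict(Y[150:])                    # evaluate on the rest
    return np.mean((pred - target[150:]) ** 2)

errors = {d: reconstruction_error(d) for d in range(0, 11)}
print("estimated delay:", min(errors, key=errors.get), "(true delay: 5)")
```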