Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tim C. Kietzmann is active.

Publications


Featured research published by Tim C. Kietzmann.


Journal of Vision | 2010

Investigating task-dependent top-down effects on overt visual attention

Torsten Betz; Tim C. Kietzmann; Niklas Wilming; Peter König

Different tasks can induce different viewing behavior, yet it is still an open question whether and how high-level task information interacts with the bottom-up processing of stimulus-related information. Two possible causal routes are considered in this paper. The first is the weak top-down hypothesis, according to which top-down effects are mediated by changes of feature weights in the bottom-up system. The second is the strong top-down hypothesis, which proposes that top-down information acts independently of the bottom-up process. To clarify the influences of these different routes, viewing behavior was recorded on web pages for three different tasks: free viewing, content awareness, and information search. The data reveal significant task-dependent differences in viewing behavior that are accompanied by minor changes in feature-fixation correlations. Extensive computational modeling shows that these small but significant changes are insufficient to explain the observed differences in viewing behavior. Collectively, the results show that task-dependent differences in the current setting are not mediated by a reweighting of features in the bottom-up hierarchy, ruling out the weak top-down hypothesis. Consequently, the strong top-down hypothesis is the most viable explanation for the observed data.


The Journal of Neuroscience | 2012

Prevalence of Selectivity for Mirror-Symmetric Views of Faces in the Ventral and Dorsal Visual Pathways

Tim C. Kietzmann; Jascha D. Swisher; Peter König; Frank Tong

Although the ability to recognize faces and objects from a variety of viewpoints is crucial to our everyday behavior, the underlying cortical mechanisms are not well understood. Recently, neurons in a face-selective region of the monkey temporal cortex were reported to be selective for mirror-symmetric viewing angles of faces as they were rotated in depth (Freiwald and Tsao, 2010). This property has been suggested to constitute a key computational step in achieving full view-invariance. Here, we measured functional magnetic resonance imaging activity in nine observers as they viewed upright or inverted faces presented at five different angles (−60, −30, 0, 30, and 60°). Using multivariate pattern analysis, we show that sensitivity to viewpoint mirror symmetry is widespread in the human visual system. The effect was observed in a large band of higher order visual areas, including the occipital face area, fusiform face area, lateral occipital cortex, mid fusiform, parahippocampal place area, and extending superiorly to encompass dorsal regions V3A/B and the posterior intraparietal sulcus. In contrast, early retinotopic regions V1–hV4 failed to exhibit sensitivity to viewpoint symmetry, as their responses could be largely explained by a computational model of low-level visual similarity. Our findings suggest that selectivity for mirror-symmetric viewing angles may constitute an intermediate-level processing step shared across multiple higher order areas of the ventral and dorsal streams, setting the stage for complete viewpoint-invariant representations at subsequent levels of visual processing.


Neurocomputing | 2008

Incremental GRLVQ: Learning relevant features for 3D object recognition

Tim C. Kietzmann; Sascha Lange; Martin A. Riedmiller

We present a new variant of generalized relevance learning vector quantization (GRLVQ) in a computer vision scenario. A version with incrementally added prototypes is used for the non-trivial case of high-dimensional object recognition. Training is based on a generic set of standard visual features, and the learned input weights are used for iterative feature pruning. Thus, prototypes and input space are altered simultaneously, leading to very sparse and task-specific representations. The effectiveness of the approach and the combination of the incremental variant together with pruning was tested on the COIL100 database. It exhibits excellent performance with regard to codebook size, feature selection and recognition accuracy.
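The core idea behind relevance learning can be illustrated in a few lines: prototypes are attracted toward same-class samples and repelled from them otherwise, while a per-feature relevance weight grows for features that separate the classes. The following is a deliberately simplified sketch of that idea, not the paper's exact GRLVQ update rules or its incremental prototype-insertion scheme:

```python
import random

def train_relevance_lvq(data, prototypes, labels, lam, lr_w=0.05, lr_l=0.02):
    """Simplified relevance-LVQ sketch: adapt prototypes and per-feature
    relevance weights lam (kept non-negative and normalized to sum to 1)."""
    for x, y in data:
        # squared distances under the current relevance-weighted metric
        d = [sum(l * (xi - wi) ** 2 for l, xi, wi in zip(lam, x, w))
             for w in prototypes]
        # nearest prototype with the correct (J) and incorrect (K) label
        J = min((i for i, c in enumerate(labels) if c == y), key=d.__getitem__)
        K = min((i for i, c in enumerate(labels) if c != y), key=d.__getitem__)
        for i in range(len(x)):
            prototypes[J][i] += lr_w * (x[i] - prototypes[J][i])  # attract
            prototypes[K][i] -= lr_w * (x[i] - prototypes[K][i])  # repel
            # features on which the correct prototype is closer gain relevance
            lam[i] += lr_l * ((x[i] - prototypes[K][i]) ** 2
                              - (x[i] - prototypes[J][i]) ** 2)
        total = sum(max(l, 0.0) for l in lam)
        lam[:] = [max(l, 0.0) / total for l in lam]
    return prototypes, lam

# Toy problem: class is determined by feature 0 alone; feature 1 is noise.
random.seed(0)
data = [([float(y), random.random()], y) for y in [0, 1] * 20]
protos, lam = train_relevance_lvq(
    data, prototypes=[[0.2, 0.5], [0.8, 0.5]], labels=[0, 1], lam=[0.5, 0.5])
print(lam)  # relevance of the informative feature 0 dominates
```

On this toy problem the learned relevance profile singles out feature 0, which is the property the paper exploits for iterative feature pruning: features whose relevance stays near zero can be dropped from the input space.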


PLOS ONE | 2011

Overt Visual Attention as a Causal Factor of Perceptual Awareness

Tim C. Kietzmann; Stephan Geuter; Peter König

Our everyday conscious experience of the visual world is fundamentally shaped by the interaction of overt visual attention and object awareness. Although the principal impact of both components is undisputed, it is still unclear how they interact. Here we recorded eye-movements preceding and following conscious object recognition, collected during the free inspection of ambiguous and corresponding unambiguous stimuli. Using this paradigm, we demonstrate that fixations recorded prior to object awareness predict the later recognized object identity, and that subjects accumulate more evidence that is consistent with their later percept than for the alternative. The timing of reached awareness was verified by a reaction-time based correction method and also based on changes in pupil dilation. Control experiments, in which we manipulated the initial locus of visual attention, confirm a causal influence of overt attention on the subsequent result of object perception. The current study thus demonstrates that distinct patterns of overt attentional selection precede object awareness and thereby directly builds on recent electrophysiological findings suggesting two distinct neuronal mechanisms underlying the two phenomena. Our results emphasize the crucial importance of overt visual attention in the formation of our conscious experience of the visual world.


International Conference on Machine Learning and Applications | 2009

The Neuro Slot Car Racer: Reinforcement Learning in a Real World Setting

Tim C. Kietzmann; Martin A. Riedmiller

This paper describes a novel real-world reinforcement learning application: The Neuro Slot Car Racer. In addition to presenting the system and first results based on Neural Fitted Q-Iteration, a standard batch reinforcement learning technique, an extension is proposed that is capable of improving training times and results by allowing for a reduction of samples required for successful training. The Neuralgic Pattern Selection approach achieves this by applying a failure-probability function which emphasizes neuralgic parts of the state space during sampling.
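The sampling idea behind Neuralgic Pattern Selection can be sketched as follows. The exact failure-probability function used in the paper is not specified here; `fail_prob` and the small sampling floor are illustrative assumptions:

```python
import random

def neuralgic_sample(transitions, fail_prob, batch_size, floor=0.05):
    """Draw a training batch in which transitions from 'neuralgic'
    (failure-prone) states are over-represented: each transition is
    sampled with probability proportional to the estimated failure
    probability of its start state, plus a small floor so that benign
    states are never excluded entirely."""
    weights = [fail_prob(s) + floor for (s, _a, _r, _s2) in transitions]
    return random.choices(transitions, weights=weights, k=batch_size)

# Illustrative use: hypothetical 1-D states; those near the track limit fail often.
random.seed(1)
pool = [(s / 10.0, 0, 0.0, s / 10.0) for s in range(-9, 10)]
risky = lambda s: 0.9 if abs(s) > 0.65 else 0.1
batch = neuralgic_sample(pool, risky, batch_size=1000)
share = sum(abs(t[0]) > 0.65 for t in batch) / len(batch)
print(share)  # the minority of failure-prone states supplies most of the batch
```

Biasing the batch this way reduces the number of samples needed because the learner sees the hard parts of the state space more often, which is the training-time benefit the paper reports.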


Biological Cybernetics | 2009

Computational object recognition: a biologically motivated approach

Tim C. Kietzmann; Sascha Lange; Martin A. Riedmiller

We propose a conceptual framework for artificial object recognition systems based on findings from neurophysiological and neuropsychological research on the visual system in primate cortex. We identify some essential questions, which have to be addressed in the course of designing object recognition systems. As answers, we review some major aspects of biological object recognition, which are then translated into the technical field of computer vision. The key suggestions are the use of incremental and view-based approaches together with the ability of online feature selection and the interconnection of object-views to form an overall object representation. The effectiveness of the computational approach is estimated by testing a possible realization in various tasks and conditions explicitly designed to allow for a direct comparison with the biological counterpart. The results exhibit excellent performance with regard to recognition accuracy, the creation of sparse models and the selection of appropriate features.


bioRxiv | 2017

Deep Neural Networks In Computational Neuroscience

Tim C. Kietzmann; Patrick McClure; Nikolaus Kriegeskorte

The goal of computational neuroscience is to find mechanistic explanations of how the nervous system processes information to give rise to cognitive function and behaviour. At the heart of the field are its models, i.e. mathematical and computational descriptions of the system being studied, which map sensory stimuli to neural responses and/or neural to behavioural responses. These models range from simple to complex. Recently, deep neural networks (DNNs) have come to dominate several domains of artificial intelligence (AI). As the term “neural network” suggests, these models are inspired by biological brains. However, current DNNs neglect many details of biological neural networks. These simplifications contribute to their computational efficiency, enabling them to perform complex feats of intelligence, ranging from perceptual (e.g. visual object and auditory speech recognition) to cognitive tasks (e.g. machine translation), and on to motor control (e.g. playing computer games or controlling a robot arm). In addition to their ability to model complex intelligent behaviours, DNNs excel at predicting neural responses to novel sensory stimuli with accuracies well beyond any other currently available model type. DNNs can have millions of parameters, which are required to capture the domain knowledge needed for successful task performance. Contrary to the intuition that this renders them into impenetrable black boxes, the computational properties of the network units are the result of four directly manipulable elements: input statistics, network structure, functional objective, and learning algorithm. With full access to the activity and connectivity of all units, advanced visualization techniques, and analytic tools to map network representations to neural data, DNNs represent a powerful framework for building task-performing models and will drive substantial insights in computational neuroscience.
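One widely used analytic tool for mapping network representations to neural data is representational similarity analysis (RSA): each system is summarized by a matrix of pairwise response-pattern dissimilarities, and the two matrices are then compared. A bare-bones sketch, using Pearson distance and Pearson comparison (one of several common choices, not necessarily the authors'):

```python
def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def rdm(patterns):
    """Representational dissimilarity matrix, upper triangle only:
    1 - correlation between the response patterns of each stimulus pair."""
    return [1 - pearson(patterns[i], patterns[j])
            for i in range(len(patterns)) for j in range(i + 1, len(patterns))]

# Toy model-brain comparison: correlate the two systems' RDMs.
model_acts = [[1, 2, 3, 4, 5], [2, 4, 6, 8, 10], [5, 4, 3, 2, 1], [1, 3, 2, 5, 4]]
neural_acts = [[2 * v + 1 for v in p] for p in model_acts]  # same geometry
rsa_score = pearson(rdm(model_acts), rdm(neural_acts))
print(round(rsa_score, 6))  # identical representational geometry gives 1.0
```

Because the comparison operates on dissimilarity structure rather than raw activations, it sidesteps the unit-to-voxel correspondence problem, which is what makes it practical for relating DNN layers to recorded neural responses.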


Journal of Cognitive Neuroscience | 2017

Representational dynamics of facial viewpoint encoding

Tim C. Kietzmann; Anna L. Gert; Frank Tong; Peter König

Faces provide a wealth of information, including the identity of the seen person and social cues, such as the direction of gaze. Crucially, different aspects of face processing require distinct forms of information encoding. Another person's attentional focus can be derived based on a view-dependent code. In contrast, identification benefits from invariance across all viewpoints. Different cortical areas have been suggested to subserve these distinct functions. However, little is known about the temporal aspects of differential viewpoint encoding in the human brain. Here, we combine EEG with multivariate data analyses to resolve the dynamics of face processing with high temporal resolution. This revealed a distinct sequence of viewpoint encoding. Head orientations were encoded first, starting after around 60 msec of processing. Shortly afterward, peaking around 115 msec after stimulus onset, a different encoding scheme emerged. At this latency, mirror-symmetric viewing angles elicited highly similar cortical responses. Finally, about 280 msec after visual onset, EEG response patterns demonstrated a considerable degree of viewpoint invariance across all viewpoints tested, with the noteworthy exception of the front-facing view. Taken together, our results indicate that the processing of facial viewpoints follows a temporal sequence of encoding schemes, potentially mirroring different levels of computational complexity.
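The time-resolved multivariate logic can be sketched as follows: at every timepoint, a classifier is trained and tested on the channel patterns, yielding decoding accuracy as a function of time. The leave-one-out nearest-class-mean decoder below is a deliberately minimal stand-in for the multivariate analyses used in such studies:

```python
import random

def decode_timecourse(trials, labels):
    """trials[i][t] is the channel pattern of trial i at timepoint t.
    Returns leave-one-out nearest-class-mean decoding accuracy per timepoint."""
    n_time = len(trials[0])
    accuracy = []
    for t in range(n_time):
        correct = 0
        for i, (trial, true_label) in enumerate(zip(trials, labels)):
            # class sums/counts over all trials except the held-out one
            sums, counts = {}, {}
            for j, (other, lab) in enumerate(zip(trials, labels)):
                if j == i:
                    continue
                s = sums.setdefault(lab, [0.0] * len(other[t]))
                sums[lab] = [a + b for a, b in zip(s, other[t])]
                counts[lab] = counts.get(lab, 0) + 1
            # squared distance of the held-out trial to each class mean
            dist = {lab: sum((x - s_k / counts[lab]) ** 2
                             for x, s_k in zip(trial[t], sums[lab]))
                    for lab in sums}
            if min(dist, key=dist.get) == true_label:
                correct += 1
        accuracy.append(correct / len(trials))
    return accuracy

# Toy "EEG": 2 channels, 2 timepoints; class information only emerges at t1.
random.seed(2)
trials, labels = [], []
for k in range(10):
    y = k % 2
    noise = [random.gauss(0, 1), random.gauss(0, 1)]   # t0: no class signal
    signal = [float(y), float(y)]                      # t1: clean separation
    trials.append([noise, signal])
    labels.append(y)
acc = decode_timecourse(trials, labels)
print(acc)  # decodable only at the timepoint carrying class information
```

Plotting such an accuracy timecourse for each viewpoint contrast is what lets studies of this kind read off when a given encoding scheme (view-dependent, mirror-symmetric, view-invariant) emerges.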


Vision Research | 2015

Effects of contextual information and stimulus ambiguity on overt visual sampling behavior.

Tim C. Kietzmann; Peter König

The sampling of our visual environment through saccadic eye movements is an essential function of the brain, allowing us to overcome the limits of peripheral vision. Understanding which parts of a scene attract overt visual attention is subject to intense research, and considerable progress has been made in unraveling the underlying cortical mechanisms. In contrast to spatial aspects, however, relatively little is understood about temporal aspects of overt visual sampling. At every fixation, the oculomotor system faces the decision whether to keep exploring different aspects of an object or scene or whether to remain fixated to allow for in-depth cortical processing - a situation that can be understood in terms of an exploration-exploitation dilemma. To improve our understanding of the factors involved in these decisions, we here investigate how the level of visual information, experimentally manipulated by scene context and stimulus ambiguity, changes the sampling behavior preceding the recognition of centrally presented ambiguous and disambiguated objects. Behaviorally, we find that context, although only presented until the first voluntary saccade, biases the perceptual outcome and significantly reduces reaction times. Importantly, we find that increased information about an object significantly alters its visual exploration, as evident through increased fixation durations and reduced saccade amplitudes. These results demonstrate that the initial sampling of an object, preceding its recognition, is subject to change based on the amount of information available in the system: increased evidence for its identity biases the exploration-exploitation strategy towards in-depth analyses.


Scientific Data | 2017

An extensive dataset of eye movements during viewing of complex images.

Niklas Wilming; Selim Onat; José P. Ossandón; Alper Açık; Tim C. Kietzmann; Kai Kaspar; Ricardo Ramos Gameiro; Alexandra Vormberg; Peter König

We present a dataset of free-viewing eye-movement recordings that contains more than 2.7 million fixation locations from 949 observers on more than 1000 images from different categories. This dataset aggregates and harmonizes data from 23 different studies conducted at the Institute of Cognitive Science at Osnabrück University and the University Medical Center in Hamburg-Eppendorf. Trained personnel recorded all studies under standard conditions with homogeneous equipment and parameter settings. All studies allowed for free eye-movements, and differed in the age range of participants (~7–80 years), stimulus sizes, stimulus modifications (phase scrambled, spatial filtering, mirrored), and stimulus categories (natural and urban scenes, web sites, fractals, pink noise, and ambiguous artistic figures). The size and variability of viewing behavior within this dataset present a strong opportunity for evaluating and comparing computational models of overt attention, and furthermore, for thoroughly quantifying strategies of viewing behavior. This also makes the dataset a good starting point for investigating whether viewing strategies change in patient groups.

Collaboration


Dive into Tim C. Kietzmann's collaborations.

Top Co-Authors

Peter König (University of Osnabrück)
Niklas Wilming (University of Osnabrück)
Anna L. Gert (University of Osnabrück)
Danja Porada (University of Osnabrück)