Publication


Featured research published by Yağmur Güçlütürk.


Frontiers in Human Neuroscience | 2016

Liking versus Complexity: Decomposing the Inverted U-curve

Yağmur Güçlütürk; Richard H. A. H. Jacobs; Rob van Lier

The relationship between liking and stimulus complexity is commonly reported to follow an inverted U-curve. However, large individual differences in the complexity preferences of participants have frequently been observed since the earliest studies on the topic. The common use of across-participant analysis methods that ignore these large individual differences in aesthetic preferences gives an impression of high agreement between individuals. In this study, we collected ratings of liking and perceived complexity from 30 participants for a set of digitally generated grayscale images. In addition, we calculated an objective measure of complexity for each image. Our results reveal that the inverted U-curve relationship between liking and stimulus complexity comes about as the combination of different individual liking functions. Specifically, after automatically clustering the participants based on their liking ratings, we determined that one group of participants in our sample gave increasingly lower liking ratings to increasingly complex stimuli, while a second group gave increasingly higher liking ratings to increasingly complex stimuli. Based on our findings, we call for a focus on individual differences in aesthetic preferences, the adoption of alternative analysis methods that account for these differences, and a re-evaluation of established rules of human aesthetic preferences.
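
As a rough illustration of the clustering step described in the abstract, the following sketch (using scikit-learn on synthetic ratings rather than the study's data; cluster count, sample sizes, and noise level are assumptions) groups participants by their liking profiles and checks how each group's mean liking relates to complexity.

```python
# Minimal sketch (not the authors' analysis): cluster participants by their
# liking-rating profiles, then inspect how each cluster's mean liking varies
# with stimulus complexity. All data here are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

n_participants, n_stimuli = 30, 60
complexity = np.linspace(0.0, 1.0, n_stimuli)          # objective complexity per image

# Synthetic ratings: half the participants prefer simple images, half complex ones.
slopes = np.r_[-np.ones(n_participants // 2), np.ones(n_participants // 2)]
liking = slopes[:, None] * complexity[None, :] + 0.2 * rng.standard_normal((n_participants, n_stimuli))

# Cluster participants on their full liking profiles (one row per participant).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(liking)

# Correlate each cluster's mean liking with complexity to characterise the groups.
for k in range(2):
    mean_liking = liking[labels == k].mean(axis=0)
    r = np.corrcoef(mean_liking, complexity)[0, 1]
    print(f"cluster {k}: n={np.sum(labels == k)}, corr(liking, complexity) = {r:+.2f}")
```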


European Conference on Computer Vision | 2016

Convolutional sketch inversion

Yağmur Güçlütürk; Umut Güçlü; Rob van Lier; Marcel A. J. van Gerven

In this paper, we use deep neural networks for inverting face sketches to synthesize photorealistic face images. We first construct a semi-simulated dataset containing a very large number of computer-generated face sketches with different styles and corresponding face images by expanding existing unconstrained face datasets. We then train models achieving state-of-the-art results on both computer-generated sketches and hand-drawn sketches by leveraging recent advances in deep learning such as batch normalization, deep residual learning, perceptual losses and stochastic optimization in combination with our new dataset. We finally demonstrate potential applications of our models in fine arts and forensic arts. In contrast to existing patch-based approaches, our deep-neural-network-based approach can be used for synthesizing photorealistic face images by inverting face sketches in the wild.
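
The sketch below is a minimal, hypothetical PyTorch version of the kind of batch-normalised residual generator the abstract describes, mapping a single-channel sketch to an RGB image. It is not the authors' released model: the layer sizes are assumptions, and a plain pixel loss stands in for the perceptual losses mentioned above.

```python
# Minimal sketch (not the published model): a small residual encoder-decoder
# that maps a 1-channel face sketch to a 3-channel image.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))   # identity shortcut (deep residual learning)

class SketchInverter(nn.Module):
    def __init__(self, ch=64, n_blocks=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 9, padding=4), nn.ReLU(inplace=True),   # sketch -> features
            *[ResBlock(ch) for _ in range(n_blocks)],                # residual trunk
            nn.Conv2d(ch, 3, 9, padding=4), nn.Tanh(),               # features -> RGB in [-1, 1]
        )

    def forward(self, sketch):
        return self.net(sketch)

model = SketchInverter()
fake_sketch = torch.randn(2, 1, 96, 96)                          # placeholder batch of sketches
photo = model(fake_sketch)                                       # (2, 3, 96, 96) synthesized images
loss = nn.functional.mse_loss(photo, torch.randn_like(photo))    # pixel loss as a stand-in;
loss.backward()                                                  # the paper adds perceptual losses
print(photo.shape)
```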


European Conference on Computer Vision | 2016

Deep Impression: Audiovisual Deep Residual Networks for Multimodal Apparent Personality Trait Recognition

Yağmur Güçlütürk; Umut Güçlü; Marcel A. J. van Gerven; Rob van Lier

Here, we develop an audiovisual deep residual network for multimodal apparent personality trait recognition. The network is trained end-to-end to predict the Big Five personality traits of people from their videos. That is, the network does not require any feature engineering or visual analysis such as face detection, face landmark alignment or facial expression recognition. The network recently won third place in the ChaLearn First Impressions Challenge with a test accuracy of 0.9109.
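
For illustration only, here is a minimal sketch of an end-to-end audiovisual network with a fused regression head over the five traits. The architecture, input shapes, and training signal are assumptions, not the challenge submission.

```python
# Minimal sketch (not the challenge entry): a two-stream audiovisual network
# that maps a video frame and an audio clip to five apparent personality
# traits, trained end-to-end with no hand-crafted features.
import torch
import torch.nn as nn

class AudioVisualNet(nn.Module):
    def __init__(self, n_traits=5):
        super().__init__()
        # Visual stream: a few strided convolutions over a single video frame.
        self.visual = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Audio stream: 1D convolutions over a short waveform segment.
        self.audio = nn.Sequential(
            nn.Conv1d(1, 32, 9, stride=4, padding=4), nn.BatchNorm1d(32), nn.ReLU(inplace=True),
            nn.Conv1d(32, 64, 9, stride=4, padding=4), nn.BatchNorm1d(64), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Fusion head: concatenate both embeddings and regress traits in [0, 1].
        self.head = nn.Sequential(nn.Linear(64 + 64, 128), nn.ReLU(inplace=True),
                                  nn.Linear(128, n_traits), nn.Sigmoid())

    def forward(self, frame, audio):
        return self.head(torch.cat([self.visual(frame), self.audio(audio)], dim=1))

model = AudioVisualNet()
frame = torch.randn(4, 3, 112, 112)                       # batch of video frames
audio = torch.randn(4, 1, 16000)                          # batch of 1-second 16 kHz clips
traits = model(frame, audio)                              # (4, 5) predicted Big Five scores
loss = nn.functional.l1_loss(traits, torch.rand(4, 5))    # challenge scores 1 - mean absolute error
loss.backward()
```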


Scientific Reports | 2018

Representations of naturalistic stimulus complexity in early and associative visual and auditory cortices

Yağmur Güçlütürk; Umut Güçlü; M.A.J. van Gerven; R.J. van Lier

The complexity of sensory stimuli has an important role in perception and cognition. However, its neural representation is not well understood. Here, we characterize the representations of naturalistic visual and auditory stimulus complexity in early and associative visual and auditory cortices. This is realized by means of encoding and decoding analyses of two fMRI datasets in the visual and auditory modalities. Our results implicate most early and some associative sensory areas in representing the complexity of naturalistic sensory stimuli. For example, the parahippocampal place area, which was previously shown to represent scene features, is shown to also represent scene complexity. Similarly, posterior regions of the superior temporal gyrus and superior temporal sulcus, which were previously shown to represent syntactic (language) complexity, are shown to also represent music (auditory) complexity. Furthermore, our results suggest the existence of gradients in sensitivity to naturalistic sensory stimulus complexity in these areas.
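
The following sketch shows, on synthetic data, what a voxel-wise linear encoding analysis of stimulus complexity could look like: a regularised regression from a complexity feature to simulated responses, scored by held-out prediction accuracy. It is a stand-in for the published pipeline; the data sizes, noise level, and regression choice are assumptions.

```python
# Minimal sketch (not the published analysis): a voxel-wise linear encoding
# model predicting fMRI responses from a stimulus-complexity feature.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

n_stimuli, n_voxels = 200, 50
complexity = rng.random((n_stimuli, 1))                       # one complexity value per stimulus
weights = rng.standard_normal((1, n_voxels))                  # ground-truth voxel sensitivities
bold = complexity @ weights + 0.5 * rng.standard_normal((n_stimuli, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(complexity, bold, test_size=0.25, random_state=0)

# Fit one regularised linear model per voxel (RidgeCV handles multi-output y).
encoder = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X_train, y_train)
pred = encoder.predict(X_test)

# Voxel-wise correlation between predicted and measured responses quantifies
# how strongly each simulated "voxel" represents stimulus complexity.
r = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"median prediction accuracy across voxels: r = {np.median(r):.2f}")
```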


NeuroImage | 2018

Generative adversarial networks for reconstructing natural images from brain activity

K. Seeliger; Umut Güçlü; Luca Ambrogioni; Yağmur Güçlütürk; M.A.J. van Gerven

We explore a method for reconstructing visual stimuli from brain activity. Using large databases of natural images, we trained a deep convolutional generative adversarial network capable of generating grayscale photos similar to the stimuli presented during two functional magnetic resonance imaging experiments. Using a linear model, we learned to predict the generative model's latent space from measured brain activity. The objective was to create an image similar to the presented stimulus image through the previously trained generator. Using this approach, we were able to reconstruct structural and some semantic features of a proportion of the natural image sets. A behavioural test showed that subjects were capable of identifying a reconstruction of the original stimulus in 67.2% and 66.4% of the cases in a pairwise comparison for the two natural image datasets, respectively. Our approach does not require end-to-end training of a large generative model on limited neuroimaging data. Rapid advances in generative modeling promise further improvements in reconstruction performance. Highlights: (1) A generative adversarial network (DCGAN) is used for reconstructing visual percepts. (2) Minimizing an image loss, a linear model learns to predict the latent space from BOLD. (3) With a GAN limited to six handwritten characters, detailed features can be retrieved. (4) Reconstructions of arbitrary natural images are identifiable by human raters. (5) The specific GAN is a component and is replaceable by more advanced deterministic generators.
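
Below is a minimal sketch of the reconstruction idea: a frozen stand-in generator plus a linear decoder from simulated brain responses to the generator's latent space, trained by minimising an image loss. The generator architecture, data, and dimensions are placeholders, not the paper's DCGAN or fMRI data.

```python
# Minimal sketch (not the paper's code): learn a linear mapping from measured
# brain activity to a pretrained generator's latent space, then reconstruct
# the stimulus by pushing the predicted latent through the frozen generator.
import torch
import torch.nn as nn

latent_dim, n_voxels = 64, 2000

# Stand-in frozen generator: latent vector -> 64x64 grayscale image.
generator = nn.Sequential(nn.Linear(latent_dim, 1024), nn.ReLU(inplace=True),
                          nn.Linear(1024, 64 * 64), nn.Tanh())
for p in generator.parameters():
    p.requires_grad_(False)

# Linear decoding model: BOLD responses -> latent code.
decoder = nn.Linear(n_voxels, latent_dim)
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

# Placeholder training data: (brain response, latent code of the presented stimulus).
bold = torch.randn(128, n_voxels)
target_latent = torch.randn(128, latent_dim)

for _ in range(200):                                   # minimise the image-space loss
    opt.zero_grad()
    z = decoder(bold)
    loss = nn.functional.mse_loss(generator(z), generator(target_latent))
    loss.backward()
    opt.step()

reconstruction = generator(decoder(bold[:1])).reshape(64, 64)   # reconstructed stimulus
print(reconstruction.shape)
```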


IEEE Transactions on Affective Computing | 2018

Multimodal First Impression Analysis with Deep Residual Networks

Yağmur Güçlütürk; Umut Güçlü; Xavier Baró; Hugo Jair Escalante; Isabelle Guyon; Sergio Escalera; Marcel A. J. van Gerven; Rob van Lier

People form first impressions about the personalities of unfamiliar individuals even after very brief interactions with them. In this study, we present and evaluate several models that mimic this automatic social behavior. Specifically, we present several models trained on a large dataset of short YouTube video blog posts for predicting the apparent Big Five personality traits of people and whether they seem suitable to be recommended for a job interview. Along with presenting our audiovisual approach and the results that won third place in the ChaLearn First Impressions Challenge, we investigate modeling in different modalities, including audio only, visual only, language only, audiovisual, and a combination of audiovisual and language. Our results demonstrate that the best performance is obtained with a fusion of all data modalities. Finally, in order to promote explainability in machine learning and to provide an example for the upcoming ChaLearn challenges, we present a simple approach for explaining the predictions for job interview recommendations.
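
As a toy illustration of combining modalities, the sketch below fits one ridge regressor per modality feature set and averages their predictions, scored with a challenge-style 1 minus mean absolute error. The feature dimensions and the unweighted averaging are assumptions, not the paper's fusion scheme; upstream feature extractors are assumed to exist.

```python
# Minimal sketch (not the paper's models): late fusion of per-modality
# predictors of apparent Big Five traits, with synthetic features and labels.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_clips, n_traits = 300, 5
feats = {"audio": rng.standard_normal((n_clips, 40)),
         "visual": rng.standard_normal((n_clips, 128)),
         "language": rng.standard_normal((n_clips, 60))}
traits = rng.random((n_clips, n_traits))                       # apparent Big Five in [0, 1]

train, test = slice(0, 240), slice(240, None)
preds = {}
for name, X in feats.items():
    model = Ridge(alpha=1.0).fit(X[train], traits[train])      # one regressor per modality
    preds[name] = np.clip(model.predict(X[test]), 0.0, 1.0)

fused = np.mean(list(preds.values()), axis=0)                  # simple unweighted late fusion

def accuracy(p):                                               # challenge-style score: 1 - MAE
    return 1.0 - np.mean(np.abs(p - traits[test]))

for name, p in preds.items():
    print(f"{name:9s} accuracy = {accuracy(p):.3f}")
print(f"{'fused':9s} accuracy = {accuracy(fused):.3f}")
```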


arXiv: Computer Vision and Pattern Recognition | 2017

End-to-end semantic face segmentation with conditional random fields as convolutional, recurrent and adversarial networks

Umut Güçlü; Yağmur Güçlütürk; Meysam Madadi; Sergio Escalera; Xavier Baró; Jordi Gonzàlez; Rob van Lier; Marcel A. J. van Gerven


Neural Information Processing Systems | 2017

Deep adversarial neural decoding

Yağmur Güçlütürk; Umut Güçlü; K. Seeliger; S.E. Bosch; Rob van Lier; Marcel A. J. van Gerven


International Conference on Computer Vision | 2017

Visualizing Apparent Personality Analysis with Deep Residual Networks

Yağmur Güçlütürk; Umut Güçlü; Marc Pérez; Hugo Jair Escalante; Xavier Baró; Carlos Andujar; Isabelle Guyon; Julio C. S. Jacques Junior; Meysam Madadi; Sergio Escalera; Marcel A. J. van Gerven; Rob van Lier


International Symposium on Neural Networks | 2017

Design of an explainable machine learning challenge for video interviews

Hugo Jair Escalante; Isabelle Guyon; Sergio Escalera; Julio Cezar Silveira Jacques; Meysam Madadi; Xavier Baró; Stéphane Ayache; Evelyne Viegas; Yağmur Güçlütürk; Umut Güçlü; Marcel A. J. van Gerven; Rob van Lier

Collaboration


Dive into Yağmur Güçlütürk's collaborations.

Top Co-Authors

Umut Güçlü
Radboud University Nijmegen

Rob van Lier
Radboud University Nijmegen

Xavier Baró
Open University of Catalonia

Hugo Jair Escalante
National Institute of Astrophysics

Luca Ambrogioni
Radboud University Nijmegen

Isabelle Guyon
Université Paris-Saclay

Eric Maris
Radboud University Nijmegen