
Publications


Featured research published by Dejan Draschkow.


Journal of Vision | 2014

Seek and you shall remember: scene semantics interact with visual search to build better memories.

Dejan Draschkow; Jeremy M. Wolfe; Melissa L.-H. Võ

Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization.


Acta Psychologica | 2016

Gist in time: Scene semantics and structure enhance recall of searched objects

Emilie Josephs; Dejan Draschkow; Jeremy M. Wolfe; Melissa L.-H. Võ

Previous work has shown that recall of objects that are incidentally encountered as targets in visual search is better than recall of objects that have been intentionally memorized (Draschkow, Wolfe, & Võ, 2014). However, this counter-intuitive result is not seen when these tasks are performed with non-scene stimuli. The goal of the current paper is to determine what features of search in a scene contribute to higher recall rates when compared to a memorization task. In each of four experiments, we compare the free recall rate for target objects following a search to the rate following a memorization task. Across the experiments, the stimuli include progressively more scene-related information. Experiment 1 provides the spatial relations between objects. Experiment 2 adds relative size and depth of objects. Experiments 3 and 4 include scene layout and semantic information. We find that search leads to better recall than explicit memorization in cases where scene layout and semantic information are present, as long as the participant has ample time (2500 ms) to integrate this information with knowledge about the target object (Exp. 4). These results suggest that the integration of scene and target information not only leads to more efficient search, but can also contribute to stronger memory representations than intentional memorization.


Scientific Reports | 2017

Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search

Dejan Draschkow; Melissa L.-H. Võ

Predictions of environmental rules (here referred to as “scene grammar”) can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing such grammatical violations. Conversely, the generative process of creating an environment according to one’s scene grammar, and its effects on behavior and memory, has received little attention. In a virtual reality paradigm, we instructed participants to arrange objects either according to their scene grammar or against it. Subsequently, participants’ memory for the arrangements was probed using a surprise recall (Exp. 1) or repeated search (Exp. 2) task. Participants’ construction behavior showed strategic use of larger, static objects to anchor the location of smaller objects, which are generally the goals of everyday actions. Further analysis of this scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.


Attention, Perception, & Psychophysics | 2016

Of "what" and "where" in a natural search task: Active object handling supports object location memory beyond the object's identity.

Dejan Draschkow; Melissa L.-H. Võ

Looking for, as well as actively manipulating, objects that are relevant to ongoing behavioral goals are intricate parts of natural behavior. It is, however, not clear to what degree these two forms of interaction with our visual environment differ with regard to their memory representations. In a real-world paradigm, we investigated whether physically engaging with objects as part of a search task influences identity and position memory differently for task-relevant versus irrelevant objects. Participants equipped with a mobile eye tracker either searched for cued objects without object interaction (Find condition) or actively collected the objects they found (Handle condition). In the following free-recall task, identity memory was assessed, demonstrating superior memory for relevant compared to irrelevant objects, but no difference between the Handle and Find conditions. Subsequently, location memory was inferred via times to first fixation in a final object search task. Active object manipulation and task relevance interacted in that location memory for relevant objects was superior to that for irrelevant ones only in the Handle condition. Including previous object recall performance as a covariate in the linear mixed-model analysis of times to first fixation allowed us to explore the interaction between remembered/forgotten object identities and the expression of location memory. Identity memory performance predicted location memory in the Find but not the Handle condition, suggesting that active object handling leads to strong spatial representations independent of object identity memory. We argue that object handling facilitates the prioritization of relevant location information, but this might come at the cost of deprioritizing irrelevant information.
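
For readers unfamiliar with this kind of covariate analysis, below is a minimal sketch of a linear mixed model of times to first fixation with recall performance entered as a covariate, using statsmodels. The data frame, file path, column names, and model formula are hypothetical placeholders; the abstract does not specify the exact model structure.

# Minimal sketch of a linear mixed model of times to first fixation with
# prior recall performance as a covariate (statsmodels). Column names and
# the file path are hypothetical, not the study's actual variables.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per trial, with time to first
# fixation, condition (Find/Handle), task relevance, and whether the
# object's identity was previously recalled (0/1 covariate).
df = pd.read_csv("fixation_data.csv")  # placeholder path

# Random intercepts per participant account for repeated measures; the
# condition-by-recall term probes whether identity memory predicts
# location memory differently in the Find and Handle conditions.
model = smf.mixedlm(
    "time_to_first_fixation ~ condition * relevance + condition * recalled",
    data=df,
    groups=df["subject"],
)
print(model.fit().summary())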


Neuropsychologia | 2018

No evidence from MVPA for different processes underlying the N300 and N400 incongruity effects in object-scene processing

Dejan Draschkow; Edvard Heikel; Melissa L.-H. Võ; Christian J. Fiebach; Jona Sassenhagen

Attributing meaning to diverse visual input is a core feature of human cognition. Violating environmental expectations (e.g., a toothbrush in the fridge) induces a late negativity in the event-related potential (ERP). This N400 component has been linked not only to the semantic processing of language, but also to that of objects and scenes. Inconsistent object-scene relationships are additionally associated with an earlier negative deflection of the EEG signal between 250 and 350 ms. This N300 is hypothesized to reflect pre-semantic perceptual processes. To investigate whether these two components are truly separable, or whether the early object-scene integration activity (250-350 ms) shares certain levels of processing with the late neural correlates of meaning processing (350-500 ms), we used time-resolved multivariate pattern analysis (MVPA), in which a classifier trained at one time point in a trial (e.g., during the N300 time window) is tested at every other time point (i.e., including the N400 time window). Forty participants were presented with semantic inconsistencies in which an object was inconsistent with a scene's meaning. Replicating previous findings, our manipulation produced significant N300 and N400 deflections. MVPA revealed above-chance decoding performance for classifiers trained during time points of the N300 component and tested during later time points of the N400, and vice versa. This provides no evidence for the activation of two separable neurocognitive processes following the violation of context-dependent predictions in visual scene perception. Our data support the early appearance of high-level, context-sensitive processes in visual cognition.
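
As an illustration of the temporal generalization approach described above (train at one time point, test at all others), here is a minimal sketch using MNE-Python's decoding API. The simulated data, epoch dimensions, and classifier choice are assumptions for demonstration, not the study's actual recordings or pipeline.

# Minimal sketch of time-resolved MVPA with temporal generalization,
# assuming MNE-Python's decoding API. The data are simulated; shapes,
# labels, and classifier are illustrative only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from mne.decoding import GeneralizingEstimator, cross_val_multiscore

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 150            # hypothetical epochs
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)                    # consistent vs. inconsistent

# Train a classifier at each time point, then test it at every time point.
clf = make_pipeline(StandardScaler(), LogisticRegression(solver="liblinear"))
time_gen = GeneralizingEstimator(clf, scoring="roc_auc", n_jobs=1)

# scores[i, j]: classifier trained at time i, tested at time j
# (cross-validated, then averaged over folds).
scores = cross_val_multiscore(time_gen, X, y, cv=5).mean(axis=0)

# Above-chance scores off the diagonal (e.g., trained in the N300 window,
# tested in the N400 window, and vice versa) indicate shared activity patterns.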


Scientific Reports | 2018

The role of scene summary statistics in object recognition

Tim Lauer; Tim Cornelissen; Dejan Draschkow; Verena Willenbockel; Melissa L.-H. Võ

Objects that are semantically related to the visual scene context are typically better recognized than unrelated objects. While context effects on object recognition are well studied, the question of which particular visual information in an object’s surroundings modulates its semantic processing is still unresolved. Typically, one would expect contextual influences to arise from high-level, semantic components of a scene, but what if even low-level features could modulate object processing? Here, we generated seemingly meaningless textures of real-world scenes, which preserved similar summary statistics but discarded spatial layout information. In Experiment 1, participants categorized such textures better than colour controls that lacked higher-order scene statistics, while original scenes resulted in the highest performance. In Experiment 2, participants recognized briefly presented consistent objects on scenes significantly better than inconsistent objects, whereas on textures, consistent objects were recognized only slightly more accurately. In Experiment 3, we recorded event-related potentials and observed a pronounced mid-central negativity in the N300/N400 time windows for inconsistent relative to consistent objects on scenes. Critically, inconsistent objects on textures also triggered N300/N400 effects with a comparable time course, though less pronounced. Our results suggest that a scene’s low-level features contribute to the effective processing of objects in complex real-world environments.
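
The study's textures were produced with a texture-synthesis model that matches a rich set of summary statistics. As a much simpler, illustrative stand-in for the idea of keeping low-level statistics while discarding spatial layout, the sketch below applies Fourier phase-scrambling with NumPy; it preserves only the amplitude spectrum (second-order statistics) and is not the method the authors used.

# Illustrative stand-in: Fourier phase-scrambling discards spatial layout
# while preserving an image's amplitude spectrum (second-order statistics).
# NOTE: this is NOT the texture-synthesis model used in the paper, which
# matches a much richer set of summary statistics.
import numpy as np

def phase_scramble(image: np.ndarray, seed: int = 0) -> np.ndarray:
    """Add a random, Hermitian-symmetric phase to a grayscale image's
    Fourier spectrum, keeping its amplitude spectrum intact."""
    rng = np.random.default_rng(seed)
    # The phase of a random real-valued image is Hermitian-symmetric, so
    # the inverse transform is real up to numerical error.
    random_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))
    spectrum = np.fft.fft2(image)
    scrambled = np.abs(spectrum) * np.exp(1j * (np.angle(spectrum) + random_phase))
    return np.real(np.fft.ifft2(scrambled))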


Quarterly Journal of Experimental Psychology | 2018

The lower bounds of massive memory: Investigating memory for object details after incidental encoding

Dejan Draschkow; Saliha Reinecke; Corbin Cunningham; Melissa L.-H. Võ

Visual long-term memory capacity appears massive and detailed when probed explicitly. In the real world, however, memories are usually built from chance encounters. Therefore, we investigated the capacity and detail of incidental memory in a novel encoding task, instructing participants to detect visually distorted objects among intact objects. In a subsequent surprise recognition memory test, lures of a novel category, another exemplar, the same object in a different state, or exactly the same object were presented. Lure recognition performance was above chance, suggesting that incidental encoding resulted in reliable memory formation. Critically, performance for state lures was worse than for exemplars, which was driven by a greater similarity of state as opposed to exemplar foils to the original objects. Our results indicate that incidentally generated visual long-term memory representations of isolated objects are more limited in detail than recently suggested.


Journal of Vision | 2014

Active visual search boosts memory for objects, but only when making a scene

Emilie Josephs; Dejan Draschkow; Jeremy M. Wolfe; Melissa L.-H. Võ

It seems intuitive that intentionally memorizing objects in scenes would create stronger memory representations than incidental encoding, such as might occur during visual search. Contrary to this intuition, we have shown that observers recalled more objects from photographic scenes following object search than following intentional memorization of the same objects (Draschkow, Võ, & Wolfe, 2013). Does the act of searching itself produce better memorization, or is it necessary to search in realistic scenes? We conducted two experiments in which exemplars of the target objects from the naturalistic scenes were placed on non-scene backgrounds. Object placement was random in Exp. 1. In Exp. 2, placement mimicked real-world positions (e.g., mirror above sink) but did not convey depth information. Displays contained 15 critical objects. Ten were targets in the search or memory tasks. In the search task, observers located target objects (indicated by word cues) as quickly as possible, but were not told to memorize anything. In the memory task, the target object was framed by a red square for 3 s immediately after the cue, eliminating the search. Observers were instructed to remember as much as possible about the scene, especially the framed objects. Each task was followed by a free recall test, in which participants drew as much of the displays as they could remember. Recall performance was evaluated by counting the number of drawn targets. The recall benefit for searched over memorized objects in scenes was eliminated in non-scene displays. Apparently, the simple act of searching is not enough to create a search benefit. While the grouping of objects in Exp. 2 improved overall object memory, it also failed to produce the recall benefit for searched objects that was observed in naturalistic scenes. Thus we conclude that realistic inter-object relationships are not sufficient to benefit memory for searched objects.


Journal of Vision | 2013

Task dependent memory recall performance of naturalistic scenes: Incidental memorization during search outperforms intentional scene memorization

Dejan Draschkow; Melissa L.-H. Võ; Ray Farmer; Jeremy M. Wolfe

Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in scenes during search was compared to explicit memorization of those scenes. Participants were shown 10 different indoor scenes. 150 objects (15 per scene) were preselected, and regions of interest were defined around each object for eye tracking analysis. Observers searched for ten objects in each of five scenes (50 trials, mean RT = 3550 ms). For the other five scenes, observers spent 15 seconds memorizing as much as possible of each scene for later recall (task order and scene assignments were counterbalanced). We subsequently tested explicit scene memory in two ways. Global scene representation/boundary extension was assessed by showing observers previously presented scenes and asking whether each scene was closer up or further away than the “original”. Detailed scene memory was assessed by asking participants to redraw each of the 10 scenes and their objects. We found no indication of boundary extension for these complex indoor scenes; inferences about the scene beyond the image border did not become part of scene memory. Overall, a rather small percentage of the objects were subsequently drawn. Interestingly, even though participants in the search condition were not explicitly asked to memorize the scenes, they reproduced a substantially greater number of objects (22%) than participants in the memory condition (11%). This advantage was produced by 29% recall of search targets, which received the highest gaze durations (2600 ms). Only 9% of all distractors were recalled despite mean gaze durations of 1730 ms. Objects in the memorize condition were looked at for only 700 ms, yet 11% of these objects were recalled, suggesting differential, task-dependent encoding strategies. Instructions to search for specific objects produced stronger encoding than the general request to memorize the scene, even though the critical objects were repeatedly fixated in both conditions.


Journal of Vision | 2016

Of What and Where in a natural search task: Active object handling supports object location memory beyond the objects' identity

Dejan Draschkow; Melissa L.-H. Võ

Collaboration


Explore Dejan Draschkow's collaborations.

Top Co-Authors

Melissa L.-H. Võ (Goethe University Frankfurt)
Jeremy M. Wolfe (Brigham and Women's Hospital)
Edvard Heikel (Goethe University Frankfurt)
Jona Sassenhagen (Goethe University Frankfurt)
Caroline Seidel (Goethe University Frankfurt)
Saliha Reinecke (Goethe University Frankfurt)
Tim Cornelissen (Goethe University Frankfurt)