J. Daniel McCarthy
University of Nevada, Reno
Publications
Featured research published by J. Daniel McCarthy.
Trends in Cognitive Sciences | 2014
J. Daniel McCarthy; Gideon Caplovitz
A recent study showed that color synesthetes have increased color sensitivity but impaired motion perception. This is exciting because little research has examined how synesthesia affects basic perceptual processes outside the context of synesthetic experiences. The results suggest that synesthesia broadly impacts perception with greater neural implications than previously considered.
Consciousness and Cognition | 2013
J. Daniel McCarthy; Lianne N. Barnes; Bryan D. Alvarez; Gideon Caplovitz
In grapheme-color synesthesia, graphemes (e.g., numbers or letters) evoke color experiences. It is generally reported that the opposite is not true: colors will not generate experiences of graphemes or their associated information. However, recent research has provided evidence that colors can implicitly elicit symbolic representations of associated graphemes. Here, we examine if these representations can be cognitively accessed. Using a mathematical verification task replacing graphemes with color patches, we find that synesthetes can verify such problems with colors as accurately as with graphemes. Doing so, however, takes time: ~250 ms per color. Moreover, we find minimal reaction time switch-costs for switching between computing with graphemes and colors. This demonstrates that given specific task demands, synesthetes can cognitively access numerical information elicited by physical colors, and they do so as accurately as with graphemes. We discuss these results in the context of possible cognitive strategies used to access the information.
Attention Perception & Psychophysics | 2015
J. Daniel McCarthy; Lars Strother; Gideon Caplovitz
Objects in the world often are occluded and in motion. The visible fragments of such objects are revealed at different times and locations in space. To form coherent representations of the surfaces of these objects, the visual system must integrate local form information over space and time. We introduce a new illusion in which a rigidly rotating square is perceived on the basis of sequentially presented Pacman inducers. The illusion highlights two fundamental processes that allow us to perceive objects whose form features are revealed over time: Spatiotemporal Form Integration (STFI) and Position Updating. STFI refers to the spatial integration of persistent representations of local form features across time. Position updating of these persistent form representations allows them to be integrated into a rigid global motion percept. We describe three psychophysical experiments designed to identify spatial and temporal constraints that underlie these two processes and a fourth experiment that extends these findings to more ecologically valid stimuli. Our results indicate that although STFI can occur across relatively long delays between successive inducers (i.e., greater than 500 ms), position updating is limited to a more restricted temporal window (i.e., ~300 ms or less), and to a confined range of spatial (mis)alignment. These findings lend insight into the limits of mechanisms underlying the visual system’s capacity to integrate transient, piecemeal form information, and support coherent object representations in the ever-changing environment.
Journal of Cognitive Neuroscience | 2015
J. Daniel McCarthy; Peter Köhler; Peter U. Tse; Gideon Caplovitz
When an object moves behind a bush, for example, its visible fragments are revealed at different times and locations across the visual field. Nonetheless, a whole moving object is perceived. Unlike traditional modal and amodal completion mechanisms known to support spatial form integration when all parts of a stimulus are simultaneously visible, relatively little is known about the neural substrates of the spatiotemporal form integration (STFI) processes involved in generating coherent object representations from a succession of visible fragments. We used fMRI to identify brain regions involved in two mechanisms supporting the representation of stationary and rigidly rotating objects whose form features are shown in succession: STFI and position updating. STFI allows past and present form cues to be integrated over space and time into a coherent object even when the object is not visible in any given frame. STFI can occur whether or not the object is moving. Position updating allows us to perceive a moving object, whether rigidly rotating or translating, even when its form features are revealed at different times and locations in space. Our results suggest that STFI is mediated by visual regions beyond V1 and V2. Moreover, although widespread cortical activation has been observed for other motion percepts derived solely from form-based analyses [Tse, P. U. Neural correlates of transformational apparent motion. Neuroimage, 31, 766-773, 2006; Krekelberg, B., Vatakis, A., & Kourtzi, Z. Implied motion from form in the human visual cortex. Journal of Neurophysiology, 94, 4373-4386, 2005], increased responses for the position updating that leads to rigidly rotating object representations were observed only in visual areas KO and possibly hMT+, indicating that this is a distinct and highly specialized type of processing.
F1000Research | 2013
J. Daniel McCarthy; Colin Kupitz; Gideon Caplovitz
Our perception of an object’s size arises from the integration of multiple sources of visual information, including retinal size, perceived distance, and its size relative to other objects in the visual field. This constructive process is revealed through a number of classic size illusions, such as the Delboeuf Illusion, the Ebbinghaus Illusion, and others illustrating size constancy. Here we present a novel variant of the Delboeuf and Ebbinghaus size illusions that we have named the Binding Ring Illusion. The illusion is such that the perceived size of a circular array of elements is underestimated when a circular contour – a binding ring – is superimposed on it, and overestimated when the binding ring slightly exceeds the overall size of the array. Here we characterize the stimulus conditions that lead to the illusion, and the perceptual principles that underlie it. Our findings indicate that the perceived size of an array is susceptible to assimilation toward an explicitly defined superimposed contour. Our results also indicate that the assimilation process takes place at a relatively high level in the visual processing stream, after different spatial frequencies have been integrated and global shape has been constructed. We hypothesize that the Binding Ring Illusion arises because the size of an array of elements is not explicitly defined and therefore can be influenced (through a process of assimilation) by the presence of a superimposed object that does have an explicit size.
Archive | 2017
J. Daniel McCarthy; Gennady Erlikhman; Gideon Caplovitz
When an object partially or completely disappears behind an occluding surface, a representation of that object persists. For example, fragments of no longer visible objects can serve as an input into mid-level constructive visual processes, interacting and integrating with currently visible portions to form perceptual units and global motion signals. Remarkably, these persistent representations need not be static and can have their positions and orientations updated postdictively as new information becomes visible. In this chapter, we highlight historical considerations, behavioral evidence, and neural correlates of this type of representational updating of no longer visible information at three distinct levels of visual processing. At the lowest level, we discuss spatiotemporal boundary formation, in which visual transients can be integrated over space and time to construct local illusory edges, global form, and global motion percepts. At an intermediate level, we review how the visual system updates form information seen at one moment in time and integrates it with subsequently available information to generate global shape and motion representations (e.g., spatiotemporal form integration and anorthoscopic perception). At a higher level, when an entire object completely disappears behind an occluder, the object's identity and predicted position can be maintained in the absence of visual information.
Journal of Vision | 2016
J. Daniel McCarthy; Joo-Hyun Song
In daily life, humans interact with multiple objects in complex environments. A large body of literature demonstrates that target selection is biased toward recently attended features, such that reaches are faster and trajectory curvature is reduced when target features (i.e., color) are repeated (priming of pop-out). In the real world, however, objects are composed of several features—some of which may be more suitable for action than others. When fetching a mug from the cupboard, for example, attention has to be allocated not only to the object, but also to the handle. To date, no study has investigated the impact of hierarchical feature organization on target selection for action. Here, we employed a color-oddity search task in which targets were Pac-men (i.e., circles with a triangle cut out) oriented to be either consistent or inconsistent with the percept of a global Kanizsa triangle. We found that reaches were initiated faster when a task-irrelevant illusory figure was present, independent of color repetition. Additionally, consistent with priming of pop-out, both reach planning and execution were facilitated when local target colors were repeated, regardless of whether a global figure was present. We also demonstrated that figures defined by illusory, but not real, contours afforded an early target selection benefit. In sum, these findings suggest that when local targets are perceptually grouped to form an illusory surface, attention quickly spreads across the global figure and facilitates the early stage of reach planning, but not execution. In contrast, local color priming is evident throughout goal-directed reaching.
F1000Research | 2015
J. Daniel McCarthy; Joo-Hyun Song
Archive | 2014
J. Daniel McCarthy; Colin Kupitz; Gideon Caplovitz
F1000Research | 2014
J. Daniel McCarthy; Lianne N. Barnes; Bryan D. Alvarez; Gideon Caplovitz