
Publication


Featured research published by Nicholas A. Giudice.


Current Biology | 2011

Modality-Independent Coding of Spatial Layout in the Human Brain

Thomas Wolbers; Roberta L. Klatzky; Jack M. Loomis; Magdalena Wutte; Nicholas A. Giudice

In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, because most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two functional magnetic resonance imaging (fMRI) experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain.


Computer Vision and Pattern Recognition | 2005

Digital Sign System for Indoor Wayfinding for the Visually Impaired

Bosco S. Tjan; Paul J. Beckmann; Rudrava Roy; Nicholas A. Giudice; Gordon E. Legge

Mobility challenges and independent travel are major concerns for blind and visually impaired pedestrians [1, 2]. Navigation and wayfinding in unfamiliar indoor environments are particularly challenging because blind pedestrians do not have ready access to building maps, signs and other orienting devices. The development of assistive technologies to aid wayfinding is hampered by the lack of a reliable and cost-efficient method for providing location information in an indoor environment. Here we describe the design and implementation of a digital sign system based on low-cost passive retro-reflective tags printed with specially designed patterns that can be readily detected and identified by a handheld camera and machine-vision system. Performance of the prototype showed the tag detection/recognition system could cope with the real-world environment of a typical building.
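The abstract does not specify how the printed patterns encode sign identities, but the general idea of reading an ID from a thresholded cell grid can be sketched. The 3x3 grid size and row-major bit order below are illustrative assumptions, not the paper's actual tag design:

```python
def decode_tag(grid):
    """Decode a detected tag's cell grid into an integer ID.

    `grid` is a row-major list of rows of 0/1 cell values, as thresholded
    from the camera image after the tag has been located and rectified.
    Bits are read left-to-right, top-to-bottom, most significant bit first.
    """
    tag_id = 0
    for row in grid:
        for bit in row:
            tag_id = (tag_id << 1) | bit  # shift in the next cell's bit
    return tag_id

# A hypothetical 3x3 pattern whose nine cells spell out binary 101100111.
pattern = [[1, 0, 1],
           [1, 0, 0],
           [1, 1, 1]]
print(decode_tag(pattern))  # -> 359
```

A real system would also need robust detection (finding the retro-reflective tag in the frame) and error checking, which this sketch omits.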


Conference on Computers and Accessibility | 2012

Learning non-visual graphical information using a touch-based vibro-audio interface

Nicholas A. Giudice; Hari Prasath Palani; Eric Brenner; Kevin M. Kramer

This paper evaluates an inexpensive and intuitive approach for providing non-visual access to graphic material, called a vibro-audio interface. The system works by allowing users to freely explore graphical information on the touchscreen of a commercially available tablet and synchronously triggering vibration patterns and auditory information whenever an on-screen visual element is touched. Three studies were conducted that assessed legibility and comprehension of the relative relations and global structure of a bar graph (Exp 1), pattern recognition via a letter identification task (Exp 2), and orientation discrimination of geometric shapes (Exp 3). Performance with the touch-based device was compared to the same tasks performed using standard hardcopy tactile graphics. Results showed similar error performance between modes for all measures, indicating that the vibro-audio interface is a viable multimodal solution for providing access to dynamic visual information and supporting accurate spatial learning and the development of mental representations of graphical material.
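The core interaction loop described above — hit-testing the touch point against on-screen elements and firing vibration plus a spoken label on contact — can be sketched in a few lines. The `Element` class, coordinate layout, and return convention here are illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Element:
    """An on-screen visual element with a bounding box and a spoken label."""
    name: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, tx: float, ty: float) -> bool:
        """True if the touch point (tx, ty) falls inside this element."""
        return self.x <= tx <= self.x + self.w and self.y <= ty <= self.y + self.h

def feedback_for_touch(elements, tx, ty):
    """Return (vibrate, spoken_label) for a touch at (tx, ty).

    Vibration fires whenever the finger is on any element; the label names
    the element so synchronized audio can be synthesized alongside.
    """
    for el in elements:
        if el.contains(tx, ty):
            return True, el.name
    return False, None

# Toy "bar graph": two bars as rectangles on a hypothetical 800x600 screen.
bars = [Element("bar A", 100, 200, 80, 400),
        Element("bar B", 300, 350, 80, 250)]

print(feedback_for_touch(bars, 120, 300))  # -> (True, 'bar A')
print(feedback_for_touch(bars, 500, 300))  # -> (False, None)
```

On a real tablet, the touch coordinates would come from the platform's touch events and the two feedback channels would drive the vibration motor and a speech synthesizer.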


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2011

Functional Equivalence of Spatial Images from Touch and Vision: Evidence from Spatial Updating in Blind and Sighted Individuals

Nicholas A. Giudice; Maryann R. Betty; Jack M. Loomis

This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most important, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.


Archive | 2013

Representing 3D Space in Working Memory: Spatial Images from Vision, Hearing, Touch, and Language

Jack M. Loomis; Roberta L. Klatzky; Nicholas A. Giudice

The chapter deals with a form of transient spatial representation referred to as a spatial image. Like a percept, it is externalized, scaled to the environment, and can appear in any direction about the observer. It transcends the concept of modality, as it can be based on inputs from the three spatial senses, from language, and from long-term memory. Evidence is presented that supports each of the claimed properties of the spatial image, showing that it is quite different from a visual image. Much of the evidence presented is based on spatial updating. A major concern is whether spatial images from different input modalities are functionally equivalent—that once instantiated in working memory, the spatial images from different modalities have the same functional characteristics with respect to subsequent processing, such as that involved in spatial updating. Going further, the research provides some evidence that spatial images are amodal (i.e., do not retain modality-specific features).


International Conference of the IEEE Engineering in Medicine and Biology Society | 2008

An indoor navigation system to support the visually impaired

Timothy H. Riehle; Patrick A. Lichter; Nicholas A. Giudice

Indoor navigation technology is needed to support seamless mobility for the visually impaired. A small portable personal navigation device that provides current position, useful contextual wayfinding information about the indoor environment and directions to a destination would greatly improve access and independence for people with low vision. This paper describes the construction of such a device which utilizes a commercial Ultra-Wideband (UWB) asset tracking system to support real-time location and navigation information. Human trials were conducted to assess the efficacy of the system by comparing target-finding performance between blindfolded subjects using the navigation system for real-time guidance, and blindfolded subjects who only received speech information about their local surroundings but no route guidance information (similar to that available from a long cane or guide dog). A normal vision control condition was also run. The time and distance traveled were measured in each trial, and a point-back test was performed after goal completion to assess cognitive map development. Statistically significant differences were observed between the three conditions in time and distance traveled, with the navigation system and the visual condition yielding the best results, and the navigation system dramatically outperforming the non-guided condition.
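Turning a tracked position into spoken directions comes down to computing the distance and relative turn angle from the user's current pose to the target. The coordinate frame, heading convention, and function below are a hypothetical sketch, not the system's actual guidance algorithm:

```python
import math

def guidance(pos, heading_deg, target):
    """Distance (m) and relative turn (deg) from a tracked position to a target.

    `pos` and `target` are (x, y) in meters in the indoor tracker's frame;
    `heading_deg` is the user's facing direction, measured counterclockwise
    from the +x axis. A positive turn means "turn left".
    """
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)                     # straight-line distance
    bearing = math.degrees(math.atan2(dy, dx))    # absolute direction to target
    turn = (bearing - heading_deg + 180) % 360 - 180  # wrap to [-180, 180)
    return dist, turn

# User at the origin facing along +x; target 3 m right, 4 m forward.
d, t = guidance(pos=(0.0, 0.0), heading_deg=0.0, target=(3.0, 4.0))
print(f"{d:.1f} m, turn {t:.0f} deg")  # -> 5.0 m, turn 53 deg
```

In the described device, `pos` would be refreshed from the UWB tracking system and the result rendered as speech (e.g., "target 5 meters ahead, bear left").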


Multisensory Research | 2014

Touch-Screen Technology for the Dynamic Display of 2D Spatial Information Without Vision: Promise and Progress

Roberta L. Klatzky; Nicholas A. Giudice; Christopher R. Bennett; Jack M. Loomis

Many developers wish to capitalize on touch-screen technology for developing aids for the blind, particularly by incorporating vibrotactile stimulation to convey patterns on their surfaces, which otherwise are featureless. Our belief is that they will need to take into account basic research on haptic perception in designing these graphics interfaces. We point out constraints and limitations in haptic processing that affect the use of these devices. We also suggest ways to use sound to augment basic information from touch, and we include evaluation data from users of a touch-screen device with vibrotactile and auditory feedback that we have been developing, called a vibro-audio interface.


IEEE Transactions on Haptics | 2015

Designing Media for Visually-Impaired Users of Refreshable Touch Displays: Possibilities and Pitfalls

Sile O'Modhrain; Nicholas A. Giudice; John A. Gardner; Gordon E. Legge

This paper discusses issues of importance to designers of media for visually impaired users. The paper considers the influence of human factors on the effectiveness of presentation as well as the strengths and weaknesses of tactile, vibrotactile, haptic, and multimodal methods of rendering maps, graphs, and models. The authors, all of whom are visually impaired researchers in this domain, present findings from their own work and work of many others who have contributed to the current understanding of how to prepare and render images for both hard-copy and technology-mediated presentation of Braille and tangible graphics.


Perception | 2008

Learning Building Layouts with Non-Geometric Visual Information: The Effects of Visual Impairment and Age

Amy Kalia; Gordon E. Legge; Nicholas A. Giudice

Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs, and lighting) for acquiring cognitive maps of novel indoor layouts. In this study we asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants: younger (<50 years of age) normally sighted; older (50–70 years of age) normally sighted; and low-vision (people with heterogeneous forms of visual impairment ranging in age from 18 to 67 years). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (sparse VE); a VE displaying both geometric and non-geometric cues (photorealistic VE); a map; and a real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information.


Spatial Cognition and Computation | 2009

Evidence for Amodal Representations after Bimodal Learning: Integration of Haptic-Visual Layouts into a Common Spatial Image

Nicholas A. Giudice; Roberta L. Klatzky; Jack M. Loomis

Participants learned circular layouts of six objects presented haptically or visually, then indicated the direction from a start target to an end target of the same or different modality (intramodal versus intermodal). When objects from the two modalities were learned separately, superior performance for intramodal trials indicated a cost of switching between modalities. When a bimodal layout intermixing modalities was learned, intra- and intermodal trials did not differ reliably. These findings indicate that a spatial image, independent of input modality, can be formed when inputs are spatially and temporally congruent, but not when modalities are temporally segregated in learning.

Collaboration


Dive into Nicholas A. Giudice's collaboration.

Top Co-Authors


Jack M. Loomis

University of California
