Guido Maiello
Northeastern University
Publications
Featured research published by Guido Maiello.
Behavior Research Methods | 2017
Agostino Gibaldi; Mauricio Vanegas; Peter J. Bex; Guido Maiello
The Tobii EyeX Controller is a new low-cost binocular eye tracker marketed for integration in gaming and consumer applications. The manufacturers claim that the system was conceived for natural eye gaze interaction, does not require continuous recalibration, and allows moderate head movements. The Controller is provided with an SDK to foster the development of new eye tracking applications. We review the characteristics of the device for its possible use in scientific research. We develop and evaluate an open-source Matlab Toolkit that can be employed to interface with the EyeX device for gaze recording in behavioral experiments. The Toolkit provides calibration procedures tailored to both binocular and monocular experiments, as well as procedures to evaluate other eye tracking devices. The observed performance of the EyeX (i.e., accuracy < 0.6°, precision < 0.25°, latency < 50 ms, and sampling frequency ≈ 55 Hz) is sufficient for some classes of research application. The device can be successfully employed to measure fixation parameters and saccadic, smooth pursuit, and vergence eye movements. However, the relatively low sampling rate and moderate precision limit the suitability of the EyeX for monitoring microsaccadic eye movements or for real-time gaze-contingent stimulus control. For these applications, research-grade, high-cost eye tracking technology may still be necessary. Therefore, despite its limitations with respect to high-end devices, the EyeX has the potential to further the dissemination of eye tracking technology to a broad audience, and could be a valuable asset in consumer and gaming applications as well as a subset of basic and clinical research settings.
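The accuracy and precision figures above follow the standard eye-tracking definitions (mean offset from the true target, and RMS of sample-to-sample dispersion). A minimal sketch of how these could be computed from recorded gaze samples; the function names are illustrative, not taken from the Toolkit:

```python
import math

def accuracy_deg(gaze, target):
    """Mean Euclidean offset (deg) between gaze samples and the true target."""
    return sum(math.dist(g, target) for g in gaze) / len(gaze)

def precision_rms_deg(gaze):
    """RMS of successive sample-to-sample distances (deg): spatial noise."""
    steps = [math.dist(a, b) for a, b in zip(gaze, gaze[1:])]
    return math.sqrt(sum(d * d for d in steps) / len(steps))

# Example: a perfectly steady fixation recorded 0.3 deg away from the target.
gaze = [(0.3, 0.0)] * 100
print(round(accuracy_deg(gaze, (0.0, 0.0)), 3))  # 0.3
print(precision_rms_deg(gaze))                   # 0.0 -- no sample noise
```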
Journal of Vision | 2014
Guido Maiello; Manuela Chessa; Fabio Solari; Peter J. Bex
We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual’s aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion.
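The gaze-contingent refocusing step can be caricatured as selecting, from the light-field stack, the captured focal plane nearest to the scene depth under fixation. The helper and its arguments below are hypothetical names, not the authors' implementation:

```python
def refocused_plane(gaze_xy, depth_map, focal_depths):
    """Return the index of the light-field focal plane whose depth is
    nearest to the scene depth at the current gaze position."""
    x, y = gaze_xy
    d = depth_map[y][x]  # scene depth (m) at the fixated pixel
    return min(range(len(focal_depths)),
               key=lambda i: abs(focal_depths[i] - d))

depth_map = [[0.5, 0.5], [2.0, 2.0]]  # toy 2x2 depth map (metres)
planes = [0.4, 1.0, 2.2]              # depths of the captured focal planes
print(refocused_plane((0, 1), depth_map, planes))  # depth 2.0 m -> plane index 2
```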
PLOS ONE | 2015
Guido Maiello; Manuela Chessa; Fabio Solari; Peter J. Bex
We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer’s own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed.
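Sensitivity in a 2AFC task is conventionally summarized as d′ = √2 · z(proportion correct), where z is the inverse normal CDF; a minimal sketch:

```python
from statistics import NormalDist

def dprime_2afc(p_correct):
    """Sensitivity index for a 2AFC task: d' = sqrt(2) * z(proportion correct)."""
    return 2 ** 0.5 * NormalDist().inv_cdf(p_correct)

print(round(dprime_2afc(0.76), 2))  # 1.0 -- 76% correct corresponds to d' near 1
```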
Human-Computer Interaction | 2016
Manuela Chessa; Guido Maiello; Alessia Borsari; Peter J. Bex
The recent release of the Oculus Rift, originally developed for entertainment applications, has reignited the interest of researchers and clinicians toward the use of head-mounted displays in basic behavioral research and physical and psychological rehabilitation. However, careful evaluation of the Oculus Rift is necessary to determine whether it can be effectively used in these novel applications. In this article we address two issues concerning the perceptual quality of the Oculus Rift. (a) Is the Oculus able to generate an acceptable degree of immersivity? In particular, is it possible to elicit the sensation of presence via the virtual stimuli rendered by the device? (b) Does the Virtual Reality experienced through the Oculus Rift induce physical discomfort? To answer these questions, we employed four virtual scenarios in three separate experiments and evaluated performance with objective and subjective outcomes. In Experiment 1 we monitored observers' heart rate and asked them to rate their Virtual Reality experience via a custom questionnaire. In Experiment 2 we monitored observers' head movements in reaction to virtual obstacles and asked them to fill out the Simulator Sickness Questionnaire (Kennedy et al., 1993) both before and after experiencing Virtual Reality. In Experiment 3 we compared the Oculus Rift against two other low-cost devices used in immersive Virtual Reality: the Google Cardboard and a standard 3DTV monitor. Observers' heart rate increased during exposure to Virtual Reality, and they subjectively reported the experience to be immersive and realistic. We found a strong relationship between observers' fear of heights and vertigo experienced during one of the virtual scenarios involving heights, suggesting that observers felt a strong sensation of presence within the virtual worlds. Subjects reacted to virtual obstacles by moving to avoid them, suggesting that the obstacles were perceived as real threats. Observers did not experience simulator sickness when the exposure to Virtual Reality was short and did not induce excessive amounts of vection. Compared to the other devices, the Oculus Rift elicited a greater degree of immersivity. Thus, our investigation suggests that the Oculus Rift head-mounted display is a potentially powerful tool for a wide array of basic research and clinical applications.
Journal of Vision | 2017
Guido Maiello; Lenna E. Walker; Peter J. Bex; Fuensanta A. Vera-Diaz
We evaluated the ability of emmetropic and myopic observers to detect and discriminate blur across the retina under monocular or binocular viewing conditions. We recruited 39 young (23–30 years) healthy adults (n = 19 myopes) with best-corrected visual acuity 0.0 LogMAR (20/20) or better in each eye and no binocular or accommodative dysfunction. Monocular and binocular blur discrimination thresholds were measured as a function of pedestal blur using naturalistic stimuli with an adaptive 4AFC procedure. Stimuli were presented in a 46° diameter window at 40 cm. Gaussian blur pedestals were confined to an annulus at either 0°, 4°, 8°, or 12° eccentricity, with a blur increment applied to only one quadrant of the image. The adaptive procedure efficiently estimated a dipper shaped blur discrimination threshold function with two parameters: intrinsic blur and blur sensitivity. The amount of intrinsic blur increased for retinal eccentricities beyond 4° (p < 0.001) and was lower in binocular than monocular conditions (p < 0.001), but was similar across refractive groups (p = 0.47). Blur sensitivity decreased with retinal eccentricity (p < 0.001) and was highest for binocular viewing, but only for central vision (p < 0.05). Myopes showed worse blur sensitivity than emmetropes monocularly (p < 0.05) but not binocularly (p = 0.66). As expected, blur perception worsens in the visual periphery and binocular summation is most evident in central vision. Furthermore, myopes exhibit a monocular impairment in blur sensitivity that improves under binocular conditions. Implications for the development of myopia are discussed.
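The dipper-shaped threshold function with intrinsic-blur and blur-sensitivity parameters can be illustrated with a common intrinsic-blur/Weber model: the observer responds to total blur √(b² + b₀²) and detects an increment once total blur grows by a Weber fraction w. This parameterization is an assumption for illustration; the paper's exact model may differ:

```python
import math

def blur_threshold(pedestal, intrinsic, weber):
    """Blur-increment threshold under an intrinsic-blur model: the observer
    detects the increment once total blur sqrt(b^2 + b0^2) grows by a
    Weber fraction w.  Solving for the increment yields a dipper shape."""
    b0, w = intrinsic, weber
    total = math.hypot(pedestal, b0)
    return math.sqrt(((1 + w) * total) ** 2 - b0 ** 2) - pedestal

# Thresholds dip near the intrinsic-blur level, then rise (Weber regime).
thr = [blur_threshold(b, intrinsic=0.5, weber=0.2) for b in (0.0, 0.5, 4.0)]
print([round(t, 3) for t in thr])  # [0.332, 0.186, 0.811] -- dip, then rise
```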
Journal of Vision | 2016
Manuela Chessa; Guido Maiello; Peter J. Bex; Fabio Solari
We implement a neural model for the estimation of the focus of radial motion (FRM) at different retinal locations and assess the model by comparing its results with respect to the precision with which human observers can estimate the FRM in naturalistic motion stimuli. The model describes the deep hierarchy of the first stages of the dorsal visual pathway and is space variant, since it takes into account the retino-cortical transformation of the primate visual system through log-polar mapping. The log-polar transform of the retinal image is the input to the cortical motion-estimation stage, where optic flow is computed by a three-layer neural population. The sensitivity to complex motion patterns that has been found in area MST is modeled through a population of adaptive templates. The first-order description of cortical optic flow is derived from the responses of the adaptive templates. Information about self-motion (e.g., direction of heading) is estimated by combining the first-order descriptors computed in the cortical domain. The model's performance at FRM estimation as a function of retinal eccentricity neatly maps onto data from human observers. By employing equivalent-noise analysis we observe that loss in FRM accuracy for both model and human observers is attributable to a decrease in the efficiency with which motion information is pooled with increasing retinal eccentricity in the visual field. The decrease in sampling efficiency is thus attributable to receptive-field size increases with increasing retinal eccentricity, which are in turn driven by the lossy log-polar mapping that projects the retinal image onto primary visual areas. We further show that the model is able to estimate direction of heading in real-world scenes, thus validating the model's potential application to neuromimetic robotic architectures. More broadly, we provide a framework in which to model complex motion integration across the visual field in real-world scenes.
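The log-polar retino-cortical mapping at the heart of the model compresses eccentricity logarithmically while preserving angle, so each doubling of eccentricity covers an equal step of cortex. A minimal sketch of the transform:

```python
import math

def log_polar(x, y, rho0=1.0):
    """Map a retinal point (x, y) to cortical log-polar coordinates (u, v):
    eccentricity is compressed logarithmically, polar angle is preserved."""
    rho = math.hypot(x, y)
    return math.log(rho / rho0), math.atan2(y, x)

# Doubling eccentricity adds a constant step in the cortical map.
u1, _ = log_polar(2.0, 0.0)
u2, _ = log_polar(4.0, 0.0)
u3, _ = log_polar(8.0, 0.0)
print(round(u2 - u1, 3), round(u3 - u2, 3))  # 0.693 0.693 -- equal log(2) steps
```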
Journal of Vision | 2015
Guido Maiello; William Harrison; Fuensanta A. Vera-Diaz; Peter J. Bex
Myopic eyes are elongated compared to the eyes of normally-sighted, emmetropic observers. This simple observation gives rise to an empirical question: what are the physiological and perceptual consequences of an elongated retinal surface? To address this question, we developed a geometric model of emmetropic and myopic retinae, based on magnetic resonance imaging (MRI) data [Atchison et al. (2005)], from which we derived psychophysically-testable predictions about visual function. We input range image data of natural scenes [Howe and Purves (2002)] to the geometric model to statistically estimate where in the visual periphery perception may be altered due to the different shapes of myopic and emmetropic eyes. The model predicts that central visual function should be similar for the two eye types, but myopic peripheral vision should differ regardless of optical correction. We tested this hypothesis by measuring the fall-off in contrast sensitivity with retinal eccentricity in emmetropes and best-corrected myopes. The full contrast sensitivity function (CSF) was assessed at 5, 10 and 15 degrees eccentricity using an adaptive testing procedure [Vul et al. (2010)]. Consistent with our model predictions, the area under the log CSF decreases in the periphery at a faster rate in best-corrected myopic observers than in emmetropes. Our modeling also revealed that a target at a given eccentricity projects onto a larger area of peripheral retina in myopic than emmetropic eyes. This raises the possibility that crowding zones - the area over which features are integrated - may differ between eye types. We measured crowding zones at 5, 10 and 15 degrees of eccentricity using a 26-AFC letter identification task and found no significant differences between myopic and emmetropic observers. This suggests that crowding depends on spatial rather than retinal feature separation, which implies differences in the retino-cortical transformations in myopes and emmetropes.
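The area under the log CSF used as the summary statistic here is conventionally the trapezoidal integral of log sensitivity over log spatial frequency. A minimal sketch; the sample values are made up for illustration:

```python
import math

def aulcsf(freqs_cpd, sensitivities):
    """Area under the log CSF: trapezoidal integral of log10(sensitivity)
    over log10(spatial frequency, cycles/deg)."""
    xs = [math.log10(f) for f in freqs_cpd]
    ys = [math.log10(max(s, 1.0)) for s in sensitivities]  # floor at 1
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
               for i in range(len(xs) - 1))

central = aulcsf([0.5, 1, 2, 4, 8, 16], [60, 120, 150, 100, 40, 8])
peripheral = aulcsf([0.5, 1, 2, 4, 8, 16], [50, 80, 70, 30, 8, 2])
print(central > peripheral)  # True -- sensitivity loss shrinks the area
```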
Meeting abstract presented at VSS 2015.
Experimental Eye Research | 2018
Guido Maiello; Kristen L. Kerber; Frank Thorn; Peter J. Bex; Fuensanta A. Vera-Diaz
The formation of focused and corresponding foveal images requires a close synergy between the accommodation and vergence systems. This linkage is usually decoupled in virtual reality systems and may be dysfunctional in people who are at risk of developing myopia. We study how refractive error affects vergence-accommodation interactions in stereoscopic displays. Vergence and accommodative responses were measured in 21 young healthy adults (n = 9 myopes, 22–31 years) while subjects viewed naturalistic stimuli on a 3D display. In Step 1, vergence was driven behind the monitor using a blurred, non-accommodative, uncrossed disparity target. In Step 2, vergence and accommodation were driven back to the monitor plane using naturalistic images that contained structured depth and focus information from size, blur and/or disparity. In Step 1, both refractive groups converged towards the stereoscopic target depth plane, but the vergence-driven accommodative change was smaller in emmetropes than in myopes (F1,19 = 5.13, p = 0.036). In Step 2, there was little effect of peripheral depth cues on accommodation or vergence in either refractive group. However, vergence responses were significantly slower (F1,19 = 4.55, p = 0.046) and accommodation variability was higher (F1,19 = 12.9, p = 0.0019) in myopes. Vergence and accommodation responses are disrupted in virtual reality displays in both refractive groups. Accommodation responses are less stable in myopes, perhaps due to a lower sensitivity to dioptric blur. Such inaccuracies of accommodation may cause long-term blur on the retina, which has been associated with a failure of emmetropization.
Highlights: Vergence and accommodation are disrupted in virtual reality systems. Vergence-driven accommodation is stronger in myopes than in emmetropes. In myopes, vergence is slower and accommodation is less stable. Accommodation inaccuracy causes retinal blur, which may be associated with myopia.
Scientific Reports | 2016
Guido Maiello; William J. Harrison; Peter J. Bex
Most eye movements in the real world redirect the foveae to objects at a new depth and thus require the co-ordination of monocular saccade amplitudes and binocular vergence eye movements. Additionally, to maintain the accuracy of these oculomotor control processes across the lifespan, ongoing calibration is required to compensate for errors in foveal landing positions. Such oculomotor plasticity has generally been studied under conditions in which both eyes receive a common error signal, which cannot resolve the long-standing debate regarding whether both eyes are innervated by a common cortical signal or by a separate signal for each eye. Here we examine oculomotor plasticity when error signals are independently manipulated in each eye, which can occur naturally owing to aging changes in each eye's orbit and extra-ocular muscles, or in oculomotor dysfunctions. We find that both rapid saccades and slow vergence eye movements are continuously recalibrated independently of one another and that corrections can occur in opposite directions in each eye. Whereas existing models assume a single cortical representation of space employed for the control of both eyes, our findings provide evidence for independent monoculomotor and binoculomotor plasticities and dissociable spatial mapping for each eye.
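Independent per-eye recalibration can be caricatured as a delta-rule gain update driven by each eye's own post-saccadic error signal. This is an illustrative toy with an assumed learning rule, not the authors' model:

```python
def adapt_gains(errors_per_trial, lr=0.1):
    """Independent per-eye gain adaptation: each eye's saccadic gain is
    nudged by its own post-saccadic error (hypothetical delta rule)."""
    gains = {"left": 1.0, "right": 1.0}
    for err_left, err_right in errors_per_trial:
        gains["left"] += lr * err_left
        gains["right"] += lr * err_right
    return gains

# Opposite error signals in the two eyes drive gains in opposite directions.
g = adapt_gains([(-0.05, +0.05)] * 10)
print(round(g["left"], 2), round(g["right"], 2))  # 0.95 1.05
```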
Experimental Brain Research | 2018
Guido Maiello; MiYoung Kwon; Peter J. Bex
Sensorimotor coupling in healthy humans is demonstrated by the higher accuracy of visually tracking intrinsically—rather than extrinsically—generated hand movements in the fronto-parallel plane. It is unknown whether this coupling also facilitates vergence eye movements for tracking objects in depth, or can overcome symmetric or asymmetric binocular visual impairments. Human observers were therefore asked to track with their gaze a target moving horizontally or in depth. The movement of the target was either directly controlled by the observer’s hand or followed hand movements executed by the observer in a previous trial. Visual impairments were simulated by blurring stimuli independently in each eye. Accuracy was higher for self-generated movements in all conditions, demonstrating that motor signals are employed by the oculomotor system to improve the accuracy of vergence as well as horizontal eye movements. Asymmetric monocular blur affected horizontal tracking less than symmetric binocular blur, but impaired tracking in depth as much as binocular blur. There was a critical blur level up to which pursuit and vergence eye movements maintained tracking accuracy independent of blur level. Hand–eye coordination may therefore help compensate for functional deficits associated with eye disease and may be employed to augment visual impairment rehabilitation.
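Tracking accuracy in tasks like this is often summarized as the RMS error between gaze and target traces; a minimal sketch, with a response lag standing in for the penalty on extrinsically-generated movements:

```python
import math

def tracking_rmse(gaze, target):
    """Root-mean-square error between gaze and target traces: a simple
    tracking-accuracy measure for pursuit or vergence."""
    return math.sqrt(sum((g - t) ** 2 for g, t in zip(gaze, target))
                     / len(target))

target = [math.sin(t / 10) for t in range(100)]
lagged = target[:5] + target[:-5]  # gaze trailing the target by 5 samples
print(tracking_rmse(target, target) == 0.0)  # True -- perfect tracking
print(tracking_rmse(lagged, target) > 0.0)   # True -- lag degrades accuracy
```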