Matthew J. Stainer
University of Melbourne
Publication
Featured research published by Matthew J. Stainer.
Frontiers in Human Neuroscience | 2013
Matthew J. Stainer; Kenneth C. Scott-Brown; Benjamin W. Tatler
Recent research has begun to address how CCTV operators in the modern control room attempt to search for crime (e.g., Howard et al., 2011). However, an often-neglected element of the CCTV task is that operators have at their disposal a multiplexed wall of scenes, and a single spot-monitor on which they can select any of these feeds for inspection. Here we examined how two trained CCTV operators used these sources of information to search for crime during a morning, afternoon, and night-time shift. We found that they spent surprisingly little time viewing the multiplex wall, instead preferentially spending most of their time searching on the single-scene spot-monitor. Such search must require a sophisticated understanding of the surveilled environment, as the operators must select which screen to view based on their prediction of where crime is likely to occur. This seems to be reflected in the difference in the screens that they selected to view at different times of day. For example, night-clubs received close monitoring at night, but were seldom viewed in mid-morning. Such narrowing of search based on a contextual understanding of an environment is not a new idea (e.g., Torralba et al., 2006), and appears to contribute to operators' selection strategy. This research prompts new questions regarding the nature of the representation that operators have of their environment, and how they might develop expectation-based search strategies to counter the demands of the large influx of visual information. Future research should ensure not to neglect examination of operator behavior “in the wild” (Hutchins, 1995a), as such insights are difficult to gain from laboratory-based paradigms alone.
Frontiers in Psychology | 2013
Matthew J. Stainer; Kenneth C. Scott-Brown; Benjamin W. Tatler
Where people look when viewing a scene has been a much explored avenue of vision research (e.g., see Tatler, 2009). Current understanding of eye guidance suggests that a combination of high- and low-level factors influences fixation selection (e.g., Torralba et al., 2006), but that there are also strong biases toward the center of an image (Tatler, 2007). However, situations where we view multiplexed scenes are becoming increasingly common, and it is unclear how visual inspection might be arranged when content lacks normal semantic or spatial structure. Here we use the central bias to examine how gaze behavior is organized in scenes that are presented in their normal format, or disrupted by scrambling the quadrants and separating them by space. In Experiment 1, scrambling the scenes had the strongest influence on gaze allocation. Observers were highly biased by the quadrant center, although physical space did not enhance this bias. However, the center of the display still contributed to fixation selection above chance, and was most influential early in scene viewing. When the top left quadrant was held constant across all conditions in Experiment 2, fixation behavior was significantly influenced by the overall arrangement of the display, with fixations being biased toward the quadrant center when the other three quadrants were scrambled (despite the visual information in this quadrant being identical in all conditions). When scenes are scrambled into four quadrants and semantic contiguity is disrupted, observers no longer appear to view the content as a single scene (despite it consisting of the same visual information overall), but rather anchor visual inspection around the four separate “sub-scenes.” Moreover, the frame of reference that observers use when viewing the multiplex seems to change across viewing time: from an early bias toward the display center to a later bias toward quadrant centers.
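As a rough illustration of how this frame-of-reference question might be quantified (this is not the analysis code from the paper), the sketch below compares how closely fixations cluster around the display centre versus the nearest quadrant centre. The display size, coordinate convention, and function names are all assumptions introduced for the example.

```python
# Hypothetical sketch, not the authors' analysis code: comparing the strength of
# the display-centre bias with the quadrant-centre bias for a set of fixations.
# Assumes fixations are (x, y) pixel coordinates on a WIDTH x HEIGHT display.
import numpy as np

WIDTH, HEIGHT = 1280, 1024  # assumed display resolution

def centre_bias_scores(fixations):
    """Return mean distance of fixations to the display centre and to the
    nearest of the four quadrant centres (smaller = stronger bias)."""
    fix = np.asarray(fixations, dtype=float)              # shape (n, 2)
    display_centre = np.array([WIDTH / 2, HEIGHT / 2])
    quadrant_centres = np.array([
        [WIDTH / 4,     HEIGHT / 4],      # top-left quadrant
        [3 * WIDTH / 4, HEIGHT / 4],      # top-right
        [WIDTH / 4,     3 * HEIGHT / 4],  # bottom-left
        [3 * WIDTH / 4, 3 * HEIGHT / 4],  # bottom-right
    ])
    d_display = np.linalg.norm(fix - display_centre, axis=1)
    d_quadrant = np.linalg.norm(
        fix[:, None, :] - quadrant_centres[None, :, :], axis=2).min(axis=1)
    return d_display.mean(), d_quadrant.mean()

# Example: 200 simulated fixations scattered around the top-left quadrant centre;
# the quadrant-centre distance should come out smaller than the display-centre one.
rng = np.random.default_rng(0)
sim = rng.normal([WIDTH / 4, HEIGHT / 4], 60, size=(200, 2))
print(centre_bias_scores(sim))
```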
I-perception | 2017
Matthew J. Stainer; Kenneth C. Scott-Brown; Benjamin W. Tatler
Multiplex viewing of static or dynamic scenes is an increasing feature of screen media. Most existing multiplex experiments have examined detection across increasing scene numbers, but there is currently no systematic evaluation of the factors that make multiplexes difficult to process. Across five experiments we provide such an evaluation. Experiment 1 characterises the difficulty in change detection as the number of scenes is increased. Experiment 2 reveals that the total amount of visual information accounts for differences in change detection times, regardless of whether that information is presented across multiple scenes or contained in one scene. Experiment 3 shows that whether the quadrants of a display were drawn from the same or different scenes did not affect change detection performance. Experiment 4 demonstrates that knowing which scene the change will occur in allows participants to perform at monoplex level. Finally, Experiment 5 finds that changes of central interest in multiplexed scenes are detected far more easily than changes of marginal interest, to such an extent that the removal of a centrally interesting object in nine screens is detected more rapidly than the removal of a marginally interesting object in four screens. Processing multiple-screen displays therefore seems to depend on the amount of information, and the importance of that information to the task, rather than simply the number of scenes in the display. We discuss the theoretical and applied implications of these findings.
Canadian Journal of Experimental Psychology | 2017
Sharon Scrafton; Matthew J. Stainer; Benjamin W. Tatler
Vision and action are tightly coupled in space and time: for many tasks we must look at the right place at the right time to gather the information that we need to complete our behavioural goals. Vision typically leads action by about 0.5 seconds in many natural tasks. However, the factors that influence this temporal coordination are not well understood, and variations have been found previously between two domestic tasks with similar constraints: tea making and sandwich making. This study offers a systematic exploration of the factors that govern the spatiotemporal coordination of vision and action within complex real-world activities. We found that the temporal coordination of eye movements and action differed between tea making and sandwich making. Longer eye–hand latencies, more “look ahead” fixations, and more looks to irrelevant objects were found when making tea than when making a sandwich. Contrary to previous suggestions, we found that the requirement to move around the environment did not influence the coordination of vision and action. We conclude that the dynamics of visual behaviour during motor acts are sensitive to the task and to the specific objects and actions required, but not to the spatial demands of moving around an environment.
Ophthalmic and Physiological Optics | 2015
Matthew J. Stainer; Andrew J. Anderson; Jonathan Denniss
Expertise in viewing medical images is thought to be due to the ability to process holistic image information. Eye care clinicians can inspect photographs of the retina to search for signs of disease. However, they commonly also view the eye in vivo using the restricted view of a slit lamp, which removes the potential benefits of holistic processing. We investigated how expert and novice clinicians inspect the fundus using these two methods.
Experimental Brain Research | 2014
Andrew J. Anderson; Matthew J. Stainer; Peter Brotchie; R. H. S. Carpenter
Saccadic latencies to targets appearing to the left and right of fixation in a repeating sequence are significantly increased when a target is presented out of sequence. Is this because the target is in the wrong position, the wrong direction, or both? To find out, we arranged for targets in a horizontal plane occasionally to appear with an unexpected eccentricity, though in the correct direction. This had no significant effect on latency, unlike what is observed when targets appear in an unexpected direction. That subjects learnt sequences of directions rather than simply positions was further confirmed in an experiment in which saccade direction followed a repeating sequence but eccentricity was randomised: latency was elevated when a target was occasionally presented in an unexpected direction. Latencies were also elevated when targets appeared in the correct hemifield but at an unexpected direction (a 35° polar angular displacement from the horizontal, roughly equivalent in collicular spacing to our unexpected eccentricity), although this elevation was smaller in magnitude than when targets appeared in an unexpected direction along the horizontal. Finally, we confirmed that not all changes in the stimulus cause disruption: an unexpected change in the orientation or colour of the target did not alter latency. Our results show that in a repeating sequence, the oculomotor system is primarily concerned with predicting the direction of an upcoming eye movement rather than its position. This is consistent with models of oculomotor control developed for randomly appearing targets, in which the direction and amplitude of saccades are programmed separately.
Journal of Vision | 2017
Alasdair Clarke; Matthew J. Stainer; Benjamin W. Tatler; Amelia R. Hunt
Much effort has been made to explain eye guidance during natural scene viewing. However, a substantial component of fixation placement appears to be a set of consistent biases in eye movement behavior. We introduce the concept of saccadic flow, a generalization of the central bias that describes the image-independent conditional probability of making a saccade to (xi+1, yi+1), given a fixation at (xi, yi). We suggest that saccadic flow can be a useful prior when carrying out analyses of fixation locations, and can be used as a submodule in models of eye movements during scene viewing. We demonstrate the utility of this idea by presenting bias-weighted gaze landscapes, and show that there is a link between the likelihood of a saccade under the flow model, and the salience of the following fixation. We also present a minor improvement to our central bias model (based on using a multivariate truncated Gaussian), and investigate the leftwards and coarse-to-fine biases in scene viewing.
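The flow idea can be sketched informally as a conditional prior over the next fixation location. The toy model below is an assumption-laden stand-in, not the published saccadic-flow model: it places a bivariate Gaussian over the next fixation, pulls its mean part-way toward the display centre to mimic the central bias, and renormalises (truncates) the density over the image; the parameters `sigma` and `centre_pull` are invented for illustration.

```python
# Illustrative sketch only, not the published saccadic-flow model: a toy,
# image-independent prior p(x_{i+1}, y_{i+1} | x_i, y_i) over next-fixation
# positions, truncated (renormalised) to the display.
import numpy as np
from scipy.stats import multivariate_normal

def flow_prior(fix, width, height, sigma=150.0, centre_pull=0.3):
    """Grid of conditional probabilities for the next fixation, given the
    current fixation `fix` = (x, y). `sigma` and `centre_pull` are assumed
    parameters, not values taken from the paper."""
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    grid = np.stack([xs, ys], axis=-1)                    # (height, width, 2)
    centre = np.array([width / 2, height / 2])
    mean = np.asarray(fix) + centre_pull * (centre - np.asarray(fix))
    density = multivariate_normal(mean, np.eye(2) * sigma**2).pdf(grid)
    return density / density.sum()      # truncation: renormalise over the display

# Example: likelihood map for the saccade following a fixation near the top left.
prior = flow_prior((100, 100), width=320, height=240)
print(prior.shape, prior.sum())         # (240, 320), ~1.0
```

A map like this could, in principle, be used as an image-independent baseline against which image-driven salience at each fixated location is compared.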
Vision Research | 2016
Matthew J. Stainer; R. H. S. Carpenter; Peter Brotchie; Andrew J. Anderson
Every day we perform learnt sequences of actions that seem to happen almost without awareness. It has been argued that, for learning such sequences, parallel learning networks exist - one using spatial coordinates and one using motor coordinates - with sequence acquisition involving a progressive shift from the former to the latter as a sequence is rehearsed. When sequences are interrupted by an out-of-sequence target, there is a delay in the response to that target, and so here we transiently interrupt oculomotor sequences to probe the influence of oculomotor rehearsal and spatial coordinates on sequence acquisition. For our main experiments, we used a repeating sequence of eight targets that was first learnt either using saccadic eye movements (left/right), manual responses (left/right or up/down), or as a sequence of colours (blue/red) requiring no motor response. The sequence was then immediately repeated using saccadic eye movements, during which the influence of an out-of-sequence target (an interruption) was assessed. When a sequence is learnt beforehand in an abstract way (for example, as a sequence of colours or of orthogonally mapped manual responses), interruptions are immediately disruptive to latency, suggesting that neither motor rehearsal nor specific spatial coordinates are essential for encoding sequences of actions and that sequences - no matter how they are encoded - can be rapidly translated into oculomotor coordinates. The magnitude of the disruption does, however, correspond to how well a sequence is learnt: introducing an interruption to an extended sequence before it has been reliably learnt reduces the magnitude of the latency disruption.
Acta Psychologica | 2018
Elham Azizi; Matthew J. Stainer; Larry A. Abel
Developing impulsivity has been one of the main concerns thought to arise from the increasing popularity of video gaming. Most of the relevant literature has treated gamers as pure-genre players (i.e., those who play only a specific genre of game). However, it is not clear how impulsivity is associated with different genres of games in multi-genre gamers, given the increasing diversity in the games played by individuals. In this study, we compared 33 gamers to 23 non-gamers in a go/no-go task: the Continuous Performance Test (CPT). To evaluate whether or not impulsivity occurs as a trade-off between speed and accuracy, we emphasised fast performance to all participants. Then, to examine the ability to predict impulsivity from game genre-hours, we fitted separate multiple regression models to several dependent variables. As an additional measure, we also compared groups in an antisaccade task. In the CPT, gamers showed a trend towards significantly faster reaction time (RT), accompanied by a higher false alarm rate (FAR) and a more risk-taking response bias (β), suggesting impulsive responses. Interestingly, there was a significant negative correlation between RT and FAR across all participants, suggesting an overall speed-accuracy trade-off strategy, perhaps driven by the emphasis on speed during task instruction. Moreover, time spent on role-playing games (RPG) and real-time strategy (RTS) games better predicted FAR and β than did time spent on action and puzzle games. In the antisaccade task, however, gamers showed a shorter antisaccade latency but a comparable error rate compared with non-gamers. No specific game genre predicted performance in the antisaccade task. Altogether, there was no evidence of oculomotor impulsivity in gamers; however, the CPT results suggested the presence of impulsive responses in gamers, which might be the result of a speed-accuracy trade-off. Furthermore, there was a difference between game genres, with time spent on RPG and RTS games being accompanied by a greater probability of impulsive responses. Training studies are required to investigate the causal effect of different video game genres on the development of impulsivity.
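For readers unfamiliar with the CPT indices mentioned above, the snippet below shows one common way of computing the sensitivity index d′ and the response-bias measure β from go/no-go response counts. This is a generic signal-detection illustration, not the authors' analysis script; the 0.5-count correction is one widely used convention for avoiding hit or false-alarm rates of exactly 0 or 1, and the example counts are made up.

```python
# Hedged illustration (not the study's analysis code): standard signal-detection
# indices from go/no-go counts. beta < 1 indicates a more liberal, risk-taking
# response criterion of the kind the abstract describes for gamers.
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, beta) with a 0.5-count correction so that hit and
    false-alarm rates never reach exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    beta = norm.pdf(z_hit) / norm.pdf(z_fa)   # likelihood-ratio criterion
    return d_prime, beta

# Example with invented counts: many hits but a fair number of false alarms.
print(sdt_indices(hits=180, misses=20, false_alarms=30, correct_rejections=70))
```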
Ophthalmology | 2014
Andrew J. Anderson; Matthew J. Stainer
PURPOSE: Contrast sensitivity sometimes increases in patients with open-angle glaucoma when intraocular pressure (IOP) is decreased. Although often interpreted as demonstrating reversible glaucoma-induced dysfunction, this result, if true, could simply reflect a general relationship between sensitivity and IOP in visual mechanisms unaffected by glaucoma. To investigate this relationship, we tested the hypothesis that reducing IOP in eyes without glaucoma (ocular hypertension) does not increase perimetric contrast sensitivity.
DESIGN: Comparative case series.
PARTICIPANTS: A total of 692 participants drawn from the Ocular Hypertension Treatment Study (OHTS) (22 clinical centers).
METHODS: Commercially available topical ocular hypotensive medications.
MAIN OUTCOME MEASURES: Post hoc analysis of IOP and perimetric contrast sensitivity (mean deviation [MD] and pattern standard deviation [PSD]), both at baseline (0 months, immediately before ocular antihypertensive therapy) and at the 6-month review. An additional 618 control eyes from the OHTS that did not receive treatment were examined over the same period. Data from the second phase of the OHTS, in which control eyes then received treatment, were also examined.
RESULTS: Treated eyes had a decrease in IOP at 6 months (5.1 mmHg, P < 0.001) but no significant change in MD (0.04 decibels [dB], P = 0.59) or PSD (0.03 dB, P = 0.19) relative to controls. A similar decrease in IOP, but no significant change in MD or PSD, was found for eyes that began treatment in the second phase of the OHTS.
CONCLUSIONS: Despite the large sample size, we found no relationship between perimetric contrast sensitivity and IOP reduction in ocular hypertension, which suggests that the sensitivity changes previously seen in patients with glaucoma, if true, indicate reversible glaucoma-induced dysfunction rather than a general relationship between sensitivity and IOP in visual mechanisms unaffected by glaucoma.