J. Eric T. Taylor
Purdue University
Publications
Featured research published by J. Eric T. Taylor.
Perception | 2011
J. Eric T. Taylor; Jessica K. Witt; Mila Sugovic
Through training, skilled parkour athletes (traceurs) overcome everyday obstacles, such as walls, that are typically insurmountable. Traceurs and untrained novices estimated the height of walls and reported their anticipated ability to climb them. The traceurs perceived the walls as shorter than did the novices. This result suggests that perception is scaled by the perceiver's anticipated ability to act, and is consistent with the action-specific account of perception.
Journal of Experimental Psychology: Human Perception and Performance | 2012
Jessica K. Witt; Mila Sugovic; J. Eric T. Taylor
According to the action-specific account of perception, perceivers see the environment relative to their ability to perform the intended action. For example, in a modified version of the computer game Pong, balls that were easier to block looked to be moving slower than balls that were more difficult to block (Witt & Sugovic, 2010). It is unknown, however, whether perception can be influenced by another person's abilities. In the current experiment, we examined whether another person's ability to block a ball influenced the observer's perception of ball speed. Participants played, and observed others play, the modified version of Pong in which the task was to successfully block the ball with paddles that varied in size, and both the actor and the observer estimated the speed of the ball. The results showed that both judged the ball to be moving faster when it was harder to block. However, the same effect of difficulty on speed estimates was not found when observers watched a computer play, suggesting that the effect is specific to people and not to the task. These studies suggest that the environment can be perceived relative to another person's abilities.
Cognition | 2012
J. Eric T. Taylor; Jessica K. Witt; Phillip J. Grimaldi
Observed actions are covertly and involuntarily simulated within the observer's motor system. It has been argued that simulation is involved in processing abstract, gestural paintings, as the artist's movements can be simulated by observing static brushstrokes. Though this argument is grounded in theory, empirical research has yet to examine the claim. Five experiments are described wherein participants executed arm movements resembling the act of painting horizontal brushstrokes while observing paintings featuring broad, discernible brushstrokes. Participants responded faster when their movement was compatible with the observed brushstrokes, even though the paintings were irrelevant to their task. Additional results suggest that this effect occurs outside of awareness. These results provide evidence that observers can simulate the actions of the painter by simply observing the painting, revealing a connection between artist and audience hitherto undemonstrated by cognitive science.
Perception | 2011
Jessica K. Witt; Donald M. Schuck; J. Eric T. Taylor
Action-specific effects on perception are well documented in terrestrial environments. For example, targets that require more effort to reach, whether by walking, jumping, or throwing, look farther away than targets that require less effort. Here, we examined whether action-specific effects generalize to an underwater environment. Alternatively, perception might be geometrically accurate, rather than action-specific, in an environment that is novel from an evolutionary perspective. We manipulated ease of swimming by giving participants swimming flippers or taking them away. Those who estimated distance while wearing the flippers judged underwater targets to be closer than did participants who had taken them off. In addition, participants with better swimming ability judged the targets to be closer than did those with worse swimming ability. These results suggest that perceived distance underwater is a function of the perceiver's ability to swim to the targets.
Psychological Research / Psychologische Forschung | 2015
J. Eric T. Taylor; Jessica K. Witt
Musicians sometimes report twitching in their fingers or hands while listening to music. This anecdote could be indicative of a tendency for auditory-motor co-representation in musicians. Here, we describe two studies showing that pianists (Experiment 1), but not novices (Experiment 2), automatically generate spatial representations that correspond to learned musical actions while listening to music. Participants made one-handed movements to the left or right from a central location in response to visual stimuli while listening to task-irrelevant auditory stimuli, which were scales played on a piano. These task-irrelevant scales were either ascending (compatible with rightward movements) or descending (compatible with leftward movements). Pianists were faster to respond when the scale direction was compatible with the direction of response movement, whereas novices’ movements were unaffected by the scale. These results are in agreement with existing research on action–effect coupling in musicians, which draws heavily on common coding theory. In addition, these results show how intricate auditory stimuli (ascending or descending scales) evoke coarse, domain-general spatial representations.
Translational Neuroscience | 2014
J. Eric T. Taylor; Davood G. Gozli; David Chan; Greg Huffman; Jay Pratt
A growing body of evidence demonstrates that human vision operates differently in the space near and on the hands; for example, early findings in this literature reported that rapid onsets are detected faster near the hands, and that objects near the hands are searched more thoroughly. These and many other effects were attributed to enhanced attention via the recruitment of bimodal visual-tactile neurons representing the hand and near-hand space. However, recent research supports an alternative account: stimuli near the hands are preferentially processed by the action-oriented magnocellular visual pathway at the expense of processing in the parvocellular pathway. This Modulated Visual Pathways (MVP) account of altered vision near the hands describes a hand-position-dependent trade-off between the two main visual pathways running from eye to brain. The MVP account explains past findings and makes new predictions regarding near-hand vision that are supported by new research.
Cognition | 2014
J. Eric T. Taylor; Jessica K. Witt
Attention operates in the space near the hands with unique, action-related priorities. Here, we examined how attention treats objects on the hands themselves. We tested two hypotheses. First, attention may treat stimuli on the hands like stimuli near the hands, as though the surface of the hands were the proximal case of near-hand space. Alternatively, we proposed that the surface of the hands may be attentionally distinct from the surrounding space. Specifically, we predicted that attention should be slow to orient toward the hands in order to remain entrained to near-hand space, where the targets of actions are usually located. In four experiments, we observed delayed orienting of attention on the hands compared to orienting attention near or far from the hands. Similar delayed orienting was also found for tools connected to the body compared to tools disconnected from the body. These results support our second hypothesis: attention operates differently on the functional surfaces of the hand. We suggest this effect serves a functional role in the execution of manual actions.
Journal of Vision | 2016
Jessica K. Witt; J. Eric T. Taylor; Mila Sugovic; John T. Wixted
Previously, we showed that an effect on c, even in the absence of any effect on d′, can reflect a perceptual effect rather than a change in response bias (Witt, Taylor, Sugovic, & Wixted, 2015). Knotts and Shams (2016) agreed with this main point. In addition, they pointed out a potential source of confusion in our paper and defended their previous analyses of the sound-induced flash illusion. First, Knotts and Shams (2016) questioned whether the Müller–Lyer illusion would only affect c without also affecting d′. Asserting that this issue is important seems to imply that effects on d′ would change our main claim. However, the interpretation of a change in c as potentially reflecting a perceptual effect is completely orthogonal to whether or not there is any effect on d′ as well. We elected to simulate a situation in which d′ did not change in order to emphasize our point that c can reflect a perceptual effect even under that extreme scenario. But c can still capture a perceptual effect even if there is also a perceptually induced change in d′. This point was mentioned in our article when we said: "For discrimination experiments, d′ can be interpreted as a perceptual effect related to changes in sensitivity, but c can be interpreted only as a bias without the ability to distinguish between perceptual bias and response-based bias" (p. 298). In this response, we take the opportunity to explain this key point in more detail. Second, Knotts and Shams (2016) defended their comparison of the no-beep versus two-beep conditions. They argue that d′ changed across those two conditions, thereby establishing that a perceptual effect occurred. Again, we did not dispute the interpretation of the observed change in d′. Instead, our focus was on the interpretation of a change in c, such as the change in c that occurred when comparing their one-beep versus two-beep conditions. We focused on that comparison both because it illustrates our point and because it seems more theoretically informative than a comparison between the no-beep and two-beep conditions, which confounds multisensory cues that are consistent versus inconsistent. Regardless of which comparison one makes, we agree with Knotts and Shams (2016) that it is worth specifying the underlying signal detection models that can be used to interpret the results. They made an effort to do just that in their figure 3, but the models they presented are problematic. We present more viable models and emphasize that researchers should carefully consider both the information represented on the decision axis (the x-axis) and the theorized effect of an experimental manipulation. Knotts and Shams (2016) used "Perceived # of flashes" as their decision-axis variable, and they selected a range of −6 to 6. However, perceiving a negative number of flashes is nonsensical. We illustrate better ways to conceptualize the perceptual effects they observed (and that showed up mostly as a change in c).
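The quantities at issue are the standard equal-variance signal detection measures: with hit rate H and false-alarm rate F, sensitivity is d′ = z(H) − z(F) and the criterion is c = −[z(H) + z(F)]/2, where z is the inverse of the standard normal CDF. The following minimal sketch uses hypothetical counts, not data from the article, and a log-linear correction that is a common convention rather than anything the authors specify; it reproduces the pattern the authors simulated, a shift in c with essentially no change in d′.

from scipy.stats import norm

def dprime_and_c(hits, misses, false_alarms, correct_rejections):
    """Equal-variance signal detection measures.

    d' = z(H) - z(F)          (sensitivity)
    c  = -(z(H) + z(F)) / 2   (criterion / bias)
    """
    # Log-linear correction keeps z() finite when a rate is 0 or 1.
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    zh, zf = norm.ppf(h), norm.ppf(f)
    return zh - zf, -(zh + zf) / 2

# Hypothetical counts illustrating a criterion shift without a sensitivity change.
d1, c1 = dprime_and_c(hits=70, misses=30, false_alarms=30, correct_rejections=70)
d2, c2 = dprime_and_c(hits=84, misses=16, false_alarms=48, correct_rejections=52)
print(f"condition 1: d' = {d1:.2f}, c = {c1:.2f}")  # d' ~ 1.04, c ~ 0.00
print(f"condition 2: d' = {d2:.2f}, c = {c2:.2f}")  # d' ~ 1.03, c ~ -0.47

With these counts, d′ stays near 1.03 in both conditions while c moves from 0 to roughly −0.47, which is exactly the kind of result whose interpretation (response bias versus perceptual bias) the exchange concerns.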
PLOS ONE | 2015
J. Eric T. Taylor; Timothy K. Lam; Alison L. Chasteen; Jay Pratt
Embodied cognition holds that abstract concepts are grounded in perceptual-motor simulations. If a given embodied metaphor maps onto a spatial representation, then thinking of that concept should bias the allocation of attention. In this study, we used positive and negative self-esteem words to examine two properties of conceptual cueing. First, we tested the orientation-specificity hypothesis, which predicts that conceptual cues should selectively activate certain spatial axes (in this case, valenced self-esteem concepts should activate vertical space), instead of any spatial continuum. Second, we tested whether conceptual cueing requires semantic processing, or if it can be achieved with shallow visual processing of the cue words. Participants viewed centrally presented words consisting of high or low self-esteem traits (e.g., brave, timid) before detecting a target above or below the cue in the vertical condition, or on the left or right of the word in the horizontal condition. Participants were faster to detect targets when their location was compatible with the valence of the word cues, but only in the vertical condition. Moreover, this effect was observed when participants processed the semantics of the word, but not when processing its orthography. The results show that conceptual cueing by spatial metaphors is orientation-specific, and that an explicit consideration of the word cues’ semantics is required for conceptual cueing to occur.
Attention, Perception, & Psychophysics | 2017
Jason Rajsic; J. Eric T. Taylor; Jay Pratt
Confirmation bias has recently been reported in visual search: observers who were given a perceptual rule to test (e.g., “Is the p on a red circle?”) preferentially search stimuli that could confirm the rule (Rajsic, Wilson, & Pratt, Journal of Experimental Psychology: Human Perception and Performance, 41(5), 1353–1364, 2015). In this study, we compared the ability of concrete and abstract visual templates to guide attention using this visual confirmation bias. Experiment 1 showed that confirmatory search tendencies do not result from simple low-level priming, as they occurred even when color templates were communicated verbally. Experiment 2 showed that the confirmation bias did not occur when the rule to be verified concerned the absence of a feature (i.e., reporting whether a target was on a nonred circle). Experiment 3 showed that confirmatory search also did not occur when search prompts referred to a set of visually heterogeneous features (i.e., reporting whether a target was on a colorful circle, regardless of the color). Together, these results show that the confirmation bias likely results from a matching heuristic, such that visual codes involved in representing the search goal prioritize stimuli possessing those features.