
Publication


Featured research published by John P. McIntire.


Applied Ergonomics | 2014

Detection of vigilance performance using eye blinks.

Lindsey K. McIntire; R. Andy McKinley; Chuck Goodyear; John P. McIntire

Research has shown that sustained attention or vigilance declines over time on task. Sustained attention is necessary in many settings, such as air traffic control, cyber operations, and imagery analysis. A lapse of attention in any one of these environments can have harmful consequences. The purpose of this study was to determine if eye blink metrics from an eye-tracker are related to changes in vigilance performance and cerebral blood flow velocities. Nineteen participants performed a vigilance task while wearing an eye-tracker on four separate days. Blink frequency and duration changed significantly over time during the task. Both blink frequency and duration increased as performance declined and right cerebral blood flow velocity declined. These results suggest that eye blink information may be an indicator of arousal levels. Using an eye-tracker to detect changes in eye blinks in an operational environment would allow preventative measures to be implemented, perhaps by providing perceptual warning signals or augmenting human cognition through non-invasive brain stimulation techniques.
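As an illustration of the kind of blink metrics the abstract describes (not code from the paper), the sketch below derives blink frequency and mean blink duration from a binary eye-closure signal, as an eye-tracker might report it at a fixed sample rate. The function name and sample data are hypothetical.

```python
def blink_metrics(closed, sample_rate_hz):
    """Return (blinks per minute, mean blink duration in seconds).

    `closed` is a sequence of booleans, True while the eye is closed;
    each contiguous run of True samples counts as one blink.
    """
    blink_lengths = []  # blink durations, in samples
    run = 0
    for c in closed:
        if c:
            run += 1
        elif run:
            blink_lengths.append(run)
            run = 0
    if run:  # signal ended mid-blink
        blink_lengths.append(run)

    total_s = len(closed) / sample_rate_hz
    per_minute = 60.0 * len(blink_lengths) / total_s
    mean_duration_s = (
        sum(blink_lengths) / len(blink_lengths) / sample_rate_hz
        if blink_lengths else 0.0
    )
    return per_minute, mean_duration_s

# Two blinks (3 and 5 samples long) in a 2-second window at 60 Hz:
# about 60 blinks/min with a mean duration near 0.067 s.
signal = [False] * 50 + [True] * 3 + [False] * 30 + [True] * 5 + [False] * 32
print(blink_metrics(signal, 60))
```

In an operational monitor, these two numbers would be tracked over successive windows; the study's finding is that both rising together accompanies declining vigilance.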


Human Factors | 2010

Visual Search Performance With 3-D Auditory Cues: Effects of Motion, Target Location, and Practice

John P. McIntire; Paul R. Havig; Scott N. J. Watamaniuk; Robert H. Gilkey

Objectives: We evaluate visual search performance in both static (nonmoving) and dynamic (moving) search environments with and without spatial (3-D) auditory cues to target location. Additionally, the effects of target trajectory, target location, and practice are assessed. Background: Previous research on aurally aided visual search has shown a significant reduction in response times when 3-D auditory cues are displayed, relative to unaided search. However, the vast majority of this research has examined only searches for static targets in static visual environments. The present experiment was conducted to examine the effect of dynamic stimuli upon aurally aided visual search performance. Method: The 8 participants conducted repeated searches for a single visual target hidden among 15 distracting stimuli. The four main conditions of the experiment consisted of the four possible combinations of 3-D auditory cues (present or absent) and search environment (static or dynamic). Results: The auditory cues were comparably effective at reducing search times in dynamic environments (−25%) as in static environments (−22%). Audio cues helped all participants. The cues were most beneficial when the target appeared at large eccentricities and on the horizontal plane. After a brief initial exposure to 3-D audio, no training or practice effects with 3-D audio were found. Conclusion: We conclude that 3-D audio is as beneficial in environments comprising moving stimuli as in those comprising static stimuli. Application: Operators in dynamic environments, such as aircraft cockpits, ground vehicles, and command-and-control centers, could benefit greatly from 3-D auditory technology when searching their environments for visual targets or other time-critical information.


2014 IEEE VIS International Workshop on 3DVis (3DVis) | 2014

The (possible) utility of stereoscopic 3D displays for information visualization: The good, the bad, and the ugly

John P. McIntire; Kristen Liggett

The good, bad, and “ugly” aspects of stereoscopic three-dimensional display viewing are presented and discussed in relation to data and information visualization applications, primarily relating to spatial comprehension and spatial understanding tasks. We show that three-dimensional displays hold the promise of improving spatial perception, complex scene understanding, memory, and related aspects of performance, but primarily for (1) tasks that are multidimensional or spatial in nature; (2) for tasks that are difficult, complex, or unfamiliar; and/or (3) when other visual spatial cues are degraded or missing. No current 3D display system is capable of satisfying all visual depth cues simultaneously with high fidelity, though stereoscopic 3D displays offer the distinct advantage of binocular stereopsis without incurring substantial costs, or loss in the fidelity of other depth cues. Human factors problems that continue to plague 3D displays and that are especially pertinent to stereoscopic visualizations are considered. We conclude that stereo 3D displays may be an invaluable tool for some applications of data or information visualization, but warn that it is a tool that must be utilized thoughtfully and carefully.


Eye Tracking Research & Applications | 2014

Detection of vigilance performance with pupillometry

Lindsey K. McIntire; John P. McIntire; R. Andy McKinley; Chuck Goodyear

Sustained attention (vigilance) is required for many professions such as air traffic controllers, imagery analysts, airport security screeners, and cyber operators. A lapse in attention in any of these environments can have deadly consequences. The purpose of this study was to determine the ability of pupillometry to detect changes in vigilance performance. Each participant performed a 40-minute vigilance task while wearing an eye-tracker on each of four separate days. Pupil diameter, pupil eccentricity, and pupil velocity all changed significantly over time (p<.05) during the task. Significant correlations indicate that all metrics increased as vigilance performance declined except for pupil diameter, which decreased and the pupil became miotic. These results are consistent with other research on attention, fatigue, and arousal levels. Using an eye-tracker to detect changes in pupillometry in an operational environment would allow interventions to be implemented.
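The pupillometry measures named in the abstract are simple time-series statistics. The hypothetical sketch below (not from the paper; data and function names are invented) computes mean pupil diameter and mean absolute pupil velocity, i.e., change in diameter per second, over a window of samples.

```python
def pupil_metrics(diameters_mm, sample_rate_hz):
    """Return (mean pupil diameter in mm, mean |pupil velocity| in mm/s)
    for a window of diameter samples taken at a fixed rate."""
    mean_d = sum(diameters_mm) / len(diameters_mm)
    dt = 1.0 / sample_rate_hz
    # Velocity between consecutive samples, taken as absolute magnitude.
    speeds = [abs(b - a) / dt for a, b in zip(diameters_mm, diameters_mm[1:])]
    mean_v = sum(speeds) / len(speeds)
    return mean_d, mean_v

# Invented 4 Hz diameter trace; a shrinking mean diameter over successive
# windows would match the miotic trend the study links to declining vigilance.
d = [3.0, 3.1, 3.0, 2.9, 2.9]  # mm
print(pupil_metrics(d, 4))
```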


Collaboration Technologies and Systems | 2009

A variety of automated turing tests for network security: Using AI-hard problems in perception and cognition to ensure secure collaborations

John P. McIntire; Lindsey K. McIntire; Paul R. Havig

There are a multitude of collaborative and network applications that are vulnerable to interference, infiltration, or attack by automated computer programs. Malicious programs can spam or otherwise disrupt email systems, blogs, and file sharing networks. They can cheat at online gaming, skew the results of online polls, or conduct denial-of-service attacks. And sophisticated AI “chat-bots” can pose as humans in order to gather intelligence from unsuspecting targets. Thus, a recurring problem in collaborative systems is how to verify that a user is a human and not a computer. Following the work of Coates et al. [1], von Ahn et al. [2], and others, we propose several AI-hard problems in perception and cognition that can serve as “CAPTCHAs,” or tests capable of distinguishing between human-level intelligence and artificial intelligence, ensuring that all collaborators interfacing a particular system are humans and not nefarious computer programs.


Collaboration Technologies and Systems | 2010

Methods for chatbot detection in distributed text-based communications

John P. McIntire; Lindsey K. McIntire; Paul R. Havig

Distributed text-based communications (e.g., chat, instant-messaging) are facing the growing problem of malicious “chatbots” or “chatterbots” (automated communication programs posing as humans) attempting social engineering, gathering intelligence, mounting phishing attacks, spreading malware and spam, and threatening the usability and security of collaborative communication platforms. We provide supporting evidence for the suggestion that gross communication and behavioral patterns (e.g., message size, inter-message delays) can be used to passively distinguish between humans and chatbots. Further, we discuss several potential interrogation strategies for users and chat room administrators who may need to actively distinguish between a human and a chatbot, quickly and reliably, during distributed communication sessions. Interestingly, these issues are in many ways analogous to the identification problem faced by interrogators in a Turing Test, and the proposed methods and strategies might find application to and inspiration from this topic as well.
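A minimal sketch of the passive-detection idea described above, using only the gross behavioral features the abstract names (message size and inter-message delay). The heuristic, function name, and thresholds are hypothetical, not taken from the paper: a sender whose pacing and message sizes show suspiciously little variability behaves more like a script than a human typist.

```python
from statistics import pstdev


def looks_automated(timestamps_s, message_lengths,
                    min_delay_stdev=0.5, min_length_stdev=5.0):
    """Flag a sender whose inter-message delays and message sizes are
    both suspiciously regular (low standard deviation)."""
    delays = [b - a for a, b in zip(timestamps_s, timestamps_s[1:])]
    return (pstdev(delays) < min_delay_stdev and
            pstdev(message_lengths) < min_length_stdev)


# A sender posting every 2.0 s with near-identical message sizes:
print(looks_automated([0.0, 2.0, 4.0, 6.0], [40, 41, 40, 39]))   # machine-like
# A sender with irregular human-like pacing and varied sizes:
print(looks_automated([0.0, 3.5, 11.2, 12.0], [12, 180, 45, 7]))  # human-like
```

A real detector would combine many such features over longer sessions; this only shows the flavor of passive behavioral screening, as opposed to the active interrogation strategies the paper also discusses.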


National Aerospace and Electronics Conference | 2009

Ideas on authenticating humanness in collaborative systems using AI-hard problems in perception and cognition

John P. McIntire; Paul R. Havig; Lindsey K. McIntire; Henry M. Jackson

Collaborative applications including email, chat, file-sharing networks, blogs, and gaming are under constant threat of automated programs that are gaining access to, attacking, degrading, or otherwise disrupting the intended communications and interactions. Thus, an important issue in collaborative systems security is how to verify that a user is a human, and not a computer attempting to access the system for malicious purposes. We propose and discuss several AI-hard examples from perception and cognition that may be useful for distinguishing between human-level intelligence and artificial intelligence.


Optical Engineering | 2014

Optometric measurements predict performance but not comfort on a virtual object placement task with a stereoscopic three-dimensional display

John P. McIntire; Steve T. Wright; Lawrence K. Harrington; Paul R. Havig; Scott N. J. Watamaniuk; Eric L. Heft

Twelve participants were tested on a simple virtual object precision placement task while viewing a stereoscopic three-dimensional (S3-D) display. Inclusion criteria included uncorrected or best corrected vision of 20/20 or better in each eye and stereopsis of at least 40 arc sec using the Titmus stereotest. Additionally, binocular function was assessed, including measurements of distant and near phoria (horizontal and vertical) and distant and near horizontal fusion ranges using standard optometric clinical techniques. Before each of six 30 min experimental sessions, measurements of phoria and fusion ranges were repeated using a Keystone View Telebinocular and an S3-D display, respectively. All participants completed experimental sessions in which the task required the precision placement of a virtual object in depth at the same location as a target object. Subjective discomfort was assessed using the simulator sickness questionnaire. Individual placement accuracy in S3-D trials was significantly correlated with several of the binocular screening outcomes: viewers with larger convergent fusion ranges (measured at near distance), larger total fusion ranges (convergent plus divergent ranges, measured at near distance), and/or lower (better) stereoscopic acuity thresholds were more accurate on the placement task. No screening measures were predictive of subjective discomfort, perhaps due to the low levels of discomfort induced.


Proceedings of SPIE | 2009

Current and future helmet-mounted displays for piloted systems

Doug Franck; John P. McIntire; Peter L. Marasco; Paul R. Havig

Scientists and engineers in the Air Force Research Laboratory (AFRL) are constantly asked what new technologies and concepts are being developed to significantly increase the warfighter's capabilities. The warfighting communities have differing opinions and priorities based on their platform capabilities and operational requirements, so in this tighter budget era the Laboratory must make trade-offs to maximize the payoff on investment for the Air Force operator community. This paper discusses the current state of helmet-mounted displays in rotorcraft and fast jets, as well as the future technology advancements needed to increase warfighter productivity and/or reduce life-cycle costs.


Proceedings of SPIE | 2009

Helmet-mounted displays: why haven't they taken off?

Paul R. Havig; C. Goff; John P. McIntire; Douglas L. Franck

Helmet-mounted display (HMD) technologies have been developing for over three decades and have been studied for multiple applications ranging from military aircraft to virtual reality, augmented reality, entertainment, and a host of other ideas. It would not be unreasonable to assume that after this much time they would be employed in our daily lives as ubiquitously as the common desktop monitor. However, this is simply not the case. How can this be when they can be used in so many ways for so many tasks? Throughout this work we will look at some of the reasons why, as well as some of the ways they can be used.

Collaboration

Dive into John P. McIntire's collaborations.

Top Co-Authors

Paul R. Havig (Air Force Research Laboratory)
Eric E. Geiselman (Air Force Research Laboratory)
Lindsey K. McIntire (Henry M. Jackson Foundation for the Advancement of Military Medicine)
Eric L. Heft (Air Force Research Laboratory)
Chuck Goodyear (Wright-Patterson Air Force Base)
George A. Reis (Air Force Research Laboratory)
O. Isaac Osesina (University of Arkansas at Little Rock)
M. Eduard Tudoreanu (University of Arkansas at Little Rock)