Publication


Featured research published by Paul R. Havig.


Computers & Graphics | 2014

A human cognition framework for information visualization

Robert Patterson; Leslie M. Blaha; Georges G. Grinstein; Kristen Liggett; David E. Kaveney; Kathleen C. Sheldon; Paul R. Havig; Jason Moore

We present a human cognition framework for information visualization. This framework emphasizes how top-down cognitive processing enables the induction of insight, reasoning, and understanding, which are key goals of the visual analytics community. Specifically, we present a set of six leverage points that can be exploited by visualization designers in order to measurably influence certain aspects of human cognition: (1) exogenous attention; (2) endogenous attention; (3) chunking; (4) reasoning with mental models; (5) analogical reasoning; and (6) implicit learning.
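
The six leverage points lend themselves to a simple design-review checklist. A minimal Python sketch is below; the enum names come straight from the abstract, but the example design choices and their mappings to leverage points are hypothetical illustrations, not taken from the paper.

```python
from enum import Enum, auto

class LeveragePoint(Enum):
    """The six leverage points named in the framework."""
    EXOGENOUS_ATTENTION = auto()    # bottom-up capture by salient features
    ENDOGENOUS_ATTENTION = auto()   # goal-directed, top-down focus
    CHUNKING = auto()               # grouping items into meaningful units
    MENTAL_MODEL_REASONING = auto()
    ANALOGICAL_REASONING = auto()
    IMPLICIT_LEARNING = auto()

# Hypothetical design-review record: which leverage points each
# visualization design choice is intended to exploit.
design_review = {
    "highlight anomalous nodes in red": {LeveragePoint.EXOGENOUS_ATTENTION},
    "group related time series into small multiples": {LeveragePoint.CHUNKING},
    "let the user replay state transitions": {LeveragePoint.MENTAL_MODEL_REASONING,
                                              LeveragePoint.IMPLICIT_LEARNING},
}

# Report which leverage points the current design leaves unused.
used = set().union(*design_review.values())
print("Not yet exploited:", [p.name for p in LeveragePoint if p not in used])
```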


Visual Analytics Science and Technology | 2012

VAST Challenge 2012: Visual analytics for big data

Kristin A. Cook; Georges G. Grinstein; Mark A. Whiting; Michael Cooper; Paul R. Havig; Kristen Liggett; Bohdan Nebesh; Celeste Lyn Paul

The 2012 Visual Analytics Science and Technology (VAST) Challenge posed two challenge problems for participants to solve using a combination of visual analytics software and their own analytic reasoning abilities. Challenge 1 (C1) involved visualizing the network health of the fictitious Bank of Money to provide situation awareness and identify emerging trends that could signify network issues. Challenge 2 (C2) involved using the provided network logs to identify the issues of concern within a region of the Bank of Money network that was experiencing operational difficulties. Participants were asked to analyze the data and provide solutions and explanations for both challenges. The data sets were downloaded by nearly 1100 people by the close of submissions. The VAST Challenge received 40 submissions with participants from 12 different countries, and 14 awards were given.
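
As a rough illustration of the kind of log aggregation such situation-awareness views rest on, the Python sketch below counts unhealthy machines per region and hour and flags regions whose counts are rising. The log schema (timestamp, region, machine_id, health_status fields) is hypothetical; the actual VAST 2012 data set has its own format.

```python
import csv
from collections import Counter, defaultdict
from datetime import datetime

def hourly_unhealthy_counts(path):
    """Count machines reporting a non-healthy status, per hour and region.
    Assumes a hypothetical CSV with timestamp, region, machine_id, and
    health_status columns (not the real challenge schema)."""
    counts = defaultdict(Counter)  # hour -> Counter({region: unhealthy count})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["health_status"] != "healthy":
                hour = datetime.fromisoformat(row["timestamp"]).replace(
                    minute=0, second=0, microsecond=0)
                counts[hour][row["region"]] += 1
    return counts

def rising_regions(counts):
    """Yield (hour, region, count) where the unhealthy count grew since the
    previous hour: a simple emerging-trend signal worth visualizing."""
    hours = sorted(counts)
    for prev, cur in zip(hours, hours[1:]):
        for region, n in counts[cur].items():
            if n > counts[prev].get(region, 0):
                yield cur, region, n
```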


International Conference on Virtual, Augmented and Mixed Reality | 2014

Transparency in a Human-Machine Context: Approaches for Fostering Shared Awareness/Intent

Joseph B. Lyons; Paul R. Havig

Advances in autonomy have the potential to reshape the landscape of the modern world. Yet, research on human-machine interaction is needed to better understand the dynamic exchanges required between humans and machines in order to optimize human reliance on novel technologies. A key aspect of that exchange involves the notion of transparency, as humans and machines require shared awareness and shared intent for optimal teamwork. Questions remain, however, regarding how to represent information in order to generate shared awareness and intent in a human-machine context. The current paper reviews a recent model of human-robot transparency and proposes a number of methods to foster transparency between humans and machines.


Human Factors | 2010

Visual Search Performance With 3-D Auditory Cues: Effects of Motion, Target Location, and Practice

John P. McIntire; Paul R. Havig; Scott N. J. Watamaniuk; Robert H. Gilkey

Objectives: We evaluate visual search performance in both static (nonmoving) and dynamic (moving) search environments with and without spatial (3-D) auditory cues to target location. Additionally, the effects of target trajectory, target location, and practice are assessed. Background: Previous research on aurally aided visual search has shown a significant reduction in response times when 3-D auditory cues are displayed, relative to unaided search. However, the vast majority of this research has examined only searches for static targets in static visual environments. The present experiment was conducted to examine the effect of dynamic stimuli upon aurally aided visual search performance. Method: The 8 participants conducted repeated searches for a single visual target hidden among 15 distracting stimuli. The four main conditions of the experiment consisted of the four possible combinations of 3-D auditory cues (present or absent) and search environment (static or dynamic). Results: The auditory cues were comparably effective at reducing search times in dynamic environments (~25%) as in static environments (~22%). Audio cues helped all participants. The cues were most beneficial when the target appeared at large eccentricities and on the horizontal plane. After a brief initial exposure to 3-D audio, no training or practice effects with 3-D audio were found. Conclusion: We conclude that 3-D audio is as beneficial in environments comprising moving stimuli as in those comprising static stimuli. Application: Operators in dynamic environments, such as aircraft cockpits, ground vehicles, and command-and-control centers, could benefit greatly from 3-D auditory technology when searching their environments for visual targets or other time-critical information.


Technologies, Systems, and Architectures for Transnational Defense | 2002

Flight test evaluation of the nondistributed flight reference off-boresight helmet-mounted display symbology

J. Chris Jenkins; Andrew J. Thurling; Paul R. Havig; Eric E. Geiselman

The Air Force Research Laboratory (AFRL) has been working to optimize helmet-mounted display (HMD) symbology for off-boresight use. One candidate symbology is called the non-distributed flight reference (NDFR). NDFR symbology allows ownship status information to be directly referenced from the HMD regardless of pilot line of sight. The symbology is designed to aid pilot maintenance of aircraft state awareness during the performance of off-boresight tasks such as air-to-ground and air-to-air target acquisition. Previous HMD symbology research has shown that pilots spend longer periods of time off-boresight when using an HMD and therefore less time referencing primary displays in the aircraft cockpit. NDFR may provide needed information for the pilot to safely spend longer periods of search time off-boresight. Recently, NDFR was flight tested by the USAF Test Pilot School at Edwards AFB, CA, aboard the VISTA F-16 (Variable Stability In-flight Simulator Test Aircraft) during operationally representative air-to-air and air-to-ground tasks, as well as unusual attitude recoveries. The Mil-Std-1787B head-up display (HUD) symbology and another off-boresight HMD symbology called the Visually Coupled Acquisition and Targeting System (VCATS) were evaluated as comparison symbol sets. The results of the flight test indicate a clear performance advantage afforded by the use of off-boresight symbology compared to HUD use alone. There was a significant increase in the amount of time pilots looked off-boresight with both the NDFR and VCATS symbologies. With the NDFR, this increase was achieved without an associated primary task performance tradeoff. This was true for both air-to-ground and air-to-air tasks.


Collaboration Technologies and Systems | 2009

A variety of automated turing tests for network security: Using AI-hard problems in perception and cognition to ensure secure collaborations

John P. McIntire; Lindsey K. McIntire; Paul R. Havig

There are a multitude of collaborative and network applications that are vulnerable to interference, infiltration, or attack by automated computer programs. Malicious programs can spam or otherwise disrupt email systems, blogs, and file sharing networks. They can cheat at online gaming, skew the results of online polls, or conduct denial-of-service attacks. And sophisticated AI “chat-bots” can pose as humans in order to gather intelligence from unsuspecting targets. Thus, a recurring problem in collaborative systems is how to verify that a user is a human and not a computer. Following the work of Coates et al. [1], von Ahn et al. [2], and others, we propose several AI-hard problems in perception and cognition that can serve as “CAPTCHAs,” or tests capable of distinguishing between human-level intelligence and artificial intelligence, ensuring that all collaborators interfacing a particular system are humans and not nefarious computer programs.
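
A minimal example of the perceptual branch of this idea is the familiar distorted-text CAPTCHA. The Pillow-based sketch below generates one; it is only a baseline illustration, not one of the specific AI-hard problems the paper proposes, and the jitter and clutter parameters are arbitrary.

```python
import random
import string
from PIL import Image, ImageDraw, ImageFont  # Pillow

def make_text_captcha(length=5, size=(160, 60)):
    """Render a random string with positional jitter, stray lines, and pixel
    noise; the server keeps `answer` and admits the user only on a match."""
    answer = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
    img = Image.new("L", size, color=255)
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    for i, ch in enumerate(answer):
        x = 15 + i * 28 + random.randint(-4, 4)   # horizontal jitter per character
        y = 20 + random.randint(-8, 8)            # vertical jitter per character
        draw.text((x, y), ch, fill=0, font=font)
    for _ in range(4):                            # clutter lines to resist naive OCR
        draw.line([(random.randint(0, size[0]), random.randint(0, size[1]))
                   for _ in range(2)], fill=100)
    for _ in range(300):                          # salt-and-pepper pixel noise
        img.putpixel((random.randint(0, size[0] - 1), random.randint(0, size[1] - 1)),
                     random.randint(0, 255))
    return img, answer

img, answer = make_text_captcha()
img.save("captcha.png")
```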


Collaboration Technologies and Systems | 2010

Methods for chatbot detection in distributed text-based communications

John P. McIntire; Lindsey K. McIntire; Paul R. Havig

Distributed text-based communications (e.g., chat, instant-messaging) are facing the growing problem of malicious “chatbots” or “chatterbots” (automated communication programs posing as humans) attempting social engineering, gathering intelligence, mounting phishing attacks, spreading malware and spam, and threatening the usability and security of collaborative communication platforms. We provide supporting evidence for the suggestion that gross communication and behavioral patterns (e.g., message size, inter-message delays) can be used to passively distinguish between humans and chatbots. Further, we discuss several potential interrogation strategies for users and chat room administrators who may need to actively distinguish between a human and a chatbot, quickly and reliably, during distributed communication sessions. Interestingly, these issues are in many ways analogous to the identification problem faced by interrogators in a Turing Test, and the proposed methods and strategies might find application to and inspiration from this topic as well.
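
The passive-detection idea can be sketched directly from the two features named in the abstract, message size and inter-message delay. In the Python sketch below the features follow the paper, but the thresholds are illustrative guesses, not values the authors report.

```python
from statistics import mean, pstdev

def looks_like_a_bot(messages, min_delay_cv=0.3, max_len_spread=5.0):
    """Passive screen for one sender based on gross behavioral patterns.
    `messages` is a list of (timestamp_seconds, text) tuples. The thresholds
    are hypothetical defaults, not empirically derived values."""
    if len(messages) < 5:
        return False  # too little evidence either way
    times = [t for t, _ in messages]
    delays = [b - a for a, b in zip(times, times[1:])]
    lengths = [len(text) for _, text in messages]

    # Human chat is bursty, so the coefficient of variation of delays tends
    # to be high; near-constant pacing is suspicious.
    delay_cv = pstdev(delays) / mean(delays) if mean(delays) > 0 else 0.0
    too_regular_timing = delay_cv < min_delay_cv

    # Scripted bots often emit messages of nearly constant length.
    too_uniform_length = pstdev(lengths) < max_len_spread

    return too_regular_timing and too_uniform_length
```

A chat-room administrator could run such a screen over a sliding window of recent traffic and route flagged senders to one of the active interrogation strategies the paper discusses.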


Proceedings of SPIE | 2011

Rise of the HMD: the need to review our human factors guidelines

Eric E. Geiselman; Paul R. Havig

Recent years have brought on a new breed of HMDs: they have high resolution, are daylight readable, and some even have color. While these are all welcome advances to the field, we must remember to review our history. Here we review some of the research from years past, conducted before these advances, and discuss it so as to make sure the past is not forgotten and mistakes are not repeated.


National Aerospace and Electronics Conference | 2009

Ideas on authenticating humanness in collaborative systems using AI-hard problems in perception and cognition

John P. McIntire; Paul R. Havig; Lindsey K. McIntire; Henry M. Jackson

Collaborative applications including email, chat, file-sharing networks, blogs, and gaming are under constant threat of automated programs that are gaining access to, attacking, degrading, or otherwise disrupting the intended communications and interactions. Thus, an important issue in collaborative systems security is how to verify that a user is a human, and not a computer attempting to access the system for malicious purposes. We propose and discuss several AI-hard examples from perception and cognition that may be useful for distinguishing between human-level intelligence and artificial intelligence.
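
To complement the distorted-text sketch given for the 2009 Collaboration Technologies and Systems paper above, here is a toy cognition-oriented check: an odd-one-out question that relies on everyday semantic knowledge. The question bank is invented for illustration and is far weaker than the AI-hard problems the authors have in mind; it only shows the shape of such a test.

```python
import random

# Hypothetical question bank: each entry is (options, item that does not belong).
ODD_ONE_OUT = [
    (["apple", "banana", "cherry", "hammer"], "hammer"),
    (["sparrow", "eagle", "trout", "robin"], "trout"),
    (["Monday", "Tuesday", "July", "Friday"], "July"),
]

def ask_odd_one_out():
    """Ask one randomly chosen question; return True if answered correctly."""
    options, answer = random.choice(ODD_ONE_OUT)
    shuffled = random.sample(options, k=len(options))
    reply = input(f"Which one does not belong? {', '.join(shuffled)}: ")
    return reply.strip().lower() == answer
```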


Helmet- and Head-Mounted Displays VIII: Technologies and Applications | 2003

Psychophysical measurement of night vision goggle noise

Rachael L. Glasgow; Peter L. Marasco; Paul R. Havig; Gary L. Martinsen; George A. Reis; Eric L. Heft

Pilots, developers, and other users of night-vision goggles (NVGs) have pointed out that different NVG image intensifier tubes have different subjective noise characteristics. Currently, no good model of the visual impact of NVG noise exists. Because it is very difficult to objectively measure the noise of an NVG, a method for assessing noise subjectively using simple psychophysical procedures was developed. This paper discusses the use of a computer program to generate noise images similar to what an observer sees through an NVG. The generated images were based on 1/f (where f is frequency) filtered white noise with several adjustable parameters; adjusting each parameter varied a different characteristic of the noise. This paper also discusses a study in which observers compared the computer-generated noise images to true NVG noise and were asked to determine which computer-generated image best represented the true noise. This method was repeated with different types of NVGs and at different luminance levels to study which NVG parameters cause variations in NVG noise.
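
For readers who want to reproduce the flavor of the stimuli, the NumPy sketch below synthesizes a 1/f^alpha filtered white-noise image of the kind described. The exponent, image size, and grey-level scaling are adjustable parameters of the sketch, not the exact values used in the study.

```python
import numpy as np

def one_over_f_noise(size=512, alpha=1.0, seed=None):
    """Generate a 1/f^alpha filtered white-noise image (8-bit grey levels)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((size, size))
    spectrum = np.fft.fft2(white)

    # Radial spatial-frequency grid; reuse the lowest nonzero frequency at DC
    # to avoid dividing by zero.
    fx = np.fft.fftfreq(size)
    fy = np.fft.fftfreq(size)
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    radius[0, 0] = radius[0, 1]

    filtered = np.real(np.fft.ifft2(spectrum / radius ** alpha))

    # Rescale to 0-255 for display.
    filtered -= filtered.min()
    filtered *= 255.0 / filtered.max()
    return filtered.astype(np.uint8)
```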

Collaboration


Dive into Paul R. Havig's collaborations.

Top Co-Authors

Avatar

John P. McIntire

Air Force Research Laboratory

View shared research outputs
Top Co-Authors

Avatar

Eric E. Geiselman

Air Force Research Laboratory

View shared research outputs
Top Co-Authors

Avatar

Eric L. Heft

Air Force Research Laboratory

View shared research outputs
Top Co-Authors

Avatar

George A. Reis

Wright-Patterson Air Force Base

View shared research outputs
Top Co-Authors

Avatar

Peter L. Marasco

Air Force Research Laboratory

View shared research outputs
Top Co-Authors

Avatar
Top Co-Authors

Avatar

Gary L. Martinsen

Air Force Research Laboratory

View shared research outputs
Top Co-Authors

Avatar
Top Co-Authors

Avatar

David L. Post

Air Force Research Laboratory

View shared research outputs
Top Co-Authors

Avatar

J. Chris Jenkins

Air Force Research Laboratory

View shared research outputs