Jonathan W. Decker
United States Naval Research Laboratory
Publication
Featured research published by Jonathan W. Decker.
IEEE Virtual Reality Conference | 2012
Mark A. Livingston; Jay Sebastian; Zhuming Ai; Jonathan W. Decker
The Microsoft Kinect for Xbox 360 (“Kinect”) provides a convenient and inexpensive depth sensor and, with the Microsoft software development kit, a skeleton tracker (Figure 2). These have great potential to be useful as virtual environment (VE) control interfaces for avatars or for viewpoint control. In order to determine its suitability for our applications, we devised and conducted tests to measure standard performance specifications for tracking systems. We evaluated the noise, accuracy, resolution, and latency of the skeleton tracking software. We also measured the range in which the person being tracked must be in order to achieve these values.
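The tracker qualities named above (noise, accuracy) have straightforward estimators once joint positions are logged. A minimal sketch, assuming samples arrive as (x, y, z) tuples from a nominally stationary joint; the function names and the RMS / mean-offset definitions are illustrative choices, not the paper's exact protocol:

```python
import statistics

def tracker_noise(samples):
    """RMS deviation of a nominally stationary joint position about its
    mean -- a common way to quantify tracker noise (jitter)."""
    means = [statistics.fmean(axis) for axis in zip(*samples)]
    sq = [sum((v - m) ** 2 for v, m in zip(s, means)) for s in samples]
    return (sum(sq) / len(samples)) ** 0.5

def tracker_accuracy(samples, ground_truth):
    """Mean Euclidean offset from a surveyed ground-truth position."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return statistics.fmean(dist(s, ground_truth) for s in samples)
```

Resolution and latency would need controlled stimuli (small known displacements, a timestamped physical event) rather than a static log, so they are not sketched here.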
Eurographics Workshop on Parallel Graphics and Visualization | 2008
John Kloetzli; Brian Strege; Jonathan W. Decker; Marc Olano
We present an algorithm for solving the Longest Common Subsequence problem using graphics hardware acceleration. We identify a parallel memory access pattern which enables us to run efficiently on multiple layers of parallel hardware by matching each layer to the best sub-algorithm, which is determined using a mix of theoretical and experimental data including knowledge of the specific hardware and memory structure of each layer. We implement a linear-space, cache-coherent algorithm on the CPU, using a two-level algorithm on the GPU to compute subproblems quickly. The combination of all three running on a CPU/GPU pair is a fast, flexible and scalable solution to the Longest Common Subsequence problem. Our design method is applicable to other algorithms in the Gaussian Elimination Paradigm, and can be generalized to more levels of parallel computation such as GPU clusters.
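For reference, the linear-space CPU baseline mentioned above can be sketched with a two-row dynamic program; this is a generic sequential LCS length computation, not the paper's GPU or cache-coherent variant:

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of a and b, keeping only
    two DP rows so space is O(min(len(a), len(b)))."""
    if len(b) > len(a):
        a, b = b, a  # keep the DP row as short as possible
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, start=1):
            if ca == cb:
                curr.append(prev[j - 1] + 1)
            else:
                curr.append(max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

print(lcs_length("AGGTAB", "GXTXAYB"))  # -> 4 ("GTAB")
```

Recovering the subsequence itself in linear space needs Hirschberg's divide-and-conquer refinement of this same recurrence.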
IEEE Transactions on Visualization and Computer Graphics | 2012
Mark A. Livingston; Jonathan W. Decker; Zhuming Ai
Multivariate visualization techniques have attracted great interest as the dimensionality of data sets grows. One premise of such techniques is that simultaneous visual representation of multiple variables will enable the data analyst to detect patterns amongst multiple variables. Such insights could lead to development of new techniques for rigorous (numerical) analysis of complex relationships hidden within the data. Two natural questions arise from this premise: Which multivariate visualization techniques are the most effective for high-dimensional data sets? How does the analysis task change this utility ranking? We present a user study with a new task to answer the first question. We provide some insights to the second question based on the results of our study and results available in the literature. Our task led to significant differences in error, response time, and subjective workload ratings amongst four visualization techniques. We implemented three integrated techniques (Data-driven Spots, Oriented Slivers, and Attribute Blocks), as well as a baseline case of separate grayscale images. The baseline case fared poorly on all three measures, whereas Data-driven Spots yielded the best accuracy and was among the best in response time. These results differ from comparisons of similar techniques with other tasks, and we review all the techniques, tasks, and results (from our work and previous work) to understand the reasons for this discrepancy.
International Symposium on Mixed and Augmented Reality | 2009
Mark A. Livingston; Zhuming Ai; Jonathan W. Decker
Properly perceived stereo display is often assumed to be vital in augmented reality (AR) displays used for close distances, echoing the general understanding from the perception literature. However, the accuracy of the perception of stereo in head-worn AR displays has not been studied greatly. We conducted a user study to elicit the precision of stereo perception in AR and its dependency on the size and contrast of the stimulus. We found a strong effect of contrast on the disparity users desired to make a virtual target verge at the distance of a real reference object. We also found that whether the target began behind or in front of the reference in a method of adjustments protocol made a significant difference. The mean disparity in the rendering that users preferred had a strong linear relationship with their IPD. We present our results and infer stereoacuity thresholds.
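The reported linear relationship between preferred rendering disparity and IPD follows from simple viewing geometry. A minimal sketch under a pinhole model with the virtual image plane at a fixed distance; the function, its units, and its sign convention are illustrative assumptions, not the study's apparatus:

```python
def screen_disparity(ipd_mm: float, screen_m: float, target_m: float) -> float:
    """On-screen horizontal disparity (mm) that makes a virtual point
    verge at target_m when the display plane sits at screen_m.
    Positive values are uncrossed (target behind the display plane)."""
    return ipd_mm * (1.0 - screen_m / target_m)

# e.g. a 64 mm IPD, image plane at 2 m, target at 4 m:
print(screen_disparity(64.0, 2.0, 4.0))  # -> 32.0 mm, uncrossed
```

Note the disparity scales linearly with IPD at any fixed pair of distances, consistent with the relationship observed in the study.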
Visualization and Data Analysis | 2011
Mark A. Livingston; Jonathan W. Decker; Zhuming Ai
Datasets over a spatial domain are common in a number of fields, often with multiple layers (or variables) within data that must be understood together via spatial locality. Thus one area of long-standing interest is increasing the number of variables encoded by properties of the visualization. A number of properties have been demonstrated and/or proven successful with specific tasks or data, but there has been relatively little work comparing the utility of diverse techniques for multi-layer visualization. As part of our efforts to evaluate the applicability of such visualizations, we implemented five techniques which represent a broad range of existing research (Color Blending, Oriented Slivers, Data-Driven Spots, Brush Strokes, and Stick Figures). Then we conducted a user study wherein subjects were presented with composites of three, four, and five layers (variables) using one of these methods and asked to perform a task common to our intended end users (GIS analysts). We found that the Oriented Slivers and Data-Driven Spots performed the best, with Stick Figures yielding the lowest accuracy. Through analyzing our data, we hope to gain insight into which techniques merit further exploration and offer promise for visualization of data sets with ever-increasing size.
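As a rough illustration of how one such integrated technique can be built, the sketch below modulates a jittered grid of Gaussian spots by a data layer, in the spirit of Data-Driven Spots; the grid spacing, jitter range, and kernel width are illustrative parameters, not the study's implementation:

```python
import numpy as np

def data_driven_spots(layer, spot_spacing=8, sigma=2.0, seed=0):
    """One Data-Driven-Spots-style pass (simplified): a jittered grid of
    Gaussian spots whose opacity follows the local data value, giving
    each variable its own spot layer for later compositing."""
    h, w = layer.shape
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w))
    for cy in range(spot_spacing // 2, h, spot_spacing):
        for cx in range(spot_spacing // 2, w, spot_spacing):
            jy = cy + rng.integers(-2, 3)  # small jitter breaks the grid
            jx = cx + rng.integers(-2, 3)
            mask += np.exp(-((yy - jy) ** 2 + (xx - jx) ** 2) / (2 * sigma**2))
    return np.clip(mask, 0, 1) * layer  # spot opacity follows the data
```

A multi-layer composite would run this once per variable (with a distinct seed and hue per layer) and alpha-blend the results.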
Visualization and Data Analysis | 2013
Mark A. Livingston; Jonathan W. Decker; Zhuming Ai
Multivariate visualization techniques have been applied to a wide variety of visual analysis tasks and a broad range of data types and sources. Their utility has been evaluated in a modest range of simple analysis tasks. In this work, we extend our previous task to a case of time-varying data. We implemented five visualizations of our synthetic test data: three previously evaluated techniques (Data-driven Spots, Oriented Slivers, and Attribute Blocks), one hybrid of the first two that we call Oriented Data-driven Spots, and an implementation of Attribute Blocks that merges the temporal slices. We conducted a user study of these five techniques. Our previous finding (with static data) was that users performed best when the density of the target (as encoded in the visualization) was either highest or had the highest ratio to non-target features. The time-varying presentations gave us a wider range of density and density gains from which to draw conclusions; we now see evidence for the density gain as the perceptual measure, rather than the absolute density.
Proceedings of SPIE | 2018
G. Erbert; P. Crump; Jonathan W. Decker; M. Winterfeldt; Philipp Albrodt; Marc Hanna; Patrick Georges; Gaëlle Lucas-Leclin; G. Blume; Frédéric Moron
Improved diode laser beam combining techniques are in strong demand for applications in material processing. Coherent beam combining (CBC) is the only combining approach that has the potential to maintain or even improve all laser properties, and thus has high potential for future systems. As part of our ongoing studies into CBC of diode lasers, we present recent progress in the coherent superposition of high-power single-pass tapered laser amplifiers. The amplifiers are seeded by a DFB laser at λ = 976 nm, where the seed is injected into a laterally single-mode ridge-waveguide input section. The phase pistons on each beam are actively controlled by varying the current in the ridge section of each amplifier, using a sequential hill-climbing algorithm, resulting in a combined beam with power fluctuations of below 1%. The currents into the tapered sections of the amplifiers are separately controlled, and remain constant. In contrast to our previous studies, we favour a limited number of individual high-power amplifiers, in order to preserve a high extracted power per emitter in a simple, low-loss coupling arrangement. Specifically, a multi-arm interferometer architecture with only three devices is used, constructed using 6 mm-long tapered amplifiers, mounted junction up on C-mounts, to allow separate contact to single mode and amplifier sections. A maximum coherently combined power of 12.9 W is demonstrated in a nearly diffraction-limited beam, corresponding to a 65% combining efficiency, with power mainly limited by the intrinsic beam quality of the amplifiers. Further increased combined power is currently sought.
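The sequential hill-climbing loop used for the phase pistons can be sketched in a few lines. The model below is an idealized three-emitter interferometer with unit coupling; `combined_power`, the amplitudes, and the step size are illustrative assumptions, not details of the experimental bench:

```python
import math

def combined_power(phases, amps=(1.0, 1.0, 1.0)):
    """Coherently combined power |sum_k a_k * exp(i*phi_k)|^2 for an
    idealized lossless combiner (a toy model, not the real setup)."""
    re = sum(a * math.cos(p) for a, p in zip(amps, phases))
    im = sum(a * math.sin(p) for a, p in zip(amps, phases))
    return re * re + im * im

def sequential_hill_climb(phases, step=0.05, iters=500):
    """Dither one phase piston at a time and keep the move if power
    rises -- the sequential hill-climbing strategy described above."""
    phases = list(phases)
    best = combined_power(phases)
    for it in range(iters):
        k = it % len(phases)  # visit emitters in turn
        for delta in (step, -step):
            trial = phases[:]
            trial[k] += delta
            p = combined_power(trial)
            if p > best:
                phases, best = trial, p
                break
    return phases, best
```

In hardware, the phase step is applied by adjusting the ridge-section current of one amplifier and the power is read from a detector behind the combiner, rather than computed from a model.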
International Conference on Applied Human Factors and Ergonomics | 2018
Mark A. Livingston; Zhuming Ai; Jonathan W. Decker
Research into the human factors of augmented reality (AR) systems goes back nearly as far as research into AR. This makes intuitive sense for an interactive system, but human factors investigations are by most estimates still relatively rare in the field. Our AR research used the human-centered design paradigm and thus was driven by human factors issues for significant portions of the development of our prototype system. As a result of early research and more recent prototype development, mobile AR is now being incorporated into military training and studied for operational uses. In this presentation, we will review human factors evaluations conducted for military applications of mobile AR systems as well as other relevant evaluations.
International Conference on Augmented Cognition | 2017
Andre Harrison; Mark A. Livingston; Derek Brock; Jonathan W. Decker; Dennis Perzanowski; Christopher Van Dolson; Joseph Mathews; Alexander Lulushi; Adrienne Raglin
Statistical graphs are images that display quantitative information in a visual format that allows for the easy and consistent interpretation of the information. Often, statistical graphs are in the form of line graphs or bar graphs. In fields, such as cybersecurity, sets of statistical graphs are used to present complex information; however, the interpretation of these more complex graphs is often not obvious. Unless the viewer has been trained to understand each graph used, the interpretation of the data may be limited or incomplete [1]. In order to study the perception of statistical graphs, we tracked users’ eyes while studying simple statistical graphs. Participants studied a graph, and later viewed a graph purporting to be a subset of the data. They were asked to look for a substantive change in the meaning of the second graph compared to the first.
Archive | 2016
Benjamin Newsom; Ranjeev Mittu; Mark A. Livingston; Stephen Russell; Jonathan W. Decker; Eric Leadbetter; Ira S. Moskowitz; Antonio Gilliam; Ciara Sibley; Joseph Coyne; Myriam Abramson
The problem of automatically recognizing a user’s operational context, the implications of its shifting properties, and reacting in a dynamic manner are at the core of mission intelligence and decision making. Environments such as the OZONE Widget Framework (http://www.owfgoss.org) (OWF) provide the foundation for capturing the objectives, actions, and activities of both the mission analyst and the decision maker. By utilizing a “context container” that envelops an OZONE Application, we hypothesize that both user action and intent can be used to characterize user context with respect to operational modality (strategic, tactical, opportunistic, or random). As the analyst moves from one operational modality to another, we propose that information visualization techniques should adapt and present data and analysis pertinent to the new modality and to the trend of the shift. As a system captures the analyst’s actions and decisions in response to the new visualizations, the context container has the opportunity to assess the analyst’s perception of the information value, risk, uncertainty, prioritization, projection, and insight with respect to the current context stage. This paper will describe a conceptual architecture for an adaptive work environment for inferring user behavior and interaction within the OZONE framework, in order to provide the decision maker with context relevant information. We then bridge from our more conceptual OWF discussion to specific examples describing the role of context in decision making. Our first concrete example demonstrates how the Web analytics of a user’s browsing behavior can be used to authenticate users. The second example briefly examines the role of context in cyber security. 
Our third example illustrates how to capture the behavior of expert analysts in exploratory data analysis, which coupled with a recommender system, advises domain experts of “standard” analytical operations in order to suggest operations novel to the domain, but consistent with analytical goals. Finally, our fourth example discusses the role of context in a supervisory control problem when managing multiple autonomous systems.