Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Amy Banic is active.

Publication


Featured research published by Amy Banic.


IEEE Virtual Reality Conference | 2015

3DTouch: A wearable 3D input device for 3D applications

Anh Mai Nguyen; Amy Banic

3D applications appear in every corner of life in the current technology era. There is a need for a ubiquitous 3D input device that works across many different platforms, from head-mounted displays (HMDs) to mobile touch devices, 3DTVs, and even Cave Automatic Virtual Environments. We present 3DTouch [1], a novel wearable 3D input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap for a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This video presents a working prototype of our solution, which is described in detail in the paper [1]. Our approach relies on a relative positioning technique using an optical laser sensor (OPS) and a 9-DOF inertial measurement unit (IMU). The device employs touch input for the benefits of passive haptic feedback and movement stability. Moreover, with touch interaction, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices. We propose a set of 3D interaction techniques, including selection, translation, and rotation, using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. We envision that modular solutions like 3DTouch open up a whole new design space for interaction techniques to build on.
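The relative positioning idea described in the abstract — an optical sensor reporting 2D displacement in its own surface plane, with the IMU orientation rotating that motion into world space — can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation; the function names, the quaternion convention (w, x, y, z), and the `counts_per_mm` resolution constant are assumptions.

```python
import numpy as np

def rotate_by_quaternion(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def relative_3d_delta(dx, dy, orientation_q, counts_per_mm=400.0):
    """Convert a 2D optical-sensor displacement (in sensor counts) into a
    3D cursor delta (in mm) by rotating the in-plane motion with the IMU
    orientation, which maps the sensor frame into the world frame."""
    planar = np.array([dx / counts_per_mm, dy / counts_per_mm, 0.0])
    return rotate_by_quaternion(orientation_q, planar)
```

With the identity orientation the cursor simply follows the touched surface; tilting the device (here, a 90-degree rotation about z) redirects the same swipe along a different world axis, which is the essence of a relative, orientation-aware positioning scheme.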


International Conference on Human Interface and Management of Information | 2014

Selection Classification for Interaction with Immersive Volumetric Visualizations

Amy Banic

Visualization enables scientists to transform data from its raw form into a visual form that facilitates discoveries and insights. Although there are advantages to displaying inherently 3-dimensional (3D) data in immersive environments, those advantages are hampered by the challenges involved in selecting volumes of that data for exploration or analysis. Selection involves the user identifying a set of points for a specific task. This paper presents preliminary data collection on natural user actions for volume selection. It also presents a research agenda outlining an extension of volume selection classification, as well as challenges, for designing components for direct selection of volumes of data points.


International Conference on Virtual Reality and Visualization | 2014

Effects on Performance of Analytical Tools for Visually Demanding Tasks through Direct and Indirect Touch Interaction in an Immersive Visualization

Zhibo Sun; Ashish Dhital; Nattaya Areejitkasem; Neera Pradhan; Amy Banic

In this paper, we present an investigation of the performance effects of analytical tools through direct and indirect touch interaction in an immersive visualization. We explored two types of touch-based input devices (one with a display screen, for direct touch, and one without, for indirect touch), and compared these two touch-based input devices with a 6-degrees-of-freedom (DOF) tracked input device and a 2DOF input device, where a user could interact in a 6DOF spatial context but the degrees of freedom were constrained. The results revealed that for visually demanding tasks, touch input is comparable to 6DOF input; however, it is important to use physical means to constrain degrees of freedom to retain performance levels when using analytical tools involving selection. Furthermore, the results revealed that precision can be negatively affected by the design of the direct touch interface. Our results have implications for touch-based interface design as well as design considerations when reducing degrees-of-freedom control.


Symposium on Spatial User Interaction | 2018

An Exploration of Altered Muscle Mappings of Arm to Finger Control for 3D Selection

Elliot O. Hunt; Amy Banic

In this poster, we present a novel 3-dimensional (3D) interaction technique, Altered Muscle Mapping (AMM), which re-maps muscle movements of the hands/arms to the fingers/wrists. We implemented an initial design of AMM as a 3D selection technique, in which finger movements translate a virtual cursor (in 3 degrees of freedom) for selection. The performance benefits of direct manipulation may be preserved while reducing physical fatigue. We designed an initial set of mapping variations. Results from an initial pilot study provide early performance insights into the mapping configurations. AMM has potential for direct hand interaction in virtual and augmented reality and for users with a limited range of motion.
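As an illustration of what a mapping variation might look like, the sketch below drives each cursor axis from a chosen finger/wrist input channel with a per-axis gain. The mapping names, channel names, and gain values are hypothetical, not taken from the poster.

```python
# Hypothetical altered-muscle-mapping table: cursor axis -> (input channel, gain).
MAPPINGS = {
    "one_to_one": {"x": ("finger_x", 1.0),
                   "y": ("finger_y", 1.0),
                   "z": ("wrist_flex", 1.0)},
    "amplified":  {"x": ("finger_x", 6.0),
                   "y": ("finger_y", 6.0),
                   "z": ("wrist_flex", 4.0)},
}

def cursor_delta(mapping_name, sample):
    """Translate one frame of finger/wrist sensor deltas into a 3-DOF
    cursor translation according to the chosen mapping variation."""
    mapping = MAPPINGS[mapping_name]
    return {axis: gain * sample.get(channel, 0.0)
            for axis, (channel, gain) in mapping.items()}
```

Keeping the variations in a table like this makes it cheap to swap configurations between pilot-study conditions without touching the selection logic itself.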


Proceedings of the Practice and Experience on Advanced Research Computing | 2018

Evaluation of Scientific Workflow Effectiveness for a Distributed Multi-User Multi-Platform Support System for Collaborative Visualization

Rajiv Khadka; James H. Money; Amy Banic

Collaboration among research scientists across multiple types of visualizations and platforms is useful to enhance scientific workflow and can lead to unique analysis and discovery. However, current analytic tools and visualization infrastructure lack sufficient capabilities to fully support collaboration across multiple types of visualizations, display/interactive systems, and geographically distributed researchers. We have combined, adapted, and enhanced several emerging immersive and visualization technologies into a novel collaboration system that provides scientists with the ability to connect with other scientists and work together across multiple visualization platforms (i.e., stereoscopic versus monoscopic), multiple datasets (i.e., 3-dimensional versus 2-dimensional data), and multiple visualization techniques (i.e., volumetric rendering versus 2D plots). We have demonstrated several use cases of this system in materials science, manufacturing, planning, and other areas. In one such use case, our collaboration system imports materials science data (i.e., a graphite billet) and enables multiple scientists to analyze and explore the density change of graphite across immersive and non-immersive systems, which helps in understanding potential structural problems in it. We recruited scientists who work with the datasets we demonstrate in three use-case scenarios and conducted an experimental user study to evaluate our novel collaboration system on scientific visualization workflow effectiveness. In this paper, we present the results on task completion time, task performance, user experience, and feedback among multiple geographically distributed collaborators using different platforms for collaboration.


International Conference on Virtual, Augmented and Mixed Reality | 2013

VWSocialLab: Prototype Virtual World (VW) Toolkit for Social and Behavioral Science Experimental Set-Up and Control

Lana Jaff; Austen L. Hayes; Amy Banic

There are benefits for social and behavioral researchers in conducting studies in online virtual worlds. However, learning scripting typically takes additional time, or money to hire a consultant. We propose a prototype Virtual World Toolkit to help researchers design, set up, and run experiments in virtual worlds with little coding or scripting experience needed. We explored three types of prototype designs, focusing on a traditional interface, with pilot results. We also present results of an initial expert user study of our toolkit to determine its learnability, usability, and feasibility for conducting experiments. Results suggest that our toolkit requires little training and has sufficient capabilities for a basic experiment. The toolkit received positive feedback from a number of expert users, who considered it a promising first version that lays the foundation for future improvements. This toolkit prototype contributes to enhancing researchers' capabilities for conducting social/behavioral studies in virtual worlds, and we hope it will empower social and behavioral researchers by providing a toolkit prototype that requires less time, effort, and cost to set up stimulus-response types of human-subject studies in virtual worlds.


International Conference of Design, User Experience, and Usability | 2013

Investigation of interaction modalities designed for immersive visualizations using commodity devices in the classroom

Kira Lawrence; Alisa Maas; Neera Pradhan; Treschiel Ford; Jacqueline J. Shinker; Amy Banic

In this paper we present initial research investigating the design of collaborative interaction modalities for classroom-based immersive visualizations of 3D spatial data, with an initial implementation for geo-spatial applications. Additionally, we conducted some pilot testing to gain a sense of our design decisions and of where user error might occur. This valuable feedback will allow us to redesign and refine the implementation for a more formal long-term evaluation of the system. Initial results indicate that our interaction modalities may facilitate teaching and learning, but that the choice of devices should differ by user type.


Journal of Computing and Information Science in Engineering | 2013

Evaluation of System-Directed Multimodal Systems for Vehicle Inspection

Lauren Cairco Dukes; Amy Banic; Jerome McClendon; Toni Bloodworth Pence; James L. Mathieson; Joshua D. Summers; Larry F. Hodges

…ed the inspection task. Three concentric geometric shapes comprised one symbol that simulated a checkpoint, called an "inspection item marker" (Fig. 5(b)). The inspection item markers were used as indicators for the location and status of the item and how to perform the inspection. The outermost shape, called the "location indicator," matched a shape description on the checklist. Location indicators had three parameters: size (large, inscribed inside an 8.5 × 11 in. piece of paper, or small, inscribed inside a quarter-sheet of paper), color (red, yellow, green, or blue), and shape (triangle, circle, or square). For example, through text-to-speech and/or screen output, a participant is informed that the current item to check is a "Large Red Triangle." The participant would then find that shape on the car. No two location indicators were the same on any checklist, so the location of the current checkpoint was unambiguous. Once the participant found the shape, the participant would then look at the shapes within the location indicator. The next concentric shape, called the "task indicator," indicated how the participant should inspect the item. Depending on whether the shape was a square, pentagon, or hexagon, the participant should either look at but not touch the marker, touch the marker with only one hand, or touch the marker with two hands, respectively. Finally, within the task indicator, there was a row of four dots called the "defect indicator." The shading of these four dots indicated to the participant whether an item had a defect. An odd number or zero shaded dots indicated an item should pass inspection, while an even number of shaded dots indicated a defect, which should fail inspection. In designing the abstracted task, our goal was to simulate the cognitive load and time requirements of actual vehicle inspection.
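The marker-decoding rules above (the task-indicator shape selects the inspection action; the parity of the shaded defect-indicator dots decides pass/fail) can be sketched as follows. The function and label names are illustrative, not from the paper; the rules themselves are as described.

```python
# Task-indicator shape -> required inspection action, per the described rules.
TASK_BY_SHAPE = {
    "square":   "look only",
    "pentagon": "one-hand touch",
    "hexagon":  "two-hand touch",
}

def decode_marker(task_shape, shaded_dots):
    """Return (inspection action, pass/fail) for one inspection item marker.

    An odd number (or zero) of shaded defect-indicator dots means the item
    passes; an even, nonzero number means it has a defect and should fail.
    """
    action = TASK_BY_SHAPE[task_shape]
    passes = shaded_dots == 0 or shaded_dots % 2 == 1
    return action, ("pass" if passes else "fail")
```

For example, the marker in Fig. 5(b) — a pentagon task indicator with one shaded dot — decodes to a one-handed touch that passes inspection, matching the caption.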
We chose three-word item identifiers to roughly match the phrase length of items on a typical inspection checklist and the time it would take text-to-speech to read them. Requiring the user to look at shapes to determine what action to take, and whether the item passed inspection, simulated the time and cognitive load it would take an inspector to determine the status of an actual inspection item, since in real vehicle inspection an associate must look at a part, recall how to inspect it, and then determine whether the part passes or fails inspection. Inspection item markers were placed throughout the vehicle frame to simulate the various checkpoint locations in a vehicle. The percentage of defective items per checklist approximated the defect rate reported through actual vehicle inspection at BMW. Since inspectors are provided with reference material at their stations should they forget how to inspect a particular checkpoint, we provided our participants with a lanyard holding a reference sheet for the meanings of the task and defect indicators. For each trial, 20 inspection item markers were placed on the vehicle: the 15 items on the checklist plus five distracters, since expert inspectors would not inspect every item present on a vehicle. The participant was presented with one checklist of 15 items, ordered by their location moving counterclockwise around the vehicle. This simulated the actual inspection process, since an inspector is directed to check only certain vehicle features. Participants were given 106 s to complete the checklist, based on the standard time frame for factory inspections. If the participant finished before time was up, he or she said "stop" to complete the trial. If the participant did not finish in time, the system automatically stopped accepting input. We created unique checklists for ten trials. The first checklist was used for a practice trial before using any devices for input.
In this first trial, the participant inspected all 20 checkpoints with the help of an experimenter, to familiarize them with the inspection item markers and the locations of the checkpoints on the vehicle. The user then completed nine trials, with three trials per device.

4.3 Measures. For each participant, we gathered data through pre- and postquestionnaires, a debriefing interview, experimenter logs, and software logs. The prequestionnaire gathered information about demographics, level of use of various hardware configurations, vehicle knowledge, and learning styles. We recorded the number of items inspected and not inspected, the number of items inspected correctly and incorrectly, the time it took for inspection completion, and each voice command the user spoke. While the participant conducted an inspection, an experimenter marked each checkpoint on a clipboard to indicate whether the user had inspected the correct item with the correct action (look, one-hand touch, or two-hand touch). Finally, in the postquestionnaire and debriefing interview, we asked questions related to usability, preferences, and effectiveness of the interfaces and hardware configurations.

[Fig. 5: (a) Vehicle body used for experimental evaluation. (b) Example of shapes used for the abstracted inspection task. This item would be called "Small Blue Square," would require a one-handed touch for the inspection action since the task indicator is a pentagon, and would pass inspection since an odd number of dots are shaded.]

4.4 Results. Of 25 participants from Clemson University, there were 10 females and 15 males, aged 18–53 (mean = 23).
Participants rated themselves as having low (N = 12), average (N = 8), and high (N = 5) levels of vehicle knowledge. Twenty-four participants were college students and one was a postdoctoral fellow. Since the time to complete each trial was limited to 106 s, many participants did not complete all trials for an input device. Overall, 13 participants completed all three trials using the handheld configuration (H), 14 completed all three trials with the large-screen configuration (L), and 18 completed all three trials with the monocular display configuration (M). The task performance data were treated with a repeated-measures 3 × 3 analysis of variance (ANOVA) to test for the within-subject effects of hardware configuration and of trial. Data reported from the postquestionnaires were analyzed using the chi-square test. The F and χ² tests reported for analysis used an alpha level of 0.05 to indicate significance.

4.4.1 Accuracy. The accuracy percentage of correctly checked items was determined by dividing the number of items that were correctly checked by the total number of items checked in each trial. Correctly checked items are those for which the participant both performed the correct inspection action and reported the correct inspection result. There was no significant main effect of hardware configuration type on mean accuracy percentage of correctly checked items, F(2,38) = 1.58, p = 0.22, η² = 0.08. All configurations allowed for high accuracy, with the monocular display (M) the highest and the handheld (H) the lowest. There was no significant main effect found among the sets of trials, nor any interaction effect of device by trial. The defect detection percentage was determined by dividing the number of defects that were correctly detected by the total number of defects for each trial.
Since the number of defects varied over trials, the total accuracy of defect detection for each trial was averaged across trials and participants, and then analyzed using a one-way ANOVA. There was no significant main effect found for defect detection accuracy, nor any significant interaction effect of device by trial. However, all configurations allowed for high accuracy, listed from highest to lowest: monocular display (M), large screen (L), and handheld (H). No significant differences were found for accuracy grouped by vehicle knowledge, device usage, or learning style. Unfortunately, no accuracy data are recorded for BMW inspectors, so we could not compare our accuracy results to the baseline accuracy achieved in the manufacturing environment.

4.4.2 Task Completion Times. Analysis revealed a significant main effect of hardware configuration type on overall task completion time. The handheld configuration (H) allowed for significantly faster overall completion time than the monocular display configuration (M) and the large-screen configuration (L) (Table 1). In addition, participants' overall performance became significantly faster by the third and last trial, F(2,38) = 4.38, p = 0.019, η² = 0.19. There was no significant interaction effect of hardware configuration by trial. A few participants discontinued the inspection task accidentally, indicating that participants were having difficulty or accidentally executed a command. A participant possibly discontinued the task accidentally if the overall task completion time was less than 105.5 s (due to rounding error) and they did not check all 15 items on the list. There were eight accidental discontinuations for the handheld configuration (H), possibly due to difficulties with the touch-screen interface, while there were no accidental discontinuations of the task for the monocular (M) or large-screen (L) configurations, likely due to the Wizard-of-Oz setup.
As a result of several participants not completing the full trial, it may be more informative to analyze completion time per individual item. This was calculated by dividing each participant's overall time by the number of items that participant actually inspected. We did not find a significant main effect of hardware configuration type on task completion time per item, F(2,38) = 1.83, p = 0.18, η² = 0.09. Ho…


Archive | 2014

Low-cost Augmented Reality prototype for controlling network devices.

Anh Mai Nguyen; Amy Banic


arXiv: Human-Computer Interaction | 2014

3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit.

Anh Mai Nguyen; Amy Banic

Collaboration


Dive into Amy Banic's collaborations.

Top Co-Authors


James H. Money

Idaho National Laboratory
