
Publications


Featured research published by Juan David Hincapié-Ramos.


Human Factors in Computing Systems | 2014

Consumed endurance: a metric to quantify arm fatigue of mid-air interactions

Juan David Hincapié-Ramos; Xiang Guo; Paymahn Moghadasian; Pourang Irani

Mid-air interactions are prone to fatigue and lead to a feeling of heaviness in the upper limbs, a condition casually termed as the gorilla-arm effect. Designers have often associated limitations of their mid-air interactions with arm fatigue, but do not possess a quantitative method to assess and therefore mitigate it. In this paper we propose a novel metric, Consumed Endurance (CE), derived from the biomechanical structure of the upper arm and aimed at characterizing the gorilla-arm effect. We present a method to capture CE in a non-intrusive manner using an off-the-shelf camera-based skeleton tracking system, and demonstrate that CE correlates strongly with the Borg CR10 scale of perceived exertion. We show how designers can use CE as a complementary metric for evaluating existing and designing novel mid-air interactions, including tasks with repetitive input such as mid-air text-entry. Finally, we propose a series of guidelines for the design of fatigue-efficient mid-air interfaces.
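
The CE metric itself is not reproduced in this listing. As a rough illustration of the idea only, the sketch below computes static shoulder torque from an arm pose and converts it into a consumed-endurance percentage using a Rohmert-type endurance curve; the arm mass, maximum shoulder torque, and simplified single-segment arm model are illustrative assumptions, not the values or the biomechanical model used in the paper.

```python
import numpy as np

# Illustrative constants (assumptions, not the paper's values).
GRAVITY = 9.81               # m/s^2
ARM_MASS = 3.5               # kg, assumed combined mass of upper arm + forearm + hand
MAX_SHOULDER_TORQUE = 60.0   # N*m, assumed maximum voluntary shoulder torque

def shoulder_torque(shoulder_xyz, arm_com_xyz):
    """Static torque at the shoulder from holding the arm at its centre of mass."""
    lever = np.asarray(arm_com_xyz, dtype=float) - np.asarray(shoulder_xyz, dtype=float)
    gravity_force = np.array([0.0, -ARM_MASS * GRAVITY, 0.0])   # N, y axis points up
    return float(np.linalg.norm(np.cross(lever, gravity_force)))

def endurance_seconds(relative_effort_pct):
    """Rohmert-type endurance curve: how long a static effort can be held, in seconds."""
    if relative_effort_pct <= 15.0:       # below ~15% effort, endurance is effectively unbounded
        return float("inf")
    return 1236.5 / (relative_effort_pct - 15.0) ** 0.618 - 72.5

def consumed_endurance(torques_nm, interaction_seconds):
    """CE = interaction time as a percentage of the endurance time for the average effort."""
    avg_effort = 100.0 * float(np.mean(torques_nm)) / MAX_SHOULDER_TORQUE
    return 100.0 * interaction_seconds / endurance_seconds(avg_effort)

# Example: a 30 s mid-air task with the arm held roughly horizontal in front of the body.
torques = [shoulder_torque((0.0, 1.4, 0.0), (0.0, 1.4, 0.3)) for _ in range(300)]
print(f"CE ~ {consumed_endurance(torques, 30.0):.1f}%")
```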


Interactive Tabletops and Surfaces | 2012

Branch-explore-merge: facilitating real-time revision control in collaborative visual exploration

Will McGrath; Brian R. Bowman; David C. McCallum; Juan David Hincapié-Ramos; Niklas Elmqvist; Pourang Irani

Collaborative work is characterized by participants seamlessly transitioning from working together (coupled) to working alone (decoupled). Groupware should therefore facilitate smoothly varying coupling throughout the entire collaborative session. Towards achieving such transitions for collaborative exploration and search, we propose a protocol based on managing revisions for each collaborator exploring a dataset. The protocol allows participants to diverge from the shared analysis path (branch), study the data independently (explore), and then contribute back their findings onto the shared display (merge). We apply this concept to collaborative search in multidimensional data, and propose an implementation where the public view is a tabletop display and the private views are embedded in handheld tablets. We then use this implementation to perform a qualitative user study involving a real estate dataset. Results show that participants leverage the BEM protocol, spend significant time using their private views (40% to 80% of total task time), and apply public view changes for consultation with collaborators.
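
As a rough, hypothetical sketch of how a branch-explore-merge revision protocol could be modelled in code (the class, method names, and example data are invented for illustration and are not the authors' implementation):

```python
import copy

class SharedExploration:
    """Public (tabletop) filter state plus private (tablet) branches keyed by collaborator."""

    def __init__(self, initial_filters=None):
        self.public = initial_filters or {}   # filters currently shown on the shared display
        self.branches = {}                    # collaborator -> private working copy

    def branch(self, who):
        """Diverge: give a collaborator a private copy of the current public state."""
        self.branches[who] = copy.deepcopy(self.public)
        return self.branches[who]

    def explore(self, who, **changes):
        """Work independently: apply filter changes only to the private branch."""
        self.branches[who].update(changes)

    def merge(self, who):
        """Contribute back: push the private findings onto the shared display."""
        self.public.update(self.branches.pop(who))

# Example: one collaborator narrows a real-estate search privately, then merges.
session = SharedExploration({"price_max": 500_000})
session.branch("alice")
session.explore("alice", bedrooms_min=3, has_garage=True)
session.merge("alice")
print(session.public)
```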


Symposium on Spatial User Interaction | 2014

Ethereal planes: a design framework for 2D information space in 3D mixed reality environments

Barrett Ens; Juan David Hincapié-Ramos; Pourang Irani

Information spaces are virtual workspaces that help us manage information by mapping it to the physical environment. This widely influential concept has been interpreted in a variety of forms, often in conjunction with mixed reality. We present Ethereal Planes, a design framework that ties together many existing variations of 2D information spaces. Ethereal Planes is aimed at assisting the design of user interfaces for next-generation technologies such as head-worn displays. From an extensive literature review, we encapsulated the common attributes of existing novel designs in seven design dimensions. Mapping the reviewed designs to the framework dimensions reveals a set of common usage patterns. We discuss how the Ethereal Planes framework can be methodically applied to help inspire new designs. We provide a concrete example of the framework's utility during the design of the Personal Cockpit, a window management system for head-worn displays.


Virtual Reality Software and Technology | 2013

Color correction for optical see-through displays using display color profiles

Srikanth Kirshnamachari Sridharan; Juan David Hincapié-Ramos; David R. Flatla; Pourang Irani

In optical see-through displays, light coming from background objects mixes with the light originating from the display, causing what is known as the color blending problem. Color blending negatively affects the usability of such displays as it impacts the legibility and color encodings of digital content. Color correction aims at reducing the impact of color blending by finding an alternative display color which, once mixed with the background, results in the color originally intended. In this paper we model color blending based on two distortions induced by the optical see-through display. The render distortion explains how the display renders colors. The material distortion explains how background colors are changed by the display material. We show the render distortion has a higher impact on color blending and propose binned-profiles (BP) - descriptors of how a display renders colors - to address it. Results show that color blending predictions using BP have a low error rate - within nine just noticeable differences (JND) in the worst case. We introduce a color correction algorithm based on predictions using BP and measure its correction capacity. Results show that light display colors can be better corrected for all backgrounds. For high-intensity backgrounds, light colors in the neutral and CyanBlue regions perform better. Finally, we elaborate on the applicability, design and hardware implications of our approach.
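
To illustrate the general idea of correcting against a display profile, the hypothetical sketch below predicts a perceived blend from a rendered color plus an attenuated background (standing in for the render and material distortions) and searches a toy binned profile for the display input whose predicted blend is closest to the intended color. The additive RGB blend, the transmission constant, and the tiny profile are simplifying assumptions; the paper works with measured profiles and perceptual color differences.

```python
import numpy as np

def predict_blend(rendered_rgb, background_rgb, material_transmission=0.7):
    """What the eye sees: the display's rendered color plus the background
    attenuated by the see-through material (all values in [0, 1] RGB)."""
    rendered = np.asarray(rendered_rgb, dtype=float)
    background = np.asarray(background_rgb, dtype=float)
    return np.clip(rendered + material_transmission * background, 0.0, 1.0)

def best_correction(intended_rgb, background_rgb, profile):
    """Search a binned display profile (display input -> measured rendered color)
    for the input whose predicted blend is closest to the intended color."""
    intended = np.asarray(intended_rgb, dtype=float)

    def blend_error(item):
        _, rendered = item
        return float(np.linalg.norm(predict_blend(rendered, background_rgb) - intended))

    display_input, _ = min(profile.items(), key=blend_error)
    return display_input

# Toy profile: a few display inputs and the colors the display actually produces for them.
profile = {
    (1.0, 1.0, 1.0): (0.9, 0.9, 0.9),
    (0.0, 0.4, 0.8): (0.05, 0.35, 0.7),
    (0.8, 0.1, 0.1): (0.7, 0.1, 0.1),
}
print(best_correction((0.0, 0.4, 0.8), (0.2, 0.2, 0.2), profile))
```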


Symposium on Spatial User Interaction | 2016

Combining Ring Input with Hand Tracking for Precise, Natural Interaction with Spatial Analytic Interfaces

Barrett Ens; Ahmad Byagowi; Teng Han; Juan David Hincapié-Ramos; Pourang Irani

Current wearable interfaces are designed to support short-duration tasks known as micro-interactions. To support productive interfaces for everyday analytic tasks, designers can leverage natural input methods such as direct manipulation and pointing. Such natural methods are now available in virtual, mobile environments thanks to miniature depth cameras mounted on head-worn displays (HWDs). However, these techniques have drawbacks, such as fatigue and limited precision. To overcome these limitations, we explore combined input: hand tracking data from a head-mounted depth camera, and input from a small ring device. We demonstrate how a variety of input techniques can be implemented using this novel combination of devices. We harness these techniques for use with Spatial Analytic Interfaces: multi-application, spatial UIs for in-situ, analytic taskwork on wearable devices. This research demonstrates how combined input from multiple wearable devices holds promise for supporting high-precision, low-fatigue interaction techniques, to support Spatial Analytic Interfaces on HWDs.
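
A minimal, hypothetical sketch of the combination described here: coarse hand-tracking input places a cursor, and small relative motions from the ring refine it before confirmation. The device interface, gain value, and names are assumptions for illustration only, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Cursor:
    x: float = 0.0
    y: float = 0.0

def update_cursor(cursor, hand_xy=None, ring_delta=None, fine_gain=0.1):
    """Hand tracking places the cursor coarsely; the ring nudges it precisely."""
    if hand_xy is not None:          # coarse, full-range, possibly jittery input
        cursor.x, cursor.y = hand_xy
    if ring_delta is not None:       # precise, low-fatigue relative input
        cursor.x += fine_gain * ring_delta[0]
        cursor.y += fine_gain * ring_delta[1]
    return cursor

cursor = Cursor()
update_cursor(cursor, hand_xy=(0.42, 0.61))    # reach toward the target with the hand
update_cursor(cursor, ring_delta=(3, -1))      # refine with the ring before confirming
print(cursor)
```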


Designing Interactive Systems | 2014

tPad: designing transparent-display mobile interactions

Juan David Hincapié-Ramos; Sophie Roscher; Wolfgang Büschel; Ulrike Kister; Raimund Dachselt; Pourang Irani

As a novel class of mobile devices with rich interaction capabilities we introduce tPads -- transparent display tablets. tPads are the result of a systematic design investigation into the ways and benefits of interacting with transparent mobiles which goes beyond traditional mobile interactions and augmented reality (AR) applications. Through a user-centered design process we explored interaction techniques for transparent-display mobiles and classified them into four categories: overlay, dual display & input, surface capture and model-based interactions. We investigated the technical feasibility of such interactions by designing and building two touch-enabled semi-transparent tablets called tPads and a range of tPad applications. Further, a user study shows that tPad interactions applied to everyday mobile tasks (application switching and image capture) outperform current mobile interactions and were preferred by users. Our hands-on design process and experimental evaluation demonstrate that transparent displays provide valuable interaction opportunities for mobile devices.


International Symposium on Mixed and Augmented Reality | 2014

SmartColor: Real-time color correction and contrast for optical see-through head-mounted displays

Juan David Hincapié-Ramos; Levko Ivanchuk; Srikanth Kirshnamachari Sridharan; Pourang Irani

Users of optical see-through head-mounted displays (OHMDs) perceive color as a blend of the display color and the background. Color blending is a major usability challenge as it leads to loss of color encodings and poor text legibility. Color correction aims at mitigating color blending by producing an alternative color which, when blended with the background, more closely approaches the color originally intended. To date, approaches to color correction do not yield optimal results or do not work in real time. This paper makes two contributions. First, we present QuickCorrection, a real-time color correction algorithm based on display profiles. We describe the algorithm, measure its accuracy and analyze two implementations for the OpenGL graphics pipeline. Second, we present SmartColor, a middleware for color management of user-interface components in OHMDs. SmartColor uses color correction to provide three management strategies: correction, contrast, and show-up-on-contrast. Correction determines the alternate color which best preserves the original color. Contrast determines the color which best guarantees text legibility while preserving as much of the original hue as possible. Show-up-on-contrast makes a component visible when a related component does not have enough contrast to be legible. We describe SmartColor's architecture and illustrate the color strategies for various types of display content.
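
As an informal illustration of the contrast and show-up-on-contrast strategies (the correction strategy delegates to a profile-based search like the binned-profile sketch above), the snippet below uses a simple luminance difference as its legibility test. The threshold, step size, and luminance measure are assumptions for exposition, not SmartColor's actual model.

```python
def luminance(rgb):
    """Approximate relative luminance of an RGB color in [0, 1]."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(intended_rgb, background_rgb, min_delta=0.4, step=0.05):
    """'Contrast' strategy: nudge the color lighter or darker (roughly preserving hue)
    until it is legible against the background or the gamut limit is reached."""
    color = list(intended_rgb)
    direction = 1.0 if luminance(background_rgb) < 0.5 else -1.0
    while abs(luminance(color) - luminance(background_rgb)) < min_delta:
        color = [min(1.0, max(0.0, c + direction * step)) for c in color]
        if all(c in (0.0, 1.0) for c in color):   # hit the gamut limit; stop
            break
    return tuple(color)

def show_up_on_contrast(related_rgb, background_rgb, min_delta=0.4):
    """'Show-up-on-contrast' strategy: report whether a backup component (e.g. a text
    outline) should appear because the related component is illegible."""
    return abs(luminance(related_rgb) - luminance(background_rgb)) < min_delta

# Example: white text over a bright background gets darkened; a backup outline
# is shown when a related light-gray component would be illegible.
print(contrast((1.0, 1.0, 1.0), (0.9, 0.9, 0.9)))
print(show_up_on_contrast((0.85, 0.85, 0.85), (0.9, 0.9, 0.9)))
```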


Symposium on Spatial User Interaction | 2015

GyroWand: IMU-based Raycasting for Augmented Reality Head-Mounted Displays

Juan David Hincapié-Ramos; Kasım Özacar; Pourang Irani; Yoshifumi Kitamura

We present GyroWand, a raycasting technique for 3D interactions in self-contained augmented reality (AR) head-mounted displays. Unlike traditional raycasting, which requires absolute spatial and rotational tracking of a user's hand or controller to direct the ray, GyroWand relies on the relative rotation values captured by an inertial measurement unit (IMU) on a handheld controller. These values cannot be directly mapped to the ray direction due to the phenomenon of sensor drift and the mismatch between the orientations of the physical controller and the virtual content. To address these challenges, GyroWand 1) interprets the relative rotational values using a state machine which includes an anchor, an active, an out-of-sight, and a disambiguation state; 2) handles drift by resetting the default rotation when the user moves between the anchor and active states; 3) does not initiate raycasting from the user's hand, but rather from other spatial coordinates (e.g. chin, shoulder, or chest); and 4) provides three new disambiguation mechanisms: Lock&Twist, Lock&Drag, and AutoTwist. In a series of controlled user studies we evaluated the performance and convenience of different GyroWand design parameters. Results show that a ray originating from the user's chin facilitates selection. Results also show that Lock&Twist is faster and more accurate than other disambiguation mechanisms. We conclude with a summary of the lessons learned for the adoption of raycasting in mobile augmented reality head-mounted displays.
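
The state machine described above can be sketched informally as follows. The state names follow the abstract, but the yaw/pitch ray model, the field-of-view test, and the event names are simplified assumptions rather than GyroWand's implementation.

```python
import math
from enum import Enum, auto

class State(Enum):
    ANCHOR = auto()          # ray parked; IMU rotations are ignored
    ACTIVE = auto()          # ray steered by rotation relative to a reference
    OUT_OF_SIGHT = auto()    # ray has left the display's field of view
    DISAMBIGUATION = auto()  # resolving which of several intersected targets is meant

class GyroWand:
    def __init__(self, origin="chin"):
        self.origin = origin             # ray starts at a body-anchored point, not the hand
        self.state = State.ANCHOR
        self.ref_yaw = self.ref_pitch = 0.0

    def activate(self, imu_yaw, imu_pitch):
        """ANCHOR -> ACTIVE: the current IMU reading becomes the zero reference,
        which also discards any drift accumulated while anchored."""
        self.ref_yaw, self.ref_pitch = imu_yaw, imu_pitch
        self.state = State.ACTIVE

    def ray_direction(self, imu_yaw, imu_pitch, fov_deg=40.0):
        """Map the *relative* rotation to a ray direction; leave ACTIVE if it exits the FOV."""
        yaw, pitch = imu_yaw - self.ref_yaw, imu_pitch - self.ref_pitch
        if self.state is State.ACTIVE and max(abs(yaw), abs(pitch)) > fov_deg / 2:
            self.state = State.OUT_OF_SIGHT
        y, p = math.radians(yaw), math.radians(pitch)
        return (math.sin(y) * math.cos(p), math.sin(p), math.cos(y) * math.cos(p))

    def on_multiple_hits(self):
        """Several targets intersected: switch to a disambiguation mechanism
        (e.g. a Lock&Twist-style refinement) before confirming the selection."""
        if self.state is State.ACTIVE:
            self.state = State.DISAMBIGUATION

# Example: activate, point 10 degrees right of the reference, then hit two targets at once.
wand = GyroWand(origin="chin")
wand.activate(imu_yaw=3.0, imu_pitch=0.0)      # drift so far is absorbed into the reference
print(wand.ray_direction(imu_yaw=13.0, imu_pitch=0.0))
wand.on_multiple_hits()
print(wand.state)
```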


Designing Interactive Systems | 2014

The consumed endurance workbench: a tool to assess arm fatigue during mid-air interactions

Juan David Hincapié-Ramos; Xiang Guo; Pourang Irani

Consumed Endurance (CE) [8] is a metric that captures the degree of arm fatigue during mid-air interactions. Research has shown that CE can assist with the design of new and minimally fatiguing gestural interfaces. We introduce the Consumed Endurance Workbench, an open source application that calculates CE in real time using an off-the-shelf skeleton tracking system. The CE Workbench tracks a person's arm as it is moved in mid-air, determining the forces involved and calculating CE over the length of the interaction. Our demonstration focuses on how to use the CE Workbench to evaluate alternative mid-air gesture designs, how to integrate the CE Workbench with existing applications, and how to prepare the CE data for statistical analysis. We also demonstrate a mid-air text-entry layout, SEATO, which we created taking CE as the main design factor.


IEEE Transactions on Visualization and Computer Graphics | 2015

SmartColor: Real-Time Color and Contrast Correction for Optical See-Through Head-Mounted Displays

Juan David Hincapié-Ramos; Levko Ivanchuk; Srikanth Kirshnamachari Sridharan; Pourang Irani

Users of optical see-through head-mounted displays (OHMDs) perceive color as a blend of the display color and the background. Color blending is a major usability challenge as it leads to loss of color encodings and poor text legibility. Color correction aims at mitigating color blending by producing an alternative color which, when blended with the background, more closely approximates the color originally intended. In this paper we present an end-to-end approach to the color blending problem, addressing the distortions introduced by the transparent material of the display efficiently and in real time. We also present a user evaluation of correction efficiency. Finally, we present a graphics library called SmartColor showcasing the use of color correction for different types of display content. SmartColor uses color correction to provide three management strategies: correction, contrast, and show-up-on-contrast. Correction determines the alternate color which best preserves the original color. Contrast determines the color which best supports text legibility while preserving as much of the original hue as possible. Show-up-on-contrast makes a component visible when a related component does not have enough contrast to be legible. We describe SmartColor's architecture and illustrate the color strategies for various types of display content.

Collaboration


Dive into Juan David Hincapié-Ramos's collaborations.

Top Co-Authors

Xiang Guo, University of Manitoba
Barrett Ens, University of Manitoba
Raimund Dachselt, Dresden University of Technology
Sophie Roscher, Otto-von-Guericke University Magdeburg
Ulrike Kister, Dresden University of Technology