Publication


Featured research published by Ke Huo.


Tangible and Embedded Interaction | 2015

TIMMi: Finger-worn Textile Input Device with Multimodal Sensing in Mobile Interaction

Sang Ho Yoon; Ke Huo; Vinh P. Nguyen; Karthik Ramani

We introduce TIMMi, a textile input device for mobile interactions. TIMMi is worn on the index finger to provide a multimodal sensing input metaphor. The prototype is fabricated on a single layer of textile on which conductive silicone rubber is painted and conductive threads are stitched. The sensing area comprises three equally spaced dots and a separate wide line. Strain and pressure values are extracted from the line and the three dots, respectively, via voltage dividers. Regression analysis is performed to model the relationship between sensing values and finger pressure and bending. Multi-level thresholding is applied to capture different levels of finger bending and pressure, and a temporal position tracking algorithm captures swipe gestures. In this preliminary study, we demonstrate TIMMi as a finger-worn input device with two applications: controlling a music player and interacting with smartglasses.
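As a rough illustration of the readout chain described above, the sketch below recovers a sensing element's resistance from a voltage-divider output and quantizes normalized readings into discrete levels. The supply voltage, fixed resistance, and threshold values here are invented for illustration and are not TIMMi's actual calibration.

```python
# Minimal sketch of voltage-divider readout plus multi-level
# thresholding. All constants are illustrative assumptions.

V_SUPPLY = 3.3        # assumed supply voltage (volts)
R_FIXED = 10_000.0    # assumed fixed resistor in each divider (ohms)

def sensor_resistance(v_out: float) -> float:
    """Recover the sensing element's resistance from the divider output."""
    # Divider relation: v_out = V_SUPPLY * R_sensor / (R_sensor + R_FIXED)
    return R_FIXED * v_out / (V_SUPPLY - v_out)

def quantize(value: float, thresholds: list[float]) -> int:
    """Map a continuous normalized reading to a discrete level (0 = rest)."""
    level = 0
    for t in thresholds:
        if value >= t:
            level += 1
    return level

# Illustrative thresholds for two bend levels and two pressure levels.
BEND_THRESHOLDS = [0.4, 0.8]
PRESS_THRESHOLDS = [0.3, 0.7]

print(quantize(0.65, BEND_THRESHOLDS))    # -> 1 (light bend)
print(quantize(0.75, PRESS_THRESHOLDS))   # -> 2 (firm press)
```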


User Interface Software and Technology | 2016

TRing: Instant and Customizable Interactions with Objects Using an Embedded Magnet and a Finger-Worn Device

Sang Ho Yoon; Yunbo Zhang; Ke Huo; Karthik Ramani

We present TRing, a finger-worn input device which provides instant and customizable interactions. TRing offers a novel method for making plain objects interactive using an embedded magnet and a finger-worn device. With a particle-filter-integrated magnetic sensing technique, we compute the fingertip's position relative to the embedded magnet. We also offer a magnet placement algorithm that guides the magnet installation location based upon the user's interface customization. By simply inserting or attaching a small magnet, we bring interactivity to both fabricated and existing objects. In our evaluations, TRing shows an average tracking error of 8.6 mm in 3D space and a 2D targeting error of 4.96 mm, which are sufficient for implementing average-sized conventional controls such as buttons and sliders. A user study validates the input performance with TRing on a targeting task (92% accuracy within 45 mm distance) and a cursor control task (91% accuracy for a 10 mm target). Furthermore, we show examples that highlight the interaction capability of our approach.
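The sketch below shows one plausible shape for such particle-filter magnetic tracking: particles over candidate fingertip positions are weighted by how well a point-dipole field model explains a magnetometer reading. The dipole moment, noise scales, and particle count are assumptions, and the paper's actual pipeline may differ.

```python
# Illustrative particle filter for magnet-relative position tracking.
import numpy as np

def dipole_field(pos: np.ndarray, moment: np.ndarray) -> np.ndarray:
    """Magnetic flux density of a point dipole at offset `pos` (tesla)."""
    mu0 = 4e-7 * np.pi
    r = np.linalg.norm(pos)
    r_hat = pos / r
    return mu0 / (4 * np.pi * r**3) * (3 * r_hat * np.dot(moment, r_hat) - moment)

rng = np.random.default_rng(0)
N = 2000
particles = rng.uniform(-0.05, 0.05, size=(N, 3))  # candidate positions (m)
weights = np.full(N, 1.0 / N)
moment = np.array([0.0, 0.0, 0.1])                 # assumed dipole moment

def step(measurement: np.ndarray, sigma: float = 2e-6) -> np.ndarray:
    """One predict-weight-resample cycle; returns the position estimate."""
    global particles, weights
    # Predict: random-walk motion model for the fingertip.
    particles += rng.normal(scale=1e-3, size=particles.shape)
    # Weight: Gaussian likelihood of the magnetometer reading.
    predicted = np.array([dipole_field(p, moment) for p in particles])
    err = np.linalg.norm(predicted - measurement, axis=1)
    weights = np.exp(-0.5 * (err / sigma) ** 2) + 1e-300
    weights /= weights.sum()
    # Resample proportionally to weight.
    idx = rng.choice(N, size=N, p=weights)
    particles = particles[idx]
    return particles.mean(axis=0)
```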


User Interface Software and Technology | 2017

iSoft: A Customizable Soft Sensor with Real-time Continuous Contact and Stretching Sensing

Sang Ho Yoon; Ke Huo; Yunbo Zhang; Guiming Chen; Luis Paredes; Subramanian Chidambaram; Karthik Ramani

We present iSoft, a single-volume soft sensor capable of sensing real-time continuous contact and unidirectional stretching. We propose a low-cost and easy way to fabricate such piezoresistive elastomer-based soft sensors for instant interactions. We employ an electrical impedance tomography (EIT) technique to estimate changes in the resistance distribution on the sensor caused by fingertip contact. To compensate for the rebound elasticity of the elastomer and achieve real-time continuous contact sensing, we apply a dynamic baseline update for EIT, with baseline updates triggered by fingertip contact and movement detections. Further, we support unidirectional stretching sensing using a model-based approach which works separately from continuous contact sensing. We also provide a software toolkit for users to design and deploy personalized interfaces with customized sensors. Through a series of experiments and evaluations, we validate the performance of contact and stretching sensing, and through example applications we show the variety of interactions iSoft enables.
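A minimal sketch of the dynamic-baseline idea, assuming a reconstructed EIT frame arrives as a NumPy array: contact is read as deviation from a baseline, and the baseline is slowly refreshed so that the elastomer's slow rebound is absorbed rather than reported as touch. The threshold, smoothing factor, and the no-contact update trigger are illustrative simplifications of the event-triggered scheme described in the paper.

```python
# Hedged sketch of a dynamic baseline update for EIT contact sensing.
import numpy as np

class DynamicBaseline:
    def __init__(self, first_frame: np.ndarray, alpha: float = 0.1,
                 contact_thresh: float = 0.05):
        self.baseline = first_frame.astype(float).copy()
        self.alpha = alpha                  # assumed smoothing factor
        self.contact_thresh = contact_thresh  # assumed contact threshold

    def process(self, frame: np.ndarray) -> np.ndarray:
        """Return the baseline-subtracted contact map for one EIT frame."""
        diff = frame - self.baseline
        contact = np.abs(diff).max() > self.contact_thresh
        if not contact:
            # No touch detected: fold the frame into the baseline so the
            # elastomer's slow rebound is not reported as contact.
            self.baseline += self.alpha * diff
        return diff
```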


Tangible and Embedded Interaction | 2017

Window-Shaping: 3D Design Ideation by Creating on, Borrowing from, and Looking at the Physical World

Ke Huo; Vinayak; Karthik Ramani

We present Window-Shaping, a tangible mixed-reality (MR) interaction metaphor for design ideation that allows for the direct creation of 3D shapes on and around physical objects. Using the sketch-and-inflate scheme, our metaphor enables quick design of dimensionally consistent and visually coherent 3D models by borrowing visual and dimensional attributes from existing physical objects without the need for 3D reconstruction or fiducial markers. Through a preliminary evaluation of our prototype application, we demonstrate the expressiveness provided by our design workflow, the effectiveness of our interaction scheme, and the potential of our metaphor.
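As a toy illustration of a sketch-and-inflate style scheme (a generic inflation heuristic, not the paper's exact formulation), the snippet below lifts a closed 2D sketch region into a rounded height field by mapping each interior pixel's distance from the boundary to a height.

```python
# Toy "inflate a closed sketch" heuristic using a distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt

def inflate(mask: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Return a height field for a boolean sketch-interior mask."""
    d = distance_transform_edt(mask)   # distance to the sketch boundary
    return scale * np.sqrt(d)          # rounded, dome-like profile

# Example: inflate a filled circle of radius 20 on a 64x64 canvas.
yy, xx = np.mgrid[:64, :64]
mask = (xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2
height = inflate(mask)
print(height.max())
```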


Pervasive and Mobile Computing | 2016

Wearable textile input device with multimodal sensing for eyes-free mobile interaction during daily activities

Sang Ho Yoon; Ke Huo; Karthik Ramani

As pervasive computing becomes widely available during daily activities, wearable input devices that promote eyes-free interaction are needed for easy access and safety. We propose a textile wearable device which enables multimodal sensing input for eyes-free mobile interaction during daily activities. Although existing input devices possess multimodal sensing capabilities in a small form factor, they still suffer from deficiencies in compactness and softness due to the nature of the embedded materials and components. For our prototype, we paint conductive silicone rubber onto a single layer of textile and stitch conductive threads into it. From this single layer, multimodal sensing (strain and pressure) values are extracted via voltage dividers. Regression analysis, multi-level thresholding, and a temporal position tracking algorithm are applied to capture the different levels and modes of finger interaction that support the input taxonomy. We then demonstrate example applications with interaction designs allowing users to control existing mobile, wearable, and digital devices. The evaluation results confirm that the prototype achieves an accuracy of ≥80% across all input types and ≥88% for locating the specific interaction areas for eyes-free interaction, and that it is robust to motions from daily activities. A multitasking study reveals that our prototype supports relatively fast responses with low perceived workload compared to existing eyes-free inputs.
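The temporal position tracking mentioned above can be pictured as follows: per-frame activations of the sensing areas are buffered, and an ordered sequence within a short window is reported as a swipe. The dot indices, pressure threshold, and time window below are invented for illustration.

```python
# Minimal sketch of swipe detection across three sensing dots.
import time

ACTIVE_THRESH = 0.5   # assumed normalized pressure threshold
SWIPE_WINDOW = 0.6    # assumed seconds allowed for a full swipe

history: list[tuple[float, int]] = []   # (timestamp, dot index) events

def feed(pressures: list[float]) -> str | None:
    """Feed one frame of three dot pressures; return 'left'/'right' on swipe."""
    now = time.monotonic()
    active = [i for i, p in enumerate(pressures) if p > ACTIVE_THRESH]
    if active:
        history.append((now, active[0]))
    # Drop events older than the swipe window.
    while history and now - history[0][0] > SWIPE_WINDOW:
        history.pop(0)
    seq = [dot for _, dot in history]
    if seq[-3:] == [0, 1, 2]:
        history.clear()
        return "right"
    if seq[-3:] == [2, 1, 0]:
        history.clear()
        return "left"
    return None
```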


Ubiquitous Computing | 2014

Plex: finger-worn textile sensor for mobile interaction during activities

Sang Ho Yoon; Ke Huo; Karthik Ramani

We present Plex, a finger-worn textile sensor for eyes-free mobile interaction during daily activities. Although existing products like data gloves possess multiple sensing capabilities, they are not designed for environments where body and finger motion are dynamic. An interaction with the fingers usually couples bending and pressing. In Plex, we separate bending and pressing by placing each sensing element on a discrete face of the finger. Our simple, low-cost fabrication process using conductive elastomers and threads transforms an elastic fabric into a finger-worn interaction tool. Plex preserves natural inter-finger tactile feedback and proprioception. We also explore the interaction design and implement applications allowing users to interact with existing mobile and wearable devices using Plex.


Intelligent Robots and Systems | 2014

HexaMorph: A reconfigurable and foldable hexapod robot inspired by origami

Wei Gao; Ke Huo; Jasjeet Singh Seehra; Karthik Ramani; Raymond J. Cipra

Origami affords the creation of diverse 3D objects through explicit folding processes from 2D sheets of material. Originating as a paper craft in the 17th century, origami designs reveal the rudimentary characteristics of sheet folding: it is lightweight, inexpensive, compact, and combinatorial. In this paper, we present "HexaMorph", a novel starfish-like hexapod robot designed for modularity, foldability, and reconfigurability. Our folding scheme encompasses periodic foldable tetrahedral units, called "Basic Structural Units" (BSUs), for constructing a family of closed-loop spatial mechanisms and robotic forms. The proposed hexapod robot is fabricated from single sheets of cardboard. The electronic and battery components for actuation can be preassembled on the flattened crease-cut pattern and are enclosed inside when the tetrahedral modules are folded. We investigate the self-deploying characteristic and the mobility of the robot, and discuss motion planning and control strategies for its squirming locomotion. Our design and folding paradigm provides a novel approach for building reconfigurable robots from a range of lightweight foldable sheets.


Symposium on Spatial User Interaction | 2016

Window-Shaping: 3D Design Ideation in Mixed Reality

Ke Huo; Vinayak; Karthik Ramani

We present Window-Shaping, a mobile, markerless, mixed-reality (MR) interface for creative design ideation that allows for the direct creation of 3D shapes on and around physical objects. Using the sketch-and-inflate scheme, we present a design workflow where users can create dimensionally consistent and visually coherent 3D models by borrowing visual and dimensional attributes from existing physical objects.


Human Factors in Computing Systems | 2018

Scenariot: Spatially Mapping Smart Things Within Augmented Reality Scenes

Ke Huo; Yuanzhi Cao; Sang Ho Yoon; Zhuangying Xu; Guiming Chen; Karthik Ramani

Emerging simultaneous localization and mapping (SLAM) based tracking techniques give mobile AR devices spatial awareness of the physical world. Smart things, however, are not yet fully incorporated into this spatial awareness in AR. We therefore present Scenariot, a method that enables instant discovery and localization of surrounding smart things while spatially registering them with a SLAM-based mobile AR system. By exploiting the spatial relationships between mobile AR systems and smart things, Scenariot fosters in-situ interactions with connected devices. We embed Ultra-Wide Band (UWB) RF units into the AR device and the controllers of the smart things, which allows for measuring the distances between them. With a one-time initial calibration, users localize multiple IoT devices and map them within the AR scenes. Through a series of experiments and evaluations, we validate the localization accuracy as well as the performance of the enabled spatially aware interactions. Further, we demonstrate various use cases through Scenariot.
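The one-time calibration can be pictured as a multilateration problem: the AR device, localized by SLAM, records its own positions together with UWB range measurements to a smart thing, then solves for the thing's position by least squares. This is a hedged sketch of that idea; the poses, noise handling, and solver choice are assumptions, not the paper's exact implementation.

```python
# Multilateration sketch: locate a fixed UWB node from ranges measured
# at known (SLAM-tracked) device positions.
import numpy as np
from scipy.optimize import least_squares

def locate(device_positions: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Estimate a fixed node's 3D position from ranges at known positions."""
    def residuals(p):
        return np.linalg.norm(device_positions - p, axis=1) - ranges
    x0 = device_positions.mean(axis=0)   # start near the measurements
    return least_squares(residuals, x0).x

# Example: simulate ranges to a node at (2, 1, 0.5) from four poses.
truth = np.array([2.0, 1.0, 0.5])
poses = np.array([[0, 0, 0], [3, 0, 0], [0, 3, 0], [1, 1, 2]], float)
ranges = np.linalg.norm(poses - truth, axis=1)
print(locate(poses, ranges))   # ~= [2.0, 1.0, 0.5]
```

With four or more well-spread poses the problem is well conditioned; noisy real-world ranges simply call for more measurements in the least-squares fit.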


Tangible and Embedded Interaction | 2018

Ani-Bot: A Modular Robotics System Supporting Creation, Tweaking, and Usage with Mixed-Reality Interactions

Yuanzhi Cao; Zhuangying Xu; Terrell Glenn; Ke Huo; Karthik Ramani

Ani-Bot is a modular robotics system that allows users to control their DIY robots using Mixed-Reality Interaction (MRI). The system takes advantage of MRI to let users visually program the robot through the augmented view of a Head-Mounted Display (HMD). In this paper, we first explain the design of the Mixed-Reality (MR) ready modular robotics system, which allows users to perform MRI as soon as they finish assembling the robot. We then elaborate on the augmentations provided by the MR system in the three primary phases of a construction kit's lifecycle: Creation, Tweaking, and Usage. Finally, we demonstrate Ani-Bot with four application examples and evaluate the system with a two-session user study. The results of our evaluation indicate that Ani-Bot successfully embeds MRI into the lifecycle (Creation, Tweaking, Usage) of DIY robotics and shows strong potential for delivering an enhanced user experience.
