Featured Research

Human Computer Interaction

Augment Yourself: Mixed Reality Self-Augmentation Using Optical See-through Head-mounted Displays and Physical Mirrors

Optical see-through head-mounted displays (OST HMDs) are one of the key technologies for merging virtual objects and physical scenes to provide an immersive mixed reality (MR) environment to the user. A fundamental limitation of HMDs is that the users themselves cannot conveniently be augmented: in a casual posture, only the distal upper extremities lie within the HMD's field of view. Consequently, most MR applications that are centered around the user, such as virtual dressing rooms or learning of body movements, cannot be realized with HMDs. In this paper, we propose a novel concept and prototype system that combines OST HMDs and physical mirrors to enable self-augmentation and provide an immersive MR environment centered around the user. Our system, to the best of our knowledge the first of its kind, estimates the user's pose in the virtual image generated by the mirror using an RGBD camera attached to the HMD, and anchors virtual objects to the reflection rather than to the user directly. We evaluate our system quantitatively with respect to calibration accuracy and infrared signal degradation caused by the mirror, and show its potential in applications where large mirrors are already an integral part of the facility. In particular, we demonstrate its use for virtual fitting rooms, gaming applications, anatomy learning, and personal fitness. In contrast to competing devices such as LCD-equipped smart mirrors, the proposed system consists only of an HMD with an RGBD camera and thus does not require a prepared environment, making it very flexible and generic. In future work, we aim to investigate how the system can best support physical rehabilitation and personal training as promising applications.
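
The core geometric relationship such a system relies on is that the mirror image of the user is the reflection of the user's body across the mirror plane. Below is a minimal sketch of that mapping, assuming the mirror plane is known from calibration; the function and point names are illustrative and not taken from the paper.

```python
import numpy as np

def reflect_across_mirror(points, plane_point, plane_normal):
    """Reflect 3D points (e.g., skeleton joints detected in the mirror
    image) across a mirror plane given by a point and a normal.

    points: (N, 3) array in the HMD/RGBD camera frame.
    plane_point: a 3D point on the mirror plane (from calibration).
    plane_normal: normal vector of the mirror plane.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    # Signed distance of each point from the mirror plane.
    d = (points - plane_point) @ n
    # Householder-style reflection: map each point to the other side.
    return points - 2.0 * d[:, None] * n

# Toy example: a joint that appears 0.5 m "behind" the mirror (in the
# reflection) maps to the corresponding point on the user's body.
mirror_point = np.array([0.0, 0.0, 2.0])    # mirror 2 m in front of camera
mirror_normal = np.array([0.0, 0.0, 1.0])
joints_in_reflection = np.array([[0.1, -0.3, 2.5]])
print(reflect_across_mirror(joints_in_reflection, mirror_point, mirror_normal))
# -> [[ 0.1 -0.3  1.5]]
```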

Read more
Human Computer Interaction

Augmented Reality Chess Analyzer (ARChessAnalyzer): In-Device Inference of Physical Chess Game Positions through Board Segmentation and Piece Recognition using Convolutional Neural Network

Chess game position analysis is important in improving one's game, but it requires entering moves into a chess engine, which is cumbersome and error-prone. We present ARChessAnalyzer, a complete pipeline from live image capture of a physical chess game, to board and piece recognition, to move analysis, and finally to an Augmented Reality (AR) overlay of the chess diagram position and move on the physical board. ARChessAnalyzer works like a scene analyzer: it uses an ensemble of traditional image-processing and computer vision techniques to segment the scene (i.e., the chess game) and Convolutional Neural Networks (CNNs) to classify the segmented pieces, combining the results to analyze the game. This paper advances the state of the art with a first-of-its-kind end-to-end integration of robust board detection and segmentation, chess piece recognition using a fine-tuned AlexNet CNN, and a chess engine analyzer in a handheld device app. The entire chess position prediction pipeline achieves 93.45% accuracy and takes 3-4.5 s from live capture to AR overlay. We also validated our hypothesis that ARChessAnalyzer is faster at analysis than manual entry for all board positions with valid outcomes. Our hope is that the instantaneous feedback this app provides will help chess learners at all levels worldwide improve their game.
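
Once each square has been classified, the recognized position must be handed to the chess engine, and standard engines consume positions in FEN notation. Here is a minimal sketch of that assembly step, assuming an 8x8 grid of per-square labels from the piece classifier; the paper does not spell out its internal encoding, so this is illustrative only.

```python
def grid_to_fen(grid, side_to_move="w"):
    """Assemble a FEN string from an 8x8 grid of per-square labels,
    as a CNN piece classifier might produce.

    grid[0] is rank 8 (top of the board from White's perspective);
    each label is a FEN piece letter ('K', 'q', ...) or '' for empty.
    """
    ranks = []
    for row in grid:
        fen_row, empty = "", 0
        for label in row:
            if label == "":
                empty += 1
            else:
                if empty:
                    fen_row += str(empty)
                    empty = 0
                fen_row += label
        if empty:
            fen_row += str(empty)
        ranks.append(fen_row)
    # Castling and en-passant rights are unknown from a single image.
    return "/".join(ranks) + f" {side_to_move} - - 0 1"

# Toy example: the starting position.
start = [list("rnbqkbnr"), ["p"] * 8] + [[""] * 8] * 4 + [["P"] * 8, list("RNBQKBNR")]
assert grid_to_fen(start).startswith(
    "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w")
```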

Read more
Human Computer Interaction

Augmented Reality-Based Advanced Driver-Assistance System for Connected Vehicles

With the development of advanced communication technology, connected vehicles have become increasingly popular in our transportation systems; they can conduct cooperative maneuvers with each other, as well as with road entities, through vehicle-to-everything communication. Considerable research interest has been drawn to the other building blocks of a connected vehicle system, such as communication, planning, and control. However, fewer studies have focused on human-machine cooperation and the interface, namely how to visualize guidance information for the driver as an advanced driver-assistance system (ADAS). In this study, we propose an augmented reality (AR)-based ADAS that visualizes guidance information calculated cooperatively by multiple connected vehicles. An unsignalized intersection scenario is adopted as the use case of this system, where the driver can drive the connected vehicle across the intersection under AR guidance without coming to a full stop. A simulation environment is built in the Unity game engine based on the road network of San Francisco, and human-in-the-loop (HITL) simulation is conducted to validate the effectiveness of our proposed system with respect to travel time and energy consumption.
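
One concrete quantity such an AR overlay can render is an advisory speed that lets the vehicle arrive at the intersection exactly at its cooperatively assigned time slot. A minimal sketch under that assumption follows; the paper's actual cooperative maneuver planning is more involved, and all names and limits here are illustrative.

```python
def advisory_speed(dist_to_intersection_m, slot_start_s, now_s,
                   v_min_ms=2.0, v_max_ms=16.7):
    """Compute a speed the AR overlay could advise so the vehicle
    reaches the intersection at its assigned time slot, avoiding a
    full stop.

    Returns a speed clipped to [v_min, v_max], or None if the slot
    has already passed and a new slot must be negotiated.
    """
    dt = slot_start_s - now_s
    if dt <= 0:
        return None
    v = dist_to_intersection_m / dt
    return max(v_min_ms, min(v, v_max_ms))

# Toy example: 150 m from the intersection, slot opens in 12 s.
print(advisory_speed(150.0, slot_start_s=12.0, now_s=0.0))  # 12.5 m/s
```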

Read more
Human Computer Interaction

Augmenting Scientific Papers with Just-in-Time, Position-Sensitive Definitions of Terms and Symbols

Despite the central importance of research papers to scientific progress, they can be difficult to read. Comprehension is often stymied when the information needed to understand a passage resides somewhere else: in another section, or in another paper. In this work, we envision how interfaces can bring definitions of technical terms and symbols to readers when and where they need them most. We introduce ScholarPhi, an augmented reading interface with four novel features: (1) tooltips that surface position-sensitive definitions from elsewhere in a paper, (2) a filter over the paper that "declutters" it to reveal how the term or symbol is used across the paper, (3) automatic equation diagrams that expose multiple definitions in parallel, and (4) an automatically generated glossary of important terms and symbols. A usability study showed that the tool helps researchers of all experience levels read papers. Furthermore, researchers were eager to have ScholarPhi's definitions available to support their everyday reading.
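
Position sensitivity matters because the same symbol can be redefined as a paper progresses, so the tooltip should show the definition in effect where the reader currently is. Below is a minimal sketch of that lookup, assuming definition sites have already been extracted; ScholarPhi's actual extraction is considerably more sophisticated, and the names here are illustrative.

```python
import bisect

def position_sensitive_definition(defs, position):
    """Return the definition in effect at a given reading position.

    defs: list of (position, definition) pairs sorted by position,
    e.g. character offsets where a symbol is (re)defined. A symbol
    like "w" can be redefined section by section, so the tooltip
    should surface the latest definition *before* the reader.
    """
    positions = [p for p, _ in defs]
    i = bisect.bisect_right(positions, position) - 1
    return defs[i][1] if i >= 0 else None

w_defs = [(120, "w: window size"), (980, "w: weight vector")]
print(position_sensitive_definition(w_defs, 500))   # w: window size
print(position_sensitive_definition(w_defs, 2000))  # w: weight vector
```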

Read more
Human Computer Interaction

Augmenting Sheet Music with Rhythmic Fingerprints

In this paper, we bridge the gap between visualization and musicology by focusing on rhythm analysis tasks, which are tedious due to the complex visual encoding of the well-established Common Music Notation (CMN). Instead of replacing the CMN, we augment sheet music with rhythmic fingerprints to mitigate the complexity originating from the simultaneous encoding of musical features. The proposed visual design exploits music theory concepts such as the rhythm tree to facilitate the understanding of rhythmic information. Juxtaposing sheet music and the rhythmic fingerprints maintains the connection to the familiar representation. To investigate the usefulness of the rhythmic fingerprint design for identifying and comparing rhythmic patterns, we conducted a controlled user study with four experts and four novices. The results show that the rhythmic fingerprints enable novice users to recognize rhythmic patterns that only experts can identify using non-augmented sheet music.
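
The underlying idea, separating rhythm from the simultaneous encoding of pitch and other features, can be illustrated by reducing each measure to a rhythm-only representation and counting recurrences. This is only a stand-in sketch: the paper's actual fingerprint is a visual design built on the rhythm tree, and the representation below is an assumption for illustration.

```python
from collections import Counter
from fractions import Fraction

def rhythmic_fingerprint(durations):
    """Reduce one measure to a rhythm-only fingerprint: note durations
    normalized by the measure length, ignoring pitch entirely.
    """
    total = sum(durations, Fraction(0))
    return tuple(d / total for d in durations)

measures = [
    [Fraction(1, 4)] * 4,                           # four quarter notes
    [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)],
    [Fraction(1, 4)] * 4,                           # the pattern recurs
]
counts = Counter(rhythmic_fingerprint(m) for m in measures)
for fp, n in counts.items():
    print(n, fp)  # recurring fingerprints reveal rhythmic patterns
```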

Read more
Human Computer Interaction

Augmentix -- An Augmented Reality System for Asymmetric Teleteaching

Using augmented reality in education is already a common concept, as it has the potential to turn learning into a motivating experience. However, current research only covers the students' side of learning; almost no research focuses on the teacher's side and on whether augmented reality could improve the workflow of teaching students. Many researchers also do not differentiate between multiple user roles, such as student and teacher. To enable investigation into these gaps, we present the teaching system "Augmentix", which differentiates between the two user roles "teacher" and "student" to potentially enhance the teacher's workflow through augmented reality. In this system's setting, the student explores a virtual city in virtual reality while the teacher guides them using augmented reality.
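
A minimal sketch of what such role differentiation might look like in code, with the teacher authoring AR guidance that is rendered in the student's VR view; all types and names here are illustrative and not taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    TEACHER = "teacher"   # views the scene through augmented reality
    STUDENT = "student"   # explores the virtual city in virtual reality

@dataclass
class GuidanceHint:
    """A hint the teacher places in AR that is rendered for the student."""
    sender: Role
    target: Role
    position: tuple       # where in the virtual city the hint appears
    text: str

hint = GuidanceHint(Role.TEACHER, Role.STUDENT, (12.0, 0.0, -4.5),
                    "Look at the town hall to your left.")
print(f"{hint.sender.value} -> {hint.target.value}: {hint.text}")
```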

Read more
Human Computer Interaction

AutoDS: Towards Human-Centered Automation of Data Science

Data science (DS) projects often follow a lifecycle that consists of laborious tasks for data scientists and domain experts (e.g., data exploration and model training). Only recently have machine learning (ML) researchers developed promising automation techniques to aid data workers in these tasks. This paper introduces AutoDS, an automated machine learning (AutoML) system that aims to leverage the latest ML automation techniques to support data science projects. Data workers only need to upload their dataset; the system can then automatically suggest ML configurations, preprocess data, select algorithms, and train the model. These suggestions are presented to the user via a web-based graphical user interface and a notebook-based programming user interface. We studied AutoDS with 30 professional data scientists, where one group used AutoDS and the other did not, to complete a data science project. As expected, AutoDS improves productivity; yet surprisingly, we find that the models produced by the AutoDS group have higher quality and fewer errors, but lower human confidence scores. We reflect on these findings by presenting design implications for incorporating automation techniques into human work in the data science lifecycle.
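
A minimal sketch of the kind of automation described, trying several candidate algorithms on an uploaded dataset and suggesting the best-scoring one; the candidate set and scoring below are illustrative, not AutoDS's actual internals.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for the user's uploaded dataset.
X, y = load_breast_cancer(return_X_y=True)

# Candidate ML configurations the system could suggest from.
candidates = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Score each candidate by cross-validation and suggest the best.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```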

Read more
Human Computer Interaction

Automating Gamification Personalization: To the User and Beyond

Personalized gamification explores knowledge about users to tailor gamification designs, aiming to improve on one-size-fits-all gamification. The tailoring process should simultaneously consider user and contextual characteristics (e.g., the activity to be done and the geographic location), which leads to many occasions for tailoring. Consequently, tools for automating gamification personalization are needed. The problems that emerge are that which of those characteristics are relevant, and how to do such tailoring, remain open questions, and that the required automation tools are lacking. We tackled these problems in two steps. First, we conducted an exploratory study, collecting participants' opinions via survey on the game elements they consider most useful for different learning activity types (LAT). We then modeled these opinions with conditional decision trees to address the aforementioned tailoring process. Second, building on the first step, we implemented a recommender system that suggests personalized gamification designs (i.e., which game elements to use), addressing the problem of automating gamification personalization. Our findings i) present empirical evidence that LAT, geographic location, and other user characteristics affect users' preferences, ii) enable defining gamification designs tailored to user and contextual features simultaneously, and iii) provide technological aid for those interested in designing personalized gamification. The main implications are that demographics, game-related characteristics, geographic location, and the LAT to be done, as well as the interaction between different kinds of information (user and contextual characteristics), should be considered when defining gamification designs, and that personalizing gamification designs can be improved with the aid of our recommender system.
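
The two steps can be sketched with a standard decision tree standing in for the paper's conditional decision trees: fit a tree on surveyed preferences over user and contextual features, then query it as a recommender. The toy data, feature values, and game elements below are illustrative only.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Toy survey: preferred game element per activity type and location.
survey = pd.DataFrame({
    "activity": ["reading", "quiz", "quiz", "project", "reading", "project"],
    "location": ["BR", "BR", "US", "US", "US", "BR"],
    "element":  ["progress_bar", "points", "points",
                 "badges", "progress_bar", "badges"],
})

# Step 1: model stated preferences with a decision tree.
X = pd.get_dummies(survey[["activity", "location"]])
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, survey["element"])

# Step 2: use the fitted tree as a recommender.
def recommend(activity, location):
    row = pd.DataFrame({"activity": [activity], "location": [location]})
    row = pd.get_dummies(row).reindex(columns=X.columns, fill_value=0)
    return tree.predict(row)[0]

print(recommend("quiz", "BR"))  # e.g. "points"
```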

Read more
Human Computer Interaction

Back to the Future: Revisiting Mouse and Keyboard Interaction for HMD-based Immersive Analytics

With the rise of natural user interfaces, immersive analytics applications often focus on novel interaction modalities such as mid-air gestures, gaze, or tangible interaction, utilizing input devices such as depth sensors, touch screens, and eye trackers. At the same time, traditional input devices such as the physical keyboard and mouse are used to a lesser extent. We argue that, for certain work scenarios such as conducting analytic tasks in stationary desktop settings, it can be valuable to combine the benefits of novel and established input devices, as well as input modalities, to create productive immersive analytics environments.

Read more
Human Computer Interaction

Balancing simulation and gameplay -- applying game user research to LeukemiaSIM

A bioinformatics researcher and a game design researcher walk into a lab... This paper shares two case studies of a collaboration between a bioinformatics researcher who is developing a set of educational VR simulations for youth and a consultative game design researcher with a background in games user research (GUR) techniques who assesses and iteratively improves the player experience in the simulations. By introducing games-based player engagement strategies, the two researchers improve the (re)playability of these VR simulations to encourage greater player engagement and retention.

Read more
