
Publication


Featured research published by Koji Yatani.


user interface software and technology | 2010

Pen + touch = new tools

Ken Hinckley; Koji Yatani; Michel Pahud; Nicole Coddington; Jenny Rodenhouse; Andrew D. Wilson; Hrvoje Benko; Bill Buxton

We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Microsoft Surface application, centered on note-taking and scrapbooking of materials. Based on our explorations, we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen + touch yields new tools. This articulates how our system interprets unimodal pen, unimodal touch, and multimodal pen+touch inputs, respectively. For example, the user can hold a photo and drag off with the pen to create and place a copy; hold a photo and cross it in a freeform path with the pen to slice it in two; or hold selected photos and tap one with the pen to staple them all together. Touch thus unifies object selection with mode switching of the pen, while the muscular tension of holding touch serves as the glue that phrases together all the inputs into a unitary multimodal gesture. This helps the UI designer to avoid encumbrances such as physical buttons, persistent modes, or widgets that detract from the user's focus on the workspace.
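To make the "pen writes, touch manipulates, pen + touch yields new tools" division of labor concrete, here is a minimal sketch of how such input routing might look. The class, stroke kinds, and returned commands are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Stroke:
    kind: str                      # e.g. "drag_off", "crossing_path", "tap", "freehand"
    points: list = field(default_factory=list)

class PenTouchDispatcher:
    """Routes unimodal pen, unimodal touch, and combined pen + touch input."""

    def __init__(self):
        self.held_object: Optional[object] = None   # object currently held by a finger

    def on_touch_down(self, obj) -> None:
        # Unimodal touch: holding or dragging manipulates the object.
        self.held_object = obj

    def on_touch_up(self) -> None:
        self.held_object = None

    def on_pen_stroke(self, stroke: Stroke):
        if self.held_object is None:
            return ("ink", stroke)                        # unimodal pen: the pen writes
        # Multimodal pen + touch: the held object phrases a new tool.
        if stroke.kind == "drag_off":
            return ("copy", self.held_object)             # hold + drag off -> place a copy
        if stroke.kind == "crossing_path":
            return ("slice", self.held_object, stroke)    # hold + cross -> cut in two
        if stroke.kind == "tap":
            return ("staple", self.held_object)           # hold + tap -> group items
        return ("ink", stroke)

# Example: holding a photo turns a pen tap into a "staple" command.
d = PenTouchDispatcher()
d.on_touch_down("photo_1")
print(d.on_pen_stroke(Stroke(kind="tap")))   # ('staple', 'photo_1')
```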


user interface software and technology | 2009

SemFeel: a user interface with semantic tactile feedback for mobile touch-screen devices

Koji Yatani; Khai N. Truong

One of the challenges with using mobile touch-screen devices is that they do not provide tactile feedback to the user. Thus, the user is required to look at the screen to interact with these devices. In this paper, we present SemFeel, a tactile feedback system which informs the user about the presence of an object where she touches on the screen and can offer additional semantic information about that item. Through multiple vibration motors that we attached to the backside of a mobile touch-screen device, SemFeel can generate different patterns of vibration, such as ones that flow from right to left or from top to bottom, to help the user interact with a mobile device. Through two user studies, we show that users can distinguish ten different patterns, including linear patterns and a circular pattern, at approximately 90% accuracy, and that SemFeel supports accurate eyes-free interactions.
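As a rough illustration of the directional vibration patterns described above, the sketch below pulses a hypothetical set of motors one after another so the vibration appears to move across the device. The motor layout, timings, and the set_motor driver call are assumptions, not SemFeel's actual hardware interface.

```python
import time

# Hypothetical layout: five motors on the back of the device.
MOTORS = {"left": 0, "right": 1, "top": 2, "bottom": 3, "centre": 4}

def set_motor(index: int, on: bool) -> None:
    """Placeholder for the real hardware/driver call."""
    print(f"motor {index} {'on' if on else 'off'}")

def pulse(name: str, duration_s: float = 0.15) -> None:
    idx = MOTORS[name]
    set_motor(idx, True)
    time.sleep(duration_s)
    set_motor(idx, False)

def flow(sequence: list) -> None:
    """Activate motors in order so the vibration seems to travel across the device."""
    for name in sequence:
        pulse(name)

# A right-to-left flow and a circular pattern, as mentioned in the abstract.
flow(["right", "centre", "left"])
flow(["top", "right", "bottom", "left"])
```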


human factors in computing systems | 2008

Escape: a target selection technique using visually-cued gestures

Koji Yatani; Kurt Partridge; Marshall W. Bern; Mark W. Newman

Many mobile devices have touch-sensitive screens that people interact with using fingers or thumbs. However, such interaction is difficult because targets become occluded, and because fingers and thumbs have low input resolution. Recent research has addressed occlusion through visual techniques. However, the poor resolution of finger and thumb selection still limits selection speed. In this paper, we address the selection speed problem through a new target selection technique called Escape. In Escape, targets are selected by gestures cued by icon position and appearance. A user study shows that for targets six to twelve pixels wide, Escape performs at a similar error rate and at least 30% faster than Shift, an alternative technique, on a similar task. We evaluate Escape's performance in different circumstances, including different icon sizes, icon overlap, use of color, and gesture direction. We also describe an algorithm that assigns icons to targets, thereby improving Escape's performance.
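A minimal sketch of visually-cued gesture selection in the spirit of Escape: each icon cues a flick direction, and a touch followed by a flick selects the nearby target whose cued direction best matches the flick. The data structures, thresholds, and layout are illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Target:
    x: float
    y: float
    cue_direction: float   # direction the icon points, in degrees

def angular_distance(a: float, b: float) -> float:
    return abs((a - b + 180) % 360 - 180)

def select_target(targets, touch_x, touch_y, flick_direction,
                  max_radius=40.0, max_angle=45.0):
    """Pick the nearby target whose cued direction best matches the flick."""
    best, best_angle = None, max_angle
    for t in targets:
        if math.hypot(t.x - touch_x, t.y - touch_y) > max_radius:
            continue   # only consider targets under or near the thumb
        angle = angular_distance(t.cue_direction, flick_direction)
        if angle < best_angle:
            best, best_angle = t, angle
    return best

# Two overlapping targets disambiguated by flick direction rather than precise pointing.
targets = [Target(100, 100, cue_direction=0), Target(105, 102, cue_direction=90)]
print(select_target(targets, 102, 101, flick_direction=85))   # picks the 90-degree target
```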


user interface software and technology | 2011

The 1Line keyboard: a QWERTY layout in a single line

Frank Chun Yat Li; Richard T. Guy; Koji Yatani; Khai N. Truong

Current soft QWERTY keyboards often consume a large portion of the screen space on portable touchscreens. This space consumption can diminish the overall user experience on these devices. In this paper, we present the 1Line keyboard, a soft QWERTY keyboard that is 140 pixels tall (in landscape mode) and 40% of the height of the native iPad QWERTY keyboard. Our keyboard condenses the three rows of keys in the normal QWERTY layout into a single line with eight keys. The sizing of the eight keys is based on users' mental layout of a QWERTY keyboard on an iPad. The system disambiguates the word the user types based on the sequence of keys pressed. The user can use flick gestures to perform backspace and enter, and tap on the bezel below the keyboard to input a space. Through an evaluation, we show that participants are able to quickly learn how to use the 1Line keyboard and type at a rate of over 30 WPM after just five 20-minute typing sessions. Using a keystroke level model, we predict the peak expert text entry rate with the 1Line keyboard to be 66-68 WPM.
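Because several letters share each of the eight keys, word disambiguation works much like T9: a tapped key sequence is looked up against a dictionary indexed by key codes. The sketch below illustrates the idea; the letter-to-key grouping (by QWERTY columns) and the tiny dictionary are assumptions, not the paper's actual mapping.

```python
from collections import defaultdict

# Hypothetical grouping of QWERTY columns into eight keys (26 letters total).
KEY_GROUPS = ["qaz", "wsx", "edc", "rfvtgb", "yhn", "ujm", "ik", "olp"]
LETTER_TO_KEY = {ch: i for i, group in enumerate(KEY_GROUPS) for ch in group}

def key_sequence(word: str) -> tuple:
    """Map a word to the sequence of keys that would be tapped."""
    return tuple(LETTER_TO_KEY[ch] for ch in word.lower())

def build_index(dictionary: list) -> dict:
    """Index dictionary words by their key sequence for disambiguation."""
    index = defaultdict(list)
    for word in dictionary:
        index[key_sequence(word)].append(word)
    return index

dictionary = ["hello", "world", "the", "quick", "brown", "fox"]
index = build_index(dictionary)

# The user taps the key sequence for "the"; candidates are all words sharing that sequence.
print(index[key_sequence("the")])   # ['the']
```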


human factors in computing systems | 2011

Review spotlight: a user interface for summarizing user-generated reviews using adjective-noun word pairs

Koji Yatani; Michael Novati; Andrew Trusty; Khai N. Truong

Many people read online reviews written by other users to learn more about a product or venue. However, the overwhelming amount of user-generated reviews and variance in length, detail and quality across the reviews make it difficult to glean useful information. In this paper, we present the iterative design of our system, called Review Spotlight. It provides a brief overview of reviews using adjective-noun word pairs, and allows the user to quickly explore the reviews in greater detail. Through a laboratory user study that required participants to perform decision-making tasks, we showed that participants could form detailed impressions about restaurants and decide between two options significantly faster with Review Spotlight than with traditional review webpages.
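A minimal sketch of adjective-noun pair extraction in the spirit of Review Spotlight, using NLTK's off-the-shelf tokenizer and POS tagger as a stand-in for the authors' pipeline. It requires the "punkt" and "averaged_perceptron_tagger" NLTK data packages; the sample reviews are invented.

```python
from collections import Counter
import nltk  # assumes nltk.download("punkt") and nltk.download("averaged_perceptron_tagger")

def adjective_noun_pairs(review: str) -> list:
    """Return (adjective, noun) pairs where an adjective directly precedes a noun."""
    tokens = nltk.word_tokenize(review.lower())
    tagged = nltk.pos_tag(tokens)
    pairs = []
    for (word, tag), (next_word, next_tag) in zip(tagged, tagged[1:]):
        if tag.startswith("JJ") and next_tag.startswith("NN"):
            pairs.append((word, next_word))
    return pairs

reviews = [
    "Great food and friendly staff, but slow service.",
    "The food was great, though the service was slow.",
]
# Aggregate pair frequencies across reviews to form a brief overview.
counts = Counter(pair for review in reviews for pair in adjective_noun_pairs(review))
print(counts.most_common(3))   # e.g. [(('great', 'food'), 1), (('friendly', 'staff'), 1), ...]
```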


user interface software and technology | 2010

Sensing foot gestures from the pocket

Jeremy A. Scott; David Dearman; Koji Yatani; Khai N. Truong

Visually demanding interfaces on a mobile phone can diminish the user experience by monopolizing the user's attention when they are focusing on another task, and can impede accessibility for visually impaired users. Because mobile devices are often located in pockets when users are mobile, explicit foot movements can be defined as eyes-and-hands-free input gestures for interacting with the device. In this work, we study the human capability associated with performing foot-based interactions which involve lifting and rotation of the foot when pivoting on the toe and heel. Building upon these results, we then developed a system to learn and recognize foot gestures using a single commodity mobile phone placed in the user's pocket or in a holster on their hip. Our system uses acceleration data recorded by a built-in accelerometer on the mobile device and a machine learning approach to recognizing gestures. Through a lab study, we demonstrate that our system can classify ten different foot gestures at approximately 86% accuracy.
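The recognition pipeline described above (accelerometer windows, feature extraction, supervised classification) can be sketched as follows. The specific features, window length, SVM classifier, and the randomly generated training data are illustrative assumptions rather than the paper's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 3) accelerometer readings (x, y, z) for one gesture."""
    return np.concatenate([
        window.mean(axis=0),                        # mean per axis
        window.std(axis=0),                         # spread per axis
        window.max(axis=0) - window.min(axis=0),    # range per axis
    ])

# Hypothetical training data: one accelerometer window per labelled foot gesture.
rng = np.random.default_rng(0)
windows = [rng.normal(size=(50, 3)) for _ in range(100)]
labels = rng.integers(0, 10, size=100)   # ten foot gesture classes

X = np.stack([features(w) for w in windows])
clf = SVC(kernel="rbf").fit(X, labels)

# Classify a new accelerometer window recorded from the phone in the pocket.
new_window = rng.normal(size=(50, 3))
print(clf.predict([features(new_window)]))
```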


human factors in computing systems | 2010

Manual deskterity: an exploration of simultaneous pen + touch direct input

Ken Hinckley; Koji Yatani; Michel Pahud; Nicole Coddington; Jenny Rodenhouse; Andrew D. Wilson; Hrvoje Benko; Bill Buxton

Manual Deskterity is a prototype digital drafting table that supports both pen and touch input. We explore a division of labor between pen and touch that flows from natural human skill and differentiation of roles of the hands. We also explore the simultaneous use of pen and touch to support novel compound gestures.


human computer interaction with mobile devices and services | 2007

An evaluation of stylus-based text entry methods on handheld devices in stationary and mobile settings

Koji Yatani; Khai N. Truong

Effective text entry on handheld devices remains a significant problem in the field of mobile computing. On a personal digital assistant (PDA), text entry methods traditionally support input through the motion of a stylus held in the user's dominant hand. In this paper, we present the design of a two-handed software keyboard for a PDA which specifically takes advantage of the thumb in the non-dominant hand. We compare our chorded keyboard design to other stylus-based text entry methods in an evaluation that studies user input in both stationary and mobile settings. Our study shows that users type fastest using the mini-QWERTY keyboard, and most accurately using our two-handed keyboard. We also discovered a difference in input performance with the mini-QWERTY keyboard between stationary and mobile settings. As a user walks, text input speed decreases while error rates and mental workload increase; however, these metrics remain relatively stable in our two-handed technique despite user mobility.


IEEE Pervasive Computing | 2009

Understanding Mobile Phone Situated Sustainability: The Influence of Local Constraints and Practices on Transferability

Elaine M. Huang; Koji Yatani; Khai N. Truong; Julie A. Kientz; Shwetak N. Patel

Consumers discard roughly 125 million mobile phones into landfills every year. The authors explore how local and community factors affect mobile phone sustainability.


interactive tabletops and surfaces | 2011

Design of unimanual multi-finger pie menu interaction

Nikola Banovic; Frank Chun Yat Li; David Dearman; Koji Yatani; Khai N. Truong

Context menus, most commonly the right click menu, are a traditional method of interaction when using a keyboard and mouse. Context menus make a subset of commands in the application quickly available to the user. However, on tabletop touchscreen computers, context menus have all but disappeared. In this paper, we investigate how to design context menus for efficient unimanual multi-touch use. We investigate the limitations of the arm, wrist, and fingers and how they relate to human performance in multi-target selection tasks on a multi-touch surface. We show that selecting targets with multiple fingers simultaneously improves the performance of target selection compared to traditional single-finger selection, but also increases errors. Informed by these results, we present our own context menu design for horizontal tabletop surfaces.
