Daniel Ashbrook
Nokia
Publications
Featured research published by Daniel Ashbrook.
ubiquitous computing | 2003
Daniel Ashbrook; Thad Starner
Wearable computers have the potential to act as intelligent agents in everyday life and to assist the user in a variety of tasks, using context to determine how to act. Location is the most common form of context used by these agents to determine the user's task. However, another potential use of location context is the creation of a predictive model of the user's future movements. We present a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales. These locations are then incorporated into a Markov model that can be consulted for use with a variety of applications in both single-user and collaborative scenarios.
international symposium on wearable computers | 2002
Daniel Ashbrook; Thad Starner
Wearable computers have the potential to act as intelligent agents in everyday life and assist the user in a variety of tasks, using context to determine how to act. Location is the most common form of context used by these agents to determine the user’s task. However, another potential use of location context is the creation of a predictive model of the user’s future movements. We present a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales. These locations are then incorporated into a Markov model that can be consulted for use with a variety of applications in both single-user and collaborative scenarios.
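The clustering-and-prediction pipeline described in the two abstracts above can be sketched roughly as follows. This is an illustrative reconstruction, not the papers' actual algorithm: the greedy radius-based clustering, the 0.5 km radius, and all function names are assumptions.

```python
import math
from collections import defaultdict

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cluster_points(points, radius_km=0.5):
    """Greedily group (lat, lon) GPS fixes into 'significant locations':
    a fix joins the first existing cluster whose seed lies within
    radius_km, otherwise it seeds a new cluster. Returns a label per fix."""
    centers, labels = [], []
    for lat, lon in points:
        for i, (clat, clon) in enumerate(centers):
            if haversine_km(lat, lon, clat, clon) <= radius_km:
                labels.append(i)
                break
        else:
            centers.append((lat, lon))
            labels.append(len(centers) - 1)
    return labels

def transition_model(labels):
    """First-order Markov model P(next location | current location),
    estimated from the observed sequence of location visits."""
    counts = defaultdict(lambda: defaultdict(int))
    # Collapse consecutive repeats so we model moves, not dwell time.
    visits = [labels[0]] + [b for a, b in zip(labels, labels[1:]) if a != b]
    for cur, nxt in zip(visits, visits[1:]):
        counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(d.values()) for nxt, c in d.items()}
            for cur, d in counts.items()}
```

A model built this way can be queried for the most probable next location given the current one, which is the kind of prediction the abstract's applications would consult.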
international symposium on wearable computers | 2000
Thad Starner; Jake Auxier; Daniel Ashbrook; Maribeth Gandy
In this paper we present a wearable device for control of home automation systems via hand gestures. This solution has many advantages over traditional home automation interfaces in that it can be used by those with loss of vision, motor skills, and mobility. By combining other sources of context with the pendant we can reduce the number and complexity of gestures while maintaining functionality. As users input gestures, the system can also analyze their movements for pathological tremors. This information can then be used for medical diagnosis, therapy, and emergency services. Currently, the Gesture Pendant can recognize control gestures with an accuracy of 95% and user-defined gestures with an accuracy of 97%. It can detect tremors above 2 Hz within 0.1 Hz.
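The tremor figures quoted above (tremors above 2 Hz, resolved within 0.1 Hz) suggest a frequency-domain analysis of the motion signal. A minimal sketch of one such approach, locating the dominant spectral peak of a motion trace, is shown below; the function name and the FFT-periodogram method are assumptions, not the Gesture Pendant's actual implementation.

```python
import numpy as np

def dominant_tremor_hz(signal, sample_rate_hz, min_hz=2.0):
    """Estimate the dominant oscillation frequency of a 1-D motion
    signal by locating the largest FFT magnitude at or above min_hz.
    Returns None if no energy is found in that band."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
    band = freqs >= min_hz
    if not band.any() or spectrum[band].max() == 0:
        return None
    return float(freqs[band][np.argmax(spectrum[band])])
```

Note that the frequency resolution of a plain FFT is sample_rate / N, so resolving a tremor to within 0.1 Hz requires roughly ten seconds of data (or interpolation across bins).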
user interface software and technology | 2004
Kent Lyons; Christopher Skeels; Thad Starner; Cornelis M. Snoeck; Benjamin A. Wong; Daniel Ashbrook
In this paper, we explore the concept of dual-purpose speech: speech that is socially appropriate in the context of a human-to-human conversation which also provides meaningful input to a computer. We motivate the use of dual-purpose speech and explore issues of privacy and technological challenges related to mobile speech recognition. We present three applications that utilize dual-purpose speech to assist a user in conversational tasks: the Calendar Navigator Agent, DialogTabs, and Speech Courier. The Calendar Navigator Agent navigates a user's calendar based on socially appropriate speech used while scheduling appointments. DialogTabs allows a user to postpone cognitive processing of conversational material by providing short-term capture of transient information. Finally, Speech Courier allows asynchronous delivery of relevant conversational information to a third party.
human factors in computing systems | 2010
Daniel Ashbrook; Thad Starner
Devices capable of gestural interaction through motion sensing are increasingly becoming available to consumers; however, motion gesture control has yet to appear outside of game consoles. Interaction designers are frequently not expert in pattern recognition, which may be one reason for this lack of availability. Another issue is how to effectively test gestures to ensure that they are not unintentionally activated by a user's normal movements during everyday usage. We present MAGIC, a gesture design tool that addresses both of these issues, and detail the results of an evaluation.
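One way to test whether a candidate gesture would be unintentionally activated by a user's normal movements, as the abstract puts it, is to slide the gesture template over a long recording of everyday motion and count near-matches. The sketch below is a hypothetical illustration of that idea only; MAGIC's actual recognizer and distance measure are described in the paper and may differ.

```python
import numpy as np

def false_activation_count(template, everyday, threshold):
    """Slide a gesture template over a long 'everyday movement'
    recording and count windows whose mean squared distance to the
    template falls below threshold, i.e. places where the gesture
    would fire by accident."""
    t = np.asarray(template, dtype=float)
    e = np.asarray(everyday, dtype=float)
    n = len(t)
    hits = 0
    for start in range(len(e) - n + 1):
        window = e[start:start + n]
        if np.mean((window - t) ** 2) < threshold:
            hits += 1
    return hits
```

A designer could compare candidate gestures by this count: the lower the accidental-match count against a large everyday-movement corpus, the safer the gesture is to deploy.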
user interface software and technology | 2012
Kent Lyons; David Nguyen; Daniel Ashbrook; Sean White
We present Facet, a multi-display wrist-worn system consisting of multiple independent touch-sensitive segments joined into a bracelet. Facet automatically determines the pose of the system as a whole and of each segment individually. It further supports multi-segment touch, yielding a rich set of touch input techniques. Our work builds on these two primitives to allow the user to control how applications use segments alone and in coordination. Applications can expand to use more segments, collapse to encompass fewer, and be swapped with other segments. We also explore how the concepts from Facet could apply to other devices in this design space.
human factors in computing systems | 2008
Daniel Ashbrook; James Clawson; Kent Lyons; Thad Starner; Nirmal Patel
We investigate the effect of placement and user mobility on the time required to access an on-body interface. In our study, a wrist-mounted system was significantly faster to access than a device stored in the pocket or mounted on the hip. In the latter two conditions, 78% of the time it took to access the device was spent retrieving the device from its holder. As mobile devices are beginning to include peripherals (for example, Bluetooth headsets and watches connected to a mobile phone stored in the pocket), these results may help guide interface designers with respect to distributing functions across the body between peripherals.
human computer interaction with mobile devices and services | 2008
Daniel Ashbrook; Kent Lyons; Thad Starner
The wristwatch is a device that is quick to access, but is currently under-utilized as a platform for interaction. We investigate interaction on a circular touchscreen wristwatch, empirically determining the error rate for variously-sized buttons placed around the rim. We consider three types of inter-target movements, and derive a mathematical model for error rate given a movement type and angular and radial button widths.
wearable and implantable body sensor networks | 2007
David Minnen; Tracy L. Westeyn; Daniel Ashbrook; Peter Presti; Thad Starner
We describe the activity recognition component of the Soldier Assist System (SAS), which was built to meet the goals of DARPA’s Advanced Soldier Sensor Information System and Technology (ASSIST) program. As a whole, SAS provides an integrated solution that includes on-body data capture, automatic recognition of soldier activity, and a multimedia interface that combines data search and exploration. The recognition component analyzes readings from six on-body accelerometers to identify activity. The activities are modeled by boosted 1D classifiers, which allows efficient selection of the most useful features within the learning algorithm. We present empirical results based on data collected at Georgia Tech and at the Army’s Aberdeen Proving Grounds during official testing by a DARPA-appointed NIST evaluation team. Our approach achieves 78.7% accuracy for continuous event recognition and 70.3% frame-level accuracy. The accuracies increase to 90.3% and 90.3%, respectively, when considering only the modeled activities. In addition to standard error metrics, we discuss error division diagrams (EDDs) for several Aberdeen data sequences to provide a rich visual representation of the performance of our system.
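The "boosted 1D classifiers" mentioned above follow the classic AdaBoost-with-decision-stumps pattern, in which each boosting round picks the single most discriminative feature and threshold, so feature selection falls out of the learning loop. A minimal sketch under that assumption follows; the function names and the exhaustive threshold search are illustrative, not the SAS implementation.

```python
import numpy as np

def train_boosted_stumps(X, y, rounds=10):
    """Minimal AdaBoost over depth-1 stumps on scalar features.
    Each round selects the (feature, threshold, polarity) triple that
    best classifies the reweighted data. Labels y must be +/-1."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = max(err, 1e-10)                     # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)     # stump weight
        w *= np.exp(-alpha * y * pred)            # reweight examples
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    """Sign of the alpha-weighted vote of all trained stumps."""
    score = sum(a * np.where(p * (X[:, j] - t) >= 0, 1, -1)
                for a, j, t, p in ensemble)
    return np.sign(score)
```

The selected (feature, threshold) pairs double as a ranking of which accelerometer-derived features matter most, which is the efficiency the abstract alludes to.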
user interface software and technology | 2011
Felix Xiaozhu Lin; Daniel Ashbrook; Sean White
We present RhythmLink, a system that improves the wireless pairing user experience. Users can link devices such as phones and headsets together by tapping a known rhythm on each device. In contrast to current solutions, RhythmLink does not require user interaction with the host device during the pairing process; and it only requires binary input on the peripheral, making it appropriate for small devices with minimal physical affordances. We describe the challenges in enabling this user experience and our solution, an algorithm that allows two devices to compare imprecisely-entered tap sequences while maintaining the secrecy of those sequences. We also discuss our prototype implementation of RhythmLink and review the results of initial user tests.
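The core matching problem above, deciding whether two imprecisely tapped rhythms are "the same", can be illustrated by normalizing inter-tap intervals so the comparison is tempo-invariant. This sketch covers only the timing-tolerance comparison; RhythmLink's actual algorithm additionally keeps the sequences secret during comparison, which this illustration does not attempt, and the function name and tolerance value are assumptions.

```python
def rhythm_matches(taps_a, taps_b, tolerance=0.15):
    """Compare two tap rhythms given as timestamp lists (seconds).
    Inter-tap intervals are normalized by their total duration, so a
    rhythm tapped slowly matches the same rhythm tapped quickly; the
    rhythms match if every normalized interval agrees within
    `tolerance`."""
    if len(taps_a) != len(taps_b) or len(taps_a) < 2:
        return False

    def normalized_intervals(taps):
        gaps = [b - a for a, b in zip(taps, taps[1:])]
        total = sum(gaps)
        return [g / total for g in gaps]

    return all(abs(x - y) <= tolerance
               for x, y in zip(normalized_intervals(taps_a),
                               normalized_intervals(taps_b)))
```

For example, the rhythm short-short-long tapped at half speed on the peripheral would still match the same pattern tapped on the phone, while a different pattern would be rejected.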