Amy K. Karlson
Microsoft
Publication
Featured research published by Amy K. Karlson.
ACM Computing Surveys | 2009
Andy Cockburn; Amy K. Karlson; Benjamin B. Bederson
There are many interface schemes that allow users to work at, and move between, focused and contextual views of a dataset. We review and categorize these schemes according to the interface mechanisms used to separate and blend views. The four approaches are overview+detail, which uses a spatial separation between focused and contextual views; zooming, which uses a temporal separation; focus+context, which minimizes the seam between views by displaying the focus within the context; and cue-based techniques which selectively highlight or suppress items within the information space. Critical features of these categories, and empirical evidence of their success, are discussed. The aim is to provide a succinct summary of the state-of-the-art, to illuminate both successful and unsuccessful interface strategies, and to identify potentially fruitful areas for further work.
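The focus+context category is often illustrated with a Furnas-style degree-of-interest (DOI) function, which blends an item's a-priori importance with its distance from the current focus to decide what is shown in detail. A minimal sketch of that idea; the items, weights, and threshold below are illustrative assumptions, not values from the survey:

```python
# Illustrative sketch of a degree-of-interest (DOI) function for focus+context views.
# DOI(x | focus) = API(x) - distance(x, focus): items that are intrinsically important
# or near the focus are rendered in detail; the rest are shown as context or elided.
# The items, API values, and threshold are made-up examples.

def doi(api, distance):
    """Degree of interest: a-priori importance minus distance from the focus."""
    return api - distance

items = {
    # name: (a-priori importance, distance in hops from the focused node)
    "focused node": (5, 0),
    "sibling":      (3, 1),
    "distant leaf": (2, 4),
    "root":         (5, 3),
}

THRESHOLD = 2  # show full detail only for items with DOI >= 2 (arbitrary choice)

for name, (api, dist) in items.items():
    level = "detail" if doi(api, dist) >= THRESHOLD else "context"
    print(f"{name:13s} DOI={doi(api, dist):+d} -> render as {level}")
```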
human-computer interaction with mobile devices and services | 2006
Pekka Parhi; Amy K. Karlson; Benjamin B. Bederson
This paper describes a two-phase study conducted to determine optimal target sizes for one-handed thumb use of mobile handheld devices equipped with a touch-sensitive screen. Similar studies have provided recommendations for target sizes when using a mobile device with two hands plus a stylus, and when interacting with a desktop-sized display with an index finger, but never for thumbs when holding a small device in a single hand. The first phase explored the required target size for single-target (discrete) pointing tasks, such as activating buttons, radio buttons or checkboxes. The second phase investigated optimal sizes for widgets used for tasks that involve a sequence of taps (serial), such as text entry. Since holding a device in one hand constrains thumb movement, we varied target positions to determine if performance depended on screen location. The results showed that while speed generally improved as targets grew, there were no significant differences in error rate between target sizes ≥ 9.6 mm in discrete tasks and targets ≥ 7.7 mm in serial tasks. Along with subjective ratings and the findings on hit-response variability, we found that a target size of 9.2 mm for discrete tasks and 9.6 mm for serial tasks should be sufficiently large for one-handed thumb use on touchscreen-based handhelds without degrading performance and preference.
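The recommended sizes are physical, so a layout has to convert them to pixels for a particular screen density before it can enforce them. A small sketch of that check; the 9.2 mm and 9.6 mm figures come from the abstract above, while the example screen density is an assumption:

```python
# Convert the recommended physical target sizes to pixels for a given screen.
# From the abstract: >= 9.2 mm for discrete (single-tap) targets and >= 9.6 mm for
# serial (repeated-tap) targets. The 160 dpi density is an assumed example device.

MM_PER_INCH = 25.4

def min_target_px(min_mm, dpi):
    """Pixels needed so a target is at least `min_mm` millimetres on a `dpi` screen."""
    return min_mm * dpi / MM_PER_INCH

dpi = 160  # hypothetical device density
print(f"discrete targets: >= {min_target_px(9.2, dpi):.0f} px")
print(f"serial targets:   >= {min_target_px(9.6, dpi):.0f} px")
```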
human factors in computing systems | 2005
Amy K. Karlson; Benjamin B. Bederson; John SanGiovanni
We present two interfaces to support one-handed thumb use for PDAs and cell phones. Both use Scalable User Interface (ScUI) techniques to support multiple devices with different resolutions and aspect ratios. The designs use variations of zooming interface techniques to provide multiple views of application data: AppLens uses a tabular fisheye to access nine applications, while LaunchTile uses pure zoom to access thirty-six applications. We introduce two sets of thumb gestures, each representing a different philosophy for one-handed interaction. We conducted two studies to evaluate our designs. In the first study, we explored whether users could learn and execute the AppLens gesture set with minimal training. Participants performed more accurately and efficiently using gestures for directional navigation than using gestures for object interaction. In the second study, we gathered user reactions to each interface, as well as comparative preferences. With minimal exposure to each design, most users favored AppLens's tabular fisheye interface.
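A tabular fisheye of the AppLens kind can be thought of as a 3x3 grid in which the focused cell receives most of the available space and the remaining cells shrink to thumbnails. A sketch of that space allocation; the proportions are assumptions for illustration, not the published layout:

```python
# Toy space allocation for a 3x3 tabular fisheye: the focused row and column get a
# large share of the height/width, the others split the remainder. The 0.6 share is
# an illustrative assumption, not AppLens's actual geometry.

def fisheye_shares(n, focus_index, focus_share=0.6):
    """Return per-row (or per-column) size fractions with one enlarged focus cell."""
    other = (1.0 - focus_share) / (n - 1)
    return [focus_share if i == focus_index else other for i in range(n)]

cols = fisheye_shares(3, focus_index=1)   # focus in the middle column
rows = fisheye_shares(3, focus_index=0)   # focus in the top row

for row_share in rows:
    # print each cell's fraction of total screen area
    print("  ".join(f"{row_share * col_share:4.0%}" for col_share in cols))
```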
human factors in computing systems | 2012
Daniel McDuff; Amy K. Karlson; Ashish Kapoor; Asta Roseway; Mary Czerwinski
We present AffectAura, an emotional prosthetic that allows users to reflect on their emotional states over long periods of time. We designed a multimodal sensor set-up for continuous logging of audio, visual, physiological and contextual data, a classification scheme for predicting user affective state, and an interface for user reflection. The system continuously predicts a user's valence, arousal and engagement, and correlates this with information on events, communications and data interactions. We evaluate the interface through a user study consisting of six users and over 240 hours of data, and demonstrate the utility of such a reflection tool. We show that users could reason forward and backward in time about their emotional experiences using the interface, and found this useful.
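The classification scheme maps continuously logged multimodal features onto affect labels that are then placed on a timeline for reflection. A minimal sketch of that kind of pipeline on synthetic data; the feature names, labels, and classifier choice are placeholders, not AffectAura's actual feature set or model:

```python
# Hedged sketch of a continuous-affect classification pipeline: windowed multimodal
# features in, a valence label out. Features, labels, and the SVM choice are
# placeholders trained on synthetic data.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# One row per time window: [audio energy, face movement, skin conductance, #emails]
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)          # 0 = negative valence, 1 = positive (synthetic)

model = make_pipeline(StandardScaler(), SVC(probability=True))
model.fit(X[:150], y[:150])

# Predict affect for the most recent windows and attach it to a timeline for reflection.
probs = model.predict_proba(X[150:])[:, 1]
for t, p in enumerate(probs[:5]):
    print(f"window {t}: P(positive valence) = {p:.2f}")
```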
human factors in computing systems | 2009
Amy K. Karlson; A. J. Bernheim Brush; Stuart E. Schechter
Mobile phones are becoming increasingly personalized in terms of the data they store and the types of services they provide. At the same time, field studies have reported that there are a variety of situations in which it is natural for people to share their phones with others. However, most mobile phones support a binary security model that offers all-or-nothing access to the phone. We interviewed 12 smartphone users to explore how security and data privacy concerns affected their willingness to share their mobile phones. The diversity of guest user categorizations and associated security constraints expressed by the participants suggests the need for a security model richer than today's binary model.
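One way to picture a security model richer than all-or-nothing access is a per-guest-category policy over phone capabilities. The categories and capabilities below are invented for illustration, not the taxonomy the participants produced:

```python
# Illustrative per-guest access policy as an alternative to all-or-nothing phone sharing.
# Guest categories and permitted capabilities are made-up examples.

POLICIES = {
    "spouse":   {"calls", "sms", "photos", "email", "browser"},
    "child":    {"games", "photos"},
    "coworker": {"calls", "browser"},
    "stranger": {"calls"},   # e.g. lending the phone for an emergency call
}

def allowed(guest_category, capability):
    """True if a guest in this category may use the given phone capability."""
    return capability in POLICIES.get(guest_category, set())

print(allowed("child", "email"))     # False: email stays private
print(allowed("coworker", "calls"))  # True
```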
international conference on pervasive computing | 2011
Hong Lu; A. J. Bernheim Brush; Bodhi Priyantha; Amy K. Karlson; Jie Liu
Automatically identifying the person you are talking with using continuous audio sensing has the potential to enable many pervasive computing applications, from memory assistance to annotating life-logging data. However, a number of challenges, including energy efficiency and training data acquisition, must be addressed before unobtrusive audio sensing is practical on mobile devices. We built SpeakerSense, a speaker identification prototype that uses a heterogeneous multi-processor hardware architecture, splitting computation between a low-power processor and the phone's application processor to enable continuous background sensing with minimal power requirements. Using SpeakerSense, we benchmarked several system parameters (sampling rate, GMM complexity, smoothing window size, and amount of training data needed) to identify thresholds that balance computation cost with performance. We also investigated channel compensation methods that make it feasible to acquire training data from phone calls, and an automatic segmentation method for training speaker models based on one-to-one conversations.
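The core of the GMM-based identification loop that SpeakerSense benchmarks is scoring each audio frame against per-speaker models and smoothing the per-frame decisions over a window. A sketch using scikit-learn mixtures on synthetic features; the feature dimensionality, mixture sizes, and window length are assumptions, not the paper's benchmarked values:

```python
# Sketch of GMM-based speaker identification with a smoothing window, on synthetic
# "MFCC-like" features. Dimensions, component counts, and window size are assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Enrollment: train one GMM per speaker from that speaker's own audio features.
train = {
    "alice": rng.normal(loc=0.0, size=(500, 13)),
    "bob":   rng.normal(loc=1.5, size=(500, 13)),
}
models = {name: GaussianMixture(n_components=8, random_state=0).fit(feats)
          for name, feats in train.items()}

def identify(frames, window=50):
    """Score each frame against every speaker model, then vote over a smoothing window."""
    names = list(models)
    scores = np.stack([models[n].score_samples(frames) for n in names])  # (speakers, frames)
    per_frame = scores.argmax(axis=0)
    votes = np.bincount(per_frame[-window:], minlength=len(names))
    return names[int(votes.argmax())]

test = rng.normal(loc=1.5, size=(200, 13))   # unlabeled audio that resembles "bob"
print(identify(test))                        # expected: bob
```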
international conference on human computer interaction | 2007
Amy K. Karlson; Benjamin B. Bederson
In this paper, we present ThumbSpace, a software-based interaction technique that provides general one-handed thumb operation of touchscreen-based mobile devices. Our goal is to provide accurate selection of all interface objects, especially small and far targets, which are traditionally difficult to interact with using the thumb. We present the ThumbSpace design and a comparative evaluation against direct interaction for target selection. Our results show that ThumbSpace was well received, improved accuracy for selecting targets that are out of thumb reach, and made users as effective at selecting small targets as large targets. The results further suggest that user practice and design iteration hold the potential to close the gap in access time between the two input methods, where ThumbSpace did not do as well as direct interaction.
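The essential idea behind a proxy-area technique like ThumbSpace is a mapping from a small, thumb-reachable rectangle onto the full screen, so a tap inside the proxy can select a far-away target. A geometric sketch of only that mapping, with illustrative screen and rectangle coordinates:

```python
# Sketch of mapping a tap inside a small thumb-reachable proxy rectangle onto the
# full screen, the core of proxy-area techniques like ThumbSpace. Coordinates are
# illustrative (a 480x800 screen with a proxy box in the lower-right corner).

SCREEN_W, SCREEN_H = 480, 800
PROXY = (240, 560, 460, 780)   # (left, top, right, bottom) of the thumb-reachable area

def proxy_to_screen(x, y):
    """Map a tap at (x, y) inside the proxy rectangle to full-screen coordinates."""
    left, top, right, bottom = PROXY
    u = (x - left) / (right - left)
    v = (y - top) / (bottom - top)
    return u * SCREEN_W, v * SCREEN_H

# A tap near the proxy's upper-left corner selects a target near the screen's
# upper-left, which the thumb could not comfortably reach directly.
print(proxy_to_screen(250, 570))   # -> roughly (21.8, 36.4)
```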
human factors in computing systems | 2006
Amy K. Karlson; George G. Robertson; Daniel C. Robbins; Mary Czerwinski; Greg Smith
In this paper we describe a novel approach for searching large data sets from a mobile phone. Existing interfaces for mobile search require keyword text entry and are not suited for browsing. Our alternative uses a hybrid model to de-emphasize tedious keyword entry in favor of iterative data filtering. We propose navigation and selection of hierarchical metadata (facet navigation), with incremental text entry to further narrow the results. We conducted a formative evaluation to understand the relative advantages of keyword entry versus facet navigation for both browse and search tasks on the phone. We found keyword entry to be more powerful when the name of the search target is known, while facet navigation is otherwise more effective and strongly preferred.
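The hybrid model combines facet selection (iterative filtering on hierarchical metadata) with incremental keyword entry to narrow the remaining results. A minimal sketch over a toy catalog; the facets and records are invented, not the study's dataset:

```python
# Toy hybrid of facet navigation and incremental keyword filtering over a small catalog.
# Facets and records are invented for illustration.

CATALOG = [
    {"title": "Blue in Green", "genre": "Jazz", "decade": "1950s"},
    {"title": "So What",       "genre": "Jazz", "decade": "1950s"},
    {"title": "Blue Monday",   "genre": "Pop",  "decade": "1980s"},
    {"title": "Green Onions",  "genre": "Soul", "decade": "1960s"},
]

def filter_items(items, facets=None, prefix=""):
    """Keep items matching every selected facet and whose title contains the typed text."""
    facets = facets or {}
    return [it for it in items
            if all(it.get(k) == v for k, v in facets.items())
            and prefix.lower() in it["title"].lower()]

# Browse: narrow by facet first, then type a few characters to finish.
step1 = filter_items(CATALOG, facets={"genre": "Jazz"})
step2 = filter_items(step1, prefix="blu")
print([it["title"] for it in step1])   # ['Blue in Green', 'So What']
print([it["title"] for it in step2])   # ['Blue in Green']
```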
international conference on pervasive computing | 2009
Amy K. Karlson; Brian Meyers; Andy Jacobs; Paul Johns; Shaun K. Kane
Research has demonstrated that information workers often manage several different computing devices in an effort to balance convenience, mobility, input efficiency, and content readability throughout their day. The high portability of the mobile phone has made it an increasingly valuable member of this ecosystem of devices. To understand how future technologies might better support productivity tasks as people transition between devices, we examined the mobile phone and PC usage patterns of sixteen information workers across several weeks. Our data logs, together with follow-up interview feedback from four of the participants, confirm that the phone is highly leveraged for digital information needs beyond calls and SMS, but suggest that these users do not currently traverse the device boundary within a given task.
visual analytics science and technology | 2006
Jerry Alan Fails; Amy K. Karlson; Layla Shahamat; Ben Shneiderman
Finding patterns of events over time is important in searching patient histories, Web logs, news stories, and criminal activities. This paper presents PatternFinder, an integrated interface for query and result-set visualization for search and discovery of temporal patterns within multivariate and categorical data sets. We define temporal patterns as sequences of events with inter-event time spans. PatternFinder allows users to specify the attributes of events and time spans to produce powerful pattern queries that are difficult to express with other formalisms. We characterize the range of queries PatternFinder supports as users vary the specificity at which events and time spans are defined. PatternFinder's query capabilities, together with coupled ball-and-chain and tabular visualizations, enable users to effectively query, explore, and analyze event patterns both within and across data entities (e.g., patient histories, terrorist groups, Web logs, etc.).
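A temporal pattern in this sense is a sequence of event constraints joined by bounds on the time spans between consecutive events. A small sketch of matching one such pattern against a single entity's event log; the event types, timestamps, and span bounds are invented examples, and the matcher is a simple greedy scan rather than PatternFinder's query engine:

```python
# Sketch of matching a temporal pattern -- event types joined by inter-event time-span
# bounds -- against one entity's time-ordered event log. All values are invented.

from datetime import datetime, timedelta

log = [
    ("admitted",   datetime(2006, 3, 1)),
    ("x-ray",      datetime(2006, 3, 2)),
    ("surgery",    datetime(2006, 3, 10)),
    ("discharged", datetime(2006, 3, 15)),
]

# Pattern: "x-ray", then "surgery" 1-14 days later, then "discharged" 1-30 days after that.
pattern = [
    ("x-ray",      None),
    ("surgery",    (timedelta(days=1), timedelta(days=14))),
    ("discharged", (timedelta(days=1), timedelta(days=30))),
]

def matches(log, pattern):
    """Greedy left-to-right match of the pattern against a time-ordered event log."""
    prev_time, idx = None, 0
    for event, time in log:
        wanted, span = pattern[idx]
        if event != wanted:
            continue
        if span is not None:
            lo, hi = span
            if not (lo <= time - prev_time <= hi):
                continue
        prev_time, idx = time, idx + 1
        if idx == len(pattern):
            return True
    return False

print(matches(log, pattern))   # True
```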