Publication


Featured research published by Kevin Larson.


User Interface Software and Technology | 1998

Data mountain: using spatial memory for document management

George G. Robertson; Mary Czerwinski; Kevin Larson; Daniel C. Robbins; David Thiel; Maarten van Dantzich

Effective management of documents on computers has been a central user interface problem for many years. One common approach involves using 2D spatial layouts of icons representing the documents, particularly for information workspace tasks. This approach takes advantage of human 2D spatial cognition. More recently, several 3D spatial layouts have engaged 3D spatial cognition capabilities. Some have attempted to use spatial memory in 3D virtual environments. However, there has been no proof to date that spatial memory works the same way in 3D virtual environments as it does in the real world. We describe a new technique for document management called the Data Mountain, which allows users to place documents at arbitrary positions on an inclined plane in a 3D desktop virtual environment using a simple 2D interaction technique. We discuss how the design evolved in response to user feedback. We also describe a user study that shows that the Data Mountain does take advantage of spatial memory. Our study shows that the Data Mountain has statistically reliable advantages over the Microsoft Internet Explorer Favorites mechanism for managing documents of interest in an information workspace.
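
The core interaction here lends itself to a short illustration. The following is a minimal sketch (not the authors' implementation) of mapping an ordinary 2D mouse drag onto a position on an inclined plane in a 3D scene; the tilt angle, coordinate conventions, and function names are illustrative assumptions.

    import math

    TILT_DEG = 65.0  # assumed inclination of the plane; the paper does not specify this value

    def mouse_to_plane(mx, my, screen_w, screen_h, plane_w=2.0, plane_d=2.0):
        """Map a 2D mouse position to an (x, y, z) point on the inclined plane."""
        u = mx / screen_w          # 0..1 across the screen
        v = 1.0 - my / screen_h    # 0..1; bottom of the screen is the front of the plane
        x = (u - 0.5) * plane_w    # left/right placement on the plane
        d = v * plane_d            # distance travelled up the slope
        t = math.radians(TILT_DEG)
        return (x, d * math.sin(t), -d * math.cos(t))  # height gained, depth into the scene

    # A drag toward the upper middle of a 1024x768 screen places the
    # document far up the "mountain".
    print(mouse_to_plane(512, 200, 1024, 768))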


Human Factors in Computing Systems | 1998

Web page design: implications of memory, structure and scent for information retrieval

Kevin Larson; Mary Czerwinski

Much is known about depth and breadth tradeoff issues in graphical user interface menu design. We describe an experiment to see whether large breadth and decreased depth are preferable, both subjectively and via performance data, while attempting to design for optimal scent throughout different structures of a website. A study is reported which modified previous procedures for investigating depth/breadth tradeoffs in content design for the web. Results showed that, while increased depth did harm search performance on the web, a medium condition of depth and breadth outperformed the broadest, shallow web structure overall.
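
The tradeoff the abstract describes can be made concrete with a toy cost model. The sketch below compares uniform and mixed hierarchies over 512 leaf pages; the linear scan-cost parameters are illustrative assumptions, not measurements from the study.

    # Candidate structures over 512 items: per-level breadths listed top-down.
    structures = {
        "512 flat": [512],
        "16 x 32": [16, 32],
        "32 x 16": [32, 16],
        "8 x 8 x 8": [8, 8, 8],
    }

    def scan_cost(levels, per_link=0.25, per_page=1.0):
        """Assumed cost: one page load per level plus scanning half the links per page."""
        return sum(per_page + per_link * breadth / 2 for breadth in levels)

    for name, levels in structures.items():
        total = 1
        for breadth in levels:
            total *= breadth
        assert total == 512                  # every structure reaches the same 512 leaves
        print(f"{name:>9}: estimated cost {scan_cost(levels):.2f}")

Under this toy model the deepest structure pays repeated page-load costs while the flat structure pays a large per-page scanning cost, which is the shape of the tradeoff the experiment measures (the real study also accounts for scent quality, which a cost model this simple ignores).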


Human Factors in Computing Systems | 2005

Designing human friendly human interaction proofs (HIPs)

Kumar Chellapilla; Kevin Larson; Patrice Y. Simard; Mary Czerwinski

HIPs, or Human Interactive Proofs, are challenges meant to be easily solved by humans, while remaining too hard to be economically solved by computers. HIPs are increasingly used to protect services against automatic script attacks. To be effective, a HIP must be difficult enough to discourage script attacks by raising the computation and/or development cost of breaking the HIP to an unprofitable level. At the same time, the HIP must be easy enough to solve in order to not discourage humans from using the service. Early HIP designs have successfully met these criteria [1]. However, the growing sophistication of attackers and correspondingly increasing profit incentives have rendered most of the currently deployed HIPs vulnerable to attack [2,7,12]. Yet, most companies have been reluctant to increase the difficulty of their HIPs for fear of making them too complex or unappealing to humans. The purpose of this study is to find the visual distortions that are most effective at foiling computer attacks without hindering humans. The contribution of this research is that we discovered that 1) automatically generating HIPs by varying particular distortion parameters renders HIPs that are too easy for computer hackers to break, yet humans still have difficulty recognizing them, and 2) it is possible to build segmentation-based HIPs that are extremely difficult and expensive for computers to solve, while remaining relatively easy for humans.
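
To make the idea of parameterized HIP generation concrete, here is a minimal sketch using Pillow; it draws each character separately and applies randomly sampled distortions (rotation and position jitter). The parameter ranges and function names are illustrative assumptions, not the settings studied in the paper.

    import random
    from PIL import Image, ImageDraw, ImageFont

    def make_hip(text, rot_range=30, jitter=6, cell=40):
        """Render `text` with per-character random rotation and position jitter."""
        canvas = Image.new("L", (cell * len(text) + 20, cell + 20), 255)
        font = ImageFont.load_default()
        for i, ch in enumerate(text):
            glyph = Image.new("L", (cell, cell), 255)
            ImageDraw.Draw(glyph).text((cell // 3, cell // 3), ch, font=font, fill=0)
            glyph = glyph.rotate(random.uniform(-rot_range, rot_range), fillcolor=255)
            x = 10 + i * cell + random.randint(-jitter, jitter)
            y = 10 + random.randint(-jitter, jitter)
            canvas.paste(glyph, (x, y))
        return canvas

    make_hip("X7KQ4").save("hip.png")

Sweeping rot_range and jitter upward is exactly the kind of single-parameter variation the study found insufficient on its own: it degrades human recognition before it defeats machine recognition.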


Lecture Notes in Computer Science | 2005

Building segmentation based human-friendly human interaction proofs (HIPs)

Kumar Chellapilla; Kevin Larson; Patrice Y. Simard; Mary Czerwinski

Human interaction proofs (HIPs) have become commonplace on the internet due to their effectiveness in deterring automated abuse of online services intended for humans. However, there is a co-evolutionary arms race in progress and these proofs are becoming more difficult for genuine users while attackers are getting better at breaking existing HIPs. We studied various popular HIPs on the internet to understand their strength and human friendliness. To determine HIP strength, we adopted a direct approach of building computer attacks using image processing and machine learning techniques. To understand human-friendliness, a sequence of user studies was conducted to investigate HIP character recognition by humans under a variety of visual distortions and clutter commonly employed in reading-based HIPs. We found that many of the online HIPs are pure recognition tasks that can be easily broken using machine learning. The stronger HIPs tend to pose a combination of segmentation and recognition challenges. Further, the HIP user studies show that given correct segmentation, computers are much better at HIP character recognition than humans. In light of these results, we propose that segmentation-based reading challenges are the future for building stronger human-friendly HIPs. An example of such a segmentation-based HIP is presented with a preliminary assessment of its strength and human-friendliness.
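
The segmentation-based idea can likewise be sketched in a few lines: characters are crowded until they touch, and clutter in the same ink color crosses the string, so an attacker must solve where one character ends and the next begins before any recognizer can run. Spacing and clutter counts below are illustrative assumptions.

    import random
    from PIL import Image, ImageDraw, ImageFont

    def make_segmentation_hip(text, advance=7, n_arcs=4):
        """Render `text` with tight tracking plus same-color arcs crossing the glyphs."""
        w, h = advance * len(text) + 20, 30
        img = Image.new("L", (w, h), 255)
        draw = ImageDraw.Draw(img)
        font = ImageFont.load_default()
        x = 10
        for ch in text:
            draw.text((x, 10), ch, font=font, fill=0)
            x += advance                       # tighter than the glyph width, so characters touch
        for _ in range(n_arcs):                # clutter strokes in the same color as the text
            x0, y0 = random.randint(0, w // 2), random.randint(0, h // 2)
            draw.arc((x0, y0, x0 + w // 2, y0 + h // 2),
                     random.randint(0, 180), random.randint(180, 360), fill=0)
        return img

    make_segmentation_hip("R4GW7").save("seg_hip.png")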


IEEE Spectrum | 2007

The Technology of Text

Kevin Larson

Computer screens have a serious resolution problem: even the LCD screens on most laptops and desktop computers offer only about 100 pixels per inch, whereas a printed page is of much higher quality. This paper discusses how to achieve much better effective resolution on a computer screen to improve one's reading experience.
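
The resolution gap is easy to quantify. The sketch below compares how many device pixels are available to render one em of 10-point type on a roughly 100 ppi screen versus a 1200 dpi printer (72 points per inch is standard; the device figures are the article's ballpark numbers).

    POINTS_PER_INCH = 72.0

    def pixels_per_em(point_size, device_ppi):
        """Device pixels spanning one em of type at the given size."""
        return point_size / POINTS_PER_INCH * device_ppi

    for name, ppi in [("100 ppi LCD", 100), ("1200 dpi printer", 1200)]:
        px = pixels_per_em(10, ppi)
        print(f"{name:>16}: {px:5.1f} px per em (~{px * px:6.0f} px per em-square)")

At 10 points, the screen offers about 14 pixels per em against the printer's 167, which is why techniques that raise effective screen resolution matter for reading.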


Human Factors in Computing Systems | 2000

Text in 3D: some legibility results

Kevin Larson; Maarten van Dantzich; Mary Czerwinski; George G. Robertson

3D user interfaces for productivity applications often display object labels or whole documents in arrangements where the text is rotated instead of screen-aligned. Rotating a document sideways saves screen real estate while allowing inspection of the document's content. This paper reports on an initial reading-speed study of text rotated around a vertical axis and manipulated in size. We found that with sufficient rendering quality small text can be substantially rotated before reading performance suffers, and that large-text legibility is nearly unaffected by rotation. The empirically derived guidelines we present are the first published for 3D text and are important for the design of 3D information visualizations.
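
The geometry behind this result is simple foreshortening: text rotated about a vertical axis is compressed horizontally by cos(theta), so a glyph's effective on-screen width shrinks with rotation. A small sketch follows; the glyph widths are illustrative, not the paper's guidelines.

    import math

    def effective_width_px(glyph_width_px, rotation_deg):
        """Horizontal extent of a glyph after rotation about a vertical axis."""
        return glyph_width_px * math.cos(math.radians(rotation_deg))

    for deg in (0, 30, 45, 60, 75):
        small = effective_width_px(7, deg)     # small text: ~7 px glyphs (assumed)
        large = effective_width_px(20, deg)    # large text: ~20 px glyphs (assumed)
        print(f"{deg:>2} deg: small {small:4.1f} px, large {large:4.1f} px")

Large glyphs retain usable width even at steep angles, which is consistent with the finding that large-text legibility is nearly unaffected by rotation.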


IEEE/OSA Journal of Display Technology | 2008

A Display Simulation Toolbox for Image Quality Evaluation

Joyce E. Farrell; Gregory Ng; Xiaowei Ding; Kevin Larson; Brian A. Wandell

The outputs of image coding and rendering algorithms are presented on a diverse array of display devices. To evaluate these algorithms, image quality metrics should include more information about the spatial and chromatic properties of displays. To understand how to best incorporate such display information, we need a computational and empirical framework to characterize displays. Here we describe a set of principles and an integrated suite of software tools that provide such a framework. The display simulation toolbox (DST) is an integrated suite of software tools that help the user characterize the key properties of display devices and predict the radiance of displayed images. Assuming that pixel emissions are independent, the DST uses the sub-pixel point spread functions, spectral power distributions, and gamma curves to calculate display image radiance. We tested the assumption of pixel independence for two liquid crystal device (LCD) displays and two cathode-ray tube (CRT) displays. For the LCD displays, the independence assumption is reasonably accurate. For the CRT displays it is not. The simulations and measurements agree well for displays that meet the model assumptions and provide information about the nature of the failures for displays that do not meet these assumptions.
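
The model the abstract describes can be written down compactly. The following NumPy sketch computes a spatial-spectral radiance image from gamma curves, sub-pixel point spread functions, and spectral power distributions under the pixel-independence assumption; all array shapes, the gamma value, and the random data are illustrative assumptions, and each pixel's point spread is confined to its own tile for simplicity.

    import numpy as np

    H, W = 4, 4     # tiny image; 3 subpixels (R, G, B) per pixel
    S = 5           # point spread support, S x S samples per pixel
    L = 31          # spectral samples (e.g. 400..700 nm in 10 nm steps)

    rgb = np.random.rand(H, W, 3)            # digital values in [0, 1]
    gamma = 2.2                              # assumed gamma curve
    psf = np.random.rand(3, S, S)            # sub-pixel point spread functions
    psf /= psf.sum(axis=(1, 2), keepdims=True)
    spd = np.random.rand(3, L)               # spectral power distributions

    def radiance(rgb):
        """Radiance image of shape (H*S, W*S, L), pixels assumed independent."""
        lin = rgb ** gamma                   # gamma curve: digital value -> intensity
        out = np.zeros((H * S, W * S, L))
        for i in range(H):
            for j in range(W):
                for c in range(3):           # sum the three subpixel contributions
                    patch = lin[i, j, c] * psf[c]
                    out[i*S:(i+1)*S, j*S:(j+1)*S] += patch[:, :, None] * spd[c]
        return out

    print(radiance(rgb).shape)

Because each pixel's contribution is computed without reference to its neighbors, the model is exactly the independence assumption the paper tests: it holds up for the LCDs measured and fails for the CRTs.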


Interactions | 1998

Business: trends in future Web designs: what's next for the HCI professional?

Mary Czerwinski; Kevin Larson

Information Explosion. One thing is certain: the amount of information available on the Web continues to grow at a dizzying pace. For a Web site to add significant value to the user, it must provide an overview of accessible information to all users of varying Web expertise. This article attempts to outline many of the new interaction trends for information management that can now be observed on the World Wide Web. This overview was prepared with an eye toward attempting to ferret out research techniques and methods that might be most advantageous to an HCI (human–computer interaction) professional working on future Web designs.


Journal of The Society for Information Display | 2011

Optimizing subpixel rendering using a perceptual metric

Joyce E. Farrell; Shalomi Eldar; Kevin Larson; Tanya Matskewich; Brian A. Wandell

ClearType is a subpixel-rendering method designed to improve the perceived quality of text. The method renders text at subpixel resolution and then applies a one-dimensional symmetric mean-preserving filter to reduce color artifacts. This paper describes a computational method and experimental tests to assess user preferences for different filter parameters. The computational method uses a physical display simulation and a perceptual metric that includes a model of human spatial and chromatic sensitivity. The method predicts experimentally measured preferences for filters for a range of characters, fonts, and displays.
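
The filter family is straightforward to express. Below is a sketch of a one-dimensional, symmetric, mean-preserving five-tap kernel applied across subpixel coverage samples; the tap values a and b are the free parameters being optimized, and the numbers here are illustrative, not ClearType's shipped settings.

    import numpy as np

    def mean_preserving_filter(a, b):
        """Symmetric 5-tap kernel; the center tap is chosen so the taps sum to 1."""
        return np.array([a, b, 1.0 - 2 * a - 2 * b, b, a])

    def filter_subpixels(coverage, a=0.1, b=0.2):
        """coverage: 1D array of subpixel samples at 3x horizontal pixel resolution."""
        return np.convolve(coverage, mean_preserving_filter(a, b), mode="same")

    edge = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)   # a hard glyph edge
    print(filter_subpixels(edge).round(3))                   # softened edge, mean preserved

Heavier outer taps suppress more color fringing at the cost of luminance sharpness; the paper's perceptual metric is what arbitrates that tradeoff instead of hand tuning.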


SID Symposium Digest of Technical Papers | 2008

59.1: Invited Paper: A Display Simulation Toolbox

Joyce E. Farrell; Gregory Ng; Kevin Larson; Brian A. Wandell

The Display Simulation Toolbox (DST) is an integrated suite of software tools that help the user characterize the key properties of display devices and predict the radiance of displayed images. Assuming that pixel emissions are independent, the DST uses the sub-pixel point spread functions, spectral power distributions, and gamma curves to calculate display image radiance. For LCD displays, the assumption of pixel independence is reasonably accurate; for CRT displays it is not.
