Publication


Featured research published by Brandon Paulson.


KSII Transactions on Internet and Information Systems | 2011

Recognizing sketched multistroke primitives

Tracy Hammond; Brandon Paulson

Sketch recognition attempts to interpret the hand-sketched markings made by users on an electronic medium. Through recognition, sketches and diagrams can be interpreted and sent to simulators or other meaningful analyzers. Primitives are the basic building-block shapes used by high-level visual grammars to describe the symbols of a given sketch domain. However, one limitation of existing primitive recognizers is that they often support only basic shapes drawn with a single stroke. Furthermore, recognizers that do support multistroke primitives place additional constraints on users, such as temporal timeouts or modal button presses to signal shape completion. The goal of this research is twofold. First, we wanted to determine the drawing habits of most users. Our studies found multistroke primitives to be more prevalent than multiple primitives drawn in a single stroke. Additionally, our studies confirmed that threading is less frequent when there are more sides to a figure. Next, we developed an algorithm that is capable of recognizing multistroke primitives without requiring special drawing constraints. The algorithm uses a graph-building and search technique that takes advantage of Tarjan's linear search algorithm, along with principles to determine the goodness of a fit. Our novel, constraint-free recognizer achieves accuracy rates of 96% on freely drawn primitives.
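The abstract describes the graph-building step only at a high level. As a rough illustration of the idea (not the authors' published algorithm), strokes whose endpoints nearly touch can be grouped into candidate multistroke primitives with a simple connectivity search; the published recognizer additionally scores each grouping against geometric primitive fits. All names and the `tol` threshold below are invented for this sketch:

```python
from itertools import combinations

def group_strokes(strokes, tol=10.0):
    """Group strokes whose endpoints lie within `tol` pixels of each other
    into candidate multistroke primitives, via depth-first search over a
    stroke-adjacency graph.  Each stroke is a list of (x, y) points."""
    def endpoints(s):
        return [s[0], s[-1]]

    def close(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol ** 2

    n = len(strokes)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        if any(close(p, q) for p in endpoints(strokes[i])
                           for q in endpoints(strokes[j])):
            adj[i].add(j)
            adj[j].add(i)

    seen, groups = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:
            k = stack.pop()
            if k in seen:
                continue
            seen.add(k)
            comp.append(k)
            stack.extend(adj[k] - seen)
        groups.append(sorted(comp))
    return groups
```

Two strokes meeting at a corner would be grouped as one candidate primitive, while a distant stroke stays in its own group.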


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2011

Object interaction detection using hand posture cues in an office setting

Brandon Paulson; Danielle Cummings; Tracy Hammond

Activity recognition plays a key role in providing information for context-aware applications. When attempting to model activities, some researchers have looked towards Activity Theory, which theorizes that activities have objectives and are accomplished through interactions with tools and objects. The goal of this paper is to determine if hand posture can be used as a cue to determine the types of interactions a user has with objects in a desk/office environment. Furthermore, we wish to determine if hand posture is user-independent across all users when interacting with the same objects in a natural manner. Our experiments indicate that (a) hand posture can be used to determine object interaction, with accuracy rates around 97%, and (b) hand posture is dependent upon the individual user when users are allowed to interact with objects as they would naturally.


Human Factors in Computing Systems | 2008

Free-sketch recognition: putting the chi in sketching

Tracy Hammond; Brian Eoff; Brandon Paulson; Aaron Wolin; Katie Dahmen; Joshua Johnston; Pankaj Rajan

Sketch recognition techniques have generally fallen into two camps. Gesture-based techniques, such as those used by the Palm Pilot's Graffiti, can provide high accuracy but require the user to learn a particular drawing style in order for shapes to be recognized. Free-sketch recognition allows users to draw shapes as they would naturally, but most current techniques have low accuracy or require significant domain-level tweaking to make them usable. Our goal is to recognize free-hand sketches with high accuracy by developing generalized techniques that work for a variety of domains, including design and education. This is a work in progress, but we have made significant advancements toward our overarching goal.


Interaction Design and Children | 2008

Sketch-based educational games: "drawing" kids away from traditional interfaces

Brandon Paulson; Brian Eoff; Aaron Wolin; Joshua Johnston; Tracy Hammond

Computer-based games and technologies can be significant aids for helping children learn. However, most computer-based games address only the learning styles of visual and auditory learners. Sketch-based interfaces can also address the needs of those children who learn better through tactile and kinesthetic approaches. Furthermore, sketch recognition can allow for automatic feedback to aid children without the explicit need for a teacher to be present. In this paper, we present various sketch-based tools and games that promote tactile learning and entertainment for children.


Sketch-Based Interfaces and Modeling | 2009

Sort, merge, repeat: an algorithm for effectively finding corners in hand-sketched strokes

Aaron Wolin; Brandon Paulson; Tracy Hammond

Free-sketch recognition systems attempt to recognize freely drawn sketches without placing stylistic constraints on the users. Such systems often recognize shapes by using geometric primitives that describe a shape's appearance rather than how it was drawn. A free-sketch recognition system necessarily allows users to draw several primitives using a single stroke. Corner finding, or vertex detection, is used to segment these strokes into their underlying primitives (lines and arcs), which in turn can be passed to the geometric recognizers. In this paper, we present a new multi-pass corner finding algorithm called MergeCF that continually merges smaller stroke segments with similar, larger stroke segments in order to eliminate false-positive corners. We compare MergeCF to two benchmark corner finders and show substantial improvements in corner detection.
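MergeCF's actual passes are detailed in the paper; the hedged sketch below shows only the general merge loop the abstract describes: repeatedly try to remove the corner bounding the shortest remaining segment, and keep the removal when the merged span still fits a primitive nearly as well as the two pieces did. The line-fit metric and `ratio` threshold here are invented for illustration:

```python
import math

def line_fit_error(points, i, j):
    """Max perpendicular distance from points[i..j] to the chord (i, j)."""
    (x0, y0), (x1, y1) = points[i], points[j]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy) or 1.0
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length
               for x, y in points[i:j + 1])

def merge_short_segments(points, corners, error_fn, ratio=1.2):
    """Illustrative merge loop in the spirit of MergeCF: drop a candidate
    corner when merging its two segments barely raises the fit error."""
    corners = sorted(set(corners) | {0, len(points) - 1})
    changed = True
    while changed and len(corners) > 2:
        changed = False
        # candidate segments, fewest points first (proxy for length)
        segs = sorted(zip(corners, corners[1:]), key=lambda ij: ij[1] - ij[0])
        for a, b in segs:
            k = corners.index(b)
            if k == len(corners) - 1:
                continue  # no following segment to merge with
            c = corners[k + 1]
            merged = error_fn(points, a, c)
            split = error_fn(points, a, b) + error_fn(points, b, c)
            if merged <= ratio * split + 1e-9:
                corners.remove(b)  # b was a false-positive corner
                changed = True
                break
    return corners
```

On a straight polyline a spurious mid-corner merges away, while a genuine L-shaped corner survives because merging across it raises the fit error sharply.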


Sketch-Based Interfaces and Modeling | 2008

SOUSA: sketch-based online user study applet

Brandon Paulson; Aaron Wolin; Joshua Johnston; Tracy Hammond

Although existing domain-specific datasets are readily available, most sketch recognition researchers are forced to collect new data for their particular domain. Creating tools to collect and label sketched data can take time, and, if every researcher creates their own toolset, much time is wasted that could be better spent on advanced research. Additionally, it is often the case that other researchers have performed collection studies and gathered the same types of sketch data, resulting in large duplications of effort. We propose, and have built, a general-purpose sketch collection and verification tool that allows researchers to design custom user studies through an online applet residing on our group's web page. By hosting such a tool through our site, we hope to provide researchers with a quick and easy way of collecting data. Additionally, our tool serves to create a universal repository of sketch data that can be made readily available to other sketch recognition researchers.


Human Factors in Computing Systems | 2010

A sketch recognition interface that recognizes hundreds of shapes in course-of-action diagrams

Tracy Hammond; Drew Logsdon; Joshua M. Peschel; Joshua Johnston; Paul Taele; Aaron Wolin; Brandon Paulson

Sketch recognition is the automated recognition of hand drawn diagrams. Military course-of-action (COA) diagrams are used to depict battle scenarios. The domain of military course of action diagrams is particularly interesting because it includes tens of thousands of different geometric shapes, complete with many additional textual and designator modifiers. Existing sketch recognition systems recognize on the order of at most 20 different shapes. Our sketch recognition interface recognizes 485 different freely drawn military course-of-action diagram symbols in real time, with each shape containing its own elaborate set of text labels and other variations. We are able to do this by combining multiple recognition techniques in a single system. When the variations (not allowable by other systems) are factored in, our system is several orders of magnitude larger than the next biggest system. On 5,900 hand-drawn symbols drawn by 8 researchers, the system achieves an accuracy of 90% when considering the top 3 interpretations and requiring every aspect of the shape (variations, text, symbol, location, orientation) to be correct.


SSPR & SPR '08 Proceedings of the 2008 Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition | 2008

Gesture Recognition Based on Manifold Learning

Heeyoul Choi; Brandon Paulson; Tracy Hammond

Current feature-based gesture recognition systems use human-chosen features to perform recognition. Effective features for classification can also be automatically learned and chosen by the computer. In other recognition domains, such as face recognition, manifold learning methods have been found to be good nonlinear feature extractors. Few manifold learning algorithms, however, have been applied to gesture recognition. Current manifold learning techniques focus only on spatial information, making them undesirable for use in the domain of gesture recognition where stroke timing data can provide helpful insight into the recognition of hand-drawn symbols. In this paper, we develop a new algorithm for multi-stroke gesture recognition, which integrates timing data into a manifold learning algorithm based on a kernel Isomap. Experimental results show it to perform better than traditional human-chosen feature-based systems.
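The paper's method builds on a kernel Isomap variant; purely to illustrate how stroke timing data can be folded into a spatial manifold learner, the sketch below appends a weighted, normalized timestamp to each sampled point before running a minimal classical Isomap pipeline (k-NN graph, Floyd-Warshall geodesic distances, classical MDS). Every function and parameter name here is an assumption for illustration, not the authors' method:

```python
import numpy as np

def isomap_with_time(points_xy, times, n_neighbors=4, n_components=2,
                     time_weight=1.0):
    """Embed gesture samples with a minimal Isomap after augmenting each
    (x, y) sample with its weighted, normalized timestamp, so temporal
    order influences the learned manifold."""
    t = np.asarray(times, dtype=float)
    t = (t - t.min()) / (np.ptp(t) or 1.0)       # normalize time to [0, 1]
    X = np.column_stack([np.asarray(points_xy, dtype=float),
                         time_weight * t])
    n = len(X)
    # pairwise Euclidean distances in the time-augmented space
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    # symmetric k-nearest-neighbor graph
    G = np.full((n, n), np.inf)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, nbrs] = D[i, nbrs]
        G[nbrs, i] = D[i, nbrs]
    np.fill_diagonal(G, 0.0)
    for k in range(n):                           # Floyd-Warshall geodesics
        G = np.minimum(G, G[:, [k]] + G[[k], :])
    # classical MDS on squared geodesic distances
    J = np.eye(n) - 1.0 / n                      # double-centering matrix
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:n_components]
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))
```

Raising `time_weight` pulls temporally adjacent samples closer on the manifold, which is the intuition behind integrating timing into the embedding; the k-NN graph must be connected for the geodesics to be finite.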


Journal on Multimodal User Interfaces | 2008

MARQS: retrieving sketches learned from a single example using a dual-classifier

Brandon Paulson; Tracy Hammond

Mouse and keyboard interfaces handle traditional text-based queries, and standard search engines provide for effective text-based search. However, everyday documents are filled with not only text, but photos, cartoons, diagrams, and sketches. These images can often be easier to recall than the surrounding text. In an effort to make human computer interaction handle more forms of human-human interaction, sketching has recently become an important means of interacting with computer systems. We propose extending the traditional monomodal model of text-based search to include the capabilities of sketch-based search. Our goal is to create a sketch-based search that can find documents from a single query sketch. We imagine an important use for this technology would be to allow users to search a computerized laboratory notebook for a previously drawn sketch. Because such a sketch will initially have been drawn only a single time, it is important that the search-by-sketch system (1) recognize a wide range of shapes that are not necessarily geometric nor drawn in the same way each time, (2) recognize a query example from only one initial training example, and (3) learn from successful queries to improve accuracy over time. We present here such an algorithm. To test the algorithm, we implemented a proof-of-concept system: MARQS, a system that uses sketches to query existing media albums. Preliminary results show that the system yielded an average search rank of 1.51, indicating that the correct sketch is presented as either the top or second search result on average.
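The reported 1.51 is an average search rank: the mean 1-based position of the correct sketch in each query's result list. A minimal sketch of that metric (all names invented for illustration):

```python
def average_search_rank(ranked_results, correct):
    """Mean 1-based position of the correct item in each query's ranked
    result list.  Lower is better; 1.0 means every query ranked its
    correct sketch first."""
    ranks = [hits.index(target) + 1
             for hits, target in zip(ranked_results, correct)]
    return sum(ranks) / len(ranks)
```

For example, three queries whose correct sketches land at ranks 1, 2, and 1 yield an average search rank of 4/3.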


Creativity and Cognition | 2009

A surfaceless pen-based interface

Joshua M. Peschel; Brandon Paulson; Tracy Hammond

Freehand drawing on a computer screen allows users to provide input through a natural mode of human interaction. With this freedom of expression, however, there exists a paradoxical limitation: the user is bound through the existing interface to the fixed drawing surface. In this work, we overcome this limitation by presenting a surfaceless pen-based interface with an application in the field of sketch recognition. A pilot study was conducted to examine the usability of the surfaceless pen-based interface. Results indicated that learning to use the device is relatively straightforward, but that interaction difficulty increases in direct proportion to drawing complexity.
