Edward Clarkson
Georgia Institute of Technology
Publication
Featured research published by Edward Clarkson.
Human Factors in Computing Systems | 2005
Edward Clarkson; James Clawson; Kent Lyons; Thad Starner
We present a longitudinal study of mini-QWERTY keyboard use, examining the learning rates of novice mini-QWERTY users. The study consists of 20 twenty-minute typing sessions using two different-sized keyboard models. Subjects average over 31 words per minute (WPM) for the first session and increase to an average of 60 WPM by the twentieth. Individual subjects also exceed the upper bound of 60.74 WPM suggested by MacKenzie and Soukoreff's model of two-thumb text entry [5]. We discuss our results in the context of this model.
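For reference, text entry studies like this one conventionally measure speed in words per minute with a five-character "word"; a minimal sketch of that calculation (our own illustration, not code from the paper):

```python
# Illustrative sketch (not from the paper): the conventional words-per-minute
# metric used in text entry studies, where a "word" is defined as five
# characters, including spaces.

def words_per_minute(transcribed_chars: int, seconds: float) -> float:
    """Return WPM given the number of transcribed characters and elapsed time."""
    minutes = seconds / 60.0
    return (transcribed_chars / 5.0) / minutes

# Example: 605 characters in 120 seconds -> 60.5 WPM,
# near the 60.74 WPM model bound discussed above.
print(words_per_minute(605, 120))
```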
IEEE Transactions on Visualization and Computer Graphics | 2009
Edward Clarkson; Krishna Desai; James D. Foley
Hierarchical representations are common in digital repositories, yet are not always fully leveraged in their online search interfaces. This work describes ResultMaps, which combine hierarchical treemap representations with query string-driven digital library search engines. We describe two lab experiments, which find that ResultMap users perform significantly better than a control condition on some subjective measures, and we find evidence that ResultMaps have ancillary benefits via increased understanding of some aspects of repository content. The ResultMap system and experiments contribute an understanding of the benefits, both direct and indirect, of the ResultMap approach to repository search visualization.
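As background, a treemap assigns each node of a hierarchy a rectangle whose area is proportional to its weight. The slice-and-dice sketch below illustrates the general idea only; it is not the ResultMap layout itself, and the weights are hypothetical:

```python
# Minimal slice-and-dice treemap layout sketch (illustrative only; the
# ResultMap system's actual layout algorithm is not reproduced here).
# Each node's rectangle area is proportional to its weight, e.g., the
# number of matching search results under that branch of the hierarchy.

def slice_and_dice(node, x, y, w, h, depth=0):
    """Recursively assign rectangles to a hierarchy of (weight, children) dicts."""
    node["rect"] = (x, y, w, h)
    children = node.get("children", [])
    total = sum(c["weight"] for c in children)
    if not children or total == 0:
        return
    offset = 0.0
    for c in children:
        frac = c["weight"] / total
        if depth % 2 == 0:  # alternate the split direction at each depth
            slice_and_dice(c, x + offset * w, y, frac * w, h, depth + 1)
        else:
            slice_and_dice(c, x, y + offset * h, w, frac * h, depth + 1)
        offset += frac

tree = {"weight": 3, "children": [
    {"weight": 2, "children": []},
    {"weight": 1, "children": []},
]}
slice_and_dice(tree, 0, 0, 100, 100)
print(tree["children"][0]["rect"])  # (0, 0, 66.66..., 100)
```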
ACM/IEEE Joint Conference on Digital Libraries | 2006
Edward Clarkson; James D. Foley
Browsing is a widespread user behavior in the digital library (DL) environment, and there is an array of existing techniques that afford browsing and are readily applicable to digital libraries. We outline the designs of two such methods based on well-known techniques: treemaps and ScentTrails.
Principles of Advanced Discrete Simulation | 2013
Edward Clarkson; Jennifer Hurt; Jason Zutty; Christopher Skeels; Brian Parise; Greg Rohling
We present the Test Matrix Tool (TMT) framework, a simulation-agnostic framework providing end-to-end support for robust analysis of complex systems. The need to execute a large number of simulations is common to many problem environments, even those already reduced by Design of Experiments or similar methodologies. TMT addresses key end-user needs by easing the specification, execution, and analysis of simulation workloads in ways that are consistent across specific applications of the framework. The TMT design contributes modular specifications for key data communicated between and within the specification, execution, and analysis components. Our TMT implementation is an instantiation of those formats and is freely available for general use. TMT's data analysis component provides a variety of features (data filtering, comparison, transformation, and visualization) for analytic tasks on any TMT-embedded model. We provide a brief case study as an example of its use in a real-world application.
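To make the specification component concrete, here is a hypothetical sketch of how a test-matrix specification might be expanded into individual simulation runs; the field names and values are invented for illustration and are not the actual TMT formats:

```python
# Hypothetical sketch of a test-matrix specification of the kind TMT's
# specification component could consume; the field names here are invented
# for illustration and are not the actual TMT data formats.
import itertools
import json

spec = {
    "model": "example_sim",               # hypothetical simulation name
    "factors": {                          # design-of-experiments factors
        "target_speed_mps": [250, 500],
        "noise_floor_db": [-90, -80, -70],
    },
    "replications": 2,
}

# Expand the factor levels into the full matrix of simulation runs.
names = list(spec["factors"])
runs = [
    dict(zip(names, levels))
    for levels in itertools.product(*spec["factors"].values())
    for _ in range(spec["replications"])
]
print(len(runs))           # 2 levels x 3 levels x 2 replications = 12 runs
print(json.dumps(runs[0]))
```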
Human Factors in Computing Systems | 2006
Edward Clarkson; Jason Allan Day; James D. Foley
Digital libraries have great potential to improve the educational experience. As a result, there are a wide variety of such repositories, especially ones that focus specifically on education. But relatively few focus on topics as specific as Human-Computer Interaction (HCI) or Human-Centered Computing (HCC). In addition, support for browsing behavior is, with a few exceptions, weak and poorly suited to user needs. This paper presents our work to create a repository of educational materials for a relatively narrowly-targeted field (HCC/HCI), including our requirements gathering methods and results. Finally, we discuss the HCC Education Digital Library (HCC EDL) as a platform for investigating broader digital library research questions, such as exploring alternative designs for content browsing mechanisms.
Journal of Medical Systems | 2018
Edward Clarkson; Jason Zutty; Mehul V. Raval
Appendectomy is the most common abdominal surgical procedure performed in children in the United States. To assist care providers in creating treatment plans for the postoperative management of pediatric appendicitis, we have developed a predictive statistical model of outcomes on which we have built a prototype decision aid application. The model, trained on 3724 anonymized care records and evaluated on a separate set of 2205 cases from a tertiary care center, achieves 97.0% specificity, 25.1% sensitivity, and 58.8% precision. We have also built an interactive decision support tool, augmented with simple visualization techniques, designed for clinicians to use in the course of making care decisions (e.g., discharge) and in patient/stakeholder communication. The tool focuses on end-user ease of use and integration into existing clinician workflows, and it is designed to evolve its predictions as more and better data become available.
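For readers less familiar with these metrics, the sketch below shows how specificity, sensitivity, and precision derive from a confusion matrix; the counts are placeholders, not the study's data:

```python
# Sketch of how the reported evaluation metrics relate to a confusion
# matrix; the counts below are placeholders, not the study's data.

def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "specificity": tn / (tn + fp),  # true negatives among actual negatives
        "sensitivity": tp / (tp + fn),  # true positives among actual positives
        "precision":   tp / (tp + fp),  # true positives among predicted positives
    }

print(metrics(tp=50, fp=35, tn=1940, fn=180))
```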
ACM Transactions on Knowledge Discovery From Data | 2018
Jaegul Choo; Hannah Kim; Edward Clarkson; Zhicheng Liu; Changhyun Lee; Fuxin Li; Hanseung Lee; Ramakrishnan Kannan; Charles D. Stolper; John T. Stasko; Haesun Park
In this article, we present an interactive visual information retrieval and recommendation system, called VisIRR, for large-scale document discovery. VisIRR effectively combines the paradigms of (1) a passive pull through query processes for retrieval and (2) an active push that recommends items of potential interest to users based on their preferences. Equipped with an efficient dynamic query interface against a large-scale corpus, VisIRR organizes the retrieved documents into high-level topics and visualizes them in a 2D space, representing the relationships among the topics along with their keyword summary. In addition, based on interactive personalized preference feedback with regard to documents, VisIRR provides document recommendations from the entire corpus, which are beyond the retrieved sets. Such recommended documents are visualized in the same space as the retrieved documents, so that users can seamlessly analyze both existing and newly recommended ones. This article presents novel computational methods, which make these integrated representations and fast interactions possible for a large-scale document corpus. We illustrate how the system works by providing detailed usage scenarios. Additionally, we present preliminary user study results for evaluating the effectiveness of the system.
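Below is a loose sketch of the retrieve, cluster-into-topics, and project-to-2D pipeline that such a system follows, using off-the-shelf scikit-learn components rather than the paper's own algorithms:

```python
# Loose sketch of the retrieve -> cluster-into-topics -> project-to-2D
# pipeline that VisIRR-style systems follow, using off-the-shelf
# scikit-learn components rather than the paper's own methods.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF, PCA

docs = ["visual analytics of documents", "topic models for text",
        "recommending papers to users", "interactive retrieval interfaces"]

tfidf = TfidfVectorizer().fit_transform(docs).toarray()  # term-document matrix
nmf = NMF(n_components=2, init="nndsvd").fit(tfidf)      # high-level topics
doc_topic = nmf.transform(tfidf)                         # topic weights per doc
xy = PCA(n_components=2).fit_transform(doc_topic)        # 2D layout coordinates

for doc, pos in zip(docs, xy):
    print(f"{pos[0]: .2f} {pos[1]: .2f}  {doc}")
```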
Online Journal of Public Health Informatics | 2017
Erica Briscoe; Scott Appling; Edward Clarkson; Nikolay Lipskiy; James Tyson; Jacqueline Burkholder
Objective

The objective of this analysis is to leverage recent advances in natural language processing (NLP) to develop new methods and system capabilities for processing social media (Twitter messages) for situational awareness (SA), syndromic surveillance (SS), and event-based surveillance (EBS). Specifically, we evaluated the use of human-in-the-loop semantic analysis to assist public health (PH) SA stakeholders in SS and EBS using massive amounts of publicly available social media data.

Introduction

Social media messages are often short, informal, and ungrammatical. They frequently involve text, images, audio, or video, which makes the identification of useful information difficult. This complexity reduces the efficacy of standard information extraction techniques [1]. However, recent advances in NLP, especially methods tailored to social media [2], have shown promise in improving real-time PH surveillance and emergency response [3]. Surveillance data derived from semantic analysis, combined with traditional surveillance processes, has the potential to improve event detection and characterization. The CDC Office of Public Health Preparedness and Response (OPHPR), Division of Emergency Operations (DEO) and the Georgia Tech Research Institute have collaborated on the advancement of PH SA through development of new approaches to using semantic analysis for social media.

Methods

To understand how computational methods may benefit SS and EBS, we studied an iterative refinement process in which the data user actively cultivated text-based topics ("semantic culling") in a semi-automated SS process. This human-in-the-loop process was critical for creating accurate and efficient extraction functions in large, dynamic volumes of data. The general process involved identifying a set of expert-supplied keywords, which were used to collect an initial set of social media messages. For purposes of this analysis, researchers applied topic modeling to categorize related messages into clusters; topic modeling uses statistical techniques to semantically cluster messages and automatically determine salient aggregations. A user then semantically culled messages according to their PH relevance.

In June 2016, researchers collected 7,489 worldwide English-language Twitter messages (tweets) and compared three sampling methods: a baseline random sample (C1, n=2700), a keyword-based sample (C2, n=2689), and one gathered after semantically culling C2 topics of irrelevant messages (C3, n=2100). Researchers utilized a software tool, Luminoso Compass [4], to sample and perform topic modeling using its real-time modeling and Twitter integration features. For C2 and C3, researchers sampled tweets that the Luminoso service matched to both clinical and layman definitions of Rash and Gastro-Intestinal syndromes [5] and Zika-like symptoms; layman terms were derived from clinical definitions in plain-language medical thesauri. ANOVA statistics were calculated using SPSS software. Post-hoc pairwise comparisons were completed using Tukey's honest significant difference (HSD) test.

Results

An ANOVA found the following mean relevance values: 3% (+/- 0.01%), 24% (+/- 6.6%), and 27% (+/- 9.4%) for C1, C2, and C3, respectively. Post-hoc pairwise comparison tests showed that the percentages of discovered event-related tweets were significantly higher for the C2 and C3 methods than for the C1 method (random sampling) (p<0.05).
This indicates that the human-in-the-loop approach provides benefits in filtering social media data for SS and EBS; notably, this increase is on the basis of a single iteration of semantic culling, and subsequent iterations could be expected to increase the benefits.

Conclusions

This work demonstrates the benefits of incorporating non-traditional data sources into SS and EBS. It was shown that an NLP-based extraction method in combination with human-in-the-loop semantic analysis may enhance the potential value of social media (Twitter) for SS and EBS. It also supports the claim that advanced analytical tools for processing non-traditional SA, SS, and EBS sources, including social media, have the potential to enhance disease detection, risk assessment, and decision support by reducing the time it takes to identify public health events.
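A minimal sketch of the reported statistical procedure (one-way ANOVA followed by Tukey's HSD), run on invented relevance scores rather than the study's data:

```python
# Sketch of the reported statistical comparison (one-way ANOVA followed by
# Tukey's HSD) on illustrative per-sample relevance scores; the arrays
# below are made up for demonstration, not the study's data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

c1 = np.array([0.02, 0.03, 0.04, 0.03])   # random sample relevance
c2 = np.array([0.20, 0.25, 0.28, 0.23])   # keyword-based sample relevance
c3 = np.array([0.24, 0.30, 0.31, 0.26])   # semantically culled sample

print(f_oneway(c1, c2, c3))                # omnibus ANOVA F-test

scores = np.concatenate([c1, c2, c3])
groups = ["C1"] * len(c1) + ["C2"] * len(c2) + ["C3"] * len(c3)
print(pairwise_tukeyhsd(scores, groups))   # pairwise post-hoc comparisons
```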
Knowledge Discovery and Data Mining | 2013
Edward Clarkson; Jaegul Choo; John Turgeson; Ray Decuir; Haesun Park
We present Lytic, a domain-independent, faceted visual analytic (VA) system for interactive exploration of large datasets. It combines a flexible UI that adapts to arbitrary character-separated value (CSV) datasets with algorithmic preprocessing to compute unsupervised dimension reduction and cluster data from high-dimensional fields. It provides a variety of visualization options that require minimal user effort to configure and a consistent user experience between visualization types and underlying datasets. Filtering, comparison and visualization operations work in concert, allowing users to hop seamlessly between actions and pursue answers to expected and unexpected data hypotheses.
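A rough sketch of the kind of preprocessing described above: load an arbitrary delimited file, then apply unsupervised dimension reduction and clustering to its numeric columns. The file name and parameter choices are assumptions for illustration; this is not Lytic's implementation:

```python
# Sketch of Lytic-style algorithmic preprocessing using off-the-shelf
# libraries: load a delimited file, then compute an unsupervised 2D
# embedding and cluster labels from its numeric fields. "data.csv" and
# the parameter values are placeholders, not Lytic's actual code.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

df = pd.read_csv("data.csv", sep=None, engine="python")  # sniff the delimiter
numeric = df.select_dtypes("number").dropna()            # assumes >= 2 numeric columns

coords = PCA(n_components=2).fit_transform(numeric)      # 2D embedding
labels = KMeans(n_clusters=5, n_init=10).fit_predict(numeric)

out = numeric.copy()
out["x"], out["y"] = coords[:, 0], coords[:, 1]          # derived fields a faceted
out["cluster"] = labels                                  # UI could filter on
print(out.head())
```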
International Symposium on Wearable Computers | 2005
James Clawson; Kent Lyons; Thad Starner; Edward Clarkson