Network


Latest external collaborations at the country level. Click on a dot to explore the details.

Hotspot


Dive into the research topics where William Ribarsky is active.

Publication


Featured research published by William Ribarsky.


Visual Analytics Science and Technology | 2012

LeadLine: Interactive visual analysis of text data through event identification and exploration

Wenwen Dou; Xiaoyu Wang; Drew Skau; William Ribarsky; Michelle X. Zhou

Text data such as online news and microblogs bear valuable insights regarding important events and responses to such events. Events are inherently temporal, evolving over time. Existing visual text analysis systems have provided temporal views of changes based on topical themes extracted from text data. But few have associated topical themes with events that cause the changes. In this paper, we propose an interactive visual analytics system, LeadLine, to automatically identify meaningful events in news and social media data and support exploration of the events. To characterize events, LeadLine integrates topic modeling, event detection, and named entity recognition techniques to automatically extract information regarding the investigative 4 Ws: who, what, when, and where for each event. To further support analysis of the text corpora through events, LeadLine allows users to interactively examine meaningful events using the 4 Ws to develop an understanding of how and why. Through representing large-scale text corpora in the form of meaningful events, LeadLine provides a concise summary of the corpora. LeadLine also supports the construction of simple narratives through the exploration of events. To demonstrate the efficacy of LeadLine in identifying events and supporting exploration, two case studies were conducted using news and social media data.
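The event-identification idea described above can be illustrated with a minimal sketch. This is hypothetical code, not LeadLine's implementation (which combines topic modeling, event detection, and named entity recognition): here an "event" is simply a day whose topic volume spikes well above the series mean.

```python
from statistics import mean, stdev

def detect_events(daily_counts, z_threshold=2.0):
    """Flag days whose topic volume spikes above the series mean.

    A burst day is a crude stand-in for an 'event' in LeadLine's
    sense: a point in time where discussion of a topic surges.
    """
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    return [day for day, count in enumerate(daily_counts)
            if sigma > 0 and (count - mu) / sigma > z_threshold]

# A quiet topic with one sharp spike on day 5:
counts = [10, 12, 9, 11, 10, 80, 12, 10]
print(detect_events(counts))  # the spike day is flagged
```

A real pipeline would run this per topic over the topic-model output, then attach the who/where entities extracted from the documents in each burst window.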


advanced visual interfaces | 2012

Towards the establishment of a framework for intuitive multi-touch interaction design

Amy Ingram; Xiaoyu Wang; William Ribarsky

Intuition is an important yet ill-defined factor when designing effective multi-touch interactions. Throughout the research community, there is a lack of consensus regarding both the nature of intuition and, more importantly, how to systematically incorporate it into the design of multi-touch gestural interactions. To strengthen our understanding of intuition, we surveyed various domains to determine the level of consensus among researchers, commercial developers, and the general public regarding which multi-touch gestures are intuitive, and which of these gestures intuitively lead to which interaction outcomes. We reviewed more than one hundred papers regarding multi-touch interaction, approximately thirty of which contained key findings we report herein. Based on these findings, we have constructed a framework of five factors that determine the intuitiveness of multi-touch interactions: direct manipulation, physics, feedback, previous knowledge, and physical motion. We further provide both design recommendations for multi-touch developers and an evaluation of research problems that remain due to the limitations of present research regarding these factors. We expect our survey and discussion of intuition will raise awareness of its importance, and lead to the active pursuit of intuitive multi-touch interaction design.


IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV) | 2013

Less After-the-Fact: Investigative visual analysis of events from streaming Twitter

Thomas Kraft; Derek Xiaoyu Wang; Jeffrey Delawder; Wenwen Dou; Yu Li; William Ribarsky

News and events are traditionally broadcast in an "After-the-Fact" manner, where the masses react to news formulated by a group of professionals. However, the deluge of information and real-time online social media sites have significantly changed this information input-output cycle, allowing the masses to report real-time events around the world. Specifically, the use of Twitter has resulted in the creation of a digital wealth of knowledge that directly relates to such events. Although governments and industries acknowledge the value of extracting events from the TwitterSphere, the sheer velocity and volume of tweets pose significant challenges to the desired event analysis. In this paper, we present our Geo and Temporal Association Creator (GTAC), which extracts structured representations of events from the Twitter stream. GTAC further supports event-level investigative analysis of social media data by interactively visualizing the event indicators (who, when, where, and what). Using GTAC, we aim to create a near real-time analysis environment for analysts to identify event structures, geographical distributions, and key indicators of emerging events.
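The structured event representation GTAC extracts can be sketched as a mapping from one tweet to the four indicators. This is an illustration under assumed field names, not GTAC's code: "who" is taken from @mentions and "what" from hashtags, where a real system would use named entity recognition.

```python
import re

def tweet_to_event(tweet):
    """Map one tweet to a GTAC-style event record.

    tweet: dict with 'text', 'created_at', and optional 'geo' keys,
    mimicking (not matching) the Twitter API payload.
    """
    text = tweet["text"]
    return {
        "who": re.findall(r"@(\w+)", text),    # crude stand-in for NER
        "what": re.findall(r"#(\w+)", text),
        "when": tweet["created_at"],
        "where": tweet.get("geo"),
    }

event = tweet_to_event({
    "text": "Flooding on Main St, stay safe @cityFD #flood",
    "created_at": "2013-05-01T14:02:00Z",
    "geo": (35.23, -80.84),
})
print(event["what"])  # ['flood']
```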


IEEE Transactions on Visualization and Computer Graphics | 2013

How Visualization Layout Relates to Locus of Control and Other Personality Factors

Caroline Ziemkiewicz; Alvitta Ottley; R. J. Crouser; A. R. Yauilla; Sara L. Su; William Ribarsky; Remco Chang

Existing research suggests that individual personality differences are correlated with a user's speed and accuracy in solving problems with different types of complex visualization systems. We extend this research by isolating factors in personality traits as well as in the visualizations that could have contributed to the observed correlation. We focus on a personality trait known as "locus of control" (LOC), which represents a person's tendency to see themselves as controlled by or in control of external events. To isolate variables of the visualization design, we control extraneous factors such as color, interaction, and labeling. We conduct a user study with four visualizations that gradually shift from a list metaphor to a containment metaphor and compare participants' speed, accuracy, and preference with their locus of control and other personality factors. Our findings demonstrate that there is indeed a correlation between the two: participants with an internal locus of control perform more poorly with visualizations that employ a containment metaphor, while those with an external locus of control perform well with such visualizations. These results provide evidence for the externalization theory of visualization. Finally, we propose applications of these findings to adaptive visual analytics and visualization evaluation.
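The kind of correlation the study reports can be computed with a plain Pearson coefficient. The sketch below uses invented numbers purely to illustrate the direction of the finding (more external LOC score paired with higher accuracy on a containment-metaphor visualization); it is not the study's data.

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: LOC score (higher = more external) vs. accuracy
# on a containment-metaphor visualization.
loc_scores = [2, 3, 5, 6, 8, 9]
accuracy   = [0.55, 0.60, 0.70, 0.72, 0.85, 0.90]
print(pearson(loc_scores, accuracy))  # strongly positive
```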


advanced visual interfaces | 2012

Evaluating depth perception of volumetric data in semi-immersive VR

Isaac Cho; Wenwen Dou; Zachary Wartell; William Ribarsky; Xiaoyu Wang

Displays supporting stereoscopy and head-coupled motion parallax can enhance human perception of complex 3D datasets. This has been studied extensively for datasets containing 3D surfaces and 3D networks, but less so for volumetric data. Volumetric data is characterized by a heavy presence of transparency, occlusion, and highly ambiguous spatial structure. There are many different rendering and visualization algorithms and interactive techniques that enhance perception of volume data, and these techniques' effectiveness has been evaluated. However, the effect of VR displays on perception of volume data is less well studied. Therefore, we conduct two experiments on how various display conditions affect a participant's depth perception accuracy for a volumetric dataset. A demographic pre-questionnaire also allows us to separate the accuracy differences between participants with more and less experience with 3D games and VR. Our results show an overall benefit of stereo with head-tracking for enhancing perception of depth in volumetric data. Our study also suggests that familiarity with 3D games and VR-type technology affects users' ability to perceive such data and the accuracy boost gained from VR displays.


Information Visualization | 2015

Interactive analysis and visualization of situationally aware building evacuations

Jack Guest; Todd Eaglin; Kalpathi R. Subramanian; William Ribarsky

Evacuation of large urban structures, such as campus buildings, arenas, or stadiums, is of prime interest to emergency responders and planners. Although there is a large body of work on evacuation algorithms and their application, most of these methods are impractical to use in real-world scenarios (non-real-time, for instance) or have difficulty handling scenarios with dynamically changing conditions. Our overall goal in this work is to develop computer visualizations and real-time visual analytic tools for evacuations of large groups of buildings and, in the long term, integrate this with the street networks in the surrounding areas. A key aspect of our system is to provide situational awareness and decision support to first responders and emergency planners. In our earlier work, we demonstrated an evacuation system that employed a modified variant of a heuristic-based evacuation algorithm, which (1) facilitated real-time complex user interaction with first responder teams, in response to information received during the emergency; (2) automatically supported visual reporting tools for spatial occupancy, temporal cues, and procedural recommendations; and (3) combined multi-scale building models, heuristic evacuation models, and unique graph manipulation techniques to produce near real-time situational awareness. The system was tested in collaboration with our campus police and safety personnel, via a tabletop exercise consisting of three different scenarios. In this work, we have redesigned the system to handle larger groups of buildings, in order to move toward a full-campus evacuation system. We demonstrate an evacuation simulation involving 22 buildings on the University of North Carolina at Charlotte campus. In addition, the implementation has been redesigned as a WebGL application, facilitating easy dissemination and use by stakeholders.
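The graph-based core of evacuation routing can be sketched with a multi-source Dijkstra search: model the building as a weighted corridor graph and compute each node's travel time to its nearest exit. This is a simplified illustration under an assumed graph encoding, not the paper's heuristic algorithm; a dynamically changing condition (a blocked corridor) would be modeled by removing its edges and re-running the search.

```python
import heapq

def nearest_exit_routes(graph, exits):
    """Shortest travel time from every node to its nearest exit.

    graph: {node: [(neighbor, travel_time), ...]}, with undirected
    corridors listed in both directions; exits: iterable of exit nodes.
    Runs Dijkstra from all exits at once (multi-source).
    """
    dist = {node: float("inf") for node in graph}
    heap = [(0.0, e) for e in exits]
    for e in exits:
        dist[e] = 0.0
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Toy floor plan: one hallway connecting two rooms to two exits.
floor = {
    "room101": [("hall", 1.0)],
    "room102": [("hall", 1.0)],
    "hall": [("room101", 1.0), ("room102", 1.0), ("exitA", 2.0), ("exitB", 5.0)],
    "exitA": [("hall", 2.0)],
    "exitB": [("hall", 5.0)],
}
print(nearest_exit_routes(floor, ["exitA", "exitB"]))
```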


visualization and data analysis | 2013

Visual analysis of situationally aware building evacuations

Jack Guest; Todd Eaglin; Kalpathi R. Subramanian; William Ribarsky

Rapid evacuation of large urban structures (campus buildings, arenas, stadiums, etc.) is a complex operation and of prime interest to emergency responders and planners. Although there is a considerable body of work on evacuation algorithms and methods, most of these are impractical to use in real-world scenarios (non-real-time, for instance) or have difficulty handling scenarios with dynamically changing conditions. Our goal in this work is to develop computer visualizations and real-time visual analytic tools for building evacuations, in order to provide situational awareness and decision support to first responders and emergency planners. We have augmented traditional evacuation algorithms in the following important ways: (1) facilitating real-time complex user interaction with first responder teams as information is received during an emergency; (2) automatically providing visual reporting tools for spatial occupancy, temporal cues, and procedural recommendations at adjustable levels; and (3) combining multi-scale building models, heuristic evacuation models, and unique graph manipulation techniques to produce near real-time situational awareness. We describe our system and methods and their application using campus buildings as an example. We also report the results of evaluating our system in collaboration with our campus police and safety personnel, via a tabletop exercise consisting of three different scenarios, and their resulting assessment of the system.


International Conference on Multimedia Retrieval | 2012

Efficient graffiti image retrieval

Chunlei Yang; Pak Chung Wong; William Ribarsky; Jianping Fan

Research on graffiti character recognition and retrieval, as a branch of traditional optical character recognition (OCR), has started to gain attention in recent years. We have investigated the special challenges of the graffiti image retrieval problem and propose a series of novel techniques to overcome them. The proposed bounding box framework locates the character components in the graffiti images to construct meaningful character strings and conducts image-wise and semantic-wise retrieval on the strings rather than the entire image. Using real-world data provided by the law enforcement community to the Pacific Northwest National Laboratory, we show that the proposed framework outperforms the traditional image retrieval framework, with better retrieval results and improved computational efficiency.
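The step of assembling located character components into strings can be sketched as follows. This is an illustrative simplification with invented tolerances, not the paper's framework: character boxes whose vertical centers are close form a line, and within a line, horizontally adjacent boxes form one string.

```python
def boxes_to_strings(boxes, y_tol=10, gap_max=30):
    """Group character bounding boxes into reading-order strings.

    boxes: list of (x, y, w, h, char) tuples for detected characters.
    """
    # Group boxes into lines by vertical center.
    lines = []
    for box in sorted(boxes, key=lambda b: b[1] + b[3] / 2):
        cy = box[1] + box[3] / 2
        for line in lines:
            ref = line[0]
            if abs(cy - (ref[1] + ref[3] / 2)) <= y_tol:
                line.append(box)
                break
        else:
            lines.append([box])
    # Within each line, split on large horizontal gaps.
    strings = []
    for line in lines:
        line.sort(key=lambda b: b[0])
        current = [line[0]]
        for prev, box in zip(line, line[1:]):
            if box[0] - (prev[0] + prev[2]) <= gap_max:
                current.append(box)
            else:
                strings.append("".join(b[4] for b in current))
                current = [box]
        strings.append("".join(b[4] for b in current))
    return strings

# Three adjacent characters form "TAG"; a distant box stays separate.
tags = [(0, 0, 10, 20, "T"), (12, 1, 10, 20, "A"),
        (24, 0, 10, 20, "G"), (200, 2, 10, 20, "X")]
print(boxes_to_strings(tags))  # ['TAG', 'X']
```

Retrieval then matches these recovered strings rather than raw pixels, which is what makes the search both more accurate and cheaper.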


Hawaii International Conference on System Sciences | 2014

Towards a Visual Analytics Framework for Handling Complex Business Processes

William Ribarsky; Derek Xiaoyu Wang; Wenwen Dou; William J. Tolone

Organizing data of many types that can come from anywhere in a complex business process is a challenging task. To tackle the challenge, we introduce the concepts of virtual sensors and process events. In addition, a visual interface is presented in this paper to aid in deploying the virtual sensors and analyzing process event information. The virtual sensors permit collection from the streams of data at any point in the process and transmission of the data in a form ready to be analyzed by the central analytics engine. Process events provide a uniform expression of data of different types in a form that can be automatically prioritized and that is readily meaningful to the users. Through the visual interface, the user can place the virtual sensors, interact with and group the process events, and delve into the details of the process at any point. The visual interface provides a multiview investigative environment for sensemaking and decisive action by the user.
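The virtual-sensor and process-event pairing described above can be sketched minimally: a sensor taps a stream at one point in the process and normalizes each raw record into a uniform, automatically prioritized event. All names and the priority rule here are hypothetical, not the paper's design.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class ProcessEvent:
    priority: int                       # lower number = more urgent
    source: str = field(compare=False)  # which virtual sensor saw it
    payload: dict = field(compare=False)

class VirtualSensor:
    """Taps a data stream at one point in the process and emits
    uniform ProcessEvents into a central priority queue."""
    def __init__(self, name, classify):
        self.name = name
        self.classify = classify  # raw record -> priority int

    def observe(self, record, queue):
        heapq.heappush(queue, ProcessEvent(self.classify(record), self.name, record))

queue = []
sensor = VirtualSensor("orders", lambda r: 0 if r.get("error") else 5)
sensor.observe({"order_id": 1}, queue)
sensor.observe({"order_id": 2, "error": "payment failed"}, queue)
print(heapq.heappop(queue).payload["order_id"])  # the error surfaces first: 2
```

The uniform record shape is what lets the central engine prioritize heterogeneous data automatically, regardless of which point in the process produced it.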


International Conference on Weblogs and Social Media | 2013

Discover Diamonds-in-the-Rough Using Interactive Visual Analytics System: Tweets as a Collective Diary of the Occupy Movement

Wenwen Dou; Derek Xiaoyu Wang; Zhiqiang Ma; William Ribarsky

Collaboration


Dive into William Ribarsky's collaboration.

Top Co-Authors

Wenwen Dou, University of North Carolina at Charlotte
Derek Xiaoyu Wang, University of North Carolina at Charlotte
Xiaoyu Wang, University of North Carolina at Charlotte
Jack Guest, University of North Carolina at Charlotte
Kalpathi R. Subramanian, University of North Carolina at Charlotte
Todd Eaglin, University of North Carolina at Charlotte
Amy Ingram, University of North Carolina at Charlotte
Caroline Ziemkiewicz, University of North Carolina at Charlotte