Publication


Featured research published by Mark A. Whiting.


IEEE Symposium on Information Visualization | 2004

IN-SPIRE InfoVis 2004 Contest Entry

Pak Chung Wong; Christian Posse; Mark A. Whiting; Susan L. Havre; Nick Cramer; Anuj R. Shah; Mudita Singhal; Alan E. Turner; James J. Thomas

This is the first part (summary) of a three-part contest entry submitted to IEEE InfoVis 2004. The contest topic is visualizing InfoVis symposium papers from 1995 to 2002 and their references. The paper introduces the IN-SPIRE visualization tool, describes the visualization process and results, and presents lessons learned.


IEEE Visual Analytics Science and Technology (VAST) | 2008

VAST 2008 Challenge: Introducing mini-challenges

Georges G. Grinstein; Catherine Plaisant; Sharon J. Laskowski; Teresa O'Connell; Jean Scholtz; Mark A. Whiting

Visual analytics experts realize that one effective way to push the field forward, and to develop metrics for measuring the performance of various visual analytics components, is to hold an annual competition. The VAST 2008 Challenge marks the third year that such a competition has been held in conjunction with the IEEE Visual Analytics Science and Technology (VAST) symposium. The authors restructured the contest format used in 2006 and 2007 to reduce the barriers to participation, offering four mini-challenges and a Grand Challenge. Mini-challenge participants used visual analytic tools to explore one of four heterogeneous data collections and analyze specific activities of a fictitious, controversial movement. Questions asked in the Grand Challenge required participants to synthesize data from all four data sets. In this paper we give a brief overview of the data sets, the tasks, the participation, the judging, and the results.


IEEE Visual Analytics Science and Technology (VAST) | 2006

VAST 2006 Contest - A Tale of Alderwood

Georges G. Grinstein; Theresa O'Connell; Sharon J. Laskowski; Catherine Plaisant; Jean Scholtz; Mark A. Whiting

Visual analytics experts realize that one effective way to push the field forward, and to develop metrics for measuring the performance of various visual analytics components, is to hold an annual competition. The first Visual Analytics Science and Technology (VAST) contest was held in conjunction with the 2006 IEEE VAST Symposium. The competition entailed the identification of possible political shenanigans in the fictitious town of Alderwood. A synthetic data set was made available, along with tasks. We summarize how we prepared and advertised the contest, developed some initial metrics for evaluation, and selected the winners. The winners were invited to participate in an additional live competition at the symposium to provide them with feedback from senior analysts.


IEEE Visual Analytics Science and Technology (VAST) | 2012

VAST Challenge 2012: Visual analytics for big data

Kristin A. Cook; Georges G. Grinstein; Mark A. Whiting; Michael Cooper; Paul R. Havig; Kristen Liggett; Bohdan Nebesh; Celeste Lyn Paul

The 2012 Visual Analytics Science and Technology (VAST) Challenge posed two challenge problems for participants to solve using a combination of visual analytics software and their own analytic reasoning abilities. Challenge 1 (C1) involved visualizing the network health of the fictitious Bank of Money to provide situation awareness and identify emerging trends that could signify network issues. Challenge 2 (C2) involved using the provided network logs to identify issues of concern within a region of the Bank of Money network experiencing operational difficulties. Participants were asked to analyze the data and provide solutions and explanations for both challenges. The data sets were downloaded by nearly 1100 people by the close of submissions. The VAST Challenge received 40 submissions from participants in 12 different countries, and 14 awards were given.


Information Visualization | 2014

The VAST Challenge: history, scope, and outcomes: An introduction to the Special Issue

Kristin A. Cook; Georges G. Grinstein; Mark A. Whiting

The annual Visual Analytics Science and Technology (VAST) Challenge provides visual analytics researchers, developers, and designers an opportunity to apply their best tools and techniques to invented problems that include a realistic scenario, data, tasks, and questions to be answered. Submissions are processed much like conference papers: contestants are provided reviewer feedback, and excellence is recognized with awards. A day-long VAST Challenge workshop takes place each year at the IEEE VAST conference to share results and recognize outstanding submissions. Short papers are published each year in the annual VAST proceedings. Over the history of the challenge, participants have investigated a wide variety of scenarios, such as bioterrorism, epidemics, arms smuggling, social unrest, and computer network attacks, among many others. Contestants have been provided with large numbers of realistic but synthetic Coast Guard interdiction records, intelligence reports, hospitalization records, microblog records, personal RFID tag locations, huge amounts of cyber security log data, and several hours of video. This paper describes the process for developing the synthetic VAST Challenge datasets and conducting the annual challenges. This paper also provides an introduction to this special issue of Information Visualization, focusing on the impacts of the VAST Challenge.


Workshop on Beyond Time and Errors | 2012

A reflection on seven years of the VAST challenge

Jean Scholtz; Mark A. Whiting; Catherine Plaisant; Georges G. Grinstein

We describe the evolution of the IEEE Visual Analytics Science and Technology (VAST) Challenge from its origin in 2006 to the present (2012). The VAST Challenge has provided an opportunity for visual analytics researchers to test their innovative approaches to problems in a wide range of subject domains against realistic datasets and problem scenarios. Over time, the Challenge has changed to correspond to the needs of researchers and users. We describe those changes and the impacts they have had on the topics selected, the data and questions offered, the submissions received, and the Challenge format.


IEEE Visual Analytics Science and Technology (VAST) | 2007

VAST 2007 Contest - Blue Iguanodon

Georges G. Grinstein; Catherine Plaisant; Sharon J. Laskowski; Theresa O'Connell; Jean Scholtz; Mark A. Whiting

Visual analytics experts realize that one effective way to push the field forward, and to develop metrics for measuring the performance of various visual analytics components, is to hold an annual competition. The second Visual Analytics Science and Technology (VAST) contest was held in conjunction with the 2007 IEEE VAST symposium. In this contest, participants used visual analytic tools to explore a large heterogeneous data collection, construct a scenario, and find evidence of illegal and terrorist activities buried in the data. A synthetic data set was made available, along with tasks. In this paper we describe some of the advances we have made since the first competition, held in 2006.


Information Visualization | 2014

Evaluation of visual analytics environments: The road to the Visual Analytics Science and Technology challenge evaluation methodology

Jean Scholtz; Catherine Plaisant; Mark A. Whiting; Georges G. Grinstein

Evaluation of software can take many forms, ranging from algorithm correctness and performance to evaluations that focus on the value to the end user. This article presents a discussion of the development of an evaluation methodology for visual analytics environments. The Visual Analytics Science and Technology Challenge was created as a community evaluation resource. This resource is available to researchers and developers of visual analytics environments and allows them to test their designs and visualizations and to compare the results with the solution and the entries prepared by others. Sharing results allows the community to learn from each other and, ideally, to advance more quickly. In this article, we discuss the original challenge and its evolution during the 7 years since its inception. While the Visual Analytics Science and Technology Challenge is the focus of this article, there are lessons for many involved in setting up a community evaluation program, including the need to understand the purpose of the evaluation, to decide upon the right metrics, and to implement those metrics appropriately, including datasets and evaluators. For ongoing evaluations, it is also necessary to track the evolution of the field and to ensure that the evaluation methodologies keep pace with the science being evaluated. The discussions of these topics in the context of the Visual Analytics Science and Technology Challenge should be pertinent to many interested in community evaluations.


IEEE Visual Analytics Science and Technology (VAST) | 2009

VAST contest dataset use in education

Mark A. Whiting; Chris North; Alex Endert; Jean Scholtz; Jereme N. Haack; Caroline F. Varley; James J. Thomas

The IEEE Visual Analytics Science and Technology (VAST) Symposium has held a contest each year since its inception in 2006. These events are designed to provide visual analytics researchers and developers with analytic challenges similar to those encountered by professional information analysts. The VAST contest has had an extended life outside of the symposium, however, as materials are being used in universities and other educational settings, either to help teachers of visual analytics-related classes or for student projects. We describe how we develop VAST contest datasets in a way that results in products usable in different settings, and we review specific examples of the adoption of VAST contest materials in the classroom. The examples are drawn from graduate and undergraduate courses at Virginia Tech and from the Visual Analytics "Summer Camp" run by the National Visualization and Analytics Center in 2008. We finish with a brief discussion of evaluation metrics for education.


Document Engineering | 2005

Enabling massive scale document transformation for the semantic web: the Universal Parsing Agent™

Mark A. Whiting; Wendy E. Cowley; Nick Cramer; Alex G. Gibson; Ryan E. Hohimer; Ryan T. Scott; Stephen C. Tratz

The Universal Parsing Agent (UPA) is a document analysis and transformation program that supports massive scale conversion of information into forms suitable for the semantic web. UPA provides reusable tools to analyze text documents; identify and extract important information elements; enhance text with semantically descriptive tags; and output the information that is needed in the format and structure that is needed.

Collaboration


Dive into Mark A. Whiting's collaborations.

Top Co-Authors

Jean Scholtz, Pacific Northwest National Laboratory
Georges G. Grinstein, University of Massachusetts Lowell
Kristin A. Cook, Pacific Northwest National Laboratory
Sharon J. Laskowski, National Institute of Standards and Technology
Theresa O'Connell, National Institute of Standards and Technology
Jereme N. Haack, Pacific Northwest National Laboratory
Wendy E. Cowley, Battelle Memorial Institute
Caroline F. Varley, Pacific Northwest National Laboratory
John Fallon, University of Massachusetts Amherst
Kristen Liggett, Air Force Research Laboratory