Publication


Featured research published by Jean Scholtz.


Information Visualization | 2011

Collaborative visualization: definition, challenges, and research agenda

Petra Isenberg; Niklas Elmqvist; Jean Scholtz; Daniel Cernea; Kwan-Liu Ma; Hans Hagen

The conflux of two growing areas of technology – collaboration and visualization – into a new research direction, collaborative visualization, provides new research challenges. Technology now allows us to easily connect and collaborate with one another – in settings as diverse as over networked computers, across mobile devices, or using shared displays such as interactive walls and tabletop surfaces. Digital information is now regularly accessed by multiple people in order to share information, to view it together, to analyze it, or to form decisions. Visualizations are used to deal more effectively with large amounts of information while interactive visualizations allow users to explore the underlying data. While researchers face many challenges in collaboration and in visualization, the emergence of collaborative visualization poses additional challenges, but it is also an exciting opportunity to reach new audiences and applications for visualization tools and techniques. The purpose of this article is (1) to provide a definition, clear scope, and overview of the evolving field of collaborative visualization, (2) to help pinpoint the unique focus of collaborative visualization with its specific aspects, challenges, and requirements within the intersection of general computer-supported cooperative work and visualization research, and (3) to draw attention to important future research questions to be addressed by the community. We conclude by discussing a research agenda for future work on collaborative visualization and urge for a new generation of visualization tools that are designed with collaboration in mind from their very inception.


Systems, Man and Cybernetics | 2004

Final report for the DARPA/NSF interdisciplinary study on human-robot interaction

Jennifer L. Burke; Robin R. Murphy; Erika Rogers; Vladimir J. Lumelsky; Jean Scholtz

As part of a Defense Advanced Research Projects Agency/National Science Foundation study on human-robot interaction (HRI), over sixty representatives from academia, government, and industry participated in an interdisciplinary workshop, which allowed roboticists to interact with psychologists, sociologists, cognitive scientists, communication experts, and human-computer interaction specialists to discuss common interests in the field of HRI, and to establish a dialogue across the disciplines for future collaborations. We include initial work that was done in preparation for the workshop, links to keynote and other presentations, and a summary of the findings, outcomes, and recommendations that were generated by the participants. Findings of the study include: the need for more extensive interdisciplinary interaction; identification of basic taxonomies and research issues; social informatics; establishment of a small number of common application domains; and field experience for members of the HRI community. An overall conclusion of the workshop was expressed as follows: HRI is a cross-disciplinary area, which poses barriers to meaningful research, synthesis, and technology transfer. The vocabularies, experiences, methodologies, and metrics of the communities are sufficiently different that cross-disciplinary research is unlikely to happen without sustained funding and an infrastructure to establish a new HRI community.


Systems, Man and Cybernetics | 2003

Human-robot interaction: development of an evaluation methodology for the bystander role of interaction

Jean Scholtz; Siavosh Bahrami

Various methods can be used for evaluating human-robot interaction. The appropriateness of those evaluation methodologies depends on the roles that people assume in interacting with robots. In this paper we focus on developing an evaluation strategy for the bystander role. In this role, the person has no training in interacting with the robot and must develop a mental model to co-exist in the same environment with the robot.


Visual Analytics Science and Technology | 2008

VAST 2008 Challenge: Introducing mini-challenges

Georges G. Grinstein; Catherine Plaisant; Sharon J. Laskowski; Theresa O'Connell; Jean Scholtz; Mark A. Whiting

Visual analytics experts realize that one effective way to push the field forward and to develop metrics for measuring the performance of various visual analytics components is to hold an annual competition. The VAST 2008 Challenge is the third year that such a competition was held in conjunction with the IEEE Visual Analytics Science and Technology (VAST) symposium. The authors restructured the contest format used in 2006 and 2007 to reduce the barriers to participation and offered four mini-challenges and a Grand Challenge. Mini-challenge participants were to use visual analytic tools to explore one of four heterogeneous data collections to analyze specific activities of a fictitious, controversial movement. Questions asked in the Grand Challenge required the participants to synthesize data from all four data sets. In this paper we give a brief overview of the data sets, the tasks, the participation, the judging, and the results.


Human-Robot Interaction | 2007

Adapting GOMS to model human-robot interaction

Jill L. Drury; Jean Scholtz; David E. Kieras

A formal interaction modeling technique known as Goals, Operators, Methods, and Selection rules (GOMS) is well-established in human-computer interaction as a cost-effective way of evaluating designs without the participation of end users. This paper explores the use of GOMS for evaluating human-robot interaction. We provide a case study in the urban search-and-rescue domain and raise issues for developing GOMS models that have not been previously addressed. Further, we provide rationale for selecting different types of GOMS modeling techniques to help the analyst model human-robot interfaces.
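As a hedged illustration of how GOMS-style analysis predicts performance without end users (this is not the models developed in the paper), the Keystroke-Level Model, the simplest GOMS variant, estimates expert task time by summing standard per-operator durations. The sketch below uses commonly cited average operator times; the operator sequence is a hypothetical example:

```python
# Keystroke-Level Model (KLM), the simplest GOMS variant: estimate expert
# task time by summing standard per-operator durations (seconds).
# Durations below are the commonly cited averages from the KLM literature.
OPERATOR_TIMES = {
    "K": 0.28,  # press a key or button (average typist)
    "P": 1.10,  # point with a mouse to a target on screen
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def klm_estimate(sequence):
    """Sum standard operator times for a sequence such as 'MHPKK'."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# Hypothetical example: mentally prepare (M), move hand to mouse (H),
# point at an interface control (P), then press a button twice (KK).
print(round(klm_estimate("MHPKK"), 2))  # 1.35 + 0.40 + 1.10 + 2 * 0.28
```

The appeal of this style of analysis, as the abstract notes, is that a design can be compared against alternatives analytically, before any user testing.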


Visual Analytics Science and Technology | 2006

VAST 2006 Contest - A Tale of Alderwood

Georges G. Grinstein; Theresa O'Connell; Sharon J. Laskowski; Catherine Plaisant; Jean Scholtz; Mark A. Whiting

Visual analytics experts realize that one effective way to push the field forward and to develop metrics for measuring the performance of various visual analytics components is to hold an annual competition. The first Visual Analytics Science and Technology (VAST) contest was held in conjunction with the 2006 IEEE VAST Symposium. The competition entailed the identification of possible political shenanigans in the fictitious town of Alderwood. A synthetic data set was made available, along with associated tasks. We summarize how we prepared and advertised the contest, developed some initial metrics for evaluation, and selected the winners. The winners were invited to participate in an additional live competition at the symposium to provide them with feedback from senior analysts.


Information Visualization | 2009

Advancing user-centered evaluation of visual analytic environments through contests

Loura Costello; Georges G. Grinstein; Catherine Plaisant; Jean Scholtz

In this paper, the authors describe the Visual Analytics Science and Technology (VAST) Symposium contests run in 2006 and 2007 and the VAST 2008 and 2009 challenges. These contests were designed to provide researchers with a better understanding of the tasks and data that face potential end users. Access to these end users is limited because of time constraints and the classified nature of the tasks and data. In that respect, the contests serve as an intermediary, with the metrics and feedback serving as measures of utility to the end users. The authors summarize the lessons learned and the future directions for VAST Challenges.


Information Visualization | 2011

Developing guidelines for assessing visual analytics environments

Jean Scholtz

In this article, we develop guidelines for evaluating visual analytics environments based on a synthesis of reviews for the entries to the 2009 Visual Analytics Science and Technology (VAST) Symposium Challenge and from a user study with professional intelligence analysts. By analyzing the 2009 VAST Challenge reviews, we gained a better understanding of what is important to our reviewers, both visualization researchers and professional analysts. We also report on a small user study with professional analysts to determine the important factors that they use in evaluating visual analysis systems. We then looked at guidelines developed by researchers in various domains and synthesized the results from these three efforts into an initial set for use by others in the community. One challenge for future visual analytics systems is to help in the generation of reports. In our user study, we also worked with analysts to understand the criteria they used to evaluate the quality of analytic reports. We propose that this knowledge will be useful as researchers look at systems to automate some of the report generation. From these two efforts, we produced some initial guidelines for evaluating visual analytics environments and for the evaluation of analytic reports. It is important to understand that these guidelines are initial drafts and are limited in scope, as the visual analytics systems we evaluated were used in specific tasks. We propose these guidelines as a starting point for the visual analytics community.


IEEE International Conference on Technologies for Homeland Security | 2013

Tools for understanding identity

Sadie Creese; Thomas Gibson-Robinson; Michael Goldsmith; Duncan Hodges; Dee Kim; Oriana J. Love; Jason R. C. Nurse; Bill Pike; Jean Scholtz

We present two tools for analysing identity in support of homeland security. Both are based upon the SuperIdentity model, which brings together cyber and physical spaces into a single understanding of identity. Between them, the tools provide support for defensive, information-gathering, and capability-planning operations. The first tool allows an analyst to explore and understand the model, and to apply it to risk-exposure assessment activities for a particular individual, e.g. an influential person in the intelligence or government community, or a commercial company board member. It can also be used to understand critical capabilities in an organization's identity-attribution process, and so to plan resource investment. The second tool, referred to as Identity Map, is designed to support investigations requiring enrichment of identities and the making of attributions. Both are currently working prototypes.


Workshop on Beyond Time and Errors | 2012

A reflection on seven years of the VAST challenge

Jean Scholtz; Mark A. Whiting; Catherine Plaisant; Georges G. Grinstein

We describe the evolution of the IEEE Visual Analytics Science and Technology (VAST) Challenge from its origin in 2006 to present (2012). The VAST Challenge has provided an opportunity for visual analytics researchers to test their innovative thoughts on approaching problems in a wide range of subject domains against realistic datasets and problem scenarios. Over time, the Challenge has changed to correspond to the needs of researchers and users. We describe those changes and the impacts they have had on topics selected, data and questions offered, submissions received, and the Challenge format.

Collaboration


Explore Jean Scholtz's collaborations with her top co-authors.

Top Co-Authors

Mark A. Whiting (Pacific Northwest National Laboratory)
Georges G. Grinstein (University of Massachusetts Lowell)
Sharon J. Laskowski (National Institute of Standards and Technology)
Emile L. Morse (National Institute of Standards and Technology)
Brian Antonishek (National Institute of Standards and Technology)
Michelle Potts Steves (National Institute of Standards and Technology)
J D. Young (National Institute of Standards and Technology)
Kristin A. Cook (Pacific Northwest National Laboratory)
Theresa O'Connell (National Institute of Standards and Technology)
Lyndsey Franklin (Pacific Northwest National Laboratory)