
Publications


Featured research published by Scott Ososky.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2011

From Tools to Teammates: Toward the Development of Appropriate Mental Models for Intelligent Robots

Elizabeth Phillips; Scott Ososky; Janna Grove; Florian Jentsch

A transition in robotics from tools to teammates is underway, but, because it is in an early state, experience with intelligent robots and agents is limited. As such, human mental models of intelligent robots are primitive, easily influenced by superficial characteristics, and often incomplete or inaccurate. This paper investigates the factors that influence mental models of robots, and explores solutions for the formation of accurate and useful mental models with a specific focus on military applications. Humans must possess a clear and accurate understanding of how robots communicate and operate, particularly in military settings where intelligent, autonomous robotic agents are desired. Complete and accurate mental models in these hazardous and critical applications will reduce the inherent danger of automation disuse or misuse. Implications for training and developing appropriate trust are also discussed.


Proceedings of SPIE | 2012

The importance of shared mental models and shared situation awareness for transforming robots from tools to teammates

Scott Ososky; David Schuster; Florian Jentsch; Stephen M. Fiore; Randall Shumaker; Christian Lebiere; Unmesh Kurup; Jean Oh; Anthony Stentz

Current ground robots are largely employed via tele-operation and provide their operators with useful tools to extend reach, improve sensing, and avoid dangers. To move from robots that are useful as tools to truly synergistic human-robot teaming, however, will require not only greater technical capabilities among robots, but also a better understanding of the ways in which the principles of teamwork can be applied from exclusively human teams to mixed teams of humans and robots. In this respect, a core characteristic that enables successful human teams to coordinate shared tasks is their ability to create, maintain, and act on a shared understanding of the world and the roles of the team and its members in it. The team performance literature clearly points towards two important cornerstones for shared understanding among team members: mental models and situation awareness. These constructs have been investigated as products of teams as well; amongst teams, they are shared mental models and shared situation awareness. Consequently, we are studying how these two constructs can be measured and instantiated in human-robot teams. In this paper, we report results from three related efforts that are investigating process and performance outcomes for human-robot teams. Our investigations include: (a) how human mental models of tasks and teams change depending on whether a teammate is human, a service animal, or an advanced automated system; (b) how computer modeling can lead to mental models being instantiated and used in robots; (c) how we can simulate the interactions between human and future robotic teammates on the basis of changes in shared mental models and situation assessment.


International Conference on Virtual, Augmented and Mixed Reality | 2013

Cognitive Models of Decision Making Processes for Human-Robot Interaction

Christian Lebiere; Florian Jentsch; Scott Ososky

A fundamental aspect of human-robot interaction is the ability to generate expectations for the decisions of one’s teammate(s) in order to coordinate plans of actions. Cognitive models provide a promising approach by allowing both a robot to model a human teammate’s decision process as well as by modeling the process through which a human develops expectations regarding its robot partner’s actions. We describe a general cognitive model developed using the ACT-R cognitive architecture that can apply to any situation that could be formalized using decision trees expressed in the form of instructions for the model to execute. The model is composed of three general components: instructions on how to perform the task, situational knowledge, and past decision instances. The model is trained using decision instances from a human expert, and its performance is compared to that of the expert.
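The abstract above describes a model built from instructions, situational knowledge, and past decision instances, trained on examples from a human expert. As a loose, hypothetical illustration of that instance-based approach (a minimal sketch in plain Python, not actual ACT-R code; the attribute names and decisions are invented for the example):

```python
# Hypothetical sketch of instance-based decision making: the model stores
# expert decision instances and, given a new situation, retrieves the most
# similar past instance and reuses its decision.

def similarity(situation, instance):
    """Count attribute values the situation shares with a stored instance."""
    return sum(1 for k, v in situation.items()
               if instance["situation"].get(k) == v)

def decide(situation, instances, default="hold"):
    """Return the decision of the most similar expert instance, if any."""
    if not instances:
        return default
    best = max(instances, key=lambda inst: similarity(situation, inst))
    if similarity(situation, best) == 0:
        return default  # no overlap with experience: fall back on instructions
    return best["decision"]

# Decision instances collected from a (hypothetical) human expert:
expert_instances = [
    {"situation": {"threat": "low", "visibility": "good"}, "decision": "advance"},
    {"situation": {"threat": "high", "visibility": "poor"}, "decision": "retreat"},
]

print(decide({"threat": "high", "visibility": "poor"}, expert_instances))  # retreat
```

A fuller model would weight partial matches and blend across instances, as ACT-R's declarative memory retrieval does; this sketch only shows the retrieve-and-reuse structure the abstract outlines.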


Proceedings of SPIE | 2014

Determinants of system transparency and its influence on trust in and reliance on unmanned robotic systems

Scott Ososky; Tracy Sanders; Florian Jentsch; Peter A. Hancock; Jessie Y. C. Chen

Increasingly autonomous robotic systems are expected to play a vital role in aiding humans in complex and dangerous environments. It is unlikely, however, that such systems will be able to consistently operate with perfect reliability. Even less than 100% reliable systems can provide a significant benefit to humans, but this benefit will depend on a human operator’s ability to understand a robot’s behaviors and states. The notion of system transparency is examined as a vital aspect of robotic design, for maintaining humans’ trust in and reliance on increasingly automated platforms. System transparency is described as the degree to which a system’s action, or the intention of an action, is apparent to human operators and/or observers. While the physical designs of robotic systems have been demonstrated to greatly influence humans’ impressions of robots, determinants of transparency between humans and robots are not solely robot-centric. Our approach considers transparency as an emergent property of the human–robot system. In this paper, we present insights from our interdisciplinary efforts to improve the transparency of teams made up of humans and unmanned robots. These near-futuristic teams are those in which robot agents will autonomously collaborate with humans to achieve task goals. This paper demonstrates how factors such as human–robot communication and human mental models regarding robots impact a human’s ability to recognize the actions or states of an automated system. Furthermore, we discuss the implications of system transparency for other critical HRI factors such as situation awareness, operator workload, and perceptions of trust.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2012

Human-animal teams as an analog for future human-robot teams

Elizabeth Phillips; Scott Ososky; Brittany Swigert; Florian Jentsch

Current military robotics research aims to transition the robot from tool to teammate, one that is more autonomous and acts with limited supervision within a highly complex and demanding environment. Investigating likely analogs to the human-robot team can provide guidance and inspiration for the simultaneous development of robot design and human training. Human-animal teams are one such metaphor that can provide insight into the capabilities of near-future robotic teammates. This paper explores the human-animal team metaphor, and describes a continuum of relevant human–animal team capabilities that can inform and guide the design of next-generation human–robot teams.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

An Investigation of Human Decision-Making in a Human-Robot Team Task

Elizabeth Phillips; Scott Ososky; Florian Jentsch

This paper presents initial insights from an exploratory analysis of human decision making in a human-robot teaming scenario. A cognitive model in the form of a decision tree was developed using local and national police foot pursuit protocols. Participants were asked to read through a series of hypothetical scenarios involving a Soldier and a robot engaging in a foot pursuit of a person of interest. Participants made decisions at each node of the decision tree and then made a tactical decision concerning which member of the team should engage in the pursuit. Initial results revealed that individual decision nodes were not associated with participants’ choice of who should engage in the pursuit. Trust in robots, however, was significantly associated with the participants’ choices.


International Conference on Engineering Psychology and Cognitive Ergonomics | 2013

The impact of type and level of automation on situation awareness and performance in human-robot interaction

David Schuster; Florian Jentsch; Thomas Fincannon; Scott Ososky

In highly autonomous robotic systems, human operators are able to attend to their own, separate tasks rather than directly operating the robot to accomplish their immediate task(s). When those tasks do not directly involve the robotic system, however, operators can end up lacking situation awareness (SA) when called on to recover from an automation failure or an unexpected event. In this paper, we describe the mechanisms of this problem, known as the out-of-the-loop performance problem, and explain why it may still exist in future robotic systems. Existing solutions to the problem, which focus on the level of automation, are reviewed. We describe our current empirical work, which aims to expand upon taxonomies of levels of automation to better understand how engineers of robotic systems may mitigate the problem.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013

A Picture is Worth a Thousand Mental Models: Evaluating Human Understanding of Robot Teammates

Scott Ososky; Elizabeth Phillips; David Schuster; Florian Jentsch

Across the domains in which robots are prevalent, it is possible to imagine many different forms and functions of robots. The purpose of this investigation was to gain a better understanding of the scope and type of a priori knowledge structures humans hold of robots, among novice users of robotic systems. Participant mental models of a hypothetical robot in a military team scenario were elicited along the dimensions of form and function, taking prior individual experiences into consideration. Participants who conceived a robot with anthropomorphic or zoomorphic qualities reported more perceived knowledge of their robotic teammate, as well as of their human–robot team. Participants who had more experience with video games also believed that they had more knowledge of their imagined robot and their human–robot team. Insight into novice users’ understanding of robots has implications for HRI design and training.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2010

Some Good and Bad with Spatial Ability in Three Person Teams That Operate Multiple Unmanned Vehicles

Thomas Fincannon; Scott Ososky; Florian Jentsch; Joseph R. Keebler; Elizabeth Phillips

This study reports findings on how the spatial ability of each operator in a three-person team influenced workload and performance. Sixty-six participants were randomly assigned to the role of unmanned aerial vehicle (UAV) operator, unmanned ground vehicle (UGV) operator, or intelligence officer (leader), creating a total of 22 teams, and spatial ability was assessed with Part 5 of the Guilford-Zimmerman Aptitude Survey. Findings indicated that the spatial ability of the UAV and UGV operators improved reconnaissance, and while the spatial ability of the UAV operator improved reacquisition of objectives after reconnaissance, the spatial ability of the intelligence officer hindered team performance on this second task. A rationale for these results was developed using findings from the Multiple Resource Questionnaire (MRQ). Discussion focuses on the relationship between spatial ability and visual perception in complex teams.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

An Evaluation of Human Mental Models of Tactical Robot Movement

Andrew B. Talone; Elizabeth Phillips; Scott Ososky; Florian Jentsch

In this paper, we describe an ongoing exploratory study investigating human mental models of tactical robot movement under different combinations of mission commands, constraints, and environmental features. In particular, we are assessing the relationship between participants’ mental models of robot form and their expectations for robot movement. The results of this study will inform the design of future experimentation with a soldier population and the design of tactical robot movement behaviors. Due to data collection being in its early stages, findings will be presented at the 2015 HFES Annual Meeting.

Collaboration


Dive into Scott Ososky's collaboration.

Top Co-Authors

Florian Jentsch (University of Central Florida)
Elizabeth Phillips (University of Central Florida)
Thomas Fincannon (University of Central Florida)
David Schuster (University of Central Florida)
Stephen M. Fiore (University of Central Florida)
Christian Lebiere (Carnegie Mellon University)
Randall Shumaker (University of Central Florida)
A. William Evans (University of Central Florida)
Andrew B. Talone (University of Central Florida)