
Publications


Featured research published by Sean Owens.


AIAA Infotech@Aerospace 2007 Conference and Exhibit | 2007

Geolocation of RF Emitters by Many UAVs

Paul Scerri; Robin Glinton; Sean Owens; David Scerri; Katia P. Sycara

This paper presents an approach to using a large team of UAVs to find radio frequency (RF) emitting targets in a large area. Small, inexpensive UAVs that can collectively and rapidly determine the approximate location of intermittently broadcasting and mobile RF emitters have a range of applications in both military domains, e.g., finding SAM batteries, and civilian domains, e.g., finding lost hikers. Received Signal Strength Indicator (RSSI) sensors on board the UAVs measure the strength of RF signals across a range of frequencies. The signals, although noisy and ambiguous due to structural noise, e.g., multipath effects, overlapping signals, and sensor noise, allow estimates of emitter locations to be made. Generating a probability distribution over emitter locations requires integrating multiple signals from different UAVs into a Bayesian filter, hence requiring cooperation between the UAVs. Once likely target locations are identified, EO-camera-equipped UAVs must be tasked to provide a video stream of the area to allow a user to identify the emitter.
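
The fusion step sketched in this abstract can be illustrated with a few lines of code. The following is a minimal, hypothetical grid-based Bayesian filter that fuses RSSI readings from several UAVs into a posterior over emitter location; the log-distance path-loss model, the noise parameters, and all names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

# Hypothetical grid-based Bayesian filter for a single RF emitter.
# Assumptions (not from the paper): log-distance path-loss model,
# Gaussian RSSI noise, and a uniform prior over a square search area.

GRID = 100                                # 100 x 100 cells
CELL = 50.0                               # metres per cell
P0, PATH_LOSS, SIGMA = -40.0, 2.0, 6.0    # dBm at 1 m, exponent, noise std (assumed)

xs = (np.arange(GRID) + 0.5) * CELL
X, Y = np.meshgrid(xs, xs)

def rssi_likelihood(uav_xy, rssi_dbm):
    """Likelihood of one RSSI reading for every candidate emitter cell."""
    d = np.hypot(X - uav_xy[0], Y - uav_xy[1]) + 1e-3
    expected = P0 - 10.0 * PATH_LOSS * np.log10(d)
    return np.exp(-0.5 * ((rssi_dbm - expected) / SIGMA) ** 2)

def fuse(readings, prior=None):
    """Multiply the prior by the likelihood of each (uav_xy, rssi) reading."""
    belief = np.ones((GRID, GRID)) if prior is None else prior.copy()
    for uav_xy, rssi in readings:
        belief *= rssi_likelihood(uav_xy, rssi)
        belief /= belief.sum()            # renormalise after each update
    return belief

if __name__ == "__main__":
    # Three UAVs report RSSI for the same emitter (synthetic values).
    readings = [((1200.0, 800.0), -78.0),
                ((3000.0, 2500.0), -90.0),
                ((2000.0, 1500.0), -82.0)]
    posterior = fuse(readings)
    i, j = np.unravel_index(posterior.argmax(), posterior.shape)
    print(f"most likely emitter cell: x={xs[j]:.0f} m, y={xs[i]:.0f} m")
```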


Information Fusion | 2009

An integrated approach to high-level information fusion

Katia P. Sycara; Robin Glinton; Bin Yu; Joseph A. Giampapa; Sean Owens; Michael Lewis; Ltc Charles Grindle

In today's fast-paced military operational environment, vast amounts of information must be sorted and fused not only to allow commanders to make situation assessments, but also to support the generation of hypotheses about enemy force disposition and enemy intent. Current information fusion technology has the following two limitations. First, current approaches do not consider the battlefield context as a first-class entity. In contrast, we consider situational context in terms of terrain analysis and inference. Second, there are no integrated and implemented models of the high-level fusion process. This paper describes the HiLIFE (High-Level Information Fusion Environment) computational framework for seamless integration of high levels of fusion (levels 2, 3 and 4). The crucial components of HiLIFE that we present in this paper are: (1) multi-sensor fusion algorithms, and their performance results, that operate in heterogeneous sensor networks to determine not only single targets but also force aggregates, (2) computational approaches for terrain-based analysis and inference that automatically combine low-level terrain features (such as forested areas, rivers, etc.) with additional information, such as weather, and transform them into high-level, militarily relevant abstractions, such as NO-GO and SLOW-GO areas, avenues of approach, and engagement areas, (3) a model for inferring adversary intent by mapping sensor readings of opponent forces to possible opponent goals and actions, and (4) sensor management for positioning intelligence collection assets for further data acquisition. The HiLIFE framework closes the loop on information fusion by specifying how the different components can computationally work together in a coherent system. Furthermore, the framework is inspired by a military process, Intelligence Preparation of the Battlefield, that grounds it in practice. HiLIFE is integrated with a distributed military simulation system, OTBSAF, and the RETSINA multi-agent infrastructure to provide agile and sophisticated reasoning. In addition, the paper presents validation results of the automated terrain analysis that were obtained through experiments with military intelligence Subject Matter Experts (SMEs).
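
A rough illustration of component (2): the sketch below combines hypothetical raster layers (slope, forest cover, river cells, weather) into GO, SLOW-GO, and NO-GO mobility classes. The layer names and thresholds are invented for illustration and are not HiLIFE's actual rules.

```python
import numpy as np

# Illustrative mobility classification from low-level terrain layers.
# Layer names and thresholds are assumptions, not HiLIFE's actual rules.
GO, SLOW_GO, NO_GO = 0, 1, 2

def classify_mobility(slope_deg, forest, river, heavy_rain=False):
    """Combine terrain rasters (all the same shape) into a mobility map."""
    mobility = np.full(slope_deg.shape, GO, dtype=int)

    # Forested or moderately steep cells slow movement.
    mobility[forest | (slope_deg > 20)] = SLOW_GO

    # Rivers and very steep cells block movement outright.
    mobility[river | (slope_deg > 35)] = NO_GO

    # Assumed weather effect: heavy rain downgrades SLOW-GO to NO-GO.
    if heavy_rain:
        mobility[mobility == SLOW_GO] = NO_GO
    return mobility

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    slope = rng.uniform(0, 45, (50, 50))
    forest = rng.random((50, 50)) < 0.3
    river = np.zeros((50, 50), dtype=bool)
    river[:, 25] = True                   # a river running north-south
    m = classify_mobility(slope, forest, river, heavy_rain=False)
    print("GO cells:", (m == GO).sum(), "SLOW-GO:", (m == SLOW_GO).sum(),
          "NO-GO:", (m == NO_GO).sum())
```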


Human-Robot Interaction | 2011

Scalable target detection for large robot teams

Huadong Wang; Andreas Kolling; Nathan Brooks; Sean Owens; Shafiq Abedin; Paul Scerri; Pei-Ju Lee; Shih Yi Chien; Michael Lewis; Katia P. Sycara

In this paper, we present an asynchronous display method, coined the image queue, which allows operators to search through a large amount of data gathered by autonomous robot teams. We discuss and investigate the advantages of an asynchronous display for foraging tasks, with an emphasis on Urban Search and Rescue. The image queue approach mines video data to present the operator with a relevant and comprehensive view of the environment in order to identify targets of interest, such as injured victims. It fills the gap for comprehensive and scalable displays needed to obtain a network-centric perspective for UGVs. We compared the image queue to a traditional synchronous display with live video feeds and found that the image queue reduces errors and operators' workload. Furthermore, it disentangles target detection from concurrent system operations and enables a call-center approach to target detection. With such an approach we can scale up to very large multi-robot systems gathering huge amounts of data that are then distributed to multiple operators.
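
The image queue idea can be sketched as a priority queue of frames ranked by how much previously unseen area each frame covers, which an operator then reviews asynchronously. The grid-cell coverage model and class names below are assumptions standing in for the paper's video-mining step.

```python
import heapq

# Minimal sketch of an asynchronous "image queue": frames are ranked by how
# much new ground they cover, and operators pop the most valuable frame next.
# The coverage model (a set of grid cells per frame) is an assumed stand-in
# for the video mining described in the paper.

class ImageQueue:
    def __init__(self):
        self._heap = []          # (-score, counter, frame_id, cells)
        self._seen = set()       # grid cells already shown to operators
        self._counter = 0

    def push(self, frame_id, cells):
        """Add a frame covering a set of terrain grid cells."""
        score = len(set(cells) - self._seen)
        heapq.heappush(self._heap, (-score, self._counter, frame_id, set(cells)))
        self._counter += 1

    def pop_best(self):
        """Return the frame adding the most unseen coverage (rescoring lazily)."""
        while self._heap:
            neg_score, _, frame_id, cells = heapq.heappop(self._heap)
            new = cells - self._seen
            if len(new) < -neg_score:     # stale score: re-insert with a fresh one
                self.push(frame_id, cells)
                continue
            self._seen |= cells
            return frame_id, len(new)
        return None

if __name__ == "__main__":
    q = ImageQueue()
    q.push("frame_a", {(0, 0), (0, 1), (1, 1)})
    q.push("frame_b", {(0, 1), (1, 1)})                   # mostly redundant with frame_a
    q.push("frame_c", {(5, 5), (5, 6), (6, 6), (6, 5)})
    print(q.pop_best())   # frame_c first: most new coverage
    print(q.pop_best())
    print(q.pop_best())
```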


International Conference on Information Fusion | 2007

A decentralized approach to space deconfliction

Paul Scerri; Sean Owens; Bin Yu; Katia P. Sycara

This paper presents a decentralized approach to path planning for large numbers of autonomous vehicles in sparse environments. Unlike existing approaches, which are either computationally expensive or communication intensive, the presented approach allows large numbers of vehicles to plan independently with low communication overhead. The key to the algorithm is the observation that, in sparse environments, collisions are exceptional and that most of the time vehicles will simply not hit each other. Hence, it is reasonable to allow vehicles to plan independently and then resolve the small number of conflicts. We operationalize this by having each vehicle send its planned path to a small number of its teammates via tokens. Each team member is required to check for conflicts among the paths it has been informed about via tokens and to inform those involved when any conflict is detected. Both analytic and empirical results show that the approach has a very high probability of detecting all potential collisions for large numbers of vehicles in both 2D and 3D environments.
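
The token-passing scheme can be sketched roughly as follows: each vehicle sends its timed path as a token to a few random teammates, and each recipient checks the token against the paths it already holds. The separation threshold, the number of recipients, and all names are assumed values, not the paper's parameters.

```python
import math
import random

# Minimal sketch of token-based deconfliction: each vehicle sends its planned
# path (a list of (t, x, y) waypoints) to a few random teammates; a recipient
# checks the token against paths it already holds and reports conflicts.
# Separation distance, K, and all names are assumed values.

SEPARATION = 50.0   # metres (assumed)
K = 3               # teammates that receive each path token (assumed)

def conflicts(path_a, path_b, separation=SEPARATION):
    """Return time steps where the two timed paths come too close."""
    b_by_t = {t: (x, y) for t, x, y in path_b}
    hits = []
    for t, x, y in path_a:
        if t in b_by_t:
            bx, by = b_by_t[t]
            if math.hypot(x - bx, y - by) < separation:
                hits.append(t)
    return hits

class Vehicle:
    def __init__(self, name):
        self.name = name
        self.known_paths = {}             # owner -> timed path

    def receive_token(self, owner, path):
        """Store the token and report any conflicts with paths already known."""
        reports = []
        for other, other_path in self.known_paths.items():
            hits = conflicts(path, other_path)
            if hits:
                reports.append((owner, other, hits))
        self.known_paths[owner] = path
        return reports

def share_paths(vehicles, plans, k=K):
    """Each vehicle sends its plan to k random teammates; collect conflict reports."""
    reports = []
    for owner, path in plans.items():
        others = [v for v in vehicles if v.name != owner]
        for checker in random.sample(others, min(k, len(others))):
            reports.extend(checker.receive_token(owner, path))
    return reports

if __name__ == "__main__":
    random.seed(1)
    vehicles = [Vehicle(f"uav{i}") for i in range(5)]
    plans = {
        "uav0": [(t, 100.0 * t, 0.0) for t in range(10)],
        "uav1": [(t, 100.0 * t, 20.0) for t in range(10)],   # parallel and too close
        "uav2": [(t, 0.0, 100.0 * t) for t in range(10)],
    }
    for owner, other, hits in share_paths(vehicles, plans):
        print(f"conflict between {owner} and {other} at t={hits}")
```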


Simulation | 2004

Extending the ONESAF Testbed into a C4ISR Testbed

Joseph A. Giampapa; Katia P. Sycara; Sean Owens; Robin Glinton; Young-Woo Seo; Bin Yu; Charles E. Grindle; Michael Lewis

This article describes how the modeling and simulation environment of the OneSAF Testbed Baseline (OTB) v1.0 has been extended to enable the testing of heterogeneous algorithms that are being designed for real-world C4ISR applications. This has been accomplished by building an architecture that extends functional and logical components of the OTB system in the following ways: the use of the OTB Compact Terrain Database for terrain analysis and preliminary threat assessment, the addition of the RETSINA-OTB Bridge for the real-time query and control of OTB entities, and the addition of new DIS-based sensor entities for interoperation with Command and Control algorithms, to name a few. This article illustrates how to make a few small but general extensions to a modeling and simulation system to create a larger testbed system with minimum impact on the native system and with great potential for the range of applications that can exploit it.


International Conference on Information Fusion | 2005

An evidential model of multisensor decision fusion for force aggregation and classification

Bin Yu; Joseph A. Giampapa; Sean Owens; Katia P. Sycara

This paper describes airborne sensor networks for target detection and identification in military applications. One challenge is how to process and aggregate data from many sensor sources to generate an accurate and timely picture of the battlefield. The majority of research in data fusion has focused primarily on level 1 fusion, e.g., using multisensor data to determine the position, velocity, attributes, and identity of individual targets. In this paper we present a novel approach to military force aggregation and classification using the mathematical theory of evidence and doctrinal templates. Our approach helps commanders understand operational pictures of the battlefield, e.g., enemy force levels and deployment, and make better decisions than their adversaries on the battlefield. A simple application of our approach is illustrated using the OTBSAF simulation testbed and the RETSINA system.
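
A minimal sketch of the evidential machinery is shown below: Dempster's rule of combination applied to two mass functions over a small frame of possible aggregate types. The hypothesis labels and mass values are invented for illustration and do not reproduce the paper's doctrinal templates.

```python
from itertools import product

# Minimal sketch of Dempster's rule of combination for classifying a force
# aggregate. The frame of discernment and the mass values are invented for
# illustration; the paper's doctrinal templates are not reproduced here.

def combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

if __name__ == "__main__":
    # Frame of discernment: possible aggregate types (assumed labels).
    # Sensor 1 (e.g., a track-count heuristic) mostly supports "tank_company".
    m_sensor1 = {frozenset({"tank_company"}): 0.6,
                 frozenset({"tank_company", "mech_infantry"}): 0.3,
                 frozenset({"tank_company", "mech_infantry", "artillery"}): 0.1}
    # Sensor 2 (e.g., an acoustic classifier) supports tracked-vehicle units.
    m_sensor2 = {frozenset({"tank_company", "artillery"}): 0.7,
                 frozenset({"tank_company", "mech_infantry", "artillery"}): 0.3}

    fused = combine(m_sensor1, m_sensor2)
    for hypothesis, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
        print(sorted(hypothesis), round(mass, 3))
```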


International Conference on Information Fusion | 2005

Intent inference using a potential field model of environmental influences

Robin Glinton; Sean Owens; Joseph A. Giampapa; Katia P. Sycara; Michael Lewis; Chuck Grindle

Intent inferencing is the ability to predict an opposing force's (OPFOR) high-level goals. This is accomplished by interpreting the OPFOR's disposition, movements, and actions within the context of known OPFOR doctrine and knowledge of the environment. For example, given likely OPFOR force size, composition, and disposition, observations of recent activity, obstacles in the terrain, cultural features such as bridges and roads, and key terrain, intent inferencing will be able to predict the opposing force's high-level goal and likely behavior for achieving it. This paper describes an algorithm for intent inferencing on an enemy force that takes as input track data, recent movements by OPFOR forces across terrain, terrain from a GIS database, and OPFOR doctrine. This algorithm uses artificial potential fields to discover field parameters of paths that best relate sensed track data from the movements of individual enemy aggregates to hypothesized goals. Hypothesized goals for individual aggregates are then combined with enemy doctrine to discover the intent of several aggregates acting in concert.
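
A rough sketch of the idea: for each hypothesized goal, roll out the path a unit would follow under an artificial potential field (attraction to the goal, repulsion from obstacles) and score how well that path explains the observed track. The gains, step size, and goal labels below are assumptions, not the fitted parameters described in the paper.

```python
import numpy as np

# Minimal sketch of scoring hypothesized goals with an artificial potential
# field: for each candidate goal, roll out the path a unit would follow and
# compare it with the observed track. Gains, step size, and goal labels are
# assumed values, not the paper's fitted field parameters.

ATTRACT, REPULSE, STEP = 1.0, 4.0e4, 100.0

def field_step(pos, goal, obstacles):
    """Take one step down the potential field from `pos`."""
    force = ATTRACT * (goal - pos) / (np.linalg.norm(goal - pos) + 1e-6)
    for obs in obstacles:
        d = pos - obs
        r = np.linalg.norm(d) + 1e-6
        force += REPULSE * d / r ** 3          # short-range repulsion
    return pos + STEP * force / (np.linalg.norm(force) + 1e-6)

def rollout(start, goal, obstacles, steps):
    path = [np.asarray(start, dtype=float)]
    for _ in range(steps):
        path.append(field_step(path[-1], goal, obstacles))
    return np.array(path)

def goal_score(track, goal, obstacles):
    """Mean distance between the observed track and the predicted path (lower is better)."""
    predicted = rollout(track[0], goal, obstacles, steps=len(track) - 1)
    return float(np.mean(np.linalg.norm(predicted - track, axis=1)))

if __name__ == "__main__":
    obstacles = [np.array([500.0, 500.0])]
    goals = {"seize_bridge": np.array([1000.0, 0.0]),
             "attack_depot": np.array([0.0, 1000.0])}
    # Synthetic observed track heading roughly toward the bridge.
    track = np.array([[i * 100.0, 15.0 * np.sin(i / 2.0)] for i in range(10)])
    scores = {g: goal_score(track, xy, obstacles) for g, xy in goals.items()}
    best = min(scores, key=scores.get)
    print({g: round(s, 1) for g, s in scores.items()}, "->", best)
```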


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2004

Automating Terrain Analysis: Algorithms for Intelligence Preparation of the Battlefield

Charles E. Grindle; Michael Lewis; Robin Glinton; Joseph A. Giampapa; Sean Owens; Katia P. Sycara

Terrain information supplies an important context for ground operations. The layout of terrain is a determining factor in the arraying of forces, both friendly and enemy, and the structuring of Courses of Action (COAs). For example, key terrain, such as a bridge over an unfordable river, or terrain that allows observation of the opposing force's line of advance, is likely to give a significant military advantage to the force that holds it. Combining information about terrain features with hypotheses about enemy assets can lead to inferences about possible avenues of approach, areas that provide cover and concealment, areas that are vulnerable to enemy observation, or choke points. Currently, intelligence officers manually combine terrain-based information, information about the tactical significance of certain terrain features, and information regarding enemy assets and doctrine to form hypotheses about the disposition of enemy forces and enemy intent. In this paper, we present a set of algorithms and tools for automating terrain analysis and compare their results with those of experienced intelligence analysts.
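
As one small, hypothetical example of such a tool, the sketch below flags choke points on a passable/impassable grid as cells where the passable corridor is narrow in the east-west or north-south direction. The width rule and the toy grid are assumptions, not the algorithms evaluated in the paper.

```python
import numpy as np

# Illustrative choke-point detector: a passable cell is flagged as a choke
# point if the passable corridor through it is narrow in the east-west or
# north-south direction. The width rule, threshold, and toy grid are
# assumptions, not the algorithms evaluated in the paper.

def corridor_width(passable, i, j, axis, limit):
    """Count contiguous passable cells through (i, j) along one axis, up to `limit`."""
    width = 1
    for step in (-1, 1):
        k = step
        while width < limit:
            ii, jj = (i, j + k) if axis == "ew" else (i + k, j)
            if not (0 <= ii < passable.shape[0] and 0 <= jj < passable.shape[1]):
                break
            if not passable[ii, jj]:
                break
            width += 1
            k += step
    return width

def choke_points(passable, max_width=3):
    """Return passable cells whose corridor is at most `max_width` wide in some direction."""
    points = []
    for i, j in zip(*np.nonzero(passable)):
        narrow_ew = corridor_width(passable, i, j, "ew", max_width + 1) <= max_width
        narrow_ns = corridor_width(passable, i, j, "ns", max_width + 1) <= max_width
        if narrow_ew or narrow_ns:
            points.append((int(i), int(j)))
    return points

if __name__ == "__main__":
    passable = np.ones((9, 9), dtype=bool)
    passable[4, :] = False         # an unfordable river running east-west
    passable[4, 4] = True          # a single bridge cell across it
    print(choke_points(passable))  # the bridge at (4, 4) is the only choke point
```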


AIAA Infotech@Aerospace Conference | 2009

Using Immersive 3D Terrain Models For Fusion Of UAV Surveillance Imagery

Sean Owens; Katia P. Sycara; Paul Scerri

Monitoring imagery from multiple UAVs currently requires operators to constantly watch the video and coordinate with one another to ensure the region of interest is covered. This paper presents initial steps towards an approach that would allow a single operator to utilize data from several UAVs and interact with the data in a more natural and less stressful way. The concept is to paint video directly onto a 3D model of the environment and allow the operator to interact with the model as they would a computer game. The location of any of the UAVs need not be known to the operator. The operator might eventually mark areas of the environment to be searched more or less carefully or often, and allow the UAVs to cooperatively and autonomously determine paths that achieve this.
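
The core geometric operation behind painting video onto a terrain model can be sketched as casting a ray from the camera through a pixel and marching it until it hits the terrain height field. The pinhole camera model, camera pose, and synthetic terrain below are assumptions for illustration, not the paper's rendering pipeline.

```python
import numpy as np

# Minimal sketch of "painting" a video pixel onto a terrain model: cast a ray
# from the camera through the pixel and march it until it drops below the
# terrain height field. The pinhole model, camera pose, and synthetic terrain
# are assumptions for illustration, not the paper's rendering pipeline.

def terrain_height(x, y):
    """Synthetic rolling terrain (a stand-in for a real elevation model)."""
    return 20.0 * np.sin(x / 300.0) * np.cos(y / 300.0)

def pixel_ray(u, v, width, height, fov_deg, cam_rot):
    """Direction of the ray through pixel (u, v) for a pinhole camera."""
    f = (width / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    d_cam = np.array([u - width / 2.0, v - height / 2.0, f])
    d_world = cam_rot @ d_cam
    return d_world / np.linalg.norm(d_world)

def project_pixel(cam_pos, ray_dir, step=2.0, max_range=5000.0):
    """March the ray until it intersects the terrain; return the hit point."""
    t = 0.0
    while t < max_range:
        p = cam_pos + t * ray_dir
        if p[2] <= terrain_height(p[0], p[1]):
            return p
        t += step
    return None

if __name__ == "__main__":
    # Camera 400 m above the terrain, looking along +x, pitched 45 degrees down.
    cam_pos = np.array([0.0, 0.0, 400.0])
    pitch = np.radians(45.0)
    # Columns give the world directions of the camera's x (right), y (down),
    # and z (forward) axes for this pose.
    cam_rot = np.array([[0.0, -np.sin(pitch), np.cos(pitch)],
                        [-1.0, 0.0, 0.0],
                        [0.0, -np.cos(pitch), -np.sin(pitch)]])
    hit = project_pixel(cam_pos, pixel_ray(320, 240, 640, 480, 60.0, cam_rot))
    print("pixel (320, 240) maps to terrain point:",
          None if hit is None else hit.round(1))
```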


AIAA Infotech@Aerospace Conference | 2009

Environmental Factors Affecting Situation Awareness in Unmanned Aerial Vehicles

Prasanna Velagapudi; Sean Owens; Paul Scerri; Katia P. Sycara; Michael Lewis

Operating in the field also entails additional environmental stresses, such as less optimal use of computer equipment, variations in weather, and the physical demands of the terrain. In this paper, a pilot study is conducted to determine whether any of these factors significantly impact situation awareness by comparing operator performance in a visual identification task in a live field test with operators performing an identical task in a lab environment. Metric results suggest that performance is similar across the two conditions, but qualitative responses from participants suggest that the underlying strategies employed differ in the two conditions.

Collaboration


Dive into Sean Owens's collaboration.

Top Co-Authors

Katia P. Sycara (Carnegie Mellon University)
Paul Scerri (Information Sciences Institute)
Michael Lewis (University of Pittsburgh)
Robin Glinton (Carnegie Mellon University)
Bin Yu (Carnegie Mellon University)
Nathan Brooks (Carnegie Mellon University)
Chuck Grindle (University of Pittsburgh)
Shafiq Abedin (University of Pittsburgh)
Steven Okamoto (Carnegie Mellon University)