
Publication


Featured research published by Alexander M. Morison.


Machine Vision and Applications | 2007

An adaptive focus-of-attention model for video surveillance and monitoring

James W. Davis; Alexander M. Morison; David D. Woods

In current video surveillance systems, commercial pan/tilt/zoom (PTZ) cameras typically provide naive (or no) automatic scanning functionality to move a camera across its complete viewable field. However, the lack of scene-specific information inherently handicaps these scanning algorithms. We address this issue by automatically building an adaptive, focus-of-attention, scene-specific model using standard PTZ camera hardware. The adaptive model is constructed by first detecting local human activity (i.e., any translating object with a specific temporal signature) at discrete locations across a PTZ camera’s entire viewable field. The temporal signature of translating objects is extracted using motion history images (MHIs) and an original, efficient algorithm based on an iterative candidacy-classification-reduction process to separate the target motion from noise. The target motion at each location is then quantified and employed in the construction of a global activity map for the camera. We additionally present four new camera scanning algorithms which exploit this activity map to maximize a PTZ camera’s opportunity to observe human activity within the camera’s overall field of view. We expect that these efficient and effective algorithms are implementable within current commercial camera systems.
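
For readers who want the gist of the activity-map construction, a minimal NumPy sketch follows. It assumes grayscale frames and a discretized pan/tilt grid, and it deliberately omits the paper's candidacy-classification-reduction step for separating target motion from noise; the thresholds, durations, and the grab_frames driver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

MHI_DURATION = 30      # frames a motion trace persists in the MHI (assumed value)
DIFF_THRESHOLD = 25    # grayscale frame-difference threshold (assumed value)

def update_mhi(mhi, prev_frame, frame, timestamp):
    """Update a motion history image (MHI): pixels that changed get stamped
    with the current time; traces older than MHI_DURATION are cleared."""
    moved = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > DIFF_THRESHOLD
    mhi[moved] = timestamp
    mhi[mhi < timestamp - MHI_DURATION] = 0
    return mhi

def activity_at_location(mhi, timestamp):
    """Score one pan/tilt location: fraction of pixels with a fresh motion trace."""
    fresh = (mhi > 0) & (mhi > timestamp - MHI_DURATION)
    return float(np.mean(fresh))

def build_activity_map(grab_frames, pan_steps, tilt_steps):
    """Build a global activity map over a discretized pan/tilt grid.
    grab_frames(p, t) is a hypothetical camera driver yielding a short burst
    of grayscale frames with the camera pointed at grid cell (p, t)."""
    amap = np.zeros((tilt_steps, pan_steps))
    for t in range(tilt_steps):
        for p in range(pan_steps):
            frames = iter(grab_frames(p, t))
            prev = next(frames)
            mhi = np.zeros(prev.shape, dtype=np.float64)
            ts = 1
            for frame in frames:
                ts += 1
                mhi = update_mhi(mhi, prev, frame, ts)
                prev = frame
            amap[t, p] = activity_at_location(mhi, ts)
    return amap / max(amap.max(), 1e-9)   # normalize to [0, 1]
```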


Workshop on Applications of Computer Vision | 2007

Building Adaptive Camera Models for Video Surveillance

James W. Davis; Alexander M. Morison; David D. Woods

We address the limited automatic scanning functionality of standard PTZ camera systems. We present an adaptive, scene-specific model using standard PTZ camera hardware. The adaptive model is constructed automatically by detecting human activity in motion history images (MHIs) using an iterative candidacy-classification-reduction process. The target motion is quantified and employed in the construction of a global activity map, which in turn is used to direct or navigate the camera.
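
As one way an activity map of this kind could direct a camera, the sketch below samples pan/tilt cells with probability proportional to observed activity. It is a hypothetical policy for illustration, not necessarily one of the scanning algorithms the authors propose.

```python
import numpy as np

def next_view(activity_map, rng=np.random.default_rng()):
    """Pick the next pan/tilt grid cell with probability proportional to the
    activity observed there, so the camera dwells where people appear while
    still occasionally visiting quiet cells."""
    flat = activity_map.ravel() + 1e-6      # small floor keeps every cell reachable
    probs = flat / flat.sum()
    idx = rng.choice(flat.size, p=probs)
    return np.unravel_index(idx, activity_map.shape)   # (tilt index, pan index)

# Usage: draw the next view and (hypothetically) slew the camera to it.
amap = np.random.default_rng(0).random((6, 12))        # stand-in activity map
tilt_i, pan_i = next_view(amap)
print(tilt_i, pan_i)
```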


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2009

How panoramic visualization can support human supervision of intelligent surveillance

Alexander M. Morison; David D. Woods; James W. Davis

In video-based surveillance, people monitor a wide spatial area through video sensors for anomalous events related to safety and security. The size of the area, the number of video sensors, and the cameras' narrow fields of view make this a challenging cognitive task. Computer vision researchers have developed a wide range of algorithms to recognize patterns in the video stream (intelligent cameras). These advances create a challenge for human supervision of intelligent surveillance camera networks. This paper presents a new visualization, developed and implemented to integrate video-based computer vision algorithms with control of pan-tilt-zoom cameras in a manner that supports the human supervisory role.
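
One geometric ingredient any such panoramic display needs is a mapping from the camera's current pan/tilt and field of view onto panorama pixels, so the live view can be drawn in spatial context. The sketch below assumes an equirectangular panorama and example pan/tilt ranges; it illustrates the mapping only, not the paper's visualization design.

```python
import numpy as np

def ptz_to_panorama_rect(pan_deg, tilt_deg, hfov_deg, vfov_deg,
                         pano_w, pano_h,
                         pan_range=(-180.0, 180.0), tilt_range=(-90.0, 30.0)):
    """Map a camera's current pan/tilt and field of view to a pixel rectangle
    on an equirectangular panorama of the camera's full viewable field.
    Returns (x0, y0, x1, y1); the ranges are assumptions, not from the paper."""
    pan_span = pan_range[1] - pan_range[0]
    tilt_span = tilt_range[1] - tilt_range[0]
    def px(p):   # pan degrees -> panorama x
        return (p - pan_range[0]) / pan_span * pano_w
    def py(t):   # tilt degrees -> panorama y (top row = highest tilt)
        return (tilt_range[1] - t) / tilt_span * pano_h
    x0, x1 = px(pan_deg - hfov_deg / 2), px(pan_deg + hfov_deg / 2)
    y0, y1 = py(tilt_deg + vfov_deg / 2), py(tilt_deg - vfov_deg / 2)
    return tuple(int(round(v)) for v in (x0, y0, x1, y1))

# Example: a 4000x1200 panorama, camera at pan 30 deg, tilt -10 deg, 40x30 deg view.
print(ptz_to_panorama_rect(30, -10, 40, 30, 4000, 1200))
```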


Informatics (Basel) | 2016

Opening up the Black Box of Sensor Processing Algorithms through New Visualizations

Alexander M. Morison; David D. Woods

Vehicles and platforms with multiple sensors connect people in multiple roles with different responsibilities to scenes of interest. For many of these human–sensor systems there are a variety of algorithms that transform, select, and filter the sensor data prior to human intervention. Emergency response, precision agriculture, and intelligence, surveillance and reconnaissance (ISR) are examples of these human–computation–sensor systems. The authors examined a case of the latter to understand how people in various roles utilize the algorithms' output to identify meaningful properties in data streams given uncertainty. The investigations revealed: (a) that increasingly complex interactions occur across agents in the human–computation–sensor system; and (b) that analysts struggle to interpret the output of “black box” algorithms given uncertainty and change in the scenes of interest. The paper presents a new interactive visualization concept designed to “open up the black box” of sensor processing algorithms to support human analysts as they look for meaning in feeds from sensors.


international conference on human computer interaction | 2016

Seeing Through Multiple Sensors into Distant Scenes: The Essential Power of Viewpoint Control

Alexander M. Morison; Taylor Murphy; David D. Woods

Sensors are being attached to almost every device and vehicle and integrated into sensor systems that extend human reach into distant environments. This means human stakeholders have the potential to see into previously inaccessible environments and to take new vantage points and perspectives. However, current designs of these human-sensor systems suffer from basic deficiencies, such as an inability to keep pace with activities in the world, the keyhole problem, high re-orienting costs, and the multiple-feeds problem. Principled approaches to the development of human-sensor systems are necessary to overcome these challenges. Principles for viewpoint control provide the key to overcoming the limitations of current designs.


international conference on human computer interaction | 2016

Can I Reach That? An Affordance-Based Metric of Human-Sensor-Robot System Effectiveness

Taylor Murphy; Alexander M. Morison

A person's ability to perceive and act fluently in a remote environment through teleoperation of a robotic platform is clearly limited when compared to acting directly in an immediate environment. Despite the contrast between teleoperation and direct action, there are few metrics in the human-robot interaction literature that are sensitive to these differences. Existing human-robot assessment studies rely on observational accounts and studies that simulate domain tasks, then apply ad hoc metrics to assess performance. These metrics are typically properties of the task, like completion time, number of targets found, and operator mental workload. This study introduces a formal method and metric based on the perception of affordances. The study assesses a human-robot system's ability to perceive the reachability of an object using a mechanical arm. Affordance-based metrics are a new tool to quantify the effectiveness of different teleoperated sensor-robot system designs.
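
The shape of such an affordance-based metric can be sketched with standard psychophysics: fit a psychometric function to "reachable" judgments as a function of target distance, then read accuracy and precision off the fitted curve. The logistic form and the data below are hypothetical stand-ins, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(d, d50, slope):
    """Probability of judging a target at distance d 'reachable':
    a logistic that falls from 1 toward 0 as distance grows."""
    return 1.0 / (1.0 + np.exp((d - d50) / slope))

# Hypothetical data: target distances (as fractions of arm length) and the
# proportion of 'reachable' responses observed at each distance.
distances = np.array([0.5, 0.7, 0.9, 1.0, 1.1, 1.3, 1.5])
p_reach   = np.array([1.0, 0.95, 0.8, 0.55, 0.3, 0.1, 0.0])

(d50, slope), _ = curve_fit(psychometric, distances, p_reach, p0=[1.0, 0.1])

def inverse(p):
    """Distance at which the fitted curve predicts proportion p 'reachable'."""
    return d50 + slope * np.log(1.0 / p - 1.0)

# Two points on the curve: their midpoint reflects accuracy, their
# separation reflects precision of the affordance judgment.
d75, d25 = inverse(0.75), inverse(0.25)
print(f"accuracy (50% point): {d50:.2f} arm lengths")
print(f"precision (25%-75% width): {d25 - d75:.2f} arm lengths")
```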


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

Affordances as a Means to Assess Human-Sensor-Robot Performance

Taylor Murphy; Alexander M. Morison

Tele-operated sensor-robot systems consist of sensors on a robotic platform present in some location, with a human operator located somewhere outside the scene of interest. One common theme from observing operators of these systems in the field is that they must explicitly deliberate on the fit between a robot's capabilities and the surrounding environment. Even simple tasks like moving through an aperture or reaching for an object present a challenge. In perceptual psychology, the fit between capability and environment is called an affordance (Gibson, 1986). The theory of affordances provides a way to frame the difficulty robot operators have with teleoperation. From this perspective, the design of many robotic platforms appears to hobble an operator's natural ability to perceive affordances, impairing task performance (Morison, Woods & Murphy, 2014). This insight about the source of operator difficulty suggests that assessing the performance of any robotic system must also measure the ability of the operator to perceive affordances in the remote environment.

The authors developed a method to measure the perceptual performance of a robot operator judging the reachability of an object. The method includes a software simulation of a robotic platform, including a robotic arm, and its environment. People acting as Robot Handlers (Woods et al., 2004) make reachability judgments from images captured by the robot's camera. This method can generalize to assess the ability of any human-sensor-robot system to perceive other affordances. The measurement procedure is built on well-established methods developed in perceptual psychology (Warren, 1987; Heft, 1993). The procedure uses a within-subjects, repeated-measures design. Target objects are presented at different distances from the robot, and participants are asked to judge the reachability of the object. The placement of the target object is based on a staircase sampling algorithm that adapts to the participant's performance (Garcia-Perez, 2001). The sampling resulted in estimates for two different points on a psychometric function for each participating judge. The position of and distance between these two points is a measure of the accuracy and precision of the participant's ability to perceive the reachability affordance.

The measurement procedure tests the impact of different visual cues available when the view into a remote environment is mediated by a sensor platform. The first test of the method, reported here, used three different environments that share attributes with real-world operating environments. For example, the low condition was similar to operating underwater, with no ground plane or shadows present. The other two conditions, medium and high, approximate the visual cues available when operating in a deconstructed environment, like a collapsed building, and in a constructed environment, respectively. The results from the method were psychophysical functions for the human-sensor-robot system, where performance was directly proportional to the number of visual cues to depth. Interestingly, judgments of reachability based on sensor feeds from a robotic platform are poorer than making the same judgment when directly viewing the target and scene, as in classic studies of reachability perception (Warren, 1987). The average distance between the two estimated points for the high condition was slightly below half the robot arm's length (44%); for the low condition, this width increased to nearly the entire length of the robot arm (84%).

In conclusion, the results show two important points. First, it is possible to perform a psychophysical assessment of an affordance for an operator whose perception is mediated by a robotic system. Second, the limited ability of participants to perceive reachability corroborates observations of operators having difficulty tele-operating robotic platforms. The test is the first in a planned series that will continue to refine the method for human-sensor-robot system assessment. Additionally, future studies will adapt this method to compare different sensor platform designs, to assess whether sensor platform design can systematically alter the ability of robot handlers to perceive affordances in the remote environment.
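
For concreteness, a simple transformed up-down staircase in the spirit of the adaptive sampling the method cites (Garcia-Perez, 2001) is sketched below; the exact rules, step size, and simulated judge are illustrative assumptions rather than the study's procedure.

```python
import random

def staircase(respond, start=1.0, step=0.1, n_reversals=8, n_down=2):
    """A 1-up/2-down staircase sketch: respond(distance) -> True if the
    participant judges the target reachable. Two consecutive 'reachable'
    responses push the target farther (harder); one 'not reachable' pulls
    it closer (easier). Distances at direction reversals are averaged to
    estimate a point on the psychometric function."""
    d = start
    streak = 0          # consecutive 'reachable' responses
    direction = 0       # +1 moving farther, -1 moving closer
    reversals = []
    while len(reversals) < n_reversals:
        if respond(d):
            streak += 1
            if streak >= n_down:            # harder: push the target farther
                streak = 0
                if direction == -1:
                    reversals.append(d)     # direction flipped: record reversal
                direction = +1
                d += step
        else:                               # easier: pull the target closer
            streak = 0
            if direction == +1:
                reversals.append(d)
            direction = -1
            d = max(d - step, 0.0)
    return sum(reversals) / len(reversals)  # threshold estimate

# Hypothetical simulated judge whose true reachability limit is 1.1 arm lengths.
def simulated_judge(d, limit=1.1, noise=0.08):
    return random.gauss(d, noise) < limit

print(f"estimated threshold: {staircase(simulated_judge):.2f} arm lengths")
```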


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013

The Artificial Attention Model and Algorithm Baseline Testing over the Parameter Space

Daniel Roberts; Alexander M. Morison

Computational models of attention can be used to mitigate data overload, especially when multiple sensors provide feeds to a human observer who is not present in the same environment as the network of sensors. Computational models of attention use a variety of functional components to find a balance between reorienting to new events and stimuli and focusing on currently active and relevant events by guiding one or more sampling processes. This research reports the results from tests of the performance of several functional components of one computational attention model that has been designed to address overload from multiple sensor feeds. The functional components tested include parallel center and surround sampling processes, exploratory drive, and temporal dependencies. The tests map algorithm sampling behavior over its parameter space independent of environmental input, establishing baseline performance before testing how the model performs when multiple objects move and new events occur.
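
A toy sampling loop along these lines is sketched below, with stand-ins for the named components: interest decay for temporal dependencies, a uniform exploration floor for exploratory drive, and suppression of the currently fixated window so the model re-orients. It illustrates the general idea only; it is not the authors' model or its parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_attention(salience, gaze, decay=0.9, explore=0.05, fov=5):
    """One step of a toy attention loop (illustrative stand-ins only):
    - temporal dependency: interest decays, so old events lose priority;
    - exploratory drive: a uniform floor keeps quiet regions sampled;
    - re-orienting: the currently fixated window is suppressed so new
      events elsewhere can win the next sample."""
    h, w = salience.shape
    interest = salience * decay + explore
    y, x = gaze
    y0, y1 = max(y - fov, 0), min(y + fov + 1, h)
    x0, x1 = max(x - fov, 0), min(x + fov + 1, w)
    interest[y0:y1, x0:x1] *= 0.5            # inhibit the current focus
    probs = interest.ravel() / interest.sum()
    new_gaze = np.unravel_index(rng.choice(h * w, p=probs), (h, w))
    return interest, new_gaze

# Baseline run with no environmental input, mirroring the idea of mapping
# sampling behavior over the parameter space before events are introduced.
salience, gaze = np.full((64, 64), 1.0), (32, 32)
for _ in range(10):
    salience, gaze = step_attention(salience, gaze)
print("final gaze:", gaze)
```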


Archive | 2015

Human-Robot Interaction as Extending Human Perception to New Scales

Alexander M. Morison; David D. Woods; Taylor Murphy; Robert R. Hoffman; Peter A. Hancock; Mark W. Scerbo; Raja Parasuraman; James L. Szalma


Archive | 2010

Spherical View Point Controller and Method for Navigating a Network of Sensors

Alexander M. Morison; David D. Woods; Axel Roesler

Collaboration


Dive into Alexander M. Morison's collaborations.

Top Co-Authors

James L. Szalma, University of Central Florida
Peter A. Hancock, University of Central Florida