
Publication


Featured research published by Claude L. Fennema.


Computer Graphics and Image Processing | 1979

Velocity determination in scenes containing several moving objects

Claude L. Fennema; William B. Thompson

A method is described which quantifies the speed and direction of several moving objects in a sequence of digital images. A relationship between the time variation of intensity, the spatial gradient, and velocity has been developed which allows motion to be determined using clustering techniques. This paper describes these relationships and the clustering technique, and provides examples of the technique applied to real images containing several moving objects.
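The relationship the abstract refers to is commonly written as the brightness-change constraint Ix·u + Iy·v + It = 0: each pixel constrains only the velocity component along its intensity gradient, and clustering the constraints in velocity space recovers the motions present. The sketch below is an illustration of that idea, not the paper's implementation; image sizes, thresholds, and the voting grid are arbitrary assumptions.

```python
import numpy as np

def normal_flow(I0, I1):
    """Per-pixel normal flow from the brightness-change constraint
    Ix*u + Iy*v + It = 0: each pixel yields the velocity component
    along its spatial gradient, vn = -It / |grad I|."""
    Iy, Ix = np.gradient(I0)            # spatial gradient (rows = y, cols = x)
    It = I1 - I0                        # temporal intensity change
    mag = np.hypot(Ix, Iy)
    mask = mag > 1e-3                   # skip flat regions (no constraint)
    vn = -It[mask] / mag[mask]          # speed along the gradient direction
    theta = np.arctan2(Iy[mask], Ix[mask])
    return vn, theta

def vote(vn, theta, grid=np.linspace(-2.0, 2.0, 41)):
    """A pixel moving with true velocity (u, v) satisfies
    vn = u*cos(theta) + v*sin(theta); voting over candidate (u, v)
    cells and keeping the peaks separates the moving objects."""
    acc = np.zeros((grid.size, grid.size))
    for i, u in enumerate(grid):
        for j, v in enumerate(grid):
            pred = u * np.cos(theta) + v * np.sin(theta)
            acc[i, j] = np.count_nonzero(np.abs(pred - vn) < 0.25)
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return grid[i], grid[j]
```

With several objects, each motion produces its own peak in the accumulator; taking the single argmax, as here, recovers only the dominant one.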


Artificial Intelligence | 1970

Scene analysis using regions

Claude R. Brice; Claude L. Fennema

One of the vision projects of the Stanford Research Institute Artificial Intelligence Group is described. The method employed uses regions as basic data and progresses by successive partitioning of the picture toward an interpretable “goal partition”, which is then explored by a heuristic decision tree. A general structure is discussed and an example problem is shown in detail.
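A minimal illustration of region-based partitioning in this spirit (not the paper's algorithm; the boundary-weakness test and threshold are simplified assumptions): start with one region per pixel and merge neighbours wherever the boundary between them is weak, i.e. the intensity difference is small.

```python
import numpy as np

def merge_regions(img, weak_thresh=1):
    """Toy successive-partitioning sketch: begin with one region per
    pixel and union-find merge across 'weak' boundaries, where the
    intensity difference between neighbours is below weak_thresh."""
    h, w = img.shape
    parent = list(range(h * w))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy, dx in ((0, 1), (1, 0)):  # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and \
                        abs(int(img[y, x]) - int(img[ny, nx])) < weak_thresh:
                    parent[find(i)] = find(ny * w + nx)

    return np.array([find(i) for i in range(h * w)]).reshape(h, w)
```

Run on a picture with two flat areas, this yields two region labels; a full system would iterate with progressively weaker heuristics toward the goal partition.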


IEEE Transactions on Systems, Man, and Cybernetics | 1990

Model-directed mobile robot navigation

Claude L. Fennema; Allen R. Hanson; Edward M. Riseman; J.R. Beveridge; R. Kumar

The authors report on the system and methods used by the UMass Mobile Robot Project. Model-based processing of the visual sensory data is the primary mechanism used for controlling movement of an autonomous land vehicle through the environment, measuring progress towards a given goal, and avoiding obstacles. Goal-oriented navigation takes place through a partially modeled, unchanging environment that contains no unmodeled obstacles; this simplified environment provides a foundation for research in more complicated domains. The navigation system integrates perception, planning, and execution of actions. Of particular importance is that the planning processes are reactive and reason about landmarks that should be perceived at various stages of task execution. Correspondences between image features and expected landmark locations are used at several levels of abstraction to ensure proper plan execution. The system and some experiments that demonstrate the performance of its components are described.
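One way to picture the landmark-correspondence step is to compare the bearings at which modeled landmarks should appear, given the current pose estimate, with the bearings at which they are actually observed, and use the discrepancy to correct the estimate. This is a hypothetical sketch under that simple bearing-only model, not the project's actual code.

```python
import math

def predicted_bearing(pose, landmark):
    """Bearing (radians, relative to the robot's heading) at which a
    modeled landmark should appear from pose = (x, y, heading)."""
    x, y, heading = pose
    lx, ly = landmark
    return math.atan2(ly - y, lx - x) - heading

def heading_correction(pose, landmarks, observed_bearings):
    """Mean wrapped discrepancy (predicted minus observed) over the
    matched landmarks; adding it to the heading estimate reduces the
    drift accumulated while executing the plan."""
    errs = []
    for lm, obs in zip(landmarks, observed_bearings):
        d = predicted_bearing(pose, lm) - obs
        errs.append(math.atan2(math.sin(d), math.cos(d)))  # wrap to (-pi, pi]
    return sum(errs) / len(errs)
```

In the real system such corrections happen at several abstraction levels, from image features up to plan milestones; this sketch shows only the lowest-level geometric idea.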


IEEE Virtual Reality Conference | 2010

Influence of tactile feedback and presence on egocentric distance perception in virtual environments

Farahnaz Ahmed; Joseph D. Cohen; Katherine S. Binder; Claude L. Fennema

A number of studies have reported that distance judgments are underestimated in virtual environments (VEs) when compared to those made in the real world. Studies have also reported that providing users with visual feedback in the VE improves their distance perception and makes them feel more immersed in the virtual world. In this study, we investigated the effect of tactile feedback and visual manipulation of the VE on egocentric distance perception. In contrast to previous studies, which have focused on task-specific and error-corrective feedback (for example, providing knowledge about the errors in distance estimations), we demonstrate that exploratory feedback is sufficient for reducing errors in distance estimation. In Experiment 1, the effects of different types of feedback (visual, tactile, and visual plus tactile) on distance judgments were studied. Tactile feedback was given to participants as they explored and touched objects in a VE. Results showed that distance judgments improved in the VE regardless of the type of sensory feedback provided. In Experiment 2, we presented a real-world environment to the participants and then situated them in a VE that was either a replica or an altered representation of that environment. Results showed that participants significantly underestimated distances when the VE was not a replica of the physical space. We further found that providing both visual and tactile feedback did not reduce distance compression in this situation. These results are discussed in light of the nature of the feedback provided and how assumptions about the VE may affect distance perception in virtual environments.


Applied Perception in Graphics and Visualization | 2007

Orthographic and perspective projection influences linear vection in large screen virtual environments

Laura C. Trutoiu; Silvia-Dana Marin; Betty J. Mohler; Claude L. Fennema

Vection is defined as the visually induced illusion of self motion [Fischer and Kornmüller 1930]. Previous research has suggested that linear vection (the illusion of self-translation) is harder to achieve than circular vection (the illusion of self-rotation) in both laboratory settings (typically using 2D stimuli such as black and white stripes) [Rieser 2006] and virtual environment setups [Schulte-Pelkum 2007; Mohler et al. 2005]. In a real-life situation, when experiencing circular vection, all objects rotate around the observer with the same angular velocity. For linear motion, however, the change in the observer's position results in a change in the observed position of closer objects with respect to farther objects or the background. This phenomenon, motion parallax, provides pictorial depth cues, as closer objects appear to move faster than more distant objects.
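The parallax relationship can be made concrete: for an observer translating at speed v, a stationary point at distance d and bearing θ sweeps across the visual field at roughly ω = v·sin(θ)/d, so a point ten times closer appears to move ten times faster, while a point straight ahead does not sweep at all. A small numeric illustration (the speeds and distances are arbitrary):

```python
import math

def angular_speed(observer_speed, distance, bearing_deg):
    """Angular speed (rad/s) of a stationary point seen by a linearly
    translating observer: omega = v * sin(bearing) / d. Points dead
    ahead (bearing 0) do not sweep across the view; nearer points
    sweep faster -- the motion-parallax cue described above."""
    return observer_speed * math.sin(math.radians(bearing_deg)) / distance

near = angular_speed(1.5, 2.0, 90)    # walking speed, object 2 m away, abeam
far  = angular_speed(1.5, 20.0, 90)   # same geometry, ten times farther
# near is ten times far: closer objects appear to move faster
```

Under circular vection, by contrast, every stationary point shares the same angular velocity regardless of distance, which is why depth plays no comparable role there.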


Intelligent Robots and Computer Vision XII: Algorithms and Techniques | 1993

Finding landmark features under a broad range of lighting conditions

Claude L. Fennema

Whether computer vision is used to steer a robot, to determine the location of an object, to model the environment, or to perform recognition, it is usually necessary to have a simple, yet robust method for finding features. Over the last few decades many methods have been devised for locating these features using line or region based approaches. A problem that faces most approaches, however, is that variations in lighting due to changes in ambient light or to shadows make the procedure very complex or error prone. This paper describes a relatively simple method for finding features that is tolerant of wide variations in ambient lighting and works well in scenes containing shadows. The method makes use of correlation based template matching but derives most of its strength from the way it transforms the image data as matching is performed. In addition to a description of the method, the paper presents results of its use in experiments performed under a broad variety of conditions and discusses its role in model-based navigation and stereo matching.
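The specific image transform is not given in this summary; a standard illustration of lighting-tolerant correlation matching is zero-mean normalized cross-correlation (ZNCC), which is invariant to a gain-and-offset change in intensity (I → a·I + b, a > 0), a common model for ambient-light variation. A sketch, offered as an example of the general idea rather than the paper's method:

```python
import numpy as np

def zncc(patch, template):
    """Zero-mean normalized cross-correlation in [-1, 1]: subtracting
    the means removes an intensity offset, and normalizing removes a
    gain factor, so a brightened or dimmed copy still scores 1."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def find_template(image, template):
    """Brute-force search for the window that best matches template."""
    th, tw = template.shape
    best, best_yx = -2.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            score = zncc(image[y:y + th, x:x + tw], template)
            if score > best:
                best, best_yx = score, (y, x)
    return best_yx, best
```

Shadows violate the single global gain/offset model, which is one reason a more carefully chosen transform, as in the paper, is needed for robustness across shadow boundaries.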


Intelligent Robots and Computer Vision XI: Algorithms, Techniques, and Active Vision | 1992

Interweaving Reason, Action and Perception

Claude L. Fennema; Allen R. Hanson


Proceedings of a workshop on Image understanding workshop | 1989

Towards Autonomous Mobile Robot Navigation

Claude L. Fennema; Allen R. Hanson; Edward M. Riseman


1988 Robotics Conferences | 1989

Planning With Perceptual Milestones To Control Uncertainty In Robot Navigation

Claude L. Fennema; Edward M. Riseman; Allen R. Hanson


Archive | 1994

Integration for navigation on the UMASS mobile perception lab

Bruce A. Draper; Claude L. Fennema; Benny Rochwerger; Edward M. Riseman; Allen R. Hanson

Collaboration


Claude L. Fennema's top co-authors and their affiliations.

Edward M. Riseman (University of Massachusetts Amherst)
Bruce A. Draper (Colorado State University)
Laura C. Trutoiu (Carnegie Mellon University)
Mei C. Chuah (Carnegie Mellon University)