Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Charles J. Cohen is active.

Publication


Featured research published by Charles J. Cohen.


Applications in Optical Science and Engineering | 1993

Comprehensive study of three-object triangulation

Charles J. Cohen; Frank V. Koss

Given three landmarks whose locations in Cartesian space are known, and a robot with the ability to detect each landmark's heading with respect to the robot's internal orientation, the task is to determine the robot's location and orientation in Cartesian space. Although methods are mentioned in the literature, none are completely detailed to the point where useful algorithms to compute the solution are given. This paper presents four methods: (1) iterative search, (2) geometric circle intersection, (3) geometric triangulation, and (4) the Newton-Raphson iterative method. All four methods are presented in detail and compared in terms of robustness and computation time. For example, circle intersection fails when all three landmarks lie on a circle, while the Newton-Raphson method fails when the initial guess of the robot's position and orientation is beyond a certain bound. The shortcomings and strengths of each method are discussed. A sensitivity analysis is performed on each method, with noise added to the landmark locations. The authors were involved in a mobile robotics project that demanded a robust three-landmark triangulation algorithm. None of the above methods alone was adequate, but an intelligent combination of them served to overcome their individual weaknesses. The implementation of this overall absolute positioning system is detailed.
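As an illustration of the first of the four methods, an iterative search over the robot's pose can be sketched as a coarse-to-fine grid search that minimizes the mismatch between predicted and measured landmark bearings. The landmark coordinates, grid resolution, and halving schedule below are illustrative choices, not the paper's implementation.

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def bearings(pose, landmarks):
    """Bearing of each landmark relative to the robot's heading."""
    x, y, th = pose
    return [wrap(math.atan2(ly - y, lx - x) - th) for lx, ly in landmarks]

def bearing_error(pose, landmarks, measured):
    """Sum of squared angular residuals against the measured bearings."""
    return sum(wrap(b - m) ** 2
               for b, m in zip(bearings(pose, landmarks), measured))

def iterative_search(landmarks, measured, guess, span=4.0, th_span=math.pi,
                     rounds=18, steps=6):
    """Coarse-to-fine grid search over (x, y, theta), halving the span each round.
    The current best pose is always among the candidates, so the error never grows."""
    best = guess
    for _ in range(rounds):
        bx, by, bth = best
        grid = [i / steps for i in range(-steps, steps + 1)]
        best = min(((bx + gx * span, by + gy * span, wrap(bth + gt * th_span))
                    for gx in grid for gy in grid for gt in grid),
                   key=lambda p: bearing_error(p, landmarks, measured))
        span *= 0.5
        th_span *= 0.5
    return best
```

With exact (noise-free) bearings the search recovers the true pose to within the final grid resolution; as the abstract notes, robustness under noisy measurements is what motivates combining this with the other methods.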


International Conference on Automatic Face and Gesture Recognition | 1996

Dynamical system representation, generation, and recognition of basic oscillatory motion gestures

Charles J. Cohen; Lynn Conway; Daniel E. Koditschek

We present a system for the generation and recognition of oscillatory gestures. Inspired by gestures used in two representative human-to-human control areas, we consider a set of oscillatory motions and refine from them a 24-gesture lexicon. Each gesture is modeled as a dynamical system with added geometric constraints to allow for real-time gesture recognition using a small amount of processing time and memory. The gestures are used to control a pan-tilt camera neck. We propose extensions for use in areas such as mobile robot control and telerobotics.
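A minimal sketch of the linear-in-parameters idea: fit a'' = t1·x + t2·x' to a sampled 1-D trajectory by least squares and use the recovered parameters (t1, t2) as a gesture signature. The finite-difference scheme and two-parameter model here are illustrative assumptions, not the paper's exact formulation.

```python
import math

def fit_oscillator(xs, dt):
    """Least-squares fit of a = t1*x + t2*v to a sampled 1-D trajectory,
    with velocity v and acceleration a from central finite differences."""
    v = [(xs[i + 1] - xs[i - 1]) / (2 * dt) for i in range(1, len(xs) - 1)]
    a = [(xs[i + 1] - 2 * xs[i] + xs[i - 1]) / dt ** 2 for i in range(1, len(xs) - 1)]
    x = xs[1:-1]
    # Solve the 2x2 normal equations for the linear-in-parameters model.
    sxx = sum(xi * xi for xi in x)
    svv = sum(vi * vi for vi in v)
    sxv = sum(xi * vi for xi, vi in zip(x, v))
    sxa = sum(xi * ai for xi, ai in zip(x, a))
    sva = sum(vi * ai for vi, ai in zip(v, a))
    det = sxx * svv - sxv * sxv
    t1 = (sxa * svv - sva * sxv) / det
    t2 = (sva * sxx - sxa * sxv) / det
    return t1, t2
```

For a pure oscillation x(t) = cos(ωt), which satisfies x'' = -ω²x, the fit recovers t1 ≈ -ω² and t2 ≈ 0; different oscillatory gestures then map to different points in (t1, t2) parameter space, which is cheap to compare in real time.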


Applied Imagery Pattern Recognition Workshop | 2003

Quantum image processing (QuIP)

Glenn J. Beach; Chris C. Lomont; Charles J. Cohen

Moore's law states that computing performance doubles every 18 months. While this has held true for 40 years, it is widely believed that this will soon come to an end. Quantum computation offers a potential solution to the eventual failure of Moore's law. Researchers have shown that efficient quantum algorithms exist and can perform some calculations significantly faster than classical computers. Quantum computers require very different algorithms than classical computers, so the challenge of quantum computation is to develop efficient quantum algorithms. Cybernet is working with the Air Force Research Laboratory (AFRL) to create image processing algorithms for quantum computers. We have shown that existing quantum algorithms (such as Grover's algorithm) are applicable to image processing tasks. We are continuing to identify other areas of image processing which can be improved through the application of quantum computing.
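For reference, the search primitive the abstract mentions can be sketched as a small classical statevector simulation of Grover's algorithm: a sign-flip oracle on the marked index followed by inversion about the mean, repeated roughly (π/4)·√N times. This is a generic illustration of the primitive, not the image-processing algorithms developed under the AFRL effort.

```python
import math

def grover_search(n_qubits, marked, iterations):
    """Simulate Grover's algorithm on a real-amplitude statevector."""
    n = 2 ** n_qubits
    state = [1.0 / math.sqrt(n)] * n          # uniform superposition
    for _ in range(iterations):
        state[marked] = -state[marked]        # oracle: flip the marked amplitude
        mean = sum(state) / n
        state = [2 * mean - a for a in state]  # diffusion: inversion about the mean
    return state

# For N = 8 items, ~(pi/4)*sqrt(8) rounds down to 2 iterations.
state = grover_search(3, marked=5, iterations=2)
prob = state[5] ** 2   # probability of measuring the marked item
```

After two iterations the marked item's measurement probability is about 0.94, versus 1/8 for a classical random guess, illustrating the quadratic speedup the abstract refers to.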


Systems, Man and Cybernetics | 1998

Eye tracker system for use with head mounted displays

Glenn J. Beach; Charles J. Cohen; Jeff Braun; Gary Moody

Head mounted displays (HMDs) are convenient tools for presenting visual imagery. The usefulness of HMDs can be extended by integrating eye tracking and voice recognition with the HMD, creating a true hands-free interface. We have developed a low-cost eye tracking system for an HMD using commercially available hardware and easily configurable software developed in-house. This system, along with commercial speech recognition, allows one to control computer applications without the need for a keyboard or mouse.


IEEE Intelligent Systems | 1993

Integrated mobile-robot design: winning the AAAI 1992 robot competition

David Kortenkamp; Marcus J. Huber; Charles J. Cohen; Ulrich Raschke; Clint Bidlack; Clare Bates Congdon; Frank V. Koss; Terry E. Weymouth

The Carmel project (computer-aided robotics for maintenance, emergency, and life support), which won the AAAI 1992 Robot Competition, is discussed. Carmel's design philosophy and architecture, obstacle avoidance, global path planning, vision sensing, landmark triangulation, and supervisory planning system are described. The Carmel project shows that mobile robots can perform carefully chosen tasks reliably and efficiently, although this requires extensive integration of components and a solid engineering effort.


Applied Imagery Pattern Recognition Workshop | 2001

A basic hand gesture control system for PC applications

Charles J. Cohen; Glenn J. Beach; Gene Foulk

We discuss the issues involved in controlling computer applications via gestures composed of both static symbols and dynamic motions. Each gesture is modeled from either static model information or a linear-in-parameters dynamic system. Recognition occurs in a real-time environment using a small amount of processing time and memory. We examine which gestures are appropriate, how the gestures can be recognized, and which commands the gestures should control. The tracking method is detailed, along with its use in providing coordinates for gesture control of a PowerPoint presentation.
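The last step, mapping tracked coordinates to application commands, can be sketched as classifying a gesture's net displacement into a command. The command names, the pixel threshold, and the displacement rule below are hypothetical illustrations, not the system's actual command set.

```python
def classify_swipe(points, min_dist=50.0):
    """Map a tracked hand trajectory to a presentation command.
    points: [(x, y), ...] image coordinates sampled over one gesture.
    Returns None when the motion is too small, i.e. a static pose
    should be matched against the symbol models instead."""
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    if abs(dx) < min_dist and abs(dy) < min_dist:
        return None
    if abs(dx) >= abs(dy):                       # dominant horizontal motion
        return "next_slide" if dx > 0 else "previous_slide"
    return "first_slide" if dy < 0 else "last_slide"  # image y grows downward
```

Because only the endpoints of the trajectory are needed, this style of rule costs almost nothing per frame, in keeping with the small processing budget the abstract emphasizes.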


IEEE Transactions on Consumer Electronics | 1998

Video mirroring and iconic gestures: enhancing basic videophones to provide visual coaching and visual control

Lynn Conway; Charles J. Cohen

In this paper we present concepts and architectures for mirroring and gesturing into remote sites when video conferencing. Mirroring enables those at one site to visually coach those at a second site by pointing at locally referenceable objects in the scene reflected back to the second site. Thus mirroring provides a way to train people at remote sites in practical tasks such as operating equipment and assembling or fixing things. We also discuss how video mirroring can be extended to enable visual control of remote mechanisms, even when using basic videophones, by using a visual interpreter at the remote site to process transmitted visual cues and derive intended control actions in the remote scene.


Applied Imagery Pattern Recognition Workshop | 2001

A realtime object tracking system using a color camera

George V. Paul; Glenn J. Beach; Charles J. Cohen

We describe a real-time object tracking system based on a color camera and a personal computer. The system is capable of tracking colored objects in the camera view in real-time. The algorithm uses the color, shape and motion of the object to achieve robust tracking even in the presence of partial occlusion and shape change. A key component of the system is a computationally efficient method for tracking colored objects, which makes robust real-time tracking possible.
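A minimal sketch of the color component of such a tracker: threshold each pixel against a target color and report the centroid of the matching region as the object's position in that frame. The per-channel tolerance test is an illustrative assumption; combining the color cue with the shape and motion cues the system uses is omitted here.

```python
def track_colored_object(frame, target, tol=30):
    """Return the centroid (x, y) of pixels within tol of the target (r, g, b)
    color, or None if no pixel matches. frame is a 2-D list of RGB tuples,
    standing in for one camera image."""
    sx = sy = n = 0
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if (abs(r - target[0]) <= tol and abs(g - target[1]) <= tol
                    and abs(b - target[2]) <= tol):
                sx += x
                sy += y
                n += 1
    if n == 0:
        return None
    return sx / n, sy / n
```

A centroid is robust to partial occlusion in the sense that hiding part of the colored region only shifts the estimate, rather than losing the object outright; restricting the scan to a window around the last known position is the usual way to keep the per-frame cost low.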


Applied Imagery Pattern Recognition Workshop | 2008

Behavior recognition architecture for surveillance applications

Charles J. Cohen; Katherine Scott; Marcus J. Huber; Steven C. Rowe; Frank Morelli

Differentiating between normal human activity and aberrant behavior via closed circuit television cameras is a difficult and fatiguing task. The vigilance required of human observers when engaged in such tasks must remain constant, yet attention falls off dramatically over time. In this paper we propose an architecture for capturing data and creating a test and evaluation system to monitor video sensors and tag aberrant human activities for immediate review by human monitors. A psychological perspective provides the inspiration of depicting isolated human motion by point-light walker (PLW) displays, as they have been shown to be salient for recognition of action. Low level intent detection features are used to provide an initial evaluation of actionable behaviors. This relies on strong tracking algorithms that can function in an unstructured environment under a variety of environmental conditions. Critical to this is creating a description of “suspicious behavior” that can be used by the automated system. The resulting confidence value assessments are useful for monitoring human activities and could potentially provide early warning of IED placement activities.


Systems, Man and Cybernetics | 1998

Issues of controlling public kiosks and other self service machines using gesture recognition

Charles J. Cohen; Glenn J. Beach; George V. Paul; Jay Obermark; Gene Foulk

We discuss the issues involved in controlling self service machines via gestures composed of both static symbols and dynamic motions. Each gesture is modeled from either static model information or a linear-in-parameters dynamic system. Recognition occurs in a real-time environment using a small amount of processing time and memory. We will examine which gestures are appropriate, how the gestures can be recognized, and which commands the gestures should control.

Collaboration


Dive into Charles J. Cohen's collaborations.

Top Co-Authors

Lynn Conway

University of Michigan

Clare Bates Congdon

University of Southern Maine