Publication


Featured research published by Alan J. Lipton.


Workshop on Applications of Computer Vision | 1998

Moving target classification and tracking from real-time video

Alan J. Lipton; Hironobu Fujiyoshi; Raju S. Patil

This paper describes an end-to-end method for extracting moving targets from a real-time video stream, classifying them into predefined categories according to image-based properties, and then robustly tracking them. Moving targets are detected using the pixel-wise difference between consecutive image frames. A classification metric is applied to these targets with a temporal consistency constraint to classify them into three categories: human, vehicle, or background clutter. Once classified, targets are tracked by a combination of temporal differencing and template matching. The resulting system robustly identifies targets of interest, rejects background clutter, and continually tracks over large distances and periods of time despite occlusions, appearance changes, and cessation of target motion.
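The detection step described above, pixel-wise differencing of consecutive frames, can be sketched as follows; the threshold value and array sizes are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def detect_motion(frame_prev, frame_curr, threshold=25):
    """Flag pixels whose absolute inter-frame difference exceeds a
    threshold; connected regions of this mask are candidate targets.
    Threshold of 25 is an assumed value for illustration."""
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    return diff > threshold

# Toy example: a bright 3x3 "target" shifts one pixel to the right.
prev = np.zeros((10, 10), dtype=np.uint8)
curr = np.zeros((10, 10), dtype=np.uint8)
prev[4:7, 2:5] = 200
curr[4:7, 3:6] = 200
mask = detect_motion(prev, curr)
print(mask.sum())  # 6 changed pixels: the trailing and leading columns
```

In the full system, a temporal consistency constraint over several frames then separates persistent targets (humans, vehicles) from transient background clutter.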


Proceedings of the IEEE | 2001

Algorithms for cooperative multisensor surveillance

Robert T. Collins; Alan J. Lipton; Hironobu Fujiyoshi; Takeo Kanade

The Video Surveillance and Monitoring (VSAM) team at Carnegie Mellon University (CMU) has developed an end-to-end, multicamera surveillance system that allows a single human operator to monitor activities in a cluttered environment using a distributed network of active video sensors. Video understanding algorithms have been developed to automatically detect people and vehicles, seamlessly track them using a network of cooperating active sensors, determine their three-dimensional locations with respect to a geospatial site model, and present this information to a human operator who controls the system through a graphical user interface. The goal is to automatically collect and disseminate real-time information to improve the situational awareness of security providers and decision makers. The feasibility of real-time video surveillance has been demonstrated within a multicamera testbed system developed on the campus of CMU. This paper presents an overview of the issues and algorithms involved in creating this semiautonomous, multicamera surveillance system.
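One element of the pipeline above, placing a detected target into scene coordinates from a camera observation, can be illustrated with a simple ray-ground intersection; the flat-ground assumption and all numbers here are hypothetical simplifications of the geospatial site model used in VSAM.

```python
import numpy as np

def geolocate(cam_pos, ray_dir, ground_z=0.0):
    """Intersect a viewing ray from the camera with the plane z = ground_z.
    A site-model version would instead march the ray against terrain."""
    t = (ground_z - cam_pos[2]) / ray_dir[2]
    return cam_pos + t * ray_dir

cam = np.array([0.0, 0.0, 10.0])   # camera 10 m above the ground
ray = np.array([1.0, 0.0, -1.0])   # looking down at 45 degrees
print(geolocate(cam, ray))         # target grounded at [10. 0. 0.]
```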


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000

Introduction to the special section on video surveillance

Robert T. Collins; Alan J. Lipton; Takeo Kanade

Automated video surveillance addresses real-time observation of people and vehicles within a busy environment, leading to a description of their actions and interactions. The technical issues include moving object detection and tracking, object classification, human motion analysis, and activity understanding, touching on many of the core topics of computer vision, pattern analysis, and artificial intelligence. Video surveillance has spawned large research projects in the United States, Europe, and Japan, and has been the topic of several international conferences and workshops in recent years. There are immediate needs for automated surveillance systems in commercial, law enforcement, and military applications. Mounting video cameras is cheap, but finding available human resources to observe the output is expensive. Although surveillance cameras are already prevalent in banks, stores, and parking lots, video data currently is used only “after the fact” as a forensic tool, thus losing its primary benefit as an active, real-time medium. What is needed is continuous 24-hour monitoring of surveillance video to alert security officers to a burglary in progress or to a suspicious individual loitering in the parking lot, while there is still time to prevent the crime. In addition to the obvious security applications, video surveillance technology has been proposed to measure traffic flow, detect accidents on highways, monitor pedestrian congestion in public spaces, compile consumer demographics in shopping malls and amusement parks, log routine maintenance tasks at nuclear facilities, and count endangered species. The numerous military applications include patrolling national borders, measuring the flow of refugees in troubled areas, monitoring peace treaties, and providing secure perimeters around bases and embassies. The 11 papers in this special section illustrate topics and techniques at the forefront of video surveillance research.

These papers can be loosely organized into three categories. Detection and tracking involves real-time extraction of moving objects from video and continuous tracking over time to form persistent object trajectories. C. Stauffer and W.E.L. Grimson introduce unsupervised statistical learning techniques to cluster object trajectories produced by adaptive background subtraction into descriptions of normal scene activity. Viewpoint-specific trajectory descriptions from multiple cameras are combined into a common scene coordinate system using a calibration technique described by L. Lee, R. Romano, and G. Stein, who automatically determine the relative exterior orientation of overlapping camera views by observing a sparse set of moving objects on flat terrain. Two papers address the accumulation of noisy motion evidence over time. R. Pless, T. Brodský, and Y. Aloimonos detect and track small objects in aerial video sequences by first compensating for the self-motion of the aircraft, then accumulating residual normal flow to acquire evidence of independent object motion. L. Wixson notes that motion in the image does not always signify purposeful travel by an independently moving object (examples of such “motion clutter” are wind-blown tree branches and sun reflections off rippling water) and devises a flow-based salience measure to highlight objects that tend to move in a consistent direction over time.

Human motion analysis is concerned with detecting periodic motion signifying a human gait and acquiring descriptions of human body pose over time. R. Cutler and L.S. Davis plot an object’s self-similarity across all pairs of frames to form distinctive patterns that classify bipedal, quadrupedal, and rigid object motion. Y. Ricquebourg and P. Bouthemy track apparent contours in XT slices of an XYT sequence volume to robustly delineate and track articulated human body structure. I. Haritaoglu, D. Harwood, and L.S. Davis present W4, a surveillance system specialized to the task of looking at people. The W4 system can locate people and segment their body parts, build simple appearance models for tracking, disambiguate between and separately track multiple individuals in a group, and detect carried objects such as boxes and backpacks.

Activity analysis deals with parsing temporal sequences of object observations to produce high-level descriptions of agent actions and multiagent interactions. In our opinion, this will be the most important area of future research in video surveillance. N.M. Oliver, B. Rosario, and A.P. Pentland introduce Coupled Hidden Markov Models (CHMMs) to detect and classify interactions consisting of two interleaved agent action streams and present a training method based on synthetic agents to address the problem of parameter estimation from limited real-world training examples. M. Brand and V. Kettnaker present an entropy-minimization approach to estimating HMM topology and
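The periodicity cue used by Cutler and Davis can be sketched with a frame self-similarity matrix; the toy sinusoidal "frames" and their period are invented purely for illustration.

```python
import numpy as np

def self_similarity(frames):
    """S[i, j] = mean absolute difference between frames i and j.
    Periodic motion such as a gait produces low-valued off-diagonal
    bands at multiples of the motion period."""
    n = len(frames)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            S[i, j] = np.mean(np.abs(frames[i] - frames[j]))
    return S

# Toy sequence whose appearance repeats every 4 frames.
frames = [np.full((2, 2), np.sin(2 * np.pi * k / 4)) for k in range(16)]
S = self_similarity(frames)
print(np.isclose(S[0, 4], 0.0))  # frames one period apart match: True
```

Classifying bipedal versus quadrupedal versus rigid motion then amounts to analyzing the lattice pattern of these low-similarity bands.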


Archive | 2000

A System for Video Surveillance and Monitoring

Robert T. Collins; Alan J. Lipton; Takeo Kanade; Hironobu Fujiyoshi; David Duggins; Yanghai Tsin; David Tolliver; Nobuyoshi Enomoto; Osamu Hasegawa; Peter Burt; Lambert E. Wixson


Archive | 1999

A System for Video Surveillance and Monitoring CMU VSAM Final Report

Takeo Kanade; Robert T. Collins; Alan J. Lipton; Hironobu Fujiyoshi; David Duggins


Archive | 1998

Advances in Cooperative Multi-Sensor Video Surveillance

Takeo Kanade; Robert T. Collins; Alan J. Lipton; Peter J. Burt; Lambert E. Wixson


IEICE Transactions on Information and Systems | 2004

Real-time human motion analysis by image skeletonization

Hironobu Fujiyoshi; Alan J. Lipton; Takeo Kanade


Archive | 1999

Local Application of Optic Flow to Analyse Rigid versus Non-Rigid Motion

Alan J. Lipton


Archive | 1997

Cooperative Multi-Sensor Video Surveillance

Takeo Kanade; Robert T. Collins; Alan J. Lipton; Padmanabhan Anandan; Peter J. Burt; Lambert E. Wixson; David Sarno


Archive | 1998

Using a DEM to Determine Geospatial Object Trajectories

Robert T. Collins; Yanghai Tsin; James Ryan Miller; Alan J. Lipton

Collaboration


Alan J. Lipton's top co-authors:

Robert T. Collins, Pennsylvania State University
Takeo Kanade, Carnegie Mellon University
David Duggins, Carnegie Mellon University
Yanghai Tsin, Carnegie Mellon University
David Tolliver, Carnegie Mellon University
Nobuyoshi Enomoto, Carnegie Mellon University