Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where I. Burak Özer is active.

Publication


Featured research published by I. Burak Özer.


IEEE Computer | 2002

Smart cameras as embedded systems

Wayne H. Wolf; I. Burak Özer; Tiehan Lv

Recent technological advances are enabling a new generation of smart cameras that represent a quantum leap in sophistication. While today's digital cameras capture images, smart cameras capture high-level descriptions of the scene and analyze what they see. These devices could support a wide variety of applications including human and animal detection, surveillance, motion analysis, and facial identification. Video processing has an insatiable demand for real-time performance. Smart cameras leverage very large-scale integration to meet this need in a low-cost, low-power system with substantial memory. Moving well beyond pixel processing and compression, these VLSI systems run a wide range of algorithms to extract meaning from streaming video. Recently, Princeton University researchers developed a first-generation smart camera system that can detect people and analyze their movement in real time. Because they push the design space in so many dimensions, these smart cameras are a leading-edge application for embedded system research.


Computer Vision and Pattern Recognition | 2006

Hardware/Software Co-Design of an FPGA-based Embedded Tracking System

Jason Schlessman; Cheng-Yao Chen; Wayne H. Wolf; I. Burak Özer; Kenji Fujino; Kazurou Itoh

This paper discusses a practical design experience pertaining to a tracking system employing optical flow. The system was previously extracted from an existing software implementation and modified for FPGA deployment. Details are provided regarding transference of the resulting high-level design to a usable form for FPGA fabrics. Furthermore, discussion is given for obstacles made manifest in embedded vision design and the methods employed for overcoming them. This is attempted with the intent of maintaining a consistent level of vision algorithm performance as well as meeting real-time requirements. The system discussed differs from previous embedded systems employing optical flow in that it consists strictly of fully disclosed nonproprietary transferable components while providing performance measures for power consumption, latency, and area. The system was synthesized onto a Xilinx Virtex-II Pro XC2VP30 FPGA utilizing less than 25% of system resources, performing with a maximum operating frequency of 67 MHz without pipelining, and consuming 497 mW of power.
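
The optical-flow computation at the heart of such a tracker is commonly a Lucas-Kanade-style least-squares fit of a flow vector to image gradients. A minimal single-window sketch in Python (an illustrative reference model only, not the paper's FPGA implementation; the function name and test frames are invented):

```python
import numpy as np

def lucas_kanade_global(f1, f2):
    """Single-window Lucas-Kanade sketch: least-squares fit of one
    (u, v) flow vector over the whole frame from image gradients,
    assuming brightness constancy and small motion."""
    Iy, Ix = np.gradient(f1.astype(float))      # spatial gradients of frame 1
    It = f2.astype(float) - f1.astype(float)    # temporal gradient
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    u, v = np.linalg.solve(A, b)                # normal equations
    return u, v
```

In practice the computation is windowed per feature rather than global, which is part of what makes a hardware mapping interesting: the same small linear solve repeats over many windows.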


Workshop on Human Motion | 2000

Human activity detection in MPEG sequences

I. Burak Özer; Wayne H. Wolf; Ali N. Akansu

We propose a hierarchical method for human detection and activity recognition in MPEG sequences. The algorithm consists of three stages at different resolution levels. The first step is based on the principal component analysis of MPEG motion vectors of macroblocks grouped according to velocity, distance and human body proportions. This step reduces the complexity and amount of processing data. The DC DCT components of luminance and chrominance are the input for the second step, to be matched to activity templates and a human skin template. A more detailed analysis of the uncompressed regions extracted in previous steps is done at the last step via model-based segmentation and graph matching. This hierarchical scheme enables working at different levels, from low complexity to low false rates. It is important and interesting to realize that significant information can be obtained from the compressed domain in order to connect to high level semantics.
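
The first stage's principal component analysis of motion vectors can be sketched as follows (illustrative only, not the authors' implementation; the grouping by velocity, distance, and body proportions is assumed to have already produced the `motion_vectors` array):

```python
import numpy as np

def motion_vector_pca(motion_vectors, n_components=2):
    """Project (dx, dy) macroblock motion vectors onto their principal axes."""
    X = motion_vectors - motion_vectors.mean(axis=0)   # center the data
    cov = X.T @ X / len(X)                             # 2x2 sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)             # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                  # largest variance first
    components = eigvecs[:, order[:n_components]]
    return X @ components, eigvals[order]

# coherent horizontal motion: nearly all variance lands on the first axis
rng = np.random.default_rng(0)
mv = np.column_stack([rng.normal(3.0, 1.0, 64), rng.normal(0.0, 0.2, 64)])
proj, variances = motion_vector_pca(mv)
```

The point of this stage in the paper is data reduction: a few principal components summarize the motion field of a candidate region far more cheaply than the full pixel data.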


IEEE Transactions on Multimedia | 2002

A hierarchical human detection system in (un)compressed domains

I. Burak Özer; Wayne H. Wolf

We propose a hierarchical retrieval system where shape, color and motion characteristics of the human body are captured in compressed and uncompressed domains. The proposed retrieval method provides human detection and activity recognition at different resolution levels from low complexity to low false rates and connects low level features to high level semantics by developing relational object and activity presentations. The information available from standard video compression algorithms is used to reduce the time and storage needed for information retrieval. Principal component analysis is used for activity recognition using MPEG motion vectors, and results are presented for walking, kicking, and running to demonstrate that the classification among activities is clearly visible. For low resolution and monochrome images it is demonstrated that the structural information of human silhouettes can be captured from AC-DCT coefficients.
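
Capturing block structure from AC-DCT coefficients can be illustrated with a small self-contained numpy sketch (not the paper's feature extractor; the block size and number of retained coefficients here are arbitrary choices):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def block_ac_features(gray, block=8, n_ac=5):
    """Per-block AC-DCT feature sketch: 2-D DCT of each block,
    discard the DC term, keep the first few AC coefficients."""
    C = dct_matrix(block)
    h, w = gray.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = C @ gray[y:y + block, x:x + block] @ C.T
            feats.append(coeffs.flatten()[1:1 + n_ac])  # skip DC at [0,0]
    return np.array(feats)
```

A flat block yields zero AC energy while a block containing an edge does not, which is the intuition behind recovering silhouette structure directly from compressed-domain coefficients.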


International Conference on Multimedia and Expo | 2004

A real-time background subtraction method with camera motion compensation

Tiehan Lv; I. Burak Özer; Wayne H. Wolf

Background subtraction algorithms are critical to many video recognition/analysis systems and have been studied for decades. Most of the algorithms assume that the camera is fixed. We propose a background subtraction algorithm that works in the presence of camera shake. In this algorithm, the input frames are compensated and compared with the given reference frame to separate foreground objects from the background. Experimental results show that the proposed method outperforms the widely used Gaussian mixture model based method in both fixed camera and shaking camera scenarios with respect to accuracy, robustness, and efficiency.
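
A minimal global-translation version of compensate-then-subtract might look like this (an illustrative sketch, not the paper's compensation method; the integer shift search, threshold, and wrap-around shifting are all simplifying assumptions):

```python
import numpy as np

def compensate_and_subtract(frame, reference, max_shift=4, thresh=25):
    """Shake-compensation sketch: search small integer shifts that best
    align the frame to the reference (minimum mean absolute difference),
    then threshold the compensated difference to get a foreground mask."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
            err = np.mean(np.abs(shifted.astype(int) - reference.astype(int)))
            if err < best_err:
                best_err, best = err, (dy, dx)
    dy, dx = best
    comp = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
    mask = np.abs(comp.astype(int) - reference.astype(int)) > thresh
    return mask, best
```

Real systems replace the exhaustive search with a cheaper motion estimate and handle the border pixels that wrap-around shifting corrupts; the sketch only shows the compensate-then-difference structure.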


International Conference on Multimedia and Expo | 2007

Heterogeneous MPSoC Architectures for Embedded Computer Vision

Jason Schlessman; Mark Lodato; I. Burak Özer; Wayne H. Wolf

In this paper, architectures for two distinct embedded computer vision operations are presented. Motivation is given for the utilization of heterogeneous processing cores on a single chip. In addition, a brief discussion of applicability of multi-processor system on a chip (MPSoC) design challenges and techniques to nascent multi-core development considerations is given. Furthermore, a composite architecture consisting of the two distinct operations is discussed, with relative merits of this approach provided. Finally, experimental analysis is given for the applicability and feasibility of these heterogeneous multiprocessor architectures. Area, power, and cycle times are provided for each of the aforementioned designs. The architectural mappings were implemented on a Xilinx Virtex-II Pro V2P30 FPGA, and are shown to operate without pipelining at 50 MHz, utilizing roughly 46% of FPGA resources, and consuming 565 mW of power.


International Conference on Multimedia and Expo | 2003

Architectures for distributed smart cameras

Wayne H. Wolf; I. Burak Özer; Tiehan Lv

This paper describes our new multiple-camera architecture for real-time video analysis. This architecture uses an array of relatively inexpensive cameras to gather images in order to provide high resolution at low cost. The system also uses a hierarchy of cameras, including both wide-angle and telephoto views. Wide-angle cameras are responsible for camera coordination while telephoto cameras are primarily responsible for detailed processing of parts of the scene.


International Conference on Distributed Smart Cameras | 2007

Real-Time Human Motion Detection with Distributed Smart Cameras

Mark Daniels; Kate Muldawer; Jason Schlessman; I. Burak Özer; Wayne H. Wolf

Many smart camera security systems employ a single camera model; this makes depth perception impossible, and the occlusion of objects (either by fixtures or by other body parts of the subject) prevents meaningful task automation. Multi-camera systems have significant overhead in communication and three-dimensional modeling. We have developed a multi-camera system that overcomes these issues. Two cameras observing the same space from different vantage points provide depth perception of a subject so that the positions of the hands and face can be mapped in three dimensions. Unlike other three-dimensional modeling programs, we use an ultra-compression method and build on existing message passing interface (MPI) middleware for communication, allowing for real-time performance. Our application provides a framework for robust motion detection and gesture recognition.
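
The depth perception that two vantage points provide reduces, for rectified horizontally aligned pinhole cameras, to triangulation from disparity (a textbook stereo relation, not code from the paper; the parameter values below are made up):

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Pinhole stereo sketch: a point seen at column x_left in the left
    image and x_right in the right image has disparity d = x_left - x_right
    and depth Z = f * B / d (f in pixels, baseline B in meters)."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: point at infinity or miscalibration")
    return focal_px * baseline_m / d

# e.g. 20 px disparity with an 800 px focal length and 10 cm baseline -> 4 m
z = depth_from_disparity(330.0, 310.0, focal_px=800.0, baseline_m=0.1)
```

The hard part in a real system is not this formula but finding reliable correspondences (hands, face) between the two views in real time.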


International Conference on Multimedia and Expo | 2002

A bottom-up approach for activity recognition in smart rooms

I. Burak Özer; Tiehan Lv; Wayne H. Wolf

We propose a smart camera system where the cameras detect the presence of a person and recognize activities of this person. A relational graph-based modeling of the human body and an HMM-based activity recognition of the body parts are proposed for real-time video analysis. The results show that more than 86 percent of the body parts and 88 percent of the activities are correctly classified. We also describe the relationship between the activity detection algorithms and the architectures required to perform these tasks in real time. We achieve a processing rate of more than 20 frames per second on each TriMedia video capture board.
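
HMM-based activity recognition typically scores an observation sequence under one trained model per activity and picks the best. A toy discrete-HMM sketch of that decision rule (illustrative only; the states, symbols, and probabilities below are invented, not the paper's trained models):

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-output
    HMM with start probabilities pi, transition matrix A, and emission
    matrix B (rows: states, columns: observation symbols)."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()                 # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_lik += np.log(s)
        alpha /= s
    return log_lik

def classify_activity(obs, models):
    """Pick the activity whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))
```

With one such model per body-part activity, classification is a handful of small matrix-vector products per frame, which is what makes the real-time budget on an embedded board plausible.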


Signal Processing Systems | 2001

A smart camera for real-time human activity recognition

Wayne H. Wolf; I. Burak Özer

This paper describes a smart camera system under development at Princeton University. This smart camera is designed for use in a smart room in which the camera detects the presence of a person in its visual field and determines when various gestures are made by the person. As a first step toward a VLSI implementation, we use TriMedia processors hosted by a PC. This paper describes the relationship between the algorithms used for human activity detection and the architectures required to perform these tasks in real time.

Collaboration


Dive into I. Burak Özer's collaborations.

Top Co-Authors

Wayne H. Wolf

Georgia Institute of Technology

Ali N. Akansu

New Jersey Institute of Technology

Marilyn Wolf

Georgia Institute of Technology
