Publication


Featured research published by Yiğithan Dedeoğlu.


Pattern Recognition Letters | 2006

Computer vision based method for real-time fire and flame detection

B. Uğur Töreyin; Yiğithan Dedeoğlu; Uğur Güdükbay; A. Enis Cetin

This paper proposes a novel method to detect fire and/or flames in real time by processing the video data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, flame and fire flicker is detected by analyzing the video in the wavelet domain. Quasi-periodic behavior in flame boundaries is detected by performing a temporal wavelet transform. Color variations in flame regions are detected by computing the spatial wavelet transform of moving fire-colored regions. Another clue used in the fire detection algorithm is the irregularity of the boundary of the fire-colored region. All of the above clues are combined to reach a final decision. Experimental results show that the proposed method is very successful in detecting fire and/or flames. In addition, it drastically reduces the false alarms issued for ordinary fire-colored moving objects compared to methods using only motion and color clues.
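The temporal flicker analysis can be illustrated with a short sketch. The fragment below is not the authors' implementation; it assumes a per-pixel intensity time series sampled from consecutive frames and uses a one-level Haar wavelet detail band, with the energy and zero-crossing thresholds chosen purely for illustration.

# Illustrative sketch: flag quasi-periodic "flicker" in a per-pixel intensity
# time series via the energy and sign changes of the Haar detail band.
# Thresholds are placeholders, not values from the paper.
import numpy as np

def haar_detail(signal):
    """One-level Haar wavelet detail (high-pass) coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:
        s = s[:-1]                      # make the length even
    return (s[0::2] - s[1::2]) / np.sqrt(2.0)

def looks_like_flicker(pixel_series, energy_thresh=25.0, crossing_thresh=4):
    """Strong high-frequency energy plus several sign changes in the detail
    band is taken here as flame-like quasi-periodic behavior."""
    d = haar_detail(pixel_series)
    energy = float(np.mean(d ** 2))
    crossings = int(np.sum(np.signbit(d[:-1]) != np.signbit(d[1:])))
    return energy > energy_thresh and crossings >= crossing_thresh

if __name__ == "__main__":
    t = np.arange(64)
    flame_like = 128 + 40 * np.sin(2 * np.pi * t / 6.0) + 5 * np.random.randn(64)
    static_like = 128 + 2 * np.random.randn(64)
    print(looks_like_flicker(flame_like))   # expected: True
    print(looks_like_flicker(static_like))  # expected: False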


International Conference on Image Processing | 2005

Flame detection in video using hidden Markov models

B.U. Toreyin; Yiğithan Dedeoğlu; A.E. Cetin

This paper proposes a novel method to detect flames in video by processing the data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, the flame flicker process is detected using hidden Markov models. Markov models representing flames and ordinary flame-colored moving objects are used to distinguish the flame flicker process from the motion of flame-colored objects. Spatial color variations in flames are also evaluated by the same Markov models. These clues are combined to reach a final decision. False alarms due to the ordinary motion of flame-colored moving objects are greatly reduced compared to existing video-based fire detection systems.
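The two-model comparison at the heart of this approach can be sketched as follows. The snippet does not use the paper's trained models: the observation alphabet, transition matrices, and emission probabilities are made-up placeholders, and it only shows how a quantized temporal observation sequence would be assigned to whichever discrete HMM (flame versus ordinary flame-colored object) yields the higher forward likelihood.

# Illustrative sketch (not the paper's trained models): classify a quantized
# temporal observation sequence by comparing its likelihood under two discrete
# HMMs, one standing in for "flame flicker" and one for an ordinary
# flame-colored moving object. All probabilities are made-up placeholders.
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = start * emit[:, obs[0]]
    scale = alpha.sum()
    log_like = np.log(scale)
    alpha = alpha / scale
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        scale = alpha.sum()
        log_like += np.log(scale)
        alpha = alpha / scale
    return log_like

# Observation symbols: 0 = little, 1 = moderate, 2 = large frame-to-frame
# color change at the pixel. Three hidden states per model.
flame = dict(
    start=np.array([0.4, 0.3, 0.3]),
    trans=np.array([[0.2, 0.4, 0.4],
                    [0.4, 0.2, 0.4],
                    [0.4, 0.4, 0.2]]),
    emit=np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.5, 0.3],
                   [0.1, 0.3, 0.6]]),
)
ordinary = dict(
    start=np.array([0.8, 0.1, 0.1]),
    trans=np.array([[0.9, 0.05, 0.05],
                    [0.3, 0.6, 0.1],
                    [0.3, 0.1, 0.6]]),
    emit=np.array([[0.8, 0.15, 0.05],
                   [0.5, 0.4, 0.1],
                   [0.4, 0.4, 0.2]]),
)

def classify(obs):
    """Pick the model that explains the observation sequence better."""
    if forward_log_likelihood(obs, **flame) > forward_log_likelihood(obs, **ordinary):
        return "flame"
    return "ordinary object"

print(classify([2, 1, 2, 0, 2, 1, 2, 2, 1, 2]))  # rapid variation -> "flame"
print(classify([0, 0, 1, 0, 0, 0, 0, 1, 0, 0]))  # mostly steady -> "ordinary object"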


International Conference on Computer Vision | 2005

HMM based falling person detection using both audio and video

B. Uğur Töreyin; Yiğithan Dedeoğlu; A. Enis Cetin

Automatic detection of a falling person in video is an important problem with applications in security and safety areas, including supportive home environments and CCTV surveillance systems. In this paper, human motion in video is modeled using hidden Markov models (HMMs). In addition, the audio track of the video is used to distinguish a person simply sitting on a floor from a person stumbling and falling. Most video recording systems can record audio as well, so the impact sound of a falling person is available as an additional clue. A decision based on the audio channel data is also reached using HMMs and fused with the results of the HMMs modeling the video data to reach a final decision.
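Only the audio-video fusion step is sketched below; the per-channel HMM scoring is assumed to have already produced a log-likelihood ratio for each channel. The weights, the threshold, and the example scores are illustrative values, not figures from the paper.

# Illustrative sketch of decision fusion only (not the paper's method): each
# channel's HMM analysis is assumed to supply a log-likelihood ratio
# log P(obs | "fall" model) - log P(obs | "ordinary motion" model); a fall is
# declared only when the combined evidence from video and audio is positive.
def fuse_fall_decision(video_llr, audio_llr, video_weight=1.0, audio_weight=1.0,
                       threshold=0.0):
    """Weighted-sum fusion of per-channel log-likelihood ratios.
    Weights and threshold are illustrative placeholders."""
    score = video_weight * video_llr + audio_weight * audio_llr
    return score > threshold

# A person sitting down: video looks somewhat fall-like, audio hears no impact.
print(fuse_fall_decision(video_llr=0.8, audio_llr=-2.5))   # False
# A real fall: both the motion pattern and the impact sound support "fall".
print(fuse_fall_decision(video_llr=1.5, audio_llr=3.0))    # True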


International Conference on Computer Vision | 2006

Silhouette-based method for object classification and human action recognition in video

Yiğithan Dedeoğlu; B. Ugur Toreyin; Uğur Güdükbay; A. Enis Cetin

In this paper we present an instance-based machine learning algorithm and system for real-time object classification and human action recognition, which can help to build intelligent surveillance systems. The proposed method makes use of object silhouettes to classify objects and actions of humans present in a scene monitored by a stationary camera. An adaptive background subtraction model is used for object segmentation. A template-matching-based supervised learning method is adopted to classify objects into classes such as human, human group, and vehicle, and human actions into predefined classes such as walking, boxing, and kicking, by making use of object silhouettes.
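A minimal sketch of the two ingredients named in the abstract, adaptive background subtraction and template-based silhouette matching, is given below. It is not the authors' system: the exponential learning rate, the difference threshold, and the row-profile silhouette descriptor are simplifications introduced for this example.

# Illustrative sketch (not the authors' system): running-average background
# model for silhouette extraction, plus a 1-NN template match on the
# silhouette's row profile. All parameters are placeholders.
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Adaptive background: exponential running average of grayscale frames."""
    return (1.0 - alpha) * background + alpha * frame

def extract_silhouette(background, frame, thresh=25.0):
    """Foreground mask from the absolute background difference."""
    return np.abs(frame.astype(float) - background) > thresh

def row_profile(mask, bins=32):
    """Fraction of foreground pixels per row, resampled to a fixed length;
    a crude stand-in for a silhouette descriptor."""
    rows = mask.mean(axis=1)
    idx = np.linspace(0, len(rows) - 1, bins).round().astype(int)
    prof = rows[idx]
    total = prof.sum()
    return prof / total if total > 0 else prof

def classify_silhouette(mask, templates):
    """Assign the label of the closest stored template profile (1-NN).
    `templates` is a list of (label, profile) pairs built from labeled
    training silhouettes, e.g. [("human", h_prof), ("vehicle", v_prof)]."""
    prof = row_profile(mask)
    label, _ = min(((lbl, np.linalg.norm(prof - tpl)) for lbl, tpl in templates),
                   key=lambda item: item[1])
    return label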


Multimodal Processing and Interaction | 2008

Surveillance Using Both Video and Audio

Yiğithan Dedeoğlu; B. Ugur Toreyin; Uğur Güdükbay; A. Enis Cetin

It is now possible to install cameras monitoring sensitive areas, but it may not be possible to assign a security guard to each camera or set of cameras. In addition, security guards may get tired and stare blankly at the monitor without noticing important events taking place in front of their eyes. Current CCTV surveillance systems are mostly based on video, and intelligent video analysis systems capable of detecting humans and cars have recently been developed for surveillance applications. Such systems mostly use hidden Markov models (HMMs) or support vector machines (SVMs) to reach decisions. They detect important events, but they also produce false alarms. It is possible to take advantage of other low-cost sensors, including audio, to reduce the number of false alarms. Most video recording systems have the capability of recording audio as well. Analysis of audio for intelligent information extraction is a relatively new area. Broken glass sounds, car crash sounds, screams, and an increasing background sound level are indicators of important events that can be detected automatically. By combining the information coming from the audio channel with the information from the video channels, reliable surveillance systems can be built. In this chapter, the current state of the art is reviewed and an intelligent surveillance system analyzing both audio and video channels is described.


Conference on Image and Video Retrieval | 2007

Dynamic texture detection, segmentation and analysis

B. U. Toreyin; Yiğithan Dedeoğlu; A. E. Cetin; S. Fazekas; D. Chetverikov; Tomer Amiaz; Nahum Kiryati

Dynamic textures are common in natural scenes. Examples of dynamic textures in video include fire, smoke, clouds, trees in the wind, sky, and sea and ocean waves. In this showcase, (i) we develop real-time dynamic texture detection methods for video and (ii) present solutions to video object classification based on motion information.


Conference on Image and Video Retrieval | 2007

Flexible test-bed for unusual behavior detection

István Petrás; Csaba Beleznai; Yiğithan Dedeoğlu; Montse Pardàs; Levente Attila Kovács; Zoltán Szlávik; László Rajmund Havasi; Tamás Szirányi; B. Ugur Toreyin; Uğur Güdükbay; A. Enis Cetin; Cristian Canton-Ferrer

Visual surveillance and activity analysis is an active research field of computer vision, and as a result several different algorithms have been produced for this purpose. To obtain more robust systems, it is desirable to integrate the different algorithms. To help achieve this goal, we propose a flexible, distributed software collaboration framework and present a prototype system for automatic event analysis.


Digital Signal Processing | 2013

Motion capture and human pose reconstruction from a single-view video sequence

Uğur Güdükbay; İbrahim Demir; Yiğithan Dedeoğlu

We propose a framework to reconstruct the 3D pose of a human for animation from a sequence of single-view video frames. The framework starts with background estimation, and the performer's silhouette is extracted from each frame using image subtraction. The body silhouettes are then automatically labeled using a model-based approach. Finally, the 3D pose is constructed from the labeled human silhouette by assuming orthographic projection. The proposed approach does not require camera calibration. It assumes that the input video has a static background, has no significant perspective effects, and that the performer is in an upright position. The proposed approach requires minimal user interaction.
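The orthographic-projection assumption mentioned in the abstract is what makes relative depths recoverable from a single view; a minimal sketch of that single step is shown below. The scale factor, the units, and the handling of the front/back ambiguity are assumptions made for this example, not details taken from the paper.

# Illustrative sketch of the orthographic-projection step only: given the 2D
# image coordinates of a limb's two endpoints, a known model limb length, and
# a scale factor, the out-of-image depth difference follows from
# L^2 = dx^2 + dy^2 + dz^2 (up to a front/back ambiguity).
import math

def limb_depth_offset(p1, p2, limb_length, scale=1.0):
    """Return |dz| between two joints under orthographic projection.
    p1 and p2 are (x, y) image points; limb_length is in the same units as
    the image coordinates divided by `scale` (placeholder convention)."""
    dx = (p2[0] - p1[0]) / scale
    dy = (p2[1] - p1[1]) / scale
    planar_sq = dx * dx + dy * dy
    if planar_sq > limb_length ** 2:
        return 0.0  # limb appears longer than its model length: treat as in-plane
    return math.sqrt(limb_length ** 2 - planar_sq)

# Upper arm of model length 30 (arbitrary units) seen as an 18-by-12 pixel segment:
print(round(limb_depth_offset((100, 200), (118, 212), limb_length=30.0), 2))  # ~20.78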


Signal Processing and Communications Applications Conference | 2005

Real-time smoke and flame detection in video

B.U. Toreyin; Yiğithan Dedeoğlu; A.E. Cetin

A novel method to detect smoke and/or flames by processing the video data generated by an ordinary camera monitoring a scene is proposed. It is assumed that the camera is stationary. Since smoke is semi-transparent, the edges in image frames start losing their sharpness, which leads to a decrease in the high-frequency content of the image. To detect smoke, the background of the scene is estimated and the decrease in the high-frequency energy of the scene is monitored using the spatial wavelet transforms of the current and background images. For the detection of flames, in addition to ordinary motion and color clues, flicker analysis is carried out by analyzing the video in the wavelet domain. These clues are combined to reach a final decision.
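The edge-softening cue can be illustrated with a short sketch: smoke lowers the spatial high-frequency energy of a region relative to the estimated background. The block-based Haar details and the 0.5 drop ratio below are simplifications chosen for this example, not the authors' implementation.

# Illustrative sketch: smoke softens edges, so the spatial high-frequency
# energy of an image block should drop relative to the same block in the
# estimated background. Parameters are placeholders, not values from the paper.
import numpy as np

def haar_detail_energy(block):
    """Sum of squared one-level Haar horizontal + vertical detail coefficients."""
    b = np.asarray(block, dtype=float)
    h = (b[:, 0::2] - b[:, 1::2]) / np.sqrt(2.0)   # horizontal details
    v = (b[0::2, :] - b[1::2, :]) / np.sqrt(2.0)   # vertical details
    return float((h ** 2).sum() + (v ** 2).sum())

def smoke_candidate(background_block, current_block, drop_ratio=0.5, min_energy=1e-6):
    """Flag a block whose edge energy fell below `drop_ratio` of the background's."""
    e_bg = haar_detail_energy(background_block)
    e_cur = haar_detail_energy(current_block)
    return e_bg > min_energy and e_cur < drop_ratio * e_bg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    background = rng.integers(0, 255, size=(16, 16)).astype(float)  # sharp texture
    hazy = 0.3 * background + 0.7 * background.mean()               # edges washed out
    print(smoke_candidate(background, hazy))        # expected: True
    print(smoke_candidate(background, background))  # expected: False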


European Signal Processing Conference | 2005

Wavelet based real-time smoke detection in video

B. Uğur Töreyin; Yiğithan Dedeoğlu; A. Enis Cetin

Collaboration


Dive into Yiğithan Dedeoğlu's collaborations.

Top Co-Authors

István Petrás

Hungarian Academy of Sciences

Tamás Szirányi

Hungarian Academy of Sciences

Zoltán Szlávik

Hungarian Academy of Sciences
