B.U. Toreyin
Bilkent University
Publications
Featured research published by B.U. Toreyin.
international conference on image processing | 2005
B.U. Toreyin; Yiğithan Dedeoğlu; A.E. Cetin
This paper proposes a novel method to detect flames in video by processing the data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, the flame flicker process is detected using hidden Markov models. Markov models representing flames and flame-colored ordinary moving objects are used to distinguish the flame flicker process from the motion of flame-colored moving objects. Spatial color variations in flame are also evaluated by the same Markov models. These clues are combined to reach a final decision. False alarms due to ordinary motion of flame-colored moving objects are greatly reduced compared to existing video-based fire detection systems.
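As a rough illustration of the kind of Markov-model comparison the abstract describes (not the paper's actual parameters or state definitions), the sketch below quantizes a pixel's temporal intensity trace into three states and scores the state sequence under two assumed transition matrices, one standing in for flame flicker and one for ordinary flame-colored motion. All thresholds and matrices are placeholders.

```python
import numpy as np

# Assumed transition matrices: flame flicker switches states often,
# ordinary flame-colored objects vary slowly.
FLAME = np.array([[0.4, 0.3, 0.3],
                  [0.3, 0.4, 0.3],
                  [0.3, 0.3, 0.4]])
NON_FLAME = np.array([[0.90, 0.08, 0.02],
                      [0.10, 0.80, 0.10],
                      [0.02, 0.08, 0.90]])

def quantize(intensity, low=100, high=180):
    """Map pixel intensities to 3 discrete states (thresholds are illustrative)."""
    states = np.zeros_like(intensity, dtype=int)
    states[intensity >= low] = 1
    states[intensity >= high] = 2
    return states

def log_likelihood(states, trans):
    """Log-probability of a state sequence under a first-order Markov chain."""
    return sum(np.log(trans[a, b]) for a, b in zip(states[:-1], states[1:]))

def looks_like_flame(intensity_sequence):
    s = quantize(np.asarray(intensity_sequence))
    return log_likelihood(s, FLAME) > log_likelihood(s, NON_FLAME)

# A rapidly flickering pixel trace vs. a steadily bright one.
print(looks_like_flame([90, 150, 200, 120, 190, 95, 170]))   # likely True
print(looks_like_flame([200, 205, 198, 202, 199, 201, 200])) # likely False
```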
international conference on acoustics, speech, and signal processing | 2005
N. Dedeoglu; B.U. Toreyin; Uğur Güdükbay; A.E. Cetin
The paper proposes a novel method to detect fire and/or flame by processing the video data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, flame and fire flicker are detected by analyzing the video in the wavelet domain. Periodic behavior in flame boundaries is detected by performing a temporal wavelet transform. Color variations in fire are detected by computing the spatial wavelet transform of moving fire-colored regions. Other clues used in the fire detection algorithm include irregularity of the boundary of the fire-colored region and the growth of such regions in time. All of the above clues are combined to reach a final decision.
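A minimal sketch of temporal wavelet analysis in the spirit of this abstract (not the authors' implementation): the one-level Haar detail band of a pixel's intensity trace is computed, and its energy serves as a crude flicker measure, since flame boundaries produce much stronger high-frequency activity than slowly moving objects. The window and signals below are illustrative.

```python
import numpy as np

def haar_detail(signal):
    """One-level Haar wavelet transform: return the high-pass (detail) band."""
    x = np.asarray(signal, dtype=float)
    x = x[: len(x) // 2 * 2]                    # truncate to even length
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def flicker_energy(signal):
    """Mean energy of the temporal detail band as a crude flicker measure."""
    return float(np.mean(haar_detail(signal) ** 2))

steady  = [100, 101, 102, 103, 104, 105, 106, 107]   # slowly varying pixel
flicker = [100, 180, 90, 170, 110, 190, 95, 175]     # rapidly flickering pixel
print(flicker_energy(steady), flicker_energy(flicker))
```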
computer vision and pattern recognition | 2007
B.U. Toreyin; A.E. Cetin
This paper describes an online learning based method to detect flames in video by processing the data generated by an ordinary camera monitoring a scene. Our fire detection method consists of weak classifiers based on temporal and spatial modeling of flames. Markov models representing flames and flame-colored ordinary moving objects are used to distinguish the temporal flame flicker process from the motion of flame-colored moving objects. Flame boundaries are represented in the wavelet domain, and the high-frequency nature of fire-region boundaries is used as a clue to model the flame flicker spatially. Results from the temporal and spatial weak classifiers, based on flame flicker and the irregularity of flame-region boundaries, are updated online to reach a final decision. False alarms due to ordinary and periodic motion of flame-colored moving objects are greatly reduced compared to existing video-based fire detection systems.
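The abstract does not spell out the online update rule, so the sketch below is only a hedged stand-in: weak-classifier scores are fused with weights that are nudged by an LMS-style correction whenever ground truth (e.g. an operator's confirmation) becomes available. The class, learning rate, and score conventions are assumptions.

```python
import numpy as np

class OnlineFusion:
    """Fuse weak-classifier scores in [-1, 1] with weights updated online."""

    def __init__(self, n_classifiers, mu=0.05):
        self.w = np.full(n_classifiers, 1.0 / n_classifiers)  # start with equal weights
        self.mu = mu                                           # assumed learning rate

    def decide(self, scores):
        return float(np.dot(self.w, scores))                   # > 0 means "flame"

    def update(self, scores, truth):
        """truth is +1 (flame) or -1 (no flame); reduce the fusion error LMS-style."""
        scores = np.asarray(scores, dtype=float)
        err = truth - self.decide(scores)
        self.w += self.mu * err * scores
        self.w = np.clip(self.w, 0.0, None)                    # keep weights non-negative
        if self.w.sum() > 0:
            self.w /= self.w.sum()

fusion = OnlineFusion(n_classifiers=3)
# scores: [temporal flicker classifier, spatial boundary classifier, color classifier]
fusion.update([0.8, 0.6, -0.2], truth=+1)
print(fusion.decide([0.7, 0.5, 0.1]))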
signal processing and communications applications conference | 2008
B.U. Toreyin; E.B. Soyer; Onay Urfalioglu; A.E. Cetin
In this paper, a flame detection system based on a pyroelectric (or passive) infrared (PIR) sensor is described. The flame detection system can be used for fire detection in large rooms. The flame flicker process of an uncontrolled fire and ordinary activity of human beings are modeled using a set of hidden Markov models (HMM), which are trained using the wavelet transform of the PIR sensor signal. Whenever there is an activity within the viewing range of the PIR sensor system, the sensor signal is analyzed in the wavelet domain and the wavelet signals are fed to a set of HMMs. A fire or no fire decision is reached according to the HMM producing the highest probability.
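To make the decision rule concrete, here is an illustrative sketch of the final step only: a quantized sequence derived from the PIR sensor's wavelet coefficients is scored with the forward algorithm under a "fire" HMM and a "no fire" HMM, and the model with the larger log-likelihood wins. All HMM parameters below are made-up placeholders, not trained models from the paper.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm for a discrete-observation HMM."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return float(loglik)

pi = np.array([0.5, 0.5])
fire = dict(A=np.array([[0.5, 0.5], [0.5, 0.5]]),          # rapidly switching states
            B=np.array([[0.2, 0.4, 0.4], [0.4, 0.4, 0.2]]))
no_fire = dict(A=np.array([[0.9, 0.1], [0.1, 0.9]]),       # sticky, slowly varying states
               B=np.array([[0.8, 0.15, 0.05], [0.05, 0.15, 0.8]]))

obs = np.array([1, 2, 1, 0, 2, 1, 2, 0])                   # quantized wavelet coefficients
scores = {name: forward_loglik(obs, pi, m["A"], m["B"])
          for name, m in [("fire", fire), ("no fire", no_fire)]}
print(max(scores, key=scores.get))                          # "fire" for this sequence
```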
international conference on acoustics, speech, and signal processing | 2006
B.U. Toreyin; M. Trocan; B. Pesquet-Popescu; A.E. Cetin
3D video codecs have recently attracted a lot of attention due to their compression performance, which is comparable to that of state-of-the-art hybrid codecs, and due to their scalability features. In this work, we propose a least mean square (LMS) based adaptive prediction for the temporal prediction step in the lifting implementation. This approach improves the overall quality of the coded video by reducing both blocking and ghosting artefacts. Experimental results show that the video quality as well as the PSNR values are greatly improved with the proposed adaptive method, especially for video sequences with large contrast between the moving objects and the background and for sequences with illumination variations.
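For readers unfamiliar with LMS adaptation, the sketch below shows the generic mechanism the abstract relies on, outside of any codec: a sample is predicted as a weighted sum of reference samples and the weights are corrected from the prediction error with the standard rule w <- w + mu * e * x. The step size, tap count, and synthetic data are illustrative, not the paper's lifting setup.

```python
import numpy as np

def lms_predict(target, references, mu=0.01, taps=3):
    """Adaptive linear prediction of target[t] from references[t] (LMS update)."""
    w = np.zeros(taps)
    errors = []
    for t in range(len(target)):
        x = references[t]                  # reference vector for this sample
        y = float(np.dot(w, x))            # prediction
        e = target[t] - y                  # prediction error (what would be coded)
        w += mu * e * x                    # LMS weight update
        errors.append(e)
    return np.array(errors), w

rng = np.random.default_rng(0)
refs = rng.normal(size=(500, 3))
true_w = np.array([0.5, -0.3, 0.2])
target = refs @ true_w + 0.01 * rng.normal(size=500)

errors, w = lms_predict(target, refs)
print(np.round(w, 2))                                      # approaches [0.5, -0.3, 0.2]
print(float(np.mean(errors[:50] ** 2)), float(np.mean(errors[-50:] ** 2)))
```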
signal processing and communications applications conference | 2008
B.U. Toreyin; A.E. Cetin
Lookout posts are commonly installed in forests all around Turkey and the world. Most of these posts have electricity. Surveillance cameras can be placed on these lookout towers to detect possible forest fires. Currently, the average fire detection time is 5 minutes in manned lookout towers. The aim of the proposed computer vision based method is to reduce the average fire detection time. The detection method is based on wavelet-based analysis of the background images at various update rates.
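The abstract only names "background images at various update rates", so the following is a hedged sketch of that general idea rather than the paper's algorithm: a fast and a slow running-average background are maintained, and regions where the two diverge are flagged, since slowly growing smoke changes the fast estimate long before the slow one. The update rates and threshold are assumptions.

```python
import numpy as np

def update_background(bg, frame, alpha):
    """Exponential running average: bg <- (1 - alpha) * bg + alpha * frame."""
    return (1.0 - alpha) * bg + alpha * frame

def smoke_candidates(frames, alpha_fast=0.2, alpha_slow=0.01, thresh=20.0):
    frames = np.asarray(frames, dtype=float)
    bg_fast = frames[0].copy()
    bg_slow = frames[0].copy()
    for frame in frames[1:]:
        bg_fast = update_background(bg_fast, frame, alpha_fast)
        bg_slow = update_background(bg_slow, frame, alpha_slow)
    return np.abs(bg_fast - bg_slow) > thresh

# Toy sequence: a region slowly brightens over 50 "frames" of an 8x8 image.
frames = np.zeros((50, 8, 8))
frames[:, 2:5, 2:5] = np.linspace(0, 80, 50)[:, None, None]
print(smoke_candidates(frames).sum())      # nonzero: the slowly brightening block is flagged
```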
signal processing and communications applications conference | 2008
O. Urfalioglu; E.B. Soyer; B.U. Toreyin; A.E. Cetin
In this paper, we use a modified passive infrared radiation or pyroelectric infrared (PIR) sensor to classify 5 different human motion events with one additional "no action" event. Event detection enables new applications in environments hosting dynamic processes. Typical event detection applications are based on audio or video sensor data. Given a data stream, often the task is to find or classify specific dynamic processes. Most of the applications for the monitoring of human activities in an environment are based on video sensor data. As an alternative or complementary approach, low cost PIR sensors can be used for such applications. The classification is done by a Bayesian approach using conditional Gaussian mixture models (CGMM) trained for each class. We show in experiments that using PIR-sensors, different human motion events in a room can be successfully detected.
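As a hedged illustration of the classification stage only, the sketch below fits one Gaussian mixture model per event class and assigns a new feature vector to the class whose model gives the highest log-likelihood (equal priors assumed). The feature vectors here are synthetic 2-D points, not real PIR data, and the class names are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic training features for three illustrative event classes.
train = {
    "walking":   rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2)),
    "running":   rng.normal(loc=[3.0, 0.0], scale=0.7, size=(200, 2)),
    "no action": rng.normal(loc=[0.0, 3.0], scale=0.3, size=(200, 2)),
}

# One mixture model per class.
models = {label: GaussianMixture(n_components=2, random_state=0).fit(X)
          for label, X in train.items()}

def classify(feature_vector):
    """Pick the class whose mixture model gives the highest log-likelihood."""
    scores = {label: gmm.score_samples(feature_vector[None, :])[0]
              for label, gmm in models.items()}
    return max(scores, key=scores.get)

print(classify(np.array([2.8, 0.1])))   # expected: "running"
```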
international conference on image processing | 2008
B.U. Toreyin; A.E. Cetin
A video based method to detect volatile organic compounds (VOC) leaking out of process equipment used in petrochemical refineries is developed. A leaking VOC plume from a damaged component causes edges present in image frames to lose their sharpness. This leads to a decrease in the high frequency content of the image. The background of the scene is estimated and the decrease in high frequency energy of the scene is monitored using the spatial wavelet transforms of the current and background images. Plume regions in image frames are analyzed in low-band sub-images, as well. Image frames are compared with their corresponding low-band images. A maximum likelihood estimator (MLE) for adaptive threshold estimation is also developed in this paper.
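The core cue can be sketched as follows: compare the high-frequency (Haar detail) energy of a region in the current frame against the same region in the background frame, and suspect a plume when the energy drops sharply. The fixed drop ratio below is a placeholder for the paper's MLE-derived adaptive threshold, and the blurred toy image merely mimics the effect of a plume.

```python
import numpy as np

def haar_detail_energy(image):
    """Sum of squared one-level Haar detail coefficients (horizontal + vertical)."""
    img = np.asarray(image, dtype=float)
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2]
    dh = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)   # horizontal detail
    dv = (img[0::2, :] - img[1::2, :]) / np.sqrt(2.0)   # vertical detail
    return float(np.sum(dh ** 2) + np.sum(dv ** 2))

def plume_suspected(current_region, background_region, drop_ratio=0.5):
    """Flag the region if its high-frequency energy drops well below the background's."""
    e_now = haar_detail_energy(current_region)
    e_bg = haar_detail_energy(background_region)
    return e_bg > 0 and e_now < drop_ratio * e_bg

# Toy example: the "current" region is a 2x2-averaged (blurred) copy of the background.
rng = np.random.default_rng(1)
background = rng.uniform(0, 255, size=(16, 16))
blurred = (background +
           np.roll(background, 1, axis=0) +
           np.roll(background, 1, axis=1) +
           np.roll(background, (1, 1), axis=(0, 1))) / 4.0
print(plume_suspected(blurred, background))   # likely True
```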
signal processing and communications applications conference | 2012
Fatih Erden; B.U. Toreyin; E.B. Soyer; Ihsan Inac; Osman Günay; Kivanc Kose; A.E. Cetin
In this paper, a flame detection system using a differential pyroelectric infrared (PIR) sensor is proposed. A differential PIR sensor is only sensitive to sudden temperature variations within its viewing range and produces a time-varying signal. The wavelet transform of the differential PIR sensor signal is used for feature extraction, and the feature vectors are fed to Markov models trained on uncontrolled fire flames and a walking person. The model yielding the highest probability is chosen. Results suggest that the system can be used in spacious rooms for uncontrolled fire flame detection.
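A hedged sketch of the feature-extraction step only: the sampled differential PIR signal is high-pass filtered with a one-level Haar wavelet, and each detail coefficient is quantized into one of three states (negative spike, quiet, positive spike), the kind of discrete sequence a Markov model could be trained on. The threshold and the two synthetic signals are assumptions, not sensor data.

```python
import numpy as np

def haar_detail(x):
    """One-level Haar high-pass (detail) band of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    x = x[: len(x) // 2 * 2]
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def to_states(pir_signal, threshold=0.1):
    """Quantize detail coefficients into 3 states: 0 = neg. spike, 1 = quiet, 2 = pos. spike."""
    d = haar_detail(pir_signal)
    states = np.ones(len(d), dtype=int)
    states[d > threshold] = 2
    states[d < -threshold] = 0
    return states

t = np.linspace(0, 1, 128)
flame_like = np.sin(2 * np.pi * 8 * t) + 0.05 * np.random.default_rng(2).normal(size=128)
walker_like = 0.1 * np.sin(2 * np.pi * 1 * t)
print(to_states(flame_like)[:16])    # frequent spikes
print(to_states(walker_like)[:16])   # mostly quiet
```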
signal processing and communications applications conference | 2004
B.U. Toreyin; A.E. Cetin; Anil Aksay; M.B. Akhan
In many vision based surveillance systems, the video is stored in wavelet compressed form. An algorithm for moving object and region detection in video that is compressed using a wavelet transform (WT) is developed. The algorithm estimates the WT of the background scene from the WTs of the past image frames of the video. The WT of the current image is compared with the WT of the background, and the moving objects are determined from the difference. The algorithm does not perform an inverse WT to obtain the actual pixels of the current image or of the estimated background. This leads to a computationally efficient method and a system comparable to existing motion estimation methods.
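The sketch below illustrates the general idea on toy data, not the paper's exact scheme: a background estimate is maintained directly on wavelet coefficients with a recursive update, and coefficients whose current value deviates from that estimate by more than a threshold are marked as moving, with no inverse transform. Updating only the non-moving coefficients is one common choice; the update rate and threshold are assumptions.

```python
import numpy as np

def detect_moving_coeffs(wt_frames, alpha=0.05, thresh=10.0):
    """wt_frames: sequence of 2-D arrays of wavelet coefficients, one per frame."""
    wt_frames = np.asarray(wt_frames, dtype=float)
    bg = wt_frames[0].copy()                      # initial background estimate
    masks = []
    for wt in wt_frames[1:]:
        mask = np.abs(wt - bg) > thresh           # coefficients belonging to moving regions
        # Update only coefficients not flagged as moving (one common choice).
        bg = np.where(mask, bg, (1 - alpha) * bg + alpha * wt)
        masks.append(mask)
    return masks

# Toy example: a block of large coefficients sweeps across otherwise static frames.
frames = np.zeros((8, 8, 8))
for t in range(1, 7):
    frames[t, 3:5, t:t + 2] = 50.0
print(detect_moving_coeffs(frames)[5].astype(int))   # mask for frame 6: block at columns 6-7
```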