Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where A. Enis Cetin is active.

Publication


Featured research published by A. Enis Cetin.


Pattern Recognition Letters | 2006

Computer vision based method for real-time fire and flame detection

B. Uğur Töreyin; Yiğithan Dedeoğlu; Uğur Güdükbay; A. Enis Cetin

This paper proposes a novel method to detect fire and/or flames in real time by processing the video data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, flame and fire flicker is detected by analyzing the video in the wavelet domain. Quasi-periodic behavior in flame boundaries is detected by performing a temporal wavelet transform. Color variations in flame regions are detected by computing the spatial wavelet transform of moving fire-colored regions. Another clue used in the fire detection algorithm is the irregularity of the boundary of the fire-colored region. All of the above clues are combined to reach a final decision. Experimental results show that the proposed method is very successful in detecting fire and/or flames. In addition, it drastically reduces false alarms caused by ordinary fire-colored moving objects compared to methods using only motion and color clues.
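
The temporal flicker analysis can be illustrated with a minimal sketch. The code below is an illustrative assumption, not the authors' implementation: it takes the intensity trace of a single fire-colored pixel sampled at the frame rate, computes Haar detail (high-pass) coefficients, and counts sign changes, which are frequent for quasi-periodically flickering flames.

```python
import numpy as np

def flicker_activity(intensity):
    """Count zero-crossings of the Haar wavelet detail signal of a pixel's
    temporal intensity trace; flames flicker quasi-periodically, so the
    detail signal changes sign far more often than for an ordinary
    moving fire-colored object."""
    x = np.asarray(intensity, dtype=float)
    n = len(x) // 2 * 2
    # Single-level Haar detail coefficients (two-tap high-pass filter).
    detail = (x[1:n:2] - x[0:n:2]) / np.sqrt(2.0)
    signs = np.sign(detail)
    return int(np.count_nonzero(signs[1:] != signs[:-1]))

# Example: a 25 fps trace with ~8 Hz flicker versus a slowly varying one.
t = np.arange(256) / 25.0
flame = 120 + 30 * np.sin(2 * np.pi * 8 * t)
car = 120 + 30 * np.sin(2 * np.pi * 0.2 * t)
print(flicker_activity(flame), flicker_activity(car))
```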


International Conference on Computer Vision | 2005

HMM based falling person detection using both audio and video

B. Uğur Töreyin; Yiğithan Dedeoğlu; A. Enis Cetin

Automatic detection of a falling person in video is an important problem with applications in security and safety areas, including supportive home environments and CCTV surveillance systems. In this paper, human motion in video is modeled using hidden Markov models (HMMs). In addition, the audio track of the video is used to distinguish a person simply sitting on a floor from a person stumbling and falling. Most video recording systems can record audio as well, so the impact sound of a falling person is available as an additional clue. A decision based on the audio channel data is also reached using HMMs and fused with the results of the HMMs modeling the video data to reach a final decision.
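
As a hedged sketch of the fusion idea (not the paper's exact models), one can evaluate a discrete observation sequence under competing "fall" and "other activity" HMMs for each modality and add the resulting log-likelihood differences; the model parameters, class names and additive fusion rule below are illustrative assumptions.

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-observation
    HMM with initial distribution pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_p += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_p

def detect_fall(video_obs, audio_obs, video_models, audio_models):
    """Each *_models dict maps 'fall' and 'other' to (pi, A, B) tuples.
    The two modalities are fused by summing log-likelihood differences;
    a positive total is declared a fall."""
    score = 0.0
    for obs, models in ((video_obs, video_models), (audio_obs, audio_models)):
        score += (hmm_log_likelihood(obs, *models['fall'])
                  - hmm_log_likelihood(obs, *models['other']))
    return score > 0.0
```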


IEEE Signal Processing Letters | 1994

Adaptive filtering for non-Gaussian stable processes

Orhan Arikan; A. Enis Cetin; Engin Erzin

A large class of physical phenomena observed in practice exhibit non-Gaussian behavior. In this letter, α-stable distributions, which have heavier tails than Gaussian distributions, are considered to model non-Gaussian signals. Adaptive signal processing in the presence of such noise is a requirement of many practical problems. Since direct application of commonly used adaptation techniques fails in these applications, new algorithms for adaptive filtering for α-stable random processes are introduced.
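
A commonly used algorithm in this setting is the least mean p-norm (LMP) filter, which replaces the squared-error cost of LMS with a p-th power cost, p < α, so that impulsive errors do not blow up the weight update. The sketch below is a generic LMP implementation under that assumption, not necessarily the exact algorithm of the letter; the filter order, step size and p value are illustrative.

```python
import numpy as np

def lmp_adaptive_filter(x, d, order=4, mu=0.01, p=1.2):
    """Least mean p-norm (LMP) adaptive FIR filter. The LMS update term
    e * u is replaced by |e|**(p-1) * sign(e) * u with p < alpha, which
    keeps the weight update bounded under heavy-tailed (alpha-stable) noise.
    x and d are equal-length input and desired signals."""
    w = np.zeros(order)
    y = np.zeros(len(d))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]            # most recent input samples first
        y[n] = w @ u
        e = d[n] - y[n]
        w += mu * np.abs(e) ** (p - 1) * np.sign(e) * u
    return w, y
```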


International Conference on Computer Vision | 2006

Silhouette-based method for object classification and human action recognition in video

Yiğithan Dedeoğlu; B. Uğur Töreyin; Uğur Güdükbay; A. Enis Cetin

In this paper we present an instance-based machine learning algorithm and system for real-time object classification and human action recognition which can help to build intelligent surveillance systems. The proposed method makes use of object silhouettes to classify objects and actions of humans present in a scene monitored by a stationary camera. An adaptive background subtraction model is used for object segmentation. A template-matching-based supervised learning method is adopted to classify objects into classes such as human, human group and vehicle, and human actions into predefined classes such as walking, boxing and kicking, by making use of object silhouettes.
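
A minimal sketch of the adaptive background subtraction step, assuming a grayscale stationary-camera feed and a simple running-average background model (the paper's exact model and thresholds are not reproduced here): foreground pixels form the object silhouette that is later matched against stored templates.

```python
import numpy as np

def silhouette(frame, background, alpha=0.05, thresh=30.0):
    """Running-average background model: pixels that deviate from the model
    form the foreground silhouette; only background pixels are blended back
    into the model so that moving objects do not corrupt it."""
    frame = frame.astype(float)
    fg = np.abs(frame - background) > thresh
    background = np.where(fg, background,
                          (1.0 - alpha) * background + alpha * frame)
    return fg, background
```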


International Conference on Image Processing | 1997

Speaker identification and video analysis for hierarchical video shot classification

Jeho Nam; A. Enis Cetin; Ahmed H. Tewfik

We present a new video shot classification and clustering technique to support content-based indexing, browsing and retrieval in video databases. The proposed method is based on the analysis of both the audio and visual data tracks. The visual stream is analyzed using a 3-D wavelet transform and segmented into shot units which are matched and clustered by visual content. Simultaneously, speaker changes are detected by tracking voiced phonemes in the audio signal. The clues obtained from the video and speech data are combined to classify and group the isolated video shots. This integrated approach also allows effective indexing of the audio-visual objects in multimedia databases.
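
As a rough illustration of the visual-stream side only, the sketch below detects hard cuts by comparing coarse (low-pass, subsampled) versions of consecutive frames; it is a simplified stand-in for the paper's 3-D wavelet analysis, and the speaker tracking side is not reproduced. The threshold is an assumption.

```python
import numpy as np

def shot_cuts(frames, thresh=20.0):
    """Flag hard cuts where the mean absolute difference between coarse
    (2x2 block-averaged, i.e. low-pass and subsampled) versions of
    consecutive frames jumps above a threshold."""
    def coarse(f):
        f = f.astype(float)
        h, w = (f.shape[0] // 2) * 2, (f.shape[1] // 2) * 2
        f = f[:h, :w]
        return 0.25 * (f[0::2, 0::2] + f[1::2, 0::2]
                       + f[0::2, 1::2] + f[1::2, 1::2])
    cuts, prev = [], coarse(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        cur = coarse(f)
        if np.mean(np.abs(cur - prev)) > thresh:
            cuts.append(i)
        prev = cur
    return cuts
```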


Digital Signal Processing | 2007

Feasibility of impact-acoustic emissions for detection of damaged wheat kernels

Tom C. Pearson; A. Enis Cetin; Ahmed H. Tewfik; Ron P. Haff

A non-destructive, real-time device was developed to detect insect damage, sprout damage, and scab damage in kernels of wheat. Kernels are impacted onto a steel plate and the resulting acoustic signal is analyzed to detect damage. The acoustic signal was processed using four different methods: modeling of the signal in the time domain, computing time-domain signal variances and maximums in short-time windows, analysis of the frequency spectrum magnitudes, and analysis of a derivative spectrum. Features were used as inputs to a stepwise discriminant analysis routine, which selected a small subset of features for accurate classification using a neural network. For a network presented with only insect-damaged kernels (IDK) with exit holes and undamaged kernels, 87% of the former and 98% of the latter were correctly classified. It was also possible to distinguish undamaged, IDK, sprout-damaged, and scab-damaged kernels.
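
Of the four signal-processing methods listed, the short-time statistics are the simplest to sketch. The snippet below assumes a digitized impact recording; the window length and hop size are illustrative, not the paper's values.

```python
import numpy as np

def short_time_features(signal, win=256, hop=128):
    """Short-time variances and absolute maxima of an impact-acoustic
    recording; such features can feed a feature-selection stage and a
    classifier distinguishing damaged from undamaged kernels."""
    x = np.asarray(signal, dtype=float)
    feats = [(np.var(x[s:s + win]), np.max(np.abs(x[s:s + win])))
             for s in range(0, len(x) - win + 1, hop)]
    return np.asarray(feats)
```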


Signal Processing: Image Communication | 2005

Moving object detection in wavelet compressed video

B. Uğur Töreyin; A. Enis Cetin; Anil Aksay; M. Bilgay Akhan

In many surveillance systems the video is stored in wavelet-compressed form. In this paper, an algorithm for moving object and region detection in video compressed using a wavelet transform (WT) is developed. The algorithm estimates the WT of the background scene from the WTs of the past image frames of the video. The WT of the current image is compared with the WT of the background, and the moving objects are determined from the difference. The algorithm does not perform an inverse WT to obtain the actual pixels of the current image or of the estimated background. This makes the method computationally efficient compared to existing motion estimation methods.
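
A minimal sketch of the idea, assuming PyWavelets for the single-level 2-D transform and a simple recursive (exponentially weighted) background estimate maintained directly on the wavelet coefficients; the threshold and update rate are illustrative, not the paper's values.

```python
import numpy as np
import pywt  # PyWavelets

def moving_regions(frame, bg=None, alpha=0.95, thresh=10.0):
    """Maintain the background estimate as wavelet coefficients and flag
    moving regions from coefficient differences; no inverse transform is
    taken, so actual pixel values are never reconstructed."""
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), 'haar')
    coeffs = np.stack([cA, cH, cV, cD])
    if bg is None:
        return np.zeros(cA.shape, dtype=bool), coeffs
    moving = np.abs(coeffs - bg).sum(axis=0) > thresh
    # Coefficients of stationary regions are blended into the background.
    bg = np.where(moving[None], bg, alpha * bg + (1.0 - alpha) * coeffs)
    return moving, bg
```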


Visual Information Processing Conference | 2006

Human face detection in video using edge projections

Mehmet Turkan; Berkan Dulek; Ibrahim Onaran; A. Enis Cetin

In this paper, a human face detection algorithm in images and video is presented. After determining possible face candidate regions using colour information, each region is filtered by a high-pass filter of a wavelet transform. In this way, edges of the region are highlighted, and a caricature-like representation of candidate regions is obtained. Horizontal, vertical, filter-like and circular projections of the region are used as feature signals in support vector machine (SVM) based classifiers. It turns out that our feature extraction method provides good detection rates with SVM based classifiers.
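
A simple sketch of the projection features, using plain finite differences as a stand-in for the wavelet high-pass filtering described in the paper; a fuller implementation would compute the high-band wavelet subimage instead, and could add the filter-like and circular projections as well.

```python
import numpy as np

def edge_projections(region):
    """Horizontal and vertical projections of an edge (high-pass) image of
    a candidate face region; the resulting 1-D signals can serve as
    feature vectors for an SVM classifier."""
    g = region.astype(float)
    # Crude high-pass filtering by horizontal and vertical differencing.
    edges = np.abs(np.diff(g, axis=0))[:, :-1] + np.abs(np.diff(g, axis=1))[:-1, :]
    horizontal = edges.sum(axis=1)   # one value per row
    vertical = edges.sum(axis=0)     # one value per column
    return horizontal, vertical
```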


EURASIP Journal on Advances in Signal Processing | 2008

Falling person detection using multisensor signal processing

B. Uğur Töreyin; E. Birey Soyer; Ibrahim Onaran; A. Enis Cetin

Falls are one of the most important problems for frail and elderly people living independently. Early detection of falls is vital to providing a safe and active lifestyle for the elderly. Sound, passive infrared (PIR) and vibration sensors can be placed in a supportive home environment to provide information about the daily activities of an elderly person. In this paper, signals produced by sound, PIR and vibration sensors are simultaneously analyzed to detect falls. Hidden Markov models are trained for regular and unusual activities of an elderly person and a pet for each sensor signal. Decisions of the HMMs are fused together to reach a final decision.
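
The fusion step can be sketched as a weighted sum of per-sensor log-likelihood ratios (the "fall" HMM against the "regular activity" HMM for each of the sound, PIR and vibration channels); the weights, threshold and example values below are illustrative assumptions.

```python
import numpy as np

def fuse_fall_decision(llr_per_sensor, weights=None):
    """Combine per-sensor evidence (log-likelihood ratio of the 'fall' HMM
    over the 'regular activity' HMM for sound, PIR and vibration) with a
    weighted sum; a positive total is declared a fall."""
    llr = np.asarray(llr_per_sensor, dtype=float)
    w = np.ones_like(llr) if weights is None else np.asarray(weights, float)
    return bool(w @ llr > 0.0)

# Example: sound and vibration strongly indicate a fall, PIR is ambiguous.
print(fuse_fall_decision([2.3, 0.1, 1.8]))
```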


International Conference on Acoustics, Speech, and Signal Processing | 1999

The Teager energy based feature parameters for robust speech recognition in car noise

Firas Jabloun; A. Enis Cetin

In this paper, a new set of speech feature parameters based on multirate signal processing and the Teager energy operator is developed. The speech signal is first divided into nonuniform subbands in mel scale using a multirate filter bank, then the Teager energies of the subsignals are estimated. Finally, the feature vector is constructed by log-compression and inverse DCT computation. The new feature parameters show robust speech recognition performance in car engine noise, which is low-pass in nature.
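
The core of the feature set is the discrete Teager energy operator, shown below; the mel-scale filter bank, log-compression and inverse DCT stages are omitted in this minimal sketch.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: Psi[x(n)] = x(n)**2 - x(n-1)*x(n+1).
    Applied to each mel-scaled subband signal, its average replaces the
    conventional subband energy before log-compression and the inverse DCT."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]
```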

Collaboration


Dive into A. Enis Cetin's collaborations.

Top Co-Authors

Kivanc Kose
Memorial Sloan Kettering Cancer Center

Rengul Cetin-Atalay
Middle East Technical University

Serdar Cakir
Scientific and Technological Research Council of Turkey

Yasemin Yardimci
Middle East Technical University