Publication


Featured research published by Dogancan Temel.


International Conference on Image Processing | 2015

PerSIM: Multi-resolution image quality assessment in the perceptually uniform color domain

Dogancan Temel; Ghassan AlRegib

An average observer perceives the world in color rather than in black and white. Moreover, the visual system focuses on structures and segments instead of individual pixels. Based on these observations, we propose a full-reference objective image quality metric that models visual system characteristics and chroma similarity in the perceptually uniform color domain (Lab). Laplacian of Gaussian features are obtained in the L channel to model the retinal ganglion cells in the human visual system, and color similarity is calculated over the a and b channels. In the proposed perceptual similarity index (PerSIM), a multi-resolution approach is followed to mimic the hierarchical nature of the human visual system. The LIVE and TID2013 databases are used in the validation, and PerSIM outperforms all the compared metrics across both databases in terms of ranking, monotonic behavior, and linearity.
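The L-channel feature extraction and comparison described above can be sketched in a few lines of numpy. This is a minimal illustration of a Laplacian-of-Gaussian kernel and an SSIM-style similarity ratio; the Lab conversion, the multi-resolution pyramid, and the chroma terms are omitted, and all function names and parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

def log_kernel(size=7, sigma=1.0):
    """Laplacian-of-Gaussian kernel, a standard model of the
    center-surround responses of retinal ganglion cells."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()  # zero-sum: flat regions give no response

def similarity(f_ref, f_dist, c=1e-3):
    """Pointwise SSIM-style similarity ratio between two feature maps."""
    return (2 * f_ref * f_dist + c) / (f_ref**2 + f_dist**2 + c)
```

Comparing LoG responses of the reference and distorted L channels with such a ratio, then pooling across scales, is the general shape of this kind of metric.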


IEEE Signal Processing Letters | 2016

UNIQUE: Unsupervised Image Quality Estimation

Dogancan Temel; Mohit Prabhushankar; Ghassan AlRegib

In this letter, we estimate perceived image quality using sparse representations obtained from generic image databases through an unsupervised learning approach. A color space transformation, a mean subtraction, and a whitening operation are used to enhance the descriptiveness of images by reducing spatial redundancy; a linear decoder is used to obtain sparse representations; and a thresholding stage is used to formulate suppression mechanisms in the visual system. The linear decoder is trained with 7 GB of data, corresponding to 100,000 8 × 8 image patches randomly sampled from nearly 1000 images in the ImageNet 2013 database. A patch-wise training approach is preferred to maintain local information. The proposed quality estimator, UNIQUE, is tested on the LIVE, Multiply Distorted LIVE, and TID2013 databases and compared with 13 quality estimators. Experimental results show that UNIQUE is generally a top-performing quality estimator in terms of accuracy, consistency, linearity, and monotonic behavior.
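The preprocessing stage above (mean subtraction and whitening of patches) can be sketched compactly. This is an illustrative ZCA-whitening sketch assuming flattened patches; the actual training pipeline, the linear decoder, and the exact thresholding rule in the paper may differ.

```python
import numpy as np

def preprocess_patches(patches, eps=1e-5):
    """Mean-subtract and ZCA-whiten flattened image patches,
    reducing spatial redundancy. patches: (n, d) array."""
    x = patches - patches.mean(axis=0)   # remove the mean patch
    cov = x.T @ x / len(x)               # d x d covariance
    u, s, _ = np.linalg.svd(cov)
    zca = u @ np.diag(1.0 / np.sqrt(s + eps)) @ u.T
    return x @ zca

def threshold(codes, t=0.025):
    """Zero out small coefficients: a simple stand-in for the
    suppression/thresholding stage described in the abstract."""
    return np.where(np.abs(codes) > t, codes, 0.0)
```

After whitening, the covariance of the patch set is approximately the identity, which is what makes the subsequent sparse codes descriptive rather than dominated by low-frequency structure.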


Proceedings of SPIE | 2013

Efficient streaming of stereoscopic depth-based 3D videos

Dogancan Temel; Mohammed A. I. Aabed; Mashhour Solh; Ghassan AlRegib

In this paper, we propose a method to extract depth from motion, texture, and intensity. We first analyze the depth map to extract a set of depth cues. Then, based on these depth cues, we process the colored reference video, using its texture, motion, luminance, and chrominance content, to extract the depth map. Each channel in the YCbCr color space is processed separately. We tested this approach on video sequences with different monocular properties. Our simulations show that the extracted depth maps generate a 3D video with quality close to the video rendered using the ground-truth depth map. We report objective results using 3VQM and subjective analysis via comparison of rendered images. Furthermore, we analyze the savings in bitrate resulting from eliminating the need for two video codecs, one for the reference color video and one for the depth map; in this case, only the depth cues are sent as side information with the color video.


Signal Processing: Image Communication | 2016

CSV: Image quality assessment based on color, structure, and visual system

Dogancan Temel; Ghassan AlRegib

This paper presents a full-reference image quality estimator based on color, structure, and visual system characteristics, denoted as CSV. In contrast to the majority of existing methods, we quantify perceptual color degradations rather than absolute pixel-wise changes. We use the CIEDE2000 color difference formulation to quantify low-level color degradations and the Earth Mover's Distance between color name probability vectors to measure significant color degradations. In addition to the perceptual color difference, CSV also contains structural and perceptual differences. Structural feature maps are obtained by mean subtraction and divisive normalization, and perceptual feature maps are obtained from contrast sensitivity formulations of retinal ganglion cells. The proposed quality estimator CSV is tested on the LIVE, Multiply Distorted LIVE, and TID2013 databases, and it is always among the top two performing quality estimators in terms of at least one of ranking, monotonic behavior, or linearity.
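The structural feature maps above (mean subtraction followed by divisive normalization) admit a compact numpy sketch. The window size, box weighting, and stabilizing constant below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def divisive_normalize(img, size=3, c=1e-3):
    """Mean subtraction followed by divisive normalization over a
    local window: a common model of cortical gain control used to
    build structural feature maps. img: 2D grayscale array."""
    pad = size // 2
    padded = np.pad(img, pad, mode='reflect')
    # local mean via a box window
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    local_mean = windows.mean(axis=(-1, -2))
    centered = img - local_mean
    # local energy of the centered signal drives the divisive step
    cpad = np.pad(centered, pad, mode='reflect')
    cwin = np.lib.stride_tricks.sliding_window_view(cpad, (size, size))
    local_std = np.sqrt((cwin**2).mean(axis=(-1, -2)))
    return centered / (local_std + c)
```

The normalization makes the feature map contrast-invariant: a flat region maps to zero, and edges are emphasized relative to their local energy.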


IEEE EMBS International Conference on Biomedical and Health Informatics | 2016

HeartBEAT: Heart beat estimation through adaptive tracking

Huijie Pan; Dogancan Temel; Ghassan AlRegib

In this paper, we propose an algorithm, denoted as HeartBEAT, that tracks heart rate from wrist-type photoplethysmography (PPG) signals and simultaneously recorded three-axis acceleration data. HeartBEAT has three major parts: spectrum estimation of the PPG signals and acceleration data, elimination of motion artifacts in the PPG signals using recursive least squares (RLS) adaptive filters, and auxiliary heuristics. We tested HeartBEAT on the 22 datasets provided in the 2015 IEEE Signal Processing Cup. The first ten datasets were recorded from subjects performing forearm and upper-arm exercises, jumping, or push-ups. The last twelve datasets were recorded from subjects running on treadmills. The experimental results were compared to the ground-truth heart rate derived from simultaneously recorded electrocardiogram (ECG) signals. Compared to state-of-the-art algorithms, HeartBEAT not only produces comparable Pearson's correlation and mean absolute error, but also higher Spearman's ρ and Kendall's τ.
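The RLS artifact-elimination stage can be sketched for a single acceleration axis: the filter predicts the motion artifact in the PPG signal from the acceleration reference and subtracts it. The filter order, forgetting factor, and initialization below are illustrative; the paper combines several reference channels and adds heuristics on top.

```python
import numpy as np

def rls_cancel(ppg, accel, order=8, lam=0.999, delta=100.0):
    """Standard RLS adaptive noise canceller: the cleaned output is
    the prediction error, i.e., the part of the PPG signal that the
    acceleration reference cannot explain (the motion-free estimate)."""
    w = np.zeros(order)            # adaptive FIR weights
    P = np.eye(order) * delta      # inverse-correlation estimate
    x = np.zeros(order)            # sliding window of reference samples
    cleaned = np.zeros_like(ppg)
    for n in range(len(ppg)):
        x = np.roll(x, 1)
        x[0] = accel[n]
        y = w @ x                        # predicted motion artifact
        e = ppg[n] - y                   # a priori error
        k = P @ x / (lam + x @ P @ x)    # gain vector
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam
        cleaned[n] = e
    return cleaned
```

When the artifact really is a short FIR filtering of the reference, the residual energy after convergence is a small fraction of the input energy.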


International Conference on Image Processing | 2014

A comparative study of computational aesthetics

Dogancan Temel; Ghassan AlRegib

Objective metrics model image quality by quantifying image degradations or estimating perceived image quality. However, image quality metrics do not model what makes an image more appealing or beautiful. In order to quantify the aesthetics of an image, we need to take it one step further and model the perception of aesthetics. In this paper, we examine computational aesthetics models that use hand-crafted, generic, and hybrid descriptors. We show that generic descriptors can perform as well as state-of-the-art hand-crafted aesthetics models that use global features. However, neither generic nor hand-crafted features are sufficient to model aesthetics when we only use global features without considering spatial composition or distribution. We also follow a visual dictionary approach similar to state-of-the-art methods and show that it performs poorly without the spatial pyramid step.


Asilomar Conference on Signals, Systems and Computers | 2012

Depth map estimation in DIBR stereoscopic 3D videos using a combination of monocular cues

Mohammed A. I. Aabed; Dogancan Temel; Mashhour Solh; Ghassan AlRegib

We propose a method to reconstruct the depth map from multiple estimated depth maps relying on monocular cues. Based on depth cues extracted from luminance, chrominance, motion, and texture, we obtain an optimal depth estimate by analytically deriving the best combination. We first analyze a ground-truth depth map to extract a set of depth cues. Then, using these depth cues, we process the colored reference video to reconstruct the depth map. We tested this approach on video sequences with different monocular properties. The results show that the extracted depth maps generate a 3D video with quality close to the video rendered using the ground-truth depth map. We report subjective and objective results using 3VQM.


International Conference on Image Processing | 2016

ReSIFT: Reliability-weighted SIFT-based image quality assessment

Dogancan Temel; Ghassan AlRegib

This paper presents a full-reference image quality estimator based on SIFT descriptor matching over reliability-weighted feature maps. Reliability assignment includes a smoothing operation, a transformation to a perceptual color domain, a local normalization stage, and a spectral residual computation with global normalization. The proposed method, ReSIFT, is tested on the LIVE and LIVE Multiply Distorted databases and compared with 11 state-of-the-art full-reference quality estimators. In terms of the Pearson and Spearman correlations, ReSIFT is the best performing quality estimator across both databases. Moreover, ReSIFT is the best performing quality estimator in at least one distortion group in each of the compression, noise, and blur categories.
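The spectral residual computation in the reliability assignment can be sketched with numpy's FFT, following the classic Hou–Zhang saliency formulation; the smoothing kernel and normalization here are illustrative, and the SIFT matching and remaining reliability stages are omitted.

```python
import numpy as np

def spectral_residual(img, size=3):
    """Spectral-residual saliency map: subtract a smoothed log-amplitude
    spectrum from the original, keep the phase, and transform back.
    img: 2D grayscale array; returns a map normalized to [0, 1]."""
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # smooth the log amplitude with a simple box filter
    pad = size // 2
    padded = np.pad(log_amp, pad, mode='wrap')
    win = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    smooth = win.mean(axis=(-1, -2))
    residual = log_amp - smooth
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase)))**2
    return sal / (sal.max() + 1e-12)
```

The resulting map highlights spectrally unusual (salient) regions, which is what makes it a plausible per-pixel reliability weight for descriptor matching.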


Electronic Imaging | 2016

Applicability of Existing Objective Metrics of Perceptual Quality for Adaptive Video Streaming

Jacob Søgaard; Lukáš Krasula; Muhammad Shahid; Dogancan Temel; Kjell Brunnström; Manzoor Razaak

Objective video quality metrics are designed to estimate the quality of experience of the end user. However, these objective metrics are usually validated with video streams degraded under common dist ...


IEEE Global Conference on Signal and Information Processing | 2014

Fault detection using color blending and color transformations

Zhen Wang; Dogancan Temel; Ghassan AlRegib

In the field of seismic interpretation, univariate data-based maps are commonly used by interpreters, especially for fault detection. In these maps, the contrast between target regions and the background is one of the main factors that affect the accuracy of interpretation. Since univariate data-based maps cannot provide a high-contrast representation, we turn them into multivariate data-based representations using color blending. We blend neighboring time sections, viewed along the time direction of migrated seismic volumes, as if they corresponded to the red, green, and blue channels of a color image. Furthermore, to extract more reliable structural information, we apply color transformations. Experimental results show that the proposed method improves the accuracy of fault detection by limiting the average distance between detected fault lines and the ground truth to within one pixel.
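The core blending step, mapping three neighboring time sections to the R, G, and B channels, is straightforward to sketch; the per-channel min-max normalization below is an illustrative choice, and the subsequent color transformations are omitted.

```python
import numpy as np

def blend_sections(prev_sec, cur_sec, next_sec):
    """Blend three neighboring time sections of a migrated seismic
    volume into one RGB image. Faults that shift between sections
    show up as color fringes, raising contrast against the background."""
    def norm(a):
        a = a.astype(float)
        return (a - a.min()) / (a.max() - a.min() + 1e-12)
    return np.stack([norm(prev_sec), norm(cur_sec), norm(next_sec)], axis=-1)
```

Regions that are stable across the three sections come out gray, while discontinuities such as faults receive distinct hues, which is the contrast boost the method relies on.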

Collaboration

Top co-authors of Dogancan Temel:

Ghassan AlRegib, Georgia Institute of Technology
Mohit Prabhushankar, Georgia Institute of Technology
Gukyeong Kwon, Georgia Institute of Technology
Mohammed A. I. Aabed, Georgia Institute of Technology
Guangcong Zhang, Georgia Institute of Technology
Huijie Pan, Georgia Institute of Technology
Qiongjie Lin, Georgia Institute of Technology
Zhen Wang, Georgia Institute of Technology
Jacob Søgaard, Technical University of Denmark