Tugba Taskaya Temizel
Middle East Technical University
Publications
Featured research published by Tugba Taskaya Temizel.
International Symposium on Computer and Information Sciences | 2008
Ozkan Bayraktar; Tugba Taskaya Temizel
The local grammar approach relies on constructing polylexical units with frozen characteristics. It has recently been shown to be superior to other named entity extraction approaches, including probabilistic, symbolic, and hybrid methods, in its ability to work with untagged corpora, and it has been applied successfully to English, Portuguese, Korean, French and Chinese texts. In this paper, we evaluated the local grammar-based approach on Turkish financial texts. We found that although the method is successful in finding person names, constructing frozen expressions for person name extraction is rather difficult, which can be attributed to the nature of Turkish word formation.
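The notion of a frozen expression can be made concrete with a small sketch. The anchor words, capitalisation pattern and sample sentence below are invented for illustration and are not the local grammar developed in the paper; they only show how a fixed lexical context can harvest candidate person names from untagged text.

```python
import re

# Hypothetical local-grammar-style "frozen expression": person names in
# financial news often follow fixed role anchors such as "Genel Müdürü"
# (general manager). Anchors and the name pattern are illustrative assumptions.
ANCHORS = r"(?:Genel Müdürü|Yönetim Kurulu Başkanı|CEO)"
NAME = r"[A-ZÇĞİÖŞÜ][a-zçğıöşü]+(?:\s[A-ZÇĞİÖŞÜ][a-zçğıöşü]+)+"

pattern = re.compile(rf"{ANCHORS}\s+({NAME})")

text = "Banka Genel Müdürü Ahmet Yılmaz faiz kararını değerlendirdi."
for match in pattern.finditer(text):
    print("candidate person name:", match.group(1))
```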
International Conference on High Performance Computing and Simulation | 2011
Hakan Kocakulak; Tugba Taskaya Temizel
The importance of ballistic applications has recently been recognized due to increasing crime and terrorism threats and incidents around the world. Ballistic image analysis is one application area that requires an immediate, high-precision response from large databases. Here, the microscopic markings on a cartridge case obtained at a crime scene are compared for similarity with the images in ballistic databases, in order to find out whether it was fired from any of the firearms within the database. In this paper, we have implemented a MapReduce solution using Hadoop for ballistic image comparison, which is a data- and computation-intensive task. MapReduce, a programming model developed by Google, provides a scalable, flexible and QoS-guaranteed IT infrastructure, particularly for embarrassingly parallel, data-oriented computational tasks. Our results show that we can effectively utilize the computing resources and gain significant increases in performance. Furthermore, we share our experiences in programming and tuning a Hadoop cluster in the paper.
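As an outline of how such a comparison maps onto MapReduce, the sketch below follows the Hadoop Streaming convention of a mapper and reducer reading lines on standard input. The record format, file names and cosine-similarity scoring are assumptions for illustration; the paper's actual ballistic matching algorithm is not reproduced here.

```python
#!/usr/bin/env python3
"""Hadoop-Streaming-style sketch of a map/reduce image comparison job.

Each input line is assumed to hold "image_id<TAB>path_to_descriptor.npy";
query_cartridge.npy is a hypothetical descriptor of the crime-scene image.
"""
import sys
import numpy as np

QUERY = np.load("query_cartridge.npy")

def mapper(lines):
    # Map: score every database image against the query in parallel.
    for line in lines:
        image_id, path = line.rstrip("\n").split("\t")
        candidate = np.load(path)
        score = float(np.dot(QUERY, candidate) /
                      (np.linalg.norm(QUERY) * np.linalg.norm(candidate)))
        print(f"{image_id}\t{score}")

def reducer(lines):
    # Reduce: keep the best-scoring candidate seen by this reducer.
    best_id, best_score = None, float("-inf")
    for line in lines:
        image_id, score = line.rstrip("\n").split("\t")
        if float(score) > best_score:
            best_id, best_score = image_id, float(score)
    print(f"{best_id}\t{best_score}")

if __name__ == "__main__":
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)
```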
Journal of Real-Time Image Processing | 2016
Püren Güler; Deniz Emeksiz; Alptekin Temizel; Mustafa Teke; Tugba Taskaya Temizel
In this article, the parallel implementation of a real-time intelligent video surveillance system on a Graphics Processing Unit (GPU) is described. The system is based on background subtraction and is composed of motion detection, camera sabotage detection (moved-camera, out-of-focus camera and covered-camera detection), abandoned object detection, and object-tracking algorithms. As the algorithms have different characteristics, their GPU implementations achieve different speed-up rates. Test results show that when all the algorithms run concurrently, parallelization on the GPU makes the system up to 21.88 times faster than the central processing unit counterpart, enabling real-time analysis of a larger number of cameras.
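For readers unfamiliar with the underlying technique, here is a minimal CPU-side sketch of background-subtraction-based motion detection using OpenCV. The algorithm choice (MOG2), the input file name and the trigger threshold are assumptions; the paper's GPU-parallel pipeline with its sabotage, abandoned-object and tracking modules is not reproduced.

```python
import cv2

# Minimal background-subtraction motion detector (CPU reference only).
cap = cv2.VideoCapture("camera_feed.mp4")            # hypothetical input
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                   # per-pixel foreground mask
    if cv2.countNonZero(mask) > 0.02 * mask.size:    # crude motion trigger
        print("motion detected")
cap.release()
```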
GPU Computing Gems Emerald Edition | 2011
Alptekin Temizel; Tugba Halici; Berker Logoglu; Tugba Taskaya Temizel; Fatih Omruuzun; Ersin Karaman
This chapter addresses the technical challenges and experiences associated with domain-related algorithms implemented on GPU architectures, specifically using CUDA and OpenCL, with an emphasis on real-time issues and optimization. The importance of GPUs has recently been recognized for general-purpose applications such as video and image processing, and an increasing number of studies show substantial performance gains from GPU-adapted implementations. The chapter implements two image and video processing applications on the CPU and the GPU to compare their effectiveness: adaptive background subtraction as a video processing application and Pearson's correlation as an image processing application. A series of implementations of the background subtraction and correlation algorithms are presented, each of which addresses different aspects of GPU programming, including I/O operations, coalesced memory use, and kernel granularity. The experiments show that careful consideration of such design issues improves the performance of the algorithms significantly. The study aims to guide users in implementing GPU-based algorithms using CUDA and OpenCL by providing practical suggestions. In addition, the two architectures are compared to demonstrate their advantages over each other, in particular for video and image processing applications.
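Of the two kernels mentioned, Pearson's correlation is easy to state as a plain NumPy reference. The sketch below is only a CPU baseline of the formula, not the CUDA or OpenCL implementations benchmarked in the chapter.

```python
import numpy as np

def pearson_correlation(patch_a, patch_b):
    # Zero-mean the patches, then take the normalised dot product.
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
template = rng.integers(0, 256, size=(16, 16))
print(pearson_correlation(template, template))       # ~1.0 for identical patches
```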
Informatics for Health & Social Care | 2015
Rahime Belen Sağlam; Tugba Taskaya Temizel
Objective: When searching for particular medical information on the internet, the challenge lies in distinguishing the websites that are relevant to the topic and contain accurate information. In this article, we propose a framework that automatically identifies and ranks diabetes websites according to their relevance and information quality based on the website content. Design: The proposed framework ranks diabetes websites according to their content quality, relevance and evidence-based medicine. The framework combines information retrieval techniques with a lexical resource based on SentiWordNet, making it possible to work with biased and untrusted websites while, at the same time, ensuring content relevance. Measurement: The evaluation measures used were Pearson correlation, true positives, false positives and accuracy. We tested the framework on a benchmark data set consisting of 55 websites with varying degrees of information quality problems. Results: The proposed framework gives good results that are comparable with the non-automated information quality measuring approaches in the literature. The correlation between the results of the proposed automated framework and the ground truth is 0.68 on average with p < 0.001, which is greater than that of other automated methods proposed in the literature (average r score of 0.33).
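The lexical-resource side of the framework can be illustrated with NLTK's SentiWordNet interface. The word list and the way scores are averaged below are assumptions for illustration, not the paper's scoring model.

```python
import nltk
from nltk.corpus import sentiwordnet as swn

nltk.download("wordnet", quiet=True)
nltk.download("sentiwordnet", quiet=True)

def subjectivity(word):
    # Average positive + negative SentiWordNet scores over all senses of the
    # word, as one crude signal of promotional or alarmist wording.
    synsets = list(swn.senti_synsets(word))
    if not synsets:
        return 0.0
    return sum(s.pos_score() + s.neg_score() for s in synsets) / len(synsets)

for word in ["miracle", "cure", "insulin", "glucose"]:
    print(word, round(subjectivity(word), 3))
```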
Telematics and Informatics | 2017
Perin Ünal; Tugba Taskaya Temizel; P. Erhan Eren
Highlights: The number of apps users own and their categories differ with gender and personality. Having similar apps increases the probability of accepting recommended applications. The number of apps owned in some categories implies higher acceptance of recommended apps. Conscientiousness is positively related to accepting recommended applications. Agreeableness is related to a preference for editor's choice applications.

The rapid growth in the mobile application market presents a significant challenge in finding interesting and relevant applications for users. An experimental study was conducted through the use of a specifically designed mobile application installed on users' mobile phones. The goals were, first, to learn about the users' personality and the applications they downloaded to their phones and, second, to recommend applications to users via notifications through the experimental application and learn about user behaviour in the mobile environment. The study explores how the personality features of users affect their compliance with recommendations. It is found that conscientiousness is positively related to accepting recommended applications and that agreeableness is related to a preference for editor's choice applications. Furthermore, the applications owned by the user, the composition of applications across categories, and their relation to personality features are explored. It is shown that the number of applications users own and their categories differ according to gender and personality, and that having similar applications and owning more applications in specific categories increase the probability of accepting recommended applications.
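To make the reported kind of relationship concrete, the toy sketch below tests an association between a trait score and a binary accept/reject outcome with a point-biserial correlation on synthetic data. It is not the paper's analysis, and the numbers carry no empirical meaning.

```python
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(3)
conscientiousness = rng.uniform(1, 5, size=200)              # synthetic Big Five trait scores
accept_prob = 1 / (1 + np.exp(-(conscientiousness - 3)))     # assumed positive link
accepted = (rng.random(200) < accept_prob).astype(int)       # 1 = accepted a recommendation

r, p = pointbiserialr(accepted, conscientiousness)
print(f"point-biserial r = {r:.2f}, p = {p:.3g}")
```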
IEEE Journal of Biomedical and Health Informatics | 2015
Serdar Murat Öztaner; Tugba Taskaya Temizel; S. Remzi Erdem; Mahmut Özer
The incorporation of pharmacogenomic information into drug dosing estimation formulations has been shown to increase the accuracy of drug dosing and decrease the frequency of adverse drug effects in many studies in the literature. In this paper, an estimation framework based on Bayesian structural equation modeling, driven by pharmacogenomics, is proposed. The results show that the model compares favorably with linear models in terms of prediction and explaining the variation in warfarin dosing.
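For orientation, the general form of a structural equation model in standard LISREL-style notation is given below; the paper's specific latent variables, pharmacogenomic covariates and priors are not reproduced. Here η and ξ denote latent endogenous and exogenous variables, y and x their observed indicators, and B, Γ and Λ the path and loading matrices.

```latex
% Structural model: relations among latent variables.
\eta = B\,\eta + \Gamma\,\xi + \zeta
% Measurement model: observed indicators of the latent variables.
y = \Lambda_y\,\eta + \varepsilon, \qquad x = \Lambda_x\,\xi + \delta
```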
Advanced Video and Signal Based Surveillance | 2014
Ayse Elvan Gunduz; Tugba Taskaya Temizel; Alptekin Temizel
With the increasing focus on safety and security in public areas, anomaly detection in video surveillance systems has become increasingly important. In this paper, we describe a method that models temporal behavior and detects behavioral anomalies in the scene using probabilistic graphical models. The Coupled Hidden Markov Model (CHMM) method that we use shows that sparse features obtained via feature detection and description algorithms are suitable for modeling temporal behavior patterns and performing global anomaly detection. We model the scene using these features, perform perspective-independent velocity analysis for anomaly detection purposes, and demonstrate the results obtained on the UCSD pedestrian walkway dataset. The training is unsupervised and does not require any anomalous data, which eliminates the need to obtain anomaly data and to define anomalies in advance.
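The likelihood-based detection step can be sketched with a single Gaussian HMM standing in for the coupled HMM of the paper; the synthetic velocity-like features, the model size and the threshold below are assumptions for illustration only.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
# Synthetic stand-in for per-clip sparse-feature velocity sequences (normal data).
normal_clips = [rng.normal(1.0, 0.2, size=(50, 2)) for _ in range(20)]
X = np.vstack(normal_clips)
lengths = [len(clip) for clip in normal_clips]

model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
model.fit(X, lengths)

# Per-frame log-likelihood floor estimated from the training clips.
threshold = min(model.score(clip) / len(clip) for clip in normal_clips)

test_clip = rng.normal(3.0, 0.5, size=(50, 2))       # unusually fast motion
score = model.score(test_clip) / len(test_clip)
print("anomalous" if score < threshold else "normal")
```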
Computer Methods and Programs in Biomedicine | 2014
Rahime Belen Sağlam; Tugba Taskaya Temizel
Studies in the health domain have shown that health websites provide imperfect information and give recommendations that are not up to date with the recent literature, even when their last-modified dates are quite recent. In this paper, we propose a framework which automatically assesses the timeliness of the content of health websites using evidence-based medicine. Our aim is to assess the accordance of website content with the current literature and its timeliness, disregarding the update time stated on the websites. The proposed method is based on automatic term recognition, relevance feedback and information retrieval techniques in order to generate time-aware structured queries. We tested the framework on diabetes health websites archived between 2006 and 2013 by Archive-It, using the American Diabetes Association's (ADA) guidelines. The results showed that the proposed framework achieves 65% and 77% accuracy in detecting the timeliness of the web content according to years and pre-determined time intervals, respectively. Information seekers and website owners may benefit from the proposed framework in finding relevant and up-to-date diabetes websites.
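One ingredient of such time-aware querying can be sketched with plain TF-IDF similarity: score a page against guideline-derived query terms from different periods and pick the best match. The toy guideline snippets and the page text are invented placeholders, and the paper's automatic term recognition and relevance feedback steps are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical guideline-derived query terms for two periods.
yearly_queries = {
    2008: "oral glucose tolerance test fasting plasma glucose diagnosis",
    2010: "hba1c 6.5 percent diagnostic criterion diabetes diagnosis",
}
page_text = "The site recommends using an HbA1c value of 6.5% to diagnose diabetes."

corpus = list(yearly_queries.values()) + [page_text]
tfidf = TfidfVectorizer().fit_transform(corpus)

n = len(yearly_queries)
scores = cosine_similarity(tfidf[n], tfidf[:n]).ravel()
best_year = list(yearly_queries)[int(scores.argmax())]
print("content most consistent with the", best_year, "query terms")
```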
IET Computer Vision | 2016
Ayse Elvan Gunduz; Cihan Ongun; Tugba Taskaya Temizel; Alptekin Temizel
The coherent nature of crowd movement allows crowd motion to be represented using sparse features. However, surveillance videos recorded at different periods of time are likely to have different crowd densities and motion characteristics. These varying scene properties necessitate the use of different models for an effective representation of behaviour at different periods. In this study, a density-aware approach is proposed to detect motion-based anomalies in scenes with varying crowd densities. In training, the sparse features are modelled using separate hidden Markov models, each of which becomes an expert for specific scene characteristics. These models are then used for anomaly detection. The proposed method automatically adapts to changing scene dynamics by switching to the most representative model at each frame. The authors demonstrate the effectiveness and real-time performance of the proposed method on real-life datasets as well as on simulated crowd videos that they generated and made publicly available for download.
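The switching idea can be sketched as a small set of HMM experts, one per density regime, with each new window assigned to the expert that scores it highest. The synthetic features and the two-regime setup are illustrative assumptions, not the paper's sparse-feature models.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)
# Synthetic stand-ins for motion features under two crowd-density regimes.
regimes = {
    "sparse crowd": rng.normal(0.5, 0.1, size=(300, 2)),
    "dense crowd": rng.normal(2.0, 0.3, size=(300, 2)),
}

experts = {
    name: GaussianHMM(n_components=2, covariance_type="diag",
                      n_iter=30, random_state=0).fit(data)
    for name, data in regimes.items()
}

window = rng.normal(2.1, 0.3, size=(30, 2))          # features from the current frame window
best = max(experts, key=lambda name: experts[name].score(window))
print("switching to expert:", best)
```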