Turgay Yilmaz
Middle East Technical University
Publications
Featured research published by Turgay Yilmaz.
IEEE Transactions on Knowledge and Data Engineering | 2013
Yakup Yildirim; Adnan Yazici; Turgay Yilmaz
The recent increase in the use of video-based applications has revealed the need for extracting the content of videos. Raw data and low-level features alone are not sufficient to fulfill the user's needs; a deeper understanding of the content at the semantic level is required. Currently, the gap between low-level representative features and high-level semantic content is bridged by manual techniques, which are inefficient, subjective, costly in time, and limit querying capabilities. Here, we propose a semantic content extraction system that allows the user to query and retrieve objects, events, and concepts that are extracted automatically. We introduce an ontology-based fuzzy video semantic content model that uses spatial/temporal relations in event and concept definitions. This metaontology definition provides a wide-domain applicable rule construction standard that allows the user to construct an ontology for a given domain. In addition to domain ontologies, we use additional rule definitions (without using ontology) to lower the cost of computing spatial relations and to define some complex situations more effectively. The proposed framework has been fully implemented and tested on three different domains, where we obtained satisfactory precision and recall rates for object, event and concept extraction.
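To make the fuzzy, relation-based event definitions concrete, here is a minimal sketch of how an event could be scored as a fuzzy conjunction of object confidences and a spatial relation. All names, the linear membership function, the threshold scale and the example domain are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch (not the paper's implementation): a fuzzy event rule built
# from spatial relations between extracted objects.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str
    x: float           # center x of bounding box
    y: float           # center y (image coordinates, y grows downward)
    confidence: float  # membership degree from the object extractor

def fuzzy_above(a: DetectedObject, b: DetectedObject, scale: float = 100.0) -> float:
    """Degree to which object a is above object b, in [0, 1]."""
    dy = b.y - a.y  # positive when a is higher in the frame
    return max(0.0, min(1.0, dy / scale))

def event_degree(ball: DetectedObject, crossbar: DetectedObject) -> float:
    """Fuzzy conjunction (min) of object confidences and a spatial relation,
    e.g. a 'ball over crossbar' event in a sports domain."""
    return min(ball.confidence, crossbar.confidence, fuzzy_above(ball, crossbar))

ball = DetectedObject("ball", x=320, y=80, confidence=0.9)
bar = DetectedObject("crossbar", x=320, y=150, confidence=0.8)
print(f"event membership: {event_degree(ball, bar):.2f}")  # 0.70
```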
Ad Hoc Networks | 2015
Fatih Senel; Kemal Akkaya; Melike Erol-Kantarci; Turgay Yilmaz
Self-deployment of sensors with maximized coverage in Underwater Acoustic Sensor Networks (UWASNs) is challenging due to the difficulty of accessing 3-D underwater environments. The problem is further compounded if connectivity of the final network is desired. One possible approach is to drop the sensors on the water surface and then move them to certain depths to maximize the 3-D coverage while maintaining the initial connectivity. In this paper, we propose a fully distributed node deployment scheme for UWASNs which only requires random dropping of sensors on the water surface. The idea is to determine the connected dominating set (CDS) of the initial network on the surface and then adjust the depths of all neighbors of a particular dominator node (i.e., the backbone of the network) to minimize the coverage overlaps among them while still keeping connectivity with the dominator. The process starts with a leader node and spans all the dominators in the network to reposition them. In addition to depth adjustment, we study the effects on network coverage and connectivity of possible topology alterations due to water mobility caused by factors such as waves, winds, currents, vortices and random surface effects. On the one hand, node mobility may stretch the topology in 2-D, which helps maximize the coverage in 3-D. On the other hand, mobility may partition the network, disconnecting some nodes from the rest of the topology. We investigate the best deployment time, at which 2-D coverage is maximized while the network is still connected. To simulate the mobility of the sensors, we implement the meandering current mobility model, an existing mobility model for UWASNs that fits our needs. The performance of the proposed approach is validated through simulation. Simulation results indicate that connectivity can be guaranteed regardless of the transmission-to-sensing range ratio, with a coverage very close to that of a coverage-aware deployment approach.
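A rough way to see the CDS backbone idea: the internal (non-leaf) nodes of any spanning tree of a connected graph form a connected dominating set. The sketch below, with an assumed adjacency list and leader choice, is illustrative only and is not the authors' distributed algorithm.

```python
# Illustrative sketch: BFS from the leader; every node that acquires a child
# in the tree is internal, and the internal nodes together form a CDS.

from collections import deque

def spanning_tree_cds(adj: dict, leader) -> set:
    """Return a CDS of the connected graph `adj` (node -> neighbor list)."""
    parent = {leader: None}
    queue = deque([leader])
    internal = set()
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                internal.add(u)   # u is a non-leaf, hence a dominator
                queue.append(v)
    return internal

# Surface network right after the random drop (node -> acoustic neighbors).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}
print(spanning_tree_cds(adj, leader=0))  # {0, 1, 3}
```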
Local Computer Networks | 2013
Fatih Senel; Kemal Akkaya; Turgay Yilmaz
Self-deployment of sensors with maximized coverage in Underwater Acoustic Sensor Networks (UWASNs) is challenging due to the difficulty of accessing 3-D underwater environments. The problem is further compounded if connectivity of the final network is required. One possible approach is to drop the sensors on the surface and then move them to certain depths to maximize the 3-D coverage while maintaining connectivity. In this paper, we propose a purely distributed node deployment scheme for UWASNs which only requires random dropping of sensors on the water surface. The goal is to expand the initial network into 3-D with maximized coverage and guaranteed connectivity to a surface station. The idea is to determine the connected dominating set of the initial network and then adjust the depths of all dominatee and dominator neighbors of a particular dominator node to minimize the coverage overlaps among them while still keeping connectivity with the dominator. The process starts with a leader node and spans all the dominators in the network for repositioning. Simulation results indicate that connectivity can be guaranteed regardless of the transmission-to-sensing range ratio, with a coverage very close to that of a coverage-aware deployment approach.
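The depth-adjustment step can also be sketched under simple assumptions: push each neighbor of a dominator to a distinct depth so the sensing spheres stop overlapping, but never deeper than the communication range so the link to the dominator survives. The even 2*r_s spacing below is an illustrative simplification, not the paper's exact rule.

```python
# Hedged sketch of depth assignment for one dominator's neighbors.

def assign_depths(num_neighbors: int, r_c: float, r_s: float) -> list[float]:
    """Return a depth (meters below the surface dominator) per neighbor.
    Neighbors ideally sit 2*r_s apart to avoid sensing overlap, but depth
    is capped at r_c to preserve connectivity with the dominator."""
    depths = []
    for i in range(num_neighbors):
        d = (i + 1) * 2 * r_s          # 2 * sensing range apart, no overlap
        depths.append(min(d, r_c))     # but never beyond communication range
    return depths

print(assign_depths(num_neighbors=3, r_c=100.0, r_s=25.0))  # [50.0, 100.0, 100.0]
```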
Conference on Image and Video Retrieval | 2007
Yakup Yildirim; Turgay Yilmaz; Adnan Yazici
Current solutions are still far from the ultimate goal of enabling users to retrieve the desired video clip from massive amounts of visual data in a semantically meaningful manner. In this study we propose a video database model (OVDAM) that provides automatic object, event and concept extraction. Using training sets and expert opinions, low-level feature values for objects and relations between objects are determined. The N-Cut image segmentation algorithm is used to determine segments in video keyframes, and a genetic algorithm-based classifier is used to classify segments (candidate objects) into objects. At the top level, an ontology of objects, events and concepts is used; objects and/or events use all this information to generate events and concepts. The system has a reliable video data model, which gives the user the ability to perform ontology-supported fuzzy querying. RDF is used to represent metadata, OWL is used to represent the ontology, and RDQL is used for querying. Queries containing objects, events, spatio-temporal clauses, concepts and low-level features are handled.
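The paper queries its RDF metadata with RDQL (Jena's early query language); since RDQL has long been superseded by SPARQL, the sketch below expresses an analogous object query in SPARQL via Python's rdflib. The URIs, class names and triples are made up for illustration.

```python
# Hedged sketch: RDF metadata about extracted objects, queried with SPARQL
# (standing in for the RDQL the paper uses).

from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/video#")  # hypothetical vocabulary
g = Graph()
g.add((EX.obj1, RDF.type, EX.Ball))          # an extracted object instance
g.add((EX.obj1, EX.appearsIn, EX.keyframe42))

results = g.query("""
    PREFIX ex: <http://example.org/video#>
    SELECT ?o ?kf WHERE { ?o a ex:Ball ; ex:appearsIn ?kf . }
""")
for row in results:
    print(row.o, row.kf)  # every ball object and the keyframe it appears in
```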
International Conference on Multimedia Retrieval | 2012
Turgay Yilmaz; Elvan Gulen; Adnan Yazici; Masaru Kitsuregawa
Despite the extensive number of studies on multimodal information fusion, the issue of determining the optimal modalities has not been adequately addressed yet. In this study, a RELIEF-based multimodal feature selection approach (RELIEF-RDR) is proposed. The original RELIEF algorithm is extended to address weaknesses in three major issues: multi-labeled data, noise, and class-specific feature selection. To overcome these weaknesses, the discrimination-based weighting mechanism of RELIEF is supported with two additional concepts, the representation and reliability capabilities of features, without an increase in computational complexity. These capabilities are derived from statistics on the dissimilarities of training instances. The experiments conducted on the TRECVID 2007 dataset validate the superiority of RELIEF-RDR over RELIEF.
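For context, here is a compact sketch of the original RELIEF weight update that RELIEF-RDR builds on; the representation/reliability extensions themselves are not reproduced, and the sampling budget and binary-label setting are simplifying assumptions.

```python
# Classic RELIEF: reward features that separate an instance from its nearest
# miss and penalize those that separate it from its nearest hit.

import numpy as np

def relief_weights(X: np.ndarray, y: np.ndarray, n_iter: int = 100, seed: int = 0):
    """X: (n_samples, n_features) scaled to [0, 1]; y: binary labels.
    Returns one relevance weight per feature."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)       # L1 distance to instance i
        dists[i] = np.inf                          # exclude the instance itself
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dists, np.inf))   # nearest same-class
        miss = np.argmin(np.where(diff, dists, np.inf))  # nearest other-class
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter
```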
Flexible Query Answering Systems | 2011
Turgay Yilmaz; Adnan Yazici; Yakup Yildirim
Combining multiple features is an empirically validated approach in the literature that increases querying accuracy. However, it entails processing the intrinsically high-dimensional feature space and complicates building an efficient system. Two primary problems arise in efficient querying: the representation of images and the selection of features. In this paper, a class-specific feature selection approach with a dissimilarity-based representation method is proposed. The class-specific features are determined using the representativeness and discriminativeness of the features for each image class. The calculations are based on statistics on the dissimilarity values of training images.
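One plausible reading of "representativeness" and "discriminativeness" from dissimilarity statistics is sketched below; the exact formulas in the paper may differ, so treat both scores as illustrative assumptions (and the sketch assumes at least two images per class).

```python
# Hedged sketch: score one feature for one class from its pairwise
# dissimilarity matrix over the training images.

import numpy as np

def class_specific_scores(D: np.ndarray, labels: np.ndarray, cls: int):
    """D: (n, n) pairwise dissimilarities under one feature; labels: class ids.
    Returns (representativeness, discriminativeness) of the feature for cls."""
    in_cls = labels == cls
    intra = D[np.ix_(in_cls, in_cls)]               # within-class distances
    inter = D[np.ix_(in_cls, ~in_cls)]              # distances to other classes
    mean_intra = intra[np.triu_indices_from(intra, k=1)].mean()
    representativeness = 1.0 / (1.0 + mean_intra)   # tight class -> high score
    discriminativeness = inter.mean() - mean_intra  # far from others -> high
    return representativeness, discriminativeness
```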
International Conference on Computer Communications and Networks | 2012
Hakan Oztarak; Turgay Yilmaz; Kemal Akkaya; Adnan Yazici
Object classification from video frames has become more challenging in the context of Wireless Multimedia Sensor Networks (WMSNs). This is mainly because these networks are severely resource constrained in terms of the deployed camera sensors, whose battery, processor, memory and storage are all limited. Such limited resources mandate classification techniques that are efficient in energy consumption, space usage and processing power. In this paper, we propose an efficient yet accurate classification algorithm for WMSNs using a genetic algorithm-based classifier. Efficiency is achieved by extracting two simple but effective features of the objects from the video frames: the shape of the minimum bounding box of the object and the speed of the object in the monitored region. Classification accuracy, on the other hand, is provided by a genetic algorithm whose space/memory requirements are minimal. The training of this genetic algorithm-based classifier is done offline, and the classifier is stored at each camera in advance to perform online classification during surveillance missions. The experiments indicate that a promising classification accuracy can be achieved without introducing major energy and storage overhead on camera sensors.
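The two lightweight features described above can be sketched directly: the shape (aspect ratio) of the object's minimum bounding box, and its speed across frames. The pixel/frame units and field names are assumptions for illustration.

```python
# Sketch of the two per-object features fed to the classifier.

from dataclasses import dataclass
import math

@dataclass
class Track:
    boxes: list   # [(x_min, y_min, x_max, y_max), ...] per frame
    fps: float

def mbb_aspect_ratio(box) -> float:
    """Width/height of the minimum bounding box (the shape feature)."""
    x0, y0, x1, y1 = box
    return (x1 - x0) / max(1.0, (y1 - y0))

def speed(track: Track) -> float:
    """Average center displacement per second over the track."""
    centers = [((b[0] + b[2]) / 2, (b[1] + b[3]) / 2) for b in track.boxes]
    dist = sum(math.dist(a, b) for a, b in zip(centers, centers[1:]))
    return dist * track.fps / max(1, len(centers) - 1)

t = Track(boxes=[(10, 10, 30, 50), (14, 10, 34, 50), (18, 10, 38, 50)], fps=25)
print(mbb_aspect_ratio(t.boxes[0]), speed(t))  # 0.5  100.0
```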
Multimedia Tools and Applications | 2018
Adnan Yazici; Murat Koyuncu; Turgay Yilmaz; Saeid Sattari; Mustafa Sert; Elvan Gulen
This paper introduces an intelligent multimedia information system that exploits machine learning and database technologies. The system extracts the semantic contents of videos automatically using the visual, auditory and textual modalities, then stores the extracted contents in an appropriate format so they can be retrieved efficiently in subsequent requests for information. The semantic contents are extracted from the three modalities separately; afterwards, the outputs are fused to increase the accuracy of the object extraction process. The semantic contents extracted through this information fusion are stored in an intelligent, fuzzy object-oriented database system. To answer user queries efficiently, a multidimensional indexing mechanism that combines the extracted high-level semantic information with low-level video features is developed. The proposed multimedia information system is implemented as a prototype, and its performance is evaluated on news video datasets by answering content- and concept-based queries over all these modalities and their fused data. The performance results show that the developed multimedia information system is robust and scalable for large-scale multimedia applications.
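The idea of indexing low-level features together with high-level concept scores can be made concrete with a small sketch. A SciPy k-d tree stands in for the paper's own multidimensional index structure, and the dimensionalities are made up.

```python
# Illustrative sketch: one combined vector per video shot, so a single
# nearest-neighbor lookup serves both content- and concept-based queries.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
low_level = rng.random((1000, 8))   # e.g. color/texture descriptors per shot
concepts = rng.random((1000, 4))    # e.g. P(person), P(car), ... per shot
index = cKDTree(np.hstack([low_level, concepts]))

query = rng.random(12)              # combined query vector
dists, ids = index.query(query, k=5)  # the 5 most similar video shots
print(ids)
```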
International Journal of Multimedia Data Engineering and Management | 2012
Elvan Gulen; Turgay Yilmaz; Adnan Yazici
Multimedia data by its very nature contains multimodal information. For a successful analysis of multimedia content, all available multimodal information should be utilized. Additionally, since concepts can contain valuable cues about other concepts, concept interaction is a crucial source of multimedia information and helps to increase fusion performance. The aim of this study is to show that integrating the existing modalities along with concept interactions can yield better performance in detecting semantic concepts. Therefore, in this paper, the authors present a multimodal fusion approach that integrates semantic information obtained from various modalities along with additional semantic cues. The experiments conducted on the TRECVID 2007 and CCV datasets validate the superiority of such a combination over the best single modality and alternative modality combinations. The results show that the proposed fusion approach provides a 16.7% relative performance gain on the TRECVID dataset and a 47.7% relative improvement on the CCV database over the best unimodal approaches.
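A hedged sketch of the concept-interaction idea: after per-modality fusion, each concept score is nudged by the scores of related concepts. The correlation matrix, modality weights and 0.2 mixing factor below are illustrative assumptions, not the paper's learned values.

```python
# Weighted late fusion followed by a concept-interaction adjustment.

import numpy as np

def fuse_with_interactions(modality_scores: np.ndarray,
                           modality_weights: np.ndarray,
                           concept_corr: np.ndarray,
                           alpha: float = 0.2) -> np.ndarray:
    """modality_scores: (n_modalities, n_concepts); concept_corr:
    (n_concepts, n_concepts) with zero diagonal. Returns adjusted scores."""
    fused = modality_weights @ modality_scores        # weighted late fusion
    return (1 - alpha) * fused + alpha * (concept_corr @ fused)

scores = np.array([[0.9, 0.2], [0.6, 0.4]])   # visual and audio, 2 concepts
weights = np.array([0.7, 0.3])
corr = np.array([[0.0, 0.8], [0.8, 0.0]])     # e.g. "road" and "car" co-occur
print(fuse_with_interactions(scores, weights, corr))  # [0.6896 0.3376]
```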
Conference on Multimedia Modeling | 2016
Adnan Yazici; Saeid Sattari; Turgay Yilmaz; Mustafa Sert; Murat Koyuncu; Elvan Gulen
Managing a large volume of multimedia data containing various modalities (visual, audio, and text) reveals the need for a specialized multimedia database system (MMDS) to efficiently model, process, store and retrieve video shots based on their semantic content. This demo introduces METU-MMDS, an intelligent MMDS that employs both machine learning and database techniques. The system extracts semantic content automatically from visual, audio and textual data, stores the extracted content in an appropriate format, and uses it to efficiently retrieve video shots. The system architecture supports various multimedia query types, including unimodal querying, multimodal querying, query-by-concept and query-by-example, and utilizes a multimedia index structure for efficiently querying multi-dimensional multimedia data. We demonstrate METU-MMDS for semantic data extraction from videos and for complex multimedia querying, considering content- and concept-based queries involving all modalities.
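To make the listed query types concrete, here is a purely hypothetical client-API sketch; the abstract does not expose METU-MMDS's real interface, so every class and method name below is an assumption.

```python
# Hypothetical interface sketch for the query types named above.

class MMDSClient:
    def query_by_concept(self, concept: str, top_k: int = 10):
        ...  # concept-based retrieval over indexed semantic content

    def query_by_example(self, clip_path: str, top_k: int = 10):
        ...  # similarity search against the multimedia index

    def multimodal_query(self, text: str = None, image_path: str = None,
                         audio_path: str = None, top_k: int = 10):
        ...  # combine whichever modalities the user supplies

db = MMDSClient()
shots = db.query_by_concept("goal", top_k=5)           # concept-based
similar = db.query_by_example("clips/goal_scene.mp4")  # example-based
```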