Publication


Featured research published by Tayfun Tuna.


Technical Symposium on Computer Science Education | 2012

Development and evaluation of indexed captioned searchable videos for STEM coursework

Tayfun Tuna; Jaspal Subhlok; Lecia Barker; Varun Varghese; Olin Johnson; Shishir K. Shah

Videos of classroom lectures have proven to be a popular and versatile learning resource. This paper reports on videos featuring Indexing, Captioning, and Search capability (ICS Videos). The goal is to allow a user to rapidly search and access a topic of interest, a key shortcoming of the standard video format. A lecture is automatically divided into logical indexed video segments by analyzing video frames. Text is automatically identified with OCR technology enhanced with image transformations to drive keyword search. Captions can be added to videos. The ICS video player integrates indexing, search, and captioning in video playback and has been used by dozens of courses and thousands of students. This paper reports on the development and evaluation of the ICS Videos framework and an assessment of its value as an academic learning resource.


Applied Imagery Pattern Recognition Workshop | 2011

Indexing and keyword search to ease navigation in lecture videos

Tayfun Tuna; Jaspal Subhlok; Shishir K. Shah

Lecture videos have been commonly used to supplement in-class teaching and for distance learning. Videos recorded during in-class teaching and made accessible online are a versatile resource on par with a textbook and the classroom itself. Nonetheless, the adoption of lecture videos has been limited, in large part due to the difficulty of quickly accessing the content of interest in a long video lecture. In this work, we present "video indexing" and "keyword search", which facilitate access to video content and enhance the user experience. Video indexing divides a video lecture into segments indicating different topics by identifying scene changes based on the analysis of the difference image from a pair of video frames. We propose an efficient indexing algorithm that leverages the unique features of lecture videos. Binary search with frame sampling is employed to efficiently analyze long videos. Keyword search identifies video segments that match a particular keyword. Since text in a video frame often contains a diversity of colors, font sizes, and backgrounds, our text detection approach requires specialized preprocessing followed by the use of off-the-shelf OCR engines, which are designed primarily for scanned documents. We present image enhancements, text segmentation and inversion, to increase the detection accuracy of OCR tools. Experimental results on a suite of diverse video lectures were used to validate the methods developed in this work. Average processing time for a one-hour lecture is around 14 minutes on a typical desktop. Search accuracy of three distinct OCR engines (Tesseract, GOCR, and MODI) increased significantly with our preprocessing transformations, yielding an overall combined accuracy of 97%. The work presented here is part of a video streaming framework deployed at multiple campuses serving hundreds of lecture videos.
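The core indexing idea in the abstract, detecting a slide or topic change from the difference image between a pair of frames, can be sketched roughly as below. This is an illustrative heuristic with made-up thresholds, not the authors' published algorithm: the paper's version adds binary search with frame sampling for efficiency on long videos, while this sketch simply scans consecutive frames.

```python
import numpy as np

def frame_difference_ratio(frame_a: np.ndarray, frame_b: np.ndarray,
                           threshold: int = 30) -> float:
    """Fraction of pixels whose grayscale intensity changed by more than
    `threshold` between two frames (both HxW uint8 arrays)."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return float(np.mean(diff > threshold))

def find_segment_boundaries(frames: list, change_ratio: float = 0.2) -> list:
    """Indices where consecutive frames differ enough to suggest a slide
    change; the 0.2 cutoff is an arbitrary illustrative value."""
    return [i for i in range(1, len(frames))
            if frame_difference_ratio(frames[i - 1], frames[i]) > change_ratio]

# Synthetic "lecture": two identical dark frames, then a bright new slide.
slide_a = np.zeros((120, 160), dtype=np.uint8)
slide_b = np.full((120, 160), 200, dtype=np.uint8)
boundaries = find_segment_boundaries([slide_a, slide_a, slide_b])
print(boundaries)  # → [2]
```

In practice the threshold would need tuning per recording setup, since camera noise, presenter motion, and compression artifacts all inflate the difference image without indicating a topic change.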


Social Network Analysis and Mining | 2016

User characterization for online social networks

Tayfun Tuna; Esra Akbas; Ahmet Aksoy; Muhammed Abdullah Canbaz; Umit Karabiyik; Bilal Gonen; Ramazan Savas Aygün

Online social network analysis has attracted great attention, with a vast number of users sharing information and the availability of APIs that help to crawl online social network data. In this paper, we survey research relevant to user characterization, since online users may not always reveal their true identity or attributes. We focus especially on user attribute determination, such as gender and age; user behavior analysis, such as motives for deception; mental models that are indicators of user behavior; user categorization, such as bots versus humans; and entity matching across different social networks. We believe our summary of user characterization research will provide important insights for researchers and lead to better services for online users.


Frontiers in Education Conference | 2014

A crowdsourcing caption editor for educational videos

Rucha Deshpande; Tayfun Tuna; Jaspal Subhlok; Lecia Barker

Video of a classroom lecture has been shown to be a versatile learning resource comparable to a textbook. Captions in videos are highly valued by students, especially those with hearing disabilities and those whose first language is not English. Captioning by automatic speech recognition (ASR) tools is of limited use because of low and variable accuracy. Manual captioning with existing tools is a slow, tedious, and expensive task. In this work, we present a web-based crowdsourcing editor to add or correct captions for video lectures. The editor allows a group, e.g., students in a class, to correct the captions for different parts of a video lecture simultaneously. Users can review and correct each other's work. The caption editor has been successfully employed to caption STEM coursework videos. Our findings, based on survey results and interviews, indicate that this crowdsourcing tool is effective and efficient for captioning lecture videos and has considerable value in educational practice. The caption editor is integrated with the Indexed Captioned Searchable (ICS) Videos framework at the University of Houston, which has been used by dozens of courses and thousands of students. The ICS Videos framework, including the captioning tool, is open source software available to educational institutions.


Frontiers in Education Conference | 2015

Topic based segmentation of classroom videos

Tayfun Tuna; Mahima Joshi; Varun Varghese; Rucha Deshpande; Jaspal Subhlok; Rakesh M. Verma

Video of classroom lectures is a valuable and increasingly popular learning resource. A major weakness of the video format is the inability to quickly access the content of interest. The goal of this work is to automatically partition a lecture video into topical segments, which are then presented to the user in a customized video player. The approach taken in this work is to identify topics based on text similarities across the video. The paper investigates the use of screen text extracted by Optical Character Recognition tools, as well as speech text extracted by Automatic Speech Recognition tools. An automatic text-based segmentation algorithm is developed to identify topic changes and evaluated on a set of twenty-five lecture videos. The key conclusions are as follows: screen text is a better guide to discovering topic changes than speech text; the effectiveness of speech text improves significantly when its recognition errors are corrected; and combining screen text with accurate speech text can improve accuracy further. Results are presented from surveys showing a high level of satisfaction among student users of automatically segmented videos. The paper also discusses the limits of automatic segmentation and the reasons why it is far from perfect.
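The text-similarity idea behind this segmentation, marking a topic boundary where the vocabulary of adjacent screen-text snippets stops overlapping, can be sketched as a bag-of-words cosine comparison. This is a minimal stand-in for illustration only; the similarity cutoff and the example slide texts are invented, and the paper's actual algorithm is not reproduced here.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count bags."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def topic_boundaries(slide_texts, min_sim=0.2):
    """Indices where adjacent slides share little vocabulary, suggesting a
    topic change; min_sim is an arbitrary illustrative threshold."""
    bags = [Counter(t.lower().split()) for t in slide_texts]
    return [i for i in range(1, len(bags))
            if cosine(bags[i - 1], bags[i]) < min_sim]

slides = [
    "binary search trees insertion deletion",
    "balanced binary search trees rotations",
    "graph traversal breadth first search queue",
]
print(topic_boundaries(slides))  # → [2] (trees topic ends, graphs begin)
```

A real pipeline would also have to cope with OCR noise in the screen text, which is one reason the paper finds corrected speech text useful as a complementary signal.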


Frontiers in Education Conference | 2014

Student perceptions of indexed, searchable videos of faculty lectures

Lecia Barker; Christopher Lynnly Hovey; Jaspal Subhlok; Tayfun Tuna


Archive | 2013

Real-Time Risk Prediction During Drilling Operations

Serkan Dursun; Tayfun Tuna; Kaan Duman


SPE Annual Technical Conference and Exhibition | 2014

A Workflow for Intelligent Data-driven Analytics Software Development in Oil and Gas Industry

Serkan Dursun; Kaan Duman; Tayfun Tuna; Mamta Abbas; James Ding


SPE Western Regional Meeting | 2017

A Comprehensive Multi-Platform Petroleum Engineering Toolbox for Oil and Gas Industry

Karthik Balaji; Anuj Suhag; Rahul Ranjith; Tayfun Tuna; Cenk Temizel; Fred Aminzadeh


SPE Kuwait Oil & Gas Show and Conference | 2017

A Practical Petroleum Engineering Toolkit

Cenk Temizel; Tayfun Tuna; Bao Jia; Dike Putra; Raul Moreno

Collaboration


Dive into Tayfun Tuna's collaboration.

Top Co-Authors


Kaan Duman

Rensselaer Polytechnic Institute


Lecia Barker

University of Texas at Austin
