Publication


Featured research published by Ahmet Ekin.


IEEE Transactions on Image Processing | 2003

Automatic soccer video analysis and summarization

Ahmet Ekin; A.M. Tekalp; R. Mehrotra

We propose a fully automatic and computationally efficient framework for analysis and summarization of soccer videos using cinematic and object-based features. The proposed framework includes some novel low-level processing algorithms, such as dominant color region detection, robust shot boundary detection, and shot classification, as well as some higher-level algorithms for goal detection, referee detection, and penalty-box detection. The system can output three types of summaries: i) all slow-motion segments in a game; ii) all goals in a game; iii) slow-motion segments classified according to object-based features. The first two types of summaries are based on cinematic features only for speedy processing, while the summaries of the last type contain higher-level semantics. The proposed framework is efficient, effective, and robust. It is efficient in the sense that there is no need to compute object-based features when cinematic features are sufficient for the detection of certain events, e.g., goals in soccer. It is effective in the sense that the framework can also employ object-based features when needed to increase accuracy (at the expense of more computation). The efficiency, effectiveness, and robustness of the proposed framework are demonstrated over a large data set, consisting of more than 13 hours of soccer video, captured in different countries and under different conditions.
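
One of the low-level steps the framework builds on is shot boundary detection. The sketch below is a generic histogram-difference illustration of that step, assuming grayscale frames and an arbitrary threshold; it is not the paper's exact algorithm.

```python
# A minimal sketch of histogram-based shot boundary detection, one of the
# low-level steps such a framework relies on. Generic illustration only:
# frame data, bin count, and the threshold are assumptions chosen for
# readability, not the paper's values.
import numpy as np

def frame_histogram(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalized grayscale histogram of a frame (H x W uint8 array)."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def detect_shot_boundaries(frames, threshold: float = 0.4):
    """Flag a boundary wherever consecutive histograms differ strongly."""
    boundaries = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist = frame_histogram(frame)
        if prev_hist is not None:
            # L1 distance between consecutive normalized histograms.
            if np.abs(hist - prev_hist).sum() > threshold:
                boundaries.append(i)
        prev_hist = hist
    return boundaries

# Usage with synthetic frames: a dark "shot" followed by a bright one.
frames = [np.full((72, 128), 30, np.uint8)] * 5 + [np.full((72, 128), 200, np.uint8)] * 5
print(detect_shot_boundaries(frames))  # -> [5]
```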


international conference on multimedia and expo | 2003

Generic play-break event detection for summarization and hierarchical sports video analysis

Ahmet Ekin; A. Murat Tekalp

This paper proposes a single generic real-time (or near real-time) play-break event detection algorithm for multiple sports, which include football, tennis, basketball, and soccer. The proposed algorithm only uses shot-based generic cinematic features, such as shot type and shot length. Detected play-break events are employed for two purposes: 1) all plays in certain sports, such as football and tennis, are presented as summaries, and 2) play-break events, as part of a hierarchical event detection scheme, determine the segments-of-interest for the other event detection algorithms. An example of such event detection algorithms is given for soccer goal events, where the proposed soccer goal detection algorithm exploits the common cinematic techniques that are employed during the breaks that follow the goal plays. We demonstrate the genericity of the proposed play-break detection algorithm over football, tennis, basketball video and the effectiveness of the proposed soccer goal detection algorithm over a large data set.
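
The toy sketch below illustrates the shot-based idea in miniature: only shot type and shot length are consulted when labeling play and break segments. The shot labels, rule, and thresholds are illustrative assumptions, not the paper's.

```python
# A toy sketch of shot-based play-break labeling from generic cinematic
# features (shot type and shot length) only. The rule and thresholds are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Shot:
    shot_type: str   # "long", "medium", or "close-up" (assumed labels)
    length_sec: float

def label_play_break(shots):
    """Label each shot as 'play' or 'break' from generic cinematic cues."""
    labels = []
    for shot in shots:
        # Heuristic: long shots of sufficient duration tend to show the game
        # in progress; close-ups and very short shots tend to occur during
        # breaks (replays, crowd, player reactions).
        if shot.shot_type == "long" and shot.length_sec >= 4.0:
            labels.append("play")
        else:
            labels.append("break")
    return labels

shots = [Shot("long", 12.0), Shot("close-up", 3.0), Shot("medium", 2.5), Shot("long", 8.0)]
print(list(zip([s.shot_type for s in shots], label_play_break(shots))))
```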


international conference on image processing | 2003

Robust dominant color region detection and color-based applications for sports video

Ahmet Ekin; A.M. Tekalp

This paper proposes a novel automatic dominant color region detection algorithm that is robust to temporal variations in the dominant color due to field, weather, and lighting conditions throughout a sports video. The algorithm automatically learns the dominant color statistics of the field independent of the sports type, and updates color statistics throughout a sporting event by using two color spaces, a control space and a primary space. The robustness of the algorithm results from adaptation of the statistics of the dominant color in the primary space with drift protection using the control space, and fusion of the information from two spaces. We also propose novel and generic color-based algorithms for referee, player-of-interest, and play-break event detection in sports video. The efficiency of the proposed algorithms is demonstrated over a dataset of various sports video, including basketball, football, golf, and soccer video.
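
A minimal sketch of the drift-protection idea follows: the dominant-color estimate in a primary space is only adapted when a control space agrees with the update. The two spaces, update rate, and tolerance below are assumptions for illustration; the paper's actual statistics and fusion rule are richer.

```python
# A minimal sketch, assuming two color spaces: dominant-color statistics
# are updated in a primary space, and an update is accepted only when a
# control space agrees, guarding against drift. Spaces, rate, and
# tolerance are illustrative assumptions.
import numpy as np

class DominantColorModel:
    def __init__(self, primary_mean, control_mean, alpha=0.1, tol=25.0):
        self.primary_mean = np.asarray(primary_mean, float)  # e.g. HSV mean of field pixels
        self.control_mean = np.asarray(control_mean, float)  # e.g. RGB mean of field pixels
        self.alpha = alpha   # adaptation rate
        self.tol = tol       # control-space tolerance (drift guard)

    def update(self, primary_obs, control_obs):
        """Adapt the primary-space mean only if the control space agrees."""
        control_obs = np.asarray(control_obs, float)
        if np.linalg.norm(control_obs - self.control_mean) <= self.tol:
            # Accepted: blend both spaces toward the new observation.
            self.primary_mean += self.alpha * (np.asarray(primary_obs, float) - self.primary_mean)
            self.control_mean += self.alpha * (control_obs - self.control_mean)
            return True
        return False  # Rejected: likely a non-field region or a drifting estimate.

model = DominantColorModel(primary_mean=[60.0, 0.6, 0.5], control_mean=[70.0, 140.0, 60.0])
print(model.update([62.0, 0.58, 0.52], [75.0, 138.0, 65.0]))  # small change -> accepted
print(model.update([10.0, 0.2, 0.9], [200.0, 40.0, 40.0]))    # large change -> rejected
```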


international conference on image processing | 2002

Semantics of multimedia in MPEG-7

Ana B. Benitez; Hawley K. Rising; Corinne Jörgensen; Riccardo Leonardi; Alessandro Bugatti; Kôiti Hasida; Rajiv Mehrotra; A. Murat Tekalp; Ahmet Ekin; Toby Walker

In this paper, we present the tools standardized by MPEG-7 for describing the semantics of multimedia. In particular, we focus on the abstraction model, entities, attributes and relations of MPEG-7 semantic descriptions. MPEG-7 tools can describe the semantics of specific instances of multimedia such as one image or one video segment but can also generalize these descriptions either to multiple instances of multimedia or to a set of semantic descriptions. The key components of MPEG-7 semantic descriptions are semantic entities such as objects and events, attributes of these entities such as labels and properties, and, finally, relations of these entities such as an object being the patient of an event. The descriptive power and usability of these tools have been demonstrated in numerous experiments and applications, which makes them key candidates for enabling intelligent applications that deal with multimedia at human levels.
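
The sketch below shows, in simplified form, the kind of structure such descriptions take: entities, attributes, and typed relations. It is an illustrative data model only, not MPEG-7's actual Description Schemes or XML syntax.

```python
# A simplified data-model sketch of semantic entities (objects, events),
# attributes (labels, properties), and typed relations between entities.
# Illustrative only; not MPEG-7's actual Description Schemes or syntax.
from dataclasses import dataclass, field

@dataclass
class SemanticEntity:
    entity_id: str
    kind: str                      # "object" or "event"
    label: str
    properties: dict = field(default_factory=dict)

@dataclass
class SemanticRelation:
    relation_type: str             # e.g. "patientOf", "agentOf"
    source: str                    # entity_id of the source entity
    target: str                    # entity_id of the target entity

# Example: "the ball (object) is the patient of a kick (event)".
ball = SemanticEntity("obj1", "object", "ball", {"color": "white"})
kick = SemanticEntity("evt1", "event", "kick", {"time": "00:12:31"})
rel = SemanticRelation("patientOf", ball.entity_id, kick.entity_id)
print(rel)
```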


international conference on acoustics, speech, and signal processing | 2003

Shot type classification by dominant color for sports video segmentation and summarization

Ahmet Ekin; A.M. Tekalp

This paper introduces a novel generic framework for sports video processing by using the common feature of most sports: the dominant color of the field. This dominant field color is automatically detected and updated to compensate for the lighting and weather changes by a robust dominant color region detection algorithm. We also introduce new shot type classification algorithms for soccer and basketball. Finally, we use shot-based low-level features for domain-specific high-level applications. Specifically, the system detects soccer goals and summarizes soccer games in real-time, and it enables basketball fans to skip fouls, free throws, and time-out events by segmenting a basketball game into plays and breaks.
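
The sketch below illustrates classification from the dominant (field) color: the fraction of field-colored pixels in a frame drives the shot class. The hue test and thresholds are assumptions, not the paper's values.

```python
# A minimal sketch of shot type classification from the dominant (field)
# color: the fraction of field-colored pixels determines the class.
# Thresholds and the hue test are illustrative assumptions.
import numpy as np

def field_pixel_ratio(frame_hue: np.ndarray, field_hue: float, tol: float = 15.0) -> float:
    """Fraction of pixels whose hue lies within tol of the learned field hue."""
    return float(np.mean(np.abs(frame_hue - field_hue) <= tol))

def classify_shot(frame_hue: np.ndarray, field_hue: float) -> str:
    ratio = field_pixel_ratio(frame_hue, field_hue)
    if ratio > 0.65:
        return "long"        # mostly field visible -> wide view of the pitch/court
    if ratio > 0.30:
        return "medium"      # partial field -> in-field medium shot
    return "close-up/other"  # little or no field visible

# Synthetic example: a frame that is roughly 80% field-colored (hue ~ 60).
rng = np.random.default_rng(0)
hues = np.where(rng.random((72, 128)) < 0.8, 60.0, 10.0)
print(classify_shot(hues, field_hue=60.0))  # -> "long"
```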


IEEE Transactions on Image Processing | 2002

Temporal segmentation of video objects for hierarchical object-based motion description

Yue Fu; Ahmet Ekin; A.M. Tekalp; R. Mehrotra

This paper describes a hierarchical approach for object-based motion description of video in terms of object motions and object-to-object interactions. We present a temporal hierarchy for object motion description, which consists of low-level elementary motion units (EMU) and high-level action units (AU). Likewise, object-to-object interactions are decomposed into a hierarchy of low-level elementary reaction units (ERU) and high-level interaction units (IU). We then propose an algorithm for temporal segmentation of video objects into EMUs, whose dominant motion can be described by a single representative parametric model. The algorithm also computes a representative (dominant) affine model for each EMU. We also provide algorithms for identification of ERUs and for classification of the type of ERUs. Experimental results demonstrate that segmenting the life-span of video objects into EMUs and ERUs facilitates the generation of high-level visual summaries for fast browsing and navigation. At present, the formation of high-level action and interaction units is done interactively. We also provide a set of query-by-example results for low-level EMU retrieval from a database based on similarity of the representative dominant affine models.
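
Each EMU is summarized by a representative affine model. The sketch below shows a generic least-squares fit of a six-parameter affine model to point correspondences; it is not the paper's dominant-motion estimation procedure.

```python
# A generic sketch of fitting a 6-parameter affine model to an object's
# point motion by least squares; such a representative model is what each
# EMU is summarized with. Standard least squares, not the paper's
# dominant-motion estimator.
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Fit x' = a1*x + a2*y + a3, y' = a4*x + a5*y + a6 to point pairs."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)
    A[0::2, 0:2] = src   # x-equation rows: a1*x + a2*y + a3
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src   # y-equation rows: a4*x + a5*y + a6
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params  # [a1, a2, a3, a4, a5, a6]

# Synthetic check: points translated by (5, -3) should recover that model.
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dst = src + np.array([5.0, -3.0])
print(np.round(fit_affine(src, dst), 3))  # ~ [1, 0, 5, 0, 1, -3]
```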


visual communications and image processing | 2002

Framework for tracking and analysis of soccer video

Ahmet Ekin; A. Murat Tekalp

In this paper, we present a complete framework for automatic analysis of soccer video by using domain-specific information. In the proposed framework, following shot boundary detection, soccer shots are classified into three classes using the ratio of grass-colored pixels in a frame, and the size and number of soccer objects detected in a shot. These classes are long shots, in-field medium shots, and others, such as out-of-field or close-up shots. The long shots and in-field medium shots are further processed to analyze their semantic content. We observe that different low-level processing algorithms may be required to process different shot classes. For example, we introduce different tracking algorithms for the long shots and in-field medium shots. Furthermore, frame registration onto a reference field model is not usually applicable to in-field medium shots, because the field lines may not be visible. The proposed framework enables development of more effective low-level processing algorithms for high-level scene understanding, which perform nearly in real time. The results show the increased accuracy and efficiency of the proposed methods.


computer vision and pattern recognition | 2003

Generic Event Detection in Sports Video using Cinematic Features

Ahmet Ekin; A. Murat Tekalp

This paper presents real-time, or near real-time, probabilistic event detection methods for broadcast sports video using cinematic features, such as shot classes and slow-motion replays. Novel algorithms have been developed for probabilistic detection of soccer goal events and basketball play-break events in a generic framework. The proposed framework includes generic algorithms for automatic dominant (field) color region detection and shot boundary detection, and domain-specific shot classification algorithms for soccer and basketball. Finally, the detected goal events in soccer and play events in basketball are employed to generate summaries of long games. The efficiency and effectiveness of the proposed system and the algorithms have been shown over more than 13 hours of sports video captured by the broadcasters from different regions around the world.
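
The sketch below is a toy illustration of scoring a candidate segment from cinematic cues such as slow-motion replays and close-up shots. The cue set, weights, and logistic combination are assumptions; the paper defines its own probabilistic model.

```python
# A toy sketch of scoring a candidate break for a goal from cinematic cues
# (slow-motion replay, close-up shots, break duration). Cue set, weights,
# and the logistic combination are illustrative assumptions.
import math

def goal_probability(has_slow_motion: bool, num_close_ups: int, break_duration_sec: float) -> float:
    """Map cinematic cues of a break segment to a goal probability in [0, 1]."""
    # Assumed weights: replays and long, close-up-heavy breaks raise the score.
    score = (
        -3.0
        + 2.5 * float(has_slow_motion)
        + 0.6 * min(num_close_ups, 5)
        + 0.05 * min(break_duration_sec, 60.0)
    )
    return 1.0 / (1.0 + math.exp(-score))  # logistic squashing

print(round(goal_probability(True, 4, 45.0), 2))   # strong cues -> high probability
print(round(goal_probability(False, 1, 8.0), 2))   # weak cues  -> low probability
```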


Storage and Retrieval for Image and Video Databases | 2001

Object-based motion description: from low-level features to semantics

Ahmet Ekin; A. Murat Tekalp; Rajiv Mehrotra

We present a generic model to describe image and video content by a combination of semantic entities and low-level features for semantically meaningful and fast retrieval. The proposed model includes semantic entities such as Object and Event, and an Actors entity to express relations between the two. The use of the Actors entity increases the efficiency of certain types of search, while the use of semantic and linguistic roles increases the expression capability of the model. The model also contains links to high-level media segments such as actions and interactions, and low-level media segments such as elementary motion and reaction units, as well as low-level features such as motion parameters and trajectories. Based on this model, we propose image and video retrieval combining semantic and low-level information. The retrieval performance of our system is tested by using query-by-annotation, query-by-example, query-by-sketch, and a combination of them.


international conference on image processing | 2000

Parametric description of object motion using EMUs

Ahmet Ekin; R. Mehrotra; A.M. Tekalp

Large scale deployment of digital multimedia applications requires effective and efficient image and video representations for indexing. We present a hierarchical representation of video objects based on their motion. The proposed representation enables both low and semantic level description of object motion. At the low level, we consider a parametric description of the dominant motion of video objects, and define elementary motion units (EMU) where the dominant motion of the object is coherent and does not undergo considerable change. A group of EMUs forms an action unit which carries semantic information. Interactions between objects are also considered by calculating the relative motion between objects in an object-based framework. Experimental results for retrieval of EMUs based on similarity of dominant object motion are demonstrated.
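
The sketch below illustrates query-by-example retrieval of EMUs by similarity of their representative affine motion parameters. The distance measure and the toy database are assumptions for illustration.

```python
# A small sketch of query-by-example EMU retrieval: rank database EMUs by
# distance between their representative affine parameters and the query's.
# The distance choice and the toy database are illustrative assumptions.
import numpy as np

def rank_emus(query_params: np.ndarray, database: dict, top_k: int = 3):
    """Return the top_k EMU ids closest to the query's affine parameters."""
    scored = [
        (emu_id, float(np.linalg.norm(np.asarray(params) - query_params)))
        for emu_id, params in database.items()
    ]
    return sorted(scored, key=lambda item: item[1])[:top_k]

# Toy database of EMUs, each summarized by 6 affine parameters.
database = {
    "emu_pan_right": [1.0, 0.0, 8.0, 0.0, 1.0, 0.0],
    "emu_zoom_in":   [1.2, 0.0, 0.0, 0.0, 1.2, 0.0],
    "emu_static":    [1.0, 0.0, 0.0, 0.0, 1.0, 0.0],
}
query = np.array([1.0, 0.0, 7.0, 0.0, 1.0, 0.5])  # roughly a rightward pan
print(rank_emus(query, database))
```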

Collaboration


Dive into Ahmet Ekin's collaborations.

Top Co-Authors

A.M. Tekalp
University of Rochester

Yue Fu
University of Rochester

Kôiti Hasida
National Institute of Advanced Industrial Science and Technology