Ihab Al Kabary
University of Basel
Publications
Featured research published by Ihab Al Kabary.
Conference on Information and Knowledge Management | 2013
Ihab Al Kabary; Heiko Schuldt
We present SportSense, a system for interactive sports video retrieval using sketch-based motion queries. SportSense is based on sports videos of games, enriched with an overlay of metadata that incorporates spatio-temporal information about various events and movements. We show how sketch-based motion queries are formulated and executed, and how various intuitive input interfaces are used to acquire the query object. The system uses spatio-temporal index structures to achieve interactive response times.
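As an illustration of the kind of query processing this abstract describes, the following is a minimal sketch (not SportSense's actual implementation) of matching a sketched motion path against spatio-temporal event data using a simple grid index; the event format, grid cell size, and matching radius are assumptions.

```python
# Illustrative only: matching a sketched motion path against spatio-temporal
# event data with a simple grid index. The event format, cell size and
# matching radius are assumptions, not SportSense's API.
from collections import defaultdict
from math import hypot

CELL = 5.0  # grid cell size in field coordinates (assumed)

def build_grid_index(events):
    """events: iterable of (x, y, t, payload) tuples."""
    grid = defaultdict(list)
    for x, y, t, payload in events:
        grid[(int(x // CELL), int(y // CELL))].append((x, y, t, payload))
    return grid

def candidates_near(grid, px, py, radius):
    """Events whose position lies within `radius` of the point (px, py)."""
    r_cells = int(radius // CELL) + 1
    cx, cy = int(px // CELL), int(py // CELL)
    hits = []
    for dx in range(-r_cells, r_cells + 1):
        for dy in range(-r_cells, r_cells + 1):
            for x, y, t, payload in grid.get((cx + dx, cy + dy), ()):
                if hypot(x - px, y - py) <= radius:
                    hits.append((t, payload))
    return hits

def match_sketch(grid, sketch_points, radius=3.0):
    """Greedy, time-ordered matching of a sketched polyline against events."""
    t_prev, trail = float("-inf"), []
    for px, py in sketch_points:
        later = [(t, p) for t, p in candidates_near(grid, px, py, radius) if t > t_prev]
        if not later:
            return None  # no recorded event continues the sketched motion
        t_prev, payload = min(later, key=lambda e: e[0])  # earliest continuation
        trail.append(payload)
    return trail
```

A production system would replace the flat grid with a dedicated spatio-temporal index structure, which is what allows interactive response times on large game datasets.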
International Congress on Big Data | 2014
Ivan Giangreco; Ihab Al Kabary; Heiko Schuldt
The past decade has seen the rapid proliferation of low-priced devices for recording image, audio and video data in nearly unlimited quantity. Multimedia is Big Data, not only in terms of its volume, but also with respect to its heterogeneous nature, which also includes the variety of queries to be executed. Current approaches for searching big multimedia collections mainly rely on keywords. However, manually annotating every single object in a large collection is not feasible. Therefore, content-based multimedia retrieval, which uses sample objects as query input, is increasingly becoming an important requirement for dealing with the data deluge. In image databases, for instance, effective methods exploit exemplary images or hand-drawn sketches as query input. In this paper, we introduce ADAM, a novel multimedia retrieval system that is tailored to large collections and that supports both Boolean retrieval for structured data and similarity-based retrieval for feature vectors extracted from the multimedia objects. For efficient query processing over such big multimedia data, ADAM allows the indexed collection to be distributed across multiple shards and performs queries in a MapReduce style. Furthermore, it supports a signature-based indexing strategy for similarity search that substantially reduces query time. The efficiency of ADAM has been successfully evaluated in a content-based image retrieval application on the basis of 14 million images from the ImageNet collection.
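To make the signature idea concrete, here is a hedged sketch of signature-based filtering for similarity search: feature vectors are compressed into short bit signatures that are cheap to scan, and only the surviving candidates are re-ranked with exact distances. The random-hyperplane signatures and all names below are illustrative assumptions, not ADAM's actual scheme.

```python
# Illustrative only: signature-based filtering for similarity search.
# Random-hyperplane bit signatures are an assumption; ADAM's actual
# signature scheme may differ.
import numpy as np

rng = np.random.default_rng(0)

def make_signatures(vectors, n_bits=64):
    """Compress d-dimensional feature vectors into n_bits-bit signatures."""
    d = vectors.shape[1]
    planes = rng.standard_normal((n_bits, d))
    return (vectors @ planes.T > 0), planes

def query(vectors, signatures, planes, q, k=10, candidate_factor=10):
    """Hamming-filter on signatures, then exact re-rank of the survivors."""
    q_sig = (planes @ q > 0)
    hamming = (signatures != q_sig).sum(axis=1)          # cheap scan over signatures
    cand = np.argsort(hamming)[: k * candidate_factor]   # small candidate set
    exact = np.linalg.norm(vectors[cand] - q, axis=1)    # exact distances only here
    return cand[np.argsort(exact)[:k]]

# Usage sketch: random 128-d features and one query vector.
feats = rng.standard_normal((100_000, 128)).astype(np.float32)
sigs, planes = make_signatures(feats)
top10 = query(feats, sigs, planes, feats[42])
```

The coarse filter is what makes the scan cheap; sharding the collection and merging per-shard results, MapReduce style, then scales the same idea out horizontally.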
European Conference on Information Retrieval | 2012
Roman Kreuzer; Michael Springmann; Ihab Al Kabary; Heiko Schuldt
A major challenge when dealing with large collections of digital images is to find relevant objects, especially when no metadata on the objects is available. Content-based image retrieval (CBIR) addresses this problem but usually lacks query images that are good enough to express the user's information need. Therefore, in Query-by-Sketch, CBIR has been considered with user-provided sketches as query objects, but so far this has suffered from the limitations of existing user interfaces. In this paper, we present a novel user interface for Query-by-Sketch that exploits emerging interactive paper and digital pen technology. Users can draw sketches on paper in a user-friendly way, and search can be started interactively from the paper front-end thanks to a streaming interface from the digital pen to the underlying CBIR system. We present the implementation of the interactive paper/digital pen interface on top of QbS, our system for CBIR using sketches, and we present in detail the evaluation of the system on the basis of the MIRFLICKR-25000 image collection.
European Conference on Information Retrieval | 2012
Ivan Giangreco; Michael Springmann; Ihab Al Kabary; Heiko Schuldt
This demo interactively shows a system, running on Tablet PCs or graphics tablets, that provides query-by-sketch image retrieval using color sketches. The system uses Angular Radial Partitioning (ARP) for the edge information in the sketches and color moments in the CIELAB space, combined with a distance metric that is robust to the color deviations that typically occur in user-generated color sketches.
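For readers unfamiliar with the two descriptors, the following sketch shows Angular Radial Partitioning over an edge map and simple CIELAB color moments, roughly as they are commonly described in the literature; the parameter values and the use of OpenCV's Canny detector are assumptions, not the demo's code.

```python
# Illustrative ARP edge features and CIELAB color moments.
# Sector counts and Canny thresholds are assumed values.
import cv2
import numpy as np

def arp_features(gray, n_angular=8, n_radial=4):
    """Count edge pixels per angular/radial sector around the image centre."""
    edges = cv2.Canny(gray, 100, 200)          # gray: 8-bit grayscale image
    h, w = edges.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.nonzero(edges)
    dy, dx = ys - cy, xs - cx
    r = np.hypot(dx, dy) / (np.hypot(cx, cy) + 1e-9)    # normalised radius in [0, 1]
    theta = (np.arctan2(dy, dx) + np.pi) / (2 * np.pi)  # angle mapped to [0, 1]
    a_bin = np.minimum((theta * n_angular).astype(int), n_angular - 1)
    r_bin = np.minimum((r * n_radial).astype(int), n_radial - 1)
    hist = np.zeros((n_angular, n_radial))
    np.add.at(hist, (a_bin, r_bin), 1)
    return hist.ravel() / max(len(xs), 1)               # normalised feature vector

def lab_color_moments(bgr):
    """Mean and standard deviation per CIELAB channel (6-d colour descriptor)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)
    return np.concatenate([lab.mean(axis=0), lab.std(axis=0)])
```

Comparing two images then amounts to combining distances over these two compact vectors, which keeps matching against user sketches cheap.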
Conference on Information and Knowledge Management | 2010
Michael Springmann; Ihab Al Kabary; Heiko Schuldt
With the ever-growing size of digital image collections, known-image search is gaining more and more importance. Especially in collections where individual objects are not tagged with metadata describing their content, content-based image retrieval (CBIR) is a promising approach, but it usually suffers from the unavailability of query images that are good enough to express the user's information need. In this paper, we present the QbS system that provides CBIR based on user-drawn sketches. The QbS system combines angular radial partitioning for the extraction of features in the user-provided sketch, taking into account the spatial distribution of edges, with the image distortion model. This combination offers several highly relevant invariances that allow the query sketch to slightly deviate from the searched image in terms of rotation, translation, relative size, and/or unknown objects in the background. To illustrate the benefits of the approach, we present search results from the evaluation of the QbS system on the basis of the MIRFLICKR collection with 25,000 objects and compare the retrieval results of pure metadata-driven approaches, pure content-based retrieval using different sketches, and combinations thereof.
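The image distortion model referenced here is, roughly, a deformation-tolerant pixel distance: each query pixel may match the best-fitting pixel within a small warp window of the reference image, so small local shifts in the sketch do not dominate the distance. A much simplified, illustrative version follows; the window size and the per-pixel (rather than per-patch) comparison are assumptions.

```python
# Simplified image distortion model (IDM) distance between two equally
# sized edge maps; the warp window w is an assumed parameter and the
# per-pixel comparison is a simplification of the usual per-patch variant.
import numpy as np

def idm_distance(query, ref, w=2):
    """Sum over query pixels of the best match within a (2w+1)x(2w+1) window."""
    q = query.astype(np.float32)
    r = ref.astype(np.float32)
    h, wd = q.shape
    total = 0.0
    for y in range(h):
        for x in range(wd):
            y0, y1 = max(0, y - w), min(h, y + w + 1)
            x0, x1 = max(0, x - w), min(wd, x + w + 1)
            total += np.min((r[y0:y1, x0:x1] - q[y, x]) ** 2)
    return total
```

In practice the comparison is done on downsampled images and per patch to keep it tractable; the sketch above only conveys why the distance tolerates small translations.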
Adaptive Multimedia Retrieval | 2012
Ihab Al Kabary; Heiko Schuldt
Query-by-Sketch image retrieval, unlike content-based image retrieval following a Query-by-Example approach, uses human-drawn binary sketches as query objects, thereby eliminating the need for an initial query image close enough to the user's information need. This is particularly important when the user is looking for a known image, i.e., an image that has been seen before. So far, Query-by-Sketch has suffered from two main limiting factors. First, users tend to focus on the objects' main contours when drawing binary sketches, while ignoring any texture or edges inside the object(s) and in the background. Second, users have only a limited ability to sketch the known item being searched for in the correct position, scale and/or orientation. Thus, effective Query-by-Sketch systems need to allow users to concentrate on the main contours of the object(s) they are searching for and, at the same time, tolerate such inaccuracies. In this paper, we present SKETCHify, an adaptive algorithm that is able to identify and isolate the prominent objects within an image. This is achieved by applying heuristics to detect the best edge map thresholds for each image by monitoring the intensity, spatial distribution and sudden spike increase of edges, with the intention of generating edge maps that are as close as possible to human-drawn sketches. We have integrated SKETCHify into QbS, our system for Query-by-Sketch image retrieval, and the results show a significant improvement in both retrieval rank and retrieval time when exploiting the prominent edges for retrieval, compared to Query-by-Sketch relying on normal edge maps. Depending on the quality of the query sketch, SKETCHify even provides invariance with regard to position, scale and rotation in the retrieval process. For the evaluation, we have used images from the MIRFLICKR-25K dataset and a free clip art collection of similar size.
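The threshold heuristics can be pictured as a sweep over edge-detector thresholds that keeps the edge map whose edge density stays within a plausible band and stops when the density suddenly spikes. The density band, step size, spike factor, and use of Canny below are assumed values for illustration, not SKETCHify's published parameters.

```python
# Illustrative threshold sweep in the spirit of SKETCHify: keep the edge
# map whose edge density lies in a target band and stop on a sudden spike.
# Band limits, step size and spike factor are assumptions.
import cv2
import numpy as np

def prominent_edge_map(gray, lo_density=0.02, hi_density=0.08, step=10):
    best, prev_density = None, None
    for t in range(250, 20, -step):                  # sweep from strict to lenient
        edges = cv2.Canny(gray, t // 2, t)
        density = np.count_nonzero(edges) / edges.size
        if prev_density and density > 3 * prev_density:
            break                                    # sudden spike: likely texture or noise
        if lo_density <= density <= hi_density:
            best = edges                             # densest map still inside the band
        prev_density = density
    return best                                      # may be None if no threshold fits
```

The resulting sketch-like edge map is then used in place of the raw edge map during retrieval, which is where the reported gains in rank and time come from.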
International ACM SIGIR Conference on Research and Development in Information Retrieval | 2014
Ihab Al Kabary; Heiko Schuldt
Searching for scenes in team sport videos is a task that recurs very often in game analysis and other related activities performed by coaches. In most cases, queries are formulated on the basis of specific motion characteristics the user remembers from the video. Providing sketching interfaces for graphically specifying query input is thus a very natural user interaction for a retrieval application. However, the quality of the query (the sketch) heavily depends on the memory of the user and her ability to accurately formulate the intended search query by transforming a 3D memory of the known item(s) into a 2D sketch query. In this paper, we present an auto-suggest search feature that harnesses the spatiotemporal data of team sport videos to suggest potential directions containing relevant data during the formulation of a sketch-based motion query. Users can intuitively select the direction of the desired motion query on the fly using the displayed visual clues, thus reducing the need to rely heavily on memory to formulate the query. At the same time, this significantly enhances the accuracy of the results and the speed at which they appear. A first evaluation has shown the effectiveness and efficiency of our approach.
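A possible reading of the auto-suggest mechanism, sketched under assumptions about the data layout: given the endpoint of a partially drawn motion query, the continuation directions of recorded trajectories passing nearby are binned into a histogram that can drive the on-screen suggestions.

```python
# Illustrative auto-suggest: histogram the continuation directions of recorded
# trajectories near the endpoint of a partially drawn motion query.
# Data layout, radius and bin count are assumptions.
import numpy as np

def suggest_directions(trajectories, endpoint, radius=5.0, n_bins=8):
    """trajectories: list of (N_i, 2) arrays of positions ordered in time."""
    counts = np.zeros(n_bins)
    ex, ey = endpoint
    for traj in trajectories:
        d = np.hypot(traj[:-1, 0] - ex, traj[:-1, 1] - ey)
        for i in np.nonzero(d <= radius)[0]:
            dx, dy = traj[i + 1] - traj[i]           # direction actually taken next
            angle = (np.arctan2(dy, dx) + 2 * np.pi) % (2 * np.pi)
            counts[int(angle / (2 * np.pi) * n_bins) % n_bins] += 1
    return counts / counts.sum() if counts.sum() else counts
```

Directions with non-zero mass are exactly those that lead to data, which is why selecting one of the displayed clues both narrows the search and speeds up result delivery.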
International ACM SIGIR Conference on Research and Development in Information Retrieval | 2012
Ihab Al Kabary; Heiko Schuldt
We present a novel and innovative user interface for query-by-sketch image retrieval that exploits emerging interactive paper and digital pen technology. Users can draw sketches with a digital pen on interactive paper in a user-friendly way. The pen is able to capture the stroke vectors and to interactively stream them to the underlying content-based image retrieval (CBIR) system via the pen's Bluetooth interface. We present the integration of interactive paper/digital pen technology with QbS, our CBIR system tailored to Query-by-Sketch, and we demonstrate the use of the paper and pen interface together with QbS for three different collections: MIRFLICKR-25K, a cartoon collection, and a collection of medieval paper watermarks.
International ACM SIGIR Conference on Research and Development in Information Retrieval | 2014
Ivan Giangreco; Ihab Al Kabary; Heiko Schuldt
The tremendous increase of multimedia data in recent years has heightened the need for systems that not only allow searching with keywords but also support content-based retrieval in order to effectively and efficiently query large collections. In this paper, we introduce ADAM, a system that is able to store and retrieve multimedia objects by seamlessly combining aspects from databases and information retrieval. ADAM is able to work with both structured and unstructured data and to jointly provide Boolean retrieval and similarity search. To efficiently handle large volumes of data, it makes use of signature-based indexing and distributes the collection across multiple shards that are queried in a MapReduce style. We present ADAM in the setting of a sketch-based image retrieval application using the ImageNet collection containing 14 million images.
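As a hedged illustration of combining Boolean retrieval with similarity search, the following toy sketch filters on structured metadata first and then ranks the survivors by feature-vector distance; the schema, predicate, and function names are assumptions, not ADAM's query interface.

```python
# Illustrative combination of Boolean filtering over structured metadata
# with similarity ranking over feature vectors; names are assumptions.
import numpy as np

def boolean_then_similarity(records, features, predicate, q, k=5):
    """records: list of dicts; features: (N, d) array aligned with records."""
    idx = [i for i, r in enumerate(records) if predicate(r)]   # Boolean retrieval
    if not idx:
        return []
    dists = np.linalg.norm(features[idx] - q, axis=1)          # similarity search
    order = np.argsort(dists)[:k]
    return [(records[idx[i]], float(dists[i])) for i in order]

# Usage sketch: restrict to one category, then rank by feature distance.
# results = boolean_then_similarity(recs, feats, lambda r: r["label"] == "dog", q_vec)
```

On a sharded collection, each shard would evaluate such a query locally and the per-shard top-k lists would be merged, MapReduce style.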
European Conference on Information Retrieval | 2014
Ihab Al Kabary; Heiko Schuldt
In team sports, the analysis of a team's tactical behavior is becoming increasingly important. While this is still mostly based on the manual selection of video sequences from games, coaches and analysts increasingly demand more automated solutions to search for relevant sequences of videos, and to support this search by means of easy-to-use interfaces. In this paper, we present a novel intuitive interface for specifying sketch-based motion queries in sport videos using hand gestures. We have built the interface on top of SportSense, a system for interactive sports video retrieval. SportSense exploits spatiotemporal information incorporating various events within sport games, which is used as metadata for the actual sports videos. The interface has been designed to enable users to fully control the system and to facilitate acquisition of the query object needed to perform both spatial and spatiotemporal motion queries using intuitive hand gestures.