
Publication


Featured research published by Ivan Giangreco.


Conference on Multimedia Modeling | 2015

IMOTION — A Content-Based Video Retrieval Engine

Luca Rossetto; Ivan Giangreco; Heiko Schuldt; Stéphane Dupont; Omar Seddati; T. Metin Sezgin; Yusuf Sahillioglu

This paper introduces the IMOTION system, a sketch-based video retrieval engine supporting multiple query paradigms. For vector space retrieval, the IMOTION system exploits a large variety of low-level image and video features, as well as high-level spatial and temporal features that can all be jointly used in any combination. In addition, it supports dedicated motion features to allow for the specification of motion within a video sequence. For query specification, the IMOTION system supports query-by-sketch interactions (users provide sketches of video frames), motion queries (users specify motion across frames via partial flow fields), query-by-example (based on images) and any combination of these, and provides support for relevance feedback.
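
The paper itself contains no code, but the joint use of arbitrary feature combinations in a common vector space can be pictured with a small weighted late-fusion sketch in Python (all names, weights, and distances below are illustrative assumptions, not the IMOTION implementation):

    import numpy as np

    def fuse(distances, weights):
        """Weighted late fusion of normalized per-feature distances.

        distances: dict feature -> (n_items,) array, each scaled to [0, 1].
        weights:   dict feature -> float; a zero weight disables a feature,
                   so any combination of features can be used jointly.
        """
        total = sum(weights.get(f, 0.0) for f in distances) or 1.0
        score = np.zeros(len(next(iter(distances.values()))))
        for feat, d in distances.items():
            score += weights.get(feat, 0.0) * d
        return score / total            # lower score = better match

    # Hypothetical distances of 5 video shots under three feature modules.
    d = {"color":  np.array([.2, .8, .5, .1, .9]),
         "edge":   np.array([.3, .7, .4, .2, .8]),
         "motion": np.array([.5, .6, .3, .4, .7])}
    print(np.argsort(fuse(d, {"color": 1.0, "edge": 1.0, "motion": 0.5})))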


International Symposium on Multimedia | 2014

Cineast: A Multi-feature Sketch-Based Video Retrieval Engine

Luca Rossetto; Ivan Giangreco; Heiko Schuldt

Despite the tremendous importance and availability of large video collections, support for video retrieval is still rather limited and mostly tailored to very concrete use cases and collections. In image retrieval, for instance, standard keyword search on the basis of manual annotations and content-based image retrieval, based on the similarity to query image(s), are well-established search paradigms, both in academic prototypes and in commercial search engines. Recently, with the proliferation of sketch-enabled devices, sketch-based retrieval has also received considerable attention. The latter two approaches are based on intrinsic image features and rely on the representation of the objects of a collection in the feature space. In this paper, we present Cineast, a multi-feature sketch-based video retrieval engine. The main objective of Cineast is to enable a smooth transition from content-based image retrieval to content-based video retrieval and to support powerful search paradigms in large video collections on the basis of user-provided sketches as query input. Cineast is capable of retrieving video sequences based on edge or color sketches as query input and even supports one or multiple exemplary video sequences as query input. Moreover, Cineast supports a novel approach to sketch-based motion queries by allowing a user to specify the motion of objects within a video sequence by means of (partial) flow fields, also specified via sketches. Using an emergent combination of multiple different features, Cineast is able to universally retrieve video (sequences) without the need for prior knowledge or semantic understanding. An evaluation with a general-purpose video collection has shown the effectiveness and the efficiency of the Cineast approach.
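
As a rough illustration of the (partial) flow-field motion queries, a sparse user-sketched flow field can be compared against precomputed optical-flow fields of the collection, scoring only the grid cells the user actually sketched (a minimal sketch under an assumed coarse grid, not the actual Cineast code):

    import numpy as np

    def motion_similarity(sketch_flow, video_flow):
        """Compare a partial, sketched flow field to a video's flow field.

        Both fields are (H, W, 2) arrays of 2D motion vectors on a coarse
        grid; cells the user did not sketch are all-zero and are ignored.
        Returns the mean cosine similarity over the sketched cells.
        """
        mask = np.linalg.norm(sketch_flow, axis=-1) > 0   # sketched cells only
        if not mask.any():
            return 0.0
        s, v = sketch_flow[mask], video_flow[mask]
        num = np.sum(s * v, axis=-1)
        den = np.linalg.norm(s, axis=-1) * np.linalg.norm(v, axis=-1) + 1e-9
        return float(np.mean(num / den))   # 1.0 = same direction everywhere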


International Congress on Big Data | 2014

ADAM - A Database and Information Retrieval System for Big Multimedia Collections

Ivan Giangreco; Ihab Al Kabary; Heiko Schuldt

The past decade has seen the rapid proliferation of low-priced devices for recording image, audio and video data in nearly unlimited quantity. Multimedia is Big Data, not only in terms of its volume, but also with respect to its heterogeneous nature, which also includes the variety of the queries to be executed. Current approaches for searching big multimedia collections mainly rely on keywords. However, manually annotating every single object in a large collection is not feasible. Therefore, content-based multimedia retrieval, using sample objects as query input, is increasingly becoming an important requirement for dealing with the data deluge. In image databases, for instance, effective methods exploit the use of exemplary images or hand-drawn sketches as query input. In this paper, we introduce ADAM, a novel multimedia retrieval system that is tailored to large collections and that is able to support both Boolean retrieval for structured data and similarity-based retrieval for feature vectors extracted from the multimedia objects. For efficient query processing on such big multimedia data, ADAM allows the distribution of the indexed collection to multiple shards and performs queries in a MapReduce style. Furthermore, it supports a signature-based indexing strategy for similarity search that heavily reduces the query time. The efficiency of ADAM has been successfully evaluated in a content-based image retrieval application on the basis of 14 million images from the ImageNet collection.
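
The abstract does not spell out the signature scheme, so the sketch below illustrates the general filter-and-refine idea behind signature-based indexing with random-hyperplane bit signatures (an assumption for illustration, not ADAM's actual scheme): cheap Hamming-distance filtering prunes the collection before exact distances are computed on the surviving candidates:

    import numpy as np

    rng = np.random.default_rng(0)
    DIM, BITS = 128, 64
    planes = rng.standard_normal((BITS, DIM))   # random hyperplanes

    def signature(v):
        """64-bit binary signature of a feature vector (sign of projections)."""
        return planes @ v > 0

    def search(query, vectors, signatures, k=10, candidates=100):
        """Filter by Hamming distance on signatures, refine by exact L2."""
        q_sig = signature(query)
        hamming = np.count_nonzero(signatures != q_sig, axis=1)
        cand = np.argsort(hamming)[:candidates]            # cheap filter step
        exact = np.linalg.norm(vectors[cand] - query, axis=1)
        return cand[np.argsort(exact)[:k]]                 # refine survivors

    vectors = rng.standard_normal((10_000, DIM))
    signatures = np.stack([signature(v) for v in vectors])
    print(search(rng.standard_normal(DIM), vectors, signatures))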


Proceedings of the 2014 International ACM Workshop on Crowdsourcing for Multimedia | 2014

Crowd-based Semantic Event Detection and Video Annotation for Sports Videos

Fabio Sulser; Ivan Giangreco; Heiko Schuldt

Recent developments in sports analytics have heightened the interest in collecting data on the behavior of individuals and of the entire team in sports events. Rather than using dedicated sensors for recording the data, the detection of semantic events reflecting a team's behavior and the subsequent annotation of video data are nowadays mostly performed by paid experts. In this paper, we present an approach to generating such annotations by leveraging the wisdom of the crowd. We present the CrowdSport application, which collects data for soccer games: it presents crowd workers with short video snippets of soccer matches and allows them to annotate these snippets with event information. Finally, the various annotations collected from the crowd are automatically disambiguated and integrated into a coherent data set. To improve the quality of the data entered, we have implemented a rating system that assigns each worker a trustworthiness score denoting the confidence towards newly entered data. Using the DBSCAN clustering algorithm and the confidence score, the integration ensures that the generated event labels are of high quality, despite the heterogeneity of the participating workers. These annotations finally serve as a basis for a video retrieval system that allows users to search for video sequences on the basis of a graphical specification of team behavior or the motion of individual players. Our evaluations of the crowd-based semantic event detection and video annotation using the Microworkers platform have shown the effectiveness of the approach and have led to results that are in most cases close to the ground truth and can successfully be used for various retrieval tasks.
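
The integration step, clustering redundant crowd annotations with DBSCAN and weighting votes by worker trust, can be approximated as follows (field names, the eps value, and the voting rule are illustrative assumptions; scikit-learn supplies DBSCAN):

    import numpy as np
    from collections import defaultdict
    from sklearn.cluster import DBSCAN

    # Hypothetical crowd annotations: (timestamp in s, event label, worker trust).
    annotations = [
        (12.1, "goal", 0.9), (12.4, "goal", 0.7), (12.3, "foul", 0.2),
        (55.0, "corner", 0.8), (55.6, "corner", 0.6),
    ]
    times = np.array([[a[0]] for a in annotations])

    # Annotations within 1 s of each other are assumed to describe one event.
    labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(times)

    for cluster in set(labels) - {-1}:       # -1 = noise, i.e. unsupported votes
        votes, ts = defaultdict(float), []
        for (t, event, trust), c in zip(annotations, labels):
            if c == cluster:
                votes[event] += trust        # trust-weighted voting
                ts.append(t)
        event = max(votes, key=votes.get)
        print(f"event '{event}' at ~{np.mean(ts):.1f}s")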


Conference on Multimedia Modeling | 2017

Enhanced Retrieval and Browsing in the IMOTION System

Luca Rossetto; Ivan Giangreco; Claudiu Tănase; Heiko Schuldt; Stéphane Dupont; Omar Seddati

This paper presents the IMOTION system in its third version. While still focusing on sketch-based retrieval, we improved upon the semantic retrieval capabilities introduced in the previous version by adding more detectors and improving the interface for semantic query specification. Compared to the previous year's system, we increase the role of features obtained from Deep Neural Networks in three areas: semantic class labels for more entry-level concepts, hidden-layer activation vectors for query-by-example, and a 2D semantic similarity results display. The new graph-based result navigation interface further enriches the system's browsing capabilities. The updated database storage system ADAMpro, designed from the ground up for large-scale multimedia applications, ensures scalability to steadily growing collections.
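
Hidden-layer activation vectors for query-by-example can be extracted from any pretrained CNN; the sketch below uses torchvision's ResNet-18 penultimate layer purely as a stand-in, since the networks actually used in IMOTION are not named here:

    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = torch.nn.Identity()   # drop the classifier: 512-d activations
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256), transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def embed(path):
        """Hidden-layer activation vector of one image, usable for kNN search."""
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            return model(x).squeeze(0)   # shape: (512,)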


ACM Multimedia | 2016

vitrivr: A Flexible Retrieval Stack Supporting Multiple Query Modes for Searching in Multimedia Collections

Luca Rossetto; Ivan Giangreco; Claudiu Tanase; Heiko Schuldt

vitrivr is an open source full-stack content-based multimedia retrieval system with a focus on video. Unlike the majority of existing multimedia search solutions, vitrivr is not limited to searching in metadata, but also provides content-based search and thus offers a large variety of query modes which can be seamlessly combined: Query by sketch, which allows the user to draw a sketch of a query image and/or sketch motion paths, Query by example, keyword search, and relevance feedback. The vitrivr architecture is self-contained and addresses all aspects of multimedia search, from offline feature extraction and database management to frontend user interaction. The system is composed of three modules: a web-based frontend which allows the user to input the query (e.g., add a sketch) and browse the retrieved results (vitrivr-ui), a database system designed for interactive search in large-scale multimedia collections (ADAM), and a retrieval engine that handles feature extraction and feature-based retrieval (Cineast). The vitrivr source is available on GitHub under the MIT open source (and similar) licenses and is currently undergoing several upgrades as part of the Google Summer of Code 2016.
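
The three-module split can be pictured as a toy pipeline in which the retrieval engine extracts features at ingest time and answers UI queries via the database's nearest-neighbor search (all interfaces below are hypothetical stand-ins for vitrivr-ui, Cineast, and ADAM, not their real APIs):

    import numpy as np

    class Database:                                  # stand-in for ADAM
        def __init__(self):
            self.ids, self.vecs = [], []
        def insert(self, vid, vec):
            self.ids.append(vid); self.vecs.append(vec)
        def knn(self, q, k):
            d = np.linalg.norm(np.array(self.vecs) - q, axis=1)
            return [self.ids[i] for i in np.argsort(d)[:k]]

    class Engine:                                    # stand-in for Cineast
        def __init__(self, db):
            self.db = db
        def extract(self, frame):                    # toy mean-color feature
            return frame.mean(axis=(0, 1))
        def ingest(self, vid, frame):
            self.db.insert(vid, self.extract(frame))
        def query_by_sketch(self, sketch, k=5):
            return self.db.knn(self.extract(sketch), k)

    db = Database(); engine = Engine(db)             # offline extraction...
    for i in range(10):
        engine.ingest(f"shot-{i}", np.random.rand(32, 32, 3))
    print(engine.query_by_sketch(np.random.rand(32, 32, 3), k=3))  # ...online query from the UI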


European Conference on Information Retrieval | 2012

A user interface for query-by-sketch based image retrieval with color sketches

Ivan Giangreco; Michael Springmann; Ihab Al Kabary; Heiko Schuldt

This demo will interactively show a system that exploits a novel user interface, running on Tablet PCs or graphics tablets, that provides query-by-sketch based image retrieval using color sketches. The system uses Angular Radial Partitioning (ARP) for the edge information in the sketches and color moments in the CIELAB space, combined with a distance metric that is robust to the deviations in color that typically have to be taken into account with user-generated color sketches.
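
For illustration, the sketch below computes simple mean/standard-deviation color moments in CIELAB over an angular-radial partitioning of the image; the real system applies ARP to edge information and uses a distance metric tuned to sketch-typical color deviations, so treat the bin counts and the moment set as assumptions (scikit-image provides the RGB-to-CIELAB conversion):

    import numpy as np
    from skimage.color import rgb2lab

    def arp_color_moments(rgb, n_ang=8, n_rad=3):
        """Mean/std color moments in CIELAB per angular-radial (ARP) cell."""
        lab = rgb2lab(rgb)                        # (H, W, 3), rgb in [0, 1]
        h, w = lab.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        dy, dx = ys - h / 2, xs - w / 2
        ang = (np.arctan2(dy, dx) + np.pi) / (2 * np.pi)     # 0..1
        rad = np.hypot(dy, dx) / np.hypot(h / 2, w / 2)      # 0..1
        a_bin = np.minimum((ang * n_ang).astype(int), n_ang - 1)
        r_bin = np.minimum((rad * n_rad).astype(int), n_rad - 1)
        feats = []
        for a in range(n_ang):
            for r in range(n_rad):
                cell = lab[(a_bin == a) & (r_bin == r)]      # one cell's pixels
                if cell.size == 0:
                    feats.extend([0.0] * 6)
                else:
                    feats.extend(cell.mean(axis=0))
                    feats.extend(cell.std(axis=0))
        return np.array(feats)                    # n_ang * n_rad * 6 values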


Intelligent User Interfaces | 2016

Semantic Sketch-Based Video Retrieval with Autocompletion

Claudiu Tanase; Ivan Giangreco; Luca Rossetto; Heiko Schuldt; Omar Seddati; Stéphane Dupont; Ozan Can Altiok; T. Metin Sezgin

The IMOTION system is a content-based video search engine that provides fast and intuitive known-item search in large video collections. User interaction consists mainly of sketching, which the system recognizes in real time and uses to make suggestions based on both the visual appearance of the sketch (what the sketch looks like in terms of colors, edge distribution, etc.) and its semantic content (what object the user is sketching). The latter is enabled by a predictive sketch-based UI that identifies likely candidates for the sketched object via state-of-the-art sketch recognition techniques and offers on-screen completion suggestions. In this demo, we show how the sketch-based video retrieval of the IMOTION system is used in a collection of roughly 30,000 video shots. The system indexes collection data with over 30 visual features describing color, edge, motion, and semantic information. The resulting feature data is stored in ADAM, an efficient database system optimized for fast retrieval.
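
The on-screen completion suggestions can be pictured as a top-k readout over a sketch recognizer's concept scores (the recognizer itself is treated as a black box here, and the concept list and scores are made up):

    import numpy as np

    CONCEPTS = ["car", "cat", "castle", "cactus", "camera"]

    def suggest(logits, k=3):
        """Turn raw recognizer scores into ranked autocomplete suggestions."""
        p = np.exp(logits - logits.max())
        p /= p.sum()                             # softmax over concept scores
        top = np.argsort(p)[::-1][:k]
        return [(CONCEPTS[i], float(p[i])) for i in top]

    # Hypothetical scores produced while the user is still sketching:
    print(suggest(np.array([2.1, 0.3, 1.7, -0.5, 0.9])))
    # -> [('car', ...), ('castle', ...), ('camera', ...)]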


Conference on Multimedia Modeling | 2016

Searching in Video Collections Using Sketches and Sample Images – The Cineast System

Luca Rossetto; Ivan Giangreco; Silvan Heller; Claudiu Tănase; Heiko Schuldt

With the increasing omnipresence of video recording devices and the resulting abundance of digital video, finding a particular video sequence in ever-growing collections is more and more becoming a major challenge. Existing approaches to retrieve videos based on their content usually require prior knowledge about the origin and context of a particular video to work properly. Therefore, most state-of-the-art video platforms still rely on text-based retrieval techniques to find desired sequences. In this paper, we present Cineast, a content-based video retrieval engine which retrieves video sequences based on their visual content. It supports Query-by-Example as well as Query-by-Sketch by using a multitude of low-level visual features in parallel. Cineast has been evaluated on a collection of 200 videos from various genres with a combined length of nearly 20 hours.
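
Evaluating many low-level features "in parallel" can be sketched with a thread pool over independent feature modules; the module names and the averaging rule below are illustrative assumptions:

    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    # Two stand-in feature modules; each scores every item independently.
    def color_scores(query, items):
        return np.array([np.dot(query["color"], it["color"]) for it in items])

    def edge_scores(query, items):
        return np.array([np.dot(query["edge"], it["edge"]) for it in items])

    MODULES = [color_scores, edge_scores]

    def retrieve(query, items, k=3):
        """Run all feature modules in parallel and average their scores."""
        with ThreadPoolExecutor() as pool:
            per_module = list(pool.map(lambda m: m(query, items), MODULES))
        combined = np.mean(per_module, axis=0)
        return np.argsort(combined)[::-1][:k]     # best-scoring items first

    rng = np.random.default_rng(1)
    items = [{"color": rng.random(8), "edge": rng.random(8)} for _ in range(20)]
    print(retrieve({"color": rng.random(8), "edge": rng.random(8)}, items))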


Conference on Multimedia Modeling | 2018

Competitive Video Retrieval with vitrivr

Luca Rossetto; Ivan Giangreco; Ralph Gasser; Heiko Schuldt

This paper presents the competitive video retrieval capabilities of vitrivr. The vitrivr stack is the continuation of the IMOTION system, which has participated in the Video Browser Showdown competition since 2015. The primary focus of vitrivr and its participation in this competition is to simplify and generalize the system's individual components, making them easier to deploy and use. The entire vitrivr stack is made available as open source software.

Collaboration


Dive into Ivan Giangreco's collaborations.

Top Co-Authors

Yusuf Sahillioglu

Middle East Technical University
