Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Oge Marques is active.

Publication


Featured research published by Oge Marques.


IEEE Transactions on Neural Networks | 2007

Neural Network Approach to Background Modeling for Video Object Segmentation

Dubravko Culibrk; Oge Marques; Daniel Socek; Hari Kalva; Borko Furht

This paper presents a novel background modeling and subtraction approach for video object segmentation. A neural network (NN) architecture is proposed to form an unsupervised Bayesian classifier for this application domain. The constructed classifier efficiently handles the segmentation in natural-scene sequences with complex background motion and changes in illumination. The weights of the proposed NN serve as a model of the background and are temporally updated to reflect the observed statistics of the background. The segmentation performance of the proposed NN is qualitatively and quantitatively examined and compared to two extant probabilistic object segmentation algorithms, using a previously published test pool containing diverse surveillance-related sequences. The proposed algorithm is parallelized on a subpixel level and designed to enable efficient hardware implementation.
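The abstract describes the approach only at a high level. The sketch below illustrates the general idea of a per-pixel background model that is temporally updated and used for foreground/background classification; it uses a plain running-average model with assumed alpha and threshold parameters, not the neural-network Bayesian classifier of the paper.

```python
import numpy as np

def init_background(first_frame):
    """Initialize the per-pixel background model from the first frame."""
    return first_frame.astype(np.float64)

def segment_frame(frame, background, alpha=0.02, threshold=30.0):
    """Classify pixels as foreground/background and update the model.

    alpha controls how quickly the background adapts to observed statistics;
    threshold is the per-pixel deviation (in gray levels) treated as foreground.
    Both values are illustrative assumptions, not taken from the paper.
    """
    frame = frame.astype(np.float64)
    diff = np.abs(frame - background)
    foreground_mask = diff > threshold  # True where the pixel deviates from the model
    # Temporally update the background only where the scene is judged static,
    # so moving objects do not bleed into the model.
    background = np.where(foreground_mask, background,
                          (1.0 - alpha) * background + alpha * frame)
    return foreground_mask.astype(np.uint8), background

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.integers(0, 255, size=(10, 120, 160)).astype(np.uint8)  # synthetic grayscale clip
    bg = init_background(frames[0])
    for f in frames[1:]:
        mask, bg = segment_frame(f, bg)
    print("foreground pixels in last frame:", int(mask.sum()))
```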


SPIE Reviews | 2010

Video browsing interfaces and applications: a review

Klaus Schoeffmann; Frank Hopfgartner; Oge Marques; Laszlo Boeszoermenyi; Joemon M. Jose

We present a comprehensive review of the state of the art in video browsing and retrieval systems, with special emphasis on interfaces and applications. There has been a significant increase in activity (e.g., storage, retrieval, and sharing) employing video data in the past decade, both for personal and professional use. The ever-growing amount of video content available for human consumption and the inherent characteristics of video data (which, if presented in its raw format, is rather unwieldy and costly) have become driving forces for the development of more effective solutions to present video contents and allow rich user interaction. As a result, there are many contemporary research efforts toward developing better video browsing solutions, which we summarize. We review more than 40 different video browsing and retrieval interfaces and classify them into three groups: applications that use video-player-like interaction, video retrieval applications, and browsing solutions based on video surrogates. For each category, we present a summary of existing work, highlight the technical aspects of each solution, and compare them against each other.


ACM Southeast Regional Conference | 2006

Using visual attention to extract regions of interest in the context of image retrieval

Oge Marques; Liam M. Mayron; Gustavo B. Borba; Humberto Remigio Gamba

Recent research on computational modeling of visual attention has demonstrated that a bottom-up approach to identifying salient regions within an image can be applied to diverse and practical problems for which conventional machine vision techniques have not succeeded in producing robust solutions. This paper proposes a new method for extracting regions of interest (ROIs) from images using models of visual attention. It is presented in the context of improving content-based image retrieval (CBIR) solutions by implementing a biologically-motivated, unsupervised technique of grouping together images whose salient ROIs are perceptually similar. In this paper we focus on the process of extracting the salient regions of an image. The excellent results obtained with the proposed method have demonstrated that the ROIs of the images can be independently indexed for comparison against other regions on the basis of similarity for use in a CBIR solution.
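As a rough illustration of the ROI-extraction step described above, the following sketch assumes a saliency map is already available as a 2-D array (in the paper it comes from a computational model of visual attention), thresholds it, and keeps the large connected components as ROIs. The threshold and minimum-area values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

def extract_rois(saliency_map, rel_threshold=0.6, min_area=50):
    """Extract regions of interest from a saliency map.

    Pixels above rel_threshold * max saliency are grouped into connected
    components; components smaller than min_area pixels are discarded.
    Both parameters are illustrative, not taken from the paper.
    """
    mask = saliency_map >= rel_threshold * saliency_map.max()
    labels, _ = ndimage.label(mask)
    rois = []
    for region in ndimage.find_objects(labels):
        if region is None:
            continue
        h = region[0].stop - region[0].start
        w = region[1].stop - region[1].start
        if h * w >= min_area:
            rois.append((region[0].start, region[1].start, h, w))  # (row, col, height, width)
    return rois

if __name__ == "__main__":
    # Synthetic saliency map with one bright blob standing in for a salient object.
    sal = np.zeros((100, 100))
    sal[30:60, 40:80] = 1.0
    print(extract_rois(sal))  # -> [(30, 40, 30, 40)]
```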


Computer Vision and Pattern Recognition | 2008

Stereo depth with a Unified Architecture GPU

Joel Gibson; Oge Marques

This paper describes how the calculation of depth from stereo images was accelerated using a GPU. The Compute Unified Device Architecture (CUDA) from NVIDIA was employed in novel ways to compute depth using BT cost matching and the semi-global matching algorithm. The challenges of mapping a sequential algorithm to a massively parallel thread environment and performance optimization techniques are considered.
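For orientation only, here is a minimal CPU-side sketch of the cost-volume and winner-take-all structure that such stereo algorithms parallelize per pixel and per disparity on the GPU. It uses a plain absolute-difference cost rather than the Birchfield-Tomasi cost and omits semi-global matching, so it is a simplified stand-in, not the CUDA implementation described in the paper.

```python
import numpy as np

def disparity_wta(left, right, max_disp=16):
    """Winner-take-all disparity from a simple absolute-difference cost volume.

    left, right: grayscale images of identical shape (rectified stereo pair).
    max_disp is an assumed search range; the paper's GPU kernels would compute
    a comparable cost volume in parallel across pixels and disparities.
    """
    h, w = left.shape
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    cost = np.full((max_disp, h, w), np.inf, dtype=np.float32)
    for d in range(max_disp):
        # Cost of matching left pixel (y, x) against right pixel (y, x - d).
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return np.argmin(cost, axis=0).astype(np.uint8)  # per-pixel best disparity
```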


International Semantic Web Conference | 2003

Semi-automatic semantic annotation of images using machine learning techniques

Oge Marques; Nitish Barman

The success of the Semantic Web hinges on being able to produce semantic markups on Web pages and their components, in a way that is cost-effective and consistent with adopted schemas and ontologies. Since images are an essential component of the Web, this work focuses on an intelligent approach to semantic annotation of images. We propose a three-layer architecture in which the bottom layer organizes visual information extracted from the raw image contents, the middle layer maps this information to semantically meaningful keywords, and the top layer connects those keywords to schemas and ontologies. Our key contribution is the use of machine learning algorithms for user-assisted, semi-automatic image annotation, in such a way that the knowledge of previously annotated images (both at the metadata and visual levels) is used to speed up the annotation of subsequent images within the same domain (ontology) as well as to improve future query and retrieval of annotated images.
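The propagation idea in the last sentence can be pictured with a small sketch: visual features of a new image are compared against already-annotated images, and the keywords of the nearest neighbors are offered as suggestions for the user to confirm. The color-histogram feature and nearest-neighbor rule here are assumptions made for illustration, not the specific machine learning techniques of the paper.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Very coarse global color histogram used as a stand-in visual feature."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def suggest_keywords(new_image, annotated, k=3):
    """Suggest keywords for a new image from its k visually closest annotated images.

    `annotated` is a list of (feature_vector, keywords) pairs built from images
    the user has already annotated; the suggestions are then confirmed or
    corrected by the user, keeping the process semi-automatic.
    """
    feat = color_histogram(new_image)
    distances = [(np.linalg.norm(feat - f), kws) for f, kws in annotated]
    distances.sort(key=lambda pair: pair[0])
    suggestions = set()
    for _, kws in distances[:k]:
        suggestions.update(kws)
    return sorted(suggestions)
```

Each newly confirmed annotation would be appended to `annotated`, so suggestions improve as the session proceeds within the same domain.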


Multimedia Systems | 2007

New approaches to encryption and steganography for digital videos

Daniel Socek; Hari Kalva; Spyros S. Magliveras; Oge Marques; Dubravko Culibrk; Borko Furht

In this work we propose a novel type of digital video encryption that has several advantages over other currently available digital video encryption schemes. We also present an extended classification of digital video encryption algorithms in order to clarify these advantages. We analyze both security and performance aspects of the proposed method, and show that the method is efficient and secure from a cryptographic point of view. Even though the method is currently feasible only for a certain class of video sequences and video codecs, the method is promising and future investigations might reveal its broader applicability. Finally, we extend our approach into a novel type of digital video steganography where it is possible to disguise a given video with another video.


EURASIP Journal on Advances in Signal Processing | 2007

An attention-driven model for grouping similar images with image retrieval applications

Oge Marques; Liam M. Mayron; Gustavo B. Borba; Humberto Remigio Gamba

Recent work in the computational modeling of visual attention has demonstrated that a purely bottom-up approach to identifying salient regions within an image can be successfully applied to diverse and practical problems, from target recognition to the placement of advertisements. This paper proposes an application of a combination of computational models of visual attention to the image retrieval problem. We demonstrate that certain shortcomings of existing content-based image retrieval solutions can be addressed by implementing a biologically motivated, unsupervised way of grouping together images whose salient regions of interest (ROIs) are perceptually similar regardless of the visual contents of other (less relevant) parts of the image. We propose a model in which only the salient regions of an image are encoded as ROIs whose features are then compared against previously seen ROIs and assigned cluster membership accordingly. Experimental results show that the proposed approach works well for several combinations of feature extraction techniques and clustering algorithms, suggesting a promising avenue for future improvements, such as the addition of a top-down component and the inclusion of a relevance feedback mechanism.
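The grouping step (assigning ROI feature vectors to clusters of perceptually similar regions) can be sketched as follows. K-means via SciPy is just one illustrative choice; the paper evaluates several combinations of features and clustering algorithms, and the feature dimensionality and cluster count below are assumed for the example.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, whiten

def group_rois(roi_features, n_clusters=4, seed=0):
    """Cluster ROI feature vectors so that perceptually similar ROIs share a group.

    roi_features: (n_rois, n_features) array, one row per salient region.
    n_clusters is an assumed parameter, not taken from the paper.
    """
    features = whiten(np.asarray(roi_features, dtype=np.float64))  # unit variance per feature
    _, labels = kmeans2(features, n_clusters, minit="points", seed=seed)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two synthetic "perceptual" groups of ROI features.
    feats = np.vstack([rng.normal(0.0, 0.1, (10, 16)), rng.normal(1.0, 0.1, (10, 16))])
    print(group_rois(feats, n_clusters=2))
```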


International Conference on Multimedia and Expo | 2006

Challenges and Opportunities in Video Coding for 3D TV

Hari Kalva; Lakis Christodoulou; Liam M. Mayron; Oge Marques; Borko Furht

This paper explores the challenges and opportunities in developing and deploying 3D TV services. 3D TV services can be seen as a general case of multi-view video, which has been receiving significant attention lately. The keys to a successful 3D TV experience are the availability of content, the ease of use, the quality of experience, and the cost of deployment. Recent technological advances have made possible experimental systems that can be used to evaluate 3D TV services. We have developed a 3D TV prototype and are currently conducting our first user study to evaluate the quality of the experience. These experiences have allowed us to identify challenges and opportunities in developing 3D TV services.


Multimedia Tools and Applications | 2010

A novel tool for summarization of arthroscopic videos

Mathias Lux; Oge Marques; Klaus Schöffmann; Laszlo Böszörmenyi; Georg Lajtai

Arthroscopic surgery is a minimally invasive procedure that uses a small camera to generate video streams, which are recorded and subsequently archived. In this paper we present a video summarization tool and demonstrate how it can be successfully used in the domain of arthroscopic videos. The proposed tool generates a keyframe-based summary, which clusters visually similar frames based on user-selected visual features and appropriate dissimilarity metrics. We discuss how this tool can be used for arthroscopic videos, taking advantage of several domain-specific aspects, without losing its ability to work on general-purpose videos. Experimental results confirm the feasibility of the proposed approach and encourage extending it to other application domains.
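A minimal sketch of keyframe-based summarization by grouping visually similar frames is given below, assuming a grayscale histogram as the user-selected feature and an L1 histogram distance as the dissimilarity metric. Both choices, and the greedy one-pass clustering, are illustrative simplifications rather than the tool described in the paper.

```python
import numpy as np

def frame_feature(frame, bins=16):
    """Grayscale intensity histogram as a stand-in for the user-selected visual feature."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def summarize(frames, dissimilarity_threshold=0.25):
    """Greedy keyframe selection over a sequence of grayscale frames.

    A new cluster is started (and its first frame kept as the keyframe) whenever
    a frame is too dissimilar from the current cluster's representative. The
    threshold is an assumed value, not one from the paper.
    """
    keyframes, representative = [], None
    for idx, frame in enumerate(frames):
        feat = frame_feature(frame)
        if representative is None or np.abs(feat - representative).sum() > dissimilarity_threshold:
            keyframes.append(idx)
            representative = feat
    return keyframes

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Two synthetic "shots": dark frames followed by bright frames.
    clip = [np.full((64, 64), 40, np.uint8)] * 5 + [np.full((64, 64), 200, np.uint8)] * 5
    print(summarize(clip))  # -> [0, 5]
```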


IEEE Transactions on Systems, Man, and Cybernetics | 2013

Co.Vi.Wo.: Color Visual Words Based on Non-Predefined Size Codebooks

Savvas A. Chatzichristofis; Chryssanthi Iakovidou; Yiannis S. Boutalis; Oge Marques

Due to the rapid development of information technology and the continuously increasing number of available multimedia data, the task of retrieving information based on visual content has become a popular subject of scientific interest. Recent approaches adopt the bag-of-visual-words (BOVW) model to retrieve images in a semantic way. BOVW has shown remarkable performance in content-based image retrieval tasks, exhibiting better retrieval effectiveness over global and local feature (LF) representations. The performance of the BOVW approach depends strongly, however, on predicting the ideal codebook size, a difficult and database-dependent task. The contribution of this paper is threefold. First, it presents a new technique that uses a self-growing and self-organized neural gas network to calculate the most appropriate size of a codebook for a given database. Second, it proposes a new soft-weighting technique, whereby each LF is classified into only one visual word (VW) with a degree of participation. Third, by combining the information derived from the method that automatically detects the number of VWs, the soft-weighting method, and a color information extraction method from the literature, it shapes a new descriptor, called color VWs. Experimental results on two well-known benchmarking databases demonstrate that the proposed descriptor outperforms 15 contemporary descriptors and methods from the literature, in terms of both precision at K and its ability to retrieve the entire ground truth.
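The soft-weighting idea (each local feature assigned to exactly one visual word, but with a degree of participation) can be sketched as follows. The participation weight based on the ratio of nearest to second-nearest distances is a guess made for illustration, not the paper's formulation, and a fixed codebook is assumed here rather than one sized by the self-growing and self-organized neural gas network.

```python
import numpy as np

def encode_soft_single_assignment(local_features, codebook):
    """Encode an image as a visual-word histogram in which each local feature
    votes for only its nearest visual word, weighted by a degree of participation.

    local_features: (n_lf, dim) array of descriptors from one image.
    codebook: (n_words, dim) array of visual words (n_words >= 2).
    """
    lf = np.asarray(local_features, dtype=np.float64)
    cb = np.asarray(codebook, dtype=np.float64)
    dists = np.linalg.norm(lf[:, None, :] - cb[None, :, :], axis=2)  # (n_lf, n_words)
    nearest = np.argmin(dists, axis=1)
    sorted_d = np.sort(dists, axis=1)
    d1, d2 = sorted_d[:, 0], sorted_d[:, 1]
    participation = 1.0 - d1 / (d2 + 1e-12)  # close to 1 when the assignment is unambiguous
    hist = np.zeros(len(cb))
    np.add.at(hist, nearest, participation)  # each LF contributes to exactly one visual word
    return hist / (hist.sum() + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    codebook = rng.normal(size=(50, 64))   # 50 visual words of dimension 64 (assumed sizes)
    features = rng.normal(size=(200, 64))  # 200 local features from one image
    print(encode_soft_single_assignment(features, codebook).shape)  # (50,)
```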

Collaboration


Dive into Oge Marques's collaborations.

Top Co-Authors

Borko Furht (Florida Atlantic University)
Liam M. Mayron (Florida Atlantic University)
Hari Kalva (Florida Atlantic University)
Mathias Lux (Alpen-Adria-Universität Klagenfurt)
Daniel Socek (Florida Atlantic University)
Gustavo B. Borba (Federal University of Technology - Paraná)
Humberto Remigio Gamba (Federal University of Technology - Paraná)
Joel Gibson (Florida Atlantic University)